Run the transfer-learning result from "Chapter 2" on OpenVINO™ via ONNX export.
The model converted in "Using OpenVINO™ to recognize fixed-point camera footage" was not working correctly, so this page investigates the cause.
torch.onnx.export(model, dummy_input, model_onnx_path, export_params=True, opset_version=..., verbose=..., input_names=..., output_names=...)
Argument | Meaning |
model | the model to export |
dummy_input | an example input tensor for the model |
model_onnx_path | output path for the ONNX file |
export_params | whether to store the trained weights in the model file (True/False) |
opset_version | ONNX opset version to target |
verbose | print detailed logs during conversion (True/False) |
input_names | display names for the model's inputs |
output_names | display names for the model's outputs |
(py37) $ python3 chapt02_3a.py
[W NNPACK.cpp:80] Could not initialize NNPACK! Reason: Unsupported hardware.
ONNX Convert start !!
Finish !!
Exported model has been tested with ONNXRuntime, and the result looks good!
---------- Inference results with PyTorch ----------
[[-0.49670157  1.970253  ]]
---------- Inference results with ONNX Runtime ----------
[[-0.49670076  1.9702525 ]]

(py37) $ ls
chapt02-model1.onnx  chapt02_1a.py  chapt02_3.py
chapt02-model1.pth   chapt02_2a.py  chapt02_3a.py

・Nearly identical values are obtained before and after conversion.
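The closeness check behind "the result looks good!" can be sketched with the two result arrays printed above (the tolerances are the ones commonly used for ONNX export verification; the script's exact values are an assumption):

```python
# Hedged sketch: comparing PyTorch and ONNX Runtime outputs for near-equality.
import numpy as np

torch_out = np.array([[-0.49670157, 1.970253]], dtype=np.float32)   # from the log above
ort_out = np.array([[-0.49670076, 1.9702525]], dtype=np.float32)    # from the log above

# Raises an AssertionError if any element differs beyond the tolerances
np.testing.assert_allclose(torch_out, ort_out, rtol=1e-03, atol=1e-05)
print("Exported model has been tested with ONNXRuntime, and the result looks good!")
```

The differences here are on the order of 1e-6, well within float32 conversion noise.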
[env_select.sh] Environment Select !!
 1: Nomal
 2: OpenVINO
 3: Anaconda
Prease input '1-3' : 2
** OpenVINO environment select !! **
[openvino_setup.sh] OpenVINO environment initialized :
$ cd ~/workspace_py37/chapter02/
$ ls
chapt02-model1.onnx  chapt02_1a.py  chapt02_3.py
chapt02-model1.pth   chapt02_2a.py  chapt02_3a.py
$ python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --input_model chapt02-model1.onnx --mean_values=input[123.675,116.28,103.53] --scale_values=input[58.395,57.12,57.375] --reverse_input_channels
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:  /home/mizutu/workspace_py37/chapter02/chapt02-model1.onnx
    - Path for generated IR:    /home/mizutu/workspace_py37/chapter02/.
    - IR output name:   chapt02-model1
    - Log level:    ERROR
    - Batch:    Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:    Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:  input[123.675,116.28,103.53]
    - Scale values:     input[58.395,57.12,57.375]
    - Scale factor:     Not specified
    - Precision of IR:  FP32
    - Enable fusing:    True
    - Enable grouped convolutions fusing:   True
    - Move mean values to preprocess section:   None
    - Reverse input channels:   True
ONNX specific parameters:
    - Inference Engine found in:    /opt/intel/openvino_2021/python/python3.8/openvino
Inference Engine version:   2021.4.0-3839-cd81789d294-releases/2021/4
Model Optimizer version:    2021.4.0-3839-cd81789d294-releases/2021/4
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/mizutu/workspace_py37/chapter02/chapt02-model1.xml
[ SUCCESS ] BIN file: /home/mizutu/workspace_py37/chapter02/chapt02-model1.bin
[ SUCCESS ] Total execution time: 15.76 seconds.
[ SUCCESS ] Memory consumed: 362 MB.
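The `--mean_values` and `--scale_values` passed to mo.py are not arbitrary: assuming the model was trained with the standard torchvision ImageNet normalization, they are simply those constants rescaled from the 0–1 range to the 0–255 pixel range, so the IR can normalize raw images itself:

```python
# Hedged sketch: deriving the mo.py normalization constants from the
# standard ImageNet mean/std used by torchvision.transforms.Normalize.
mean_01 = [0.485, 0.456, 0.406]   # Normalize mean on the 0-1 scale
std_01 = [0.229, 0.224, 0.225]    # Normalize std on the 0-1 scale

mean_255 = [round(m * 255, 3) for m in mean_01]
std_255 = [round(s * 255, 3) for s in std_01]

print(mean_255)  # [123.675, 116.28, 103.53] -> matches --mean_values
print(std_255)   # [58.395, 57.12, 57.375]   -> matches --scale_values
```

Likewise, `--reverse_input_channels` folds the BGR-to-RGB swap (OpenCV loads images as BGR) into the model itself.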
$ python3 chapt02_model_data.py
--- OpenVINO™ Model Data Check ---
4.5.3-openvino
OpenVINO inference_engine: 2021.4.0-3839-cd81789d294-releases/2021/4
OpenVINO™ Model Data Check: Starting application...
   - Model      : ./chapt02-model1.xml
   - Device     : CPU
   - Image file : ../sample/chapt02/chapt02-sample_off.jpg
input blob: name='input', N=1, C=3, H=224, W=224
>>> Inference execution...
>>> output >>>
Type     : <class 'dict'>
Length   : 1
KeyList  : ['output']
ValueList: [array([[ 1.6031462, -0.7632908]], dtype=float32)]
*** STER 1 ***
{'output': array([[ 1.6031462, -0.7632908]], dtype=float32)}
*** STER 2 ***
[[ 1.6031462 -0.7632908]]
Type     : <class 'numpy.ndarray'>
Shape    : (1, 2)
Dimension: 2
*** STER 3 ***
[ 1.6031462 -0.7632908]
Type     : <class 'numpy.ndarray'>
Shape    : (2,)
Dimension: 1
Finished.
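The three-step reduction shown in the log can be sketched as follows; the dict below is a stand-in for the Inference Engine result (`exec_net.infer(...)` in the 2021.x API), with the values copied from the log above:

```python
# Hedged sketch: unpacking the inference result dict down to a 1-D score vector.
import numpy as np

# Stand-in for the Inference Engine result: a dict keyed by output-blob name
res = {"output": np.array([[1.6031462, -0.7632908]], dtype=np.float32)}

out = res["output"]   # select the output blob -> ndarray of shape (1, 2)
scores = out[0]       # drop the batch axis    -> ndarray of shape (2,)

print(out.shape, scores.shape)  # (1, 2) (2,)
```

The batch dimension is 1 here because the IR was generated from a dummy input with N=1.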
$ python3 chapt02_4.py
--- Surveillance camera ---
4.5.3-openvino
OpenVINO inference_engine: 2021.4.0-3839-cd81789d294-releases/2021/4
Surveillance camera: Starting application...
   - Image File   : ../sample/chapt02/chapt02-sample-a.mp4
   - m_detect     : chapt02-model1.xml
   - Device       : CPU
   - Language     : jp
   - Input Shape  : input
   - Output Shape : output
   - Program Title: y
   - Speed flag   : y
   - Processed out: non
FPS average:   23.40
Finished.

・It appears to run roughly three times faster than PyTorch (CPU).
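The two raw scores the model emits per frame can be turned into a class decision and a confidence value with softmax + argmax (this reduction is an assumption about how a downstream application would use the scores; the article does not show it here, and the class labels are unnamed):

```python
# Hedged sketch: converting the two-class logits from the log into a
# predicted class index and a softmax probability.
import numpy as np

scores = np.array([1.6031462, -0.7632908], dtype=np.float32)  # from the log above

probs = np.exp(scores - scores.max())  # subtract max for numerical stability
probs /= probs.sum()                   # normalize to a probability distribution
pred = int(np.argmax(scores))          # index of the winning class

print(pred, float(probs[pred]))  # winning class index and its probability
```

With these particular scores, class 0 wins by a wide margin (probability above 0.9).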