
OpenVINO™ Toolkit for Linux

 The Windows version of the OpenVINO™ toolkit never felt quite right, so I will try it on Linux instead.
The goal is to use the Model Optimizer (MO) model conversion tool.

※ Last updated: 2021/03/25

Installing the OpenVINO™ Toolkit for Linux

Preparation

Downloading the OpenVINO™ Toolkit

Installing the OpenVINO™ Toolkit

 Install by following the procedure on the official site:
 Install Intel® Distribution of OpenVINO™ toolkit for Linux*

  1. Extract the downloaded package
    ~$ cd ダウンロード
    ~/ダウンロード$ ls
    l_openvino_toolkit_p_2021.2.185.tgz
    ~/ダウンロード$ tar -xvzf l_openvino_toolkit_p_2021.2.185.tgz
  2. Run the installer contained in the extracted package
    ~/ダウンロード$ ls
    l_openvino_toolkit_p_2021.2.185  l_openvino_toolkit_p_2021.2.185.tgz
    ~/ダウンロード$ cd l_openvino_toolkit_p_2021.2.185
    ~/ダウンロード/l_openvino_toolkit_p_2021.2.185$ ls
    EULA.txt        install.sh      install_openvino_dependencies.sh  rpm
    PUBLIC_KEY.PUB  install_GUI.sh  pset                              silent.cfg
    ~/ダウンロード/l_openvino_toolkit_p_2021.2.185$ sudo ./install_GUI.sh
    Cannot run setup in graphical mode.
    Setup will be continued in command-line mode.
    
    --------------------------------------------------------------------------------
    Initializing, please wait...
    --------------------------------------------------------------------------------
       :
       :
  3. Install the third-party dependency packages
    ~/ダウンロード/l_openvino_toolkit_p_2021.2.185$ cd /opt/intel/openvino_2021/install_dependencies
    /opt/intel/openvino_2021/install_dependencies$ sudo -E ./install_openvino_dependencies.sh
    
    This script installs the following OpenVINO 3rd-party dependencies:
      1. GTK+, FFmpeg and GStreamer libraries used by OpenCV
      2. libusb library required for Myriad plugin for Inference Engine
      3. build dependencies for OpenVINO samples
      4. build dependencies for GStreamer Plugins
    
    ヒット:1 http://jp.archive.ubuntu.com/ubuntu focal InRelease
    取得:2 http://jp.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
       :
       :
  4. Set the environment variables
    /opt/intel/openvino_2021/install_dependencies$ source /opt/intel/openvino_2021/bin/setupvars.sh
    [setupvars.sh] OpenVINO environment initialized
    To have the environment variables set automatically at shell startup, append the single line "source /opt/intel/openvino_2021/bin/setupvars.sh" to the end of the "~/.bashrc" file.
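To keep repeated installs from stacking duplicate lines in ~/.bashrc, the append can be made idempotent. A minimal sketch; the `append_once` helper is ours (not part of OpenVINO), demonstrated on a temporary file rather than the real ~/.bashrc:

```shell
# append_once: add a line to a file only if that exact line is not there yet
append_once() {
    line="$1"; file="$2"
    grep -qxF -- "$line" "$file" 2>/dev/null || printf '%s\n' "$line" >> "$file"
}

rc="$(mktemp)"
append_once 'source /opt/intel/openvino_2021/bin/setupvars.sh' "$rc"
append_once 'source /opt/intel/openvino_2021/bin/setupvars.sh' "$rc"   # second call is a no-op
grep -c 'setupvars.sh' "$rc"    # prints 1, not 2
```

For the real file, pass "$HOME/.bashrc" as the second argument.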

  5. Set up the Model Optimizer
    /opt/intel/openvino_2021/install_dependencies$ cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites
    /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites$ sudo ./install_prerequisites.sh
    ヒット:1 http://jp.archive.ubuntu.com/ubuntu focal InRelease
    ヒット:2 http://jp.archive.ubuntu.com/ubuntu focal-updates InRelease
    ヒット:3 http://jp.archive.ubuntu.com/ubuntu focal-backports InRelease
    取得:4 http://security.ubuntu.com/ubuntu focal-security InRelease [109 kB]
    109 kB を 1秒 で取得しました (72.9 kB/s)
       :
       :
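With the prerequisites installed, the Model Optimizer (the stated goal of this page) can convert a model into IR format. A hedged sketch, guarded so it degrades gracefully on a machine without OpenVINO; `model.onnx` is a placeholder input name:

```shell
# Convert an ONNX model to OpenVINO IR with the Model Optimizer (mo.py)
MO=/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py
if [ -f "$MO" ]; then
    python3 "$MO" --input_model model.onnx --output_dir ir/ --data_type FP16
else
    echo "mo.py not found; install OpenVINO first"
fi
```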
  6. Run sample demo 1: demo_security_barrier_camera.sh
    mizutu@ubuntu2004dk:/opt/intel/openvino_2021/deployment_tools/demo$ ./demo_security_barrier_camera.sh
        :
    Downloading Intel models
    target_precision = FP16
    
    Run python3 /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name vehicle-license-plate-detection-barrier-0106 --output_dir /home/mizutu/openvino_models/ir --cache_dir /home/mizutu/openvino_models/cache
    
    ################|| Downloading vehicle-license-plate-detection-barrier-0106 ||################
        :
    Run python3 /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name license-plate-recognition-barrier-0001 --output_dir /home/mizutu/openvino_models/ir --cache_dir /home/mizutu/openvino_models/cache
    
    ################|| Downloading license-plate-recognition-barrier-0001 ||################
        :
    
    Run python3 /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name vehicle-attributes-recognition-barrier-0039 --output_dir /home/mizutu/openvino_models/ir --cache_dir /home/mizutu/openvino_models/cache
    
    ################|| Downloading vehicle-attributes-recognition-barrier-0039 ||################
        :
    
    ###################################################
    
    Run Inference Engine security_barrier_camera demo
    
    Run ./security_barrier_camera_demo -d CPU -d_va CPU -d_lpr CPU -i /opt/intel/openvino_2021/deployment_tools/demo/car_1.bmp -m /home/mizutu/openvino_models/ir/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106.xml -m_lpr /home/mizutu/openvino_models/ir/intel/license-plate-recognition-barrier-0001/FP16/license-plate-recognition-barrier-0001.xml -m_va /home/mizutu/openvino_models/ir/intel/vehicle-attributes-recognition-barrier-0039/FP16/vehicle-attributes-recognition-barrier-0039.xml
    
    [ INFO ] InferenceEngine: 	API version ......... 2.1
    	Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
    [ INFO ] Files were added: 1
    [ INFO ]     /opt/intel/openvino_2021/deployment_tools/demo/car_1.bmp
    [ INFO ] Loading device CPU
    [ INFO ] 	CPU
    	MKLDNNPlugin version ......... 2.1
    	Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
    
    [ INFO ] Loading detection model to the CPU plugin
    [ INFO ] Loading Vehicle Attribs model to the CPU plugin
    [ INFO ] Loading Licence Plate Recognition (LPR) model to the CPU plugin
    [ INFO ] Number of InferRequests: 1 (detection), 3 (classification), 3 (recognition)
    [ INFO ] 4 streams for CPU
    [ INFO ] Display resolution: 1920x1080
    [ INFO ] Number of allocated frames: 3
    [ INFO ] Resizable input with support of ROI crop and auto resize is disabled
    0.1FPS for (3 / 1) frames
    Detection InferRequests usage: 0.0%
    
    [ INFO ] Execution successful
    
    
    ###################################################
    
    Demo completed successfully.
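The demo script also builds the demo binary itself, which can then be re-run by hand on other inputs. A sketch that locates the built binary rather than hard-coding the build directory (whose name may differ by version), and only prints its help:

```shell
# Find the security_barrier_camera demo binary built by the demo script
BIN="$(find "$HOME" -maxdepth 4 -name security_barrier_camera_demo -type f 2>/dev/null | head -n1)"
if [ -n "$BIN" ]; then
    "$BIN" -h
else
    echo "demo binary not found; run demo_security_barrier_camera.sh first"
fi
```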
  7. Run sample demo 2: demo_squeezenet_download_convert_run.sh
    mizutu@ubuntu2004dk:/opt/intel/openvino_2021/deployment_tools/demo$ ./demo_squeezenet_download_convert_run.sh
    target_precision = FP16
    [setupvars.sh] OpenVINO environment initialized
    
    Run python3 /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name squeezenet1.1 --output_dir /home/mizutu/openvino_models/models --cache_dir /home/mizutu/openvino_models/cache
    
    ################|| Downloading squeezenet1.1 ||################
        :
    ###################################################
    
    Run Inference Engine classification sample
    
    Run ./classification_sample_async -d CPU -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m /home/mizutu/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml
    
    [ INFO ] InferenceEngine: 
    	API version ............ 2.1
    	Build .................. 2021.2.0-1877-176bdf51370-releases/2021/2
    	Description ....... API
    [ INFO ] Parsing input parameters
    [ INFO ] Parsing input parameters
    [ INFO ] Files were added: 1
    [ INFO ]     /opt/intel/openvino_2021/deployment_tools/demo/car.png
    [ INFO ] Creating Inference Engine
    	CPU
    	MKLDNNPlugin version ......... 2.1
    	Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
    
    [ INFO ] Loading network files
    [ INFO ] Preparing input blobs
    [ WARNING ] Image is resized from (787, 259) to (227, 227)
    [ INFO ] Batch size is 1
    [ INFO ] Loading model to the device
    [ INFO ] Create infer request
    [ INFO ] Start inference (10 asynchronous executions)
    [ INFO ] Completed 1 async request execution
    [ INFO ] Completed 2 async request execution
    [ INFO ] Completed 3 async request execution
    [ INFO ] Completed 4 async request execution
    [ INFO ] Completed 5 async request execution
    [ INFO ] Completed 6 async request execution
    [ INFO ] Completed 7 async request execution
    [ INFO ] Completed 8 async request execution
    [ INFO ] Completed 9 async request execution
    [ INFO ] Completed 10 async request execution
    [ INFO ] Processing output blobs
    
    Top 10 results:
    
    Image /opt/intel/openvino_2021/deployment_tools/demo/car.png
    
    classid probability label
    ------- ----------- -----
    817     0.6853030   sports car, sport car
    479     0.1835197   car wheel
    511     0.0917197   convertible
    436     0.0200694   beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
    751     0.0069604   racer, race car, racing car
    656     0.0044177   minivan
    717     0.0024739   pickup, pickup truck
    581     0.0017788   grille, radiator grille
    468     0.0013083   cab, hack, taxi, taxicab
    661     0.0007443   Model T
    
    [ INFO ] Execution successful
    
    [ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
    
    
    ###################################################
    
    Demo completed successfully.
  8. Run sample demo 3: demo_benchmark_app.sh
    mizutu@ubuntu2004dk:/opt/intel/openvino_2021/deployment_tools/demo$ ./demo_benchmark_app.sh
    target_precision = FP16
    [setupvars.sh] OpenVINO environment initialized
    
    Run python3 /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name squeezenet1.1 --output_dir /home/mizutu/openvino_models/models --cache_dir /home/mizutu/openvino_models/cache
    
    ################|| Downloading squeezenet1.1 ||################
        :
    ###################################################
    
    
    Run Inference Engine benchmark app
    
    Run ./benchmark_app -d CPU -i /opt/intel/openvino_2021/deployment_tools/demo/car.png -m /home/mizutu/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml -pc -niter 1000
    
    [Step 1/11] Parsing and validating input arguments
    [ INFO ] Parsing input parameters
    [ INFO ] Files were added: 1
    [ INFO ]     /opt/intel/openvino_2021/deployment_tools/demo/car.png
    [Step 2/11] Loading Inference Engine
    [ INFO ] InferenceEngine: 
    	API version ............ 2.1
    	Build .................. 2021.2.0-1877-176bdf51370-releases/2021/2
    	Description ....... API
    [ INFO ] Device info: 
    	CPU
    	MKLDNNPlugin version ......... 2.1
    	Build ........... 2021.2.0-1877-176bdf51370-releases/2021/2
    
    [Step 3/11] Setting device configuration
    [ WARNING ] -nstreams default value is determined automatically for CPU device. Although the automatic selection usually provides a reasonable performance,but it still may be non-optimal for some cases, for more information look at README.
    [Step 4/11] Reading network files
    [ INFO ] Loading network files
    [ INFO ] Read network took 14.66 ms
    [Step 5/11] Resizing network to match image sizes and given batch
    [ INFO ] Network batch size: 1
    [Step 6/11] Configuring input of the model
    [Step 7/11] Loading the model to the device
    [ INFO ] Load network took 137.29 ms
    [Step 8/11] Setting optimal runtime parameters
    [Step 9/11] Creating infer requests and filling input blobs with images
    [ INFO ] Network input 'data' precision U8, dimensions (NCHW): 1 3 227 227 
    [ WARNING ] Some image input files will be duplicated: 4 files are required but only 1 are provided
    [ INFO ] Infer Request 0 filling
    [ INFO ] Prepare image /opt/intel/openvino_2021/deployment_tools/demo/car.png
    [ WARNING ] Image is resized from (787, 259) to (227, 227)
    [ INFO ] Infer Request 1 filling
    [ INFO ] Prepare image /opt/intel/openvino_2021/deployment_tools/demo/car.png
    [ WARNING ] Image is resized from (787, 259) to (227, 227)
    [ INFO ] Infer Request 2 filling
    [ INFO ] Prepare image /opt/intel/openvino_2021/deployment_tools/demo/car.png
    [ WARNING ] Image is resized from (787, 259) to (227, 227)
    [ INFO ] Infer Request 3 filling
    [ INFO ] Prepare image /opt/intel/openvino_2021/deployment_tools/demo/car.png
    [ WARNING ] Image is resized from (787, 259) to (227, 227)
    [Step 10/11] Measuring performance (Start inference asynchronously, 4 inference requests using 4 streams for CPU, limits: 1000 iterations)
    [ INFO ] First inference took 10.22 ms
    
    [Step 11/11] Dumping statistics report
    [ INFO ] Pefrormance counts for 0-th infer request:
    data/mean_value_const_biases  NOT_RUN        layerType: Const              realTime: 0         cpu: 0               execType: unknown_FP32
        :
        :
    [ INFO ] Pefrormance counts for 3-th infer request:
    data/mean_value_const_biases  NOT_RUN        layerType: Const              realTime: 0         cpu: 0               execType: unknown_FP32
    data/mean_value_const_weights NOT_RUN        layerType: Const              realTime: 0         cpu: 0               execType: unknown_FP32
    data/mean_value               EXECUTED       layerType: ScaleShift         realTime: 87        cpu: 87              execType: jit_avx2_I8
    conv1                         EXECUTED       layerType: Convolution        realTime: 719       cpu: 719             execType: jit_avx2_FP32
    relu_conv1                    NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    pool1                         EXECUTED       layerType: Pooling            realTime: 398       cpu: 398             execType: jit_avx_FP32
    fire2/squeeze1x1              EXECUTED       layerType: Convolution        realTime: 103       cpu: 103             execType: jit_avx2_1x1_FP32
    fire2/relu_squeeze1x1         NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire2/expand1x1               EXECUTED       layerType: Convolution        realTime: 103       cpu: 103             execType: jit_avx2_1x1_FP32
    fire2/relu_expand1x1          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire2/expand3x3               EXECUTED       layerType: Convolution        realTime: 735       cpu: 735             execType: jit_avx2_FP32
    fire2/relu_expand3x3          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire2/concat                  EXECUTED       layerType: Concat             realTime: 4         cpu: 4               execType: unknown_FP32
    fire3/squeeze1x1              EXECUTED       layerType: Convolution        realTime: 222       cpu: 222             execType: jit_avx2_1x1_FP32
    fire3/relu_squeeze1x1         NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire3/expand1x1               EXECUTED       layerType: Convolution        realTime: 102       cpu: 102             execType: jit_avx2_1x1_FP32
    fire3/relu_expand1x1          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire3/expand3x3               EXECUTED       layerType: Convolution        realTime: 727       cpu: 727             execType: jit_avx2_FP32
    fire3/relu_expand3x3          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire3/concat                  EXECUTED       layerType: Concat             realTime: 2         cpu: 2               execType: unknown_FP32
    pool3                         EXECUTED       layerType: Pooling            realTime: 196       cpu: 196             execType: jit_avx_FP32
    fire4/squeeze1x1              EXECUTED       layerType: Convolution        realTime: 96        cpu: 96              execType: jit_avx2_1x1_FP32
    fire4/relu_squeeze1x1         NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire4/expand1x1               EXECUTED       layerType: Convolution        realTime: 89        cpu: 89              execType: jit_avx2_1x1_FP32
    fire4/relu_expand1x1          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire4/expand3x3               EXECUTED       layerType: Convolution        realTime: 744       cpu: 744             execType: jit_avx2_FP32
    fire4/relu_expand3x3          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire4/concat                  EXECUTED       layerType: Concat             realTime: 2         cpu: 2               execType: unknown_FP32
    fire5/squeeze1x1              EXECUTED       layerType: Convolution        realTime: 197       cpu: 197             execType: jit_avx2_1x1_FP32
    fire5/relu_squeeze1x1         NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire5/expand1x1               EXECUTED       layerType: Convolution        realTime: 90        cpu: 90              execType: jit_avx2_1x1_FP32
    fire5/relu_expand1x1          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire5/expand3x3               EXECUTED       layerType: Convolution        realTime: 746       cpu: 746             execType: jit_avx2_FP32
    fire5/relu_expand3x3          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire5/concat                  EXECUTED       layerType: Concat             realTime: 2         cpu: 2               execType: unknown_FP32
    pool5                         EXECUTED       layerType: Pooling            realTime: 80        cpu: 80              execType: jit_avx_FP32
    fire6/squeeze1x1              EXECUTED       layerType: Convolution        realTime: 66        cpu: 66              execType: jit_avx2_1x1_FP32
    fire6/relu_squeeze1x1         NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire6/expand1x1               EXECUTED       layerType: Convolution        realTime: 51        cpu: 51              execType: jit_avx2_1x1_FP32
    fire6/relu_expand1x1          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire6/expand3x3               EXECUTED       layerType: Convolution        realTime: 434       cpu: 434             execType: jit_avx2_FP32
    fire6/relu_expand3x3          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire6/concat                  EXECUTED       layerType: Concat             realTime: 1         cpu: 1               execType: unknown_FP32
    fire7/squeeze1x1              EXECUTED       layerType: Convolution        realTime: 99        cpu: 99              execType: jit_avx2_1x1_FP32
    fire7/relu_squeeze1x1         NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire7/expand1x1               EXECUTED       layerType: Convolution        realTime: 50        cpu: 50              execType: jit_avx2_1x1_FP32
    fire7/relu_expand1x1          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire7/expand3x3               EXECUTED       layerType: Convolution        realTime: 445       cpu: 445             execType: jit_avx2_FP32
    fire7/relu_expand3x3          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire7/concat                  EXECUTED       layerType: Concat             realTime: 1         cpu: 1               execType: unknown_FP32
    fire8/squeeze1x1              EXECUTED       layerType: Convolution        realTime: 140       cpu: 140             execType: jit_avx2_1x1_FP32
    fire8/relu_squeeze1x1         NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire8/expand1x1               EXECUTED       layerType: Convolution        realTime: 92        cpu: 92              execType: jit_avx2_1x1_FP32
    fire8/relu_expand1x1          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire8/expand3x3               EXECUTED       layerType: Convolution        realTime: 779       cpu: 779             execType: jit_avx2_FP32
    fire8/relu_expand3x3          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire8/concat                  EXECUTED       layerType: Concat             realTime: 1         cpu: 1               execType: unknown_FP32
    fire9/squeeze1x1              EXECUTED       layerType: Convolution        realTime: 179       cpu: 179             execType: jit_avx2_1x1_FP32
    fire9/relu_squeeze1x1         NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire9/expand1x1               EXECUTED       layerType: Convolution        realTime: 91        cpu: 91              execType: jit_avx2_1x1_FP32
    fire9/relu_expand1x1          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire9/expand3x3               EXECUTED       layerType: Convolution        realTime: 779       cpu: 779             execType: jit_avx2_FP32
    fire9/relu_expand3x3          NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    fire9/concat                  EXECUTED       layerType: Concat             realTime: 1         cpu: 1               execType: unknown_FP32
    conv10                        EXECUTED       layerType: Convolution        realTime: 2713      cpu: 2713            execType: jit_avx2_1x1_FP32
    relu_conv10                   NOT_RUN        layerType: ReLU               realTime: 0         cpu: 0               execType: undef
    pool10/reduce                 EXECUTED       layerType: Pooling            realTime: 61        cpu: 61              execType: jit_avx_FP32
    prob                          EXECUTED       layerType: SoftMax            realTime: 3         cpu: 3               execType: jit_avx2_FP32
    prob_nChw8c_nchw_out_prob     EXECUTED       layerType: Reorder            realTime: 7         cpu: 7               execType: jit_uni_FP32
    out_prob                      NOT_RUN        layerType: Output             realTime: 0         cpu: 0               execType: unknown_FP32
    Total time: 11437    microseconds
    
    Full device name: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
    
    Count:      1000 iterations
    Duration:   2918.03 ms
    Latency:    9.65 ms
    Throughput: 342.70 FPS
    
    
    ###################################################
    
    Inference Engine benchmark app completed successfully.
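The demo wraps benchmark_app; the same measurement can be launched directly, for example via the Python benchmark tool shipped with the toolkit, using the -niter/-nstreams options seen in the log above. A guarded sketch (the model path assumes the demo has already downloaded squeezenet1.1):

```shell
# Run the Python benchmark tool against the squeezenet1.1 IR model
BA=/opt/intel/openvino_2021/deployment_tools/tools/benchmark_tool/benchmark_app.py
XML=~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.xml
if [ -f "$BA" ] && [ -f "$XML" ]; then
    python3 "$BA" -m "$XML" -d CPU -niter 1000 -nstreams 4
else
    echo "benchmark tool or model not found; run the demo first"
fi
```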
  9. Bulk-download the inference model files
    python3 /opt/intel/openvino_2021/deployment_tools/tools/model_downloader/downloader.py --all
    The downloaded models are stored under the ./public and ./intel directories.
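Downloading everything with --all is very large; downloader.py also accepts --name and --precisions to fetch only selected models. A guarded sketch:

```shell
# Fetch a single model, FP16 only, into ~/model instead of downloading all
DL=/opt/intel/openvino_2021/deployment_tools/tools/model_downloader/downloader.py
if [ -f "$DL" ]; then
    python3 "$DL" --name squeezenet1.1 --precisions FP16 --output_dir ~/model
else
    echo "downloader.py not found; install OpenVINO first"
fi
```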

  10. Bulk-convert the public models
    $ python3 /opt/intel/openvino_2021/deployment_tools/tools/model_downloader/converter.py --all
        :
        :
    FAILED:
    cocosnet
    colorization-siggraph
    colorization-v2
    densenet-121-caffe2
    efficientnet-b0-pytorch
    efficientnet-b5-pytorch
    efficientnet-b7-pytorch
    faceboxes-pytorch
    googlenet-v3-pytorch
    hbonet-0.25
    hbonet-0.5
    hbonet-1.0
    hrnet-v2-c1-segmentation
    human-pose-estimation-3d-0001
    midasnet
    mobilenet-v2-pytorch
    resnest-50-pytorch
    resnet-18-pytorch
    resnet-34-pytorch
    resnet-50-caffe2
    resnet-50-pytorch
    shufflenet-v2-x1.0
    single-human-pose-estimation-0001
    squeezenet1.1-caffe2
    vgg19-caffe2
    yolact-resnet50-fpn-pytorch
    Quite a few models fail to convert.

  11. Conversion fails with "ModuleNotFoundError: No module named 'torch'", so install PyTorch.
    mizutu@ubuntu2004dk:~/model$ python3 $INTEL_OPENVINO_DIR/deployment_tools/tools/model_downloader/converter.py --name human-pose-estimation-3d-0001
    ========== Converting human-pose-estimation-3d-0001 to ONNX
    Conversion to ONNX command: /bin/python3 /opt/intel/openvino_2021/deployment_tools/tools/model_downloader/pytorch_to_onnx.py --model-path=/home/mizutu/model/public/human-pose-estimation-3d-0001 --model-name=PoseEstimationWithMobileNet --model-param=is_convertible_by_mo=True --import-module=model --weights=/home/mizutu/model/public/human-pose-estimation-3d-0001/human-pose-estimation-3d-0001.pth --input-shape=1,3,256,448 --input-names=data --output-names=features,heatmaps,pafs --output-file=/home/mizutu/model/public/human-pose-estimation-3d-0001/human-pose-estimation-3d-0001.onnx
    
    Traceback (most recent call last):
      File "/opt/intel/openvino_2021/deployment_tools/tools/model_downloader/pytorch_to_onnx.py", line 10, in <module>
        import torch
    ModuleNotFoundError: No module named 'torch'
    
    FAILED:
    human-pose-estimation-3d-0001
    Go to the official PyTorch site ("FROM RESEARCH TO PRODUCTION") and obtain the install parameters:
    pip install torch==1.8.0+cpu torchvision==0.9.0+cpu torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
    mizutu@ubuntu2004dk:~/model$ pip3 install torch==1.8.0+cpu torchvision==0.9.0+cpu torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
    Looking in links: https://download.pytorch.org/whl/torch_stable.html
    Collecting torch==1.8.0+cpu
      Downloading https://download.pytorch.org/whl/cpu/torch-1.8.0%2Bcpu-cp38-cp38-linux_x86_64.whl (169.1 MB)
         |████████████████████████████████| 169.1 MB 39 kB/s 
    Collecting torchvision==0.9.0+cpu
      Downloading https://download.pytorch.org/whl/cpu/torchvision-0.9.0%2Bcpu-cp38-cp38-linux_x86_64.whl (13.3 MB)
         |████████████████████████████████| 13.3 MB 14.6 MB/s 
    Collecting torchaudio==0.8.0
      Downloading torchaudio-0.8.0-cp38-cp38-manylinux1_x86_64.whl (1.9 MB)
         |████████████████████████████████| 1.9 MB 3.9 MB/s 
    Requirement already satisfied: numpy in /usr/local/lib/python3.8/dist-packages (from torch==1.8.0+cpu) (1.18.5)
    Requirement already satisfied: typing-extensions in /usr/local/lib/python3.8/dist-packages (from torch==1.8.0+cpu) (3.7.4.3)
    Requirement already satisfied: pillow>=4.1.1 in /usr/lib/python3/dist-packages (from torchvision==0.9.0+cpu) (7.0.0)
    Installing collected packages: torch, torchvision, torchaudio
      WARNING: The scripts convert-caffe2-to-onnx and convert-onnx-to-caffe2 are installed in '/home/mizutu/.local/bin' which is not on PATH.
      Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
    Successfully installed torch-1.8.0+cpu torchaudio-0.8.0 torchvision-0.9.0+cpu
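Before re-running the converter, the install can be verified from Python. A small sketch that degrades gracefully when torch is absent:

```shell
# Print the installed torch version, or "not installed" if the import fails
ver="$(python3 -c 'import torch; print(torch.__version__)' 2>/dev/null || echo 'not installed')"
echo "torch: $ver"
```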
  12. Bulk-convert the public models again
    $ python3 /opt/intel/openvino_2021/deployment_tools/tools/model_downloader/converter.py --all
        :
        :
    FAILED:
    cocosnet
    efficientdet-d0-tf
    efficientdet-d1-tf
    The number of errors has dropped considerably. (2021/03/21)

  13. Collect the trained models together.
    They are hard to access as-is, so move the generated IR models into the directory "~/model":
    ~/model/intel/FP16
    ~/model/intel/FP32
    ~/model/public/FP16
    ~/model/public/FP32
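Step 13 can be scripted by walking the converter's output tree and copying each model's FP16/FP32 IR files into the flat layout above. The `collect_ir` helper and the flattened layout are ours; the demo below runs on a throwaway tree mimicking the converter output:

```shell
# collect_ir SRC DST: copy every .xml/.bin found under an FP16/FP32
# directory in SRC into DST/<precision>/ (flattened)
collect_ir() {
    src="$1"; dst="$2"
    for prec in FP16 FP32; do
        mkdir -p "$dst/$prec"
        find "$src" -type d -name "$prec" -exec sh -c \
            'cp "$0"/*.xml "$0"/*.bin "$1" 2>/dev/null' {} "$dst/$prec" \;
    done
}

# Demo on a temporary tree shaped like the converter output
tmp="$(mktemp -d)"
mkdir -p "$tmp/public/squeezenet1.1/FP16"
touch "$tmp/public/squeezenet1.1/FP16/squeezenet1.1.xml" \
      "$tmp/public/squeezenet1.1/FP16/squeezenet1.1.bin"
collect_ir "$tmp/public" "$tmp/model/public"
ls "$tmp/model/public/FP16"
```

Note that flattening assumes IR file names are unique across models, which holds for the Open Model Zoo naming scheme.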

Update History

References


Last-modified: 2021-04-15 (Thu) 05:49:41