私的AI研究会 > OpenModelZoo3
Investigating how to use the inference engine in an application by running the demo software bundled with the OpenVINO™ Toolkit. Part 3.
Image inpainting with GMCNN: estimating appropriate pixel information to fill holes in an image.
Input image path <path_to_image>: ~/Images/
Model path <path_to_model>: ~/model/public/FP32/ or ~/model/public/FP16/
Pretrained model: gmcnn-places2-tf
$ python3 image_inpainting_demo.py -h
usage: image_inpainting_demo.py [-h] -m MODEL [-i INPUT] [-d DEVICE] [-p PARTS]
                                [-mbw MAX_BRUSH_WIDTH] [-ml MAX_LENGTH]
                                [-mv MAX_VERTEX] [--no_show] [-o OUTPUT]
                                [-ac C C C] [-ar]

Options:
  -h, --help            Show this help message and exit.
  -m MODEL, --model MODEL
                        Required. Path to an .xml file with a trained model.
  -i INPUT, --input INPUT
                        path to image.
  -d DEVICE, --device DEVICE
                        Optional. Specify the target device to infer on; CPU,
                        GPU, FPGA, HDDL or MYRIAD is acceptable. The demo will
                        look for a suitable plugin for device specified.
                        Default value is CPU
  -p PARTS, --parts PARTS
                        Optional. Number of parts to draw mask. Ignored in GUI mode
  -mbw MAX_BRUSH_WIDTH, --max_brush_width MAX_BRUSH_WIDTH
                        Optional. Max width of brush to draw mask. Ignored in GUI mode
  -ml MAX_LENGTH, --max_length MAX_LENGTH
                        Optional. Max strokes length to draw mask. Ignored in GUI mode
  -mv MAX_VERTEX, --max_vertex MAX_VERTEX
                        Optional. Max number of vertex to draw mask. Ignored in GUI mode
  --no_show             Optional. Don't show output. Cannot be used in GUI mode
  -o OUTPUT, --output OUTPUT
                        Optional. Save output to the file with provided filename. Ignored in GUI mode
  -ac C C C, --auto_mask_color C C C
                        Optional. Use automatic (non-interactive) mode with
                        color mask. Provide color to be treated as mask (3 RGB
                        components in range of 0...255). Cannot be used
                        together with -ar.
  -ar, --auto_mask_random
                        Optional. Use automatic (non-interactive) mode with
                        random mask for inpainting (with parameters set by -p,
                        -mbw, -ml and -mv). Cannot be used together with -ac.
$ python3 image_inpainting_demo.py -m ~/model/public/FP32/gmcnn-places2-tf.xml -i ~/Images/car_m.jpg
$ python3 image_inpainting_demo.py -m ~/model/public/FP32/gmcnn-places2-tf.xml -i ~/Images/car_m.jpg -ar
$ python3 image_inpainting_demo.py -m ~/model/public/FP32/gmcnn-places2-tf.xml -i ~/Images/prants_ed_m.png -ac 255 255 255
$ python3 image_inpainting_demo.py -m ~/model/public/FP32/gmcnn-places2-tf.xml -i ~/Images/landscape_ed.png -ac 255 255 255
Left: the input image with the road sign erased; right: the image processed by the inference engine. The results look practical enough for real use.
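In the -ac (auto mask color) mode used above, the demo treats every pixel that exactly matches the given RGB color as the hole to be inpainted. A minimal numpy sketch of that mask construction (the function name `color_mask` is mine, not from the demo source):

```python
import numpy as np

def color_mask(image: np.ndarray, color=(255, 255, 255)) -> np.ndarray:
    """Mark pixels equal to `color` as the inpainting hole (1 = hole)."""
    target = np.array(color, dtype=image.dtype)
    return np.all(image == target, axis=-1).astype(np.uint8)

# Example: a 2x2 image with one white pixel to be inpainted
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 255, 255)
mask = color_mask(img)   # 1 at (0, 0), 0 elsewhere
```

This is why the painted-over regions in the input images (the car, the sign) are filled with pure white: that color becomes the mask.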
Colorizing monochrome video with a neural network.
Input video path <path_to_video>: ~/Videos/
Model path <path_to_model>: ~/model/public/FP32/ or ~/model/public/FP16/
Pretrained models: colorization-v2, colorization-siggraph
$ python3 colorization_demo.py -h
usage: colorization_demo.py [-h] -m MODEL [-d DEVICE] -i "<path>" [--no_show]
                            [-v] [-u UTILIZATION_MONITORS]

Options:
  -h, --help            Help with the script.
  -m MODEL, --model MODEL
                        Required. Path to .xml file with pre-trained model.
  -d DEVICE, --device DEVICE
                        Optional. Specify target device for infer: CPU, GPU,
                        FPGA, HDDL or MYRIAD. Default: CPU
  -i "<path>", --input "<path>"
                        Required. Input to process.
  --no_show             Optional. Disable display of results on screen.
  -v, --verbose         Optional. Enable display of processing logs on screen.
  -u UTILIZATION_MONITORS, --utilization_monitors UTILIZATION_MONITORS
                        Optional. List of monitors to show initially.
$ python3 colorization_demo.py -m ~/model/public/FP32/colorization-v2.xml -i ~/Videos/mono03.mp4
$ python3 colorization_demo.py -m ~/model/public/FP32/colorization-v2.xml -i ~/Videos/mono01.mp4
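The colorization models work in LAB color space: only the lightness channel of each frame goes into the network, which predicts the two chroma channels, and the three are then merged back. As a rough stand-in for that lightness extraction, here is a Rec.601 luma computation in numpy (an approximation of mine; the demo itself uses OpenCV's BGR-to-LAB conversion):

```python
import numpy as np

def luma(bgr: np.ndarray) -> np.ndarray:
    """Approximate lightness of a BGR frame using Rec.601 weights."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return 0.114 * b + 0.587 * g + 0.299 * r

frame = np.full((2, 2, 3), 255, dtype=np.uint8)  # a pure white frame
l_channel = luma(frame.astype(np.float32))       # single-channel input for the net
```

Because a monochrome frame already carries only lightness information, feeding it through this pipeline costs nothing; the network's job is purely to hallucinate plausible chroma.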
Corrects blurred images caused by camera shake and other factors.
This demo was added in release 2021.3. Run it from the directory where it is installed.
Input image path <path_to_image>: ~/Images/
Model path <path_to_model>: ~/model/public/FP32/ or ~/model/public/FP16/
Pretrained model: deblurgan-v2
mizutu@ubuntu2004dk2:/opt/intel/openvino_2021/inference_engine/demos/deblurring_demo/python$ python3 deblurring_demo.py -h
usage: deblurring_demo.py [-h] -m MODEL -i INPUT [-d DEVICE]
                          [-nireq NUM_INFER_REQUESTS] [-nstreams NUM_STREAMS]
                          [-nthreads NUM_THREADS] [--loop] [-o OUTPUT]
                          [-limit OUTPUT_LIMIT] [--no_show]
                          [-u UTILIZATION_MONITORS]

Options:
  -h, --help            Show this help message and exit.
  -m MODEL, --model MODEL
                        Required. Path to an .xml file with a trained model.
  -i INPUT, --input INPUT
                        Required. An input to process. The input must be a
                        single image, a folder of images or anything that
                        cv2.VideoCapture can process.
  -d DEVICE, --device DEVICE
                        Optional. Specify the target device to infer on; CPU,
                        GPU, FPGA, HDDL or MYRIAD is acceptable. The demo will
                        look for a suitable plugin for device specified.
                        Default value is CPU.

Inference options:
  -nireq NUM_INFER_REQUESTS, --num_infer_requests NUM_INFER_REQUESTS
                        Optional. Number of infer requests
  -nstreams NUM_STREAMS, --num_streams NUM_STREAMS
                        Optional. Number of streams to use for inference on
                        the CPU or/and GPU in throughput mode (for HETERO and
                        MULTI device cases use format
                        <device1>:<nstreams1>,<device2>:<nstreams2> or just
                        <nstreams>).
  -nthreads NUM_THREADS, --num_threads NUM_THREADS
                        Optional. Number of threads to use for inference on
                        CPU (including HETERO cases).

Input/output options:
  --loop                Optional. Enable reading the input in a loop.
  -o OUTPUT, --output OUTPUT
                        Optional. Name of output to save.
  -limit OUTPUT_LIMIT, --output_limit OUTPUT_LIMIT
                        Optional. Number of frames to store in output. If 0 is
                        set, all frames are stored.
  --no_show             Optional. Don't show output.
  -u UTILIZATION_MONITORS, --utilization_monitors UTILIZATION_MONITORS
                        Optional. List of monitors to show initially.
mizutu@ubuntu2004dk2:/opt/intel/openvino_2021/inference_engine/demos/deblurring_demo/python$ python3 deblurring_demo.py -i ~/Images/desk.png -m ~/model/public/FP32/deblurgan-v2.xml --loop
[ INFO ] Initializing Inference Engine...
[ INFO ] Loading network...
[ INFO ] Reading network from IR...
[ INFO ] Loading network to CPU plugin...
[ INFO ] Starting inference...
To close the application, press 'CTRL+C' here or switch to the output window and press ESC key
Latency: 311.6 ms
FPS: 3.2
mizutu@ubuntu2004dk2:/opt/intel/openvino_2021/inference_engine/demos/deblurring_demo/python$ python3 deblurring_demo.py -i ~/Images/deblurred_image.png -m ~/model/public/FP32/deblurgan-v2.xml --loop
[ INFO ] Initializing Inference Engine...
[ INFO ] Loading network...
[ INFO ] Reading network from IR...
[ INFO ] Loading network to CPU plugin...
[ INFO ] Starting inference...
To close the application, press 'CTRL+C' here or switch to the output window and press ESC key
Latency: 307.8 ms
FPS: 3.2
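The Latency / FPS figures printed at the end of each run are simply the average per-frame processing time and its reciprocal. A trivial sketch of that bookkeeping (the helper name is mine):

```python
def latency_fps(frame_times_ms):
    """Average per-frame time (ms) and the resulting throughput (frames/s)."""
    latency = sum(frame_times_ms) / len(frame_times_ms)
    return latency, 1000.0 / latency

# Feeding in the two per-run latencies from the logs above
lat, fps = latency_fps([311.6, 307.8])
```

At roughly 310 ms per frame on this CPU, deblurgan-v2 is clearly an offline tool here, not a real-time filter.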
Performs image processing with a selected task type: super resolution, which reconstructs a high-resolution image from a low-resolution original, or deblurring, which corrects blur caused by camera shake and other factors.
This demo was added in release 2021.4. Run it from the directory where it is installed.
The standalone Super Resolution C++ Demo has been discontinued.
Input image path <path_to_image>: ~/Images/
Model path <path_to_model>: ~/model/public/FP32/ or ~/model/public/FP16/
Pretrained models: single-image-super-resolution-1032, single-image-super-resolution-1033, text-image-super-resolution-0001, deblurgan-v2
$ ./image_processing_demo -h
[ INFO ] InferenceEngine:
        IE version ......... 2021.4
        Build ........... 0

image_processing_demo_async [OPTION]
Options:

    -h                     Print a usage message.
    -at "<type>"           Required. Type of the network, either 'sr' for Super Resolution task or 'deblur' for Deblurring
    -i "<path>"            Required. An input to process. The input must be a single image, a folder of images, video file or camera id.
    -m "<path>"            Required. Path to an .xml file with a trained model.
    -o "<path>"            Optional. Name of the output file(s) to save.
    -limit "<num>"         Optional. Number of frames to store in output. If 0 is set, all frames are stored.
    -l "<absolute_path>"   Required for CPU custom layers. Absolute path to a shared library with the kernel implementations.
          Or
    -c "<absolute_path>"   Required for GPU custom kernels. Absolute path to the .xml file with the kernel descriptions.
    -d "<device>"          Optional. Specify the target device to infer on (the list of available devices is shown below). Default value is CPU. Use "-d HETERO:<comma-separated_devices_list>" format to specify HETERO plugin. The demo will look for a suitable plugin for a specified device.
    -pc                    Optional. Enables per-layer performance report.
    -nireq "<integer>"     Optional. Number of infer requests. If this option is omitted, number of infer requests is determined automatically.
    -nthreads "<integer>"  Optional. Number of threads.
    -nstreams              Optional. Number of streams to use for inference on the CPU or/and GPU in throughput mode (for HETERO and MULTI device cases use format <device1>:<nstreams1>,<device2>:<nstreams2> or just <nstreams>)
    -loop                  Optional. Enable reading the input in a loop.
    -no_show               Optional. Do not show processed video.
    -output_resolution     Optional. Specify the maximum output window resolution in (width x height) format. Example: 1280x720. Input frame size used by default.
    -u                     Optional. List of monitors to show initially.

[E:] [BSL] found 0 ioexpander device
Available target devices:  CPU  GNA
$ ./image_processing_demo -i ~/Images/image-low.bmp -m ~/model/intel/FP32/single-image-super-resolution-1033.xml -at sr
[ INFO ] InferenceEngine:
        IE version ......... 2021.4
        Build ........... 0
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
[ INFO ]        CPU
        MKLDNNPlugin version ......... 2021.4
        Build ........... 0
Loading network files
[ INFO ] Batch size is forced to 1.
[ INFO ] Loading model to the device
[ WARN:0] global ../opencv/modules/highgui/src/window.cpp (661) createTrackbar UI/Trackbar(Orig/Diff | Res@Image Processing Demo - Super Resolution (press A for help)): Using 'value' pointer is unsafe and deprecated. Use NULL as value pointer. To fetch trackbar value setup callback.
$ ./image_processing_demo -i ~/Images/girl.bmp -m ~/model/intel/FP32/single-image-super-resolution-1033.xml -at sr
[ INFO ] InferenceEngine:
        IE version ......... 2021.4
        Build ........... 0
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
[ INFO ]        CPU
        MKLDNNPlugin version ......... 2021.4
        Build ........... 0
Loading network files
[ INFO ] Batch size is forced to 1.
[ INFO ] Loading model to the device
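The single-image-super-resolution models enlarge their input by a fixed integer factor, and the demo's Orig/Diff trackbar lets you compare the network output against the naive enlargement. That naive baseline, nearest-neighbor upscaling, is the one-liner the network is meant to beat; a numpy sketch (the factor 4 below is just an illustration, each model has its own fixed scale and input size):

```python
import numpy as np

def upscale_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbor enlargement: repeat each pixel factor x factor times."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

small = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
big = upscale_nearest(small, 4)   # (2, 2, 3) -> (8, 8, 3)
```

The super-resolution network produces an output of the same enlarged size, but with edges and textures reconstructed rather than blockily repeated.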
Queries every available Inference Engine device and prints its supported metrics and default configuration values.
This sample demonstrates how to use the device query API.
Working directory: /opt/intel/openvino_2021/deployment_tools/inference_engine/samples/python/hello_query_device/
Executable: python3 hello_query_device.py
$ ./_hello_query_device.sh
[hello_query_device.sh] 'hello_query_device' Run !!
Available devices:
[E:] [BSL] found 0 ioexpander device
Device: CPU
    Metrics:
        AVAILABLE_DEVICES:
        SUPPORTED_METRICS: AVAILABLE_DEVICES, SUPPORTED_METRICS, FULL_DEVICE_NAME, OPTIMIZATION_CAPABILITIES, SUPPORTED_CONFIG_KEYS, RANGE_FOR_ASYNC_INFER_REQUESTS, RANGE_FOR_STREAMS
        FULL_DEVICE_NAME: 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
        OPTIMIZATION_CAPABILITIES: WINOGRAD, FP32, FP16, INT8, BIN
        SUPPORTED_CONFIG_KEYS: CPU_BIND_THREAD, CPU_THREADS_NUM, CPU_THROUGHPUT_STREAMS, DUMP_EXEC_GRAPH_AS_DOT, DYN_BATCH_ENABLED, DYN_BATCH_LIMIT, ENFORCE_BF16, EXCLUSIVE_ASYNC_REQUESTS, PERF_COUNT
        RANGE_FOR_ASYNC_INFER_REQUESTS: 1, 1, 1
        RANGE_FOR_STREAMS: 1, 8
    Default values for device configuration keys:
        CPU_BIND_THREAD: YES
        CPU_THREADS_NUM: 0
        CPU_THROUGHPUT_STREAMS: 1
        DUMP_EXEC_GRAPH_AS_DOT:
        DYN_BATCH_ENABLED: NO
        DYN_BATCH_LIMIT: 0
        ENFORCE_BF16: NO
        EXCLUSIVE_ASYNC_REQUESTS: NO
        PERF_COUNT: NO
Device: GNA
    Metrics:
        GNA_LIBRARY_FULL_VERSION: 2.0.0.1047
        FULL_DEVICE_NAME: GNA_SW
        OPTIMAL_NUMBER_OF_INFER_REQUESTS: 1
        SUPPORTED_CONFIG_KEYS: EXCLUSIVE_ASYNC_REQUESTS, GNA_COMPACT_MODE, GNA_DEVICE_MODE, GNA_FIRMWARE_MODEL_IMAGE, GNA_FIRMWARE_MODEL_IMAGE_GENERATION, GNA_LIB_N_THREADS, GNA_PRECISION, GNA_PWL_UNIFORM_DESIGN, GNA_SCALE_FACTOR, GNA_SCALE_FACTOR_0, PERF_COUNT, SINGLE_THREAD
        SUPPORTED_METRICS: GNA_LIBRARY_FULL_VERSION, FULL_DEVICE_NAME, OPTIMAL_NUMBER_OF_INFER_REQUESTS, SUPPORTED_CONFIG_KEYS, SUPPORTED_METRICS, AVAILABLE_DEVICES
        AVAILABLE_DEVICES: GNA_SW
    Default values for device configuration keys:
        EXCLUSIVE_ASYNC_REQUESTS: NO
        GNA_COMPACT_MODE: NO
        GNA_DEVICE_MODE: GNA_SW_EXACT
        GNA_FIRMWARE_MODEL_IMAGE:
        GNA_FIRMWARE_MODEL_IMAGE_GENERATION:
        GNA_LIB_N_THREADS: 1
        GNA_PRECISION: I16
        GNA_PWL_UNIFORM_DESIGN: NO
        GNA_SCALE_FACTOR: 1.000000
        GNA_SCALE_FACTOR_0: 1.000000
        PERF_COUNT: NO
        SINGLE_THREAD: YES
Device: GPU
    Metrics:
        AVAILABLE_DEVICES: 0
        SUPPORTED_METRICS: AVAILABLE_DEVICES, SUPPORTED_METRICS, FULL_DEVICE_NAME, OPTIMIZATION_CAPABILITIES, SUPPORTED_CONFIG_KEYS, RANGE_FOR_ASYNC_INFER_REQUESTS, RANGE_FOR_STREAMS
        FULL_DEVICE_NAME: Intel(R) Gen12LP HD Graphics (iGPU)
        OPTIMIZATION_CAPABILITIES: FP32, BIN, FP16, INT8
        SUPPORTED_CONFIG_KEYS: CACHE_DIR, CLDNN_ENABLE_FP16_FOR_QUANTIZED_MODELS, CLDNN_GRAPH_DUMPS_DIR, CLDNN_MEM_POOL, CLDNN_NV12_TWO_INPUTS, CLDNN_PLUGIN_PRIORITY, CLDNN_PLUGIN_THROTTLE, CLDNN_SOURCES_DUMPS_DIR, CONFIG_FILE, DEVICE_ID, DUMP_KERNELS, DYN_BATCH_ENABLED, EXCLUSIVE_ASYNC_REQUESTS, GPU_THROUGHPUT_STREAMS, PERF_COUNT, TUNING_FILE, TUNING_MODE
        RANGE_FOR_ASYNC_INFER_REQUESTS: 1, 2, 1
        RANGE_FOR_STREAMS: 1, 2
    Default values for device configuration keys:
        CACHE_DIR:
        CLDNN_ENABLE_FP16_FOR_QUANTIZED_MODELS: YES
        CLDNN_GRAPH_DUMPS_DIR:
        CLDNN_MEM_POOL: YES
        CLDNN_NV12_TWO_INPUTS: NO
        CLDNN_PLUGIN_PRIORITY: 0
        CLDNN_PLUGIN_THROTTLE: 0
        CLDNN_SOURCES_DUMPS_DIR:
        CONFIG_FILE:
        DEVICE_ID:
        DUMP_KERNELS: NO
        DYN_BATCH_ENABLED: NO
        EXCLUSIVE_ASYNC_REQUESTS: NO
        GPU_THROUGHPUT_STREAMS: 1
        PERF_COUNT: NO
        TUNING_FILE:
        TUNING_MODE: TUNING_DISABLED
Device: MYRIAD
    Metrics:
        DEVICE_THERMAL: UNSUPPORTED TYPE
        OPTIMIZATION_CAPABILITIES: FP16
        RANGE_FOR_ASYNC_INFER_REQUESTS: 3, 6, 1
        SUPPORTED_METRICS: DEVICE_THERMAL, OPTIMIZATION_CAPABILITIES, RANGE_FOR_ASYNC_INFER_REQUESTS, SUPPORTED_METRICS, SUPPORTED_CONFIG_KEYS, FULL_DEVICE_NAME, AVAILABLE_DEVICES
        SUPPORTED_CONFIG_KEYS: DEVICE_ID, EXCLUSIVE_ASYNC_REQUESTS, LOG_LEVEL, VPU_MYRIAD_FORCE_RESET, VPU_MYRIAD_PLATFORM, VPU_CUSTOM_LAYERS, PERF_COUNT, VPU_PRINT_RECEIVE_TENSOR_TIME, CONFIG_FILE, VPU_HW_STAGES_OPTIMIZATION, MYRIAD_THROUGHPUT_STREAMS, MYRIAD_ENABLE_FORCE_RESET, MYRIAD_ENABLE_RECEIVING_TENSOR_TIME, MYRIAD_CUSTOM_LAYERS, MYRIAD_ENABLE_HW_ACCELERATION
        FULL_DEVICE_NAME: Intel Movidius Myriad X VPU
        AVAILABLE_DEVICES: 3.4-ma2480
    Default values for device configuration keys:
        DEVICE_ID:
        EXCLUSIVE_ASYNC_REQUESTS: NO
        LOG_LEVEL: LOG_NONE
        VPU_MYRIAD_FORCE_RESET: NO
        VPU_MYRIAD_PLATFORM:
        VPU_CUSTOM_LAYERS:
        PERF_COUNT: NO
        VPU_PRINT_RECEIVE_TENSOR_TIME: NO
        CONFIG_FILE:
        VPU_HW_STAGES_OPTIMIZATION: YES
        MYRIAD_THROUGHPUT_STREAMS: -1
        MYRIAD_ENABLE_FORCE_RESET: NO
        MYRIAD_ENABLE_RECEIVING_TENSOR_TIME: NO
        MYRIAD_CUSTOM_LAYERS:
        MYRIAD_ENABLE_HW_ACCELERATION: YES
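The sample itself is essentially a loop over the available devices that prints each metric's name and value. A pure-Python sketch of that report formatting, driven here by hand-made sample data rather than a live IECore query (the helper name `format_device_report` is mine):

```python
def format_device_report(device: str, metrics: dict) -> str:
    """Render one device block in the style of the hello_query_device output."""
    lines = [f"Device: {device}", "    Metrics:"]
    for key, value in metrics.items():
        if isinstance(value, (list, tuple)):
            # Range/list metrics are printed as comma-separated values
            value = ", ".join(str(v) for v in value)
        lines.append(f"        {key}: {value}")
    return "\n".join(lines)

# Sample data mimicking two of the CPU metrics shown above
report = format_device_report(
    "CPU",
    {"RANGE_FOR_STREAMS": (1, 8), "PERF_COUNT": "NO"},
)
```

In the real sample, the dict contents come from `ie.get_metric(device, metric_name)` for each name listed under SUPPORTED_METRICS.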