Revisit "YOLO V5" as used in Chapter04 of 「PyTorch ではじめる AI開発」.
This page fully revises the earlier page on the object detection algorithm "YOLO V5".
The name "YOLO" comes from "You Only Look Once", a play on the saying "You only live once".
ID | coco.names | coco.names_jp | ID | coco.names | coco.names_jp |
0 | person | 人 | 40 | wine glass | ワイングラス |
1 | bicycle | 自転車 | 41 | cup | カップ |
2 | car | 車 | 42 | fork | フォーク |
3 | motorbike | バイク | 43 | knife | ナイフ |
4 | aeroplane | 飛行機 | 44 | spoon | スプーン |
5 | bus | バス | 45 | bowl | 丼鉢 |
6 | train | 列車 | 46 | banana | バナナ |
7 | truck | トラック | 47 | apple | リンゴ |
8 | boat | ボート | 48 | sandwich | サンドイッチ |
9 | traffic light | 信号機 | 49 | orange | オレンジ |
10 | fire hydrant | 消火栓 | 50 | broccoli | ブロッコリー |
11 | stop sign | 一時停止標識 | 51 | carrot | 人参 |
12 | parking meter | パーキングメーター | 52 | hot dog | ホットドッグ |
13 | bench | ベンチ | 53 | pizza | ピザ |
14 | bird | 鳥 | 54 | donut | ドーナッツ |
15 | cat | 猫 | 55 | cake | ケーキ |
16 | dog | 犬 | 56 | chair | 椅子 |
17 | horse | 馬 | 57 | sofa | ソファー |
18 | sheep | 羊 | 58 | pottedplant | 鉢植え |
19 | cow | 牛 | 59 | bed | ベッド |
20 | elephant | 象 | 60 | diningtable | ダイニングテーブル |
21 | bear | 熊 | 61 | toilet | トイレ |
22 | zebra | シマウマ | 62 | tvmonitor | テレビ |
23 | giraffe | キリン | 63 | laptop | ラップトップコンピューター |
24 | backpack | バックパック | 64 | mouse | マウス |
25 | umbrella | 傘 | 65 | remote | リモコン |
26 | handbag | ハンドバック | 66 | keyboard | キーボード |
27 | tie | ネクタイ | 67 | cell phone | 携帯電話 |
28 | suitcase | スーツケース | 68 | microwave | 電子レンジ |
29 | frisbee | フリスビー | 69 | oven | オーブン |
30 | skis | スキー板 | 70 | toaster | トースター |
31 | snowboard | スノーボード | 71 | sink | キッチン・シンク |
32 | sports ball | スポーツボール | 72 | refrigerator | 冷蔵庫 |
33 | kite | 凧 | 73 | book | 本 |
34 | baseball bat | 野球のバット | 74 | clock | 時計 |
35 | baseball glove | 野球のグローブ | 75 | vase | 花瓶 |
36 | skateboard | スケートボード | 76 | scissors | ハサミ |
37 | surfboard | サーフボード | 77 | teddy bear | テディベア |
38 | tennis racket | テニスラケット | 78 | hair drier | ヘアドライヤー |
39 | bottle | 瓶 | 79 | toothbrush | 歯ブラシ |
(base) conda activate py_learn
(py_learn) PS > cd /anaconda_win/workspace_pylearn
For Linux:
(py_learn) $ cd ~/workspace_pylearn
(py_learn) git clone https://github.com/ultralytics/yolov5
Note: the package list file "requirements.txt" is not used; only the packages missing from the current environment are installed.
c:\anaconda_win\workspace_pylearn\  ← for Windows
~/workspace_pylearn/                ← for Linux
 ├ chapter01
 ├ chapter02
 ├ forest-path-movie-dataset
 ├ sample
 │  :
 └ yolov5
(py_learn) cd yolov5
(py_learn) python detect.py --source 0
Execution result:
(py_learn) python detect.py --source 0
Traceback (most recent call last):
  File "C:\anaconda_win\workspace_pylearn\yolov5\detect.py", line 46, in <module>
    from ultralytics.utils.plotting import Annotator, colors, save_one_box
ModuleNotFoundError: No module named 'ultralytics'
(py_learn) pip install ultralytics
(py_learn) python detect.py --source 0
Execution result:
(py_learn) python detect.py --source 0
detect: weights=yolov5s.pt, source=0, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 v7.0-294-gdb125a20 Python-3.11.7 torch-2.2.0+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
1/1: 0... Success (inf frames 640x480 at 30.00 FPS)

0: 480x640 1 person, 1 chair, 198.1ms
0: 480x640 1 person, 8.0ms
0: 480x640 1 person, 1 chair, 5.0ms
0: 480x640 1 person, 1 chair, 4.0ms
 :
 :
0: 480x640 1 person, 2 chairs, 6.0ms
0: 480x640 1 person, 1 chair, 16.0ms
Traceback (most recent call last):
 :
 :
KeyboardInterrupt
## Official YOLOv5 https://github.com/ultralytics/yolov5 ##
## detect2.py (original: detect.py)
##   ver 0.01 2024.03.12 'Esc' key Break
 :
    :
    # Run inference
    model.warmup(imgsz=(1 if pt or model.triton else bs, 3, *imgsz))  # warmup
    seen, windows, dt = 0, [], (Profile(device=device), Profile(device=device), Profile(device=device))
    break_flag = False  # 'Esc' key Break 2024/03/12
    for path, im, im0s, vid_cap, s in dataset:
        if break_flag:  # 'Esc' key Break 2024/03/12
            break
        with dt[0]:
    :
            :
            # Stream results
            im0 = annotator.result()
            if view_img:
                # if platform.system() == "Linux" and p not in windows:
                #     windows.append(p)
                #     cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO)  # allow window resize (Linux)
                #     cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
                cv2.namedWindow(str(p), flags=cv2.WINDOW_AUTOSIZE | cv2.WINDOW_GUI_NORMAL)  # 2024/03/12
                cv2.imshow(str(p), im0)
                # cv2.waitKey(1)  # 1 millisecond
                ## 'Esc' key Break 2023/06/18
                c = cv2.waitKey(1)  # 1 millisecond
                if c == 27:
                    break_flag = True
                    break

            # Save results (image with detections)
            :
            # Print time (inference-only)  # intermediate display disabled 2024/03/12
            # LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms")
(py_learn) python detect2.py --source 0
Execution result:
(py_learn) python detect2.py --source 0
detect2: weights=yolov5s.pt, source=0, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 v7.0-294-gdb125a20 Python-3.11.7 torch-2.2.0+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
1/1: 0... Success (inf frames 640x480 at 30.00 FPS)

Speed: 0.4ms pre-process, 8.9ms inference, 3.6ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp10
(py_learn) python detect2.py
Execution result:
(py_learn) python detect2.py
detect: weights=yolov5s.pt, source=data\images, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 v7.0-294-gdb125a20 Python-3.11.7 torch-2.2.0+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
image 1/2 C:\anaconda_win\workspace_pylearn\yolov5\data\images\bus.jpg: 640x480 4 persons, 1 bus, 48.9ms
image 2/2 C:\anaconda_win\workspace_pylearn\yolov5\data\images\zidane.jpg: 384x640 2 persons, 2 ties, 52.8ms
Speed: 0.0ms pre-process, 50.8ms inference, 74.6ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp3
Input source | Type |
0 | webcam(0,1,...) |
img.jpg | image |
vid.mp4 | video |
screen | screenshot |
path/ | directory |
list.txt | list of images |
list.streams | list of streams |
'path/*.jpg' | glob |
'https://youtu.be/LNwODJXcvt4' | YouTube |
'rtsp://example.com/media.mp4' | RTSP, RTMP, HTTP stream |
Trained model name | Format |
yolov5s.pt | PyTorch |
yolov5s.torchscript | TorchScript |
yolov5s.onnx | ONNX Runtime or OpenCV DNN with --dnn |
yolov5s_openvino_model | OpenVINO |
yolov5s.engine | TensorRT |
yolov5s.mlmodel | CoreML (macOS-only) |
yolov5s_saved_model | TensorFlow SavedModel |
yolov5s.pb | TensorFlow GraphDef |
yolov5s.tflite | TensorFlow Lite |
yolov5s_edgetpu.tflite | TensorFlow Edge TPU |
yolov5s_paddle_model | PaddlePaddle |
Option | Argument | Default | Meaning |
--weights | str | yolov5s.pt | trained weight model file |
--source | str | data/images | path to the inference source (file/folder); 0,1,... = webcam |
--imgsz | int | (640, 640) | inference image size (pixels) |
--conf-thres | float | 0.25 | class confidence threshold (lower values detect more objects but also more noise) |
--iou-thres | float | 0.45 | IoU (Intersection over Union) threshold: the overlap ratio between detected regions; higher values allow more overlap (see the IoU sketch after this table) |
--max-det | int | 1000 | maximum detections per image |
--device | str | '' | processor to use (0 or 0,1,2,3 or cpu); CUDA is selected when available if unspecified |
--view-img | none | False | display inference results (shown if specified) |
--save-txt | none | False | save inference results (detected coordinates and predicted classes) to text files (*.txt) |
--save-conf | none | False | save class confidences in the results text files (*.txt) |
--save-crop | none | False | save cropped prediction boxes |
--nosave | none | False | recording of results (not saved if specified) |
--classes | str | None | class filter (--classes 0, or --classes 0 2 3) |
--agnostic-nms | none | False | class-agnostic NMS |
--augment | none | False | augmented inference |
--visualize | none | False | visualize features |
--update | none | False | update the model |
--project | str | runs/detect | folder path for saving results |
--name | str | exp | subfolder name under the results folder (incremented per run) |
--exist-ok | none | False | overwrite existing results folder (overwrites if specified) |
--line-thickness | int | 3 | bounding box thickness (pixels) |
--hide-labels | none | False | hide labels |
--hide-conf | none | False | hide confidences |
--half | none | False | use FP16 half-precision inference |
--dnn | none | False | use OpenCV DNN for ONNX inference |
--vid-stride | int | 1 | video frame-rate stride |
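To illustrate what --iou-thres controls, here is a minimal sketch of the IoU computation that NMS relies on (a hypothetical helper for illustration, not code taken from detect.py):

import torch

def box_iou(box1: torch.Tensor, box2: torch.Tensor) -> torch.Tensor:
    # Boxes are (x1, y1, x2, y2); compute the intersection rectangle first
    inter_w = (torch.min(box1[2], box2[2]) - torch.max(box1[0], box2[0])).clamp(min=0)
    inter_h = (torch.min(box1[3], box2[3]) - torch.max(box1[1], box2[1])).clamp(min=0)
    inter = inter_w * inter_h
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)    # union = area1 + area2 - intersection

# Two heavily overlapping boxes: IoU ≈ 0.68, so NMS at --iou-thres 0.45 would suppress one of them
print(box_iou(torch.tensor([0., 0., 10., 10.]), torch.tensor([1., 1., 11., 11.])))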
Format | `export.py --include` argument | Exported model file |
PyTorch | - | yolov5s.pt |
TorchScript | `torchscript` | yolov5s.torchscript |
ONNX | `onnx` | yolov5s.onnx |
OpenVINO | `openvino` | yolov5s_openvino_model/ ※ |
TensorRT | `engine` | yolov5s.engine |
CoreML | `coreml` | yolov5s.mlmodel |
TensorFlow SavedModel | `saved_model` | yolov5s_saved_model/ ※ |
TensorFlow GraphDef | `pb` | yolov5s.pb |
TensorFlow Lite | `tflite` | yolov5s.tflite |
TensorFlow Edge TPU | `edgetpu` | yolov5s_edgetpu.tflite |
TensorFlow.js | `tfjs` | yolov5s_web_model/ ※ |
PaddlePaddle | `paddle` | yolov5s_paddle_model/ ※ |
(py_learn2) python export.py --weights yolov5s.pt --include onnx openvino
Execution result:
(py_learn2) python export.py --weights yolov5s.pt --include onnx openvino
export: data=C:\anaconda_win\workspace_pylearn\yolov5\data\coco128.yaml, weights=['yolov5s.pt'], imgsz=[640, 640], batch_size=1, device=cpu, half=False, inplace=False, keras=False, optimize=False, int8=False, per_tensor=False, dynamic=False, simplify=False, opset=17, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['onnx', 'openvino']
YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CPU

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs

PyTorch: starting from yolov5s.pt with output shape (1, 25200, 85) (14.1 MB)

ONNX: starting export with onnx 1.15.0...
ONNX: export success 0.8s, saved as yolov5s.onnx (28.0 MB)

OpenVINO: starting export with openvino 2024.0.0-14509-34caeefd078-releases/2024/0...
OpenVINO: export success 1.4s, saved as yolov5s_openvino_model\ (28.2 MB)

Export complete (2.8s)
Results saved to C:\anaconda_win\workspace_pylearn\yolov5
Detect:          python detect.py --weights yolov5s_openvino_model\
Validate:        python val.py --weights yolov5s_openvino_model\
PyTorch Hub:     model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s_openvino_model\')
Visualize:       https://netron.app
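As a quick sanity check of the exported ONNX model, a minimal sketch using the onnxruntime package already present in this environment (the (1, 25200, 85) output shape is the one reported in the export log above):

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov5s.onnx", providers=["CPUExecutionProvider"])
inp = session.get_inputs()[0]
print(inp.name, inp.shape)                                   # e.g. images [1, 3, 640, 640]

dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)    # dummy NCHW image batch
outputs = session.run(None, {inp.name: dummy})
print(outputs[0].shape)                                      # (1, 25200, 85): boxes x (xywh + objectness + 80 classes)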
(py_learn2) python -V
Python 3.11.8

(py_learn2) conda list
 :
onnx                      1.15.0          pypi_0    pypi
onnxruntime               1.17.1          pypi_0    pypi
opencv                    4.6.0           py311h5d08a89_5
opencv-python             4.9.0.80        pypi_0    pypi
openjpeg                  2.4.0           h4fc8c34_0
openssl                   3.0.13          h2bbff1b_0
openvino                  2024.0.0        pypi_0    pypi
openvino-dev              2024.0.0        pypi_0    pypi
openvino-telemetry        2023.2.1        pypi_0    pypi
 :
(py_learn) mo --input_model yolov5s.onnx
Execution result:
(py_learn) mo --input_model yolov5s.onnx
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API. Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: C:\anaconda_win\workspace_pylearn\yolov5\yolov5s.xml
[ SUCCESS ] BIN file: C:\anaconda_win\workspace_pylearn\yolov5\yolov5s.bin
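The generated IR can then be loaded through the OpenVINO API 2.0; a minimal sketch (paths as in the log above, output shape assumed to match the v7.0 export):

import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("yolov5s.xml")     # the matching yolov5s.bin is picked up automatically
compiled = core.compile_model(model, "CPU")

result = compiled(np.zeros((1, 3, 640, 640), dtype=np.float32))   # dummy NCHW input
print(next(iter(result.values())).shape)   # (1, 25200, 85) for this v7.0 export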
(py_learn2) python detect2.py --source ../../Videos/car_m.mp4 --view-img
Execution result:
(py_learn2) python detect2.py --source ../../Videos/car_m.mp4 --view-img
detect2: weights=yolov5s.pt, source=../../Videos/car_m.mp4, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=True, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
Speed: 0.4ms pre-process, 5.7ms inference, 2.6ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp17
(py_learn2) python detect2.py --source ../../Videos/car_m.mp4 --view-img --weights yolov5s.onnx
Execution result:
(py_learn2) python detect2.py --source ../../Videos/car_m.mp4 --view-img --weights yolov5s.onnx
detect2: weights=['yolov5s.onnx'], source=../../Videos/car_m.mp4, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=True, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Loading yolov5s.onnx for ONNX Runtime inference...
requirements: Ultralytics requirement ['onnxruntime-gpu'] not found, attempting AutoUpdate...
ERROR: Could not install packages due to an OSError: [WinError 5] アクセスが拒否されました。: 'C:\\Users\\izuts\\anaconda3\\envs\\py_learn2\\Lib\\site-packages\\onnxruntime\\capi\\onnxruntime_providers_shared.dll'
Consider using the `--user` option or check the permissions.

requirements: ❌ Command 'pip install --no-cache "onnxruntime-gpu" ' returned non-zero exit status 1.
Speed: 1.3ms pre-process, 29.4ms inference, 4.1ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp18
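The AutoUpdate fails only because pip cannot overwrite a DLL that is probably in use by the running process; as the log shows, inference still completes with the CPU build of onnxruntime already installed. A likely workaround (an assumption, untested here) is to install the GPU build manually from an elevated prompt:

(py_learn2) pip install onnxruntime-gpu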
(py_learn2) python detect2.py --source ../../Videos/car_m.mp4 --view-img --weights yolov5s_openvino_model
Execution result:
(py_learn2) python detect2.py --source ../../Videos/car_m.mp4 --view-img --weights yolov5s_openvino_model
detect2: weights=['yolov5s_openvino_model'], source=../../Videos/car_m.mp4, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=True, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Loading yolov5s_openvino_model for OpenVINO inference...
Speed: 1.3ms pre-process, 29.3ms inference, 3.8ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp19
(py_learn) python yolov5-test2.py
Execution result:
(py_learn) python yolov5-test2.py
Using cache found in C:\Users\<User>/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5 2024-3-13 Python-3.11.7 torch-2.2.0+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
Adding AutoShape...
Saved 2 images to runs\detect\exp13
image 1/2: 720x1280 2 persons, 2 ties
image 2/2: 1080x810 4 persons, 1 bus
Speed: 8.7ms pre-process, 30.0ms inference, 79.0ms NMS per image at shape (2, 3, 640, 640)
最初の画像からの検出
tensor([[7.42863e+02, 4.79508e+01, 1.14113e+03, 7.16857e+02, 8.80750e-01, 0.00000e+00],
        [4.42037e+02, 4.37341e+02, 4.96715e+02, 7.09926e+02, 6.87170e-01, 2.70000e+01],
        [1.25252e+02, 1.93575e+02, 7.10963e+02, 7.13103e+02, 6.41552e-01, 0.00000e+00],
        [9.82882e+02, 3.08400e+02, 1.02733e+03, 4.20228e+02, 2.62887e-01, 2.70000e+01]], device='cuda:0')
2番目の画像からの検出
tensor([[2.20872e+02, 4.07374e+02, 3.45721e+02, 8.74728e+02, 8.35223e-01, 0.00000e+00],
        [6.62591e+02, 3.86202e+02, 8.10000e+02, 8.80324e+02, 8.28926e-01, 0.00000e+00],
        [5.75802e+01, 3.97293e+02, 2.14777e+02, 9.18263e+02, 7.85060e-01, 0.00000e+00],
        [1.47090e+01, 2.22154e+02, 7.98415e+02, 7.84966e+02, 7.81528e-01, 5.00000e+00],
        [0.00000e+00, 5.53392e+02, 7.24685e+01, 8.74691e+02, 4.64727e-01, 0.00000e+00]], device='cuda:0')
全てのクラス
{0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus', 6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant', 11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag', 27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard', 32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove', 36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle', 40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl', 46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli', 51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard', 67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink', 72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors', 77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'}

The resulting images are saved under the runs/detect/exp(2, 3, 4, …) directory.
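Output like the above takes only a few lines through the Torch Hub API. The following is a hypothetical reconstruction of what yolov5-test2.py might contain (a sketch; the actual script may differ):

import torch

# Load the pretrained small model from Torch Hub (cached after the first run)
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

imgs = ["data/images/zidane.jpg", "data/images/bus.jpg"]
results = model(imgs)          # batched inference through AutoShape
results.print()                # per-image class counts and timing
results.save()                 # annotated images under runs/detect/exp*

print("最初の画像からの検出")
print(results.xyxy[0])         # columns: x1, y1, x2, y2, confidence, class ID
print("2番目の画像からの検出")
print(results.xyxy[1])
print("全てのクラス")
print(model.names)             # {0: 'person', 1: 'bicycle', ...}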
coco.names    ← English version
coco.names_jp ← Japanese version
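Both label files hold one class name per line in class-ID order, so they can be read with a sketch like this (assumption: UTF-8 text, one name per line):

from pathlib import Path

def load_labels(path):
    # One class name per line; the line number is the class ID
    return [line.strip() for line in Path(path).read_text(encoding="utf-8").splitlines() if line.strip()]

labels = load_labels("coco.names_jp")
print(len(labels), labels[0])   # 80 classes; index 0 is 「人」(person)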
Option | Default | Meaning |
-i, --image | '../../Videos/car_m.mp4' | input source path, or camera (cam/cam0–cam9) |
-y, --yolov5 | 'ultralytics/yolov5' | yolov5 directory path (path to a local yolov5 clone when offline) |
-m, --models | 'yolov5s' | model name (path to a model file when local) ※1 |
-l, --labels | 'coco.names_jp' | label file path (coco.names, coco.names_jp) |
-c, --conf | 0.25 | object detection confidence threshold |
-t, --title | 'y' | show title (y/n) |
-s, --speed | 'y' | show speed (y/n) |
-o, --out | 'non' | output save path <path/filename> ※2 |
-cpu | - | CPU flag (forces CPU operation if specified) |
-y ultralytics/yolov5 ← online (Torch Hub) <default>
-y ./ ← offline (local)
※ On the first launch the repository is downloaded to the cache; afterwards it runs from the cache.
-m yolov5s ← online (Torch Hub) <default>
-m ./test/yolov5s.pt ← offline (local)
※ If the model is not at the specified location, it is downloaded automatically on the first run (see the sketch below).
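In terms of the Torch Hub API, the two modes correspond roughly to the following calls (a sketch; the local weight path is just the example above):

import torch

# Online: entry point resolved through Torch Hub (cached under ~/.cache/torch/hub)
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

# Offline: a local clone of the repository plus a local weight file
model = torch.hub.load("./", "custom", path="./test/yolov5s.pt", source="local")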
(py_learn) python detect2_yolov5.py
Execution result:
(py_learn) python detect2_yolov5.py
Object detection YoloV5 in PyTorch Ver. 0.05: Starting application...
OpenCV virsion : 4.9.0
 - Image File   : ../../Videos/car_m.mp4
 - YOLO v5      : ultralytics/yolov5
 - Pretrained   : yolov5s
 - Confidence lv: 0.25
 - Label file   : coco.names_jp
 - Program Title: y
 - Speed flag   : y
 - Processed out: non
 - Use device   : cuda:0
Using cache found in C:\Users\izuts/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5 2024-3-13 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
Adding AutoShape...

 FPS average: 30.90

 Finished.
(py_learn) python detect2_yolov5.py -y ./
Execution result:
(py_learn) python detect2_yolov5.py -y ./
Object detection YoloV5 in PyTorch Ver. 0.05: Starting application...
OpenCV virsion : 4.9.0
 - Image File   : ../../Videos/car_m.mp4
 - YOLO v5      : ./
 - Pretrained   : yolov5s
 - Confidence lv: 0.25
 - Label file   : coco.names_jp
 - Program Title: y
 - Speed flag   : y
 - Processed out: non
 - Use device   : cuda:0
YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
Adding AutoShape...

 FPS average: 20.80

 Finished.
Machine / OS | Model | car_m.mp4 | car1_m.mp4 | car2_m.mp4 |
 | | GPU | CPU | GPU | CPU | GPU | CPU |
HP ENVY Windows 11 | yolov5n | 32.2 | 15.9 | 49.7 | 17.9 | 56.0 | 20.7 |
yolov5s | 31.3 | 12.7 | 38.7 | 14.8 | 48.9 | 15.3 | |
yolov5m | 28.8 | 8.7 | 31.8 | 9.4 | 42.4 | 9.5 | |
yolov5l | 25.1 | 5.7 | 31.5 | 5.9 | 32.0 | 6.0 | |
yolov5x | 23.8 | 3.9 | 30.8 | 4.0 | 31.8 | 4.1 | |
HP ENVY Ubuntu 22.04LTS | yolov5n | 64.0 | 30.0 | 86.0 | 34.0 | 91.0 | 37.0 |
yolov5s | 53.9 | 21.1 | 72.3 | 25.5 | 87.5 | 27.1 | |
yolov5m | 49.8 | 13.0 | 63.2 | 14.7 | 78.0 | 16.0 | |
yolov5l | 44.3 | 8.3 | 54.7 | 8.1 | 70.7 | 8.9 | |
yolov5x | 37.7 | 5.4 | 46.4 | 4.9 | 57.4 | 5.1 | |
HP ELITE Windows 10 | yolov5n | 27.3 | 9.4 | 39.4 | 10.6 | 48.3 | 11.1 |
yolov5s | 19.6 | 5.4 | 27.9 | 5.9 | 30.2 | 6.3 | |
yolov5m | 15.4 | 2.9 | 18.5 | 3.1 | 22.3 | 3.2 | |
yolov5l | 11.1 | 1.7 | 12.7 | 1.7 | 14.8 | 1.8 | |
yolov5x | 7.6 | 1.0 | 8.3 | 1.0 | 9.2 | 1.4 | |
DELL Latitude Ubuntu 20.04LTS | yolov5n | - | 10.8 | - | 13.7 | - | 14.7 |
yolov5s | - | 8.9 | - | 8.6 | - | 9.4 | |
yolov5m | - | 3.9 | - | 4.2 | - | 4.5 | |
yolov5l | - | 2.3 | - | 2.4 | - | 2.7 | |
yolov5x | - | 1.5 | - | 1.5 | - | 1.6 |
(py_learn) python detect2_yolov5.py -i ../../Videos/car_m.mp4 -m yolov5n
(py_learn) python detect2_yolov5.py -i ../../Videos/car_m.mp4 -m yolov5n -cpu
(py_learn) python detect2_yolov5.py -i ../../Videos/car1_m.mp4 -m yolov5n
(py_learn) python detect2_yolov5.py -i ../../Videos/car1_m.mp4 -m yolov5n -cpu
(py_learn) python detect2_yolov5.py -i ../../Videos/car2_m.mp4 -m yolov5n
(py_learn) python detect2_yolov5.py -i ../../Videos/car2_m.mp4 -m yolov5n -cpu
Machine | OS | CPU | GPU |
HP ENVY Desktop TE02-1097jp | Windows11/Ubuntu22.04LTS | 13th Gen Core™ i9-13900 | GeForce RTX 4070 Ti 12GB |
HP EliteDesk 800 G2 SFF | Windows10 | 6th Gen Core™ i7-6700 | GeForce GTX 1050 Ti 4GB |
DELL Latitude 7520 NoteBook | Ubuntu20.04LTS | 11th Gen Core™ i7-1185G7 | - |
(py_learn) git clone https://github.com/violet17/yolov5_demo.git
Execution log:
(py_learn) git clone https://github.com/violet17/yolov5_demo.git
Cloning into 'yolov5_demo'...
remote: Enumerating objects: 31, done.
remote: Counting objects: 100% (31/31), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 31 (delta 13), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (31/31), 59.87 KiB | 5.99 MiB/s, done.
Resolving deltas: 100% (13/13), done.
Running the original API 2.0 demo program "yolov5_demo_sync_ov2023.py":
(py_learn2) python yolov5_demo_sync_ov2023.py -i ../yolov5/data/images/zidane.jpg -m ../yolov5/yolov5s_openvino_model/yolov5s.xml
(py_learn2) python yolov5_demo_sync_ov2023.py -i ../yolov5/data/images/zidane.jpg -m ../yolov5/yolov5s_openvino_model/yolov5s.xml
[ INFO ] Creating OpenVINO Runtime Core...
[ INFO ] Reading the model: ../yolov5/yolov5s_openvino_model/yolov5s.xml
[ INFO ] Preparing inputs
*********** [1,3,640,640]
--------- ../yolov5/data/images/zidane.jpg
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference...
[ INFO ] classes : 80
[ INFO ] num     : 3
[ INFO ] coords  : 4
[ INFO ] anchors : [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0]
Traceback (most recent call last):
  File "C:\anaconda_win\workspace_pylearn\yolov5_demo\yolov5_demo_sync_ov2023.py", line 349, in <module>
    sys.exit(main() or 0)
             ^^^^^^
  File "C:\anaconda_win\workspace_pylearn\yolov5_demo\yolov5_demo_sync_ov2023.py", line 281, in main
    objects += parse_yolo_region(out_blob, in_frame.shape[2:],
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\anaconda_win\workspace_pylearn\yolov5_demo\yolov5_demo_sync_ov2023.py", line 153, in parse_yolo_region
    out_blob_n, out_blob_c, out_blob_h, out_blob_w = blob.shape
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 4, got 3)

※ It appears the demo cannot handle the trained model converted above.
(py_learn) python yolov5_demo_sync_ov2023.py -i ../yolov5/data/images/zidane.jpg -m yolov5s_v3.xml -show
Execution result:
(py_learn) python yolov5_demo_sync_ov2023.py -i ../yolov5/data/images/zidane.jpg -m yolov5s_v3.xml -show
[ INFO ] Creating OpenVINO Runtime Core...
[ INFO ] Reading the model: yolov5s_v3.xml
[ INFO ] Preparing inputs
*********** [1,3,640,640]
--------- ../yolov5/data/images/zidane.jpg
[ INFO ] Loading model to the plugin
[ INFO ] Starting inference...
[ INFO ] classes : 80
[ INFO ] num     : 3
[ INFO ] coords  : 4
[ INFO ] anchors : [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0]
[ INFO ] classes : 80
[ INFO ] num     : 3
[ INFO ] coords  : 4
[ INFO ] anchors : [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0]
[ INFO ] classes : 80
[ INFO ] num     : 3
[ INFO ] coords  : 4
[ INFO ] anchors : [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 326.0]
(720, 1280)

※ With the older trained model (V3) it works without problems.
(py_learn) cp ../yolov5/coco.names ./
(py_learn) cp ../yolov5/coco.names_jp ./
Running the modified "yolov5_demo_sync_ov2023x.py":
(py_learn) python yolov5_demo_sync_ov2023x.py -i ../yolov5/data/images/zidane.jpg -r
Execution result:
(py_learn) python yolov5_demo_sync_ov2023x.py -i ../yolov5/data/images/zidane.jpg -r
--- YOLO V5 OpenVINO(API 2.0) demoprogram Ver 0.01 ---
OpenCV: 4.9.0
OpenVINO inference_engine: 2024.0.0-14509-34caeefd078-releases/2024/0
Creating OpenVINO Runtime Core...
Reading the model: yolov5s_v3.xml
Label file : coco.names_jp
Input source: ../yolov5/data/images/zidane.jpg
Starting inference...
[ INFO ] Detected boxes for batch 1:
[ INFO ] Class ID | Confidence | XMIN | YMIN | XMAX | YMAX | COLOR
[ INFO ] 人 | 0.873057 | 747 | 39 | 1148 | 711 | (0, 80, 0)
[ INFO ] 人 | 0.816089 | 116 | 197 | 1003 | 711 | (0, 80, 0)
[ INFO ] ネクタイ | 0.778782 | 422 | 430 | 517 | 719 | (128, 0, 128)

 FPS average: 11.80

 Finished.

Running with video input:
(py_learn) python yolov5_demo_sync_ov2023x.py -i ../../Videos/car1_m.mp4
Execution result:
(py_learn) python yolov5_demo_sync_ov2023x.py -i ../../Videos/car1_m.mp4
--- YOLO V5 OpenVINO(API 2.0) demoprogram Ver 0.01 ---
OpenCV: 4.9.0
OpenVINO inference_engine: 2024.0.0-14509-34caeefd078-releases/2024/0
Creating OpenVINO Runtime Core...
Reading the model: yolov5s_v3.xml
Label file : coco.names_jp
Input source: ../../Videos/car1_m.mp4
Starting inference...

 FPS average: 9.20

 Finished.

Running with camera input:
(py_learn) python yolov5_demo_sync_ov2023x.py
Execution result:
(py_learn) python yolov5_demo_sync_ov2023x.py
--- YOLO V5 OpenVINO(API 2.0) demoprogram Ver 0.01 ---
OpenCV: 4.9.0
OpenVINO inference_engine: 2024.0.0-14509-34caeefd078-releases/2024/0
Creating OpenVINO Runtime Core...
Reading the model: yolov5s_v3.xml
Label file : coco.names_jp
Input source: 0
Starting inference...

 FPS average: 10.40

 Finished.
Option | Default | Meaning |
-i, --input | 'cam' | input source path, or cam/cam0/cam1 |
-m, --mode | 'yolov5s_v3.xml' | trained model path |
-d, --device | 'CPU' | inference device (CPU, GPU, FPGA, HDDL or MYRIAD) |
--labels | 'coco.names_jp' | label file path (coco.names, coco.names_jp) |
-show | - | display-off flag (no screen output if specified) |
-r, --raw_output_message | - | message output flag |
-x, --debug_message | - | debug message output flag |
Investigate why "yolov5_demo_sync_ov2023x.py" raises the error
 :
# Import processing
import object_check     # 2024/03/20
 :
    :
    objects = list()
    for idx in range(len(results)):
        out_blob = results[idx]
        layer_params = YoloParams(side=out_blob.shape[2])
        # Object check (DEBUG)      # 2024/03/20
        if args.debug_message:
            object_check.chk_object(results, 'results')
            object_check.chk_object(out_blob, 'out_blob')
    :

Objects obtained from inference with the trained model (V3) "yolov5s_v3.xml":
(py_learn) python yolov5_demo_sync_ov2023x.py -i ../../Images/cat.jpg -x
'results' object:
{<ConstOutput: names[668, Conv_487] shape[1,255,20,20] type: f32>: array([[[[ ..., ]]]], dtype=float32),
 <ConstOutput: names[648, Conv_471] shape[1,255,40,40] type: f32>: array([[[[ ..., ]]]], dtype=float32),
 <ConstOutput: names[628, Conv_455] shape[1,255,80,80] type: f32>: array([[[[ ..., ]]]], dtype=float32)}

Object obtained from inference with the trained model (V7) "yolov5s.xml":
(py_learn) python yolov5_demo_sync_ov2023x.py -i ../../Images/cat.jpg -m ../yolov5/yolov5s.xml -x
'results' object:
{<ConstOutput: names[output0] shape[1,25200,85] type: f32>: array([[[ ..., ]]], dtype=float32)}
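The dumps make the cause clear: the V3 model returns three 4-D region feature maps ((1, 255, 20, 20) and so on) that parse_yolo_region can unpack into N, C, H, W, while the v7.0 export returns a single already-decoded (1, 25200, 85) tensor, so the unpack fails with "expected 4, got 3". A guard along these lines (a hypothetical sketch) would distinguish the two layouts:

# results: dict of {ConstOutput: ndarray} as returned by the OpenVINO inference above
for output, blob in results.items():
    if blob.ndim == 4:
        # V3-style region output (N, C, H, W): decode with anchors via parse_yolo_region
        n, c, h, w = blob.shape
    else:
        # v7.0-style export (1, 25200, 85): already xywh + objectness + class scores
        print("post-processed output:", blob.shape)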
Reference site: → Object Detection & YOLOs
(py_learn) mo --input_model yolov5s.onnx --model_name yolov5s_v7 -s 255 --reverse_input_channels --output '/model.24/m.0/Conv','/model.24/m.1/Conv','/model.24/m.2/Conv'
Execution result:
(py_learn) mo --input_model yolov5s.onnx --model_name yolov5s_v7 -s 255 --reverse_input_channels --output '/model.24/m.0/Conv','/model.24/m.1/Conv','/model.24/m.2/Conv'
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API. Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: C:\anaconda_win\workspace_pylearn\yolov5\yolov5s_v7.xml
[ SUCCESS ] BIN file: C:\anaconda_win\workspace_pylearn\yolov5\yolov5s_v7.bin
(py_learn) python yolov5_demo_sync_ov2023x.py -i ../../Images/cat.jpg -m ../yolov5/yolov5s_v7.xml
Execution result:
(py_learn) python yolov5_demo_sync_ov2023x.py -i ../../Images/cat.jpg -m ../yolov5/yolov5s_v7.xml
--- YOLO V5 OpenVINO(API 2.0) demoprogram Ver 0.01 ---
OpenCV: 4.9.0
OpenVINO inference_engine: 2024.0.0-14509-34caeefd078-releases/2024/0
Creating OpenVINO Runtime Core...
Reading the model: ../yolov5/yolov5s_v7.xml
Label file : coco.names_jp
Input source: ../../Images/cat.jpg
Starting inference...

 FPS average: 11.90

 Finished.
(py_learn) PS > cd /anaconda_win/workspace_pylearn/yolov5
For Linux:
(py_learn) $ cd ~/workspace_pylearn/yolov5
(py_learn) python yolov5_OV2.py
Command-line arguments:
Option | Default | Meaning |
-h, --help | - | show help |
-i, --input | cam | camera (cam/cam0–cam9) or a video/still-image file ※ |
-m, --model | yolov5s_v7.xml | trained model (IR) |
-d, --device | CPU | device to use (CPU/GPU/MYRIAD) |
-l, --labels | coco.names_jp | label file |
-t, --prob_threshold | 0.5 | class detection threshold (lower values detect more objects but also more noise) |
-iout, --iou_threshold | 0.4 | Intersection over Union (overlap ratio between detected regions; higher values mean more overlap) |
--titlef | y | show title (y/n) |
--speedf | y | show speed measurement (y/n) |
-o, --out | non | file name for saving the processed output |
(py_learn) python yolov5_OV2.py -h
usage: yolov5_OV2.py [-h] [-i INPUT] [-m MODEL] [-d DEVICE] [--labels LABELS] [-t PROB_THRESHOLD]
                     [-iout IOU_THRESHOLD] [--titlef TITLE] [--speedf SPEED] [-o IMAGE_OUT]

Options:
  -h, --help            Show this help message and exit.
  -i INPUT, --input INPUT
                        Required. Path to an image/video file. (Specify 'cam','cam0','cam1')
  -m MODEL, --model MODEL
                        Required. Path to an .xml file with a trained model.
  -d DEVICE, --device DEVICE
                        Optional. Specify the target device to infer on; CPU, GPU, FPGA, HDDL or
                        MYRIAD is acceptable. The sample will look for a suitable plugin for device
                        specified. Default value is CPU
  --labels LABELS       Optional. Labels mapping file
  -t PROB_THRESHOLD, --prob_threshold PROB_THRESHOLD
                        Optional. Probability threshold for detections filtering
  -iout IOU_THRESHOLD, --iou_threshold IOU_THRESHOLD
                        Optional. Intersection over union threshold for overlapping detections filtering
  --titlef TITLE        Program title flag.(y/n) Default value is 'y'
  --speedf SPEED        Speed display flag.(y/n) Default calue is 'y'
  -o IMAGE_OUT, --out IMAGE_OUT
                        Output image file path. Default value is 'non'
(py_learn) python yolov5_OV2.py
Execution result:
(py_learn) python yolov5_OV2.py
YOLO V5 in OpenVINO(API 2.0) Ver 0.01: Starting application...
OpenVINO inference_engine: 2024.0.0-14509-34caeefd078-releases/2024/0
OpenCV virsion : 4.9.0
 - Input source   : cam
 - Pretrained     : yolov5s_v7.xml
 - Label file     : coco.names_jp
 - Use device     : CPU
 - prob threshold : 0.5
 - iou threshold  : 0.4
 - Output path    : non
 - Program Title  : y
 - Speed flag     : y

 FPS average: 5.80

 Finished.

To use camera device 1:
(py_learn) python yolov5_OV2.py -i cam1
To specify a video file:
(py_learn) python yolov5_OV2.py -i ../../Videos/car1_m.mp4
(py_learn) python yolov5_OV2.py -i ../../Videos/car2_m.mp4
To specify a still image file:
(py_learn) python yolov5_OV2.py -i ../../Images/desk-image.jpg
(py_learn) python yolov5_OV2.py -i ../../Images/car-person.jpg
(py_learn) python yolov5_OV2.py -i ../../Images/bus.jpg
(py_learn) python yolov5_OV2.py -i ../../Images/zidane.jpg
Machine / OS | car_m.mp4 | car1_m.mp4 | car2_m.mp4 |
 | GPU | CPU | GPU | CPU | GPU | CPU |
HP ENVY Windows 11 | 11.5 | 11.9 | 12.7 | 12.4 | 14.2 | 12.9 |
HP ENVY Ubuntu 22.04LTS | 10.6 | 10.3 | 11.3 | 10.5 | 12.4 | 11.0 |
DELL XPS Windows 11 | 10.7 | 7.4 | 12.0 | 7.9 | 12.7 | 8.0 |
DELL Latitude Ubuntu 20.04LTS | 9.7 | 6.5 | 10.7 | 6.8 | 11.2 | 7.1 |
HP ELITE Windows 10 | 6.6 | 4.5 | 7.2 | 4.7 | 7.7 | 5.0 |
(py_learn) python yolov5_OV2.py -i ../../Videos/car_m.mp4
(py_learn) python yolov5_OV2.py -i ../../Videos/car_m.mp4 -d GPU
(py_learn) python yolov5_OV2.py -i ../../Videos/car1_m.mp4
(py_learn) python yolov5_OV2.py -i ../../Videos/car1_m.mp4 -d GPU
(py_learn) python yolov5_OV2.py -i ../../Videos/car2_m.mp4
(py_learn) python yolov5_OV2.py -i ../../Videos/car2_m.mp4 -d GPU
Machine | OS | CPU | GPU |
HP ENVY Desktop TE02-1097jp | Windows11/Ubuntu22.04LTS | 13th Gen Core™ i9-13900 | UHD Graphics 770 |
DELL XPS Plus 9320 NoteBook | Windows11 | 12th Gen Core™ i7-1260P | Iris® Xe Graphics |
DELL Latitude 7520 NoteBook | Ubuntu20.04LTS | 11th Gen Core™ i7-1185G7 | Iris® Xe Graphics |
HP EliteDesk 800 G2 SFF | Windows10 | 6th Gen Core™ i7-6700 | HD Graphics 530 |
(py_test) python detect2_yolov5.py -i ../../Videos/car_m.mp4 -m yolov5x
 :
Using cache found in /home/USER/.cache/torch/hub/ultralytics_yolov5_master
YOLOv5 🚀 2021-9-16 torch 2.2.1+cpu CPU

Fusing layers...
/home/USER/anaconda3/envs/py_learn/lib/python3.11/site-packages/torch/functional.py:507: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3549.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Model Summary: 444 layers, 86705005 parameters, 0 gradients
Adding AutoShape...
Traceback (most recent call last):
 :
1. Delete the Torch Hub cache directory:
/home/USER/.cache/torch/hub/ultralytics_yolov5_master

2. Run it again.
[Occurred on Ubuntu 20.04 LTS]
(py_test) python detect2.py --source 0
Traceback (most recent call last):
  File "/home/mizutu/workspace_pylearn/yolov5/detect2.py", line 58, in <module>
 :
ImportError: /lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /home/mizutu/anaconda3/envs/py_test/lib/python3.11/site-packages/cv2/python-3.11/cv2.cpython-311-x86_64-linux-gnu.so)
1. Check the libstdc++ libraries in the conda environment and on the system:
$ ls -l ~/anaconda3/envs/py_test/lib/
 :
lrwxrwxrwx 1 mizutu mizutu       19  3月 16 17:01 libstdc++.so.6 -> libstdc++.so.6.0.29
-rwxrwxr-x 4 mizutu mizutu 17981480  6月  1  2022 libstdc++.so.6.0.29
 :
$ ls -l /lib/x86_64-linux-gnu/
 :
lrwxrwxrwx 1 root root      19  7月  9  2023 libstdc++.so.6 -> libstdc++.so.6.0.28
-rw-r--r-- 1 root root 1956992  7月  9  2023 libstdc++.so.6.0.28
 :

2. Copy "libstdc++.so.6.0.29" into the system directory and recreate the link:
$ sudo cp /home/mizutu/anaconda3/envs/py_test/lib/libstdc++.so.6.0.29 /lib/x86_64-linux-gnu
$ cd /lib/x86_64-linux-gnu
$ sudo ln -sb libstdc++.so.6.0.29 libstdc++.so.6
$ sudo chmod 644 libstdc++.so.6.0.29

3. Verify the files:
$ ls -l /lib/x86_64-linux-gnu/
 :
lrwxrwxrwx 1 root root       19  3月 17 05:37 libstdc++.so.6 -> libstdc++.so.6.0.29
-rw-r--r-- 1 root root  1956992  7月  9  2023 libstdc++.so.6.0.28
-rw-r--r-- 1 root root 17981480  3月 17 05:34 libstdc++.so.6.0.29
lrwxrwxrwx 1 root root       19  7月  9  2023 libstdc++.so.6~ -> libstdc++.so.6.0.28
 :
"This undoes exactly what a virtual environment like anaconda is meant to achieve: not having to replace system libraries in order to satisfy dependencies."