This page reviews "YOLO V5" as used in Chapter 04 of "PyTorch ではじめる AI開発".
A deeper look at training with "YOLO V5"
update
├─workspace_py37
│  └─mylib
└─workspace_pylearn
    └─yolov5
        ├─data
        │  ├─images
        │  ├─janken4_dataset    ※1 training datasets
        │  ├─mask_dataset       ※1
        │  ├─ts0_dataset        ※1
        │  └─ts_dataset         ※1
        └─runs
            └─train
                ├─janken4_yolov5s     ※2 training results
                ├─mask_yolov5s        ※2
                ├─ts0_yolov5s_ep30    ※2
                └─ts_yolov5s_ep30     ※2
※1 Datasets produced by carrying out the "preparation" steps of the projects below.
Command option | Argument | Default | Meaning |
--weights | str | yolov7s.pt | pretrained weights file |
--cfg | str | | YOLO model configuration (choose the model variant from the cfg folder) |
--data | str | data/coco.yaml | path to the YAML file that describes the training data |
--hyp | str | data/hyp.scratch.p5.yaml | path to the hyperparameter file |
--epochs | int | 300 | number of training epochs |
--batch-size | int | 16 | batch size |
--img-size | list | [640, 640] | training image size (e.g. --img-size 640 640) |
--rect | none | - | rectangular training |
--resume | none | - | resume an interrupted training run from where it stopped |
--nosave | none | - | save only the final checkpoint |
--notest | none | - | only test final epoch |
--noautoanchor | none | - | disable the automatic anchor check |
--evolve | none | - | evolve hyperparameters |
--bucket | str | | gsutil bucket |
--cache-images | str | ram | where to cache images (ram or disk) |
--image-weights | none | - | use weighted image selection for training |
--device | str | | processor to use (0 or 0,1,2,3 or cpu; cuda is used if omitted) |
--multi-scale | none | - | vary the image size by ±50% |
--single-cls | none | - | train multi-class data as single-class |
--adam | none | - | use the Adam optimizer (SGD is used if omitted) |
--sync-bn | none | - | use SyncBatchNorm (available only in DDP mode) |
--local_rank | int | -1 | DDP parameter, do not modify |
--workers | int | 8 | maximum number of dataloader workers |
--project | str | runs/train | folder in which training results are recorded |
--name | str | project/name | folder name under the results folder (incremented for each run) |
--entity | str | None | W&B entity |
--exist-ok | none | - | overwrite the existing results folder if specified |
--quad | none | - | quad dataloader |
--linear-lr | none | - | linear LR |
--label-smoothing | float | 0.0 | label smoothing epsilon |
--upload_dataset | none | - | upload dataset as W&B artifact table |
--bbox_interval | int | -1 | set the bounding-box image logging interval for W&B |
--save_period | int | -1 | save a checkpoint every given number of epochs (-1 = disabled) |
--artifact_alias | str | latest | version of dataset artifact to be used |
--freeze | list | [0] | layers not to train (backbone of yolov7=50, first3=0 1 2) |
--v5-metric | none | - | assume maximum recall as 1.0 in AP calculation |
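For reference, a typical fine-tuning run combining several of these options might look like the following (the dataset YAML, image size, epoch count and run name are example values, matching the YOLOv5 train.py used later on this page):

(py_learn) python train.py --data data/ts_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30 --batch-size 16 --workers 8 --device 0 --project runs/train --name ts_yolov5s_ep30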
80, prohibitory
81, danger
82, mandatory
83, other
(py_learn) python data_select1.py
・Execution log of "data_select1.py"
(py_learn) python data_select1.py
   :
archive/ts/ts/00892.txt が見つかりませんでした
filename:  archive/ts/ts/00893.txt
83 0.6139705882352942 0.62 0.020588235294117647 0.035
filename:  archive/ts/ts/00894.txt
80 0.2963235294117647 0.67625 0.020588235294117647 0.035
80 0.8180147058823529 0.685625 0.028676470588235293 0.04875
83 0.8158088235294118 0.628125 0.03602941176470588 0.06125
filename:  archive/ts/ts/00895.txt
80 0.15845588235294117 0.630625 0.03308823529411765 0.05625
80 0.7279411764705882 0.5525 0.03529411764705882 0.06
filename:  archive/ts/ts/00896.txt
80 0.6080882352941176 0.53875 0.027941176470588237 0.0475
filename:  archive/ts/ts/00897.txt
83 0.6 0.6725 0.01764705882352941 0.03
filename:  archive/ts/ts/00898.txt
80 0.24926470588235294 0.65 0.023529411764705882 0.04
80 0.6125 0.6575 0.023529411764705882 0.04
filename:  archive/ts/ts/00899.txt
81 0.65625 0.63625 0.04191176470588235 0.0625
・Source file
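The full source of data_select1.py is attached on the original page and is not reproduced here. As a rough sketch only, the core idea suggested by the log above (shift the four Traffic Signs class IDs 0-3 up by 80 so that they follow the 80 COCO classes, and report label files that are missing) could be written like this; the folder layout and the assumption that the labels still carry their original 0-3 IDs are mine:

import glob
import os

OFFSET = 80                     # Traffic Signs classes 0-3 -> 80-83
SRC = 'archive/ts/ts'           # assumed location of the images and .txt labels

for img in sorted(glob.glob(os.path.join(SRC, '*.jpg'))):
    txt = os.path.splitext(img)[0] + '.txt'
    if not os.path.exists(txt):
        print(f'{txt} が見つかりませんでした')   # image without a label file
        continue
    print('filename: ', txt)
    remapped = []
    with open(txt) as f:
        for line in f:
            cls, *box = line.split()
            cls = int(cls) + OFFSET             # remap the class ID
            print(cls, *box)
            remapped.append(' '.join([str(cls), *box]))
    with open(txt, 'w') as f:
        f.write('\n'.join(remapped) + '\n')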
(py_learn) python data_select2.py
・Execution log of "data_select2.py"
(py_learn) python data_select2.py
./archive/ts/ts/00000.jpg を変換
./archive/ts/ts/00001.jpg を変換
./archive/ts/ts/00002.jpg を変換
./archive/ts/ts/00003.jpg を変換
   :
./archive/ts/ts/00896.jpg を変換
./archive/ts/ts/00897.jpg を変換
./archive/ts/ts/00898.jpg を変換
./archive/ts/ts/00899.jpg を変換
・Source file
mkdir -p ts_dataset/images/train
mkdir -p ts_dataset/images/val
mkdir -p ts_dataset/labels/train
mkdir -p ts_dataset/labels/val
・Result
(py_learn) PS > tree
yolov5_test/
├─archive
│  └─ts
│      └─ts
└─ts_dataset
    ├─images
    │  ├─train
    │  └─val
    └─labels
        ├─train
        └─val
move archive/ts/ts/*1.txt ts_dataset/labels/val
move archive/ts/ts/*5.txt ts_dataset/labels/val
move archive/ts/ts/*1.jpg ts_dataset/images/val
move archive/ts/ts/*5.jpg ts_dataset/images/val
・Move the remaining files for training (741 - 150 = 591 files)
move archive/ts/ts/*.txt ts_dataset/labels/train
move archive/ts/ts/*.jpg ts_dataset/images/train
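The same train/val split can also be scripted instead of using move with wildcards; a minimal Python sketch (assuming the directory layout above) that sends every file whose number ends in 1 or 5 to the val folders and everything else to train:

import glob
import os
import shutil

SRC = 'archive/ts/ts'

def dest_dir(path):
    """Return the ts_dataset sub-folder a given file should go to."""
    stem, ext = os.path.splitext(os.path.basename(path))
    split = 'val' if stem.endswith(('1', '5')) else 'train'
    kind = 'labels' if ext == '.txt' else 'images'
    return os.path.join('ts_dataset', kind, split)

for path in glob.glob(os.path.join(SRC, '*.*')):
    target = dest_dir(path)
    os.makedirs(target, exist_ok=True)
    shutil.move(path, target)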
train: data/ts_dataset/images/train
val: data/ts_dataset/images/val
nc: 84
names:
  0: person
  1: bicycle
  2: car
    :
  79: toothbrush
  80: prohibitory
  81: danger
  82: mandatory
  83: other
(py_learn) python train.py --data data/ts_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30 --name ts_yolov5s_ep30
・If not using a GPU, run the following command instead:
(py_learn) python train.py --data data/ts_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30 --name ts_yolov5s_ep30 --device cpu
(py_learn) python train.py --data data/ts_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30 --name ts_yolov5s_ep30 train: weights=yolov5s.pt, cfg=, data=data/ts_dataset/ts.yaml, hyp=data\hyps\hyp.scratch-low.yaml, epochs=30, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data\hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train, name=ts_yolov5s_ep30, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False remote: Enumerating objects: 24, done. remote: Counting objects: 100% (24/24), done. remote: Compressing objects: 100% (24/24), done. remote: Total 24 (delta 13), reused 1 (delta 0), pack-reused 0 Unpacking objects: 100% (24/24), 9.93 KiB | 462.00 KiB/s, done. From https://github.com/ultralytics/yolov5 d07d0cf6..cf8b67b7 master -> origin/master github: YOLOv5 is out of date by 9 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update. YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0 Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5 runs in Comet TensorBoard: Start with 'tensorboard --logdir runs\train', view at http://localhost:6006/ Overriding model.yaml nc=80 with nc=84 from n params module arguments 0 -1 1 3520 models.common.Conv [3, 32, 6, 2, 2] 1 -1 1 18560 models.common.Conv [32, 64, 3, 2] 2 -1 1 18816 models.common.C3 [64, 64, 1] 3 -1 1 73984 models.common.Conv [64, 128, 3, 2] : Starting training for 30 epochs... Epoch GPU_mem box_loss obj_loss cls_loss Instances Size 0/29 3.42G 0.1287 0.02085 0.1002 46 640: 100%|██████████| 37/37 [00:04<00:00, 8. Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 5/5 [00:00<0 all 150 230 0.00304 0.0611 0.00226 0.000532 : Epoch GPU_mem box_loss obj_loss cls_loss Instances Size 29/29 3.65G 0.02481 0.007517 0.01037 49 640: 100%|██████████| 37/37 [00:03<00:00, 11. Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 5/5 [00:00<0 all 150 230 0.912 0.884 0.933 0.684 30 epochs completed in 0.036 hours. Optimizer stripped from runs\train\ts_yolov5s_ep30\weights\last.pt, 14.8MB Optimizer stripped from runs\train\ts_yolov5s_ep30\weights\best.pt, 14.8MB Validating runs\train\ts_yolov5s_ep30\weights\best.pt... Fusing layers... Model summary: 157 layers, 7236673 parameters, 0 gradients, 16.5 GFLOPs Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 5/5 [00:01<0 all 150 230 0.905 0.89 0.932 0.689 prohibitory 150 108 0.954 0.953 0.98 0.763 danger 150 38 0.934 0.921 0.959 0.684 mandatory 150 35 0.876 0.81 0.9 0.687 other 150 49 0.854 0.878 0.89 0.621 Results saved to runs\train\ts_yolov5s_ep30
(py_learn) ls runs/train/ts_yolov5s_ep30
Mode    LastWriteTime      Length  Name
----    -------------      ------  ----
d-----  2024/04/15  9:20           weights
-a----  2024/04/15  9:23   558388  confusion_matrix.png
-a----  2024/04/15  9:23   226814  F1_curve.png
-a----  2024/04/15  9:19      401  hyp.yaml
-a----  2024/04/15  9:20   126318  labels.jpg
-a----  2024/04/15  9:20   223809  labels_correlogram.jpg
-a----  2024/04/15  9:19     1208  opt.yaml
-a----  2024/04/15  9:23   124885  PR_curve.png
-a----  2024/04/15  9:23   172410  P_curve.png
-a----  2024/04/15  9:23     9145  results.csv
-a----  2024/04/15  9:23   289059  results.png
-a----  2024/04/15  9:23   175122  R_curve.png
-a----  2024/04/15  9:20   501238  train_batch0.jpg
-a----  2024/04/15  9:20   483837  train_batch1.jpg
-a----  2024/04/15  9:20   496065  train_batch2.jpg
-a----  2024/04/15  9:23   448059  val_batch0_labels.jpg
-a----  2024/04/15  9:23   451714  val_batch0_pred.jpg
-a----  2024/04/15  9:23   447649  val_batch1_labels.jpg
-a----  2024/04/15  9:23   453702  val_batch1_pred.jpg
-a----  2024/04/15  9:23   389031  val_batch2_labels.jpg
-a----  2024/04/15  9:23   395697  val_batch2_pred.jpg
・The trained model is saved under "runs/train/exp*/weights"; the evaluation metrics under "runs/train/exp*/".
(py_learn) python detect2.py --weights ./runs/train/ts_yolov5s_ep30/weights/best.pt --source data/ts_dataset/images/val/00001.jpg
・Execution log (results are saved to "runs/detect/exp*"; * is incremented each run)
(py_learn) python detect2.py --weights ./runs/train/ts_yolov5s_ep30/weights/best.pt --source data/ts_dataset/images/val/00001.jpg detect2: weights=['./runs/train/ts_yolov5s_ep30/weights/best.pt'], source=data/ts_dataset/images/val/00001.jpg, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1 YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7236673 parameters, 0 gradients, 16.5 GFLOPs Speed: 1.0ms pre-process, 52.0ms inference, 46.1ms NMS per image at shape (1, 3, 640, 640) Results saved to runs\detect\exp25
(py_learn) python detect2.py --weights ./runs/train/ts_yolov5s_ep30/weights/best.pt --source data/images/traffic-sign-to-test.mp4
・Execution log (results are saved to "runs/detect/exp*"; * is incremented each run)
(py_learn) python detect2.py --weights ./runs/train/ts_yolov5s_ep30/weights/best.pt --source data/images/traffic-sign-to-test.mp4 detect2: weights=['./runs/train/ts_yolov5s_ep30/weights/best.pt'], source=data/images/traffic-sign-to-test.mp4, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1 YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7236673 parameters, 0 gradients, 16.5 GFLOPs Speed: 0.2ms pre-process, 5.1ms inference, 5.0ms NMS per image at shape (1, 3, 640, 640) Results saved to runs\detect\exp32
人
自転車
車
  :
ヘアドライヤー
歯ブラシ
標識-禁止
標識-危険
標識-必須
標識-その他
・"ts_names" (English: copy "coco.names" and edit it)
person
bicycle
car
  :
toothbrush
prohibitory
danger
mandatory
other
(py_learn) python detect2_yolov5.py -m ./runs/train/ts_yolov5s_ep30/weights/best.pt -i data/ts_dataset/images/val/00001.jpg -l ts_names_jp
・Execution log
(py_learn) python detect2_yolov5.py -m ./runs/train/ts_yolov5s_ep30/weights/best.pt -i data/ts_dataset/images/val/00001.jpg -l ts_names_jp Object detection YoloV5 in PyTorch Ver. 0.05: Starting application... OpenCV virsion : 4.9.0 - Image File : data/ts_dataset/images/val/00001.jpg - YOLO v5 : ultralytics/yolov5 - Pretrained : ./runs/train/ts_yolov5s_ep30/weights/best.pt - Confidence lv: 0.25 - Label file : ts_names_jp - Program Title: y - Speed flag : y - Processed out: non - Use device : cuda:0 Using cache found in C:\Users\izuts/.cache\torch\hub\ultralytics_yolov5_master YOLOv5 2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7236673 parameters, 0 gradients, 16.5 GFLOPs Adding AutoShape... FPS average: 7.90 Finished.
(py_learn) python detect2_yolov5.py -m ./runs/train/ts_yolov5s_ep30/weights/best.pt -i data/images/traffic-sign-to-test.mp4 -l ts_names_jp
・Execution log
(py_learn) python detect2_yolov5.py -m ./runs/train/ts_yolov5s_ep30/weights/best.pt -i data/images/traffic-sign-to-test.mp4 -l ts_names_jp Object detection YoloV5 in PyTorch Ver. 0.05: Starting application... OpenCV virsion : 4.9.0 - Image File : data/images/traffic-sign-to-test.mp4 - YOLO v5 : ultralytics/yolov5 - Pretrained : ./runs/train/ts_yolov5s_ep30/weights/best.pt - Confidence lv: 0.25 - Label file : ts_names_jp - Program Title: y - Speed flag : y - Processed out: non - Use device : cuda:0 Using cache found in C:\Users\izuts/.cache\torch\hub\ultralytics_yolov5_master YOLOv5 2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7236673 parameters, 0 gradients, 16.5 GFLOPs Adding AutoShape... FPS average: 52.70 Finished.
(py_learn) python detect2.py --weights ./runs/train/ts_yolov5s_ep30/weights/best.pt --source data/images/bus.jpg
detect2: weights=['./runs/train/ts_yolov5s_ep30/weights/best.pt'], source=data/images/bus.jpg, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)
Fusing layers...
Model summary: 157 layers, 7236673 parameters, 0 gradients, 16.5 GFLOPs
Speed: 1.0ms pre-process, 50.0ms inference, 45.1ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp8
・Log of detection with the original yolov5s.pt model (80 classes)
(py_learn) python detect2.py --source data/images/bus.jpg detect2: weights=yolov5s.pt, source=data/images/bus.jpg, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1 YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs Speed: 0.0ms pre-process, 51.9ms inference, 47.0ms NMS per image at shape (1, 3, 640, 640) Results saved to runs\detect\exp9
(py_learn) python train.py --data data/ts0_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30 --name ts0_yolov5s_ep30
・If not using a GPU, run the following command instead:
(py_learn) python train.py --data data/ts0_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30 --name ts0_yolov5s_ep30 --device cpu
   :
Validating runs\train\ts0_yolov5s_ep30\weights\best.pt...
Fusing layers...
Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
        Class  Images  Instances      P      R  mAP50  mAP50-95: 100%|██████████| 5/5 [00:01<0
          all     150        230  0.924   0.89  0.929     0.683
  prohibitory     150        108   0.96  0.944  0.988     0.793
       danger     150         38  0.973  0.945  0.951     0.691
    mandatory     150         35   0.85  0.812  0.887     0.621
        other     150         49  0.911  0.857  0.889     0.627
Results saved to runs\train\ts0_yolov5s_ep30
・The trained model is saved under "runs/train/ts0_yolov5s_ep30/weights"; the evaluation metrics under "runs/train/ts0_yolov5s_ep30/".
人            ← coco data ID: 0
自転車
車
  :
ヘアドライヤー
歯ブラシ       ← coco data ID: 79
標識-禁止      ← Traffic Signs data ID: 80 (0)
標識-危険
標識-必須
標識-その他    ← Traffic Signs data ID: 83 (3)
・"ts_names" (English)
person        ← coco data ID: 0
bicycle
car
  :
toothbrush    ← coco data ID: 79
prohibitory   ← Traffic Signs data ID: 80 (0)
danger
mandatory
other         ← Traffic Signs data ID: 83 (3)
Command option | Default | Meaning |
-i, --image | '../../Videos/car_m.mp4' | path to the input source, or a camera (cam / cam0-cam9) |
-y, --yolov5 | 'ultralytics/yolov5' | path to the yolov5 directory (path to the local yolov5 checkout when running offline) |
-m, --models | 'yolov5s' | model name (path to the model file when local) ※1 |
-ms, --models2 | '' | second model to run inference with (path to the model file) |
-l, --labels | 'coco.names_jp' | path to the label file (coco.names, coco.names_jp) |
-c, --conf | 0.25 | object-detection confidence threshold |
-t, --title | 'y' | show the title (y/n) |
-s, --speed | 'y' | show the speed (y/n) |
-o, --out | 'non' | path to save the output <path/filename> ※2 |
-cpu | - | CPU flag (always run on the CPU if specified) |
--log | 3 | log output level (0/1/2/3/4/5) |
--ucr | - | color-scheme flag (use the Ultralytics colors if specified) |
-y ultralytics/yolov5   ← online (Torch Hub) <default>
-y ./                   ← offline (local)
※ The repository is downloaded to the cache on the first run; subsequent runs use the cache.
-m yolov5s              ← online (Torch Hub) <default>
-m ./test/yolov5s.pt    ← offline (local)
※ If the model is not at the specified location, it is downloaded automatically on the first run.
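Internally, the -y / -m options presumably end up in torch.hub.load(); a minimal sketch of the two loading modes (the weights path and test image are examples):

import torch

# Online: the ultralytics/yolov5 repository is downloaded to the Torch Hub
# cache on the first run and reused from the cache afterwards.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Offline: use a local clone of the yolov5 repository and a local weights file.
model_local = torch.hub.load('./', 'custom',
                             path='./runs/train/ts_yolov5s_ep30/weights/best.pt',
                             source='local')

# Inference works the same way in both cases.
results = model('data/images/bus.jpg')
results.print()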
(py_learn) python detect3_yolov5.py -m yolov5s.pt -ms runs/train/ts0_yolov5s_ep30/weights/best.pt -i data/images/drive003_s.mp4 -l ts_names_jp
・Execution result
(py_learn) python detect3_yolov5.py -m yolov5s.pt -ms runs/train/ts0_yolov5s_ep30/weights/best.pt -i data/images/drive003_s.mp4 -l ts_names_jp Starting.. Object detection YoloV5 in PyTorch Ver. 0.06: Starting application... OpenCV virsion : 4.9.0 - Image File : data/images/drive003_s.mp4 - YOLO v5 : ultralytics/yolov5 - Pretrained : yolov5s.pt - Pretrained 2 : runs/train/ts0_yolov5s_ep30/weights/best.pt - Confidence lv: 0.25 - Label file : ts_names_jp - Program Title: y - Speed flag : y - Processed out: non - Use device : cuda:0 - Log Level : 3 Using cache found in C:\Users\<USER>/.cache\torch\hub\ultralytics_yolov5_master YOLOv5 2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs Adding AutoShape... Using cache found in C:\Users\<USER>/.cache\torch\hub\ultralytics_yolov5_master YOLOv5 2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs Adding AutoShape... FPS average: 22.00 Finished.
(py_learn) python detect3_yolov5.py -m yolov5s.pt -ms runs/train/ts0_yolov5s_ep30/weights/best.pt -i data/images/drive003_s.mp4 -l ts_names_jp -y ./
・Execution result
(py_learn) python detect3_yolov5.py -m yolov5s.pt -ms runs/train/ts0_yolov5s_ep30/weights/best.pt -i data/images/drive003_s.mp4 -l ts_names_jp -y ./ Starting.. Object detection YoloV5 in PyTorch Ver. 0.06: Starting application... OpenCV virsion : 4.9.0 - Image File : data/images/drive003_s.mp4 - YOLO v5 : ./ - Pretrained : yolov5s.pt - Pretrained 2 : runs/train/ts0_yolov5s_ep30/weights/best.pt - Confidence lv: 0.25 - Label file : ts_names_jp - Program Title: y - Speed flag : y - Processed out: non - Use device : cuda:0 - Log Level : 3 YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs Adding AutoShape... YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs Adding AutoShape... FPS average: 22.60 Finished.
(py_learn) PS > cd /anaconda_win/workspace_pylearn/
(py_learn) PS > mkdir p2y_conv
(py_learn) PS > cd p2y_conv
(5) Download "main.py" from the P2Y Converter site and place it in the "p2y_conv" directory.
p2y.py(main.py)
    :
PPATH = '/anaconda_win/workspace_pylearn/p2y_conv/'
absolutepath_of_directory_with_xmlfiles = PPATH + 'annotations/'
absolutepath_of_directory_with_imgfiles = PPATH + 'images/'
absolutepath_of_directory_with_yolofiles = PPATH + 'format_yolo/'
absolutepath_of_directory_with_classes_txt = PPATH
absolutepath_of_directory_with_error_txt = PPATH + 'error/'
    :
(2) Create the working directories.
mkdir format_yolo
mkdir error
p2y_conv/
├─annotations
├─error
├─format_yolo
├─images
├─main.py
└─p2y.py
(3) Install the "lxml" library.
(py_learn) PS > pip install lxml
Collecting lxml
  Downloading lxml-5.2.1-cp311-cp311-win_amd64.whl.metadata (3.5 kB)
Downloading lxml-5.2.1-cp311-cp311-win_amd64.whl (3.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.8/3.8 MB 11.1 MB/s eta 0:00:00
Installing collected packages: lxml
Successfully installed lxml-5.2.1
(4) Run the format conversion.
(py_learn) python p2y.py
libpng warning: iCCP: Not recognizing known sRGB profile that has been edited
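main.py comes from the P2Y Converter site and is not reproduced here; as a rough sketch of what such a Pascal VOC → YOLO conversion does (the class list and folder names below are assumptions), the core computation normalizes each bounding box to "class x_center y_center width height":

import glob
import os
from lxml import etree

CLASSES = ['without_mask', 'with_mask', 'mask_weared_incorrect']  # assumed class list

def voc_to_yolo(xml_path, out_dir):
    """Convert one Pascal VOC annotation file to a YOLO label file."""
    root = etree.parse(xml_path).getroot()
    w = float(root.findtext('size/width'))
    h = float(root.findtext('size/height'))
    lines = []
    for obj in root.findall('object'):
        cls = CLASSES.index(obj.findtext('name'))
        xmin = float(obj.findtext('bndbox/xmin'))
        ymin = float(obj.findtext('bndbox/ymin'))
        xmax = float(obj.findtext('bndbox/xmax'))
        ymax = float(obj.findtext('bndbox/ymax'))
        # YOLO format: class x_center y_center width height, all normalized to 0-1
        xc = (xmin + xmax) / 2 / w
        yc = (ymin + ymax) / 2 / h
        bw = (xmax - xmin) / w
        bh = (ymax - ymin) / h
        lines.append(f'{cls} {xc:.6f} {yc:.6f} {bw:.6f} {bh:.6f}')
    name = os.path.splitext(os.path.basename(xml_path))[0]
    with open(os.path.join(out_dir, name + '.txt'), 'w') as f:
        f.write('\n'.join(lines) + '\n')

for xml_file in glob.glob('annotations/*.xml'):
    voc_to_yolo(xml_file, 'format_yolo')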
mkdir -p mask_dataset/images/train
mkdir -p mask_dataset/images/val
mkdir -p mask_dataset/labels/train
mkdir -p mask_dataset/labels/val
・Result
(py_learn) PS > tree
p2y_conv/
├─annotations
├─error
├─format_yolo
├─images
└─mask_dataset
    ├─images
    │  ├─train
    │  └─val
    └─labels
        ├─train
        └─val
(2) Place the image files and label files in the "mask_dataset" directory.
move format_yolo/*1.txt mask_dataset/labels/val
move format_yolo/*5.txt mask_dataset/labels/val
move images/*1.png mask_dataset/images/val
move images/*5.png mask_dataset/images/val
・Move the remaining files for training (853 - 171 = 682 files)
move format_yolo/*.txt mask_dataset/labels/train
move images/*.png mask_dataset/images/train
(3) Create the file mask.yaml inside the "mask_dataset/" folder.
train: data/mask_dataset/images/train
val: data/mask_dataset/images/val
nc: 4
names:
  0: without_mask
  1: with_mask
  2: mask_weared_incorrect
  3: motorcycle
(4) Copy (or move) the "mask_dataset/" folder into "yolov5/data/".
(py_learn) python train.py --epochs 100 --data data/mask_dataset/mask.yaml --weights yolov5s.pt --name mask_yolov5s
・If not using a GPU, run the following command instead:
(py_learn) python train.py --epochs 100 --data data/mask_dataset/mask.yaml --weights yolov5s.pt --name mask_yolov5s --device cpu
(py_learn) python train.py --epochs 100 --data data/mask_dataset/mask.yaml --weights yolov5s.pt --name mask_yolov5s train: weights=yolov5s.pt, cfg=, data=data/mask_dataset/mask.yaml, hyp=data\hyps\hyp.scratch-low.yaml, epochs=100, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data\hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train, name=mask_yolov5s, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False github: YOLOv5 is out of date by 3 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update. YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0 Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5 runs in Comet TensorBoard: Start with 'tensorboard --logdir runs\train', view at http://localhost:6006/ Overriding model.yaml nc=80 with nc=4 from n params module arguments 0 -1 1 3520 models.common.Conv [3, 32, 6, 2, 2] 1 -1 1 18560 models.common.Conv [32, 64, 3, 2] 2 -1 1 18816 models.common.C3 [64, 64, 1] 3 -1 1 73984 models.common.Conv [64, 128, 3, 2] : Starting training for 100 epochs... Epoch GPU_mem box_loss obj_loss cls_loss Instances Size 0/99 3.35G 0.1051 0.06055 0.03577 79 640: 100%|██████████| 43/43 [00:04<00:00, 9. Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 6/6 [00:01<0 all 171 778 0.69 0.0704 0.0216 0.00536 : Epoch GPU_mem box_loss obj_loss cls_loss Instances Size 99/99 4.41G 0.01885 0.01993 0.001163 75 640: 100%|██████████| 43/43 [00:03<00:00, 11. Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 6/6 [00:01<0 all 171 778 0.848 0.814 0.86 0.59 100 epochs completed in 0.143 hours. Optimizer stripped from runs\train\mask_yolov5s\weights\last.pt, 14.4MB Optimizer stripped from runs\train\mask_yolov5s\weights\best.pt, 14.4MB Validating runs\train\mask_yolov5s\weights\best.pt... Fusing layers... Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 6/6 [00:01<0 all 171 778 0.859 0.849 0.888 0.609 without_mask 171 122 0.773 0.885 0.899 0.59 with_mask 171 630 0.932 0.931 0.965 0.684 mask_weared_incorrect 171 26 0.872 0.731 0.801 0.554 Results saved to runs\train\mask_yolov5s
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Images/mask-test.jpg --view-img
・Execution log (results are saved to "runs/detect/exp*"; * is incremented each run)
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Images/mask-test.jpg --view-img detect2: weights=['./runs/train/mask_yolov5s/weights/best.pt'], source=../../Images/mask-test.jpg, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=True, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1 YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs Speed: 1.0ms pre-process, 49.2ms inference, 42.0ms NMS per image at shape (1, 3, 640, 640) Results saved to runs\detect\exp10
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Videos/mask.mov --view-img
・Execution log (results are saved to "runs/detect/exp*"; * is incremented each run)
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Videos/mask.mov --view-img detect2: weights=['./runs/train/mask_yolov5s/weights/best.pt'], source=../../Videos/mask.mov, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=True, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1 YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs Speed: 0.6ms pre-process, 6.4ms inference, 1.0ms NMS per image at shape (1, 3, 640, 640) Results saved to runs\detect\exp12
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Images/mask.jpg --view-img
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Videos/mask2.mp4 --view-img
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Videos/mask-test.mp4 --view-img
マスクなし
マスクあり
マスク着用_不正確
・"mask_names" (English)
without_mask
with_mask
mask_weared_incorrect
(py_learn) python detect2_yolov5.py -m ./runs/train/mask_yolov5s/weights/best.pt -i ../../Images/mask-test.jpg -l mask_names_jp
・Execution log
(py_learn) python detect2_yolov5.py -m ./runs/train/mask_yolov5s/weights/best.pt -i ../../Images/mask-test.jpg -l mask_names_jp Object detection YoloV5 in PyTorch Ver. 0.02: Starting application... OpenCV virsion : 4.9.0 - Image File : ../../Images/mask-test.jpg - YOLO v5 : ultralytics/yolov5 - Pretrained : ./runs/train/mask_yolov5s/weights/best.pt - Label file : mask_names_jp - Program Title: y - Speed flag : y - Processed out: non - Use device : cuda:0 Using cache found in C:\Users\izuts/.cache\torch\hub\ultralytics_yolov5_master YOLOv5 2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs Adding AutoShape... FPS average: 9.00 Finished.
(py_learn) python detect2_yolov5.py -m ./runs/train/mask_yolov5s/weights/best.pt -i ../../Videos/mask.mov -l mask_names_jp
・Execution log
(py_learn) python detect2_yolov5.py -m ./runs/train/mask_yolov5s/weights/best.pt -i ../../Videos/mask.mov -l mask_names_jp Object detection YoloV5 in PyTorch Ver. 0.02: Starting application... OpenCV virsion : 4.9.0 - Image File : ../../Videos/mask.mov - YOLO v5 : ultralytics/yolov5 - Pretrained : ./runs/train/mask_yolov5s/weights/best.pt - Label file : mask_names_jp - Program Title: y - Speed flag : y - Processed out: non - Use device : cuda:0 Using cache found in C:\Users\izuts/.cache\torch\hub\ultralytics_yolov5_master YOLOv5 2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs Adding AutoShape... FPS average: 45.70 Finished.
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Images/mask.jpg --view-img
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Videos/mask2.mp4 --view-img
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Videos/mask-test.mp4 --view-img
(py_learn) python train.py --epochs 100 --data data/janken4_dataset/janken4_dataset.yaml --weights yolov5s.pt --name janken4_yolov5s
・If not using a GPU, run the following command instead:
(py_learn) python train.py --epochs 100 --data data/janken4_dataset/janken4_dataset.yaml --weights yolov5s.pt --name janken4_yolov5s --device cpu
(py_learn) python train.py --epochs 100 --data data/janken4_dataset/janken4_dataset.yaml --weights yolov5s.pt --name janken4_yolov5s train: weights=yolov5s.pt, cfg=, data=data/janken4_dataset/janken4_dataset.yaml, hyp=data\hyps\hyp.scratch-low.yaml, epochs=100, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data\hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train, name=janken4_yolov5s, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False github: YOLOv5 is out of date by 3 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update. YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0 Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5 runs in Comet TensorBoard: Start with 'tensorboard --logdir runs\train', view at http://localhost:6006/ Overriding model.yaml nc=80 with nc=3 from n params module arguments 0 -1 1 3520 models.common.Conv [3, 32, 6, 2, 2] 1 -1 1 18560 models.common.Conv [32, 64, 3, 2] 2 -1 1 18816 models.common.C3 [64, 64, 1] 3 -1 1 73984 models.common.Conv [64, 128, 3, 2] : Starting training for 100 epochs... Epoch GPU_mem box_loss obj_loss cls_loss Instances Size 0/99 3.35G 0.08231 0.02932 0.03379 37 640: 100%|██████████| 30/30 [00:03<00:00, 9. Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 4/4 [00:00<0 all 120 120 0.00334 0.992 0.0603 0.0182 : Epoch GPU_mem box_loss obj_loss cls_loss Instances Size 99/99 3.98G 0.01874 0.009725 0.00775 38 640: 100%|██████████| 30/30 [00:02<00:00, 12. Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 4/4 [00:00<0 all 120 120 0.993 0.992 0.995 0.787 100 epochs completed in 0.093 hours. Optimizer stripped from runs\train\janken4_yolov5s\weights\last.pt, 14.4MB Optimizer stripped from runs\train\janken4_yolov5s\weights\best.pt, 14.4MB Validating runs\train\janken4_yolov5s\weights\best.pt... Fusing layers... Model summary: 157 layers, 7018216 parameters, 0 gradients, 15.8 GFLOPs Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 4/4 [00:00<0 all 120 120 0.993 0.992 0.995 0.788 goo 120 40 0.984 1 0.995 0.754 choki 120 40 1 0.977 0.995 0.755 par 120 40 0.994 1 0.995 0.855 Results saved to runs\train\janken4_yolov5s
(py_learn) python detect2.py --weights ./runs/train/janken4_yolov5s/weights/best.pt --source ../../Images/janken3.jpg --view-img
・Execution log (results are saved to "runs/detect/exp*"; * is incremented each run)
(py_learn) python detect2.py --weights ./runs/train/janken4_yolov5s/weights/best.pt --source ../../Images/janken3.jpg --view-img detect2: weights=['./runs/train/janken4_yolov5s/weights/best.pt'], source=../../Images/janken3.jpg, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=True, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1 YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7018216 parameters, 0 gradients, 15.8 GFLOPs Speed: 0.0ms pre-process, 53.6ms inference, 42.0ms NMS per image at shape (1, 3, 640, 640) Results saved to runs\detect\exp23
(py_learn) python detect2.py --weights ./runs/train/janken4_yolov5s/weights/best.pt --source ../../Videos/janken_test2.mp4 --view-img
・Execution log (results are saved to "runs/detect/exp*"; * is incremented each run)
(py_learn) python detect2.py --weights ./runs/train/janken4_yolov5s/weights/best.pt --source ../../Videos/janken_test2.mp4 --view-img detect2: weights=['./runs/train/janken4_yolov5s/weights/best.pt'], source=../../Videos/janken_test2.mp4, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=True, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1 YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7018216 parameters, 0 gradients, 15.8 GFLOPs Speed: 0.7ms pre-process, 8.4ms inference, 1.0ms NMS per image at shape (1, 3, 640, 640) Results saved to runs\detect\exp24
(py_learn) python detect2.py --weights ./runs/train/janken4_yolov5s/weights/best.pt --source ../../Images/janken.jpg --view-img
(py_learn) python detect2.py --weights ./runs/train/janken4_yolov5s/weights/best.pt --source ../../Images/janken2.jpg --view-img
グー
チョキ
パー
・"janken.names" (English)
goo
choki
par
(py_learn) python detect2_yolov5.py -m ./runs/train/janken4_yolov5s/weights/best.pt -i ../../Images/janken3.jpg -l janken.names_jp
・Execution log
(py_learn) python detect2_yolov5.py -m ./runs/train/janken4_yolov5s/weights/best.pt -i ../../Images/janken3.jpg -l janken.names_jp Object detection YoloV5 in PyTorch Ver. 0.02: Starting application... OpenCV virsion : 4.9.0 - Image File : ../../Images/janken3.jpg - YOLO v5 : ultralytics/yolov5 - Pretrained : ./runs/train/janken4_yolov5s/weights/best.pt - Label file : janken.names_jp - Program Title: y - Speed flag : y - Processed out: non - Use device : cuda:0 Using cache found in C:\Users\izuts/.cache\torch\hub\ultralytics_yolov5_master YOLOv5 2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7018216 parameters, 0 gradients, 15.8 GFLOPs Adding AutoShape... FPS average: 7.90 Finished.
(py_learn) python detect2_yolov5.py -m ./runs/train/janken4_yolov5s/weights/best.pt -i ../../Videos/janken_test2.mp4 -l janken.names_jp
・Execution log
(py_learn) python detect2_yolov5.py -m ./runs/train/janken4_yolov5s/weights/best.pt -i ../../Videos/janken_test2.mp4 -l janken.names_jp Object detection YoloV5 in PyTorch Ver. 0.02: Starting application... OpenCV virsion : 4.9.0 - Image File : ../../Videos/janken_test2.mp4 - YOLO v5 : ultralytics/yolov5 - Pretrained : ./runs/train/janken4_yolov5s/weights/best.pt - Label file : janken.names_jp - Program Title: y - Speed flag : y - Processed out: non - Use device : cuda:0 Using cache found in C:\Users\izuts/.cache\torch\hub\ultralytics_yolov5_master YOLOv5 2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) Fusing layers... Model summary: 157 layers, 7018216 parameters, 0 gradients, 15.8 GFLOPs Adding AutoShape... FPS average: 47.30 Finished.
(py_learn) python detect2_yolov5.py -m ./runs/train/janken4_yolov5s/weights/best.pt -i ../../Images/janken.jpg -l janken.names_jp
(py_learn) python detect2_yolov5.py -m ./runs/train/janken4_yolov5s/weights/best.pt -i ../../Images/janken2.jpg -l janken.names_jp
Program | Main command options | Default | Function | Type |
detect2.py | --weights | yolov5s.pt | Object detection (inference) on an input source | modified version of detect.py bundled with YOLOv5 |
 | --source | data/images | | |
 | --device | (cuda:0) | | |
 | --view-img | False | | |
detect3_yolov5.py | -i, --image | '../../Videos/car_m.mp4' | Object detection (inference) on an input source with PyTorch; supports multiple models and Japanese labels | newly written |
 | -y, --yolov5 | 'ultralytics/yolov5' | | |
 | -m, --models | 'yolov5s' | | |
 | -ms, --models2 | '' | | |
 | -l, --labels | 'coco.names_jp' | | |
yolov5_OV2.py | -i, --input | cam | Object detection (inference) on an input source with OpenVINO™; supports Japanese labels | newly written |
 | -m, --model | yolov5s_v7.xml | | |
 | -d, --device | CPU | | |
 | -l, --label | coco.names_jp | | |
export.py | --include | (must be specified) | Convert a trained model to another format | bundled with YOLOv5 |
train.py | --weights | yolov5s.pt | Training program | bundled with YOLOv5 |
 | --cfg | | | |
 | --data | data/coco.yaml | | |
 | --epochs | 300 | | |
 | --batch-size | 16 | | |
 | --device | (cuda:0) | | |
 | --project | runs/train | | |
 | --name | project/name | | |
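Of these, export.py is driven by the mandatory --include option, which lists the output formats; for example, converting the traffic-sign model to ONNX and OpenVINO IR (the weights path and the chosen formats are only an example) would be:

(py_learn) python export.py --weights runs/train/ts_yolov5s_ep30/weights/best.pt --include onnx openvino

The OpenVINO export produces an .xml/.bin pair of the kind that yolov5_OV2.py above presumably takes with -m.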
Model | size (pixels) | mAPval 50-95 | mAPval 50 | Speed CPU b1 (ms) | Speed V100 b1 (ms) | Speed V100 b32 (ms) | params (M) | FLOPs @640 (B) |
YOLOv5n | 640 | 28.0 | 45.7 | 45 | 6.3 | 0.6 | 1.9 | 4.5 |
YOLOv5s | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
YOLOv5m | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
YOLOv5l | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
YOLOv5x | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
Dataset | Images | yolov5s, 30 epochs | yolov5s, 100 epochs | yolov5s, 300 epochs | yolov5x, 100 epochs |
Traffic Signs Dataset | 741 | 2 min 30 s | 7 min 1 s | 20 min 36 s | |
Mask Wearing Dataset | 853 | | 8 min 35 s | | |
janken Dataset | 600 | | 5 min 35 s | | 2 h 4 min 41 s ※ |
OS | GPU: RTX 4070 | GPU: GTX 1050 | CPU: i9-13900 | CPU: i7-1260P | CPU: i7-1185G7 | CPU: i7-6700 |
Windows 11/10 | 2 min 30 s | 16 min 48 s | 59 min 17 s | × | | |
Ubuntu 22.04/20.04 | 1 min 40 s | × | × | × | × | |
(py_learn) python train.py --data data/ts_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30 train: weights=yolov5s.pt, cfg=, data=data/ts_dataset/ts.yaml, hyp=data\hyps\hyp.scratch-low.yaml, epochs=30, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data\hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False remote: Enumerating objects: 8, done. remote: Counting objects: 100% (6/6), done. remote: Compressing objects: 100% (3/3), done. remote: Total 8 (delta 3), reused 5 (delta 3), pack-reused 2 Unpacking objects: 100% (8/8), 3.77 KiB | 226.00 KiB/s, done. From https://github.com/ultralytics/yolov5 db125a20..ae4ef3b2 master -> origin/master github: YOLOv5 is out of date by 2 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update. YOLOv5 v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB) : Plotting labels to runs\train\exp\labels.jpg... OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized. OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
"C:\Users\<USER>\anaconda3\envs\py_learn\Lib\site-packages\torch\lib\libiomp5md.dll" "C:\Users\<USER>\anaconda3\envs\py_learn\Library\bin\libiomp5md.dll"2. 2つ目の「libiomp5md.dll」を「temp/」(どこでもよい)へ移動
(py_learn) python train.py --epochs 100 --data data/mask_dataset/mask.yaml --weights yolov5s.pt --name mask_yolov5s : Starting training for 100 epochs... Epoch GPU_mem box_loss obj_loss cls_loss Instances Size 0/99 3.35G 0.1201 0.0557 0.04686 158 640: 12%|█▏ | 5/43 [00:01<00:06, 6.1libpng warning: iCCP: Not recognizing known sRGB profile that has been edited 0/99 3.35G 0.1187 0.05642 0.0466 156 640: 28%|██▊ | 12/43 [00:01<00:03, 9.libpng warning: iCCP: Not recognizing known sRGB profile that has been edited 0/99 3.35G 0.1165 0.05713 0.04507 126 640: 40%|███▉ | 17/43 [00:02<00:02, 9.libpng warning: iCCP: Not recognizing known sRGB profile that has been edited :
・Get image information with ImageMagick
> cd temp
> magick mogrify -identify *.png
image01.png PNG 512x366 512x366+0+0 8-bit TrueColor sRGB 329899B 0.007u 0:00.006
image02.png PNG 301x400 301x400+0+0 8-bit TrueColorAlpha sRGB 181960B 0.006u 0:00.005
> cd train
> magick mogrify -strip *.png
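If ImageMagick is not at hand, the same clean-up can be done from Python with Pillow; a minimal sketch that rewrites the PNGs in place without their edited ICC profile (the folder path is an example, and Pillow is assumed to be installed):

import glob
from PIL import Image

for path in glob.glob('mask_dataset/images/train/*.png'):
    img = Image.open(path)
    img.load()                        # read the pixel data before overwriting the file
    img.save(path, icc_profile=None)  # re-save without the iCCP chunk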
: github: ⚠️ YOLOv5 is out of date by 6 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update. YOLOv5 v7.0-297-gd07d0cf6 Python-3.11.7 torch-2.2.0+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 11987MiB) :
(py_learn) git remote -v
origin  https://github.com/ultralytics/yolov5 (fetch)
origin  https://github.com/ultralytics/yolov5 (push)
(py_learn) git branch -vv
* master d07d0cf6 [origin/master: behind 6] Create cla.yml (#12899)
(py_learn) git pull
Updating d07d0cf6..cf8b67b7
Fast-forward
 .github/workflows/merge-main-into-prs.yml | 56 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 pyproject.toml | 2 +-
 2 files changed, 57 insertions(+), 1 deletion(-)
 create mode 100644 .github/workflows/merge-main-into-prs.yml