#author("2024-04-18T10:59:36+00:00","default:mizutu","mizutu")
[[私的AI研究会]] > RevYOLOv5_2
*[Review] Object Detection Algorithm "YOLO V5" 2 &color(green){== Work in progress ==}; [#n9670cad]
#ref(ts_result.gif,right,around,90%,ts_result.gif)
Review the "YOLO V5" used in Chapter04 of 「PyTorch ではじめる AI開発」.~
Dig deeper into training with "YOLO V5".~
#divregion( Contents,open)
#contents
#enddivregion
#clear
RIGHT:&size(12){※ Last updated: 2024/04/18 };

** [[Official YOLOv5>+https://github.com/ultralytics/yolov5]] Study 2: Training [#j3d5c191]
- Download the project package [[update_20240418.zip>https://izutsu.aa0.netvolante.jp/download/linux/update_20240418.zip]] (624MB) <update file>~
・Folders created by extracting the archive:~
#codeprettify(){{
update
├─workspace_py37
│  └─mylib
└─workspace_pylearn
    └─yolov5
        ├─data
        │  ├─images
        │  ├─janken4_dataset         ※1 training datasets
        │  ├─mask_dataset            ※1
        │  ├─ts0_dataset             ※1
        │  └─ts_dataset              ※1
        └─runs
            └─train
                ├─janken4_yolov5s     ※2 training results
                ├─mask_yolov5s        ※2
                ├─ts0_yolov5s_ep30    ※2
                └─ts_yolov5s_ep30     ※2
}}
※1 Datasets produced by carrying out the "Preparation" steps of the projects below~
※2 Files generated by running the training (training takes a long time without a GPU environment)~
・&color(red){Overwrite the folders of the same names with the extracted folders (do not copy ※1 and ※2 if you want to run the steps yourself)};~

*** Training program "train.py" [#dac40883]
- The working directory is "workspace_pylearn/yolov5/"~

- Command-line parameters when running "train.py" (defaults reconciled against the YOLOv5 v7.0 run log below)~
|LEFT:128|CENTER:38|CENTER:180|LEFT:|c
|CENTER:Option|Argument|Default|CENTER:Meaning|h
|BGCOLOR(lightyellow):--weights|BGCOLOR(lightyellow):str|BGCOLOR(lightyellow):yolov5s.pt|BGCOLOR(lightyellow):pretrained weights file|
|BGCOLOR(lightyellow):--cfg|BGCOLOR(lightyellow):str|BGCOLOR(lightyellow):|BGCOLOR(lightyellow):model configuration YAML (choose the file matching the model variant)|
|BGCOLOR(lightyellow):--data|BGCOLOR(lightyellow):str|BGCOLOR(lightyellow):data/coco.yaml|BGCOLOR(lightyellow):path to the YAML describing the training data|
|--hyp|str|data/hyps/hyp.scratch-low.yaml|path to the hyperparameter YAML|
|BGCOLOR(lightyellow):--epochs|BGCOLOR(lightyellow):int|BGCOLOR(lightyellow):100|BGCOLOR(lightyellow):number of training epochs|
|BGCOLOR(lightyellow):--batch-size|BGCOLOR(lightyellow):int|BGCOLOR(lightyellow):16|BGCOLOR(lightyellow):batch size|
|BGCOLOR(lightyellow):--imgsz, --img|BGCOLOR(lightyellow):int|BGCOLOR(lightyellow):640|BGCOLOR(lightyellow):training image size in pixels (--img 640)|
|--rect|flag|-|rectangular training|
|BGCOLOR(lightyellow):--resume|BGCOLOR(lightyellow):flag|BGCOLOR(lightyellow):-|BGCOLOR(lightyellow):resume the most recently interrupted training run|
|--nosave|flag|-|save only the final checkpoint|
|--noval|flag|-|only validate the final epoch|
|--noautoanchor|flag|-|disable the automatic anchor check|
|--evolve|int|-|evolve hyperparameters (optionally for N generations)|
|--bucket|str||gsutil bucket|
|--cache|str|-|cache images in "ram" or on "disk"|
|--image-weights|flag|-|use weighted image selection for training|
|BGCOLOR(lightyellow):--device|BGCOLOR(lightyellow):str|BGCOLOR(lightyellow):|BGCOLOR(lightyellow):processor to use (0 or 0,1,2,3 or cpu; CUDA is used when omitted)|
|--multi-scale|flag|-|vary the image size by ±50%|
|--single-cls|flag|-|train multi-class data as single-class|
|--optimizer|str|SGD|optimizer to use (SGD, Adam, AdamW)|
|--sync-bn|flag|-|use SyncBatchNorm (available only in DDP mode)|
|--local_rank|int|-1|DDP parameter, do not modify|
|--workers|int|8|max number of dataloader workers|
|BGCOLOR(lightyellow):--project|BGCOLOR(lightyellow):str|BGCOLOR(lightyellow):runs/train|BGCOLOR(lightyellow):folder where training results are recorded|
|BGCOLOR(lightyellow):--name|BGCOLOR(lightyellow):str|BGCOLOR(lightyellow):exp|BGCOLOR(lightyellow):subfolder name under the project folder (incremented on each run)|
|--entity|str|None|W&B entity|
|BGCOLOR(lightyellow):--exist-ok|BGCOLOR(lightyellow):flag|BGCOLOR(lightyellow):-|BGCOLOR(lightyellow):allow an existing project/name folder (results are overwritten)|
|--quad|flag|-|quad dataloader|
|--cos-lr|flag|-|cosine LR scheduler|
|--label-smoothing|float|0.0|label smoothing epsilon|
|--patience|int|100|EarlyStopping patience (epochs without improvement)|
|--upload_dataset|flag|-|upload dataset as a W&B artifact table|
|--bbox_interval|int|-1|bounding-box image logging interval for W&B|
|--save_period|int|-1|save a checkpoint every N epochs (-1 = disabled)|
|--seed|int|0|global training seed|
|--artifact_alias|str|latest|version of the dataset artifact to use|
|--freeze|list|[0]|layers to freeze (backbone=10, first3=0 1 2)|
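As a quick illustration of how the highlighted options combine into an actual command line, here is a small sketch; the helper function and the example paths are my own for illustration and are not part of train.py itself:

```python
def build_train_command(data_yaml: str, weights: str, epochs: int = 100,
                        batch_size: int = 16, imgsz: int = 640,
                        name: str = "exp", device: str = "") -> list[str]:
    """Assemble the argv list for a train.py run using the commonly used options."""
    cmd = ["python", "train.py",
           "--data", data_yaml,
           "--weights", weights,
           "--epochs", str(epochs),
           "--batch-size", str(batch_size),
           "--img", str(imgsz),
           "--name", name]
    if device:  # e.g. "cpu" or "0"; when omitted, CUDA is used if available
        cmd += ["--device", device]
    return cmd

# Example: the CPU training run used later in this article
cmd = build_train_command("data/ts_dataset/ts.yaml", "yolov5s.pt",
                          epochs=30, name="ts_yolov5s_ep30", device="cpu")
print(" ".join(cmd))
```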

** Additional training with an open dataset: traffic-sign detection [#y29f2a45]
- ''Reference site'' → [[Kunihiko Kaneko Lab: Object detection, and running additional training for object detection (YOLOv5, PyTorch, Python, on Windows)>+https://www.kkaneko.jp/ai/win/yolov5.html]]~

- Use the YOLO-format open dataset [[Traffic Signs Dataset>+https://www.kaggle.com/datasets/valentynsichkar/traffic-signs-dataset-in-yolo-format?resource=downloa]]~

*** Preparation [#w8a2f435]
#ref(yolov5_train01_m.jpg,right,around,15%,yolov5_train01_m.jpg)
+ Open the [[Traffic Signs Dataset in YOLO format>+https://www.kaggle.com/datasets/valentynsichkar/traffic-signs-dataset-in-yolo-format?resource=downloa]] page~
・Press the "Download" button~
 Register with Kaggle, or sign in with a Google account, etc.~
・Extract the downloaded "archive.zip"~
・Copy (move) the resulting "archive/" folder into the "workspace_pylearn/yolov5_test/" folder~
~
+ Process the 900 text files ("00000.txt" to "00899.txt") in the "archive/ts/ts/" directory~
 Add the dataset's classes as class numbers 80, 81, 82, 83, following the class numbers used by the COCO dataset~
#codeprettify(){{
80, prohibitory
81, danger
82, mandatory
83, other
}}
#clear
+ Run the class-number conversion program "data_select1.py"~
・The working directory is "workspace_pylearn/yolov5_test/"~
#codeprettify(){{
(py_learn) python data_select1.py
}}
・"data_select1.py" run log
#codeprettify(){{
(py_learn) python data_select1.py
    :
archive/ts/ts/00892.txt が見つかりませんでした
filename: archive/ts/ts/00893.txt
   83 0.6139705882352942 0.62 0.020588235294117647 0.035

filename: archive/ts/ts/00894.txt
   80 0.2963235294117647 0.67625 0.020588235294117647 0.035

   80 0.8180147058823529 0.685625 0.028676470588235293 0.04875

   83 0.8158088235294118 0.628125 0.03602941176470588 0.06125

filename: archive/ts/ts/00895.txt
   80 0.15845588235294117 0.630625 0.03308823529411765 0.05625

   80 0.7279411764705882 0.5525 0.03529411764705882 0.06

filename: archive/ts/ts/00896.txt
   80 0.6080882352941176 0.53875 0.027941176470588237 0.0475

filename: archive/ts/ts/00897.txt
   83 0.6 0.6725 0.01764705882352941 0.03

filename: archive/ts/ts/00898.txt
   80 0.24926470588235294 0.65 0.023529411764705882 0.04

   80 0.6125 0.6575 0.023529411764705882 0.04

filename: archive/ts/ts/00899.txt
   81 0.65625 0.63625 0.04191176470588235 0.0625
}}
・Source file~
#divregion(「data_select1.py」)
#codeprettify(){{
# -*- coding: utf-8 -*-
##------------------------------------------
## YOLOv5: dataset preparation for additional training
##   Step 1: download and fix the class IDs
##   Traffic Signs Dataset in YOLO format
##   https://www.kaggle.com/datasets/valentynsichkar/traffic-signs-dataset-in-yolo-format?resource=downloa
##
##               2024.04.09 Masahiro Izutsu
##------------------------------------------
## https://www.kkaneko.jp/ai/win/yolov5.html
## data_select1.py
##      2024/04/15  changed to process every line of each file

'''
Process the 900 text files ("00000.txt" to "00899.txt") in the "archive/ts/ts/" directory.
Take the class number (0, 1, 2, 3) from each line; if it is less than 80, add 80
and write the line back to the file.
If a file does not exist, print an error message.
'''

for i in range(0, 900):  # scan 00000.txt to 00899.txt (only 741 of the files actually exist)
    filename = f"archive/ts/ts/{i:05}.txt"
    try:
        with open(filename, "r", encoding="utf-8") as file:
            # process every line
            lines = file.readlines()
            for ln in range(len(lines)):
                one_line = lines[ln].strip()
                parts = one_line.split()
                class_number = int(parts[0])
                # the original class numbers are 0, 1, 2, 3; add 80 and update the line in place
                if class_number < 80:
                    updated_class_number = class_number + 80
                    cx, cy, bw, bh = map(float, parts[1:])  # YOLO format: center x, center y, width, height
                    updated_line = f"{updated_class_number} {cx} {cy} {bw} {bh}\n"
                    lines[ln] = updated_line

        with open(filename, "w", encoding="utf-8") as file:
            file.writelines(lines)
        # re-read and print the updated file for confirmation
        with open(filename, "r", encoding="utf-8") as file:
            lines = file.readlines()
            print(f"filename: {filename}")
            for ln in range(len(lines)):
                print("  ", lines[ln])

    except FileNotFoundError:
        print(f"{filename} が見つかりませんでした")

exit()
}}
#enddivregion
~
+ Reduce the image width to 640 pixels~
・The working directory is "workspace_pylearn/yolov5_test/"~
・Run the image-resizing program "data_select2.py"~
#codeprettify(){{
(py_learn) python data_select2.py
}}
・"data_select2.py" run log
#codeprettify(){{
(py_learn) python data_select2.py
./archive/ts/ts/00000.jpg を変換
./archive/ts/ts/00001.jpg を変換
./archive/ts/ts/00002.jpg を変換
./archive/ts/ts/00003.jpg を変換
    :
./archive/ts/ts/00896.jpg を変換
./archive/ts/ts/00897.jpg を変換
./archive/ts/ts/00898.jpg を変換
./archive/ts/ts/00899.jpg を変換
}}
・Source file~
#divregion(「data_select2.py」)
#codeprettify(){{
# -*- coding: utf-8 -*-
##------------------------------------------
## YOLOv5: dataset preparation for additional training
##   Step 2: resize the images
##   Traffic Signs Dataset in YOLO format
##   https://www.kaggle.com/datasets/valentynsichkar/traffic-signs-dataset-in-yolo-format?resource=downloa
##
##               2024.04.09 Masahiro Izutsu
##------------------------------------------
## https://www.kkaneko.jp/ai/win/yolov5.html
## data_select2.py

'''
Resize every image to a width of 640 pixels (keeping the aspect ratio)
'''

from PIL import Image
import os

# dataset directory
data_dir = './archive/ts/ts'

# new image width
new_width = 640

# every file in the dataset directory
for f in os.listdir(data_dir):
    filename = data_dir + '/' + f
    # process only .jpg files
    if filename.endswith('.jpg'):
        print(f"{filename} を変換")
        with Image.open(filename) as img:
            # compute the height that preserves the aspect ratio
            aspect_ratio = new_width / img.width
            new_height = int(img.height * aspect_ratio)
            # resize
            resized_img = img.resize((new_width, new_height))
            # overwrite the original file
            resized_img.save(filename)

exit()
}}
#enddivregion
~
+ Create the directory layout for the dataset, to be named "ts_dataset"~
・Run the following commands:~
#codeprettify(){{
mkdir -p ts_dataset/images/train
mkdir -p ts_dataset/images/val
mkdir -p ts_dataset/labels/train
mkdir -p ts_dataset/labels/val
}}
・Resulting layout:~
#codeprettify(){{
(py_learn) PS > tree
yolov5_test/
├─archive
│  └─ts
│      └─ts
└─ts_dataset
    ├─images
    │  ├─train
    │  └─val
    └─labels
        ├─train
        └─val
}}
+ Place the image and label files into the "ts_dataset" directory~
 Split so that training (train) : validation (val) is roughly 8:2~
・Validation set: move the 150 files whose names end in 1 (77 files) or 5 (73 files)~
#codeprettify(){{
move archive/ts/ts/*1.txt ts_dataset/labels/val
move archive/ts/ts/*5.txt ts_dataset/labels/val
move archive/ts/ts/*1.jpg ts_dataset/images/val
move archive/ts/ts/*5.jpg ts_dataset/images/val
}}
・Training set: move the remaining files (741 - 150 = 591 files)~
#codeprettify(){{
move archive/ts/ts/*.txt ts_dataset/labels/train
move archive/ts/ts/*.jpg ts_dataset/images/train
}}
+ Create the file ts.yaml inside the "ts_dataset/" folder~
・The dataset will be placed under the "yolov5/data/" folder~
#codeprettify(){{
train: data/ts_dataset/images/train
val: data/ts_dataset/images/val
nc: 84
names:
  0: person
  1: bicycle
  2: car
    :
  79: toothbrush
  80: prohibitory
  81: danger
  82: mandatory
  83: other
}}
#divregion(「ts.yaml」 full listing)
#codeprettify(){{
train: data/ts_dataset/images/train
val: data/ts_dataset/images/val
nc: 84
names:
  0: person
  1: bicycle
  2: car
  3: motorcycle
  4: airplane
  5: bus
  6: train
  7: truck
  8: boat
  9: traffic light
  10: fire hydrant
  11: stop sign
  12: parking meter
  13: bench
  14: bird
  15: cat
  16: dog
  17: horse
  18: sheep
  19: cow
  20: elephant
  21: bear
  22: zebra
  23: giraffe
  24: backpack
  25: umbrella
  26: handbag
  27: tie
  28: suitcase
  29: frisbee
  30: skis
  31: snowboard
  32: sports ball
  33: kite
  34: baseball bat
  35: baseball glove
  36: skateboard
  37: surfboard
  38: tennis racket
  39: bottle
  40: wine glass
  41: cup
  42: fork
  43: knife
  44: spoon
  45: bowl
  46: banana
  47: apple
  48: sandwich
  49: orange
  50: broccoli
  51: carrot
  52: hot dog
  53: pizza
  54: donut
  55: cake
  56: chair
  57: couch
  58: potted plant
  59: bed
  60: dining table
  61: toilet
  62: tv
  63: laptop
  64: mouse
  65: remote
  66: keyboard
  67: cell phone
  68: microwave
  69: oven
  70: toaster
  71: sink
  72: refrigerator
  73: book
  74: clock
  75: vase
  76: scissors
  77: teddy bear
  78: hair drier
  79: toothbrush
  80: prohibitory
  81: danger
  82: mandatory
  83: other
}}
#enddivregion
~
+ Copy (move) the "ts_dataset/" folder into "yolov5/data/"~
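Before starting training, it can save time to confirm that the layout built in the steps above is consistent: every image has a label, and every label uses a class ID below nc from ts.yaml. This is a sketch of my own (the function `check_dataset` is not part of YOLOv5); it only assumes the images/labels, train/val layout created above:

```python
import os

def check_dataset(root: str, nc: int) -> dict:
    """Count image/label pairs under a YOLO-style dataset root and
    verify that every label line uses a class id below nc."""
    stats = {}
    for split in ("train", "val"):
        img_dir = os.path.join(root, "images", split)
        lbl_dir = os.path.join(root, "labels", split)
        images = {os.path.splitext(f)[0] for f in os.listdir(img_dir) if f.endswith(".jpg")}
        labels = {os.path.splitext(f)[0] for f in os.listdir(lbl_dir) if f.endswith(".txt")}
        for stem in labels:
            with open(os.path.join(lbl_dir, stem + ".txt"), encoding="utf-8") as fh:
                for line in fh:
                    if line.strip():
                        cls = int(line.split()[0])
                        assert 0 <= cls < nc, f"class {cls} out of range in {stem}.txt"
        stats[split] = {"images": len(images), "labels": len(labels),
                        "unmatched": sorted(images ^ labels)}  # stems missing a pair
    return stats
```

For the dataset built above, `check_dataset("ts_dataset", nc=84)` should report 591 train and 150 val pairs with no unmatched stems.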

*** Running the training [#g5e15ed7]
+ Run with 30 epochs~
・The working directory is "workspace_pylearn/yolov5/"~
・Use the default model "yolov5s"~
・Name the training-results folder "ts_yolov5s_ep30"
#codeprettify(){{
(py_learn) python train.py --data data/ts_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30 --name ts_yolov5s_ep30
}}
・If no GPU is available, run the following command instead:~
#codeprettify(){{
(py_learn) python train.py --data data/ts_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30 --name ts_yolov5s_ep30 --device cpu
}}
+ Training completes~
#codeprettify(){{
(py_learn) python train.py --data data/ts_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30 --name ts_yolov5s_ep30
train: weights=yolov5s.pt, cfg=, data=data/ts_dataset/ts.yaml, hyp=data\hyps\hyp.scratch-low.yaml, epochs=30, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data\hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train, name=ts_yolov5s_ep30, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False
remote: Enumerating objects: 24, done.
remote: Counting objects: 100% (24/24), done.
remote: Compressing objects: 100% (24/24), done.
remote: Total 24 (delta 13), reused 1 (delta 0), pack-reused 0
Unpacking objects: 100% (24/24), 9.93 KiB | 462.00 KiB/s, done.
From https://github.com/ultralytics/yolov5
   d07d0cf6..cf8b67b7  master     -> origin/master
github:  YOLOv5 is out of date by 9 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update.
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5  runs in Comet
TensorBoard: Start with 'tensorboard --logdir runs\train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=84

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
    :
Starting training for 30 epochs...

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       0/29      3.42G     0.1287    0.02085     0.1002         46        640: 100%|██████████| 37/37 [00:04<00:00,  8.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230    0.00304     0.0611    0.00226   0.000532
    :
      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      29/29      3.65G    0.02481   0.007517    0.01037         49        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.912      0.884      0.933      0.684

30 epochs completed in 0.036 hours.
Optimizer stripped from runs\train\ts_yolov5s_ep30\weights\last.pt, 14.8MB
Optimizer stripped from runs\train\ts_yolov5s_ep30\weights\best.pt, 14.8MB

Validating runs\train\ts_yolov5s_ep30\weights\best.pt...
Fusing layers...
Model summary: 157 layers, 7236673 parameters, 0 gradients, 16.5 GFLOPs
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:01<0
                   all        150        230      0.905       0.89      0.932      0.689
           prohibitory        150        108      0.954      0.953       0.98      0.763
                danger        150         38      0.934      0.921      0.959      0.684
             mandatory        150         35      0.876       0.81        0.9      0.687
                 other        150         49      0.854      0.878       0.89      0.621
Results saved to runs\train\ts_yolov5s_ep30
}}
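The per-epoch metrics shown in the log are also written to runs/train/ts_yolov5s_ep30/results.csv. A small sketch for picking the best epoch from that file; the column names (e.g. `metrics/mAP_0.5`) follow the results.csv header written by YOLOv5 v7.0, so treat them as an assumption if your version differs:

```python
import csv
import io

def best_epoch(results_csv_text: str, metric: str = "metrics/mAP_0.5"):
    """Return (epoch, value) of the row with the highest value for `metric`.
    YOLOv5 pads results.csv cells with spaces, so headers and values are stripped."""
    reader = csv.DictReader(io.StringIO(results_csv_text))
    best = None
    for row in reader:
        row = {k.strip(): v.strip() for k, v in row.items()}
        value = float(row[metric])
        if best is None or value > best[1]:
            best = (int(row["epoch"]), value)
    return best

# Usage: best_epoch(open("runs/train/ts_yolov5s_ep30/results.csv").read())
```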
#divregion(「train.py」 full run log)
#codeprettify(){{
(py_learn) PS C:\anaconda_win\workspace_pylearn\yolov5> python train.py --data data/ts_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30 --name ts_yolov5s_ep30
train: weights=yolov5s.pt, cfg=, data=data/ts_dataset/ts.yaml, hyp=data\hyps\hyp.scratch-low.yaml, epochs=30, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data\hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train, name=ts_yolov5s_ep30, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False
remote: Enumerating objects: 24, done.
remote: Counting objects: 100% (24/24), done.
remote: Compressing objects: 100% (24/24), done.
remote: Total 24 (delta 13), reused 1 (delta 0), pack-reused 0
Unpacking objects: 100% (24/24), 9.93 KiB | 462.00 KiB/s, done.
From https://github.com/ultralytics/yolov5
   d07d0cf6..cf8b67b7  master     -> origin/master
github:  YOLOv5 is out of date by 9 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update.
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5  runs in Comet
TensorBoard: Start with 'tensorboard --logdir runs\train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=84

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4                -1  2    115712  models.common.C3                        [128, 128, 2]
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6                -1  3    625152  models.common.C3                        [256, 256, 3]
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8                -1  1   1182720  models.common.C3                        [512, 512, 1]
  9                -1  1    656896  models.common.SPPF                      [512, 512, 5]
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  1    361984  models.common.C3                        [512, 256, 1, False]
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  1     90880  models.common.C3                        [256, 128, 1, False]
 18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]
 19          [-1, 14]  1         0  models.common.Concat                    [1]
 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]
 22          [-1, 10]  1         0  models.common.Concat                    [1]
 23                -1  1   1182720  models.common.C3                        [512, 512, 1, False]
 24      [17, 20, 23]  1    240033  models.yolo.Detect                      [84, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model summary: 214 layers, 7246177 parameters, 7246177 gradients, 16.7 GFLOPs

Transferred 343/349 items from yolov5s.pt
AMP: checks passed
optimizer: SGD(lr=0.01) with parameter groups 57 weight(decay=0.0), 60 weight(decay=0.0005), 60 bias
train: Scanning C:\anaconda_win\workspace_pylearn\yolov5\data\ts_dataset\labels\train... 591 images, 0 backgrounds, 0 c
train: WARNING  C:\anaconda_win\workspace_pylearn\yolov5\data\ts_dataset\images\train\00340.jpg: 1 duplicate labels removed
train: New cache created: C:\anaconda_win\workspace_pylearn\yolov5\data\ts_dataset\labels\train.cache
val: Scanning C:\anaconda_win\workspace_pylearn\yolov5\data\ts_dataset\labels\val... 150 images, 0 backgrounds, 0 corru
val: New cache created: C:\anaconda_win\workspace_pylearn\yolov5\data\ts_dataset\labels\val.cache

AutoAnchor: 4.43 anchors/target, 1.000 Best Possible Recall (BPR). Current anchors are a good fit to dataset
Plotting labels to runs\train\ts_yolov5s_ep30\labels.jpg...
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs\train\ts_yolov5s_ep30
Starting training for 30 epochs...

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       0/29      3.42G     0.1287    0.02085     0.1002         46        640: 100%|██████████| 37/37 [00:04<00:00,  8.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230    0.00304     0.0611    0.00226   0.000532

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       1/29      3.64G     0.1039    0.01941    0.06545         53        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230    0.00397      0.594     0.0526     0.0118

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       2/29      3.65G    0.09454    0.01781    0.04951         32        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230     0.0226     0.0556     0.0172    0.00371

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       3/29      3.65G    0.08653    0.01659    0.04304         37        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230     0.0736      0.238      0.106      0.042

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       4/29      3.65G    0.07595    0.01581    0.03856         33        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.139       0.45      0.228     0.0895

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       5/29      3.65G    0.06804     0.0139    0.03444         44        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.749      0.314      0.425      0.206

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       6/29      3.65G    0.06295    0.01304    0.03212         48        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.703      0.361       0.44      0.234

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       7/29      3.65G    0.05831    0.01187    0.02852         74        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.435      0.659      0.538      0.258

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       8/29      3.65G    0.05609    0.01114    0.02617         42        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.591      0.684      0.682      0.339

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       9/29      3.65G    0.05391    0.01047    0.02247         50        640: 100%|██████████| 37/37 [00:03<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230       0.47      0.684      0.536      0.298

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      10/29      3.65G    0.05144   0.009963     0.0219         38        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.867      0.733      0.841      0.429

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      11/29      3.65G    0.04968   0.009795    0.01991         51        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.785      0.785      0.812      0.475

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      12/29      3.65G    0.04783    0.01023    0.01957         47        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.749      0.756      0.788      0.455

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      13/29      3.65G    0.04579   0.009305    0.01832         38        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.749      0.794      0.793      0.467

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      14/29      3.65G      0.044   0.009433    0.01659         39        640: 100%|██████████| 37/37 [00:03<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.843      0.824      0.884      0.527

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      15/29      3.65G    0.04319   0.009144    0.01619         62        640: 100%|██████████| 37/37 [00:03<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.899       0.84      0.902      0.554

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      16/29      3.65G     0.0407    0.00878    0.01458         37        640: 100%|██████████| 37/37 [00:03<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.836       0.82      0.872      0.501

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      17/29      3.65G    0.03939   0.008869    0.01431         43        640: 100%|██████████| 37/37 [00:03<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.918      0.848       0.92      0.544

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      18/29      3.65G    0.03764   0.008324    0.01392         39        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.952      0.812      0.915      0.534

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      19/29      3.65G    0.03544   0.008533    0.01277         37        640: 100%|██████████| 37/37 [00:03<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.939       0.86      0.926      0.569

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      20/29      3.65G    0.03505   0.008579    0.01321         48        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230       0.95       0.84      0.925      0.619

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      21/29      3.65G    0.03357     0.0083    0.01221         36        640: 100%|██████████| 37/37 [00:03<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.962      0.855      0.933      0.622

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      22/29      3.65G    0.03258   0.008084    0.01167         59        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.956      0.834      0.928      0.631

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      23/29      3.65G    0.03153   0.008387    0.01206         65        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.912      0.872      0.926      0.668

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      24/29      3.65G    0.02976   0.007869    0.01135         47        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.907      0.866      0.925      0.653

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      25/29      3.65G    0.02866   0.007615    0.01101         46        640: 100%|██████████| 37/37 [00:03<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.903       0.87      0.926      0.653

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      26/29      3.65G    0.02773   0.007582    0.01107         42        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.905      0.891      0.932      0.687

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      27/29      3.65G    0.02651   0.007505    0.01082         42        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.921       0.86      0.932      0.682

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      28/29      3.65G    0.02511   0.007342    0.01027         33        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:01<0
                   all        150        230       0.91      0.872      0.933      0.683

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      29/29      3.65G    0.02481   0.007517    0.01037         49        640: 100%|██████████| 37/37 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:00<0
                   all        150        230      0.912      0.884      0.933      0.684

30 epochs completed in 0.036 hours.
Optimizer stripped from runs\train\ts_yolov5s_ep30\weights\last.pt, 14.8MB
Optimizer stripped from runs\train\ts_yolov5s_ep30\weights\best.pt, 14.8MB

Validating runs\train\ts_yolov5s_ep30\weights\best.pt...
Fusing layers...
Model summary: 157 layers, 7236673 parameters, 0 gradients, 16.5 GFLOPs
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:01<0
                   all        150        230      0.905       0.89      0.932      0.689
           prohibitory        150        108      0.954      0.953       0.98      0.763
                danger        150         38      0.934      0.921      0.959      0.684
             mandatory        150         35      0.876       0.81        0.9      0.687
                 other        150         49      0.854      0.878       0.89      0.621
Results saved to runs\train\ts_yolov5s_ep30
}}
#enddivregion
~
+ Check the files reported by the message「Results saved to runs\train\ts_yolov5s_ep30」~
#codeprettify(){{
(py_learn) ls runs/train/ts_yolov5s_ep30
Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----        2024/04/15      9:20                weights
-a----        2024/04/15      9:23         558388 confusion_matrix.png
-a----        2024/04/15      9:23         226814 F1_curve.png
-a----        2024/04/15      9:19            401 hyp.yaml
-a----        2024/04/15      9:20         126318 labels.jpg
-a----        2024/04/15      9:20         223809 labels_correlogram.jpg
-a----        2024/04/15      9:19           1208 opt.yaml
-a----        2024/04/15      9:23         124885 PR_curve.png
-a----        2024/04/15      9:23         172410 P_curve.png
-a----        2024/04/15      9:23           9145 results.csv
-a----        2024/04/15      9:23         289059 results.png
-a----        2024/04/15      9:23         175122 R_curve.png
-a----        2024/04/15      9:20         501238 train_batch0.jpg
-a----        2024/04/15      9:20         483837 train_batch1.jpg
-a----        2024/04/15      9:20         496065 train_batch2.jpg
-a----        2024/04/15      9:23         448059 val_batch0_labels.jpg
-a----        2024/04/15      9:23         451714 val_batch0_pred.jpg
-a----        2024/04/15      9:23         447649 val_batch1_labels.jpg
-a----        2024/04/15      9:23         453702 val_batch1_pred.jpg
-a----        2024/04/15      9:23         389031 val_batch2_labels.jpg
-a----        2024/04/15      9:23         395697 val_batch2_pred.jpg
}}
・The trained model weights are in「runs/train/ts_yolov5s_ep30/weights」and the evaluation metrics in「runs/train/ts_yolov5s_ep30/」(with the default settings they go under「runs/train/exp*」)~
|CENTER:F1 curve|CENTER:P curve|CENTER:PR curve|CENTER:R curve|h
|#ref(ts_F1_curve_m.jpg,left,around,15%,F1_curve_m.jpg)|#ref(ts_P_curve_m.jpg,left,around,15%,P_curve_m.jpg)|#ref(ts_PR_curve_m.jpg,left,around,15%,PR_curve_m.jpg)|#ref(ts_R_curve_m.jpg,left,around,15%,R_curve_m.jpg)|
#ref(ts_results_m.jpg,left,around,40%,results_m.jpg)
#ref(ts_confusion_matrix_m.jpg,left,around,25%,confusion_matrix_m.jpg)
#clear
#ref(ts_labels_correlogram_m.jpg,left,around,15%,labels_correlogram.jpg)
#ref(ts_labels_m.jpg,left,around,15%,labels.jpg)
#ref(ts_train_batch0_m.jpg,left,around,10%,train_batch0.jpg)
#ref(ts_train_batch1_m.jpg,left,around,10%,train_batch1.jpg)
#ref(ts_train_batch2_m.jpg,left,around,10%,train_batch2.jpg)
#clear
#ref(ts_val_batch0_labels_m.jpg,left,around,10%,val_batch0_labels.jpg)
#ref(ts_val_batch0_pred_m.jpg,left,around,10%,val_batch0_pred.jpg)
#ref(ts_val_batch1_labels_m.jpg,left,around,10%,val_batch1_labels.jpg)
#ref(ts_val_batch1_pred_m.jpg,left,around,10%,val_batch1_pred.jpg)
#ref(ts_val_batch2_labels_m.jpg,left,around,10%,val_batch2_labels.jpg)
#ref(ts_val_batch2_pred_m.jpg,left,around,10%,val_batch2_pred.jpg)
#clear
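The training metrics can also be read programmatically from「results.csv」. A minimal sketch (assuming the standard YOLOv5 column layout, whose header cells are space-padded, e.g.「metrics/mAP_0.5:0.95」) that reports the epoch with the best mAP50-95:~
#codeprettify(){{
import csv

def best_epoch(csv_path, metric='metrics/mAP_0.5:0.95'):
    """Return (epoch, value) for the row with the highest metric in results.csv."""
    with open(csv_path, newline='') as f:
        rows = list(csv.DictReader(f))
    # YOLOv5 pads header cells with spaces, so strip keys and values first
    rows = [{k.strip(): v.strip() for k, v in r.items()} for r in rows]
    best = max(rows, key=lambda r: float(r[metric]))
    return int(best['epoch']), float(best[metric])

# e.g. best_epoch('runs/train/ts_yolov5s_ep30/results.csv')
}}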

*** Inference with the trained model [#beb93693]
+ Run inference with「detect2.py」(still image)~
・Specify the trained model weights~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/ts_yolov5s_ep30/weights/best.pt --source data/ts_dataset/images/val/00001.jpg
}}
#ref(train_00001_s.jpg,right,around,30%,train_00001_s.jpg)
・Execution log (results go to「runs/detect/exp*」; * increments on each run)~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/ts_yolov5s_ep30/weights/best.pt --source data/ts_dataset/images/val/00001.jpg
detect2: weights=['./runs/train/ts_yolov5s_ep30/weights/best.pt'], source=data/ts_dataset/images/val/00001.jpg, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7236673 parameters, 0 gradients, 16.5 GFLOPs
Speed: 1.0ms pre-process, 52.0ms inference, 46.1ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp25
}}
+ Run inference with「detect2.py」(video)~
・Specify the trained model weights~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/ts_yolov5s_ep30/weights/best.pt --source data/images/traffic-sign-to-test.mp4
}}
#ref(traffic_test.gif,right,around,50%,traffic_test.gif)
・Execution log (results go to「runs/detect/exp*」; * increments on each run)~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/ts_yolov5s_ep30/weights/best.pt --source data/images/traffic-sign-to-test.mp4
detect2: weights=['./runs/train/ts_yolov5s_ep30/weights/best.pt'], source=data/images/traffic-sign-to-test.mp4, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7236673 parameters, 0 gradients, 16.5 GFLOPs
Speed: 0.2ms pre-process, 5.1ms inference, 5.0ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp32
}}

*** Inference with the trained model (Japanese labels) [#ade4f035]
+ Create the label files~
・「ts_names_jp」Japanese (copy and edit「coco.names_jp」)~
#codeprettify(){{
人
自転車
車
    :
ヘアドライヤー
歯ブラシ
標識-禁止
標識-危険
標識-必須
標識-その他
}}
・「ts_names」English (copy and edit「coco.names」)~
#codeprettify(){{
person
bicycle
car
    :
toothbrush
prohibitory
danger
mandatory
other
}}
+ Run inference with「detect2_yolov5.py」(still image)~
・Specify the trained model weights~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/ts_yolov5s_ep30/weights/best.pt -i data/ts_dataset/images/val/00001.jpg -l ts_names_jp
}}
#ref(ts_result_s.jpg,right,around,30%,ts_result_s.jpg)
・Execution log~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/ts_yolov5s_ep30/weights/best.pt -i data/ts_dataset/images/val/00001.jpg -l ts_names_jp

Object detection YoloV5 in PyTorch Ver. 0.05: Starting application...
   OpenCV virsion : 4.9.0

   - Image File   :  data/ts_dataset/images/val/00001.jpg
   - YOLO v5      :  ultralytics/yolov5
   - Pretrained   :  ./runs/train/ts_yolov5s_ep30/weights/best.pt
   - Confidence lv:  0.25
   - Label file   :  ts_names_jp
   - Program Title:  y
   - Speed flag   :  y
   - Processed out:  non
   - Use device   :  cuda:0

Using cache found in C:\Users\izuts/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5  2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7236673 parameters, 0 gradients, 16.5 GFLOPs
Adding AutoShape...

FPS average:       7.90

 Finished.
}}
+ Run inference with「detect2_yolov5.py」(video)~
・Specify the trained model weights~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/ts_yolov5s_ep30/weights/best.pt -i data/images/traffic-sign-to-test.mp4 -l ts_names_jp
}}
#ref(ts_result.gif,right,around,50%,ts_result.gif)
・Execution log~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/ts_yolov5s_ep30/weights/best.pt -i data/images/traffic-sign-to-test.mp4 -l ts_names_jp

Object detection YoloV5 in PyTorch Ver. 0.05: Starting application...
   OpenCV virsion : 4.9.0

   - Image File   :  data/images/traffic-sign-to-test.mp4
   - YOLO v5      :  ultralytics/yolov5
   - Pretrained   :  ./runs/train/ts_yolov5s_ep30/weights/best.pt
   - Confidence lv:  0.25
   - Label file   :  ts_names_jp
   - Program Title:  y
   - Speed flag   :  y
   - Processed out:  non
   - Use device   :  cuda:0

Using cache found in C:\Users\izuts/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5  2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7236673 parameters, 0 gradients, 16.5 GFLOPs
Adding AutoShape...

FPS average:      52.70

 Finished.
}}

*** Questions raised by the results [#i677bbbc]
- Why does「the model fine-tuned from the original 80 coco classes into an 84-class model」detect「only the 4 newly added classes」?~
#ref(exp8_bus_m.jpg,right,around,15%,exp8_bus_m.jpg)
・Detection log with the fine-tuned model (84 classes)~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/ts_yolov5s_ep30/weights/best.pt --source data/images/bus.jpg
detect2: weights=['./runs/train/ts_yolov5s_ep30/weights/best.pt'], source=data/images/bus.jpg, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7236673 parameters, 0 gradients, 16.5 GFLOPs
Speed: 1.0ms pre-process, 50.0ms inference, 45.1ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp8
}}
#ref(YOLOv5/yolov5_bus_result.jpg,right,around,15%,yolov5_bus_result.jpg)
・Detection log with the original yolov5s.pt model (80 classes)~
#codeprettify(){{
(py_learn) python detect2.py --source data/images/bus.jpg
detect2: weights=yolov5s.pt, source=data/images/bus.jpg, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
Speed: 0.0ms pre-process, 51.9ms inference, 47.0ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp9
}}
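One way to check this is to count which class IDs actually appear in the dataset's YOLO-format annotation files; classes that never occur in the labels receive no training signal. A minimal sketch (the labels directory path is an assumption based on the dataset layout above):~
#codeprettify(){{
from collections import Counter
from pathlib import Path

def count_class_ids(labels_dir):
    """Count class IDs in YOLO-format *.txt label files (first token per line)."""
    counts = Counter()
    for txt in Path(labels_dir).glob('*.txt'):
        for line in txt.read_text().splitlines():
            if line.strip():
                counts[int(line.split()[0])] += 1
    return counts

# e.g. count_class_ids('data/ts_dataset/labels/train')
#      should show only the shifted IDs 80-83, none of the coco IDs 0-79
}}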

- Is it possible to add new classes through additional training while keeping the ability to recognize the original classes?~
・Apparently because『the final layer of the pretrained deep neural network is removed, and the network is then adjusted to learn the new 4 classes』~
・Is there a training method that adds new classes while keeping the final layer of the pretrained deep neural network?~

*** Thoughts on the questions [#d1e94ef8]
#ref(bus_result_ex0_ucr.jpg,right,around,15%,bus_result_ex0_ucr.jpg)
- This may not be a problem that「additional training」should solve~
・The goal can be achieved by running inference with the original pretrained model first and then with the additionally trained model →~
・The image shows the original 80-class detections, plus the『標識』(traffic sign) detected by the additional model at the upper left~
~
- Judging from this result, shifting the class IDs of the「Traffic Signs Dataset」from「0, 1, 2, 3」to「80, 81, 82, 83」in the training above seems pointless~
・The「Traffic Signs Dataset」used for additional training contains no annotation data for the class IDs used by the coco dataset~
・Any mapping can simply be applied to the inference results as needed~
#clear
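The two-pass idea can be sketched against the Torch Hub API. The merge itself is plain logic; the detection lists can come from the standard results.pandas().xyxy[0] table of a YOLOv5 AutoShape model (the commented workflow below is an assumed usage, with the model paths from this article):~
#codeprettify(){{
def merge_detections(det1, det2, class_offset):
    """Merge two detection lists, shifting the second model's class IDs.
    Each detection is a dict containing at least a 'class' key."""
    shifted = [{**d, 'class': d['class'] + class_offset} for d in det2]
    return det1 + shifted

# Assumed workflow with Torch Hub models:
#   model  = torch.hub.load('ultralytics/yolov5', 'yolov5s')
#   model2 = torch.hub.load('ultralytics/yolov5', 'custom',
#                           'runs/train/ts_yolov5s_ep30/weights/best.pt')
#   det1 = model(img).pandas().xyxy[0].to_dict('records')
#   det2 = model2(img).pandas().xyxy[0].to_dict('records')
#   merged = merge_detections(det1, det2, class_offset=len(model.names))
}}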

*** Retraining on the「Traffic Signs Dataset」 [#pe62826d]
+ Create a dataset「ts0_dataset」that leaves the class IDs unchanged~
+ Train for 30 epochs~
・Run from the「workspace_pylearn/yolov5/」directory~
・Use the default pretrained model「yolov5s」~
・Name the result folder「ts0_yolov5s_ep30」~
#codeprettify(){{
(py_learn) python train.py --data data/ts0_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30 --name ts0_yolov5s_ep30
}}
・Without a GPU, run the following command instead~
#codeprettify(){{
(py_learn) python train.py --data data/ts0_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30 --name ts0_yolov5s_ep30 --device cpu
}}
+ End of training~
#codeprettify(){{
    :
Validating runs\train\ts0_yolov5s_ep30\weights\best.pt...
Fusing layers...
Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 5/5 [00:01<0
                   all        150        230      0.924       0.89      0.929      0.683
           prohibitory        150        108       0.96      0.944      0.988      0.793
                danger        150         38      0.973      0.945      0.951      0.691
             mandatory        150         35       0.85      0.812      0.887      0.621
                 other        150         49      0.911      0.857      0.889      0.627
Results saved to runs\train\ts0_yolov5s_ep30

}}
・The trained model weights are in「runs/train/ts0_yolov5s_ep30/weights」and the evaluation metrics in「runs/train/ts0_yolov5s_ep30/」~
|CENTER:F1 curve|CENTER:P curve|CENTER:PR curve|CENTER:R curve|h
|#ref(ts0_F1_curve_m.jpg,left,around,15%,F1_curve_m.jpg)|#ref(ts0_P_curve_m.jpg,left,around,15%,P_curve_m.jpg)|#ref(ts0_PR_curve_m.jpg,left,around,15%,PR_curve_m.jpg)|#ref(ts0_R_curve_m.jpg,left,around,15%,R_curve_m.jpg)|
#ref(ts0_results_m.jpg,left,around,40%,results_m.jpg)
#ref(ts0_confusion_matrix_m.jpg,left,around,25%,confusion_matrix_m.jpg)
#clear
#ref(ts0_labels_correlogram_m.jpg,left,around,15%,labels_correlogram.jpg)
#ref(ts0_labels_m.jpg,left,around,15%,labels.jpg)
#ref(ts0_train_batch0_m.jpg,left,around,10%,train_batch0.jpg)
#ref(ts0_train_batch1_m.jpg,left,around,10%,train_batch1.jpg)
#ref(ts0_train_batch2_m.jpg,left,around,10%,train_batch2.jpg)
#clear
#ref(ts0_val_batch0_labels_m.jpg,left,around,10%,val_batch0_labels.jpg)
#ref(ts0_val_batch0_pred_m.jpg,left,around,10%,val_batch0_pred.jpg)
#ref(ts0_val_batch1_labels_m.jpg,left,around,10%,val_batch1_labels.jpg)
#ref(ts0_val_batch1_pred_m.jpg,left,around,10%,val_batch1_pred.jpg)
#ref(ts0_val_batch2_labels_m.jpg,left,around,10%,val_batch2_labels.jpg)
#ref(ts0_val_batch2_pred_m.jpg,left,around,10%,val_batch2_pred.jpg)
#clear

*** YOLO V5 object detection program「detect3_yolov5.py」that accepts up to two trained models [#k79eb6a8]
#ref(d3_00001_s.jpg,right,around,30%,d3_00001_s.jpg)
#ref(d3_00001_ucr_s.jpg,right,around,30%,d3_00001_ucr_s.jpg)
- Program specifications~
・''An improved version of「detect2_yolov5.py」above; existing commands work unchanged''~
・Runs both online and offline (local)~
・Displays detected objects with bounding boxes and labels~
・When a second model is specified, its class IDs are automatically shifted to start after the first model's last class ID~
 First model (0-79), second model (0-3) → (80-83)~
 If the second model's classes include all of the first model's (0-83), the class IDs are used as-is~
・Labels can be shown in Japanese or English~
・Objects are color-coded by class (the original Ultralytics colors can also be selected)~
・Input sources: web camera (0-9), video file, or still image file~
・Results can be saved as image output~
~
- Prepare the merged label files directly under the「yolov5」directory~
・「ts_names_jp」Japanese~
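The automatic ID shift described above can be decided by checking whether the second model's class table contains every (ID, name) pair of the first model's. A minimal sketch of that decision, using {id: name} dictionaries as in YOLOv5's model.names:~
#codeprettify(){{
def class_offset(names1, names2):
    """Offset to add to the second model's class IDs.
    names1/names2 are {id: name} dicts like YOLOv5's model.names."""
    if all(item in names2.items() for item in names1.items()):
        return 0                # second model already covers the first's classes
    return len(names1)          # e.g. 80 classes -> second model starts at 80
}}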
#codeprettify(){{
人                  ← coco data            ID: 0
自転車
車
    :
ヘアドライヤー
歯ブラシ            ← coco data            ID:79
標識-禁止           ← Traffic Signs data   ID:80 (0)
標識-危険
標識-必須
標識-その他         ← Traffic Signs data   ID:83 (3)
}}
・「ts_names」English~
#codeprettify(){{
person              ← coco data            ID: 0
bicycle
car
    :
toothbrush          ← coco data            ID:79
prohibitory         ← Traffic Signs data   ID:80 (0)
danger
mandatory
other               ← Traffic Signs data   ID:83 (3)
}}
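The merged label files above are simply the 80 coco names followed by the 4 traffic-sign names. A minimal sketch that builds them (the file names follow this article and are assumed to be in the current yolov5 directory):~
#codeprettify(){{
def merge_label_files(base_path, extra_names, out_path):
    """Append extra class names to a copy of a base label file, one per line."""
    with open(base_path, encoding='utf-8') as f:
        names = f.read().splitlines()
    names += extra_names
    with open(out_path, 'w', encoding='utf-8') as f:
        f.write('\n'.join(names) + '\n')

# merge_label_files('coco.names', ['prohibitory', 'danger', 'mandatory', 'other'], 'ts_names')
# merge_label_files('coco.names_jp', ['標識-禁止', '標識-危険', '標識-必須', '標識-その他'], 'ts_names_jp')
}}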
~
- Command parameters~
|LEFT:|CENTER:|LEFT:|c
|CENTER:Command option|Default|CENTER:Meaning|h
|BGCOLOR(lightyellow): -i , --image|BGCOLOR(lightyellow):'../../Videos/car_m.mp4'|BGCOLOR(lightyellow):Input source path, or camera (cam/cam0-cam9)|
|BGCOLOR(lightyellow): -y , --yolov5|BGCOLOR(lightyellow):'ultralytics/yolov5'|BGCOLOR(lightyellow):yolov5 directory path (path to a local yolov5 when offline)|
|BGCOLOR(lightyellow): -m , --models|BGCOLOR(lightyellow):'yolov5s'|BGCOLOR(lightyellow):Model name (model file path when offline) ※1|
|BGCOLOR(lightyellow): -ms , --models2|BGCOLOR(lightyellow):''|BGCOLOR(lightyellow):Second model name (model file path) to run|
|BGCOLOR(lightyellow): -l , --labels|BGCOLOR(lightyellow):'coco.names_jp'|BGCOLOR(lightyellow):Label file path (coco.names, coco.names_jp)|
| -c , --conf|0.25|Confidence threshold for object detection|
| -t , --title|'y'|Show program title (y/n)|
| -s , --speed|'y'|Show speed (y/n)|
| -o , --out|'non'|Save path for output <path/filename> ※2|
| -cpu|-|CPU flag (forces CPU operation if specified)|
| --log|3|Log output level (0/1/2/3/4/5)|
|BGCOLOR(lightyellow): --ucr|BGCOLOR(lightyellow):-|BGCOLOR(lightyellow):Color flag (Ultralytics colors if specified)|
※1 Model names selectable when running online:「yolov5n」「yolov5s」「yolov5m」「yolov5l」「yolov5x」~
※2 The directory path to the output file must already exist (nothing is saved otherwise)~
~
・Examples for the「-y , --yolov5」parameter~
#codeprettify(){{
-y ultralytics/yolov5                                       ← online (Torch Hub) <default>
-y ./                                                       ← offline (local)
}}
 ※ On the first run the repository is downloaded to the cache; later runs use the cache~
~
・Examples for the「-m , --models」parameter~
#codeprettify(){{
-m yolov5s                                                  ← online (Torch Hub) <default>
-m ./test/yolov5s.pt                                        ← offline (local)
}}
 ※ If the model is not at the specified location, it is downloaded automatically on first run~
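These online/offline cases map onto torch.hub.load as shown below. The helper only builds the call arguments (a sketch mirroring the four call forms; treating any name containing '.' or '/' as a custom weight file is an assumption):~
#codeprettify(){{
def hub_load_args(yolov5, models):
    """Build (args, kwargs) for torch.hub.load for the four cases used here:
    online/offline x hub-model-name/custom-weight-file."""
    custom = '.' in models or '/' in models     # a path -> custom weights
    args = (yolov5, 'custom', models) if custom else (yolov5, models)
    kwargs = {} if yolov5 == 'ultralytics/yolov5' else {'source': 'local'}
    return args, kwargs

# model = torch.hub.load(*args, **kwargs)
# hub_load_args('ultralytics/yolov5', 'yolov5s')
#   -> (('ultralytics/yolov5', 'yolov5s'), {})
# hub_load_args('./', 'runs/train/ts0_yolov5s_ep30/weights/best.pt')
#   -> (('./', 'custom', 'runs/train/ts0_yolov5s_ep30/weights/best.pt'), {'source': 'local'})
}}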
~
#divregion( Command parameter details)
#codeprettify(){{
(py_learn) python detect3_yolov5.py -h
usage: detect3_yolov5.py [-h] [-i IMAGE_FILE] [-y YOLOV5] [-m MODELS] [-ms MODELS2] [-c CONFIDENCE] [-l LABELS]
                         [-t TITLE] [-s SPEED] [-o IMAGE_OUT] [-cpu] [--log LOG] [--ucr]

options:
  -h, --help            show this help message and exit
  -i IMAGE_FILE, --image IMAGE_FILE
                        Absolute path to image file or cam/cam0/cam1 for camera stream.
  -y YOLOV5, --yolov5 YOLOV5
                        YOLO V5 directory absolute path.
  -m MODELS, --models MODELS
                        yolov5n/yolov5m/yolov5l/yolov5x or model file absolute path.
  -ms MODELS2, --models2 MODELS2
                        second model file absolute path.
  -c CONFIDENCE, --conf CONFIDENCE
                        confidence threshold. Default value is 0.25
  -l LABELS, --labels LABELS
                        Label file path. Default value is 'coco.names_jp'
  -t TITLE, --title TITLE
                        Program title flag.(y/n) Default value is 'y'
  -s SPEED, --speed SPEED
                        Speed display flag.(y/n) Default value is 'y'
  -o IMAGE_OUT, --out IMAGE_OUT
                        Processed image file path. Default value is 'non'
  -cpu                  Optional. CPU only!
  --log LOG             Log level(-1/0/1/2/3/4/5) Default value is '3'
  --ucr                 use Ultralytics color
}}
#enddivregion
~
- Online execution example~
#codeprettify(){{
(py_learn) python detect3_yolov5.py -m yolov5s.pt -ms runs/train/ts0_yolov5s_ep30/weights/best.pt -i data/images/drive003_s.mp4 -l ts_names_jp
}}
・Execution result~
#ref(drive003_ts0_s.gif,right,around,100%,drive003_ts0_s.gif)
#codeprettify(){{
(py_learn) python detect3_yolov5.py -m yolov5s.pt -ms runs/train/ts0_yolov5s_ep30/weights/best.pt -i data/images/drive003_s.mp4 -l ts_names_jp
 Starting..

Object detection YoloV5 in PyTorch Ver. 0.06: Starting application...
   OpenCV virsion : 4.9.0

   - Image File   :  data/images/drive003_s.mp4
   - YOLO v5      :  ultralytics/yolov5
   - Pretrained   :  yolov5s.pt
   - Pretrained 2 :  runs/train/ts0_yolov5s_ep30/weights/best.pt
   - Confidence lv:  0.25
   - Label file   :  ts_names_jp
   - Program Title:  y
   - Speed flag   :  y
   - Processed out:  non
   - Use device   :  cuda:0
   - Log Level    :  3

Using cache found in C:\Users\<USER>/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5  2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
Adding AutoShape...
Using cache found in C:\Users\<USER>/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5  2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
Adding AutoShape...

FPS average:      22.00

 Finished.
}}
#clear
- Offline execution example~
#codeprettify(){{
(py_learn) python detect3_yolov5.py -m yolov5s.pt -ms runs/train/ts0_yolov5s_ep30/weights/best.pt -i data/images/drive003_s.mp4 -l ts_names_jp -y ./
}}
・Execution result~
#codeprettify(){{
(py_learn) python detect3_yolov5.py -m yolov5s.pt -ms runs/train/ts0_yolov5s_ep30/weights/best.pt -i data/images/drive003_s.mp4 -l ts_names_jp -y ./
 Starting..

Object detection YoloV5 in PyTorch Ver. 0.06: Starting application...
   OpenCV virsion : 4.9.0

   - Image File   :  data/images/drive003_s.mp4
   - YOLO v5      :  ./
   - Pretrained   :  yolov5s.pt
   - Pretrained 2 :  runs/train/ts0_yolov5s_ep30/weights/best.pt
   - Confidence lv:  0.25
   - Label file   :  ts_names_jp
   - Program Title:  y
   - Speed flag   :  y
   - Processed out:  non
   - Use device   :  cuda:0
   - Log Level    :  3

YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs
Adding AutoShape...
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
Adding AutoShape...

FPS average:      22.60

 Finished.
}}

- Trying recognition on dash-cam footage~
~
&tinyvideo(https://izutsu.aa0.netvolante.jp/video/ai_result/drive001_ts0_s.mp4,400 225,controls,loop,muted,autoplay);
&tinyvideo(https://izutsu.aa0.netvolante.jp/video/ai_result/drive005_ts0_s.mp4,400 225,controls,loop,muted,autoplay);~
Both coco-dataset objects and traffic signs are detected.~
#clear

- Source code~
#divregion(「detect3_yolov5.py」)
#codeprettify(){{
# -*- coding: utf-8 -*-
##------------------------------------------
## [Review] 「PyTorch で始める AI開発」
##   Chapter 04 / Extra edition     Ver. 0.6
##       Object detection with YoloV5 in PyTorch
##
##               2024.09.13 Masahiro Izutsu
##------------------------------------------
## detect3_yolov5.py    (improved from detect2_yolov5.py)
##  Ver. 0.03   2024/04/09  supports class IDs up to 119
##  Ver. 0.04   2024/04/13  cloud/local switching
##  Ver. 0.05   2024/04/15  confidence threshold setting / camera input (cam0-cam9)
##  Ver. 0.06   2024/04/15  program renamed; second model support / log output

# -y <YOLOv5>                                   -m <Pretrained model>
#    'ultralytics/yolov5'                          'yolov5s' [yolov5n][yolov5m][yolov5l][yolov5x]      Torch Hub on line
#    '/anaconda_win/workspace_pylearn/yolov5'      '/anaconda_win/workspace_pylearn/yolov5/yolov5s'              off line
#
# Example: Windows
#       python detect3_yolov5.py                (Torch Hub on line)
#       python detect3_yolov5.py -y '/anaconda_win/workspace_pylearn/yolov5' -m '/anaconda_win/workspace_pylearn/yolov5/yolov5s'
#
# Example: Linux
#       python detect3_yolov5.py                (Torch Hub on line)
#       python detect3_yolov5.py -y '~/workspace_pylearn/yolov5' -m '~/workspace_pylearn/yolov5/yolov5s'

# Color Escape Code
GREEN = '\033[1;32m'
RED = '\033[1;31m'
NOCOLOR = '\033[0m'
YELLOW = '\033[1;33m'

# Constant definitions
WINDOW_WIDTH = 640

from os.path import expanduser
INPUT_DEF = expanduser('../../Videos/car_m.mp4')
LANG_DEF = 'coco.names_jp'                                    # 2024/04/09

# imports
import sys
import cv2
import numpy as np
import argparse
import torch
from torch import nn
from torchvision import transforms, models
from PIL import Image
import platform

import my_puttext                                           # my library 2024.03.13
import my_fps                                               # my library 2024.03.13
import my_color80                                           # my library 2024.03.13
import my_logging

from ultralytics.utils.plotting import colors

TEXT_COLOR = my_color80.CR_white

# Title
title = 'Object detection YoloV5 in PyTorch Ver. 0.06'

# Parses arguments for the application
def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', '--image', metavar = 'IMAGE_FILE', type=str,
            default = INPUT_DEF,
            help = 'Absolute path to image file or cam/cam0/cam1 for camera stream.')
    parser.add_argument('-y', '--yolov5', metavar = 'YOLOV5', type=str,
            default = 'ultralytics/yolov5',
            help = 'YOLO V5 directory absolute path.')
    parser.add_argument('-m', '--models', metavar = 'MODELS', type=str,
            default = 'yolov5s',
            help = 'yolov5n/yolov5m/yolov5l/yolov5x or model file absolute path.')
    parser.add_argument('-ms', '--models2', metavar = 'MODELS2', type=str,
            default = '',
            help = 'second model file absolute path.')
    parser.add_argument('-c', '--conf', metavar = 'CONFIDENCE',
            default = 0.25,                                 # 2024/04/14
            help = 'confidence threshold. Default value is 0.25')
    parser.add_argument('-l', '--labels', metavar = 'LABELS',
            default = LANG_DEF,                             # 2024/04/09
            help = 'Label file path. Default value is \'coco.names_jp\'')
    parser.add_argument('-t', '--title', metavar = 'TITLE',
            default = 'y',
            help = 'Program title flag.(y/n) Default value is \'y\'')
    parser.add_argument('-s', '--speed', metavar = 'SPEED',
            default = 'y',
            help = 'Speed display flag.(y/n) Default value is \'y\'')
    parser.add_argument('-o', '--out', metavar = 'IMAGE_OUT',
            default = 'non',
            help = 'Processed image file path. Default value is \'non\'')
    parser.add_argument("-cpu", default = False, action = 'store_true',
            help="Optional. CPU only!")
    parser.add_argument('--log', metavar = 'LOG', default = '3',
            help = 'Log level(-1/0/1/2/3/4/5) Default value is \'3\'')
    parser.add_argument("--ucr", default=False, action="store_true",
            help="use Ultralytics color")
    return parser

# Display basic model information
def display_info(args, image, yolov5, models, models2, conf, labels, titleflg, speedflg, outpath, use_device, log):
    print('\n' + GREEN + title + ': Starting application...' + NOCOLOR)
    print('   OpenCV version :',cv2.__version__)
    print('\n   - ' + YELLOW + 'Image File   : ' + NOCOLOR, image)
    print('   - ' + YELLOW + 'YOLO v5      : ' + NOCOLOR, yolov5)
    print('   - ' + YELLOW + 'Pretrained   : ' + NOCOLOR, models)
    if models2 != '':
        print('   - ' + YELLOW + 'Pretrained 2 : ' + NOCOLOR, models2)
    print('   - ' + YELLOW + 'Confidence lv: ' + NOCOLOR, conf)
    print('   - ' + YELLOW + 'Label file   : ' + NOCOLOR, labels)
    print('   - ' + YELLOW + 'Program Title: ' + NOCOLOR, titleflg)
    print('   - ' + YELLOW + 'Speed flag   : ' + NOCOLOR, speedflg)
    print('   - ' + YELLOW + 'Processed out: ' + NOCOLOR, outpath)
    print('   - ' + YELLOW + 'Use device   : ' + NOCOLOR, use_device)
    if args.ucr:
        print('   - ' + YELLOW + 'Class color  : ' + NOCOLOR, 'Ultralytics')
    print('   - ' + YELLOW + 'Log Level    : ' + NOCOLOR, log, '\n')

# Determine the image type
#   return: '.jpg', '.png', ... image file extension
#           'None'        not an image file (video file)
#           'NotFound'    file does not exist
import os
def is_pict(filename):
    '''
    try:
        imgtype = imghdr.what(filename)
    except FileNotFoundError as e:
        imgtype = 'NotFound'
    return str(imgtype)
    '''
    if not os.path.isfile(filename):
        return 'NotFound'

    types = ['.bmp','.png','.jpg','.jpeg','.JPG','.tif']
    for ss in types:
        if filename.endswith(ss):
            return ss
    return 'None'

# Load a model from Torch Hub (cloud/local switching)    2024/04/15
def load_model(yolov5, models):
    cust = 'custom' if 0 < models.find('yolo') else ''
    if yolov5 == 'ultralytics/yolov5':
        if cust == '':
            if -1 == models.find('.'):
                model = torch.hub.load(yolov5, models)
            else:
                model = torch.hub.load(yolov5, 'custom', models)
        else:
            model = torch.hub.load(yolov5, cust, models)
    else:
        if cust == '':
            if -1 == models.find('.'):
                model = torch.hub.load(yolov5, models, source='local')
            else:
                model = torch.hub.load(yolov5, 'custom', models, source='local')
        else:
            model = torch.hub.load(yolov5, cust, models, source='local')

    return model


# ** main関数 **
def main():

    # Japanese font selection
    fontPIL = my_puttext.get_font()                         # 2024.03.13

    # Argument parsing and parameter setting
    args = parse_args().parse_args()
    input_stream = args.image
    labels = args.labels                                    # 2024/04/09
    titleflg = args.title
    speedflg = args.speed

    # Application logging setup
    module = os.path.basename(__file__)
    module_name = os.path.splitext(module)[0]
    logger = my_logging.get_module_logger_sel(module_name, int(args.log))
    logger.info(' Starting..')

    # Input: supports cam/cam0-cam9                        # 2024/04/15
    if input_stream.find('cam') == 0 and len(input_stream) < 5:
        input_stream = 0 if input_stream == 'cam' else int(input_stream[3])
        isstream = True
    else:
        filetype = is_pict(input_stream)
        isstream = filetype == 'None'
        if (filetype == 'NotFound'):
            print(RED + "\ninput file Not found." + NOCOLOR)
            quit()
    outpath = args.out
    conf = args.conf
    yolov5 = args.yolov5 if platform.system()=='Windows' else expanduser(args.yolov5)
    models = args.models if platform.system()=='Windows' else expanduser(args.models)
    models2 = args.models2 if platform.system()=='Windows' else expanduser(args.models2)
    
    # 判定ラベル
    with open(labels, 'r', encoding="utf-8") as labels_file:
        label_list = labels_file.read().splitlines()

    # GPUが使用できるか調べる
    use_device = 'cuda:0' if not args.cpu and torch.cuda.is_available() else 'cpu'

    # 情報表示
    display_info(args, input_stream, yolov5, models, models2, conf, labels, titleflg, speedflg, outpath, use_device, args.log)

    # TorchHubからモデルを読み込む
    model = load_model(yolov5, models)

    # モデルを推論用に設定する
    model.eval()
    model.to(use_device)

    class_num = len(model.names)
    class_ofs = 0

    # 2つ目のモデル指定がある場合
    model2 = None
    if models2 != '':
        model2 = load_model(yolov5, models2)
        model2.eval()
        model2.to(use_device)
        class_ofs = class_num

        # 2番目のモデルのクラスID が1番目のクラスにすべて含まれているかチェック
        key = list(model.names)
        val = list(model.names.values())
        eq = True
        for i in range(len(key)):
            if (key[i], val[i]) not in model2.names.items():
                eq = False
        if eq:
            class_ofs = 0               # ID はすべて含んでいる

    # 入力準備
    if (isstream):
        # カメラ 
        cap = cv2.VideoCapture(input_stream)
        ret, frame = cap.read()
        loopflg = cap.isOpened()
    else:
        # 画像ファイル読み込み
        frame = cv2.imread(input_stream)
        if frame is None:
            print(RED + "\nUnable to read the input." + NOCOLOR)
            quit()

        # アスペクト比を固定してリサイズ
        img_h, img_w = frame.shape[:2]
        if (img_w > WINDOW_WIDTH):
            height = round(img_h * (WINDOW_WIDTH / img_w))
            frame = cv2.resize(frame, dsize = (WINDOW_WIDTH, height))
        loopflg = True                                      # 1回ループ

    # 処理結果の記録 step1
    if (outpath != 'non'):
        if (isstream):
            fps = int(cap.get(cv2.CAP_PROP_FPS))
            out_w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
            out_h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
            fourcc = cv2.VideoWriter_fourcc('m', 'p', '4', 'v')
            outvideo = cv2.VideoWriter(outpath, fourcc, fps, (out_w, out_h))

    # 計測値初期化
    fpsWithTick = my_fps.fpsWithTick()
    fps_total = 0
    fpsWithTick.get()                                       # fps計測開始

    # メインループ 
    while (loopflg):
        if frame is None:
            print(RED + "\nUnable to read the input." + NOCOLOR)
            quit()

        # ニューラルネットワークを実行する
        results = model(frame, size=640)
        message = []                                        # 表示メッセージ
        bbox = results.xyxy[0].detach().cpu().numpy()
        if models2 != '':                                   # 2つ目のモデル指定がある場合
            results2 = model2(frame, size=640)
            bbox2 = results2.xyxy[0].detach().cpu().numpy()
            for i in range(len(bbox2)):
                bbox2[i][5] += class_ofs                    # クラスID の調整
            bbox = np.append(bbox, bbox2, axis=0)           # 結果を追加する

        logger.debug(bbox)
        for preds in bbox:
            xmin = int(preds[0])
            ymin = int(preds[1])
            xmax = int(preds[2])
            ymax = int(preds[3])
            confidence  = preds[4]
            class_id  = int(preds[5])
            color_id = class_id if class_id < 80 else class_id - 40 # 2024/04/09
            
            if (confidence > conf):                         # 低い確率を除外
                # オブジェクト別の色指定                    # 2024/04/16
                if args.ucr:
                    BOX_COLOR = colors(color_id, True)
                    LABEL_BG_COLOR = BOX_COLOR
                else:
                    BOX_COLOR = my_color80.get_boder_bgr80(color_id)
                    LABEL_BG_COLOR = my_color80.get_back_bgr80(color_id)

                # ラベル描画領域を得る
                x0,y0,x1,y1 = my_puttext.cv2_putText(img = frame,
                                       text = label_list[class_id] + ': %.2f' % confidence,
                                       org = (xmin+5, ymin+18), fontFace = fontPIL,
                                       fontScale = 14,
                                       color = TEXT_COLOR,
                                       mode = 0,
                                       areaf=True)
                xx = xmax if xmax > x1 else x1              # 横が領域を超える場合は超えた値にする
                cv2.rectangle(frame,(xmin, ymin), (xx, ymin+20), LABEL_BG_COLOR, -1)
                my_puttext.cv2_putText(img = frame,
                                       text = label_list[class_id] + ': %.2f' % confidence,
                                       org = (xmin+5, ymin+18), fontFace = fontPIL,
                                       fontScale = 14,
                                       color = TEXT_COLOR,
                                       mode = 0)
                # 画像に枠を描く
                cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), BOX_COLOR, 2)

        # FPSを計算する
        fps = fpsWithTick.get()
        st_fps = 'fps: {:>6.2f}'.format(fps)
        if (speedflg == 'y'):
            cv2.rectangle(frame, (10, 38), (95, 55), (90, 90, 90), -1)
            cv2.putText(frame, st_fps, (15, 50), cv2.FONT_HERSHEY_DUPLEX, fontScale=0.4, color=(255, 255, 255), lineType=cv2.LINE_AA)

        # タイトル描画
        if (titleflg == 'y'):
            cv2.putText(frame, title, (12, 32), cv2.FONT_HERSHEY_DUPLEX, fontScale=0.8, color=(0, 0, 0), lineType=cv2.LINE_AA)
            cv2.putText(frame, title, (10, 30), cv2.FONT_HERSHEY_DUPLEX, fontScale=0.8, color=(200, 200, 0), lineType=cv2.LINE_AA)

        # 画像表示 
        window_name = title + "  (hit 'q' or 'esc' key to exit)"
        cv2.namedWindow(window_name, flags=cv2.WINDOW_AUTOSIZE | cv2.WINDOW_GUI_NORMAL) 
        cv2.imshow(window_name, frame)

        # 処理結果の記録 step2
        if (outpath != 'non'):
            if (isstream):
                outvideo.write(frame)
            else:
                cv2.imwrite(outpath, frame)

        # 何らかのキーが押されたら終了 
        breakflg = False
        while(True):
            key = cv2.waitKey(1)
            prop_val = cv2.getWindowProperty(window_name, cv2.WND_PROP_ASPECT_RATIO)
            if cv2.getWindowProperty(window_name, cv2.WND_PROP_VISIBLE) < 1:        
                print('\n Window close !!')
                sys.exit(0)
            if key == 27 or key == 113 or (prop_val < 0.0):     # 'esc' or 'q'
                breakflg = True
                break
            if (isstream):
                break

        if ((breakflg == False) and isstream):
            # 次のフレームを読み出す
            ret, frame = cap.read()
            if ret == False:
                break
            loopflg = cap.isOpened()
        else:
            loopflg = False

    # 終了処理 
    if (isstream):
        cap.release()

        # 処理結果の記録 step3
        if (outpath != 'non'):
            if (isstream):
                outvideo.release()

    cv2.destroyAllWindows()

    print('\nFPS average: {:>10.2f}'.format(fpsWithTick.get_average()))
    print('\n Finished.')

# main関数エントリーポイント(実行開始)
if __name__ == "__main__":
    sys.exit(main())
}}
#enddivregion
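上記プログラムの「load_model()」の分岐(クラウド/ローカル、標準モデル名/カスタム重みファイル)は、torch.hub.load に渡す引数の組み立てとして整理できる。以下は torch を使わずに引数の組み立てだけを確認する動作イメージ(関数名 hub_load_args は説明用の仮のもの。実際は返り値を torch.hub.load(*args, **kwargs) に渡す)。

```python
# load_model() の分岐を「torch.hub.load に渡す引数の組み立て」として整理した説明用スケッチ
def hub_load_args(yolov5, models):
    """torch.hub.load に渡す (args, kwargs) を返す(説明用の仮関数)"""
    # リポジトリ指定が 'ultralytics/yolov5' ならクラウド、それ以外はローカルから読み込む
    kwargs = {} if yolov5 == 'ultralytics/yolov5' else {'source': 'local'}
    # 'yolov5s' のような標準モデル名はそのままエントリ名として渡す。
    # パス途中に 'yolo' を含む場合や '.' を含む重みファイル指定は 'custom' エントリ経由
    if models.find('yolo') > 0 or models.find('.') != -1:
        args = (yolov5, 'custom', models)
    else:
        args = (yolov5, models)
    return args, kwargs

# 使用例
print(hub_load_args('ultralytics/yolov5', 'yolov5s'))
# → (('ultralytics/yolov5', 'yolov5s'), {})
print(hub_load_args('~/yolov5', 'runs/train/mask_yolov5s/weights/best.pt'))
# → (('~/yolov5', 'custom', 'runs/train/mask_yolov5s/weights/best.pt'), {'source': 'local'})
```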

#br

** オープンデータセットによる学習「マスク着用の判定」 [#acbbda39]
- ''引用サイト'' → [[Face Mask Detection>https://github.com/spacewalk01/yolov5-face-mask-detection?tab=readme-ov-file]]~

- PascalVOC形式のオープンデータセット [[Face Mask Detection Dataset>+https://www.kaggle.com/datasets/andrewmvd/face-mask-detection]] を使用する~
[[Mask Wearing Dataset>+https://public.roboflow.com/object-detection/mask-wearing]] から YOLO形式のデータセットを入手可能だが、データ数が少ないのと、以前使用したので、今回は違うものを使ってみる~

*** 前準備 [#ca615340]
#ref(yolov5_train03_m.jpg,right,around,15%,yolov5_train03_m.jpg)
+ ''データセットと変換ツールのダウンロード''~
(1) [[Face Mask Detection Dataset>+https://www.kaggle.com/datasets/andrewmvd/face-mask-detection]] のページを開く~
(2)「Download」ボタンを押す~
 Kaggleへ登録するか,Google アカウントなどでサインインする~
(3) ダウンロードした「archive.zip」を解凍する~
~
(4)「workspace_pylearn」の下に「p2y_conv」フォルダを作成する~
#codeprettify(){{
(py_learn) PS > cd /anaconda_win/workspace_pylearn/
(py_learn) PS > mkdir p2y_conv
(py_learn) PS > cd p2y_conv
}}
(5) [[P2Y Converter>+https://github.com/rihib/p2y-converter]] サイトから「main.py」をダウンロードし、「p2y_conv」ディレクトリ内に配置する~
(6) (3)で解凍してできた「archive/」フォルダ内の「annotations」「images」を「p2y_conv」ディレクトリ内にコピー(移動)する~
#clear
~
+ ''PascalVOC形式のアノテーションデータ(.xml)を YOLO形式(.txt)にフォーマット変換''~
・ 以下の作業は「p2y_conv/」ディレクトリ内でおこなう~
(1)「main.py」を「p2y.py」としてコピーし、環境に対応した変更を加える~
#codeprettify(){{
p2y.py(main.py)
    :
PPATH = '/anaconda_win/workspace_pylearn/p2y_conv/'
absolutepath_of_directory_with_xmlfiles = PPATH + 'annotations/'
absolutepath_of_directory_with_imgfiles = PPATH + 'images/'
absolutepath_of_directory_with_yolofiles = PPATH + 'format_yolo/'
absolutepath_of_directory_with_classes_txt = PPATH
absolutepath_of_directory_with_error_txt = PPATH + 'error/'
    :
}}
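変換ツールが行っている処理の中心は、PascalVOC の bndbox(xmin, ymin, xmax, ymax の絶対座標)を YOLO 形式(class cx cy w h の画像サイズで正規化した相対座標)に直すことである。以下は標準ライブラリのみで 1 ファイル分を変換する最小スケッチ(p2y.py 本体とは独立した説明用。クラス一覧もこのデータセットに合わせた仮置き)。

```python
import xml.etree.ElementTree as ET

# 説明用のクラス一覧(実際は classes.txt などから読み込む想定)
CLASSES = ['without_mask', 'with_mask', 'mask_weared_incorrect']

def voc_to_yolo(xml_text):
    """PascalVOC の XML 文字列を YOLO 形式の行リストに変換する"""
    root = ET.fromstring(xml_text)
    w = int(root.find('size/width').text)
    h = int(root.find('size/height').text)
    lines = []
    for obj in root.iter('object'):
        cls = CLASSES.index(obj.find('name').text)
        b = obj.find('bndbox')
        xmin, ymin, xmax, ymax = (float(b.find(k).text)
                                  for k in ('xmin', 'ymin', 'xmax', 'ymax'))
        # 中心座標と幅・高さを画像サイズで正規化 (0〜1)
        cx, cy = (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h
        bw, bh = (xmax - xmin) / w, (ymax - ymin) / h
        lines.append(f'{cls} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}')
    return lines

xml = '''<annotation><size><width>400</width><height>300</height></size>
<object><name>with_mask</name>
<bndbox><xmin>100</xmin><ymin>60</ymin><xmax>300</xmax><ymax>240</ymax></bndbox>
</object></annotation>'''
print(voc_to_yolo(xml))   # ['1 0.500000 0.500000 0.500000 0.600000']
```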
(2) 作業用のディレクトリを作成する~
#codeprettify(){{
mkdir format_yolo
mkdir error
}}
#codeprettify(){{
p2y_conv/
├─annotations
├─error
├─format_yolo
├─images
├─main.py
└─p2y.py
}}
(3)「lxml」ライブラリをインストールする~
#codeprettify(){{
(py_learn) PS > pip install lxml
Collecting lxml
  Downloading lxml-5.2.1-cp311-cp311-win_amd64.whl.metadata (3.5 kB)
Downloading lxml-5.2.1-cp311-cp311-win_amd64.whl (3.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.8/3.8 MB 11.1 MB/s eta 0:00:00
Installing collected packages: lxml
Successfully installed lxml-5.2.1
}}
(4) フォーマット変換を実行する~
#codeprettify(){{
(py_learn) python p2y.py
libpng warning: iCCP: Not recognizing known sRGB profile that has been edited
}}
+ ''データセットのまとめ''~
(1) データセットのディレクトリを作成する~
・以下のコマンドを実行~
#codeprettify(){{
mkdir -p mask_dataset/images/train
mkdir -p mask_dataset/images/val
mkdir -p mask_dataset/labels/train
mkdir -p mask_dataset/labels/val
}}
・作成結果~
#codeprettify(){{
(py_learn) PS > tree
p2y_conv/
├─annotations
├─error
├─format_yolo
├─images
└─mask_dataset
    ├─images
    │  ├─train
    │  └─val
    └─labels
        ├─train
        └─val
}}
(2)「mask_dataset」ディレクトリに画像ファイルとラベルファイルを配置する~
学習用(train):検証用(val) がおよそ 8:2 になるようにする~
・検証用はファイル名の末尾が 1 (86個), 5 (85個) のもの計171個を移動~
#codeprettify(){{
move format_yolo/*1.txt mask_dataset/labels/val
move format_yolo/*5.txt mask_dataset/labels/val
move images/*1.png mask_dataset/images/val
move images/*5.png mask_dataset/images/val
}}
・学習用は残りのファイルを移動(853-171 = 682個)~
#codeprettify(){{
move format_yolo/*.txt mask_dataset/labels/train
move images/*.png mask_dataset/images/train
}}
(3)「mask_dataset/」フォルダ内にファイル mask.yaml を作成~
・データセットの配置場所は「yolov5/data/」フォルダとする~
#codeprettify(){{
train: data/mask_dataset/images/train
val: data/mask_dataset/images/val
nc: 4
names:
  0: without_mask
  1: with_mask
  2: mask_weared_incorrect
  3: motorcycle
}}
(4)「mask_dataset/」フォルダを「yolov5/data/」内にコピー(移動)する~
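データセット定義の yaml は、nc(クラス数)と names の項目数が一致し、train / val のパスがそろっている必要がある(不一致だと学習開始時にエラーになりやすい)。配置前の簡単なセルフチェックの例(関数名 check_dataset_cfg は説明用の仮のもので、yaml を読み込んだ後の辞書を渡す想定)。

```python
def check_dataset_cfg(cfg):
    """データセット yaml の内容(辞書)の整合性を簡易チェックし、問題点のリストを返す"""
    problems = []
    if cfg.get('nc') != len(cfg.get('names', {})):
        problems.append(f"nc={cfg.get('nc')} と names の個数 {len(cfg.get('names', {}))} が不一致")
    for key in ('train', 'val'):
        if key not in cfg:
            problems.append(f'{key} のパス指定がない')
    return problems

# 使用例: names が 4 項目なのに nc が 3 のままだと不一致として検出される
cfg = {'train': 'data/mask_dataset/images/train',
       'val': 'data/mask_dataset/images/val',
       'nc': 3,
       'names': {0: 'without_mask', 1: 'with_mask',
                 2: 'mask_weared_incorrect', 3: 'motorcycle'}}
print(check_dataset_cfg(cfg))
```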


*** 学習の実行 [#ycdc6edb]
+ エポック数 100 で実行する~
・実行ディレクトリは「workspace_pylearn/yolov5/」~
・学習モデルはデフォルト設定の「yolov5s」を使用する~
・学習結果フォルダ名は「mask_yolov5s」とする~
#codeprettify(){{
(py_learn) python train.py --epochs 100 --data data/mask_dataset/mask.yaml --weights yolov5s.pt --name mask_yolov5s
}}
・GPU を使用しない場合は以下のコマンドを実行する~
#codeprettify(){{
(py_learn) python train.py --epochs 100 --data data/mask_dataset/mask.yaml --weights yolov5s.pt --name mask_yolov5s --device cpu
}}
+ 学習の終了~
#codeprettify(){{
(py_learn) python train.py --epochs 100 --data data/mask_dataset/mask.yaml --weights yolov5s.pt --name mask_yolov5s
train: weights=yolov5s.pt, cfg=, data=data/mask_dataset/mask.yaml, hyp=data\hyps\hyp.scratch-low.yaml, epochs=100, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data\hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train, name=mask_yolov5s, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False
github:  YOLOv5 is out of date by 3 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update.
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5  runs in Comet
TensorBoard: Start with 'tensorboard --logdir runs\train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=4

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
    :
Starting training for 100 epochs...

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       0/99      3.35G     0.1051    0.06055    0.03577         79        640: 100%|██████████| 43/43 [00:04<00:00,  9.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778       0.69     0.0704     0.0216    0.00536
    :
      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      99/99      4.41G    0.01885    0.01993   0.001163         75        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.848      0.814       0.86       0.59

100 epochs completed in 0.143 hours.
Optimizer stripped from runs\train\mask_yolov5s\weights\last.pt, 14.4MB
Optimizer stripped from runs\train\mask_yolov5s\weights\best.pt, 14.4MB

Validating runs\train\mask_yolov5s\weights\best.pt...
Fusing layers...
Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.859      0.849      0.888      0.609
          without_mask        171        122      0.773      0.885      0.899       0.59
             with_mask        171        630      0.932      0.931      0.965      0.684
 mask_weared_incorrect        171         26      0.872      0.731      0.801      0.554
Results saved to runs\train\mask_yolov5s
}}
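結果表の mAP50 は IoU しきい値 0.5 での平均適合率、mAP50-95 は IoU しきい値を 0.5〜0.95 まで 0.05 刻みで変えて平均したものである。判定の基礎になる IoU(Intersection over Union)の計算イメージを示す(説明用の最小スケッチ)。

```python
def iou(box_a, box_b):
    """2つのボックス (xmin, ymin, xmax, ymax) の IoU を返す"""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # 交差領域の幅・高さ(重ならなければ 0)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# 使用例: mAP50 の判定では IoU >= 0.5 の検出が正解 (TP) 扱いになる
print(iou((0, 0, 100, 100), (50, 0, 150, 100)))   # 0.333... → この組は不正解扱い
```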
#divregion(「train.py」実行ログ詳細)
#codeprettify(){{
(py_learn) python train.py --epochs 100 --data data/mask_dataset/mask.yaml --weights yolov5s.pt --name mask_yolov5s
train: weights=yolov5s.pt, cfg=, data=data/mask_dataset/mask.yaml, hyp=data\hyps\hyp.scratch-low.yaml, epochs=100, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data\hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train, name=mask_yolov5s, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False
github:  YOLOv5 is out of date by 3 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update.
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5  runs in Comet
TensorBoard: Start with 'tensorboard --logdir runs\train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=4

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4                -1  2    115712  models.common.C3                        [128, 128, 2]
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6                -1  3    625152  models.common.C3                        [256, 256, 3]
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8                -1  1   1182720  models.common.C3                        [512, 512, 1]
  9                -1  1    656896  models.common.SPPF                      [512, 512, 5]
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  1    361984  models.common.C3                        [512, 256, 1, False]
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  1     90880  models.common.C3                        [256, 128, 1, False]
 18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]
 19          [-1, 14]  1         0  models.common.Concat                    [1]
 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]
 22          [-1, 10]  1         0  models.common.Concat                    [1]
 23                -1  1   1182720  models.common.C3                        [512, 512, 1, False]
 24      [17, 20, 23]  1     24273  models.yolo.Detect                      [4, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model summary: 214 layers, 7030417 parameters, 7030417 gradients, 16.0 GFLOPs

Transferred 343/349 items from yolov5s.pt
AMP: checks passed
optimizer: SGD(lr=0.01) with parameter groups 57 weight(decay=0.0), 60 weight(decay=0.0005), 60 bias
train: Scanning C:\anaconda_win\workspace_pylearn\yolov5\data\mask_dataset\labels\train... 682 images, 0 backgrounds, 0
train: WARNING  Cache directory C:\anaconda_win\workspace_pylearn\yolov5\data\mask_dataset\labels is not writeable: [WinError 183] : 'C:\\anaconda_win\\workspace_pylearn\\yolov5\\data\\mask_dataset\\labels\\train.cache.npy' -> 'C:\\anaconda_win\\workspace_pylearn\\yolov5\\data\\mask_dataset\\labels\\train.cache'
val: Scanning C:\anaconda_win\workspace_pylearn\yolov5\data\mask_dataset\labels\val... 171 images, 0 backgrounds, 0 cor
val: WARNING  Cache directory C:\anaconda_win\workspace_pylearn\yolov5\data\mask_dataset\labels is not writeable: [WinError 183] : 'C:\\anaconda_win\\workspace_pylearn\\yolov5\\data\\mask_dataset\\labels\\val.cache.npy' -> 'C:\\anaconda_win\\workspace_pylearn\\yolov5\\data\\mask_dataset\\labels\\val.cache'

AutoAnchor: 5.55 anchors/target, 0.999 Best Possible Recall (BPR). Current anchors are a good fit to dataset
Plotting labels to runs\train\mask_yolov5s\labels.jpg...
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs\train\mask_yolov5s
Starting training for 100 epochs...

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       0/99      3.35G     0.1051    0.06055    0.03577         79        640: 100%|██████████| 43/43 [00:04<00:00,  9.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778       0.69     0.0704     0.0216    0.00536

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       1/99      4.41G    0.08192    0.04846    0.02121         67        640: 100%|██████████| 43/43 [00:03<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.769       0.19     0.0995      0.032

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       2/99      4.41G    0.07935    0.04207    0.01834         83        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.769      0.183      0.112     0.0377

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       3/99      4.41G    0.07218    0.03915    0.01724         60        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.544      0.416      0.283     0.0952

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       4/99      4.41G    0.06119    0.03696     0.0145         89        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.565      0.475      0.338      0.122

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       5/99      4.41G    0.05579    0.03693    0.01203         90        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.682      0.447      0.396      0.176

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       6/99      4.41G    0.05158    0.03469    0.01042         68        640: 100%|██████████| 43/43 [00:03<00:00, 10.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.761      0.496       0.48      0.234

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       7/99      4.41G    0.04887    0.03386   0.009272         68        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.753      0.509      0.474      0.211

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       8/99      4.41G    0.04631    0.03263    0.00888        207        640: 100%|██████████| 43/43 [00:03<00:00, 10.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.905      0.503      0.564      0.273

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       9/99      4.41G    0.04507    0.03335   0.008373         78        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.929      0.554      0.613      0.358

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      10/99      4.41G    0.04216     0.0299    0.00798        142        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.901      0.542      0.597      0.285

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      11/99      4.41G    0.04115     0.0307   0.008121         72        640: 100%|██████████| 43/43 [00:03<00:00, 10.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.886      0.553      0.607      0.322

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      12/99      4.41G    0.04034    0.03206   0.007108        119        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.905      0.557      0.609      0.324

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      13/99      4.41G    0.03864     0.0303   0.007133         91        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.923      0.548       0.61      0.341

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      14/99      4.41G    0.03765    0.03079   0.006593         91        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:00<0
                   all        171        778      0.912       0.56      0.618      0.314

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      15/99      4.41G    0.03754    0.03185   0.006909        109        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.896      0.567      0.607       0.33

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      16/99      4.41G    0.03702    0.03072   0.006434        115        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.935      0.573      0.629      0.374

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      17/99      4.41G    0.03657    0.02973   0.006558         68        640: 100%|██████████| 43/43 [00:03<00:00, 10.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.929      0.578      0.644      0.395

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      18/99      4.41G    0.03636    0.03259   0.006621        164        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.926      0.558      0.638      0.381

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      19/99      4.41G    0.03474    0.02924   0.006014         73        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.925      0.568      0.663      0.404

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      20/99      4.41G    0.03353    0.03008   0.005776         48        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.939      0.546      0.638        0.4

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      21/99      4.41G    0.03369    0.02975   0.006082        131        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.943      0.563      0.656        0.4

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      22/99      4.41G    0.03419    0.02884   0.005488         59        640: 100%|██████████| 43/43 [00:03<00:00, 10.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.934      0.553      0.668      0.417

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      23/99      4.41G    0.03338    0.02761   0.005553         44        640: 100%|██████████| 43/43 [00:03<00:00, 10.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.952      0.567      0.675      0.407

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      24/99      4.41G    0.03331    0.02906   0.005624        104        640: 100%|██████████| 43/43 [00:03<00:00, 10.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778       0.94      0.571      0.672      0.415

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      25/99      4.41G    0.03239    0.02725   0.005505         79        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.948      0.573      0.702       0.44

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      26/99      4.41G    0.03184    0.02699   0.005355         95        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.818      0.642      0.676      0.401

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      27/99      4.41G    0.03124    0.02666   0.005242         42        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.675      0.647      0.695      0.431

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      28/99      4.41G     0.0308    0.02896   0.005306         85        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.659      0.662      0.687      0.437

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      29/99      4.41G    0.03131    0.02665   0.005278         49        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.885      0.634      0.688       0.44

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      30/99      4.41G     0.0304    0.02694   0.004846         66        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.616      0.682      0.688      0.444

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      31/99      4.41G    0.02945    0.02757   0.004831         51        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.763      0.675      0.676      0.405

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      32/99      4.41G    0.02965    0.02752    0.00461        130        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.762      0.733      0.762      0.497

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      33/99      4.41G    0.02996    0.02561   0.004403         75        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.838      0.716      0.763      0.489

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      34/99      4.41G    0.02936    0.02624   0.003965        107        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.766      0.745      0.775      0.501

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      35/99      4.41G    0.02946    0.02683   0.004163         53        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.731      0.735      0.772      0.494

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      36/99      4.41G    0.02868    0.02562   0.003824         51        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778       0.74      0.751      0.761      0.471

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      37/99      4.41G    0.02841    0.02709   0.003661         91        640: 100%|██████████| 43/43 [00:03<00:00, 10.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.909      0.683      0.759      0.491

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      38/99      4.41G    0.02946    0.02841    0.00366         92        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.768      0.764      0.798      0.518

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      39/99      4.41G    0.02856    0.02682   0.003546         77        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.809      0.717      0.779      0.504

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      40/99      4.41G    0.02754     0.0259   0.003536         48        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.829      0.774      0.818      0.523

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      41/99      4.41G    0.02746    0.02545   0.003263         75        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.795      0.784      0.814      0.522

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      42/99      4.41G    0.02789    0.02589    0.00354         80        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.866      0.734      0.805      0.519

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      43/99      4.41G    0.02785    0.02654   0.003272         65        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.765       0.77      0.797      0.515

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      44/99      4.41G    0.02648    0.02617   0.003113         60        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.882      0.791       0.85       0.55

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      45/99      4.41G    0.02703    0.02565   0.003004         78        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.872      0.769      0.848       0.54

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      46/99      4.41G    0.02704    0.02547   0.002782         75        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.836       0.78      0.823      0.528

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      47/99      4.41G     0.0269    0.02641   0.002753         90        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.836      0.767      0.805      0.523

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      48/99      4.41G    0.02615    0.02451   0.002561         88        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.827       0.79      0.837      0.551

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      49/99      4.41G     0.0259    0.02599   0.002557         69        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778       0.84      0.793       0.85      0.565

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      50/99      4.41G     0.0261    0.02376   0.002365        121        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.846      0.794       0.84      0.548

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      51/99      4.41G    0.02553     0.0248   0.002372         78        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.852      0.794      0.855       0.57

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      52/99      4.41G    0.02561    0.02417   0.002499         63        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.875      0.789      0.865      0.562

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      53/99      4.41G    0.02539      0.025   0.002371         74        640: 100%|██████████| 43/43 [00:03<00:00, 10.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.872      0.771       0.84      0.564

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      54/99      4.41G    0.02514    0.02344   0.002231        136        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778       0.87      0.754      0.844      0.559

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      55/99      4.41G    0.02519    0.02457   0.002228         63        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.884      0.781      0.843      0.564

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      56/99      4.41G     0.0242    0.02459    0.00215         67        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.852      0.796      0.854       0.56

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      57/99      4.41G    0.02431    0.02375   0.001989         98        640: 100%|██████████| 43/43 [00:03<00:00, 10.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778       0.84      0.802      0.854      0.581

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      58/99      4.41G    0.02462    0.02456   0.001973         74        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.876      0.809      0.871      0.583

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      59/99      4.41G    0.02505    0.02474   0.001895         92        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.911      0.789       0.87      0.576

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      60/99      4.41G    0.02398     0.0228   0.002016         48        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.842      0.788       0.85      0.581

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      61/99      4.41G    0.02351    0.02405    0.00177         54        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.938      0.803      0.871      0.592

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      62/99      4.41G    0.02383    0.02467    0.00173        112        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.911      0.773      0.851      0.581

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      63/99      4.41G    0.02386      0.024   0.001924         99        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.877      0.825      0.872      0.579

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      64/99      4.41G    0.02275    0.02339   0.001726         62        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.895      0.804      0.865      0.587

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      65/99      4.41G    0.02288    0.02347   0.001597        111        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.859      0.849      0.888      0.607

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      66/99      4.41G    0.02258     0.0216   0.001675         52        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.873      0.829      0.876      0.589

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      67/99      4.41G    0.02268     0.0237   0.001742         82        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.857      0.875      0.876      0.587

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      68/99      4.41G    0.02201    0.02348   0.001615        113        640: 100%|██████████| 43/43 [00:03<00:00, 10.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.831       0.85      0.858      0.573

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      69/99      4.41G    0.02263    0.02294   0.001527         60        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.875       0.78      0.846      0.562

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      70/99      4.41G    0.02162     0.0219   0.001599         53        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778       0.88      0.816      0.874      0.599

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      71/99      4.41G    0.02186    0.02351   0.001494         70        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.873      0.812      0.855      0.582

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      72/99      4.41G    0.02196    0.02211   0.001572         77        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778       0.88      0.773       0.85       0.58

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      73/99      4.41G    0.02145    0.02341   0.001484         95        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778       0.88      0.807      0.854      0.576

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      74/99      4.41G    0.02171    0.02345   0.001453         66        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.845      0.836      0.862      0.581

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      75/99      4.41G    0.02128    0.02272   0.001519         67        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.864      0.821      0.867      0.584

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      76/99      4.41G    0.02094    0.02281   0.001223         40        640: 100%|██████████| 43/43 [00:03<00:00, 10.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.845      0.833      0.868      0.593

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      77/99      4.41G    0.02102    0.02174    0.00133        102        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.848      0.834       0.87       0.59

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      78/99      4.41G    0.02105    0.02172   0.001171         59        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.886      0.814      0.867      0.594

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      79/99      4.41G    0.02107    0.02259   0.001177        106        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778       0.93      0.801      0.871      0.594

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      80/99      4.41G    0.02067    0.02213   0.001498        161        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.843      0.816      0.861      0.592

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      81/99      4.41G    0.02095    0.02207    0.00136         55        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.883      0.815      0.862      0.586

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      82/99      4.41G    0.02013    0.02178   0.001258         84        640: 100%|██████████| 43/43 [00:03<00:00, 10.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.866      0.834      0.861      0.595

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      83/99      4.41G    0.02068    0.02235   0.001233        103        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778       0.81      0.859      0.859       0.59

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      84/99      4.41G    0.01991    0.02144    0.00116         88        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.909      0.781       0.86      0.597

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      85/99      4.41G    0.01994    0.02198   0.001115         71        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.883      0.813      0.853      0.575

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      86/99      4.41G    0.01994    0.02229   0.001158         46        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.888      0.823      0.851      0.581

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      87/99      4.41G    0.02001    0.02167   0.001083         45        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.866      0.809      0.849      0.586

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      88/99      4.41G    0.01939    0.02168  0.0009812         77        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.855      0.813      0.856      0.585

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      89/99      4.41G    0.01946    0.02135   0.001199         51        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.859      0.825      0.861      0.589

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      90/99      4.41G    0.01931    0.02054   0.001064         85        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.885      0.803      0.869      0.585

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      91/99      4.41G    0.01965    0.02123  0.0009763        106        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.891      0.805      0.869      0.586

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      92/99      4.41G     0.0197    0.02175   0.001261         83        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.862      0.808      0.871      0.598

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      93/99      4.41G    0.01915    0.02116  0.0009928        105        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.906      0.765      0.861      0.592

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      94/99      4.41G    0.01909    0.02033  0.0009784        105        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.859      0.801      0.858       0.59

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      95/99      4.41G    0.01901    0.02131  0.0009813         70        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.831      0.831      0.859      0.586

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      96/99      4.41G    0.01892    0.02219  0.0009219        105        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.921      0.781      0.865      0.593

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      97/99      4.41G    0.01874    0.02093  0.0008694         54        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.855      0.812      0.862      0.592

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      98/99      4.41G    0.01946    0.02096   0.001002        101        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.872        0.8      0.864      0.592

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      99/99      4.41G    0.01885    0.01993   0.001163         75        640: 100%|██████████| 43/43 [00:03<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.848      0.814       0.86       0.59

100 epochs completed in 0.143 hours.
Optimizer stripped from runs\train\mask_yolov5s\weights\last.pt, 14.4MB
Optimizer stripped from runs\train\mask_yolov5s\weights\best.pt, 14.4MB

Validating runs\train\mask_yolov5s\weights\best.pt...
Fusing layers...
Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 6/6 [00:01<0
                   all        171        778      0.859      0.849      0.888      0.609
          without_mask        171        122      0.773      0.885      0.899       0.59
             with_mask        171        630      0.932      0.931      0.965      0.684
 mask_weared_incorrect        171         26      0.872      0.731      0.801      0.554
Results saved to runs\train\mask_yolov5s
}}
#enddivregion
~
+ Confirm the final message "Results saved to runs\train\mask_yolov5s"~
・The trained model weights are saved in "runs/train/mask_yolov5s/weights"; the evaluation metrics are in "runs/train/mask_yolov5s/"~
|CENTER:F1 curve|CENTER:P curve|CENTER:PR curve|CENTER:R curve|h
|#ref(mask_F1_curve_m.jpg,left,around,15%,F1_curve_m.jpg)|#ref(mask_P_curve_m.jpg,left,around,15%,P_curve_m.jpg)|#ref(mask_PR_curve_m.jpg,left,around,15%,PR_curve_m.jpg)|#ref(mask_R_curve_m.jpg,left,around,15%,R_curve_m.jpg)|
#ref(mask_results_m.jpg,left,around,40%,results_m.jpg)
#ref(mask_confusion_matrix_m.jpg,left,around,25%,confusion_matrix_m.jpg)
#clear
#ref(mask_labels_correlogram_m.jpg,left,around,15%,labels_correlogram.jpg)
#ref(mask_labels_m.jpg,left,around,15%,labels.jpg)
#ref(mask_train_batch0_m.jpg,left,around,10%,train_batch0.jpg)
#ref(mask_train_batch1_m.jpg,left,around,10%,train_batch1.jpg)
#ref(mask_train_batch2_m.jpg,left,around,10%,train_batch2.jpg)
#clear
#ref(mask_val_batch0_labels_m.jpg,left,around,10%,val_batch0_labels.jpg)
#ref(mask_val_batch0_pred_m.jpg,left,around,10%,val_batch0_pred.jpg)
#ref(mask_val_batch1_labels_m.jpg,left,around,10%,val_batch1_labels.jpg)
#ref(mask_val_batch1_pred_m.jpg,left,around,10%,val_batch1_pred.jpg)
#ref(mask_val_batch2_labels_m.jpg,left,around,10%,val_batch2_labels.jpg)
#ref(mask_val_batch2_pred_m.jpg,left,around,10%,val_batch2_pred.jpg)
#clear
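As a side note on how best.pt above is chosen: YOLOv5 ranks epochs by a weighted fitness score over the validation metrics (by default 0.1 × mAP50 + 0.9 × mAP50-95; see fitness() in utils/metrics.py of the repository). A minimal sketch, reproducing that score from the final "all" validation row in the log above:

```python
def fitness(p, r, map50, map50_95, weights=(0.0, 0.0, 0.1, 0.9)):
    """Weighted sum of [P, R, mAP50, mAP50-95] used to pick best.pt.

    The default weights mirror YOLOv5's fitness() in utils/metrics.py:
    precision and recall are ignored, mAP50-95 dominates.
    """
    return sum(w * m for w, m in zip(weights, (p, r, map50, map50_95)))

# Final validation row ("all") from the training log above
score = fitness(0.859, 0.849, 0.888, 0.609)
print(f"fitness = {score:.4f}")  # 0.1*0.888 + 0.9*0.609 = 0.6369
```

The epoch whose validation pass maximizes this score is the one saved as best.pt; last.pt is simply the final epoch.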

*** Inference with the Training Results [#hcf237a9]
+ Run inference with "detect2.py" (still image)~
・Specify the trained model~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Images/mask-test.jpg --view-img
}}
#ref(mask-test_s.jpg,right,around,30%,mask-test_s.jpg)
・Run log (results are saved to 「runs/detect/exp*」; * increments with each run)~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Images/mask-test.jpg --view-img
detect2: weights=['./runs/train/mask_yolov5s/weights/best.pt'], source=../../Images/mask-test.jpg, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=True, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
Speed: 1.0ms pre-process, 49.2ms inference, 42.0ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp10
}}
+ Run inference with 「detect2.py」 (video)~
・Specify the trained model~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Videos/mask.mov --view-img
}}
#ref(mask.gif,right,around,60%,mask.gif)
・Run log (results are saved to 「runs/detect/exp*」; * increments with each run)~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Videos/mask.mov --view-img
detect2: weights=['./runs/train/mask_yolov5s/weights/best.pt'], source=../../Videos/mask.mov, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=True, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
Speed: 0.6ms pre-process, 6.4ms inference, 1.0ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp12
}}
+ Other 「detect2.py」 examples~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Images/mask.jpg --view-img
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Videos/mask2.mp4 --view-img
(py_learn) python detect2.py --weights ./runs/train/mask_yolov5s/weights/best.pt --source ../../Videos/mask-test.mp4 --view-img
}}
#ref(mask_s.jpg,left,around,15%,mask_s.jpg)
#ref(mask2.gif,left,around,60%,mask2.gif)
#ref(mask-test.gif,left,around,60%,mask-test.gif)
#clear
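As the logs above show (exp10, exp12), detection results go to 「runs/detect/exp*」 with an incrementing suffix. A simplified sketch of that numbering, modeled loosely on YOLOv5's `increment_path` helper (not the actual implementation):

```python
from pathlib import Path

def next_run_dir(project: str, name: str = "exp") -> Path:
    """Return the next free run directory: exp, then exp2, exp3, ..."""
    base = Path(project) / name
    if not base.exists():
        return base
    n = 2
    while (Path(project) / f"{name}{n}").exists():
        n += 1
    return Path(project) / f"{name}{n}"
```

Passing `--exist-ok` to the real scripts skips this incrementing and reuses the existing folder.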

*** Inference Using the Training Results (Japanese Labels) [#ve847abd]
+ Create the label files~
・「mask_names_jp」 Japanese (copy 「coco.names_jp」 and edit)~
#codeprettify(){{
マスクなし
マスクあり
マスク着用_不正確
}}
・「mask_names」 English (copy 「coco.names」 and edit)~
#codeprettify(){{
without_mask
with_mask
mask_weared_incorrect
}}
+ Run inference with 「detect2_yolov5.py」 (still image)~
・Specify the trained model~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/mask_yolov5s/weights/best.pt -i ../../Images/mask-test.jpg -l mask_names_jp
}}
#ref(mask-test_jp_s.jpg,right,around,30%,mask-test_jp_s.jpg)
・Run log~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/mask_yolov5s/weights/best.pt -i ../../Images/mask-test.jpg -l mask_names_jp

Object detection YoloV5 in PyTorch Ver. 0.02: Starting application...
   OpenCV virsion : 4.9.0

   - Image File   :  ../../Images/mask-test.jpg
   - YOLO v5      :  ultralytics/yolov5
   - Pretrained   :  ./runs/train/mask_yolov5s/weights/best.pt
   - Label file   :  mask_names_jp
   - Program Title:  y
   - Speed flag   :  y
   - Processed out:  non
   - Use device   :  cuda:0

Using cache found in C:\Users\izuts/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5  2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
Adding AutoShape...

FPS average:       9.00

 Finished.
}}
+ Run inference with 「detect2_yolov5.py」 (video)~
・Specify the trained model~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/mask_yolov5s/weights/best.pt -i ../../Videos/mask.mov -l mask_names_jp
}}
#ref(mask_jp.gif,right,around,60%,mask_jp.gif)
・Run log~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/mask_yolov5s/weights/best.pt -i ../../Videos/mask.mov -l mask_names_jp

Object detection YoloV5 in PyTorch Ver. 0.02: Starting application...
   OpenCV virsion : 4.9.0

   - Image File   :  ../../Videos/mask.mov
   - YOLO v5      :  ultralytics/yolov5
   - Pretrained   :  ./runs/train/mask_yolov5s/weights/best.pt
   - Label file   :  mask_names_jp
   - Program Title:  y
   - Speed flag   :  y
   - Processed out:  non
   - Use device   :  cuda:0

Using cache found in C:\Users\izuts/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5  2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
Adding AutoShape...

FPS average:      45.70

 Finished.
}}
+ Other 「detect2_yolov5.py」 examples~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/mask_yolov5s/weights/best.pt -i ../../Images/mask.jpg -l mask_names_jp
(py_learn) python detect2_yolov5.py -m ./runs/train/mask_yolov5s/weights/best.pt -i ../../Videos/mask2.mp4 -l mask_names_jp
(py_learn) python detect2_yolov5.py -m ./runs/train/mask_yolov5s/weights/best.pt -i ../../Videos/mask-test.mp4 -l mask_names_jp
}}
#ref(mask_jp_s.jpg,left,around,25%,mask_jp_s.jpg)
#ref(mask2_jp.gif,left,around,60%,mask2_jp.gif)
#ref(mask-test_jp.gif,left,around,60%,mask-test_jp.gif)
#clear
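The 「*_names」 label files above hold one class name per line, where the line index is the class ID. A minimal reader for that format (the helper name is illustrative; the actual loader inside detect2_yolov5.py may differ):

```python
from pathlib import Path

def load_names(path: str) -> list[str]:
    """Read one class name per line; the line index is the class ID."""
    text = Path(path).read_text(encoding="utf-8")
    return [line.strip() for line in text.splitlines() if line.strip()]
```

With the English file above, `load_names("mask_names")[0]` would be `"without_mask"`; swapping in 「mask_names_jp」 changes only the displayed strings, not the class IDs.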

#br

** Training on a Custom Dataset: Janken (Rock-Paper-Scissors) Detection [#rd6c914a]
#ref(yolov5_train04_m.jpg,right,around,15%,yolov5_train04_m.jpg)
- Use the dataset previously built in [[Training with Custom Data 4: Janken Detection (Part 3)>+https://izutsu.aa0.netvolante.jp/pukiwiki/?YOLOv7_Colab5]]~
- Datasets created for 「YOLOv7」 can be used as-is~

*** Preparation [#o96aa0e2]
+ ''Place the dataset''~
Copy (or move) the 「janken4_dataset/」 folder into 「yolov5/data/」~


*** Running the Training [#z850ba45]
+ Run with an epoch count of 100~
・The working directory is 「workspace_pylearn/yolov5/」~
・Use the default model 「yolov5s」~
・Name the results folder 「janken4_yolov5s」~
#codeprettify(){{
(py_learn) python train.py --epochs 100 --data data/janken4_dataset/janken4_dataset.yaml --weights yolov5s.pt --name janken4_yolov5s
}}
・If no GPU is available, run the following command instead~
#codeprettify(){{
(py_learn) python train.py --epochs 100 --data data/janken4_dataset/janken4_dataset.yaml --weights yolov5s.pt --name janken4_yolov5s --device cpu
}}
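The only difference between the two commands above is the 「--device cpu」 option. An illustrative sketch (not part of the repo) that assembles the train.py argument list and appends that flag only when no GPU is to be used:

```python
def build_train_cmd(data_yaml: str, name: str,
                    epochs: int = 100, use_gpu: bool = True) -> list[str]:
    """Assemble the train.py argument list; force CPU when no GPU."""
    cmd = ["python", "train.py", "--epochs", str(epochs),
           "--data", data_yaml, "--weights", "yolov5s.pt", "--name", name]
    if not use_gpu:
        cmd += ["--device", "cpu"]  # omit to let YOLOv5 auto-select CUDA
    return cmd
```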
+ Training completed~
#codeprettify(){{
(py_learn) python train.py --epochs 100 --data data/janken4_dataset/janken4_dataset.yaml --weights yolov5s.pt --name janken4_yolov5s
train: weights=yolov5s.pt, cfg=, data=data/janken4_dataset/janken4_dataset.yaml, hyp=data\hyps\hyp.scratch-low.yaml, epochs=100, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data\hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train, name=janken4_yolov5s, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False
github:  YOLOv5 is out of date by 3 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update.
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5  runs in Comet
TensorBoard: Start with 'tensorboard --logdir runs\train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=3

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
    :
Starting training for 100 epochs...

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       0/99      3.35G    0.08231    0.02932    0.03379         37        640: 100%|██████████| 30/30 [00:03<00:00,  9.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120    0.00334      0.992     0.0603     0.0182
    :
      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      99/99      3.98G    0.01874   0.009725    0.00775         38        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.993      0.992      0.995      0.787

100 epochs completed in 0.093 hours.
Optimizer stripped from runs\train\janken4_yolov5s\weights\last.pt, 14.4MB
Optimizer stripped from runs\train\janken4_yolov5s\weights\best.pt, 14.4MB

Validating runs\train\janken4_yolov5s\weights\best.pt...
Fusing layers...
Model summary: 157 layers, 7018216 parameters, 0 gradients, 15.8 GFLOPs
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.993      0.992      0.995      0.788
                   goo        120         40      0.984          1      0.995      0.754
                 choki        120         40          1      0.977      0.995      0.755
                   par        120         40      0.994          1      0.995      0.855
Results saved to runs\train\janken4_yolov5s
}}
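From the line 「100 epochs completed in 0.093 hours」 in the log above, the average wall-clock time per epoch on this GPU works out to roughly 3.3 seconds:

```python
# "100 epochs completed in 0.093 hours" → average seconds per epoch.
hours, epochs = 0.093, 100
sec_per_epoch = hours * 3600 / epochs
print(round(sec_per_epoch, 2))  # → 3.35
```

This is a useful sanity check before committing to a long CPU-only run, which can be orders of magnitude slower.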
#divregion(Full 「train.py」 run log)
#codeprettify(){{
(py_learn) python train.py --epochs 100 --data data/janken4_dataset/janken4_dataset.yaml --weights yolov5s.pt --name janken4_yolov5s
train: weights=yolov5s.pt, cfg=, data=data/janken4_dataset/janken4_dataset.yaml, hyp=data\hyps\hyp.scratch-low.yaml, epochs=100, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data\hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train, name=janken4_yolov5s, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False
github:  YOLOv5 is out of date by 3 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update.
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5  runs in Comet
TensorBoard: Start with 'tensorboard --logdir runs\train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=3

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4                -1  2    115712  models.common.C3                        [128, 128, 2]
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6                -1  3    625152  models.common.C3                        [256, 256, 3]
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8                -1  1   1182720  models.common.C3                        [512, 512, 1]
  9                -1  1    656896  models.common.SPPF                      [512, 512, 5]
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  1    361984  models.common.C3                        [512, 256, 1, False]
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  1     90880  models.common.C3                        [256, 128, 1, False]
 18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]
 19          [-1, 14]  1         0  models.common.Concat                    [1]
 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]
 22          [-1, 10]  1         0  models.common.Concat                    [1]
 23                -1  1   1182720  models.common.C3                        [512, 512, 1, False]
 24      [17, 20, 23]  1     21576  models.yolo.Detect                      [3, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model summary: 214 layers, 7027720 parameters, 7027720 gradients, 16.0 GFLOPs

Transferred 343/349 items from yolov5s.pt
AMP: checks passed
optimizer: SGD(lr=0.01) with parameter groups 57 weight(decay=0.0), 60 weight(decay=0.0005), 60 bias
train: Scanning C:\anaconda_win\workspace_pylearn\yolov5\data\janken4_dataset\train\labels... 480 images, 0 backgrounds
train: New cache created: C:\anaconda_win\workspace_pylearn\yolov5\data\janken4_dataset\train\labels.cache
val: Scanning C:\anaconda_win\workspace_pylearn\yolov5\data\janken4_dataset\valid\labels... 120 images, 0 backgrounds,
val: New cache created: C:\anaconda_win\workspace_pylearn\yolov5\data\janken4_dataset\valid\labels.cache

AutoAnchor: 3.12 anchors/target, 1.000 Best Possible Recall (BPR). Current anchors are a good fit to dataset
Plotting labels to runs\train\janken4_yolov5s\labels.jpg...
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs\train\janken4_yolov5s
Starting training for 100 epochs...

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       0/99      3.35G    0.08231    0.02932    0.03379         37        640: 100%|██████████| 30/30 [00:03<00:00,  9.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120    0.00334      0.992     0.0603     0.0182

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       1/99      3.98G     0.0572    0.02576    0.03066         39        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.183      0.392        0.2     0.0732

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       2/99      3.98G    0.04957    0.02153    0.02791         45        640: 100%|██████████| 30/30 [00:02<00:00, 13.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.164      0.358      0.145     0.0422

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       3/99      3.98G     0.0485    0.02054    0.02889         37        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.364      0.433      0.401      0.136

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       4/99      3.98G    0.04405    0.01775    0.02909         37        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.244       0.56      0.344      0.121

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       5/99      3.98G    0.04293    0.01668    0.02636         33        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.301      0.675      0.393        0.2

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       6/99      3.98G    0.04271    0.01557    0.02331         42        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.398      0.786      0.629      0.245

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       7/99      3.98G    0.03947    0.01544    0.01986         38        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.427       0.85      0.631      0.355

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       8/99      3.98G    0.03792    0.01511    0.02048         35        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.495      0.825      0.698      0.353

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       9/99      3.98G    0.03979    0.01469    0.02054         42        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.579       0.93      0.802      0.442

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      10/99      3.98G    0.03555    0.01413     0.0197         35        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.669      0.775      0.806      0.458

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      11/99      3.98G      0.033    0.01385     0.0217         45        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.705      0.594      0.721      0.365

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      12/99      3.98G    0.03321    0.01327    0.01911         35        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120       0.69      0.739      0.799      0.421

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      13/99      3.98G    0.03502    0.01387    0.02147         43        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.833      0.875      0.917      0.531

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      14/99      3.98G    0.02933    0.01338    0.01939         46        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.893      0.848      0.957      0.617

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      15/99      3.98G     0.0358    0.01393    0.01933         38        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.954      0.953      0.979      0.604

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      16/99      3.98G    0.03375    0.01295    0.01561         41        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.855      0.913      0.911      0.523

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      17/99      3.98G    0.03741    0.01303    0.01721         41        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.911      0.928      0.988      0.602

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      18/99      3.98G     0.0341    0.01311    0.01504         36        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.979      0.958      0.986      0.605

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      19/99      3.98G     0.0325    0.01296    0.01394         44        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.838      0.948      0.931      0.567

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      20/99      3.98G    0.03098    0.01276    0.01328         38        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.784      0.636      0.723      0.397

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      21/99      3.98G    0.02741    0.01282     0.0139         33        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.807      0.867      0.925       0.55

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      22/99      3.98G     0.0297     0.0124    0.01551         42        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.965      0.975      0.993      0.709

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      23/99      3.98G    0.03286    0.01236    0.01528         36        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.881      0.951      0.973      0.648

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      24/99      3.98G     0.0304    0.01237    0.01601         36        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.706      0.599      0.627      0.219

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      25/99      3.98G    0.03226    0.01292    0.01424         41        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.926      0.925      0.958      0.534

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      26/99      3.98G    0.03152    0.01266    0.01315         32        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.988      0.992      0.995      0.675

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      27/99      3.98G    0.03078    0.01273    0.01316         47        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.923      0.914      0.973      0.634

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      28/99      3.98G    0.02773    0.01226    0.01355         43        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.689      0.682      0.738      0.446

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      29/99      3.98G    0.02957      0.012    0.01376         35        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.927      0.833      0.952      0.599

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      30/99      3.98G    0.02811    0.01192    0.01414         42        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120       0.99      0.995      0.995      0.718

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      31/99      3.98G    0.02548    0.01237    0.01207         37        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.961      0.975      0.988       0.66

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      32/99      3.98G    0.03024    0.01172    0.01306         37        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.985      0.985      0.993      0.687

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      33/99      3.98G    0.02691    0.01125    0.01007         38        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.954       0.98      0.993      0.732

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      34/99      3.98G    0.02743    0.01169    0.01167         43        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.986      0.988      0.994      0.706

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      35/99      3.98G     0.0286    0.01179     0.0153         38        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.945      0.979      0.976      0.663

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      36/99      3.98G    0.02537    0.01211     0.0127         43        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120       0.98      0.972      0.994      0.715

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      37/99      3.98G    0.02837    0.01169    0.01206         35        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.974      0.967      0.983      0.698

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      38/99      3.98G    0.02573    0.01186    0.01277         44        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.988          1      0.995      0.708

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      39/99      3.98G    0.02653    0.01211    0.01424         45        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.989      0.982       0.99      0.752

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      40/99      3.98G    0.02987    0.01177     0.0129         31        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.996      0.998      0.995      0.754

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      41/99      3.98G    0.02421    0.01095   0.008913         35        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.956      0.969      0.992      0.712

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      42/99      3.98G    0.02611    0.01091    0.01266         35        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.896      0.914      0.976      0.683

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      43/99      3.98G    0.02356    0.01097    0.01029         40        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.993      0.988      0.994      0.707

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      44/99      3.98G    0.02585    0.01176    0.01273         35        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.989      0.998      0.995      0.741

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      45/99      3.98G    0.02214    0.01103   0.009593         33        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.984      0.968      0.994      0.731

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      46/99      3.98G    0.02429    0.01134    0.01229         37        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.972      0.997      0.995      0.696

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      47/99      3.98G    0.02669    0.01131     0.0113         42        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.979       0.99      0.993      0.728

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      48/99      3.98G    0.02624    0.01117    0.01391         35        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.981      0.991      0.994      0.731

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      49/99      3.98G     0.0216    0.01082    0.01195         40        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.979      0.997      0.994      0.749

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      50/99      3.98G    0.02802    0.01135    0.01317         35        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.987      0.992      0.995      0.753

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      51/99      3.98G    0.02481    0.01129    0.01008         33        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.979          1      0.995      0.734

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      52/99      3.98G    0.02087    0.01088    0.00971         39        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.991      0.976      0.995      0.742

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      53/99      3.98G    0.02397     0.0108    0.01014         38        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.965      0.987      0.993      0.737

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      54/99      3.98G    0.02143    0.01047    0.01007         39        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.999      0.968      0.994       0.76

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      55/99      3.98G    0.02357    0.01063    0.01068         35        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.982      0.965      0.992      0.732

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      56/99      3.98G    0.02155    0.01041   0.009341         32        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.997      0.997      0.995      0.734

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      57/99      3.98G    0.02413    0.01099    0.01143         43        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.992      0.998      0.995      0.741

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      58/99      3.98G    0.02416    0.01015    0.01208         46        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.989          1      0.995      0.752

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      59/99      3.98G    0.02315    0.01038   0.009973         36        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120       0.99      0.993      0.995      0.755

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      60/99      3.98G    0.02194    0.01087   0.008395         41        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.991      0.984      0.995      0.767

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      61/99      3.98G    0.02366    0.01112    0.01098         38        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.988      0.996      0.995      0.757

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      62/99      3.98G    0.02109    0.01104   0.008866         32        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.998      0.998      0.995       0.75

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      63/99      3.98G    0.02194    0.01008    0.01025         37        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.995          1      0.995      0.767

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      64/99      3.98G    0.01991     0.0104   0.009375         42        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.986          1      0.995      0.773

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      65/99      3.98G    0.02214    0.01088   0.008747         38        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.997      0.989      0.995      0.764

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      66/99      3.98G    0.02282    0.01059    0.01417         27        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.987      0.992      0.994       0.74

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      67/99      3.98G    0.02407    0.01053    0.01109         40        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.995       0.99      0.995      0.754

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      68/99      3.98G    0.02182    0.01021   0.008963         40        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.999      0.991      0.994      0.745

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      69/99      3.98G    0.02283    0.01069   0.008686         41        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.959      0.993      0.993      0.746

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      70/99      3.98G    0.02077    0.01068   0.008957         39        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120       0.96      0.987      0.994      0.754

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      71/99      3.98G    0.02087    0.01011    0.00913         27        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.986      0.992      0.993      0.749

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      72/99      3.98G    0.02065    0.01047    0.01085         42        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.946      0.961      0.985      0.713

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      73/99      3.98G    0.02052    0.01022   0.008233         46        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.993      0.992      0.995      0.765

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      74/99      3.98G    0.02427    0.01052    0.01263         39        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.976      0.997      0.993      0.744

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      75/99      3.98G    0.01907   0.009838   0.006526         36        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.996      0.999      0.995      0.767

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      76/99      3.98G    0.02121    0.01021   0.008985         38        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.997      0.993      0.995      0.771

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      77/99      3.98G    0.01806    0.01032   0.009292         37        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.993      0.999      0.995      0.767

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      78/99      3.98G     0.0211   0.009842   0.008575         42        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.994          1      0.995      0.759

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      79/99      3.98G    0.02148     0.0103   0.009642         38        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.995          1      0.995      0.759

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      80/99      3.98G    0.01837   0.009587   0.007205         33        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.993          1      0.995      0.773

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      81/99      3.98G     0.0206    0.01032   0.007889         44        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.995          1      0.995      0.775

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      82/99      3.98G    0.02004     0.0102   0.007953         34        640: 100%|██████████| 30/30 [00:02<00:00, 11.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.996          1      0.995      0.768

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      83/99      3.98G    0.01952   0.009843   0.008777         28        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.997          1      0.995      0.764

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      84/99      3.98G    0.01898   0.009601   0.006989         33        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.993      0.999      0.995      0.777

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      85/99      3.98G    0.02144   0.009912   0.009352         37        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.995      0.992      0.995      0.775

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      86/99      3.98G    0.02084     0.0103    0.01001         32        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.991          1      0.995       0.77

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      87/99      3.98G    0.01999   0.009585   0.007808         33        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.995      0.999      0.995       0.78

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      88/99      3.98G    0.01913    0.01006   0.009588         31        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.997      0.993      0.995      0.779

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      89/99      3.98G    0.02166   0.009986   0.008898         34        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.993      0.992      0.995      0.777

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      90/99      3.98G    0.01914   0.009491   0.007874         40        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.991      0.992      0.995      0.771

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      91/99      3.98G    0.01811   0.009106   0.007113         35        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.988      0.992      0.995      0.774

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      92/99      3.98G     0.0193   0.009321   0.008117         47        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.993      0.992      0.995      0.778

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      93/99      3.98G    0.01848   0.009137   0.008004         32        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.991      0.991      0.995      0.781

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      94/99      3.98G    0.01892   0.009626   0.007458         34        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.993      0.992      0.995      0.774

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      95/99      3.98G    0.01856   0.009444   0.007062         35        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.992      0.999      0.995      0.776

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      96/99      3.98G    0.01887   0.009591   0.007504         35        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.996      0.993      0.995      0.784

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      97/99      3.98G    0.01569   0.009341    0.00873         46        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.995      0.992      0.995      0.776

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      98/99      3.98G    0.01845   0.009881   0.007536         36        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.993      0.992      0.995      0.783

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
      99/99      3.98G    0.01874   0.009725    0.00775         38        640: 100%|██████████| 30/30 [00:02<00:00, 12.
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.993      0.992      0.995      0.787

100 epochs completed in 0.093 hours.
Optimizer stripped from runs\train\janken4_yolov5s\weights\last.pt, 14.4MB
Optimizer stripped from runs\train\janken4_yolov5s\weights\best.pt, 14.4MB

Validating runs\train\janken4_yolov5s\weights\best.pt...
Fusing layers...
Model summary: 157 layers, 7018216 parameters, 0 gradients, 15.8 GFLOPs
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 4/4 [00:00<0
                   all        120        120      0.993      0.992      0.995      0.788
                   goo        120         40      0.984          1      0.995      0.754
                 choki        120         40          1      0.977      0.995      0.755
                   par        120         40      0.994          1      0.995      0.855
Results saved to runs\train\janken4_yolov5s
}}
#enddivregion
~
+ Confirm the final message 「Results saved to runs\train\janken4_yolov5s」~
・The trained model weights are saved under 「runs/train/janken4_yolov5s/weights」 and the evaluation metrics under 「runs/train/janken4_yolov5s/」~
|CENTER:F1 curve|CENTER:P curve|CENTER:PR curve|CENTER:R curve|h
|#ref(janken_F1_curve_m.jpg,left,around,15%,F1_curve_m.jpg)|#ref(janken_P_curve_m.jpg,left,around,15%,P_curve_m.jpg)|#ref(janken_PR_curve_m.jpg,left,around,15%,PR_curve_m.jpg)|#ref(janken_R_curve_m.jpg,left,around,15%,R_curve_m.jpg)|
#ref(janken_results_m.jpg,left,around,40%,resuls_m.jpg)
#ref(janken_confusion_matrix_m.jpg,left,around,25%,confusion_matrix_m.jpg)
#clear
#ref(janken_labels_correlogram_m.jpg,left,around,15%,labels_correlogram.jpg)
#ref(janken_labels_m.jpg,left,around,15%,labels.jpg)
#ref(janken_train_batch0_m.jpg,left,around,10%,train_batch0.jpg)
#ref(janken_train_batch1_m.jpg,left,around,10%,train_batch1.jpg)
#ref(janken_train_batch2_m.jpg,left,around,10%,train_batch2.jpg)
#clear
#ref(janken_val_batch0_labels_m.jpg,left,around,10%,val_batch0_labels.jpg)
#ref(janken_val_batch0_pred_m.jpg,left,around,10%,val_batch0_pred.jpg)
#ref(janken_val_batch1_labels_m.jpg,left,around,10%,val_batch1_labels.jpg)
#ref(janken_val_batch1_pred_m.jpg,left,around,10%,val_batch1_pred.jpg)
#ref(janken_val_batch2_labels_m.jpg,left,around,10%,val_batch2_labels.jpg)
#ref(janken_val_batch2_pred_m.jpg,left,around,10%,val_batch2_pred.jpg)
#clear
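The curves above are plotted from the per-epoch metrics that 「train.py」 writes to results.csv in the run directory. As a minimal sketch of inspecting those numbers directly (the helper name best_epoch is ours; it assumes pandas is installed and the run path matches your environment):

```python
import pandas as pd

def best_epoch(results_csv):
    """Return (epoch, mAP50, mAP50-95) for the best epoch in a YOLOv5 results.csv."""
    df = pd.read_csv(results_csv)
    df.columns = df.columns.str.strip()   # header names are space-padded in the file
    i = df["metrics/mAP_0.5:0.95"].idxmax()
    row = df.loc[i]
    return int(row["epoch"]), float(row["metrics/mAP_0.5"]), float(row["metrics/mAP_0.5:0.95"])

# e.g. best_epoch("runs/train/janken4_yolov5s/results.csv")
```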

*** Inference Using the Training Results [#e9d6b7b5]
+ Run inference with 「detect2.py」 (still image)~
・Specify the trained model weights~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/janken4_yolov5s/weights/best.pt --source ../../Images/janken3.jpg --view-img
}}
#ref(janken3_s.jpg,right,around,30%,janken3_s.jpg)
・Execution log (results are saved to 「runs/detect/exp*」, where * increments on each run)~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/janken4_yolov5s/weights/best.pt --source ../../Images/janken3.jpg --view-img
detect2: weights=['./runs/train/janken4_yolov5s/weights/best.pt'], source=../../Images/janken3.jpg, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=True, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7018216 parameters, 0 gradients, 15.8 GFLOPs
Speed: 0.0ms pre-process, 53.6ms inference, 42.0ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp23
}}
+ Run inference with 「detect2.py」 (video)~
・Specify the trained model weights~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/janken4_yolov5s/weights/best.pt --source ../../Videos/janken_test2.mp4 --view-img
}}
#ref(janken_test2.gif,right,around,60%,janken_test2.gif)
・Execution log (results are saved to 「runs/detect/exp*」, where * increments on each run)~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/janken4_yolov5s/weights/best.pt --source ../../Videos/janken_test2.mp4 --view-img
detect2: weights=['./runs/train/janken4_yolov5s/weights/best.pt'], source=../../Videos/janken_test2.mp4, data=data\coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=True, save_txt=False, save_csv=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs\detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7018216 parameters, 0 gradients, 15.8 GFLOPs
Speed: 0.7ms pre-process, 8.4ms inference, 1.0ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs\detect\exp24
}}
+ Other 「detect2.py」 examples~
#codeprettify(){{
(py_learn) python detect2.py --weights ./runs/train/janken4_yolov5s/weights/best.pt --source ../../Images/janken.jpg --view-img
(py_learn) python detect2.py --weights ./runs/train/janken4_yolov5s/weights/best.pt --source ../../Images/janken2.jpg --view-img
}}
#ref(janken_s.jpg,left,around,25%,janken_s.jpg)
#ref(janken2_s.jpg,left,around,25%,janken2_s.jpg)
#clear
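「detect2.py」 is not the only way to use the trained weights: the same best.pt can also be loaded through torch.hub and queried from Python. A minimal sketch (the function name run_inference and the paths are ours; the hub call downloads or reuses the cached ultralytics/yolov5 repository):

```python
def run_inference(weights, source, conf=0.25):
    """Load a custom-trained YOLOv5 model via torch.hub and run it on one image."""
    import torch  # imported inside the function so the sketch stays self-contained

    model = torch.hub.load("ultralytics/yolov5", "custom", path=weights)
    model.conf = conf                 # confidence threshold (same default as detect2.py)
    results = model(source)
    return results.pandas().xyxy[0]   # DataFrame: xmin..ymax, confidence, class, name

# e.g. run_inference("runs/train/janken4_yolov5s/weights/best.pt",
#                    "../../Images/janken3.jpg")
```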

*** Inference Using the Training Results (Japanese Labels) [#xf60caaa]
+ Prepare the label files~
・「janken.names_jp」 (Japanese)~
#codeprettify(){{
グー
チョキ
パー
}}
・「janken.names」 (English)~
#codeprettify(){{
goo
choki
par
}}
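Both .names files are plain text with one class name per line, in the same order as the dataset yaml. A minimal loader sketch (the function name load_labels is ours):

```python
def load_labels(path):
    """Read a .names label file: one class name per line, blank lines ignored."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# load_labels("janken.names_jp")  ->  ['グー', 'チョキ', 'パー']
```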
+ Run inference with 「detect2_yolov5.py」 (still image)~
・Specify the trained model weights~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/janken4_yolov5s/weights/best.pt -i ../../Images/janken3.jpg -l janken.names_jp
}}
#ref(janken3_jp_s.jpg,right,around,30%,janken3_jp_s.jpg)
・Execution log~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/janken4_yolov5s/weights/best.pt -i ../../Images/janken3.jpg -l janken.names_jp

Object detection YoloV5 in PyTorch Ver. 0.02: Starting application...
   OpenCV virsion : 4.9.0

   - Image File   :  ../../Images/janken3.jpg
   - YOLO v5      :  ultralytics/yolov5
   - Pretrained   :  ./runs/train/janken4_yolov5s/weights/best.pt
   - Label file   :  janken.names_jp
   - Program Title:  y
   - Speed flag   :  y
   - Processed out:  non
   - Use device   :  cuda:0

Using cache found in C:\Users\izuts/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5  2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7018216 parameters, 0 gradients, 15.8 GFLOPs
Adding AutoShape...

FPS average:       7.90

 Finished.
}}
+ Run inference with 「detect2_yolov5.py」 (video)~
・Specify the trained model weights~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/janken4_yolov5s/weights/best.pt -i ../../Videos/janken_test2.mp4 -l janken.names_jp
}}
#ref(janken_test2_jp.gif,right,around,60%,janken_test2_jp.gif)
・Execution log~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/janken4_yolov5s/weights/best.pt -i ../../Videos/janken_test2.mp4 -l janken.names_jp

Object detection YoloV5 in PyTorch Ver. 0.02: Starting application...
   OpenCV virsion : 4.9.0

   - Image File   :  ../../Videos/janken_test2.mp4
   - YOLO v5      :  ultralytics/yolov5
   - Pretrained   :  ./runs/train/janken4_yolov5s/weights/best.pt
   - Label file   :  janken.names_jp
   - Program Title:  y
   - Speed flag   :  y
   - Processed out:  non
   - Use device   :  cuda:0

Using cache found in C:\Users\izuts/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5  2024-4-9 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)

Fusing layers...
Model summary: 157 layers, 7018216 parameters, 0 gradients, 15.8 GFLOPs
Adding AutoShape...

FPS average:      47.30

 Finished.
}}
+ Other examples of running "detect2_yolov5.py"~
#codeprettify(){{
(py_learn) python detect2_yolov5.py -m ./runs/train/janken4_yolov5s/weights/best.pt -i ../../Images/janken.jpg -l janken.names_jp
(py_learn) python detect2_yolov5.py -m ./runs/train/janken4_yolov5s/weights/best.pt -i ../../Images/janken2.jpg -l janken.names_jp
}}
#ref(janken_jp_s.jpg,left,around,25%,janken_jp_s.jpg)
#ref(janken2_jp_s.jpg,left,around,25%,janken2_jp_s.jpg)
#clear
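The `-l janken.names_jp` option above points the script at a class-label file. Assuming it follows the usual `coco.names` convention of one class name per line (an assumption here; the authoritative parser is inside `detect2_yolov5.py`), a minimal loader might look like this:

```python
from pathlib import Path

def load_labels(path):
    """Read one class name per line, skipping blank lines.

    The one-name-per-line format is assumed from the coco.names
    convention; check detect2_yolov5.py for the actual parser.
    """
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return [ln.strip() for ln in lines if ln.strip()]

# Hypothetical demo file with three janken (rock-paper-scissors) classes
Path("janken_demo.names").write_text("goo\nchoki\npaa\n", encoding="utf-8")
print(load_labels("janken_demo.names"))  # ['goo', 'choki', 'paa']
```

The index of each line then corresponds to the class ID that the trained model predicts.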

#br

** Official YOLOv5 Summary [#nd934e97]
*** Functions of Each Program [#a09b8315]
- Main command options (see each section for details)~
|CENTER:Program|CENTER:Main command options|CENTER:Default|CENTER:Function|CENTER:Type|h
|detect2.py|--weights|yolov5s.pt|Object detection (inference)&br;on the input source|Bundled with YOLOv5&br;(modified detect.py)|
|~|--source|data/images|~|~|
|~|--device|(cuda:0)|~|~|
|~|--view-img|False|~|~|
|BGCOLOR(lightyellow):detect3_yolov5.py|BGCOLOR(lightyellow):-i , --image|BGCOLOR(lightyellow):'../../Videos/car_m.mp4'|BGCOLOR(lightyellow):Object detection (inference)&br;on the input source with PyTorch&br;multi-model / Japanese display support|BGCOLOR(lightyellow):Newly created|
|~|BGCOLOR(lightyellow):-y , --yolov5|BGCOLOR(lightyellow):'ultralytics/yolov5'|~|~|
|~|BGCOLOR(lightyellow):-m , --models|BGCOLOR(lightyellow):'yolov5s'|~|~|
|~|BGCOLOR(lightyellow):-ms , --models2|BGCOLOR(lightyellow):''|~|~|
|~|BGCOLOR(lightyellow):-l , --labels|BGCOLOR(lightyellow):'coco.names_jp'|~|~|
|BGCOLOR(lightyellow):yolov5_OV2.py|BGCOLOR(lightyellow):-i, --input|BGCOLOR(lightyellow):cam|BGCOLOR(lightyellow):Object detection (inference)&br;on the input source with OpenVINO™&br;Japanese display support|BGCOLOR(lightyellow):Newly created|
|~|BGCOLOR(lightyellow):-m, --model|BGCOLOR(lightyellow):yolov5s_v7.xml|~|~|
|~|BGCOLOR(lightyellow):-d, --device|BGCOLOR(lightyellow):CPU|~|~|
|~|BGCOLOR(lightyellow):-l, --label|BGCOLOR(lightyellow):coco.names_jp|~|~|
|export.py|--include|(required)|Convert the format of a trained model|Bundled with YOLOv5|
|train.py|--weights|yolov5s.pt|Training program|Bundled with YOLOv5|
|~|--cfg||~|~|
|~|--data|data/coco.yaml|~|~|
|~|--epochs|300|~|~|
|~|--batch-size|16|~|~|
|~|--device|(cuda:0)|~|~|
|~|--project|runs/train|~|~|
|~|--name|project/name|~|~|

- Available pretrained models~
|LEFT:|RIGHT:|RIGHT:|RIGHT:|RIGHT:|RIGHT:|RIGHT:|RIGHT:|RIGHT:|c
|CENTER:Model|CENTER:size&br;(pixels)|CENTER:mAPval&br;50-95|CENTER:mAPval&br;50|CENTER:Speed&br;CPU b1&br;(ms)|CENTER:Speed&br;V100 b1&br;(ms)|CENTER:Speed&br;V100 b32&br;(ms)|CENTER:params&br;(M)|CENTER:FLOPs&br;@640 (B)|h
|YOLOv5n|640|28.0|45.7|45|6.3|0.6|1.9|4.5|
|BGCOLOR(lightyellow):YOLOv5s|BGCOLOR(lightyellow):640|BGCOLOR(lightyellow):37.4|BGCOLOR(lightyellow):56.8|BGCOLOR(lightyellow):98|BGCOLOR(lightyellow):6.4|BGCOLOR(lightyellow):0.9|BGCOLOR(lightyellow):7.2|BGCOLOR(lightyellow):16.5|
|YOLOv5m|640|45.4|64.1|224|8.2|1.7|21.2|49.0|
|YOLOv5l|640|49.0|67.3|430|10.1|2.7|46.5|109.1|
|YOLOv5x|640|50.7|68.9|766|12.1|4.8|86.7|205.7|
 [[&size(12){※ Reproduced from YOLOv5: Pretrained Checkpoints};>https://github.com/ultralytics/yolov5#pretrained-checkpoints]]~
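The "params (M)" column gives a quick way to estimate how large each model's FP32 weights are on their own. This is only a rough lower bound on memory use; training also needs memory for activations, gradients, and optimizer state:

```python
# "params (M)" values copied from the pretrained-checkpoint table.
PARAMS_M = {"YOLOv5n": 1.9, "YOLOv5s": 7.2, "YOLOv5m": 21.2,
            "YOLOv5l": 46.5, "YOLOv5x": 86.7}

def weight_mib(params_millions, bytes_per_param=4):
    """Approximate size of the FP32 weights alone, in MiB."""
    return params_millions * 1e6 * bytes_per_param / 2**20

for name, p in PARAMS_M.items():
    print(f"{name}: ~{weight_mib(p):.1f} MiB of FP32 weights")
```

This makes it plausible that YOLOv5x (about 12x the parameters of YOLOv5s) can overflow a 12 GB card during training, as noted in the timing table below.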

*** Training [#o76421df]
- Training time taken '''GPU: GeForce RTX 4070 Ti 12GB'''~
|LEFT:|CENTER:|CENTER:100|CENTER:100|CENTER:100|CENTER:|c
|CENTER:Dataset|Images|>|>|yolov5s epochs|yolov5x epochs|h
|~|~|30|100|300|100|h
|Traffic Signs Dataset|741|2 min 30 sec|7 min 1 sec|20 min 36 sec||
|Mask Wearing Dataset|853||8 min 35 sec|||
|janken Dataset (rock-paper-scissors)|600||5 min 35 sec||124 min 41 sec ※|
 ※ The test machine has 32 GB of shared GPU memory; training keeps running beyond the 12 GB of dedicated GPU memory (up to the shared size), but speed drops drastically~
  For fast operation the job must fit within the dedicated GPU memory~
  → [[&size(12){※ What is GPU memory (VRAM)? The role of graphics-board memory and how it differs from system memory};>https://btopcs.jp/basic/parts/gpumemory/]]~

- Training time differences by hardware~
|CENTER:|CENTER:80|CENTER:80|CENTER:80|CENTER:80|CENTER:80|CENTER:80|c
|OS|>|GPU|>|>|>|CPU|h
|~|RTX 4070|GTX 1050|i9-13900|i7-1260P|i7-1185G7|i7-6700|h
|Windows11/10|BGCOLOR(lightyellow):2 min 30 sec|BGCOLOR(lightyellow):16 min 48 sec|59 min 17 sec|113 min 46 sec|×|246 min 7 sec|
|Ubuntu22.04/20.04|BGCOLOR(lightyellow):1 min 40 sec|BGCOLOR(lightyellow):×||×|122 min 17 sec|×|
 ・Pretrained model: yolov5s~
 ・Additional training data: Traffic Signs Dataset~
 ・Epochs: 30~
#br

** Details of Errors Addressed [#sf0e6bd6]
*** '''OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.''' [#yfa0947c]
- Error description~
#codeprettify(){{
(py_learn) python train.py --data data/ts_dataset/ts.yaml --weights yolov5s.pt --img 640 --epochs 30
train: weights=yolov5s.pt, cfg=, data=data/ts_dataset/ts.yaml, hyp=data\hyps\hyp.scratch-low.yaml, epochs=30, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data\hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs\train, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False
remote: Enumerating objects: 8, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 8 (delta 3), reused 5 (delta 3), pack-reused 2
Unpacking objects: 100% (8/8), 3.77 KiB | 226.00 KiB/s, done.
From https://github.com/ultralytics/yolov5
   db125a20..ae4ef3b2  master     -> origin/master
github:  YOLOv5 is out of date by 2 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update.
YOLOv5  v7.0-294-gdb125a20 Python-3.11.8 torch-2.2.1+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 12282MiB)
    :
Plotting labels to runs\train\exp\labels.jpg...
OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
}}
#ref(yolov5_train02_m.jpg,right,around,25%,yolov5_train02_m.jpg)
- Fix~
1. Search for "libiomp5md.dll" under C:\Users\<USER>\anaconda3\envs\py_learn~
#codeprettify(){{
"C:\Users\<USER>\anaconda3\envs\py_learn\Lib\site-packages\torch\lib\libiomp5md.dll"
"C:\Users\<USER>\anaconda3\envs\py_learn\Library\bin\libiomp5md.dll"
}}
2. Move the second "libiomp5md.dll" to "temp/" (any location will do)~
~
3. Run again~

- Comment~
This apparently occurs when more than one copy of "libiomp5md.dll" is present~

- Reference sites:~
・https://www.programmersought.com/article/53286415201/~
#clear
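As an alternative to moving the DLL, the error message itself names an unsafe workaround: setting `KMP_DUPLICATE_LIB_OK=TRUE`. It must be in the environment before the OpenMP runtime is loaded, i.e., before `torch` is imported:

```python
import os

# Unsafe workaround quoted in the OMP error message itself: allow
# duplicate OpenMP runtimes to coexist. Intel warns this "may cause
# crashes or silently produce incorrect results", so removing the
# duplicate libiomp5md.dll (as in step 2 above) is the safer fix.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# import torch  # import torch only AFTER the variable is set
```

Putting the assignment at the very top of the training script (or setting the variable in the shell before launching `train.py`) has the same effect.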

*** '''libpng warning: iCCP: Not recognizing known sRGB profile that has been edited''' [#d9d50c4a]
- Error description~
#codeprettify(){{
(py_learn) python train.py --epochs 100 --data data/mask_dataset/mask.yaml --weights yolov5s.pt --name mask_yolov5s
    :
Starting training for 100 epochs...

      Epoch    GPU_mem   box_loss   obj_loss   cls_loss  Instances       Size
       0/99      3.35G     0.1201     0.0557    0.04686        158        640:  12%|█▏        | 5/43 [00:01<00:06,  6.1libpng warning: iCCP: Not recognizing known sRGB profile that has been edited
       0/99      3.35G     0.1187    0.05642     0.0466        156        640:  28%|██▊       | 12/43 [00:01<00:03,  9.libpng warning: iCCP: Not recognizing known sRGB profile that has been edited
       0/99      3.35G     0.1165    0.05713    0.04507        126        640:  40%|███▉      | 17/43 [00:02<00:02,  9.libpng warning: iCCP: Not recognizing known sRGB profile that has been edited
    :
}}
- Fix~
1. Install "ImageMagick"~
 [[Installing the image-processing software ImageMagick 7 and the video-processing software FFmpeg (on Windows)>+https://www.kkaneko.jp/tools/win/imagemagick7.html]]~
2. Verify the installation~
 Place some test image files in any folder (temp/) and run the following from a command prompt~
#codeprettify(){{
> cd temp
> magick mogrify -identify *.png

image01.png PNG 512x366 512x366+0+0 8-bit TrueColor sRGB 329899B 0.007u 0:00.006
image02.png PNG 301x400 301x400+0+0 8-bit TrueColorAlpha sRGB 181960B 0.006u 0:00.005
}}
・[[Getting image information with ImageMagick>+https://imagemagick.biz/archives/916]]~
~
3. Strip the profiles from the "png" files~
#codeprettify(){{
> cd train
> magick mogrify -strip *.png
}}

- Reference sites:~
・[[How to deal with the "warning: iCCP: known incorrect sRGB profile" error from dvipdfmx>+https://qiita.com/takahashim/items/39534bd820f7fd71a5bb]]~
・[[warning: pdflatex libpng warning: iCCP: known incorrect sRGB profile>+https://tex.stackexchange.com/questions/125612/warning-pdflatex-libpng-warning-iccp-known-incorrect-srgb-profile]]~
・[[【ImageMagick】How to batch-process images (format conversion / resizing) from the command line (Windows 11 compatible)>+https://hontabisatori.com/imagemagick-command/]]~

- Comment~
This is said to be a libpng issue; the error stops appearing once the embedded profile and other metadata are removed from the image files~

*** '''github: ⚠️ YOLOv5 is out of date by 6 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update.''' [#n407bb35]
- Error description~
#codeprettify(){{
    :
github: ⚠️ YOLOv5 is out of date by 6 commits. Use 'git pull' or 'git clone https://github.com/ultralytics/yolov5' to update.
YOLOv5  v7.0-297-gd07d0cf6 Python-3.11.7 torch-2.2.0+cu121 CUDA:0 (NVIDIA GeForce RTX 4070 Ti, 11987MiB)
    :
}}
- Fix~
+ Check the remote (GitHub) information~
#codeprettify(){{
(py_learn) git remote -v
origin  https://github.com/ultralytics/yolov5 (fetch)
origin  https://github.com/ultralytics/yolov5 (push)

(py_learn) git branch -vv
* master d07d0cf6 [origin/master: behind 6] Create cla.yml (#12899)
}}
+ Run pull~
#codeprettify(){{
(py_learn) git pull
Updating d07d0cf6..cf8b67b7
Fast-forward
 .github/workflows/merge-main-into-prs.yml | 56 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 pyproject.toml                            |  2 +-
 2 files changed, 57 insertions(+), 1 deletion(-)
 create mode 100644 .github/workflows/merge-main-into-prs.yml
}}

- Reference sites:~
・[[Checking a git remote URL>+https://zenn.dev/iga3/articles/843edd1ab31d02]]~
・[[【Git & GitHub】Checking remote (GitHub) information (the git remote command)>+https://phoeducation.work/entry/20210819/1629327480#google_vignette]]~
・[[What is git pull?>+https://zenn.dev/advn/articles/e102d5f50c4673]]~
・[[Git - Reflecting the latest remote state locally (fetch and pull)>+https://forest-valley17.hatenablog.com/entry/2018/12/06/223807]]~

#br

** Revision History [#ad443145]
- 2024/04/05 First edition
- 2024/04/18 Added training content
#br

* Reference Materials [#da97fc5a]
- [[【Review】Object detection algorithm "YOLO V5">+https://izutsu.aa0.netvolante.jp/pukiwiki/?RevYOLOv5]]~

- YOLO V5~
-- [[YOLOV5 By Ultralytics>+https://pytorch.org/hub/ultralytics_yolov5/]]~
-- [[''PyTorch Hub - Ultralytics YOLOv8 documentation''>+https://docs.ultralytics.com/ja/yolov5/tutorials/pytorch_hub_model_loading/]]~
-- [[YOLOv5 - Ultralytics YOLOv8 documentation>+https://docs.ultralytics.com/ja/models/yolov5/]]~

-- [[【Object Detection】What is YOLO? Running YOLOv5 with PyTorch!>+https://aiacademy.jp/media/?p=2954]]~
-- [[Let's try object detection with YOLOv5>+https://rinsaka.com/python/yolov5/index.html]]~

- YOLO V5 training~
-- [[Object detection and running additional training for object detection (using YOLOv5, PyTorch, and Python) (on Windows)>+https://www.kkaneko.jp/ai/win/yolov5.html]]~
-- [[Trying AI image-recognition object detection with YOLOv5>+https://sakura-system.com/?p=3814]]~
-- [[Object detection with YOLOv5>+https://www.alpha.co.jp/blog/202108_02/]]~
-- [[Explanation of the options available when training YOLOv5>+https://qiita.com/shinya_sun_sun/items/8c368f3024bf5b0d14aa]]~

-- [[Face Mask Detection>+https://github.com/spacewalk01/yolov5-face-mask-detection?tab=readme-ov-file]]~
-- [[【Object Detection 2022】YOLOv7 Summary Part 1: Judging mask wearing in real time>+https://tt-tsukumochi.com/archives/3197]]~

-- [[A summary of additional training for tuning AI to your needs (Part 1: Overview)>+https://note.com/te_ftef/n/n48926bfae747]]~
-- [[What is transfer learning? Differences from fine-tuning and use cases>+https://biz.hipro-job.jp/column/corporation/transfer-learning/]]~
-- [[What is transfer learning? Including how it differs from fine-tuning, often mentioned in AI implementation>+https://aismiley.co.jp/ai_news/transfer-learning/]]~
-- [[Understanding the difference between fine-tuning and transfer learning from how they are implemented>+https://tech.datafluct.com/entry/20220511/1652194800]]~

- ImageMagick~
-- [[Installing the image-processing software ImageMagick 7 and the video-processing software FFmpeg (on Windows)>+https://www.kkaneko.jp/tools/win/imagemagick7.html]]~
-- [[Getting image information with ImageMagick>+https://imagemagick.biz/archives/916]]~
-- [[【ImageMagick】How to batch-process images (format conversion / resizing) from the command line (Windows 11 compatible)>+https://hontabisatori.com/imagemagick-command/]]~

- Dataset~
-- [[Traffic Signs Dataset>+https://www.kaggle.com/datasets/valentynsichkar/traffic-signs-dataset-in-yolo-format?resource=downloa]]~
-- [[Face Mask Detection Dataset>+https://www.kaggle.com/datasets/andrewmvd/face-mask-detectiona]]~
-- [[Mask Wearing Dataset>+https://public.roboflow.com/object-detection/mask-wearing]]~

-- [[Converting PascalVOC-format files to YOLO format>+https://qiita.com/rihib/items/e163d90c009f4fe12782]]~
#br