#author("2023-09-23T18:57:35+00:00","default:mizutu","mizutu")
* Machine Learning with Google Colaboratory [#z1fcd891]
#ref(colab_s.jpg,right,around,40%,colab_s.jpg)
Try "Google Colaboratory" as an environment for running machine learning and deep learning~
Train YOLO V7 on custom data~
The goal is to finish everything, from environment setup through additional training on custom data to inference on a local machine, within one hour~
#contents
#clear
RIGHT:&size(12){※ Last updated: 2023/09/24 };
** Checking Google Colaboratory [#f1005ae5]
Try out "Google Colaboratory", a cloud-based Python environment that is free and easy to use~
*** Create a working area on Google Drive [#n2decc12]
+ Open [[Google Drive>+https://drive.google.com/drive/my-drive]]~
Log in to your Google account if you are not already logged in~
~
+ Create a new "try" folder directly under My Drive~
Right-click in the My Drive area and select "New folder"~
~
#ref(20230910_000001_001m.jpg,left,around,15%,20230910_000001_001m.jpg)
#ref(20230910_000002_001m.jpg,left,around,15%,20230910_000002_001m.jpg)
#ref(20230910_000003_001m.jpg,left,around,15%,20230910_000003_001m.jpg)
#clear
*** Try Google Colaboratory [#q2d16be2]
+ Open Google's [["Welcome to Colaboratory" page>+https://colab.research.google.com//?hl=ja]]~
#ref(20230910_000000_001m.jpg,right,around,15%,20230910_000000_001m.jpg)
~
+ Log in to your Google account if you are not already logged in~
~
+ Select "New notebook" from the "File" menu or from the button in the dialog~
#ref(20230910_000005_001m.jpg,left,around,15%,20230910_000005_001m.jpg)
#ref(20230910_000006_001m.jpg,left,around,15%,20230910_000006_001m.jpg)
#clear
~
+ ''Change the title at the top left (Untitled*) to "yolov7_custom"''~
~
+ Enter code in a cell and run it by clicking the triangle button on the left or pressing "Ctrl" + "Enter"~
#ref(20230910_000011_001m.jpg,right,around,20%,20230910_000011_001m.jpg)
#codeprettify(){{
print("hello colaboratory!")
}}
The result appears below the input cell, giving you an instant Python execution environment~
~
&color(green){''※ How to run commands in a notebook''};
++ If no cell is shown, press the "+ Code" menu to add one~
++ Type a command into the cell and click the triangle button or press "Ctrl" + "Enter"~
++ ''Prefixing a command with "!" sends it to the OS command line instead of Python''~
#clear
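The "!" prefix is a notebook-specific convenience; in plain Python the same effect can be approximated with the standard `subprocess` module. A minimal sketch (not Colab's actual mechanism, just an illustration):

```python
import subprocess

# Rough plain-Python equivalent of a notebook cell containing: !echo hello
result = subprocess.run(["echo", "hello"], capture_output=True, text=True)
print(result.stdout.strip())  # the command's standard output
```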
*** Set up (check) the Google Colaboratory runtime [#yabd4487]
#ref(20230910_000012_001m.jpg,right,around,20%,20230910_000012_001m.jpg)
+ From the menu choose "Runtime" → "Change runtime type"~
#ref(20230910_000014_001m.jpg,right,around,20%,20230910_000014_001m.jpg)
~
+ In the dialog that appears, select "GPU"~
&color(red){Make sure the GPU is enabled};~
~
+ Check the hardware specs~
~
++ Install "lshw"~
#codeprettify(){{
!apt install lshw
}}
++ Run "lshw" to collect hardware information~
#codeprettify(){{
!lshw
}}
・Output
:
*-memory
description: System memory
physical id: 0
size: 12GiB
*-cpu
product: Intel(R) Xeon(R) CPU @ 2.20GHz
vendor: Intel Corp.
physical id: 1
bus info: cpu@0
width: 64 bits
:
*-display
description: 3D controller
product: TU104GL [Tesla T4]
vendor: NVIDIA Corporation
physical id: 4
bus info: pci@0000:00:04.0
version: a1
width: 64 bits
clock: 33MHz
:
※ This confirms a runtime with "CPU: Intel(R) Xeon(R) 2.20GHz", "memory: 12GB", and "GPU: Tesla T4"~
#clear
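If you want to pull the key specs out of the lshw dump programmatically, a small regex over its text is enough. A hedged sketch against output shaped like the log above (the field layout is assumed to follow lshw's usual `*-section` / `field: value` format):

```python
import re

# Sample fragment shaped like the lshw output above
lshw_text = """
 *-memory
      description: System memory
      size: 12GiB
 *-cpu
      product: Intel(R) Xeon(R) CPU @ 2.20GHz
 *-display
      product: TU104GL [Tesla T4]
      vendor: NVIDIA Corporation
"""

def find_field(section, field, text):
    # Grab the first "field: value" line that appears after the "*-section" marker
    m = re.search(rf"\*-{section}.*?{field}:\s*([^\n]+)", text, re.DOTALL)
    return m.group(1).strip() if m else None

print(find_field("memory", "size", lshw_text))      # 12GiB
print(find_field("cpu", "product", lshw_text))      # Intel(R) Xeon(R) CPU @ 2.20GHz
print(find_field("display", "product", lshw_text))  # TU104GL [Tesla T4]
```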
*** Mount Google Drive in Google Colaboratory [#ncda19fc]
#ref(20230910_000020_001m.jpg,right,around,20%,20230910_000020_001m.jpg)
+ On the [[Colaboratory>+https://colab.research.google.com//?hl=ja]] page, click "Files" ① on the left~
~
+ Click "Mount Drive" ② at the top left~
~
+ If a dialog appears, click "Connect to Google Drive"~
#ref(20230910_000021_001m.jpg,right,around,20%,20230910_000021_001m.jpg)
~
+ When mounting completes, a "drive" folder ③ appears~
Confirm that the "try" folder created earlier is visible~
~
Reference → [[Mounting Google Drive in Google Colab>+https://kenko-keep.com/google-colab-mount/]]~
#clear
** Setting up "YOLO V7" on Google Colaboratory [#ie4fb451]
Build an environment on the cloud-based Python environment "Google Colaboratory" that can run "YOLO V7"~
*** Install the object-detection AI "YOLO V7" [#y0ea8e37]
+ Change the current directory to "MyDrive/try"~
#codeprettify(){{
cd /content/drive/MyDrive/try
}}
・Output
/content/drive/MyDrive/try
※ Command to check the current directory~
#codeprettify(){{
!pwd
}}
~
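Inside a notebook, `cd` is a built-in magic; in an ordinary Python script the same directory handling is done with the standard `os` module. A sketch using a temporary directory (rather than the Drive path, which only exists on Colab):

```python
import os
import tempfile

# Plain-Python equivalent of "cd <dir>" / "!pwd"
with tempfile.TemporaryDirectory() as workdir:
    previous = os.getcwd()
    os.chdir(workdir)      # cd workdir
    print(os.getcwd())     # pwd
    os.chdir(previous)     # change back so cleanup succeeds
```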
+ Clone the project from the [[YOLO v7 official site>+https://github.com/WongKinYiu/yolov7]] with the command below~
#codeprettify(){{
!git clone https://github.com/WongKinYiu/yolov7
}}
#divregion( - log - '''GoogleColab Tesla T4''')
#codeprettify(){{
Cloning into 'yolov7'...
remote: Enumerating objects: 1154, done.
remote: Counting objects: 100% (15/15), done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 1154 (delta 8), reused 13 (delta 7), pack-reused 1139
Receiving objects: 100% (1154/1154), 70.42 MiB | 17.19 MiB/s, done.
Resolving deltas: 100% (494/494), done.
Updating files: 100% (104/104), done.
}}
#enddivregion
~
・On success, a "yolov7" directory is created under "/MyDrive/try" on Google Drive~
~
+ Change the current directory to "yolov7"~
#codeprettify(){{
cd yolov7
}}
~
#ref(20230911_000001_001m.jpg,right,around,15%,20230911_000001_001m.jpg)
+ Download a pretrained model~
・YOLOv7 provides a number of pretrained models; here we use "yolov7-tiny.pt"~
#codeprettify(){{
!wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt
}}
#divregion( - log - '''GoogleColab Tesla T4''')
#codeprettify(){{
--2023-09-11 00:32:33-- https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt
Resolving github.com (github.com)... 140.82.113.3
Connecting to github.com (github.com)|140.82.113.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/511187726/ba7d01ee-125a-4134-8864-fa1abcbf94d5?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230911%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230911T003233Z&X-Amz-Expires=300&X-Amz-Signature=e69beb9fd21b30dd684c024a58df89cf2d7402d3d85998f3cee7397cd2680e1b&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=511187726&response-content-disposition=attachment%3B%20filename%3Dyolov7-tiny.pt&response-content-type=application%2Foctet-stream [following]
--2023-09-11 00:32:33-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/511187726/ba7d01ee-125a-4134-8864-fa1abcbf94d5?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230911%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230911T003233Z&X-Amz-Expires=300&X-Amz-Signature=e69beb9fd21b30dd684c024a58df89cf2d7402d3d85998f3cee7397cd2680e1b&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=511187726&response-content-disposition=attachment%3B%20filename%3Dyolov7-tiny.pt&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.109.133, 185.199.108.133, 185.199.110.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12639769 (12M) [application/octet-stream]
Saving to: ‘yolov7-tiny.pt’
yolov7-tiny.pt 100%[===================>] 12.05M 34.6MB/s in 0.3s
2023-09-11 00:32:33 (34.6 MB/s) - ‘yolov7-tiny.pt’ saved [12639769/12639769]
}}
#enddivregion
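As a quick sanity check, the "12.05M" that wget reports is just the byte count from the `Length:` line scaled to MiB:

```python
# Size shown in the wget log: 12639769 bytes reported as "12.05M" (MiB)
size_bytes = 12639769          # from the "Length:" line
size_mib = size_bytes / 2**20  # bytes -> MiB
print(round(size_mib, 2))      # 12.05
```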
*** Object detection on the sample images [#efbd893d]
+ Run object detection on the sample images~
#codeprettify(){{
!python detect.py --source inference/images/ --weights yolov7-tiny.pt --conf 0.25 --img-size 1280 --device 0
}}
#divregion( - log - '''GoogleColab Tesla T4''')
#codeprettify(){{
Namespace(weights=['yolov7-tiny.pt'], source='inference/images/', img_size=1280, conf_thres=0.25, iou_thres=0.45, device='0', view_img=False, save_txt=False, save_conf=False, nosave=False, classes=None, agnostic_nms=False, augment=False, update=False, project='runs/detect', name='exp', exist_ok=False, no_trace=False)
YOLOR 🚀 v0.1-126-g84932d7 torch 2.0.1+cu118 CUDA:0 (Tesla T4, 15101.8125MB)
Fusing layers...
Model Summary: 200 layers, 6219709 parameters, 229245 gradients
Convert model to Traced-model...
traced_script_module saved!
model is traced!
/usr/local/lib/python3.10/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
4 persons, 1 bus, Done. (13.4ms) Inference, (31.8ms) NMS
The image with the result is saved in: runs/detect/exp/bus.jpg
5 horses, Done. (12.3ms) Inference, (1.8ms) NMS
The image with the result is saved in: runs/detect/exp/horses.jpg
3 persons, Done. (15.3ms) Inference, (1.9ms) NMS
The image with the result is saved in: runs/detect/exp/image1.jpg
2 persons, Done. (12.9ms) Inference, (1.2ms) NMS
The image with the result is saved in: runs/detect/exp/image2.jpg
1 dog, 1 horse, Done. (12.6ms) Inference, (1.3ms) NMS
The image with the result is saved in: runs/detect/exp/image3.jpg
2 persons, 2 ties, Done. (10.9ms) Inference, (1.5ms) NMS
The image with the result is saved in: runs/detect/exp/zidane.jpg
Done. (1.881s)
}}
#enddivregion
~
#ref(20230911_000002_001m.jpg,left,around,12%,20230911_000002_001m.jpg)
~
・The sample images in "yolov7/inference/images" are specified with the "--source" option~
・Results are saved under "yolov7/runs/detect/exp*" (* is a number assigned automatically)~
・The result images can be viewed (or downloaded) from the Google Colab file panel or from Google Drive~
#clear
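Because every run creates a fresh `exp*` folder, picking up the latest results from a script means sorting those directories numerically. A minimal sketch of that logic (demonstrated on a throwaway directory tree, since the real `runs/detect` path only exists after a run; the bare-`exp`-means-run-1 convention is assumed from how detect.py names its output):

```python
import re
import tempfile
from pathlib import Path
from typing import Optional

def latest_exp_dir(detect_root: Path) -> Optional[Path]:
    """Return the most recent exp* directory (exp, exp2, exp3, ...)."""
    def run_number(d: Path) -> int:
        m = re.fullmatch(r"exp(\d*)", d.name)
        return int(m.group(1) or 1) if m else -1  # bare "exp" counts as run 1
    candidates = [d for d in detect_root.iterdir()
                  if d.is_dir() and run_number(d) > 0]
    return max(candidates, key=run_number, default=None)

# Demonstrate on a throwaway directory tree
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for name in ("exp", "exp2", "exp10"):
        (root / name).mkdir()
    print(latest_exp_dir(root).name)  # exp10
```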
・Output images (6)~
#ref(YOLOv7_Colab/20230510_bus_m.jpg,left,around,18%,20230510_bus_m.jpg)
#ref(YOLOv7_Colab/20230510_horses_m.jpg,left,around,25%,20230510_horses_m.jpg)
#ref(YOLOv7_Colab/20230510_image1_m.jpg,left,around,25%,20230510_image1_m.jpg)
#ref(YOLOv7_Colab/20230510_image2_m.jpg,left,around,25%,20230510_image2_m.jpg)
#ref(YOLOv7_Colab/20230510_image3_m.jpg,left,around,30%,20230510_image3_m.jpg)
#ref(YOLOv7_Colab/20230510_zidane_m.jpg,left,around,20%,20230510_zidane_m.jpg)
#clear
#ref(20230911_000004_001m.jpg,right,around,12%,20230911_000004_001m.jpg)
#ref(20230911_000003_001m.jpg,right,around,12%,20230911_000003_001m.jpg)
*** Disconnect the runtime [#b760ff63]
+ From the menu bar choose "Runtime" → "Disconnect and delete runtime"~
~
+ Click "Yes" in the dialog~
~
※ ''When a GPU is selected, it is better to disconnect while you are not actively running anything, since GPU usage is time-limited''~
#clear
** Training on custom data [#xac89dce]
Run additional training using the "janken4_dataset" dataset created in [[Training on custom data 4: "Judging rock-paper-scissors (part 3)">YOLOv7_Colab5]]~
The base model is "yolov7-tiny.pt"~
*** Preparation (upload the dataset) [#p4d8bb5b]
+ Open [[Google Drive>+https://drive.google.com/drive/my-drive]]~
~
#ref(20230911_000005_001m.jpg,right,around,20%,20230911_000005_001m.jpg)
+ Upload the dataset folder "yolov7-main/data/janken4_dataset" as a whole to "MyDrive/try/yolov7/data" on Google Drive~
#codeprettify(){{
yolov7
┗ data
┗ janken4_dataset
}}
~
+ Upload the yaml file "yolov7-main/janken4_dataset.yaml", which describes the information needed for training, directly under "MyDrive/try/yolov7"~
#codeprettify(){{
yolov7
┠ data
┃ ┗ janken4_dataset
┗ janken4_dataset.yaml
}}
~
+ The upload takes a few minutes; confirm the "Upload complete" message at the bottom right before proceeding~
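The exact contents of janken4_dataset.yaml are not reproduced on this page; judging from the training log (nc=3, train/valid paths under data/janken4_dataset, classes goo/choki/par), a minimal YOLOv7-style dataset yaml would look roughly like this sketch (paths and class order are assumptions):

```yaml
# Sketch only; the real janken4_dataset.yaml may differ
train: data/janken4_dataset/train
val: data/janken4_dataset/valid

nc: 3                          # number of classes
names: ['goo', 'choki', 'par'] # rock, scissors, paper
```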
*** Run training on Google Colaboratory [#c47db155]
#ref(20230910_000020_001m.jpg,right,around,20%,20230910_000020_001m.jpg)
+ Open [[Colaboratory>+https://colab.research.google.com/?hl=ja]] and select the notebook "yolov7_custom.ipynb"~
~
+ Mount [[Google Drive>+https://drive.google.com/drive/my-drive]] by selecting "Files" → "Mount Drive" in the left sidebar~
~
+ Change the current directory to "MyDrive/try/yolov7"~
#codeprettify(){{
cd /content/drive/MyDrive/try/yolov7
}}
/content/drive/MyDrive/try/yolov7
+ ''Train on "janken4_dataset" with the command below (yolov7-tiny)''~
&color(green){To keep the training time short while still getting good results, train with model = yolov7-tiny, batch size = 16, and 50 epochs};~
#codeprettify(){{
!python train.py --workers 8 --batch-size 16 --data janken4_dataset.yaml --cfg cfg/training/yolov7-tiny.yaml --weights 'yolov7-tiny.pt' --name yolov7-tiny_jk4 --hyp data/hyp.scratch.tiny.yaml --epochs 50 --device 0
}}
#divregion( - log - '''GoogleColab Tesla T4''')
#codeprettify(){{
2023-09-11 04:53:56.828622: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-09-11 04:53:57.729827: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
YOLOR 🚀 v0.1-126-g84932d7 torch 2.0.1+cu118 CUDA:0 (Tesla T4, 15101.8125MB)
Namespace(weights='yolov7-tiny.pt', cfg='cfg/training/yolov7-tiny.yaml', data='janken4_dataset.yaml', hyp='data/hyp.scratch.tiny.yaml', epochs=50, batch_size=16, img_size=[640, 640], rect=False, resume=False, nosave=False, notest=False, noautoanchor=False, evolve=False, bucket='', cache_images=False, image_weights=False, device='0', multi_scale=False, single_cls=False, adam=False, sync_bn=False, local_rank=-1, workers=8, project='runs/train', entity=None, name='yolov7-tiny_jk4', exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias='latest', freeze=[0], v5_metric=False, world_size=1, global_rank=-1, save_dir='runs/train/yolov7-tiny_jk4', total_batch_size=16)
tensorboard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/
hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.05, copy_paste=0.0, paste_in=0.05, loss_ota=1
wandb: Install Weights & Biases for YOLOR logging with 'pip install wandb' (recommended)
Overriding model.yaml nc=80 with nc=3
from n params module arguments
0 -1 1 928 models.common.Conv [3, 32, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
2 -1 1 2112 models.common.Conv [64, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
3 -2 1 2112 models.common.Conv [64, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
4 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
5 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
6 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
7 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
8 -1 1 0 models.common.MP []
9 -1 1 4224 models.common.Conv [64, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
10 -2 1 4224 models.common.Conv [64, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
11 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
12 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
13 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
15 -1 1 0 models.common.MP []
16 -1 1 16640 models.common.Conv [128, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
17 -2 1 16640 models.common.Conv [128, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
18 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
19 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
20 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
21 -1 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
22 -1 1 0 models.common.MP []
23 -1 1 66048 models.common.Conv [256, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
24 -2 1 66048 models.common.Conv [256, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
25 -1 1 590336 models.common.Conv [256, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
26 -1 1 590336 models.common.Conv [256, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
27 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
28 -1 1 525312 models.common.Conv [1024, 512, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
29 -1 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
30 -2 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
31 -1 1 0 models.common.SP [5]
32 -2 1 0 models.common.SP [9]
33 -3 1 0 models.common.SP [13]
34 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
35 -1 1 262656 models.common.Conv [1024, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
36 [-1, -7] 1 0 models.common.Concat [1]
37 -1 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
38 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
39 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
40 21 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
41 [-1, -2] 1 0 models.common.Concat [1]
42 -1 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
43 -2 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
44 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
45 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
46 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
47 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
48 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
49 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
50 14 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
51 [-1, -2] 1 0 models.common.Concat [1]
52 -1 1 4160 models.common.Conv [128, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
53 -2 1 4160 models.common.Conv [128, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
54 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
55 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
56 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
57 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
58 -1 1 73984 models.common.Conv [64, 128, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
59 [-1, 47] 1 0 models.common.Concat [1]
60 -1 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
61 -2 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
62 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
63 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
64 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
65 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
66 -1 1 295424 models.common.Conv [128, 256, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
67 [-1, 37] 1 0 models.common.Concat [1]
68 -1 1 65792 models.common.Conv [512, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
69 -2 1 65792 models.common.Conv [512, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
70 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
71 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
72 [-1, -2, -3, -4] 1 0 models.common.Concat [1]
73 -1 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
74 57 1 73984 models.common.Conv [64, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
75 65 1 295424 models.common.Conv [128, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
76 73 1 1180672 models.common.Conv [256, 512, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
77 [74, 75, 76] 1 22544 models.yolo.IDetect [3, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 263 layers, 6020400 parameters, 6020400 gradients
Transferred 330/344 items from yolov7-tiny.pt
Scaled weight_decay = 0.0005
Optimizer groups: 58 .bias, 58 conv.weight, 61 other
train: Scanning 'data/janken4_dataset/train/labels.cache' images and labels... 480 found, 0 missing, 0 empty, 0 corrupted: 100% 480/480 [00:00<?, ?it/s]
val: Scanning 'data/janken4_dataset/valid/labels.cache' images and labels... 120 found, 0 missing, 0 empty, 0 corrupted: 100% 120/120 [00:00<?, ?it/s]
autoanchor: Analyzing anchors... anchors/target = 3.13, Best Possible Recall (BPR) = 1.0000
Image sizes 640 train, 640 test
Using 2 dataloader workers
Logging results to runs/train/yolov7-tiny_jk4
Starting training for 50 epochs...
Epoch gpu_mem box obj cls total labels img_size
0/49 2.82G 0.05163 0.03024 0.03289 0.1148 40 640: 100% 30/30 [00:48<00:00, 1.61s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 0% 0/4 [00:00<?, ?it/s]/usr/local/lib/python3.10/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:04<00:00, 1.17s/it]
all 120 120 0.0294 0.0833 0.0121 0.0019
Epoch gpu_mem box obj cls total labels img_size
1/49 2.76G 0.04142 0.01522 0.0269 0.08355 39 640: 100% 30/30 [00:24<00:00, 1.21it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.12it/s]
all 120 120 0.0508 0.267 0.0419 0.0107
Epoch gpu_mem box obj cls total labels img_size
2/49 3.03G 0.04004 0.01346 0.02455 0.07805 43 640: 100% 30/30 [00:25<00:00, 1.20it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.44it/s]
all 120 120 0.248 0.475 0.269 0.0753
Epoch gpu_mem box obj cls total labels img_size
3/49 3.03G 0.03464 0.01214 0.02277 0.06955 40 640: 100% 30/30 [00:25<00:00, 1.20it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.86it/s]
all 120 120 0.281 0.6 0.344 0.136
Epoch gpu_mem box obj cls total labels img_size
4/49 3.03G 0.03583 0.01056 0.02009 0.06647 32 640: 100% 30/30 [00:24<00:00, 1.21it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.00it/s]
all 120 120 0.289 0.658 0.409 0.173
Epoch gpu_mem box obj cls total labels img_size
5/49 3.03G 0.03825 0.01048 0.02177 0.0705 44 640: 100% 30/30 [00:25<00:00, 1.16it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.28it/s]
all 120 120 0.22 0.308 0.177 0.0539
Epoch gpu_mem box obj cls total labels img_size
6/49 3.04G 0.0388 0.01073 0.02577 0.0753 38 640: 100% 30/30 [00:25<00:00, 1.17it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.96it/s]
all 120 120 0.303 0.625 0.391 0.144
Epoch gpu_mem box obj cls total labels img_size
7/49 3.04G 0.04666 0.009941 0.02295 0.07955 45 640: 100% 30/30 [00:25<00:00, 1.20it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.07it/s]
all 120 120 0.415 0.35 0.366 0.139
Epoch gpu_mem box obj cls total labels img_size
8/49 3.04G 0.03045 0.009827 0.01737 0.05764 43 640: 100% 30/30 [00:23<00:00, 1.26it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.44it/s]
all 120 120 0.317 0.4 0.291 0.125
Epoch gpu_mem box obj cls total labels img_size
9/49 3.04G 0.03207 0.009744 0.01779 0.05961 40 640: 100% 30/30 [00:22<00:00, 1.31it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.40it/s]
all 120 120 0.393 0.514 0.464 0.241
Epoch gpu_mem box obj cls total labels img_size
10/49 3.04G 0.04525 0.009894 0.02294 0.07808 43 640: 100% 30/30 [00:23<00:00, 1.26it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.45it/s]
all 120 120 0.389 0.442 0.371 0.153
Epoch gpu_mem box obj cls total labels img_size
11/49 3.04G 0.03351 0.01057 0.0189 0.06298 35 640: 100% 30/30 [00:23<00:00, 1.27it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.62it/s]
all 120 120 0.325 0.3 0.334 0.126
Epoch gpu_mem box obj cls total labels img_size
12/49 3.04G 0.03801 0.009947 0.01937 0.06733 33 640: 100% 30/30 [00:23<00:00, 1.29it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.17it/s]
all 120 120 0.587 0.585 0.616 0.271
Epoch gpu_mem box obj cls total labels img_size
13/49 3.04G 0.0348 0.01007 0.01838 0.06325 41 640: 100% 30/30 [00:24<00:00, 1.22it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.79it/s]
all 120 120 0.0701 0.183 0.0393 0.00638
Epoch gpu_mem box obj cls total labels img_size
14/49 3.04G 0.03803 0.01172 0.0212 0.07095 39 640: 100% 30/30 [00:24<00:00, 1.21it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.12it/s]
all 120 120 0.423 0.408 0.314 0.111
Epoch gpu_mem box obj cls total labels img_size
15/49 3.04G 0.03323 0.0099 0.01628 0.05941 31 640: 100% 30/30 [00:23<00:00, 1.26it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.07it/s]
all 120 120 0.553 0.808 0.746 0.39
Epoch gpu_mem box obj cls total labels img_size
16/49 3.04G 0.0414 0.009267 0.01829 0.06896 34 640: 100% 30/30 [00:24<00:00, 1.25it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.13it/s]
all 120 120 0.438 0.5 0.452 0.223
Epoch gpu_mem box obj cls total labels img_size
17/49 3.04G 0.0422 0.009625 0.01798 0.06981 40 640: 100% 30/30 [00:25<00:00, 1.19it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.19it/s]
all 120 120 0.627 0.8 0.796 0.376
Epoch gpu_mem box obj cls total labels img_size
18/49 3.04G 0.03946 0.01036 0.01854 0.06837 43 640: 100% 30/30 [00:25<00:00, 1.17it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.12it/s]
all 120 120 0.416 0.515 0.435 0.168
Epoch gpu_mem box obj cls total labels img_size
19/49 3.04G 0.03422 0.009949 0.01627 0.06044 45 640: 100% 30/30 [00:24<00:00, 1.20it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.99it/s]
all 120 120 0.673 0.697 0.762 0.424
Epoch gpu_mem box obj cls total labels img_size
20/49 3.04G 0.03836 0.0101 0.01839 0.06684 32 640: 100% 30/30 [00:24<00:00, 1.20it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.90it/s]
all 120 120 0.511 0.725 0.647 0.299
Epoch gpu_mem box obj cls total labels img_size
21/49 3.04G 0.0348 0.009871 0.01717 0.06184 45 640: 100% 30/30 [00:25<00:00, 1.17it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.29it/s]
all 120 120 0.501 0.57 0.53 0.164
Epoch gpu_mem box obj cls total labels img_size
22/49 3.04G 0.03335 0.009804 0.01746 0.06061 43 640: 100% 30/30 [00:25<00:00, 1.17it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.91it/s]
all 120 120 0.552 0.692 0.69 0.368
Epoch gpu_mem box obj cls total labels img_size
23/49 3.04G 0.04454 0.009487 0.01796 0.07199 40 640: 100% 30/30 [00:25<00:00, 1.18it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.32it/s]
all 120 120 0.696 0.907 0.879 0.421
Epoch gpu_mem box obj cls total labels img_size
24/49 3.04G 0.03541 0.009314 0.01514 0.05986 45 640: 100% 30/30 [00:25<00:00, 1.19it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.30it/s]
all 120 120 0.749 0.892 0.877 0.522
Epoch gpu_mem box obj cls total labels img_size
25/49 3.04G 0.0346 0.008935 0.01424 0.05777 38 640: 100% 30/30 [00:24<00:00, 1.23it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.21it/s]
all 120 120 0.737 0.865 0.873 0.473
Epoch gpu_mem box obj cls total labels img_size
26/49 3.04G 0.03659 0.009053 0.01299 0.05863 41 640: 100% 30/30 [00:25<00:00, 1.18it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.25it/s]
all 120 120 0.879 0.85 0.938 0.536
Epoch gpu_mem box obj cls total labels img_size
27/49 3.04G 0.03182 0.009148 0.01238 0.05335 40 640: 100% 30/30 [00:25<00:00, 1.18it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.43it/s]
all 120 120 0.839 0.924 0.946 0.522
Epoch gpu_mem box obj cls total labels img_size
28/49 3.04G 0.03359 0.008742 0.0132 0.05553 39 640: 100% 30/30 [00:26<00:00, 1.14it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.75it/s]
all 120 120 0.9 0.93 0.968 0.607
Epoch gpu_mem box obj cls total labels img_size
29/49 3.04G 0.02708 0.008494 0.01253 0.04811 44 640: 100% 30/30 [00:24<00:00, 1.24it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.36it/s]
all 120 120 0.942 0.939 0.977 0.586
Epoch gpu_mem box obj cls total labels img_size
30/49 3.04G 0.02979 0.008439 0.01063 0.04886 49 640: 100% 30/30 [00:25<00:00, 1.16it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.30it/s]
all 120 120 0.774 0.882 0.879 0.428
Epoch gpu_mem box obj cls total labels img_size
31/49 3.04G 0.03019 0.008725 0.01339 0.0523 34 640: 100% 30/30 [00:26<00:00, 1.15it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.16it/s]
all 120 120 0.793 0.887 0.914 0.54
Epoch gpu_mem box obj cls total labels img_size
32/49 3.04G 0.03222 0.009168 0.01255 0.05395 33 640: 100% 30/30 [00:25<00:00, 1.17it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.26it/s]
all 120 120 0.872 0.908 0.934 0.594
Epoch gpu_mem box obj cls total labels img_size
33/49 3.04G 0.0282 0.008521 0.0134 0.05012 38 640: 100% 30/30 [00:24<00:00, 1.24it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.64it/s]
all 120 120 0.916 0.946 0.974 0.619
Epoch gpu_mem box obj cls total labels img_size
34/49 3.04G 0.02161 0.008843 0.01127 0.04173 40 640: 100% 30/30 [00:24<00:00, 1.25it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.58it/s]
all 120 120 0.926 0.967 0.979 0.653
Epoch gpu_mem box obj cls total labels img_size
35/49 3.04G 0.02493 0.00863 0.01233 0.04589 40 640: 100% 30/30 [00:25<00:00, 1.18it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.76it/s]
all 120 120 0.958 0.952 0.98 0.614
Epoch gpu_mem box obj cls total labels img_size
36/49 3.04G 0.02508 0.008566 0.01639 0.05004 26 640: 100% 30/30 [00:24<00:00, 1.24it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.42it/s]
all 120 120 0.951 0.975 0.986 0.617
Epoch gpu_mem box obj cls total labels img_size
37/49 3.04G 0.02741 0.008481 0.01651 0.0524 31 640: 100% 30/30 [00:23<00:00, 1.27it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.43it/s]
all 120 120 0.981 0.956 0.985 0.67
Epoch gpu_mem box obj cls total labels img_size
38/49 3.04G 0.02756 0.008425 0.01379 0.04977 35 640: 100% 30/30 [00:23<00:00, 1.26it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.87it/s]
all 120 120 0.979 0.967 0.984 0.65
Epoch gpu_mem box obj cls total labels img_size
39/49 3.04G 0.02563 0.009084 0.01572 0.05044 40 640: 100% 30/30 [00:23<00:00, 1.26it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.65it/s]
all 120 120 0.95 0.975 0.984 0.639
Epoch gpu_mem box obj cls total labels img_size
40/49 3.04G 0.02511 0.008971 0.01684 0.05092 42 640: 100% 30/30 [00:24<00:00, 1.24it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.43it/s]
all 120 120 0.966 0.975 0.985 0.658
Epoch gpu_mem box obj cls total labels img_size
41/49 3.04G 0.02154 0.009098 0.01421 0.04485 37 640: 100% 30/30 [00:25<00:00, 1.18it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.66it/s]
all 120 120 0.988 0.95 0.985 0.68
Epoch gpu_mem box obj cls total labels img_size
42/49 3.04G 0.02811 0.008275 0.01537 0.05175 34 640: 100% 30/30 [00:23<00:00, 1.26it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.26it/s]
all 120 120 0.989 0.966 0.986 0.674
Epoch gpu_mem box obj cls total labels img_size
43/49 3.04G 0.02775 0.008882 0.0157 0.05233 36 640: 100% 30/30 [00:24<00:00, 1.24it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.61it/s]
all 120 120 0.981 0.958 0.988 0.656
Epoch gpu_mem box obj cls total labels img_size
44/49 3.04G 0.02123 0.008521 0.01461 0.04436 40 640: 100% 30/30 [00:25<00:00, 1.18it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.08it/s]
all 120 120 0.992 0.961 0.989 0.68
Epoch gpu_mem box obj cls total labels img_size
45/49 3.04G 0.02376 0.008377 0.0143 0.04644 37 640: 100% 30/30 [00:25<00:00, 1.20it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 2.00it/s]
all 120 120 0.965 0.975 0.988 0.701
Epoch gpu_mem box obj cls total labels img_size
46/49 3.04G 0.0289 0.008536 0.01608 0.05352 38 640: 100% 30/30 [00:25<00:00, 1.18it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.09it/s]
all 120 120 0.96 0.959 0.987 0.699
Epoch gpu_mem box obj cls total labels img_size
47/49 3.04G 0.02046 0.00847 0.01375 0.04268 40 640: 100% 30/30 [00:25<00:00, 1.17it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.24it/s]
all 120 120 0.986 0.957 0.987 0.708
Epoch gpu_mem box obj cls total labels img_size
48/49 3.04G 0.02173 0.008446 0.01309 0.04327 41 640: 100% 30/30 [00:24<00:00, 1.22it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.04it/s]
all 120 120 0.971 0.975 0.989 0.71
Epoch gpu_mem box obj cls total labels img_size
49/49 3.04G 0.02241 0.008406 0.0145 0.04532 30 640: 100% 30/30 [00:25<00:00, 1.19it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:04<00:00, 1.01s/it]
all 120 120 0.977 0.975 0.989 0.709
goo 120 40 0.962 1 0.993 0.687
choki 120 40 0.969 0.95 0.98 0.659
par 120 40 1 0.975 0.992 0.782
50 epochs completed in 0.403 hours.
Optimizer stripped from runs/train/yolov7-tiny_jk4/weights/last.pt, 12.3MB
Optimizer stripped from runs/train/yolov7-tiny_jk4/weights/best.pt, 12.3MB
}}
#enddivregion
~
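The final-epoch table above reports per-class precision (P) and recall (R); the corresponding F1 scores follow directly from the harmonic mean. A minimal Python sketch, with the values copied from the log above:

```python
# F1 = 2*P*R / (P+R), computed from the final-epoch per-class metrics
# (goo / choki / par) reported in the training log above.
metrics = {
    "goo":   (0.962, 1.0),
    "choki": (0.969, 0.95),
    "par":   (1.0,   0.975),
}

def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

for name, (p, r) in metrics.items():
    print(f"{name}: F1 = {f1(p, r):.3f}")
```

This is the same quantity plotted in the F1 curve image saved with the training results.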
#ref(20230911_000006_001m.jpg,right,around,15%,20230911_000006_001m.jpg)
+ When training finishes normally (in roughly 30 minutes), the results are saved under 「yolov7/runs/train/yolov7-tiny_jk4/」~
~
+ Install the onnx package~
#codeprettify(){{
!pip install onnx
}}
#divregion( - log - '''GoogleColab Tesla T4''')
#codeprettify(){{
Collecting onnx
Downloading onnx-1.14.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (14.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.6/14.6 MB 37.4 MB/s eta 0:00:00
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from onnx) (1.23.5)
Requirement already satisfied: protobuf>=3.20.2 in /usr/local/lib/python3.10/dist-packages (from onnx) (3.20.3)
Requirement already satisfied: typing-extensions>=3.6.2.1 in /usr/local/lib/python3.10/dist-packages (from onnx) (4.5.0)
Installing collected packages: onnx
Successfully installed onnx-1.14.1
}}
#enddivregion
~
+ Convert the trained model 「runs/train/yolov7-tiny_jk4/weights/best.pt」 to ONNX format~
#codeprettify(){{
!python export.py --weights runs/train/yolov7-tiny_jk4/weights/best.pt
}}
#divregion( - log - '''GoogleColab Tesla T4''')
#codeprettify(){{
Import onnx_graphsurgeon failure: No module named 'onnx_graphsurgeon'
Namespace(weights='runs/train/yolov7-tiny_jk4/weights/best.pt', img_size=[640, 640], batch_size=1, dynamic=False, dynamic_batch=False, grid=False, end2end=False, max_wh=None, topk_all=100, iou_thres=0.45, conf_thres=0.25, device='cpu', simplify=False, include_nms=False, fp16=False, int8=False)
YOLOR 🚀 v0.1-126-g84932d7 torch 2.0.1+cu118 CPU
Fusing layers...
IDetect.fuse
Model Summary: 208 layers, 6013008 parameters, 0 gradients
Starting TorchScript export with torch 2.0.1+cu118...
TorchScript export success, saved as runs/train/yolov7-tiny_jk4/weights/best.torchscript.pt
CoreML export failure: No module named 'coremltools'
Starting TorchScript-Lite export with torch 2.0.1+cu118...
TorchScript-Lite export success, saved as runs/train/yolov7-tiny_jk4/weights/best.torchscript.ptl
Starting ONNX export with onnx 1.14.1...
/content/drive/MyDrive/try/yolov7/models/yolo.py:582: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if augment:
/content/drive/MyDrive/try/yolov7/models/yolo.py:614: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if profile:
/content/drive/MyDrive/try/yolov7/models/yolo.py:629: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if profile:
============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
ONNX export success, saved as runs/train/yolov7-tiny_jk4/weights/best.onnx
Export complete (8.19s). Visualize with https://github.com/lutzroeder/netron.
}}
#enddivregion
~
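As a sanity check on the exported model: with the standard YOLOv7 head layout (three output scales at strides 8/16/32, 3 anchors per grid cell, and 5 + num_classes values per box), a 640×640 input with the 3 janken classes yields 25200 candidate boxes of 8 values each. A small sketch of that arithmetic, useful when inspecting the exported ONNX file in a tool such as Netron:

```python
# Expected detection-head geometry for a 640x640 input, assuming the
# standard YOLOv7 layout: strides 8/16/32, 3 anchors per cell, and
# 5 + num_classes values per box (x, y, w, h, objectness + class scores).
def yolo_output_shape(img_size=640, num_classes=3,
                      strides=(8, 16, 32), anchors_per_scale=3):
    boxes = sum(anchors_per_scale * (img_size // s) ** 2 for s in strides)
    return boxes, 5 + num_classes

print(yolo_output_shape())  # -> (25200, 8) for the 3-class janken model
```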
+ From the menu bar, select 「Runtime」→「Disconnect and delete runtime」 to release the runtime connection~
~
+ The trained model is saved in the 「yolov7-tiny_jk4/weights」 folder, and the evaluation metrics in the 「yolov7-tiny_jk4」 folder~
|CENTER:F1 curve|CENTER:P curve|CENTER:PR curve|CENTER:R curve|h
|#ref(jk4_tiny_F1_curve_m.jpg,left,around,15%,F1_curve_m.jpg)|#ref(jk4_tiny_P_curve_m.jpg,left,around,15%,P_curve_m.jpg)|#ref(jk4_tiny_PR_curve_m.jpg,left,around,15%,PR_curve_m.jpg)|#ref(jk4_tiny_R_curve_m.jpg,left,around,15%,R_curve_m.jpg)|
#ref(jk4_tiny_results_m.jpg,left,around,40%,results_m.jpg)
#ref(jk4_tiny_confusion_matrix_m.jpg,left,around,25%,confusion_matrix_m.jpg)
#clear
#ref(jk4_tiny_test_batch0_labels.jpg,left,around,10%,test_batch0_labels.jpg)
#ref(jk4_tiny_test_batch0_pred.jpg,left,around,10%,test_batch0_pred.jpg)
#ref(jk4_tiny_test_batch1_labels.jpg,left,around,10%,test_batch1_labels.jpg)
#ref(jk4_tiny_test_batch1_pred.jpg,left,around,10%,test_batch1_pred.jpg)
#ref(jk4_tiny_test_batch2_labels.jpg,left,around,10%,test_batch2_labels.jpg)
#ref(jk4_tiny_test_batch2_pred.jpg,left,around,10%,test_batch2_pred.jpg)
#clear
~
+ ''Exit Google Colab and download the training results from Google Drive to the local machine''~
Download the 「yolov7-tiny_jk4」 folder from Google Drive (and extract the compressed file)~
「MyDrive/try/yolov7/runs/train/yolov7-tiny_jk4」→「yolov7-main/runs/train/yolov7-tiny_jk4」~
** Run Inference with the Trained Model [#n816f8db]
Use the Anaconda virtual environment (py38a) created and used in [[Deep-learning object detection task: YOLO V7>YOLOv7]]~
The project folder is 「/anaconda_win/work/yolov7」~
The custom-trained model used is 「/anaconda_win/work/yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx」~
*** Run Inference on the Local Machine [#o6c570c0]
- Running with OpenVINO™~
・Switch to the working directory~
#codeprettify(){{
(py38a) PS > cd /anaconda_win/work/yolov7
}}
- Commands and results~
|CENTER:60|LEFT:600|LEFT:220|c
|&color(blue){Input source};|CENTER:Command|CENTER:Result|h
|Camera image|python object_detect_yolo7.py &color(green){-m ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx}; -l janken.names_jp &color(blue){-i cam}; -d GPU|CENTER:-|
|janken3.jpg|python object_detect_yolo7.py &color(green){-m ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx}; -l janken.names_jp &color(blue){-i ../../Images/janken3.jpg}; -d GPU|#ref(janken3_yolov7-tiny_jk4_m.jpg,left,around,20%,janken3_yolov7-tiny_jk4_m.jpg)|
|janken_test2.mp4|python object_detect_yolo7.py &color(green){-m ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx}; -l janken.names_jp &color(blue){-i ../../Videos/janken_test2.mp4};|&tinyvideo(https://izutsu.aa0.netvolante.jp/video/ai_result/janken_test2_yolov7-tiny_jk4_s.mp4,200 150,controls,loop,muted,autoplay);|
※1 Omit the 「-d GPU」 option when not using an Intel GPU~
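The script feeds each frame to the network's 640×640 input; a typical YOLO-style letterbox resize scales the image to fit and pads the remainder. The sketch below is illustrative only, assuming that convention, since the actual preprocessing inside object_detect_yolo7.py is not shown on this page:

```python
# Letterbox geometry for fitting an arbitrary frame into a 640x640 input:
# uniform scale to fit, then symmetric padding of the leftover space.
# Illustrative only; object_detect_yolo7.py's internals are not shown here.
def letterbox_params(w, h, target=640):
    scale = min(target / w, target / h)          # preserve aspect ratio
    new_w, new_h = round(w * scale), round(h * scale)
    pad_x = (target - new_w) / 2                 # horizontal padding per side
    pad_y = (target - new_h) / 2                 # vertical padding per side
    return scale, (new_w, new_h), (pad_x, pad_y)

print(letterbox_params(1280, 720))  # a typical camera frame
```

The same scale and padding are then inverted to map predicted boxes back to the original frame coordinates.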
~
#divregion( - log - '''Intel® Core™ i7-6700 / Intel® HD Graphics 530''')
#codeprettify(){{
(py38a) PS > cd /anaconda_win/work/yolov7
(py38a) PS > python object_detect_yolo7.py -m ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx -l janken.names_jp -i cam -d GPU
Starting..
- Program title : Object detection YOLO V7
- OpenCV version : 4.5.5
- OpenVINO engine: 2022.1.0-7019-cdb9bec7210-releases/2022/1
- Input image : cam
- Model : ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx
- Device : GPU
- Confidence thr : 0.25
- IOU threshold : 0.45
- Label : janken.names_jp
- Log level : 3
- Title flag : y
- Speed flag : y
- Processed out : non
- Preprocessing : False
- Batch size : 1
- number of inf : 1
- With grid : False
[ WARN:0@14.381] global D:\a\opencv-python\opencv-python\opencv\modules\videoio\src\cap_msmf.cpp (539) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback
FPS average: 9.30
Finished.
(py38a) PS > python object_detect_yolo7.py -m ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx -l janken.names_jp -i ../../Images/janken3.jpg -d GPU -o janken3_yolov7-tiny_jk4.jpg
Starting..
- Program title : Object detection YOLO V7
- OpenCV version : 4.5.5
- OpenVINO engine: 2022.1.0-7019-cdb9bec7210-releases/2022/1
- Input image : ../../Images/janken3.jpg
- Model : ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx
- Device : GPU
- Confidence thr : 0.25
- IOU threshold : 0.45
- Label : janken.names_jp
- Log level : 3
- Title flag : y
- Speed flag : y
- Processed out : janken3_yolov7-tiny_jk4.jpg
- Preprocessing : False
- Batch size : 1
- number of inf : 1
- With grid : False
FPS average: 9.90
Finished.
(py38a) PS > python object_detect_yolo7.py -m ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx -l janken.names_jp -i ../../Videos/janken_test2.mp4 -o janken_test2_yolov7-tiny_jk4.mp4
Starting..
- Program title : Object detection YOLO V7
- OpenCV version : 4.5.5
- OpenVINO engine: 2022.1.0-7019-cdb9bec7210-releases/2022/1
- Input image : ../../Videos/janken_test2.mp4
- Model : ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx
- Device : CPU
- Confidence thr : 0.25
- IOU threshold : 0.45
- Label : janken.names_jp
- Log level : 3
- Title flag : y
- Speed flag : y
- Processed out : janken_test2_yolov7-tiny_jk4.mp4
- Preprocessing : False
- Batch size : 1
- number of inf : 1
- With grid : False
FPS average: 5.10
Finished.
}}
#enddivregion
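The 「Confidence thr : 0.25」 and 「IOU threshold : 0.45」 values in the log above control candidate filtering and non-maximum suppression. A minimal, framework-free sketch of greedy NMS (illustrative, not the script's actual implementation):

```python
# Greedy non-maximum suppression over (x1, y1, x2, y2) boxes, using the
# same thresholds the log reports (conf 0.25, IoU 0.45). Illustrative only.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, conf_thr=0.25, iou_thr=0.45):
    # Drop low-confidence boxes, then visit the rest best-first, keeping a
    # box only if it does not overlap an already-kept box too strongly.
    order = sorted((i for i, s in enumerate(scores) if s >= conf_thr),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thr for j in keep):
            keep.append(i)
    return keep

# Two heavily overlapping detections plus one distant one:
print(nms([(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)],
          [0.9, 0.8, 0.7]))  # the weaker overlapping box is suppressed
```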
#br
** Update History [#xb095000]
- 2023/02/22 Initial version~
- 2023/09/10 Complete revision: training on custom data with YOLOv7~
#br
* References [#z668d3a1]
- Google Colaboratory~
-- [[Welcome to Colaboratory>+https://colab.research.google.com/]]~
-- [[Trying out Google Colaboratory's free GPU environment>+https://www.tdi.co.jp/miso/google-colaboratory-gpu]]~
- YOLO V7~
-- [[Official YOLOv7>+https://github.com/WongKinYiu/yolov7]]~
-- [[YOLOv7_OpenVINO>+https://github.com/OpenVINO-dev-contest/YOLOv7_OpenVINO_cpp-python]]~
- Related pages on this site~
-- [[Training with custom data 2: "Rock-paper-scissors judgment 1">+https://izutsu.aa0.netvolante.jp/pukiwiki/?YOLOv7_Colab2]]~
-- [[Notes on training parameters>+https://izutsu.aa0.netvolante.jp/pukiwiki/?YOLOv7_Colab3]]~
-- [[Training with custom data 3: "Rock-paper-scissors judgment 2">+https://izutsu.aa0.netvolante.jp/pukiwiki/?YOLOv7_Colab4]]~
-- [[Training with custom data 4: "Rock-paper-scissors judgment 3">+https://izutsu.aa0.netvolante.jp/pukiwiki/?YOLOv7_Colab5]]~
-- [[Comparison of different base training models>+https://izutsu.aa0.netvolante.jp/pukiwiki/?YOLOv7_Colab6]]~
-- [[Summary of models trained on the "rock-paper-scissors" custom data>+https://izutsu.aa0.netvolante.jp/pukiwiki/?YOLOv7_Colab7]]~
#br