Try using Google Colaboratory as an execution environment for machine learning and deep learning.
Run training on custom data with YOLO V7.
The goal is to finish the whole workflow within one hour: environment setup, additional training on custom data, and inference on a local machine.
First, confirm Google Colaboratory, a cloud Python environment that can be used free of charge and with minimal setup.
print("hello colaboratory!")入力欄(セル)の下に結果が表示され簡単に Python の実行環境が手に入る
!apt install lshw
!lshw
  :
*-memory
     description: System memory
     physical id: 0
     size: 12GiB
*-cpu
     product: Intel(R) Xeon(R) CPU @ 2.20GHz
     vendor: Intel Corp.
     physical id: 1
     bus info: cpu@0
     width: 64 bits
  :
*-display
     description: 3D controller
     product: TU104GL [Tesla T4]
     vendor: NVIDIA Corporation
     physical id: 4
     bus info: pci@0000:00:04.0
     version: a1
     width: 64 bits
     clock: 33MHz
  :
※ This confirms an execution environment with an Intel(R) Xeon(R) 2.20GHz CPU, 12 GB of memory, and a Tesla T4 GPU.
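The assigned GPU can also be cross-checked from Python with the PyTorch build that Colab ships with (a minimal sketch, not part of the original notebook):

# Minimal sketch: confirm the runtime GPU from Python instead of lshw.
import torch
print(torch.__version__)                  # PyTorch version preinstalled in Colab
print(torch.cuda.is_available())          # True when a GPU runtime is selected
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "Tesla T4"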
Build an environment that can run YOLO V7 on the cloud Python environment Google Colaboratory.
cd /content/drive/MyDrive/try
・Result:
/content/drive/MyDrive/try
※ The command to check the current directory:
!pwd
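Changing into /content/drive/MyDrive/... only works once Google Drive is mounted in the session. A minimal sketch using Colab's standard helper (run it in a cell before the cd above):

# Mount Google Drive so that /content/drive/MyDrive is visible from the Colab VM.
from google.colab import drive
drive.mount('/content/drive')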
!git clone https://github.com/WongKinYiu/yolov7
Cloning into 'yolov7'...
remote: Enumerating objects: 1154, done.
remote: Counting objects: 100% (15/15), done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 1154 (delta 8), reused 13 (delta 7), pack-reused 1139
Receiving objects: 100% (1154/1154), 70.42 MiB | 17.19 MiB/s, done.
Resolving deltas: 100% (494/494), done.
Updating files: 100% (104/104), done.
cd yolov7
!pip install -r requirements.txt
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/ Requirement already satisfied: matplotlib>=3.2.2 in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 4)) (3.7.1) Requirement already satisfied: numpy<1.24.0,>=1.18.5 in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 5)) (1.22.4) Requirement already satisfied: opencv-python>=4.1.1 in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 6)) (4.7.0.72) Requirement already satisfied: Pillow>=7.1.2 in /usr/local/lib/python3.10/dist-packages (from -r requirements.txt (line 7)) (8.4.0) : : Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch!=1.12.0,>=1.7.0->-r requirements.txt (line 11)) (1.3.0) Requirement already satisfied: pyasn1<0.6.0,>=0.4.6 in /usr/local/lib/python3.10/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard>=2.4.1->-r requirements.txt (line 17)) (0.5.0) Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.10/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<1.1,>=0.5->tensorboard>=2.4.1->-r requirements.txt (line 17)) (3.2.2) Installing collected packages: jedi, thop Successfully installed jedi-0.18.2 thop-0.1.1.post2209072238
!wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt
--2023-09-11 00:32:33-- https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt Resolving github.com (github.com)... 140.82.113.3 Connecting to github.com (github.com)|140.82.113.3|:443... connected. HTTP request sent, awaiting response... 302 Found Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/511187726/ba7d01ee-125a-4134-8864-fa1abcbf94d5?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230911%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230911T003233Z&X-Amz-Expires=300&X-Amz-Signature=e69beb9fd21b30dd684c024a58df89cf2d7402d3d85998f3cee7397cd2680e1b&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=511187726&response-content-disposition=attachment%3B%20filename%3Dyolov7-tiny.pt&response-content-type=application%2Foctet-stream [following] --2023-09-11 00:32:33-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/511187726/ba7d01ee-125a-4134-8864-fa1abcbf94d5?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230911%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230911T003233Z&X-Amz-Expires=300&X-Amz-Signature=e69beb9fd21b30dd684c024a58df89cf2d7402d3d85998f3cee7397cd2680e1b&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=511187726&response-content-disposition=attachment%3B%20filename%3Dyolov7-tiny.pt&response-content-type=application%2Foctet-stream Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.109.133, 185.199.108.133, 185.199.110.133, ... Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.109.133|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 12639769 (12M) [application/octet-stream] Saving to: ‘yolov7-tiny.pt’ yolov7-tiny.pt 100%[===================>] 12.05M 34.6MB/s in 0.3s 2023-09-11 00:32:33 (34.6 MB/s) - ‘yolov7-tiny.pt’ saved [12639769/12639769]
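If desired, the completed download can be checked from Python before moving on (a trivial sketch; the expected byte count comes from the wget log above):

# Minimal sketch: confirm the weight file arrived intact (wget reported 12639769 bytes).
import os
print(os.path.getsize('yolov7-tiny.pt'))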
!python detect.py --source inference/images/ --weights yolov7-tiny.pt --conf 0.25 --img-size 1280 --device 0
Namespace(weights=['yolov7-tiny.pt'], source='inference/images/', img_size=1280, conf_thres=0.25, iou_thres=0.45, device='0', view_img=False, save_txt=False, save_conf=False, nosave=False, classes=None, agnostic_nms=False, augment=False, update=False, project='runs/detect', name='exp', exist_ok=False, no_trace=False)
YOLOR 🚀 v0.1-126-g84932d7 torch 2.0.1+cu118 CUDA:0 (Tesla T4, 15101.8125MB)

Fusing layers...
Model Summary: 200 layers, 6219709 parameters, 229245 gradients
 Convert model to Traced-model...
 traced_script_module saved!
 model is traced!
/usr/local/lib/python3.10/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
4 persons, 1 bus, Done. (13.4ms) Inference, (31.8ms) NMS
 The image with the result is saved in: runs/detect/exp/bus.jpg
5 horses, Done. (12.3ms) Inference, (1.8ms) NMS
 The image with the result is saved in: runs/detect/exp/horses.jpg
3 persons, Done. (15.3ms) Inference, (1.9ms) NMS
 The image with the result is saved in: runs/detect/exp/image1.jpg
2 persons, Done. (12.9ms) Inference, (1.2ms) NMS
 The image with the result is saved in: runs/detect/exp/image2.jpg
1 dog, 1 horse, Done. (12.6ms) Inference, (1.3ms) NMS
 The image with the result is saved in: runs/detect/exp/image3.jpg
2 persons, 2 ties, Done. (10.9ms) Inference, (1.5ms) NMS
 The image with the result is saved in: runs/detect/exp/zidane.jpg
Done. (1.881s)
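The annotated images are written under runs/detect/exp/. To view one inline in the notebook (a minimal sketch using Colab's standard patch for cv2.imshow; the file path is taken from the log above):

# Minimal sketch: show a detection result inside the Colab notebook.
import cv2
from google.colab.patches import cv2_imshow  # Colab replacement for cv2.imshow
img = cv2.imread('runs/detect/exp/bus.jpg')  # one of the files listed in the detect.py log
cv2_imshow(img)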
Run additional training using the dataset "janken4_dataset" created in "カスタムデータによる学習4「じゃんけんの判定(その3)」" (custom-data training 4: rock-paper-scissors judgment, part 3).
The base pretrained model is "yolov7-tiny.pt".
Copy the dataset into the "data" folder of the cloned repository:
yolov7
 ┗ data
    ┗ janken4_dataset
Place the dataset definition file "janken4_dataset.yaml" directly under the yolov7 folder:
yolov7
 ┠ data
 ┃  ┗ janken4_dataset
 ┗ janken4_dataset.yaml
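The contents of janken4_dataset.yaml are not reproduced on this page. The sketch below writes a plausible minimal definition: nc=3 and the class names goo/choki/par are taken from the training log further down, while the train/valid image paths are assumptions inferred from the label-cache paths in that log. Run it from the yolov7 folder:

# Minimal sketch (assumed contents) of janken4_dataset.yaml.
# nc and names match the training log (nc=3; classes goo, choki, par);
# the image directory paths are assumptions.
yaml_text = """\
train: ./data/janken4_dataset/train/images
val: ./data/janken4_dataset/valid/images
nc: 3
names: ['goo', 'choki', 'par']
"""
with open('janken4_dataset.yaml', 'w') as f:
    f.write(yaml_text)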
cd /content/drive/MyDrive/try/yolov7
/content/drive/MyDrive/try/yolov7
!python train.py --workers 8 --batch-size 16 --data janken4_dataset.yaml --cfg cfg/training/yolov7-tiny.yaml --weights 'yolov7-tiny.pt' --name yolov7-tiny_jk4 --hyp data/hyp.scratch.tiny.yaml --epochs 50 --device 0
2023-09-11 04:53:56.828622: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 2023-09-11 04:53:57.729827: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT YOLOR 🚀 v0.1-126-g84932d7 torch 2.0.1+cu118 CUDA:0 (Tesla T4, 15101.8125MB) Namespace(weights='yolov7-tiny.pt', cfg='cfg/training/yolov7-tiny.yaml', data='janken4_dataset.yaml', hyp='data/hyp.scratch.tiny.yaml', epochs=50, batch_size=16, img_size=[640, 640], rect=False, resume=False, nosave=False, notest=False, noautoanchor=False, evolve=False, bucket='', cache_images=False, image_weights=False, device='0', multi_scale=False, single_cls=False, adam=False, sync_bn=False, local_rank=-1, workers=8, project='runs/train', entity=None, name='yolov7-tiny_jk4', exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias='latest', freeze=[0], v5_metric=False, world_size=1, global_rank=-1, save_dir='runs/train/yolov7-tiny_jk4', total_batch_size=16) tensorboard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/ hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.05, copy_paste=0.0, paste_in=0.05, loss_ota=1 wandb: Install Weights & Biases for YOLOR logging with 'pip install wandb' (recommended) Overriding model.yaml nc=80 with nc=3 from n params module arguments 0 -1 1 928 models.common.Conv [3, 32, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)] 1 -1 1 18560 models.common.Conv [32, 64, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)] 2 -1 1 2112 models.common.Conv [64, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 3 -2 1 2112 models.common.Conv [64, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 4 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 5 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 6 [-1, -2, -3, -4] 1 0 models.common.Concat [1] 7 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 8 -1 1 0 models.common.MP [] 9 -1 1 4224 models.common.Conv [64, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 10 -2 1 4224 models.common.Conv [64, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 11 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 12 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 13 [-1, -2, -3, -4] 1 0 models.common.Concat [1] 14 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 15 -1 1 0 models.common.MP [] 16 -1 1 16640 models.common.Conv [128, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 17 -2 1 16640 models.common.Conv [128, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 18 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 19 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 20 [-1, -2, -3, -4] 1 0 models.common.Concat [1] 21 -1 1 131584 
models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 22 -1 1 0 models.common.MP [] 23 -1 1 66048 models.common.Conv [256, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 24 -2 1 66048 models.common.Conv [256, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 25 -1 1 590336 models.common.Conv [256, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 26 -1 1 590336 models.common.Conv [256, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 27 [-1, -2, -3, -4] 1 0 models.common.Concat [1] 28 -1 1 525312 models.common.Conv [1024, 512, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 29 -1 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 30 -2 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 31 -1 1 0 models.common.SP [5] 32 -2 1 0 models.common.SP [9] 33 -3 1 0 models.common.SP [13] 34 [-1, -2, -3, -4] 1 0 models.common.Concat [1] 35 -1 1 262656 models.common.Conv [1024, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 36 [-1, -7] 1 0 models.common.Concat [1] 37 -1 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 38 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 39 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] 40 21 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 41 [-1, -2] 1 0 models.common.Concat [1] 42 -1 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 43 -2 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 44 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 45 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 46 [-1, -2, -3, -4] 1 0 models.common.Concat [1] 47 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 48 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 49 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest'] 50 14 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 51 [-1, -2] 1 0 models.common.Concat [1] 52 -1 1 4160 models.common.Conv [128, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 53 -2 1 4160 models.common.Conv [128, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 54 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 55 -1 1 9280 models.common.Conv [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 56 [-1, -2, -3, -4] 1 0 models.common.Concat [1] 57 -1 1 8320 models.common.Conv [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 58 -1 1 73984 models.common.Conv [64, 128, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)] 59 [-1, 47] 1 0 models.common.Concat [1] 60 -1 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 61 -2 1 16512 models.common.Conv [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 62 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 63 -1 1 36992 models.common.Conv [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 64 [-1, -2, -3, -4] 1 0 models.common.Concat [1] 65 -1 1 33024 models.common.Conv [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 66 -1 1 295424 models.common.Conv [128, 256, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)] 67 [-1, 37] 1 0 models.common.Concat [1] 68 -1 1 65792 models.common.Conv [512, 128, 1, 1, None, 1, 
LeakyReLU(negative_slope=0.1)] 69 -2 1 65792 models.common.Conv [512, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 70 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 71 -1 1 147712 models.common.Conv [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 72 [-1, -2, -3, -4] 1 0 models.common.Concat [1] 73 -1 1 131584 models.common.Conv [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)] 74 57 1 73984 models.common.Conv [64, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 75 65 1 295424 models.common.Conv [128, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 76 73 1 1180672 models.common.Conv [256, 512, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)] 77 [74, 75, 76] 1 22544 models.yolo.IDetect [3, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]] Model Summary: 263 layers, 6020400 parameters, 6020400 gradients Transferred 330/344 items from yolov7-tiny.pt Scaled weight_decay = 0.0005 Optimizer groups: 58 .bias, 58 conv.weight, 61 other train: Scanning 'data/janken4_dataset/train/labels.cache' images and labels... 480 found, 0 missing, 0 empty, 0 corrupted: 100% 480/480 [00:00<?, ?it/s] val: Scanning 'data/janken4_dataset/valid/labels.cache' images and labels... 120 found, 0 missing, 0 empty, 0 corrupted: 100% 120/120 [00:00<?, ?it/s] autoanchor: Analyzing anchors... anchors/target = 3.13, Best Possible Recall (BPR) = 1.0000 Image sizes 640 train, 640 test Using 2 dataloader workers Logging results to runs/train/yolov7-tiny_jk4 Starting training for 50 epochs... Epoch gpu_mem box obj cls total labels img_size 0/49 2.82G 0.05163 0.03024 0.03289 0.1148 40 640: 100% 30/30 [00:48<00:00, 1.61s/it] Class Images Labels P R mAP@.5 mAP@.5:.95: 0% 0/4 [00:00<?, ?it/s]/usr/local/lib/python3.10/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.) 
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:04<00:00, 1.17s/it] all 120 120 0.0294 0.0833 0.0121 0.0019 Epoch gpu_mem box obj cls total labels img_size 1/49 2.76G 0.04142 0.01522 0.0269 0.08355 39 640: 100% 30/30 [00:24<00:00, 1.21it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.12it/s] all 120 120 0.0508 0.267 0.0419 0.0107 Epoch gpu_mem box obj cls total labels img_size 2/49 3.03G 0.04004 0.01346 0.02455 0.07805 43 640: 100% 30/30 [00:25<00:00, 1.20it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.44it/s] all 120 120 0.248 0.475 0.269 0.0753 Epoch gpu_mem box obj cls total labels img_size 3/49 3.03G 0.03464 0.01214 0.02277 0.06955 40 640: 100% 30/30 [00:25<00:00, 1.20it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.86it/s] all 120 120 0.281 0.6 0.344 0.136 Epoch gpu_mem box obj cls total labels img_size 4/49 3.03G 0.03583 0.01056 0.02009 0.06647 32 640: 100% 30/30 [00:24<00:00, 1.21it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.00it/s] all 120 120 0.289 0.658 0.409 0.173 Epoch gpu_mem box obj cls total labels img_size 5/49 3.03G 0.03825 0.01048 0.02177 0.0705 44 640: 100% 30/30 [00:25<00:00, 1.16it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.28it/s] all 120 120 0.22 0.308 0.177 0.0539 Epoch gpu_mem box obj cls total labels img_size 6/49 3.04G 0.0388 0.01073 0.02577 0.0753 38 640: 100% 30/30 [00:25<00:00, 1.17it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.96it/s] all 120 120 0.303 0.625 0.391 0.144 Epoch gpu_mem box obj cls total labels img_size 7/49 3.04G 0.04666 0.009941 0.02295 0.07955 45 640: 100% 30/30 [00:25<00:00, 1.20it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.07it/s] all 120 120 0.415 0.35 0.366 0.139 Epoch gpu_mem box obj cls total labels img_size 8/49 3.04G 0.03045 0.009827 0.01737 0.05764 43 640: 100% 30/30 [00:23<00:00, 1.26it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.44it/s] all 120 120 0.317 0.4 0.291 0.125 Epoch gpu_mem box obj cls total labels img_size 9/49 3.04G 0.03207 0.009744 0.01779 0.05961 40 640: 100% 30/30 [00:22<00:00, 1.31it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.40it/s] all 120 120 0.393 0.514 0.464 0.241 Epoch gpu_mem box obj cls total labels img_size 10/49 3.04G 0.04525 0.009894 0.02294 0.07808 43 640: 100% 30/30 [00:23<00:00, 1.26it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.45it/s] all 120 120 0.389 0.442 0.371 0.153 Epoch gpu_mem box obj cls total labels img_size 11/49 3.04G 0.03351 0.01057 0.0189 0.06298 35 640: 100% 30/30 [00:23<00:00, 1.27it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.62it/s] all 120 120 0.325 0.3 0.334 0.126 Epoch gpu_mem box obj cls total labels img_size 12/49 3.04G 0.03801 0.009947 0.01937 0.06733 33 640: 100% 30/30 [00:23<00:00, 1.29it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.17it/s] all 120 120 0.587 0.585 0.616 0.271 Epoch gpu_mem box obj cls total labels img_size 13/49 3.04G 0.0348 0.01007 0.01838 0.06325 41 640: 100% 30/30 [00:24<00:00, 1.22it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.79it/s] all 120 120 0.0701 0.183 0.0393 0.00638 Epoch gpu_mem box obj cls total labels img_size 14/49 3.04G 0.03803 0.01172 0.0212 0.07095 39 640: 100% 30/30 [00:24<00:00, 1.21it/s] Class Images Labels P R mAP@.5 
mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.12it/s] all 120 120 0.423 0.408 0.314 0.111 Epoch gpu_mem box obj cls total labels img_size 15/49 3.04G 0.03323 0.0099 0.01628 0.05941 31 640: 100% 30/30 [00:23<00:00, 1.26it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.07it/s] all 120 120 0.553 0.808 0.746 0.39 Epoch gpu_mem box obj cls total labels img_size 16/49 3.04G 0.0414 0.009267 0.01829 0.06896 34 640: 100% 30/30 [00:24<00:00, 1.25it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.13it/s] all 120 120 0.438 0.5 0.452 0.223 Epoch gpu_mem box obj cls total labels img_size 17/49 3.04G 0.0422 0.009625 0.01798 0.06981 40 640: 100% 30/30 [00:25<00:00, 1.19it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.19it/s] all 120 120 0.627 0.8 0.796 0.376 Epoch gpu_mem box obj cls total labels img_size 18/49 3.04G 0.03946 0.01036 0.01854 0.06837 43 640: 100% 30/30 [00:25<00:00, 1.17it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.12it/s] all 120 120 0.416 0.515 0.435 0.168 Epoch gpu_mem box obj cls total labels img_size 19/49 3.04G 0.03422 0.009949 0.01627 0.06044 45 640: 100% 30/30 [00:24<00:00, 1.20it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.99it/s] all 120 120 0.673 0.697 0.762 0.424 Epoch gpu_mem box obj cls total labels img_size 20/49 3.04G 0.03836 0.0101 0.01839 0.06684 32 640: 100% 30/30 [00:24<00:00, 1.20it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.90it/s] all 120 120 0.511 0.725 0.647 0.299 Epoch gpu_mem box obj cls total labels img_size 21/49 3.04G 0.0348 0.009871 0.01717 0.06184 45 640: 100% 30/30 [00:25<00:00, 1.17it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.29it/s] all 120 120 0.501 0.57 0.53 0.164 Epoch gpu_mem box obj cls total labels img_size 22/49 3.04G 0.03335 0.009804 0.01746 0.06061 43 640: 100% 30/30 [00:25<00:00, 1.17it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.91it/s] all 120 120 0.552 0.692 0.69 0.368 Epoch gpu_mem box obj cls total labels img_size 23/49 3.04G 0.04454 0.009487 0.01796 0.07199 40 640: 100% 30/30 [00:25<00:00, 1.18it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.32it/s] all 120 120 0.696 0.907 0.879 0.421 Epoch gpu_mem box obj cls total labels img_size 24/49 3.04G 0.03541 0.009314 0.01514 0.05986 45 640: 100% 30/30 [00:25<00:00, 1.19it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.30it/s] all 120 120 0.749 0.892 0.877 0.522 Epoch gpu_mem box obj cls total labels img_size 25/49 3.04G 0.0346 0.008935 0.01424 0.05777 38 640: 100% 30/30 [00:24<00:00, 1.23it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.21it/s] all 120 120 0.737 0.865 0.873 0.473 Epoch gpu_mem box obj cls total labels img_size 26/49 3.04G 0.03659 0.009053 0.01299 0.05863 41 640: 100% 30/30 [00:25<00:00, 1.18it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.25it/s] all 120 120 0.879 0.85 0.938 0.536 Epoch gpu_mem box obj cls total labels img_size 27/49 3.04G 0.03182 0.009148 0.01238 0.05335 40 640: 100% 30/30 [00:25<00:00, 1.18it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.43it/s] all 120 120 0.839 0.924 0.946 0.522 Epoch gpu_mem box obj cls total labels img_size 28/49 3.04G 0.03359 0.008742 0.0132 0.05553 39 640: 100% 30/30 [00:26<00:00, 1.14it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.75it/s] all 120 120 0.9 0.93 0.968 0.607 Epoch gpu_mem box obj 
cls total labels img_size 29/49 3.04G 0.02708 0.008494 0.01253 0.04811 44 640: 100% 30/30 [00:24<00:00, 1.24it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.36it/s] all 120 120 0.942 0.939 0.977 0.586 Epoch gpu_mem box obj cls total labels img_size 30/49 3.04G 0.02979 0.008439 0.01063 0.04886 49 640: 100% 30/30 [00:25<00:00, 1.16it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.30it/s] all 120 120 0.774 0.882 0.879 0.428 Epoch gpu_mem box obj cls total labels img_size 31/49 3.04G 0.03019 0.008725 0.01339 0.0523 34 640: 100% 30/30 [00:26<00:00, 1.15it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.16it/s] all 120 120 0.793 0.887 0.914 0.54 Epoch gpu_mem box obj cls total labels img_size 32/49 3.04G 0.03222 0.009168 0.01255 0.05395 33 640: 100% 30/30 [00:25<00:00, 1.17it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.26it/s] all 120 120 0.872 0.908 0.934 0.594 Epoch gpu_mem box obj cls total labels img_size 33/49 3.04G 0.0282 0.008521 0.0134 0.05012 38 640: 100% 30/30 [00:24<00:00, 1.24it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.64it/s] all 120 120 0.916 0.946 0.974 0.619 Epoch gpu_mem box obj cls total labels img_size 34/49 3.04G 0.02161 0.008843 0.01127 0.04173 40 640: 100% 30/30 [00:24<00:00, 1.25it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.58it/s] all 120 120 0.926 0.967 0.979 0.653 Epoch gpu_mem box obj cls total labels img_size 35/49 3.04G 0.02493 0.00863 0.01233 0.04589 40 640: 100% 30/30 [00:25<00:00, 1.18it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.76it/s] all 120 120 0.958 0.952 0.98 0.614 Epoch gpu_mem box obj cls total labels img_size 36/49 3.04G 0.02508 0.008566 0.01639 0.05004 26 640: 100% 30/30 [00:24<00:00, 1.24it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.42it/s] all 120 120 0.951 0.975 0.986 0.617 Epoch gpu_mem box obj cls total labels img_size 37/49 3.04G 0.02741 0.008481 0.01651 0.0524 31 640: 100% 30/30 [00:23<00:00, 1.27it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.43it/s] all 120 120 0.981 0.956 0.985 0.67 Epoch gpu_mem box obj cls total labels img_size 38/49 3.04G 0.02756 0.008425 0.01379 0.04977 35 640: 100% 30/30 [00:23<00:00, 1.26it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.87it/s] all 120 120 0.979 0.967 0.984 0.65 Epoch gpu_mem box obj cls total labels img_size 39/49 3.04G 0.02563 0.009084 0.01572 0.05044 40 640: 100% 30/30 [00:23<00:00, 1.26it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.65it/s] all 120 120 0.95 0.975 0.984 0.639 Epoch gpu_mem box obj cls total labels img_size 40/49 3.04G 0.02511 0.008971 0.01684 0.05092 42 640: 100% 30/30 [00:24<00:00, 1.24it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.43it/s] all 120 120 0.966 0.975 0.985 0.658 Epoch gpu_mem box obj cls total labels img_size 41/49 3.04G 0.02154 0.009098 0.01421 0.04485 37 640: 100% 30/30 [00:25<00:00, 1.18it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 1.66it/s] all 120 120 0.988 0.95 0.985 0.68 Epoch gpu_mem box obj cls total labels img_size 42/49 3.04G 0.02811 0.008275 0.01537 0.05175 34 640: 100% 30/30 [00:23<00:00, 1.26it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.26it/s] all 120 120 0.989 0.966 0.986 0.674 Epoch gpu_mem box obj cls total labels img_size 43/49 3.04G 0.02775 0.008882 0.0157 0.05233 36 640: 100% 30/30 
[00:24<00:00, 1.24it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.61it/s] all 120 120 0.981 0.958 0.988 0.656 Epoch gpu_mem box obj cls total labels img_size 44/49 3.04G 0.02123 0.008521 0.01461 0.04436 40 640: 100% 30/30 [00:25<00:00, 1.18it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.08it/s] all 120 120 0.992 0.961 0.989 0.68 Epoch gpu_mem box obj cls total labels img_size 45/49 3.04G 0.02376 0.008377 0.0143 0.04644 37 640: 100% 30/30 [00:25<00:00, 1.20it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:02<00:00, 2.00it/s] all 120 120 0.965 0.975 0.988 0.701 Epoch gpu_mem box obj cls total labels img_size 46/49 3.04G 0.0289 0.008536 0.01608 0.05352 38 640: 100% 30/30 [00:25<00:00, 1.18it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.09it/s] all 120 120 0.96 0.959 0.987 0.699 Epoch gpu_mem box obj cls total labels img_size 47/49 3.04G 0.02046 0.00847 0.01375 0.04268 40 640: 100% 30/30 [00:25<00:00, 1.17it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.24it/s] all 120 120 0.986 0.957 0.987 0.708 Epoch gpu_mem box obj cls total labels img_size 48/49 3.04G 0.02173 0.008446 0.01309 0.04327 41 640: 100% 30/30 [00:24<00:00, 1.22it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:01<00:00, 2.04it/s] all 120 120 0.971 0.975 0.989 0.71 Epoch gpu_mem box obj cls total labels img_size 49/49 3.04G 0.02241 0.008406 0.0145 0.04532 30 640: 100% 30/30 [00:25<00:00, 1.19it/s] Class Images Labels P R mAP@.5 mAP@.5:.95: 100% 4/4 [00:04<00:00, 1.01s/it] all 120 120 0.977 0.975 0.989 0.709 goo 120 40 0.962 1 0.993 0.687 choki 120 40 0.969 0.95 0.98 0.659 par 120 40 1 0.975 0.992 0.782 50 epochs completed in 0.403 hours. Optimizer stripped from runs/train/yolov7-tiny_jk4/weights/last.pt, 12.3MB Optimizer stripped from runs/train/yolov7-tiny_jk4/weights/best.pt, 12.3MB
!pip install onnx
Collecting onnx
  Downloading onnx-1.14.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (14.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.6/14.6 MB 37.4 MB/s eta 0:00:00
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from onnx) (1.23.5)
Requirement already satisfied: protobuf>=3.20.2 in /usr/local/lib/python3.10/dist-packages (from onnx) (3.20.3)
Requirement already satisfied: typing-extensions>=3.6.2.1 in /usr/local/lib/python3.10/dist-packages (from onnx) (4.5.0)
Installing collected packages: onnx
Successfully installed onnx-1.14.1
!python export.py --weights runs/train/yolov7-tiny_jk4/weights/best.pt
Import onnx_graphsurgeon failure: No module named 'onnx_graphsurgeon' Namespace(weights='runs/train/yolov7-tiny_jk4/weights/best.pt', img_size=[640, 640], batch_size=1, dynamic=False, dynamic_batch=False, grid=False, end2end=False, max_wh=None, topk_all=100, iou_thres=0.45, conf_thres=0.25, device='cpu', simplify=False, include_nms=False, fp16=False, int8=False) YOLOR 🚀 v0.1-126-g84932d7 torch 2.0.1+cu118 CPU Fusing layers... IDetect.fuse Model Summary: 208 layers, 6013008 parameters, 0 gradients Starting TorchScript export with torch 2.0.1+cu118... TorchScript export success, saved as runs/train/yolov7-tiny_jk4/weights/best.torchscript.pt CoreML export failure: No module named 'coremltools' Starting TorchScript-Lite export with torch 2.0.1+cu118... TorchScript-Lite export success, saved as runs/train/yolov7-tiny_jk4/weights/best.torchscript.ptl Starting ONNX export with onnx 1.14.1... /content/drive/MyDrive/try/yolov7/models/yolo.py:582: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if augment: /content/drive/MyDrive/try/yolov7/models/yolo.py:614: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if profile: /content/drive/MyDrive/try/yolov7/models/yolo.py:629: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if profile: ============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 ============= verbose: False, log level: Level.ERROR ======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ======================== ONNX export success, saved as runs/train/yolov7-tiny_jk4/weights/best.onnx Export complete (8.19s). Visualize with https://github.com/lutzroeder/netron.
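Before copying best.onnx to the local machine, it can be sanity-checked right in the notebook with the onnx package installed above (a minimal sketch):

# Minimal sketch: structural check of the exported ONNX model.
import onnx
model = onnx.load('runs/train/yolov7-tiny_jk4/weights/best.onnx')
onnx.checker.check_model(model)             # raises an exception if the graph is malformed
print([i.name for i in model.graph.input])  # input tensor name(s)
print([o.name for o in model.graph.output]) # output tensor name(s)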
Use the Anaconda virtual environment (py38a) created and used in "物体認識の深層学習タスク:YOLO V7" (object-detection deep learning task: YOLO V7).
The project folder is "/anaconda_win/work/yolov7".
The custom-trained model to use is "/anaconda_win/work/yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx".
(py38a) PS > cd /anaconda_win/work/yolov7
Input source | Command | Result |
Camera image | python object_detect_yolo7.py -m ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx -l janken.names_jp -i cam -d GPU | - |
janken3.jpg | python object_detect_yolo7.py -m ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx -l janken.names_jp -i ../../Images/janken3.jpg -d GPU | |
janken_test2.mp4 | python object_detect_yolo7.py -m ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx -l janken.names_jp -i ../../Videos/janken_test2.mp4 | |
(py38a) PS > python object_detect_yolo7.py -m ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx -l janken.names_jp -i ../../Images/janken3.jpg -d GPU -o janken3_yolov7-tiny_jk4.jpg Starting.. - Program title : Object detection YOLO V7 - OpenCV version : 4.5.5 - OpenVINO engine: 2022.1.0-7019-cdb9bec7210-releases/2022/1 - Input image : ../../Images/janken3.jpg - Model : ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx - Device : GPU - Confidence thr : 0.25 - IOU threshold : 0.45 - Label : janken.names_jp - Log level : 3 - Title flag : y - Speed flag : y - Processed out : janken3_yolov7-tiny_jk4.jpg - Preprocessing : False - Batch size : 1 - number of inf : 1 - With grid : False FPS average: 9.90 Finished. (py38a) PS > python object_detect_yolo7.py -m ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx -l janken.names_jp -i ../../Videos/janken_test2.mp4 -o janken_test2_yolov7-tiny_jk4.mp4 Starting.. - Program title : Object detection YOLO V7 - OpenCV version : 4.5.5 - OpenVINO engine: 2022.1.0-7019-cdb9bec7210-releases/2022/1 - Input image : ../../Videos/janken_test2.mp4 - Model : ../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx - Device : CPU - Confidence thr : 0.25 - IOU threshold : 0.45 - Label : janken.names_jp - Log level : 3 - Title flag : y - Speed flag : y - Processed out : janken_test2_yolov7-tiny_jk4.mp4 - Preprocessing : False - Batch size : 1 - number of inf : 1 - With grid : False FPS average: 5.10 Finished.
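As a standalone check that the exported model loads outside of the author's script, the sketch below (an assumption-heavy minimal example, not the contents of object_detect_yolo7.py) reads best.onnx with OpenCV's dnn module and runs one forward pass; decoding the raw YOLO V7 outputs and NMS are omitted and remain the job of object_detect_yolo7.py.

# Minimal sketch: load the exported ONNX model with OpenCV and run a single forward pass.
# Paths match the commands above; output shapes depend on the export options used.
import cv2
net = cv2.dnn.readNetFromONNX('../yolov7-main/runs/train/yolov7-tiny_jk4/weights/best.onnx')
img = cv2.imread('../../Images/janken3.jpg')
blob = cv2.dnn.blobFromImage(img, scalefactor=1/255.0, size=(640, 640), swapRB=True)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())
print([o.shape for o in outputs])  # raw prediction tensors (no decoding / NMS here)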