Transforming faces with StarGAN-V2: feature manipulation and face synthesis

Running StarGAN-V2 in a local environment
cd /anaconda_win/workspace_2    ← for Windows
cd ~/workspace_2                ← for Linux
git clone https://github.com/clovaai/stargan-v2.git
・Extract the archive, then copy everything under the resulting "update/" folder into the folder below, overwriting existing files:

update
└─workspace_2
    └─stargan-v2        ← overwrites the project cloned from GitHub
        ├─assets
        │  └─representative
        ├─core
        └─expr
            ├─checkpoints
            └─results
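The overwrite copy can also be scripted instead of done by hand. A minimal sketch using only the standard library; the temporary demo tree below is illustrative (in the real setup, `src` is the extracted `update/` tree and `dst` is the cloned `stargan-v2/` project):

```python
import shutil
import tempfile
from pathlib import Path

# Build a small demo tree in a temporary directory (illustration only)
root = Path(tempfile.mkdtemp())
src = root / "update" / "workspace_2" / "stargan-v2"
dst = root / "stargan-v2"

(src / "core").mkdir(parents=True)
(src / "core" / "solver.py").write_text("patched")
(dst / "core").mkdir(parents=True)
(dst / "core" / "solver.py").write_text("original")

# dirs_exist_ok=True (Python 3.8+) merges into the existing clone,
# overwriting files that already exist there
shutil.copytree(src, dst, dirs_exist_ok=True)
print((dst / "core" / "solver.py").read_text())  # → patched
```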
(py38_gan) PS > python main.py --mode sample --num_domains 2 --resume_iter 100000 --w_hpf 1 --checkpoint_dir expr/checkpoints/celeba_hq --result_dir expr/results/celeba_hq --src_dir assets/representative/celeba_hq/src_0 --ref_dir assets/representative/celeba_hq/ref_0
(py38_gan) python main2.py --mode sample --num_domains 2 --resume_iter 100000 --w_hpf 1 --checkpoint_dir expr/checkpoints/celeba_hq --result_dir expr/results/celeba_hq --src_dir assets/representative/celeba_hq/src_0 --ref_dir assets/representative/celeba_hq/ref_0
| Command option | Argument | Default | Meaning |
| --- | --- | --- | --- |
| --mode | str | 'align' | execution mode (fixed) |
| --img_size | int | 256 | image size |
| --inp_dir | str | '' (specified via a dialog) | folder containing the images to convert |
| --out_dir | str | 'assets/representative/custom_src_align' | output folder |
| --wing_path | str | 'expr/checkpoints/wing.ckpt' | face-alignment network (wing) checkpoint |
| --lm_path | str | 'expr/checkpoints/celeba_lm_mean.npz' | facial-landmark mean file |
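The options above map naturally onto `argparse`. A minimal sketch whose defaults mirror the table; the parser itself is an illustration, not the actual code of the author's `starv2_align.py`:

```python
import argparse

# Option definitions mirroring the table above (sketch)
parser = argparse.ArgumentParser(description='StarGAN v2 align mode (sketch)')
parser.add_argument('--mode', type=str, default='align',
                    help="execution mode (fixed to 'align')")
parser.add_argument('--img_size', type=int, default=256,
                    help='image size')
parser.add_argument('--inp_dir', type=str, default='',
                    help='folder of images to convert (empty: choose via dialog)')
parser.add_argument('--out_dir', type=str,
                    default='assets/representative/custom_src_align',
                    help='output folder')
parser.add_argument('--wing_path', type=str,
                    default='expr/checkpoints/wing.ckpt',
                    help='face-alignment (wing) checkpoint')
parser.add_argument('--lm_path', type=str,
                    default='expr/checkpoints/celeba_lm_mean.npz',
                    help='facial-landmark mean file')

args = parser.parse_args([])          # parse defaults for demonstration
print(args.mode, args.img_size)       # → align 256
```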
(py38_learn) python starv2_align.py
StarGAN v2 mode align Ver 0.01: Starting application...
 --mode      : align
 --img_size  : 256
 --inp_dir   : C:/anaconda_win/workspace_2/stargan-v2/assets/representative/custom_src
 --out_dir   : assets/representative/custom_src_align
 --wing_path : expr/checkpoints/wing.ckpt
 --lm_path   : expr/checkpoints/celeba_lm_mean.npz
Saved the aligned image to f_kiyohara_4.jpg...
Saved the aligned image to f_suzuki_2.jpg...
Saved the aligned image to f_yoshida_1.jpg...
Saved the aligned image to f_yoshinaga_5.jpg...
Saved the aligned image to f_yoshine_1.jpg...
Saved the aligned image to f_yoshioka_5.jpg...
Saved the aligned image to ietake.jpg...
Saved the aligned image to m_kamiki_2.jpg...
Saved the aligned image to m_kusakari_1.jpg...
Saved the aligned image to okegawa.jpg...
else:
    # self.ckptios = [CheckpointIO(ospj(args.checkpoint_dir, '{:06d}_nets_ema.ckpt'), data_parallel=True, **self.nets_ema)]
    self.ckptios = [CheckpointIO(ospj(args.checkpoint_dir, '100000_nets_ema.ckpt'), data_parallel=True, **self.nets_ema)]
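The original template fills in the iteration number at load time, while the patched line pins the loader to the distributed pretrained file `100000_nets_ema.ckpt` regardless of `--resume_iter`. The filename the template would produce can be checked in isolation (paths are the ones used in the sample commands; POSIX separator assumed in the printed result):

```python
from os.path import join as ospj

checkpoint_dir = 'expr/checkpoints/celeba_hq'
# the template CheckpointIO formats with the current step number
fname_template = ospj(checkpoint_dir, '{:06d}_nets_ema.ckpt')

# {:06d} zero-pads to six digits, so step 100000 yields the
# same name as the pretrained checkpoint file
print(fname_template.format(100000))
# → expr/checkpoints/celeba_hq/100000_nets_ema.ckpt (on POSIX)
```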
if self.data_parallel:
    # module.module.load_state_dict(module_dict[name])
    module.module.load_state_dict(module_dict[name], False)  # second positional argument is strict=False
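Passing `False` as the second positional argument of `load_state_dict` sets `strict=False`, so keys present in the module but missing from the saved dict (or vice versa) no longer raise an error; only the keys both sides share are loaded. The matching rule can be mimicked with plain dicts — a torch-free sketch, not PyTorch's actual implementation:

```python
def load_state_dict_sketch(module_state, saved_state, strict=True):
    """Mimic torch's strict/non-strict key matching (illustration only)."""
    missing = [k for k in module_state if k not in saved_state]
    unexpected = [k for k in saved_state if k not in module_state]
    if strict and (missing or unexpected):
        raise RuntimeError(f'missing: {missing}, unexpected: {unexpected}')
    # copy only the keys both sides share
    for k in saved_state:
        if k in module_state:
            module_state[k] = saved_state[k]
    return missing, unexpected

# hypothetical parameter names for the demonstration
module = {'conv.weight': 0, 'fc.weight': 0}
saved = {'conv.weight': 1, 'hpf.weight': 1}
missing, unexpected = load_state_dict_sketch(module, saved, strict=False)
print(missing, unexpected)     # → ['fc.weight'] ['hpf.weight']
print(module['conv.weight'])   # → 1 (shared key was loaded)
```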
def video_ref(nets, args, x_src, x_ref, y_ref, fname):
    video = []
    frames = []
if len(ref.x) > 2:
    fname = ospj(args.result_dir, 'video_ref.mp4')
    print('Working on {}...'.format(fname))
    utils.video_ref(nets_ema, args, src.x, ref.x, ref.y, fname)
# small_blurred = gaussian(cv2.resize(img, (W, H)), H//100, multichannel=True)
small_blurred = gaussian(cv2.resize(img, (W, H)), H//100, channel_axis=-1)  # channels sit on the last axis of a cv2 image
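In scikit-image ≥ 0.19 the deprecated `multichannel` flag is replaced by `channel_axis`, which names the axis that holds the color channels — for arrays returned by `cv2.resize` that is the last axis (H × W × C). Blurring along the wrong axis mixes the channels instead of smoothing spatially, which a numpy-only sketch can show (a crude box blur stands in for `gaussian`):

```python
import numpy as np

# H x W x C image where each channel is a distinct constant
img = np.stack([np.full((4, 4), v) for v in (0.0, 1.0, 2.0)], axis=-1)

def box_blur(a, axis, k=3):
    """Crude 1-D box blur along one axis (stand-in for a Gaussian pass)."""
    padded = np.pad(a, [(k // 2, k // 2) if ax == axis else (0, 0)
                        for ax in range(a.ndim)], mode='edge')
    return sum(np.take(padded, range(i, i + a.shape[axis]), axis=axis)
               for i in range(k)) / k

# Blurring the two spatial axes leaves each constant channel intact...
spatial = box_blur(box_blur(img, 0), 1)
print(np.allclose(spatial, img))   # → True
# ...but blurring across the channel axis mixes the channels together
mixed = box_blur(img, 2)
print(np.allclose(mixed, img))     # → False
```

This is why the correct migration of `multichannel=True` excludes the last axis from the blur (`channel_axis=-1`).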