An Intuitive Approach to StyleGAN2 Inference
-
Create a Python virtual environment using Anaconda (tested on Python 3.7.3)
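For example (the environment name here is only an illustration):
conda create -n stylegan2 python=3.7.3
conda activate stylegan2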
-
Install the requirements for stylegan2_pytorch (inside the virtual environment you created):
pip install -r requirements.txt
-
Download and Convert Pretrained Model
- Download the pretrained StyleGAN2 model (this project uses the model and dlatents from Gwern's pretrained anime StyleGAN2 model).
- Convert the StyleGAN2 TensorFlow pretrained model to PyTorch using the script in ./stylegan2_pytorch/pretrained_model:
python run_convert_from_tf.py --input="Path/To/PKL_MODEL" --output="stylegan2_pytorch/pretrained_model"
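After conversion, you can do a quick sanity check of the files written to the output directory with PyTorch; this is only a sketch, and the exact filenames produced by the conversion script may differ:

```python
import glob
import torch

# Inspect whatever the conversion script wrote to the output directory
# (the actual filenames depend on the conversion script).
for path in sorted(glob.glob("stylegan2_pytorch/pretrained_model/*.pt*")):
    checkpoint = torch.load(path, map_location="cpu")  # load on CPU only
    print(path, type(checkpoint))
```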
-
Install packages
Run yarn install, then yarn start.
-
Set up the Python path using the interface in the application.
You can modify the directional latents in stylegan2_pytorch/pretrained_model/modded_dlatents/tag_dirs_cont.pkl.
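The general idea of a directional latent edit is to add a scaled direction vector to a dlatent (w). A minimal sketch, assuming the pickle is a dict mapping tag name to direction vector (the tag name, strength, and shapes below are illustrative, not the project's actual format):

```python
import pickle

import numpy as np

# Load the directional latents (layout assumed to be tag name -> direction vector).
with open("stylegan2_pytorch/pretrained_model/modded_dlatents/tag_dirs_cont.pkl", "rb") as f:
    tag_dirs = pickle.load(f)

w = np.random.randn(1, 16, 512)   # placeholder dlatent (shape is assumed)
direction = tag_dirs["smile"]     # hypothetical tag name
w_edited = w + 2.0 * direction    # shift the dlatent along the chosen direction
```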
To package the application, run yarn package.
There is a tutorial notebook in the /stylegan2_pytorch folder that demonstrates the basic functions to generate and interpolate images.
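For reference, interpolation between two latents is just a per-step blend; a minimal, generator-agnostic sketch (see the notebook for the project's actual generation API):

```python
import numpy as np

def interpolate(z1, z2, steps=8):
    """Latent codes linearly interpolated from z1 to z2."""
    return [(1.0 - t) * z1 + t * z2 for t in np.linspace(0.0, 1.0, steps)]

# Two random 512-dim latent codes (StyleGAN2's usual z size); feed each
# interpolated code to the generator to obtain the in-between images.
z1, z2 = np.random.randn(512), np.random.randn(512)
frames = interpolate(z1, z2)
```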
Credits
- https://github.com/gwern for the anime StyleGAN2 pretrained model
- https://github.com/halcy for the pretrained latent directions and the crucial concepts behind StyleGAN2
- https://github.com/viuts for the way to convert the TF model to a PyTorch model so it can run on my MacBook Pro CPU