nampyohong/StyleCLIP-pytorch

StyleCLIP-PyTorch: Text-Driven Manipulation of StyleGAN Imagery

  • With PTI (Pivotal Tuning Inversion)
  • Global Direction Methods

The following will be added soon:

  • Explanations and instructions for each module
  • Colab notebook demo

References

  1. stylegan2-ada-pytorch
  2. CLIP
  3. StyleCLIP
  4. Pivotal Tuning Inversion (PTI)

Installation

Docker build

$ sh build_img.sh
$ sh build_container.sh [container-name]

Install package

$ docker start [container-name]
$ docker attach [container-name]
$ pip install -v -e .

Pretrained weights

Download the pretrained weights and save them in the pretrained/ directory.

Extract W, S, S_mean, S_std

FFHQ1024

$ python extract.py

FFHQ256

$ python extract.py --ckpt=pretrained/ffhq256.pkl --dataset_name=ffhq256
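As a rough illustration of the statistics extract.py produces, the sketch below samples latents, maps them to W space and then to stylespace S, and computes the per-channel mean and standard deviation. The mapping and affine matrices here are random stand-ins for StyleGAN2's mapping network and per-layer affine transforms (which in the real pipeline come from the downloaded checkpoint), and the dimensions are assumptions, not the repo's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for StyleGAN2's mapping network and per-layer affine
# transforms -- the real ones live in the downloaded .pkl checkpoint.
Z_DIM, W_DIM, S_DIM, N_SAMPLES = 512, 512, 256, 1000
mapping = rng.normal(size=(Z_DIM, W_DIM)) / np.sqrt(Z_DIM)  # z -> w
affine = rng.normal(size=(W_DIM, S_DIM)) / np.sqrt(W_DIM)   # w -> s

# Sample latents, push them to W space, then to stylespace S.
z = rng.normal(size=(N_SAMPLES, Z_DIM))
w = z @ mapping                  # W-space codes
s = w @ affine                   # S-space (stylespace) codes

# Channel-wise statistics used later to normalize manipulation directions.
s_mean = s.mean(axis=0)
s_std = s.std(axis=0)

print(w.shape, s.shape, s_mean.shape, s_std.shape)
```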

Extract global image direction

FFHQ1024

$ python manipulator.py extract

FFHQ256

$ python manipulator.py extract --ckpt=pretrained/ffhq256.pkl --face_preprocess=True --dataset_name=ffhq256
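The global direction method from the StyleCLIP paper can be sketched as follows: take the normalized CLIP-space direction between the neutral and target text embeddings, project it onto each stylespace channel's precomputed CLIP-space "relevance" direction, and zero out channels whose relevance falls below the disentanglement threshold beta. Everything below is a stand-in: the relevance matrix R is random (in the real pipeline it is what manipulator.py extract precomputes), and the embeddings do not come from CLIP.

```python
import numpy as np

rng = np.random.default_rng(1)
N_CHANNELS, CLIP_DIM = 256, 512

# Hypothetical relevance matrix: row c is the unit CLIP-space image
# direction obtained by perturbing stylespace channel c.
R = rng.normal(size=(N_CHANNELS, CLIP_DIM))
R /= np.linalg.norm(R, axis=1, keepdims=True)

def text_direction(e_neutral, e_target):
    """Normalized CLIP-space direction from neutral to target text."""
    dt = e_target - e_neutral
    return dt / np.linalg.norm(dt)

def global_direction(dt, beta):
    """Project dt onto each channel's relevance; threshold with beta."""
    ds = R @ dt                    # per-channel relevance to the edit
    ds[np.abs(ds) < beta] = 0.0    # beta: disentanglement threshold
    return ds

# Stand-in text embeddings (real ones come from CLIP's text encoder).
e_neutral = rng.normal(size=CLIP_DIM)
e_target = rng.normal(size=CLIP_DIM)
ds = global_direction(text_direction(e_neutral, e_target), beta=0.1)
print("active channels:", int(np.count_nonzero(ds)))
```

Raising beta keeps fewer channels active, trading edit strength for disentanglement from unrelated attributes.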

Run demo.ipynb in a Jupyter notebook

  • Scripts for a CLI environment will be added.

Manipulation options

  • Source image
    • Input image projection
    • Generation from a random seed (z)
  • Text descriptions (neutral, target)
  • Manipulation strength (alpha)
  • Disentanglement threshold (beta)
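Putting the options together, applying an edit amounts to moving the source stylespace code along the (already beta-thresholded) global direction, scaled by alpha and the per-channel standard deviation. This is a minimal sketch with made-up stand-ins for the code, statistics, and direction, not the repo's actual manipulator API.

```python
import numpy as np

rng = np.random.default_rng(2)
N_CHANNELS = 256

# Stand-ins: a source stylespace code, the precomputed per-channel std,
# and a beta-thresholded global direction from the previous step.
s = rng.normal(size=N_CHANNELS)
s_std = np.ones(N_CHANNELS)
ds = rng.normal(size=N_CHANNELS)

def manipulate(s, ds, alpha):
    """Move the stylespace code along ds; alpha controls edit strength."""
    return s + alpha * s_std * ds  # rescale ds to each channel's units

edited = manipulate(s, ds, alpha=3.0)
print(edited.shape)
```

Negative alpha pushes the image away from the target description instead of toward it.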

TODO

  • Save the generator checkpoint produced by pivotal tuning inversion (FFHQ)
  • Refactor the global direction module (especially GPU usage)
