This is an unofficial PyTorch implementation of the paper "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization" (ICCV 2017, arXiv). I referred to the official implementation in Torch and used the pretrained VGG-19 and decoder weights from naoto0804.
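The core AdaIN operation is simple enough to sketch. The following is a minimal illustration of the technique, not the repository's own code: each channel of the content feature map is normalized and then rescaled with the style feature map's channel-wise statistics.

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive Instance Normalization: align the per-channel mean and
    std of the content features to those of the style features."""
    # statistics are computed per sample and per channel, over spatial dims
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True)
    normalized = (content_feat - c_mean) / c_std
    return normalized * s_std + s_mean

content = torch.randn(1, 512, 32, 32)  # e.g. VGG-19 relu4_1 features
style = torch.randn(1, 512, 32, 32)
out = adain(content, style)
```

After this operation, `out` has (approximately) the style features' channel-wise mean and standard deviation; the decoder then maps it back to image space.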
Install the requirements:
$ pip install -r requirements.txt
- Python 3.7+
- PyTorch 1.10
- Pillow
- TorchVision
- NumPy
- imageio
- tqdm
You can try a demo and perform style transfer at the 2022-AdaIN-pytorch-Demo Hugging Face Space.
To run the Streamlit app on your local system, follow these steps:
Install requirements by:
$ pip install -r streamlit_app/requirements.txt
The following additional packages are required for the web app:
- streamlit
- gdown
- packaging
Run the webapp by:
$ streamlit run streamlit_app/app.py
The above command opens a window in your default browser (if available) and also prints the local URL, which you can navigate to in order to use the app.
The encoder uses a pretrained VGG-19 network; download the vgg19 weight. The decoder is trained on the MSCOCO and WikiArt datasets. Run the script train.py:
$ python train.py --content_dir $CONTENT_DIR --style_dir $STYLE_DIR --cuda
usage: train.py [-h] [--content_dir CONTENT_DIR] [--style_dir STYLE_DIR]
[--epochs EPOCHS] [--batch_size BATCH_SIZE] [--resume RESUME] [--cuda]
optional arguments:
-h, --help show this help message and exit
--content_dir CONTENT_DIR
content images folder path
--style_dir STYLE_DIR
style images folder path
--epochs EPOCHS Number of epochs
--batch_size BATCH_SIZE
Batch size
--resume RESUME Continue training from epoch
--cuda Use CUDA
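The decoder is trained with the loss from the paper: a content loss between the decoded image's VGG features and the AdaIN target t, plus a style loss matching channel-wise mean and std of VGG features at several layers. The sketch below illustrates that formulation; the repository's exact implementation may differ.

```python
import torch
import torch.nn.functional as F

def mean_std(feat, eps=1e-5):
    # per-channel statistics over the spatial dimensions
    mean = feat.mean(dim=(2, 3))
    std = (feat.var(dim=(2, 3)) + eps).sqrt()
    return mean, std

def content_loss(decoded_feat, adain_target):
    # MSE between the decoded image's features and the AdaIN output t
    return F.mse_loss(decoded_feat, adain_target)

def style_loss(decoded_feats, style_feats):
    # match channel-wise mean and std at each chosen VGG layer
    loss = 0.0
    for df, sf in zip(decoded_feats, style_feats):
        dm, ds = mean_std(df)
        sm, ss = mean_std(sf)
        loss = loss + F.mse_loss(dm, sm) + F.mse_loss(ds, ss)
    return loss

t = torch.randn(2, 512, 16, 16)
loss_c = content_loss(t, t)  # zero when the features match exactly
feats = [torch.randn(2, 64, 32, 32), torch.randn(2, 128, 16, 16)]
loss_s = style_loss(feats, feats)
```

The total objective in the paper is L = L_c + λ·L_s, with λ trading off content fidelity against stylization.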
Download the vgg19 and decoder weights and place them under the main directory.
To test basic style transfer, run the script test.py. Specify --content_image and --style_image with image paths, or specify --content_dir and --style_dir to iterate over all images under a directory. All outputs are saved in ./results/. Specify --grid_pth to collect all outputs in a grid image, and --color_control to preserve the content image color.
$ python test.py --content_image $IMG --style_image $STYLE --cuda
optional arguments:
-h, --help show this help message and exit
--content_image CONTENT_IMAGE
single content image file
--content_dir CONTENT_DIR
content image directory, iterate all images under this directory
--style_image STYLE_IMAGE
single style image
--style_dir STYLE_DIR
style image directory, iterate all images under this directory
--decoder_weight DECODER_WEIGHT
decoder weight file (default='decoder.pth')
--alpha {Alpha Range}
Alpha [0.0, 1.0] controls style transfer level
--cuda Use CUDA
--grid_pth GRID_PTH
Specify a grid image path (default=None) to generate a grid image
that contains all style-transferred images
--color_control Preserve content image color
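The --alpha flag trades off stylization strength: as in the paper, the AdaIN output is blended with the original content features before decoding, t = α·AdaIN(c, s) + (1−α)·c. A minimal sketch of that blend (illustrative, not the repository's code):

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True)
    return (content_feat - c_mean) / c_std * s_std + s_mean

def blend(content_feat, style_feat, alpha=1.0):
    # alpha=0.0 reproduces the content features; alpha=1.0 is full AdaIN
    t = adain(content_feat, style_feat)
    return alpha * t + (1.0 - alpha) * content_feat

c = torch.randn(1, 512, 16, 16)
s = torch.randn(1, 512, 16, 16)
half = blend(c, s, alpha=0.5)  # partially stylized features
```

Decoding `half` produces a weaker stylization than the default alpha of 1.0.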
To test style transfer interpolation, run the script test_interpolate.py. Specify --style_image with multiple paths separated by commas. Specify --interpolation_weights to interpolate once; all outputs are saved in ./results_interpolate/. Specify --grid_pth to interpolate with different built-in weights (provide 4 style images). Specify --color_control to preserve the content image color.
$ python test_interpolate.py --content_image $IMG --style_image $STYLE $WEIGHT --cuda
optional arguments:
-h, --help show this help message and exit
--content_image CONTENT_IMAGE
single content image file
--style_image STYLE_IMAGE
multiple style image files, separated by commas
--decoder_weight DECODER_WEIGHT
decoder weight file (default='decoder.pth')
--alpha {Alpha Range}
Alpha [0.0, 1.0] (default=1.0) controls style transfer level
--interpolation_weights INTERPOLATION_WEIGHTS
Interpolation weight of each style image, separated by commas.
Do not specify together with --grid_pth.
--cuda Use CUDA
--grid_pth GRID_PTH
Specify a grid image path (default=None) to perform interpolation style
transfer multiple times with different built-in weights and generate a
grid image that contains all style-transferred images. Provide 4 style
images. Do not specify together with --interpolation_weights.
--color_control Preserve content image color
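Interpolation between several style images follows the paper: the decoder is fed a convex combination of the AdaIN outputs, F = Σ_k w_k · AdaIN(f(c), f(s_k)). A sketch of that combination (illustrative only):

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True)
    return (content_feat - c_mean) / c_std * s_std + s_mean

def interpolate_styles(content_feat, style_feats, weights):
    # normalize the weights so they form a convex combination
    total = float(sum(weights))
    out = torch.zeros_like(content_feat)
    for style_feat, w in zip(style_feats, weights):
        out = out + (w / total) * adain(content_feat, style_feat)
    return out

c = torch.randn(1, 512, 16, 16)
styles = [torch.randn(1, 512, 16, 16) for _ in range(4)]
feat = interpolate_styles(c, styles, [1.0, 1.0, 1.0, 1.0])  # equal weights
```

With a single style and any positive weight, this reduces to plain AdaIN, which is why the script accepts either one style image or several.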
To test video style transfer, run the script test_video.py. All outputs are saved in ./results_video/.
$ python test_video.py --content_video $VID --style_image $STYLE --cuda
optional arguments:
-h, --help show this help message and exit
--content_video CONTENT_VIDEO
single content video file
--style_image STYLE_IMAGE
single style image
--decoder_weight DECODER_WEIGHT
decoder weight file (default='decoder.pth')
--alpha {Alpha Range}
Alpha [0.0, 1.0] controls style transfer level
--cuda Use CUDA
--color_control Preserve content image color
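The --color_control option preserves the content image's colors. One standard way to do this is luminance-only transfer, described in the Gatys et al. CVPR 2017 paper listed in the references: keep the stylized luminance channel and restore the content image's chroma channels. Whether this repository uses exactly this variant is an assumption; the sketch below operates on float RGB arrays in [0, 1].

```python
import numpy as np

# RGB <-> YIQ conversion matrices; Y is luminance, I/Q carry the chroma
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def preserve_color(content_rgb, stylized_rgb):
    """Keep the stylized luminance (Y), restore the content chroma (I, Q)."""
    content_yiq = content_rgb @ RGB2YIQ.T
    stylized_yiq = stylized_rgb @ RGB2YIQ.T
    out_yiq = np.concatenate(
        [stylized_yiq[..., :1], content_yiq[..., 1:]], axis=-1)
    return np.clip(out_yiq @ YIQ2RGB.T, 0.0, 1.0)

content = np.full((8, 8, 3), 0.5)   # a gray content image: zero chroma
stylized = np.random.rand(8, 8, 3)  # stand-in for a stylized output
out = preserve_color(content, stylized)
```

For a gray content image the restored chroma is zero, so the output is simply the grayscale version of the stylized image; for a color photo, the output keeps the photo's hues while taking its brightness structure from the stylization.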
(example output images: w/o color control | w/ color control)
- Original video: cutBunny.mp4
- Style image
- Style transfer video: cutBunny_style_picasso_self_portrait.mp4
- X. Huang and S. Belongie. "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization." In ICCV, 2017. arXiv
- Original implementation in Torch
- Pretrained weights
- List of all source URLs of images collected from the internet: Image_sources.txt
- L. A. Gatys, A. S. Ecker, M. Bethge, A. Hertzmann, and E. Shechtman. "Controlling Perceptual Factors in Neural Style Transfer." In CVPR, 2017. arXiv
- A. Hertzmann. Algorithms for Rendering in Artistic Styles. PhD thesis, New York University, 2001.