
FNeVR: Neural Volume Rendering for Face Animation

Due to related agreements, only the testing code is available at the moment.
paper

Environment configuration

The code requires Python 3.8+ and CUDA 11.0+. The setup steps are as follows:

  1. Create conda environment

    conda create -n fnerv python=3.8
    conda activate fnerv
  2. Install pytorch

    conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge
  3. Install the dependencies

    pip install -r requirements.txt
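After the three steps above, a quick import check can confirm that the main packages are visible in the activated environment. This is a minimal sketch using only the standard library; `yaml` is included on the assumption that the demo reads a YAML config (`vox_256.yaml`), but the authoritative dependency list is requirements.txt.

```python
# Sanity-check that the key dependencies are importable before running demo.py.
import importlib.util

def is_installed(module_name: str) -> bool:
    """Return True if `module_name` can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None

for name in ("torch", "torchvision", "torchaudio", "yaml"):
    status = "ok" if is_installed(name) else "MISSING"
    print(f"{name}: {status}")
```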

Pre-trained checkpoint

The pre-trained checkpoint can be downloaded from the following link: one-drive.

Image reenactment/reconstruction

To run a reenactment demo, download the checkpoint and run the following command:

python demo.py --config config/vox_256.yaml --driving_video sup-mat/driving.mp4 --source_image sup-mat/source.png --checkpoint path/to/checkpoint --mode reenactment --relative --adapt_scale

To run a reconstruction demo, download the checkpoint and run the following command:

python demo.py --config config/vox_256.yaml --driving_video sup-mat/driving.mp4 --checkpoint path/to/checkpoint --mode reconstruction

The result will be stored in result.mp4.
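For batch-processing several source/driving pairs, the two commands above can be assembled programmatically. The sketch below is only a thin wrapper around the documented CLI: the flag names are taken verbatim from this README, and `build_demo_cmd` is a hypothetical helper, not part of the repository.

```python
# Build the demo.py command lines documented in this README, so several
# inputs can be processed in a loop via subprocess.
import subprocess
from typing import Optional

def build_demo_cmd(config: str, driving_video: str, checkpoint: str,
                   mode: str, source_image: Optional[str] = None,
                   relative: bool = False, adapt_scale: bool = False) -> list:
    """Assemble an argv list for demo.py using the flags from this README."""
    cmd = ["python", "demo.py",
           "--config", config,
           "--driving_video", driving_video,
           "--checkpoint", checkpoint,
           "--mode", mode]
    if source_image is not None:          # only needed for reenactment
        cmd += ["--source_image", source_image]
    if relative:
        cmd.append("--relative")
    if adapt_scale:
        cmd.append("--adapt_scale")
    return cmd

# Example: the reenactment command shown above.
cmd = build_demo_cmd("config/vox_256.yaml", "sup-mat/driving.mp4",
                     "path/to/checkpoint", "reenactment",
                     source_image="sup-mat/source.png",
                     relative=True, adapt_scale=True)
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually launch the demo
```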

Acknowledgement

Our FNeVR implementation is inspired by FOMM and DECA. We thank the authors of these works for making their code publicly available.