This repository is a reimplementation of Real-Time High-Resolution Background Matting, CVPR 2021 (pdf).
I referred to the official implementation in PyTorch.
I used pretrained weights of DeepLabV3 from VainF.
I share an Anaconda environment yml file. Create the environment with `conda env create -n $ENV_NAME -f py38torch1110.yml`.
You can also check the requirements in the yml file.
The Base Network includes the ASPP module from DeepLabV3. I used the pretrained DeepLabV3 weights (`best_deeplabv3_resnet50_voc_os16.pth`).
usage: `python train_base.py`
This repo uses Hydra for experiment configuration. The configuration files are under `./app/configs`.
To train the base network, set the corresponding parameters in `./app/configs/train_base.yaml`.
arguments:
- `checkpoint_path`: directory where checkpoints are saved
- `logging_path`: directory where logs are saved
- `batch_size`: batch size
- `num_workers`: number of data-loading workers
- `epochs`: number of epochs to train
- `pretrained_model`: path to the pretrained model
defaults:
- `data`: configuration file that handles the dataset paths; see below.

For the dataset path configuration, refer to `./app/configs/data/default.yaml`:
- `original_work_dir`: root directory of the repository
- `data_root`: root directory of the dataset
- `rgb_data_dir`: directory of the RGB dataset
- `bck_data_dir`: directory of the background dataset
- `train_rgb_path`: foreground image directory for training
- `train_alp_path`: alpha matte directory for training
- `valid_rgb_path`: foreground image directory for validation
- `valid_alp_path`: alpha matte directory for validation
- `train_bck_path`: background image directory for training
- `valid_bck_path`: background image directory for validation
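Putting the parameters above together, `./app/configs/train_base.yaml` might look like the following sketch. The values shown are illustrative placeholders, not the repository's actual defaults:

```yaml
# Illustrative sketch of ./app/configs/train_base.yaml -- values are placeholders.
defaults:
  - data: default   # pulls in ./app/configs/data/default.yaml

checkpoint_path: ./checkpoints/base
logging_path: ./logs/base
batch_size: 8
num_workers: 4
epochs: 100
pretrained_model: ./weights/best_deeplabv3_resnet50_voc_os16.pth
```

Since the repo uses Hydra, any of these values can also be overridden on the command line, e.g. `python train_base.py batch_size=4 epochs=50`.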
After training the Base Network, train the Base Network and Refinement Network jointly.
usage: `python train_refine.py`
To train the refinement network, set the corresponding parameters in `./app/configs/train_refine.yaml`.
arguments:
- `checkpoint_path`: directory where checkpoints are saved
- `logging_path`: directory where logs are saved
- `batch_size`: batch size
- `num_workers`: number of data-loading workers
- `epochs`: number of epochs to train
- `pretrained_model`: path to the pretrained model
defaults:
- `data`: configuration file that handles the dataset paths, same as for base training.
You can download my trained weights from here.
With the trained weights, you can test image background matting.
Make sure the source images and background images are in the same order in their respective directories.
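Since images are paired by their order within each directory, a quick sanity check like the one below can catch mismatched inputs before running the network. This is a hypothetical helper, not a function from this repo; it assumes pairing by sorted filename order:

```python
from pathlib import Path

def paired_inputs(src_dir: str, bck_dir: str, exts=(".png", ".jpg", ".jpeg")):
    """Pair the i-th source image with the i-th background image.

    Hypothetical helper: pairs files by sorted filename order, mirroring
    the 'same order in each directory' requirement above.
    """
    srcs = sorted(p for p in Path(src_dir).iterdir() if p.suffix.lower() in exts)
    bcks = sorted(p for p in Path(bck_dir).iterdir() if p.suffix.lower() in exts)
    if len(srcs) != len(bcks):
        # Unequal counts almost always mean the directories are misaligned.
        raise ValueError(f"{len(srcs)} source images vs {len(bcks)} backgrounds")
    return list(zip(srcs, bcks))
```

Printing a few pairs from the returned list is an easy way to confirm each source really matches its intended background.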
usage: `python test_image.py`
To test the network, set the corresponding parameters in `./app/configs/test_image.yaml`.
- `original_work_dir`: root directory of the repository
- `pretrained_model`: path to the pretrained model
- `src_path`: source directory path
- `bck_path`: background directory path
- `output_path`: output directory path
- `output_type`: output types to produce, chosen from [composite layer, alpha matte, foreground residual, error map, reference map]
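An illustrative `./app/configs/test_image.yaml` might look like the sketch below. The paths are placeholders, and the exact identifiers used for the output types (e.g. `alpha_matte`) are assumptions; check the repo's actual config for the accepted values:

```yaml
# Illustrative sketch of ./app/configs/test_image.yaml -- values are placeholders,
# and the output_type identifiers are assumed, not taken from the repo.
original_work_dir: .
pretrained_model: ./checkpoints/refine/best.pth
src_path: ./data/test/src
bck_path: ./data/test/bck
output_path: ./outputs
output_type:
  - alpha_matte
  - composite_layer
```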
Limited datasets are available on the official website.
Example results (columns, left to right): source image, predicted alpha matte, predicted foreground.
- S.Lin, A.Ryabtsev, S.Sengupta, B.Curless, S.Seitz, I.Kemelmacher-Shlizerman. "Real-Time High-Resolution Background Matting.", in CVPR, 2021. (pdf)
- Official Home Page
- Official implementation in PyTorch
- DeepLabV3 pretrained weights
- L.C.Chen, G.Papandreou, F.Schroff, H.Adam. "Rethinking Atrous Convolution for Semantic Image Segmentation.", arXiv preprint, 2017. (arxiv)