[CVPR 2023 (Highlight)] RepMode: Learning to Re-parameterize Diverse Experts for Subcellular Structure Prediction


We introduce subcellular structure prediction (SSP), which aims to predict the 3D fluorescent images of multiple subcellular structures from a 3D transmitted-light image, to the computer vision community. To overcome two crucial challenges of SSP, i.e., partial labeling and multi-scale, we propose Re-parameterizing Mixture-of-Diverse-Experts (RepMode), a network that dynamically organizes its parameters with task-aware priors to handle the specified single-label prediction tasks of SSP. RepMode achieves SOTA overall performance in SSP and shows its potential in task-incremental learning.
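As a rough intuition for the re-parameterization idea, here is a minimal NumPy sketch (names, shapes, and the linear-layer simplification are our own illustrative assumptions, not the paper's actual implementation): because the expert layers are linear, mixing the expert *weights* with task-aware gating and running a single layer yields the same output as mixing the *outputs* of all experts.

```python
import numpy as np

# Illustrative sketch only: linear "experts" stand in for the conv experts.
rng = np.random.default_rng(0)
num_experts, d_in, d_out = 3, 4, 5
experts = rng.normal(size=(num_experts, d_in, d_out))  # expert weight matrices

def gate(task_embedding, W_gate):
    # Task-aware prior -> softmax gating weights over the experts.
    logits = task_embedding @ W_gate
    e = np.exp(logits - logits.max())
    return e / e.sum()

task_emb = rng.normal(size=8)                 # hypothetical task embedding
W_gate = rng.normal(size=(8, num_experts))    # hypothetical gating projection
alpha = gate(task_emb, W_gate)

x = rng.normal(size=(2, d_in))
# Naive mixture: run every expert, then mix the outputs.
y_moe = sum(alpha[k] * (x @ experts[k]) for k in range(num_experts))
# Re-parameterized: mix the weights once, then run a single layer.
W_rep = np.tensordot(alpha, experts, axes=1)
y_rep = x @ W_rep
assert np.allclose(y_moe, y_rep)  # identical by linearity
```

The same identity is what makes it possible to collapse a mixture of (linear) experts into one task-specific layer at inference time.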

🔥 Updates

  • 2023.09: We release the download link of the formatted data for your convenience!
  • 2023.04: The source code is now available!
  • 2023.03: The RepMode website is now online!
  • 2023.03: This paper is selected as a CVPR Highlight (Top 2.6% of 9155 submissions)!
  • 2023.02: We are delighted to announce that this paper was accepted by CVPR 2023!

💻 Device Requirements

To run this code, please make sure that you have:

  1. A computer with >50GB RAM (for loading the dataset)
  2. An NVIDIA GPU with >20GB memory (we used an NVIDIA V100 GPU with 32GB memory)
  3. A Linux operating system (we used Ubuntu 18.04.5)
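If you want to check the RAM requirement programmatically, here is a small optional helper (Linux/macOS only; the >50GB figure above is for holding the formatted dataset in memory, so exact needs may vary):

```python
import os

def total_ram_gb():
    # Total physical memory = page size * number of physical pages.
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9

print(f"Total RAM: {total_ram_gb():.1f} GB")
```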

🛠️ Installation

  1. Create a Conda environment for the code:
conda create --name SSP python=3.9.12
  2. Activate the environment:
conda activate SSP
  3. Install the dependencies:
pip install -r requirements.txt
  4. Follow this guide to set up W&B (a tool for experiment logging).
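As an optional sanity check after installation, this snippet reports whether key packages resolve without importing them (the package list is an assumption; adjust it to match requirements.txt):

```python
import importlib.util

# find_spec returns None if the package is not installed.
for pkg in ("torch", "wandb"):
    status = "found" if importlib.util.find_spec(pkg) else "MISSING"
    print(f"{pkg}: {status}")
```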

💽 Preparing Datasets

We adopt a dataset collection (released by the Allen Institute for Cell Science) containing twelve partially labeled datasets, each of which corresponds to one category of subcellular structures (i.e., one single-label prediction task).
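To make the partial-labeling setup concrete, here is a hedged NumPy sketch (function and tensor names are illustrative, not the repo's actual code): since each sample is annotated for exactly one structure, only that task's prediction can be supervised.

```python
import numpy as np

def single_task_mse(preds, target, task_id):
    """preds: (num_tasks, D, H, W) predictions; target: (D, H, W),
    the label that exists for task_id only."""
    return float(np.mean((preds[task_id] - target) ** 2))

preds = np.zeros((12, 2, 4, 4))   # twelve structure-prediction tasks
target = np.ones((2, 4, 4))       # annotation for a single task
loss = single_task_mse(preds, target, task_id=3)
assert loss == 1.0                # only task 3's output is penalized
```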

Downloading Data

Change the working directory of the terminal to the path of this code:

cd {PATH-TO-THIS-CODE}

Download the dataset collection:

bash scripts/dataset/download_all_data.sh

Splitting Data

First, conduct train/val/test dataset splitting for eleven datasets:

bash scripts/dataset/split_train_set.sh
bash scripts/dataset/split_val_set.sh

Then, construct the DNA dataset based on the existing eleven datasets:

bash scripts/dataset/make_dna_dataset.sh

Each transmitted-light image in this collection has an extra DNA annotation, but there is no dedicated DNA dataset. Therefore, the samples of the DNA dataset are collected from the other datasets.

Integrating Data

The last step is to format these twelve datasets and integrate them:

bash scripts/dataset/integ_dataset.sh

The above command runs main.py to process the data and finally generates three .pth files for the train/val/test splits, respectively. It also serves as a check that the code is ready to run. At the beginning of training, the resulting .pth files are loaded directly into RAM to accelerate dataset loading during training.

❗❗❗ You can directly download the formatted data (i.e. the .pth files) here and move them to {PATH-TO-THIS-CODE}/data/all_data. However, we still recommend conducting the above operations to obtain the original data.
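Since the .pth files are presumably standard torch.save archives, loading a split is a single torch.load call. A minimal sketch of the load-once-into-RAM pattern (the file name and dict layout below are illustrative assumptions, not the repo's actual format):

```python
import os
import tempfile

import torch

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "train.pth")
    # Dummy stand-ins: small 3D volumes instead of the real dataset.
    dummy = {"signal": torch.randn(2, 1, 8, 8, 8),   # transmitted-light volumes
             "target": torch.randn(2, 1, 8, 8, 8)}   # fluorescent volumes
    torch.save(dummy, path)
    data = torch.load(path)  # the whole split is held in RAM afterwards
    assert data["signal"].shape == dummy["signal"].shape
```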

🔬 Training and Evaluation

Run the following command to train and evaluate a model:

bash scripts/run/train_and_eval.sh {GPU_IDX} {MODEL_NAME} {EXP_NAME}

Command-line arguments are as follows:

  1. {GPU_IDX}: The index of the GPU you want to use. (default: 0)
  2. {MODEL_NAME}: The model used for training and evaluation. (default: RepMode)
  3. {EXP_NAME}: The directory for saving the experimental data (logs, checkpoints, etc.). (default: exps/test | we recommend setting it to a sub-directory of exps/)

For example, to train and evaluate RepMode with GPU 1 and save the experimental data in exps/benchmark_RepMode, please run:

bash scripts/run/train_and_eval.sh 1 RepMode exps/benchmark_RepMode

If you only want to evaluate a trained model, please run:

bash scripts/run/only_eval.sh {GPU_IDX} {MODEL_NAME} {EXP_NAME} {MODEL_PATH}

Command-line arguments are as follows:

  1. {GPU_IDX}, {MODEL_NAME}, {EXP_NAME}: The same as above.
  2. {MODEL_PATH}: The path to the trained model used for evaluation. (default: exps/test/checkpoints/model_best_test.p)

If you would like to customize this code, check out scripts/, main.py, eval.py, and config.py for more details!

📑 Citation

If you find our work helpful in your research, please consider citing our paper:

@InProceedings{Zhou_2023_CVPR,
    author    = {Zhou, Donghao and Gu, Chunbin and Xu, Junde and Liu, Furui and Wang, Qiong and Chen, Guangyong and Heng, Pheng-Ann},
    title     = {RepMode: Learning to Re-Parameterize Diverse Experts for Subcellular Structure Prediction},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {3312-3322}
}

✉️ Contact

Feel free to contact me (Donghao Zhou: [email protected]) if anything is unclear or you are interested in potential collaboration.

🤝 Acknowledgement

Our code is built upon the repository of pytorch_fnet. We would like to thank its authors for their excellent work. If you want to use and redistribute our code, please follow this license as well.
