

SegReg: Segmenting OARs by Registering MR Images and CT Annotations
ISBI 2024

Zeyu Zhang*, Xuyin Qi, Bowen Zhang, Biao Wu, Hien Le, Bora Jeong, Zhibin Liao, Yunxiang Liu, Johan Verjans, Minh-Son To, Richard Hartley

*Contact: [email protected]

Website arXiv OpenReview Papers With Code BibTeX

Organ at risk (OAR) segmentation is a critical process in radiotherapy treatment planning, such as for head and neck tumors. Nevertheless, in clinical practice, radiation oncologists predominantly perform OAR segmentation manually on CT scans. This manual process is highly time-consuming and expensive, limiting the number of patients who can receive timely radiotherapy. Additionally, CT scans offer lower soft-tissue contrast than MRI. Although MRI provides superior soft-tissue visualization, its time-consuming acquisition makes it infeasible for real-time treatment planning. To address these challenges, we propose SegReg, which uses Elastic Symmetric Normalization to register MRI for OAR segmentation. SegReg outperforms the CT-only baseline by 16.78% in mDSC and 18.77% in mIoU, showing that it effectively combines the geometric accuracy of CT with the superior soft-tissue contrast of MRI and brings accurate automated OAR segmentation for clinical practice within reach.

pipeline

demo

News

(02/10/2024) 🎉 Our paper has been accepted to ISBI 2024!

(02/07/2024) 👉 Please see our latest work: 3D Medical Imaging Segmentation: A Comprehensive Survey for recent updates on 3D medical imaging segmentation.

(11/16/2023) 🎉 Our paper has been promoted by CVer.

Citation

@article{zhang2023segreg,
  title={SegReg: Segmenting OARs by Registering MR Images and CT Annotations},
  author={Zhang, Zeyu and Qi, Xuyin and Zhang, Bowen and Wu, Biao and Le, Hien and Jeong, Bora and To, Minh-Son and Hartley, Richard},
  journal={arXiv preprint arXiv:2311.06956},
  year={2023}
}

Hardware

2× Intel Xeon Platinum 8360Y 2.40 GHz CPUs, 8× NVIDIA A100 40 GB GPUs, and 256 GB of RAM

Environment

For docker container:

docker pull stevezeyuzhang/colab:1.7.1

For dependencies:

conda create -n segreg
conda activate segreg
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch
cd SegReg/nnUNet
pip install -e .
export nnUNet_raw_data_base="/code/SegReg/DATASET/nnUNet_raw"
export nnUNet_preprocessed="/code/SegReg/DATASET/nnUNet_preprocessed" 
export RESULTS_FOLDER="/code/SegReg/DATASET/nnUNet_trained_models" 
source /root/.bashrc 
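In a fresh shell it is easy to lose the exports above, so a quick sanity check can save a confusing nnU-Net error later. The snippet below is a small illustrative helper (not part of this repo); the variable names match the exports above, which nnU-Net v1 reads at import time:

```python
import os

# The three environment variables nnU-Net (v1) expects, matching the exports above.
REQUIRED_VARS = ("nnUNet_raw_data_base", "nnUNet_preprocessed", "RESULTS_FOLDER")


def missing_nnunet_vars(env=os.environ):
    """Return the nnU-Net environment variables that are unset or empty."""
    return [v for v in REQUIRED_VARS if not env.get(v)]


if __name__ == "__main__":
    missing = missing_nnunet_vars()
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("nnU-Net environment looks good.")
```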

Dataset

For the dataset, see https://han-seg2023.grand-challenge.org/

The expected directory structure is as follows:

├── SegReg
│   ├── DATASET
│   │   ├── nnUNet_preprocessed
│   │   ├── nnUNet_raw
│   │   │   ├── nnUNet_cropped_data
│   │   │   └── nnUNet_raw_data
│   │   │   │   ├── Task001_<TASK_NAME>
│   │   │   │   │   ├── dataset.json
│   │   │   │   │   ├── imagesTr
│   │   │   │   │   │   ├── case_01_0000.nii.gz
│   │   │   │   │   │   ├── case_01_0001.nii.gz
│   │   │   │   │   │   ├── case_02_0000.nii.gz
│   │   │   │   │   │   ├── case_02_0001.nii.gz
│   │   │   │   │   ├── imagesTs
│   │   │   │   │   ├── inferTs
│   │   │   │   │   ├── labelsTr
│   │   │   │   │   │   ├── case_01.nii.gz
│   │   │   │   │   │   ├── case_02.nii.gz
│   │   │   │   │   └── labelsTs
│   │   └── nnUNet_trained_models
│   └── nnUNet

Registration

python register.py <INSTANCE_NUMBER> <TRANSFORMATION>

For the available transformation types, see https://antspy.readthedocs.io/en/latest/registration.html
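We do not reproduce `register.py` here, but the core of an ANTsPy registration looks roughly like the sketch below. The `register_pair` helper and file paths are illustrative; `"ElasticSyN"` is the ANTsPy `type_of_transform` name closest to the Elastic Symmetric Normalization used in the paper (see the ANTsPy docs linked above for the full list):

```python
def register_pair(fixed_path, moving_path, transform="ElasticSyN"):
    """Register a moving MR image onto a fixed CT image with ANTsPy.

    Returns the warped moving image and the forward transform files; the
    transforms can be reused with ants.apply_transforms to warp label maps.
    """
    import ants  # imported lazily so the helper can be defined without ANTsPy installed

    fixed = ants.image_read(fixed_path)
    moving = ants.image_read(moving_path)
    reg = ants.registration(fixed=fixed, moving=moving, type_of_transform=transform)
    return reg["warpedmovout"], reg["fwdtransforms"]


def label_interpolator():
    """Interpolator for warping annotations: nearest neighbour keeps labels discrete."""
    return "nearestNeighbor"
```

When warping CT annotations with the returned transforms, a nearest-neighbour interpolator avoids blending label values.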

Segmentation

Data Preprocessing

nnUNet_plan_and_preprocess -t <TASK_ID>

Training

nnUNet_train 3d_fullres nnUNetTrainerV2 <TASK_ID> <FOLD>

Inference

You can train your own model or find our checkpoint here.

nnUNet_predict -i /code/SegReg/DATASET/nnUNet_raw/nnUNet_raw_data/Task001_<TASK_NAME>/imagesTs -o /code/SegReg/DATASET/nnUNet_raw/nnUNet_raw_data/Task001_<TASK_NAME>/inferTs -t <TASK_ID> -m 3d_fullres -f <FOLD> -chk model_best
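To score predictions in `inferTs` against the ground truth in `labelsTs` with the metrics reported above (mDSC and mIoU), the per-class overlap can be computed directly from the integer label volumes. This is a minimal NumPy sketch (loading the `.nii.gz` files, e.g. with nibabel, is left out; the helper names are ours):

```python
import numpy as np


def dice_and_iou(pred, gt, label):
    """Dice coefficient and IoU for one label in two integer label volumes."""
    p = (pred == label)
    g = (gt == label)
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    psum, gsum = p.sum(), g.sum()
    dice = 2.0 * inter / (psum + gsum) if (psum + gsum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou


def mean_scores(pred, gt, labels):
    """mDSC and mIoU averaged over a set of foreground labels."""
    scores = [dice_and_iou(pred, gt, lab) for lab in labels]
    return (sum(s[0] for s in scores) / len(scores),
            sum(s[1] for s in scores) / len(scores))
```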

Comparative Studies

Ablation Studies

Acknowledgments

Thanks also to the works we used in our comparative studies: