NexToU: Efficient Topology-Aware U-Net for Medical Image Segmentation

| 📃 Paper | 📂 Weight Files |

💡 News

  • (June 16, 2024): Released NexToU v1.2, based on nnU-Net v2.2.
  • (October 13, 2023): 🎉 Our NexToU-based solution won second place 🥈 in both the MICCAI 2023 TopCoW 🐮 and MICCAI 2023 CROWN 👑 challenges.
  • (September 19, 2023): Released the NexToU architecture and training code for nnU-Net V2.
  • (June 14, 2023): Updated the NexToU installation and running demo.
  • (May 26, 2023): Released the NexToU architecture and training code for nnU-Net V1.

Overview

The proposed NexToU architecture follows a hierarchical U-shaped encoder-decoder structure that includes purely convolutional modules and topological ones. NexToU incorporates improved Pool GNN and Swin GNN modules from Vision GNN (ViG), designed to learn both global and local topological representations while minimizing computational costs. It reformulates the topological interaction (TI) module based on the nature of binary trees, rapidly encoding topological constraints into NexToU. This approach enables effective handling of containment and exclusion relationships among various anatomical structures. To maintain consistency in data augmentation and post-processing, we base NexToU on the nnU-Net framework, which can automatically configure itself for any new medical image segmentation task.

(Figure: NexToU Architecture)

Usage

NexToU consists of several main components: loss functions, network architectures, and network trainers.

To integrate NexToU with nnU-Net, you can download NexToU v1.2 (based on nnU-Net v2.2) directly:

wget https://github.com/PengchengShi1220/NexToU/releases/download/v1.2.0/NexToU_v1.2_nnU-Net_v2.2.tar.gz

Alternatively, follow these steps:

  1. Clone the NexToU repository from GitHub:
git clone https://github.com/PengchengShi1220/NexToU.git
  2. Download version 2.2 of nnU-Net:
wget https://github.com/MIC-DKFZ/nnUNet/archive/refs/tags/v2.2.tar.gz
  3. Extract the v2.2.tar.gz file:
tar -zxvf v2.2.tar.gz
  4. Copy the NexToU loss functions, network architecture, and network training code files to the corresponding directories in nnUNet-2.2:
cp NexToU-NexToU_nnunetv2/loss/* nnUNet-2.2/nnunetv2/training/loss/
cp NexToU-NexToU_nnunetv2/network_architecture/* nnUNet-2.2/nnunetv2/training/nnUNetTrainer/variants/network_architecture/
cp NexToU-NexToU_nnunetv2/nnUNetTrainer/* nnUNet-2.2/nnunetv2/training/nnUNetTrainer/
  5. Install nnUNet-2.2 with the NexToU-related functions and its extra dependencies:
cd nnUNet-2.2 && pip install -e .
pip install timm
pip install einops
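The steps above can be combined into one script. This is a sketch that assumes a clean working directory and network access; by default (`DRY_RUN=1`) it only prints each command so you can review it before running anything:

```shell
# Sketch of steps 1-5 above. With DRY_RUN=1 (the default) each command is
# only printed; set DRY_RUN=0 to actually perform the installation.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

run git clone https://github.com/PengchengShi1220/NexToU.git
run wget https://github.com/MIC-DKFZ/nnUNet/archive/refs/tags/v2.2.tar.gz
run tar -zxvf v2.2.tar.gz
run cp -r NexToU-NexToU_nnunetv2/loss/. nnUNet-2.2/nnunetv2/training/loss/
run cp -r NexToU-NexToU_nnunetv2/network_architecture/. nnUNet-2.2/nnunetv2/training/nnUNetTrainer/variants/network_architecture/
run cp -r NexToU-NexToU_nnunetv2/nnUNetTrainer/. nnUNet-2.2/nnunetv2/training/nnUNetTrainer/
run pip install -e nnUNet-2.2
run pip install timm einops
```

The `run` wrapper is a convenience added here, not part of the repository; the commands it wraps are the same as in the numbered steps.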

If you're using the 3d_fullres_nextou configuration, update your nnUNet_preprocessed/DatasetXX/nnUNetPlans.json so that each dimension of the "patch_size" parameter is a multiple of 32. You can add the following JSON snippet to your existing nnUNetPlans.json, for example:

"3d_fullres_nextou": {
    "inherits_from": "3d_fullres",
    "patch_size": [
        64,
        160,
        160
    ]
}
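Since the requirement is that every "patch_size" dimension be divisible by 32, a small helper (hypothetical, not part of the repo) can sanity-check a candidate patch size before you edit the plans file:

```shell
# Hypothetical helper: succeeds only if every given dimension is a
# multiple of 32, as the 3d_fullres_nextou configuration requires.
check_patch_size() {
  for d in "$@"; do
    if [ $(( d % 32 )) -ne 0 ]; then
      echo "dimension $d is not a multiple of 32"
      return 1
    fi
  done
  echo "patch size OK"
}

check_patch_size 64 160 160   # prints: patch size OK
```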

For the BTCV dataset:

nnUNetv2_train 111 3d_fullres_nextou 0 -tr nnUNetTrainer_NexToU_BTI_Synapse

For the RAVIR dataset:

nnUNetv2_train 810 2d 0 -tr nnUNetTrainer_NexToU_BTI_RAVIR

For the ICA dataset:

nnUNetv2_train 115 3d_fullres_nextou 0 -tr nnUNetTrainer_NexToU_BTI_ICA_noMirroring
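Each nnUNetv2_train invocation trains a single cross-validation fold (the trailing 0 above). To cover nnU-Net's standard five folds, you can loop over fold indices 0-4; this sketch uses the BTCV command from above, with a leading `echo` so it only prints the commands rather than launching training:

```shell
# Print the five per-fold training commands for the BTCV setup
# (remove the leading `echo` to actually launch training).
for fold in 0 1 2 3 4; do
  echo nnUNetv2_train 111 3d_fullres_nextou "$fold" -tr nnUNetTrainer_NexToU_BTI_Synapse
done
```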

You can use the relevant components of NexToU in your own projects by importing them from the respective files. Please ensure that you abide by the license agreement while using the code.

If you have any issues or questions, feel free to open an issue on our GitHub repository.

License

NexToU is licensed under the Apache License 2.0. For more information, please see the LICENSE file in this repository.

Citation

If you use NexToU in your research, please cite:

@article{shi2023nextou,
  title={NexToU: Efficient Topology-Aware U-Net for Medical Image Segmentation},
  author={Shi, Pengcheng and Guo, Xutao and Yang, Yanwu and Ye, Chenfei and Ma, Ting},
  journal={arXiv preprint arXiv:2305.15911},
  year={2023}
}
