
DeepGCNs: Can GCNs Go as Deep as CNNs?

In this work, we present new ways to successfully train very deep GCNs. We borrow concepts from CNNs, mainly residual/dense connections and dilated convolutions, and adapt them to GCN architectures. Through extensive experiments, we show the positive effect of these deep GCN frameworks.

[Project] [Paper] [Slides] [Tensorflow Code] [Pytorch Code]

Overview

We conduct extensive experiments to show how different components (#Layers, #Filters, #Nearest Neighbors, Dilation, etc.) affect DeepGCNs. We also provide ablation studies on different types of deep GCNs (MRGCN, EdgeConv, GraphSAGE and GIN).
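As a rough illustration of the residual connections borrowed from CNNs, a residual GCN block simply adds a block's input features to its output so gradients can flow through many stacked graph convolutions. Below is a minimal PyTorch-style sketch of this idea; the generic graph-convolution layer passed in, the tensor shapes, and the class name are assumptions for illustration, not the implementation used in this repository.

```python
import torch
import torch.nn as nn

class ResGCNBlock(nn.Module):
    """Sketch of a residual GCN block: out = conv(x, edge_index) + x."""
    def __init__(self, conv: nn.Module):
        super().__init__()
        # `conv` is any graph convolution mapping node features [N, C] -> [N, C]
        # (e.g. an EdgeConv- or MRGCN-style layer); it is a placeholder here.
        self.conv = conv

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # Skip connection, as in ResNet; a dense (DenseNet-style) variant would
        # instead concatenate: torch.cat([self.conv(x, edge_index), x], dim=-1).
        return self.conv(x, edge_index) + x
```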

For further information and details, please contact Guohao Li and Matthias Müller.

Requirements

Conda Environment

To set up a conda environment with all necessary dependencies, run:

conda env create -f environment.yml
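Afterwards, the environment can be activated with `conda activate <env-name>`, where `<env-name>` is the environment name defined in environment.yml.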

Getting Started

You will find detailed instructions on how to use our code for semantic segmentation of 3D point clouds in the folder sem_seg. Currently, we provide the following:

  • Conda environment
  • Setup of S3DIS Dataset
  • Training code
  • Evaluation code
  • Several pretrained models
  • Visualization code

Citation

Please cite our papers if you find our work helpful:

@InProceedings{li2019deepgcns,
    title={DeepGCNs: Can GCNs Go as Deep as CNNs?},
    author={Guohao Li and Matthias Müller and Ali Thabet and Bernard Ghanem},
    booktitle={The IEEE International Conference on Computer Vision (ICCV)},
    year={2019}
}
@misc{li2019deepgcns_journal,
    title={DeepGCNs: Making GCNs Go as Deep as CNNs},
    author={Guohao Li and Matthias Müller and Guocheng Qian and Itzel C. Delgadillo and Abdulellah Abualshour and Ali Thabet and Bernard Ghanem},
    year={2019},
    eprint={1910.06849},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

License

MIT License

Acknowledgement

This code borrows heavily from PointNet and EdgeConv. We would also like to thank 3d-semantic-segmentation for the visualization code.