Deep Surface Reconstruction from Point Clouds with Visibility Information

Data, code and pretrained models for the ICPR 2022 paper (arXiv).

Teaser figure: reconstruction from a point cloud alone vs. reconstruction from a point cloud with visibility information.

Data

ModelNet10

  • The ModelNet10 models made watertight using ManifoldPlus can be downloaded here on Zenodo.
  • The ModelNet10 scans used in our paper can be downloaded here on Zenodo. The dataset also includes training and evaluation data for ConvONet, Points2Surf, Shape As Points, POCO and DGNN.

ShapeNetv1 (13-class subset of Choy et al.)

  • The watertight ShapeNet models can be downloaded here (provided by the authors of ONet).
  • Please open an issue if you are interested in the scans used in our paper.

Synthetic Rooms Dataset

  • The watertight scenes can be downloaded here (provided by the authors of ConvONet).
  • Please open an issue if you are interested in the scans used in our paper.

Scanning Procedure

You can create point clouds with visibility information for your own dataset using the scan tool. You can use the precompiled scan executable from this repository (which should work on most Ubuntu systems), or compile it yourself using mesh-tools.

./scan -w path/to/workingDir -i filenameMeshToScan --export npz

The scans used in the paper were created with the following settings:

--points 3000 --noise 0.005 --outliers 0.0 --cameras 10

Data Loading

You can use the dataloader.py script to load visibility-augmented point clouds from the scan.npz files.
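The loading step can be sketched as below. This is a minimal example, not the repository's dataloader.py: the key names "points" and "sensor_position" are assumptions for illustration, and the actual field names are defined by the scan tool, so check dataloader.py for the exact format.

```python
import numpy as np

def load_scan(path):
    """Load a visibility-augmented point cloud from a scan .npz file.

    Assumes the archive stores per-point positions and sensor positions
    under the keys "points" and "sensor_position" (hypothetical names;
    see dataloader.py for the real ones).
    """
    data = np.load(path)
    points = data["points"]            # (N, 3) scanned surface points
    sensors = data["sensor_position"]  # (N, 3) sensor position per point
    # Encode visibility as a unit vector from each point toward its sensor.
    vis = sensors - points
    vis /= np.linalg.norm(vis, axis=1, keepdims=True)
    return points, vis
```

The point-to-sensor direction is what distinguishes these scans from plain point clouds: it tells the network from where each point was observed.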

Code and Pretrained Models

You can find our modified code and pretrained models for the surface reconstruction methods tested in our paper. All methods support point clouds with and without visibility information.

References

If you find the code or data in this repository useful, please consider citing:

@inproceedings{sulzer2022deep,
  author={Sulzer, Raphael and Landrieu, Loïc and Boulch, Alexandre and Marlet, Renaud and Vallet, Bruno},
  booktitle={2022 26th International Conference on Pattern Recognition (ICPR)},
  title={Deep Surface Reconstruction from Point Clouds with Visibility Information},
  year={2022},
  pages={2415-2422},
  doi={10.1109/ICPR56361.2022.9956560}
}
