Repository for the Workshop "Deep Learning for Visual Media", Tue 3 March 2020, DHd 2020, Paderborn

Installation

  1. Installing Detectron2 on Linux systems is pretty straightforward.

For Windows 10, a relatively easy solution is to install the Windows Subsystem for Linux (instructions here: https://docs.microsoft.com/en-us/windows/wsl/install-win10). You will also have to install the following packages (in the shell): python, python-dev and opencv-python.

For OS X, you can try to follow the Anaconda instructions posted here: https://medium.com/deepvisionguru/how-to-embed-detectron2-in-your-computer-vision-project-817f29149461. Please note that this is not yet compatible with our demo.py script; we hope to fix that soon.

  1. Clone this repository and enter the folder (or download, extract, and enter it):
git clone https://github.com/ghowa/dhd2020.git
cd dhd2020
  1. Create a virtual environment, so we don't mess with your system Python install, and install all needed packages:

If you have a Conda python install, try this:

conda create --name detectron2
conda activate detectron2

For vanilla python, try this:

python -m venv detectron2
source detectron2/bin/activate
pip install -r detectron2/requirements.txt
  1. Install precompiled Detectron2 with CPU support only:
pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/index.html

OR: Install precompiled Detectron2 for CUDA 10.1:

pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/index.html
  1. Create a new Jupyter kernel which uses the virtual environment you have just created:
ipython kernel install --user --name=detectron2
  1. Download Labelme2COCO converter and make it executable:
curl -JLO https://raw.githubusercontent.com/wkentaro/labelme/master/examples/instance_segmentation/labelme2coco.py
chmod 700 labelme2coco.py
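
To check that the environment is set up correctly, you can run a short Python snippet inside the activated environment. This is just a sanity check, not part of the workshop material:

# Sanity check: both packages should import without errors.
import torch
import detectron2

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # False is expected with the CPU-only wheel
print("detectron2:", detectron2.__version__)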

Try out pretrained models

  1. Copy your sample images to dhd2020/input

  2. Run the following network from Detectron2's model zoo:

python detectron2/demo.py --config-file detectron2/lib/python3.7/site-packages/detectron2/model_zoo/configs/COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml --input input/* --output output  --opts MODEL.DEVICE cpu MODEL.WEIGHTS detectron2://COCO-PanopticSegmentation/panoptic_fpn_R_101_3x/139514519/model_final_cafdb1.pkl

Other interesting models:

python detectron2/demo.py --config-file detectron2/lib/python3.7/site-packages/detectron2/model_zoo/configs/Cityscapes/mask_rcnn_R_50_FPN.yaml --input input/* --output output  --opts MODEL.DEVICE cpu MODEL.WEIGHTS detectron2://Cityscapes/mask_rcnn_R_50_FPN/142423278/model_final_af9cf5.pkl
python detectron2/demo.py --config-file detectron2/lib/python3.7/site-packages/detectron2/model_zoo/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml   --input input/* --output output  --opts MODEL.DEVICE cpu MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl
python detectron2/demo.py --config-file detectron2/lib/python3.7/site-packages/detectron2/model_zoo/configs/COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml --input input/* --output output  --opts MODEL.DEVICE cpu MODEL.WEIGHTS detectron2://COCO-Detection/faster_rcnn_R_101_FPN_3x/137851257/model_final_f6e8b1.pkl
python detectron2/demo.py --config-file detectron2/lib/python3.7/site-packages/detectron2/model_zoo/configs/COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x.yaml --input input/* --output output  --opts MODEL.DEVICE cpu MODEL.WEIGHTS detectron2://COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x/138363331/model_final_997cc7.pkl
python detectron2/demo.py --config-file detectron2/lib/python3.7/site-packages/detectron2/model_zoo/configs/LVIS-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml --input input/* --output output  --opts MODEL.DEVICE cpu MODEL.WEIGHTS  detectron2://LVIS-InstanceSegmentation/mask_rcnn_R_50_FPN_1x/144219072/model_final_571f7c.pkl

Analysis

Run Jupyter notebook

jupyter notebook

Open deep_watching.ipynb and make sure the 'detectron2' kernel is used.
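
If you prefer to script the analysis outside the notebook, here is a minimal inference sketch using Detectron2's Python API. It only illustrates the general workflow (deep_watching.ipynb may be organized differently), and the image path input/example.jpg is a placeholder:

# Minimal inference sketch with Detectron2's Python API (the notebook may differ).
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # only keep reasonably confident detections
cfg.MODEL.DEVICE = "cpu"                     # match the CPU-only install above

predictor = DefaultPredictor(cfg)
im = cv2.imread("input/example.jpg")         # placeholder path
outputs = predictor(im)

# Draw the predictions on the image and save the result.
v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.0)
result = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2.imwrite("output/example_pred.jpg", result.get_image()[:, :, ::-1])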

Annotate

  1. Copy training images to train/ and test images to test/

  2. Add your own categories to the text file 'labels'

  3. Run Labelme

labelme train/ --labels labels
labelme test/ --labels labels
  1. Convert the annotations into COCO JSON:
./labelme2coco.py --labels labels train/ train-coco/
./labelme2coco.py --labels labels test/ test-coco/
  1. Test the annotations in a Jupyter notebook. Run Jupyter notebook:
jupyter notebook

Open coco_test.ipynb and make sure the 'detectron2' kernel is used.
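
As a quick plausibility check outside the notebook, you can also inspect the converted file with pycocotools (pulled in as a Detectron2 dependency). The path below is an assumption about labelme2coco's output layout (annotations.json inside the output folder):

# Quick check of the converted annotations (path is an assumption about labelme2coco's output).
from pycocotools.coco import COCO

coco = COCO("train-coco/annotations.json")
print("images:", len(coco.getImgIds()))
print("annotations:", len(coco.getAnnIds()))
print("categories:", [c["name"] for c in coco.loadCats(coco.getCatIds())])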

Train

  1. Create a YAML config for Detectron2

  2. Register your custom train/test sets with Detectron2

  3. Run the training (a minimal sketch of these three steps follows below)
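
A minimal sketch of these three steps is shown below. Everything in it is illustrative rather than the workshop's actual setup: the base model (a COCO-pretrained Mask R-CNN), the dataset paths (assuming the labelme2coco output layout with annotations.json and JPEGImages/), the number of classes, and the solver settings are placeholders you will need to adapt.

# Rough training sketch; model choice, paths, class count and solver settings are placeholders.
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Step 2: register the custom train/test sets (paths assume the labelme2coco output layout).
register_coco_instances("dhd_train", {}, "train-coco/annotations.json", "train-coco/JPEGImages")
register_coco_instances("dhd_test", {}, "test-coco/annotations.json", "test-coco/JPEGImages")

# Step 1: build a config starting from a COCO-pretrained Mask R-CNN and dump it as YAML.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("dhd_train",)
cfg.DATASETS.TEST = ("dhd_test",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3   # placeholder: number of categories in your 'labels' file
cfg.MODEL.DEVICE = "cpu"              # drop this line if you installed the CUDA 10.1 wheel
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 300             # placeholder: increase for real training
with open("dhd2020_train.yaml", "w") as f:
    f.write(cfg.dump())               # the YAML file from step 1

# Step 3: run the training.
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()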
