project-logo
cone-detector-tf

Cone detector trained using the TensorFlow Object Detection API

Requirements · Usage · Model · License

project-demo

Requirements

Install the following packages with pip, preferably inside a virtual environment.

pip install opencv-python
pip install tensorflow

Usage

Run the program with the following command:

python cone_detector.py

Keep in mind that the app takes square crops of each frame in the video before running detection. This is because the model was fine-tuned from a pre-trained model in the TensorFlow detection model zoo (ssd_mobilenet_v1_coco), whose input is a square image, and it performs much better on square crops.

However, if your video has a different resolution, you might want to change the size of the crops using the variables CROP_WIDTH, CROP_SIZE and CROP_STEP_X. If you don't want to take crops (detection is much faster, but less accurate), uncomment the following lines:

# CROP_WIDTH = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
# CROP_HEIGHT = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
...
# crops = np.array([crops[0]])
# crops_coordinates = [crops_coordinates[0]]
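
For reference, the cropping described above amounts to sliding a square window across each frame and remembering where each crop came from, so that detections can later be mapped back to full-frame coordinates. The sketch below only illustrates the idea; the function name is made up and it assumes the window slides horizontally, so it is not the repository's exact code.

import numpy as np

def square_crops(frame, crop_width, crop_height, step_x):
    """Slide a square window horizontally over a frame (a NumPy array
    from OpenCV) and return the crops plus their top-left coordinates."""
    crops, coordinates = [], []
    for x in range(0, frame.shape[1] - crop_width + 1, step_x):
        crops.append(frame[0:crop_height, x:x + crop_width])
        coordinates.append((x, 0))
    return np.array(crops), coordinates

Disabling the crops, as in the commented-out lines above, is equivalent to keeping a single "crop" that covers the whole frame.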

Model

The model was trained on around 200 images (70% for training, 30% for validation) extracted from a longer version of the sample video, so it might perform worse on different images. If you want to fine-tune this model, you can find the .ckpt files in the model folder.
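
Fine-tuning typically means pointing the fine_tune_checkpoint field of an Object Detection API pipeline config at these files and starting a new training run. As a quick sanity check that the checkpoint is readable, something like the sketch below works; the checkpoint prefix model/model.ckpt is a guess, so use whatever prefix the files in the model folder actually share.

import tensorflow as tf

# Hypothetical checkpoint prefix; replace it with the common prefix of the
# .ckpt files that ship in the model folder.
CKPT_PREFIX = 'model/model.ckpt'

# Print every variable stored in the checkpoint together with its shape,
# which confirms the checkpoint can be restored before fine-tuning.
for name, shape in tf.train.list_variables(CKPT_PREFIX):
    print(name, shape)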

License

This project is licensed under the MIT License - see the LICENSE.md file for details.