This project estimates the velocity of a single moving car in a video using classical approaches as well as a deep-learning-based method (RAFT).
The package is based on https://github.com/princeton-vl/RAFT.
## Installation
A `test2.yml` file describing the conda environment is provided. Alternatively, create an environment with Python 3.8 and install the dependencies manually:
```Shell
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
conda install matplotlib tensorboard scipy opencv
```
The YOLO weights need to be downloaded and placed in the root folder.
## Demos
To run the demo with our trained model on a folder of frames:
```Shell
python demo.py --model=models/enpm673_raft-kitti.pth --path=frames
```
To run on a video:
```Shell
python inference.py --model=models/enpm673_raft-kitti.pth
```
## Required Data
To evaluate/train RAFT, you will need to download the required datasets.
* [KITTI](http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=flow)
## Evaluation
You can evaluate a trained model using `evaluate.py`:
```Shell
python evaluate.py --model=models/enpm673_raft-kitti.pth --dataset=kitti --mixed_precision
```
## Training
We used the following training schedule in our paper (2 GPUs). Training logs will be written to the `runs` directory, which can be visualized using TensorBoard:
```Shell
./train_standard.sh
```
If you have an RTX GPU, training can be accelerated using mixed precision. You can expect similar results in this setting (1 GPU):
```Shell
./train_mixed.sh
```
## (Optional) Efficient Implementation
You can optionally use our alternate (efficient) implementation by compiling the provided CUDA extension:
```Shell
cd alt_cuda_corr && python setup.py install && cd ..
```
and running `demo.py` and `evaluate.py` with the `--alternate_corr` flag. Note: this implementation is somewhat slower than all-pairs, but uses significantly less GPU memory during the forward pass.
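The memory difference comes from RAFT's all-pairs correlation volume, which stores one similarity score for every pair of pixels between the two feature maps. A minimal NumPy sketch of that construction is below; the feature-map sizes are made up for illustration and are not the ones used by this repo:

```python
import numpy as np

def all_pairs_correlation(fmap1, fmap2):
    """Dense all-pairs correlation volume between two feature maps.

    fmap1, fmap2: (H, W, D) feature maps. The result stores one score per
    pixel *pair*, so it grows as (H*W)^2 -- this is why the all-pairs
    version is memory-hungry, and why --alternate_corr trades speed for
    memory by recomputing scores on the fly instead of storing them.
    """
    h, w, d = fmap1.shape
    f1 = fmap1.reshape(h * w, d)
    f2 = fmap2.reshape(h * w, d)
    return f1 @ f2.T  # (H*W, H*W)

# Toy feature maps (sizes are illustrative; RAFT correlates 1/8-resolution,
# 256-dimensional features, which is far larger than this).
rng = np.random.default_rng(0)
h, w, d = 30, 100, 256
corr = all_pairs_correlation(rng.standard_normal((h, w, d), dtype=np.float32),
                             rng.standard_normal((h, w, d), dtype=np.float32))
print(corr.shape)         # (3000, 3000)
print(corr.nbytes / 1e6)  # 36.0 MB even at this toy size
```

The quadratic growth in `H*W` is why full-resolution frames must be downsampled before the volume is built.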
## Other Files and Folders
- Weights and cfg files for YOLO were obtained from https://github.com/pjreddie/darknet.
- The YOLO weights can be downloaded from https://drive.google.com/drive/folders/1h9PqeZ3l5RUURJIxeNMXGt-Ilil3ngrO?usp=sharing (originally from https://pjreddie.com/media/files/yolov3.weights).
- `1684366668.0944755.mp4`: a video for testing.
- `stream.py`: streams video from the Pi camera, adapted from https://singleboardblog.com/real-time-video-streaming-with-raspberry-pi/.
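For reference, each row of a raw YOLOv3 output tensor is `[center_x, center_y, width, height, objectness, 80 class scores]`, with coordinates normalized to the frame. Below is a hedged sketch of filtering such rows for the COCO "car" class (class id 2); the confidence threshold, the objectness-times-score formula, and the frame size are assumptions, and a real pipeline would also apply non-maximum suppression:

```python
import numpy as np

CAR_CLASS_ID = 2      # index of "car" in the COCO class list used by YOLOv3
CONF_THRESHOLD = 0.5  # assumed threshold; tune for your footage

def car_boxes(detections, frame_w, frame_h):
    """Convert raw YOLO output rows to pixel bounding boxes for cars.

    detections: (N, 85) array of [cx, cy, w, h, objectness, 80 scores],
    with coordinates normalized to [0, 1].
    Returns a list of (x, y, w, h, confidence) boxes in pixels.
    """
    boxes = []
    for row in detections:
        scores = row[5:]
        class_id = int(np.argmax(scores))
        confidence = row[4] * scores[class_id]  # assumed scoring scheme
        if class_id == CAR_CLASS_ID and confidence > CONF_THRESHOLD:
            cx, cy, w, h = row[:4] * [frame_w, frame_h, frame_w, frame_h]
            boxes.append((int(cx - w / 2), int(cy - h / 2),
                          int(w), int(h), float(confidence)))
    return boxes

# One synthetic detection: a car centered in a 640x480 frame.
det = np.zeros((1, 85))
det[0, :5] = [0.5, 0.5, 0.25, 0.25, 0.9]
det[0, 5 + CAR_CLASS_ID] = 0.95
print(car_boxes(det, 640, 480))  # one box: (240, 180, 160, 120, ~0.855)
```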
## Notes
- To visualize the motion vectors of the car, uncomment lines 51-56 in `demo.py` and lines 62-68 in `inference.py`.
- To visualize the bounding rectangle of the car, uncomment lines 133-137 in `demo.py` and lines 132-135 in `inference.py`.
- Live video streaming requires additional hardware; we used line 210 of `inference.py` for this. Note that the IP address will change.
- We downloaded images online and annotated them for YOLO training and testing, but we could not get the right output, so we switched to existing trained models.
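The core step, turning the optical flow inside the car's bounding box into a speed, can be sketched as below. The meters-per-pixel scale and frame rate here are placeholder assumptions that must be calibrated for the actual camera setup, and the repo's scripts may combine these quantities differently:

```python
import numpy as np

def car_speed(flow, bbox, fps, meters_per_pixel):
    """Estimate the car's speed (m/s) from dense optical flow.

    flow: (H, W, 2) per-pixel displacement in pixels between two frames
          (e.g. RAFT output).
    bbox: (x, y, w, h) bounding box of the car in pixels (e.g. from YOLO).
    The median flow magnitude inside the box is used, which is robust to
    background pixels leaking into the box.
    """
    x, y, w, h = bbox
    patch = flow[y:y + h, x:x + w]                     # flow inside the box
    magnitudes = np.linalg.norm(patch.reshape(-1, 2), axis=1)
    pixels_per_frame = np.median(magnitudes)
    return pixels_per_frame * fps * meters_per_pixel

# Toy example: uniform 4 px/frame horizontal flow at 30 fps with an assumed
# scale of 0.05 m/pixel -> 4 * 30 * 0.05 = 6 m/s.
flow = np.zeros((100, 200, 2))
flow[..., 0] = 4.0
print(car_speed(flow, (50, 20, 60, 40), fps=30, meters_per_pixel=0.05))  # 6.0
```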
## Results
- YOLO result
- RAFT result
- Combined result
Thanks to my teammates for their contributions to this project: [Nishant](https://github.com/nishantpandey4) for creating the GitHub repo.