
CRLFnet


The source code of CRLFnet, a camera-radar-lidar fusion detection net based on ROS, YOLOv3 and OpenPCDet.

INSTALL & BUILD

Env: Ubuntu 20.04 + ROS (Noetic) + Python 3.x

The absolute paths in the following files may need to be adapted to your environment:

    File path                                                            Line(s)
    src/camera_info/get_cam_info.cpp                                     26, 64, 102, 140, 170, 216, 254, 292, 330, 368
    src/LidCamFusion/OpenPCDet/tools/cfgs/custom_models/pointrcnn.yaml   5
    src/LidCamFusion/OpenPCDet/tools/cfgs/custom_models/pv_rcnn.yaml     5

Docker

Build the image from the Dockerfile:

    docker build -t [name]:[tag] ./docker/

or pull the image directly:

    docker pull gzzyyxy/crlfnet:yxy
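
To run a container from the image, something like the following should work. This is a sketch: the --gpus flag assumes the NVIDIA Container Toolkit is installed, --network host is an assumption to let ROS nodes in the container reach a roscore on the host, and /bin/bash assumes the image defines no dedicated entrypoint.

    # sketch: --gpus all requires the NVIDIA Container Toolkit;
    # --network host lets ROS nodes inside the container reach a host roscore
    docker run --gpus all --network host -it --rm gzzyyxy/crlfnet:yxy /bin/bash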

Rad-Cam Fusion

Necessary configurations for GPU and model data

  • If a GPU and CUDA are available on your device, set the parameter use_cuda to True in src/site_model/config/config.yaml.

  • Please download yolo_weights.pth from jbox and move it to src/site_model/src/utils/yolo/model_data. A sketch of both steps is given below.
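
A minimal sketch of both steps from the repository root. The download itself is manual; ~/Downloads/yolo_weights.pth is an assumed location, and the sed edit assumes the key appears as use_cuda: False in config.yaml.

    # enable CUDA for the fusion model (assumes the key is spelled exactly 'use_cuda')
    sed -i 's/use_cuda: *False/use_cuda: True/' src/site_model/config/config.yaml

    # place the manually downloaded weights (assumed download location)
    mv ~/Downloads/yolo_weights.pth src/site_model/src/utils/yolo/model_data/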

Run The Rad-Cam Fusion Model

The steps to run the radar-camera fusion are listed below.

For the last command, an optional parameter --save or -s is available if you need to save the tracks of vehicles as images. The --mode or -m parameter has three options: normal, off-yolo and from-save. The off-yolo and from-save modes enable the user to run YOLO separately to simulate a higher FPS.

    cd /ROOT_DIR/

    # load the simulation scene
    roslaunch site_model spawn.launch   # load the site
    roslaunch pkg racecar.launch        # load the vehicle
    rosrun pkg servo_commands.py        # control the vehicles manually
    rosrun pkg keyboard_teleop.py       # use WASD to control the vehicle

    # run the radar message filter
    rosrun site_model radar_listener.py
    
    # run the rad-cam fusion program
    cd src/site_model
    python -m src.RadCamFusion.fusion [-m MODE] [-s]
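
For instance, to run YOLO separately in off-yolo mode and save the vehicle tracks as images:

    python -m src.RadCamFusion.fusion -m off-yolo -s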

Camera Calibration

Two commands are needed for camera calibration after spawn.launch is launched. The relevant files already exist in the repository, so if the poses of the model components in the .urdf files haven't been modified, this step can be skipped.

    rosrun site_model get_cam_info # get relevant parameters of cameras from gazebo
    python src/site_model/src/tools/RadCamFusion/generate_calib.py # generate calibration formula according to parameters of cameras

Lid-Cam Fusion

This part uses OpenPCDet as the detection tool; refer to CustomDataset.md for how to train on a self-produced dataset.

Config Files

Configurations for model and dataset need to be specified:

  • Model Configs tools/cfgs/custom_models/XXX.yaml
  • Dataset Configs tools/cfgs/dataset_configs/custom_dataset.yaml

Currently pointrcnn.yaml and pv_rcnn.yaml are supported.

Datasets

Create dataset infos before training:

    cd OpenPCDet/
    python -m pcdet.datasets.custom.custom_dataset create_custom_infos tools/cfgs/dataset_configs/custom_dataset.yaml

The files custom_infos_train.pkl, custom_dbinfos_train.pkl and custom_infos_test.pkl will be saved to data/custom.
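
For reference, the dataset is expected to be organized roughly as follows before the infos are generated. The folder names are assumptions based on OpenPCDet's custom-dataset convention; CustomDataset.md is authoritative.

    data/custom/
    ├── ImageSets/
    │   ├── train.txt        # frame ids used for training
    │   └── val.txt          # frame ids used for validation
    ├── points/              # point clouds, one .npy file per frame
    └── labels/              # annotations, one .txt file per frame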

Train

Specify the model using the YAML files defined above.

    cd tools/
    python train.py --cfg_file path/to/config/file/

For example, if using PV_RCNN for training:

    cd tools/
    python train.py --cfg_file cfgs/custom_models/pv_rcnn.yaml --batch_size 2 --workers 4 --epochs 80
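
OpenPCDet writes checkpoints and TensorBoard logs under output/. To monitor training (a sketch, assuming the default output layout used elsewhere in this README):

    tensorboard --logdir ../output/custom_models/pv_rcnn/default/tensorboard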

Pretrained Model

Download the pretrained models through these links:

    Model       Time cost   URL
    PointRCNN   ~3h         https://drive.google.com/file/d/11gTjqraBqWP3-ocsRMxfXu2R7HsM0-qm/view?usp=sharing
    PV_RCNN     ~6h         https://drive.google.com/file/d/11gTjqraBqWP3-ocsRMxfXu2R7HsM0-qm/view?usp=sharing

Predict (Local)

Prediction on a local dataset helps to check the results of training.

    python pred.py --cfg_file path/to/config/file/ --ckpt path/to/checkpoint/ --data_path path/to/dataset/

For example:

    python pred.py --cfg_file cfgs/custom_models/pv_rcnn.yaml --ckpt ../output/custom_models/pv_rcnn/default/ckpt/checkpoint_epoch_80.pth --data_path ../data/custom/testing/velodyne/

The results can be visualized in RViz.

Run The Lid-Cam Fusion Model

Follow these steps to run the lidar-camera fusion only. Some of them need separate bash terminals. For the last command, the optional parameter --save_result is available if you need to save the fusion results as images.

    cd /ROOT_DIR/

    roslaunch site_model spawn.launch # start the solid model

    # (generate camera calibrations if needed)

    python src/site_model/src/LidCamFusion/camera_listener.py # cameras around lidars start working

    python src/site_model/src/LidCamFusion/pointcloud_listener.py # lidars start working

    rosrun site_model pointcloud_combiner # combine all the point clouds and fix their coords

    cd src/site_model/
    python -m src.LidCamFusion.fusion [--save_result] # start camera-lidar fusion

Run the whole model

The whole project contains several parts that need to be started up through separate commands. The following commands show how; a convenience script sketch follows the block.

    cd /ROOT_DIR/

    source ./devel/setup.bash
    
    roslaunch site_model spawn.launch

    # (generate camera calibrations if needed)

    rosrun site_model radar_listener.py
    
    cd src/site_model
    python -m src.RadCamFusion.fusion [-m MODE] [-s]


    python src/site_model/src/LidCamFusion/camera_listener.py

    python src/site_model/src/LidCamFusion/pointcloud_listener.py

    rosrun site_model pointcloud_combiner

    cd src/site_model/
    python -m src.LidCamFusion.fusion [--save_result]
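
Since each component blocks its own terminal, a convenience script can start everything at once. The sketch below is hypothetical and not part of the repository; the sleep durations are guesses to give Gazebo and the listeners time to come up.

    #!/usr/bin/env bash
    # hypothetical launcher -- not part of the repository
    source ./devel/setup.bash

    roslaunch site_model spawn.launch &          # simulation scene
    sleep 10                                     # wait for gazebo (guess)

    rosrun site_model radar_listener.py &
    python src/site_model/src/LidCamFusion/camera_listener.py &
    python src/site_model/src/LidCamFusion/pointcloud_listener.py &
    rosrun site_model pointcloud_combiner &
    sleep 5                                      # wait for the listeners (guess)

    ( cd src/site_model && python -m src.RadCamFusion.fusion ) &
    ( cd src/site_model && python -m src.LidCamFusion.fusion ) &

    wait   # keep the script alive until all components exit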

Issues

Some problems may occur during debugging.

