
Uncertainty-Aware DRL for Autonomous Vehicle Crowd Navigation in Shared Space (IEEE-IV-2024)


UncertaintyAware_DRL_CrowdNav

This repository contains the code for our paper titled "Uncertainty-Aware DRL for Autonomous Vehicle Crowd Navigation in Shared Space". For more details, please refer to our paper. A video of the simulation results is also provided here.

illustration

Abstract

Our method introduces an innovative approach for safe and socially compliant navigation of low-speed autonomous vehicles (AVs) in shared environments with pedestrians. Unlike existing deep reinforcement learning (DRL) algorithms, which often overlook uncertainties in pedestrians' predicted trajectories, our approach integrates prediction and planning while considering these uncertainties. This integration is facilitated by a model-free DRL algorithm trained in a novel simulation environment reflecting realistic pedestrian behavior in shared spaces with vehicles. We employ a novel reward function that encourages the AV to respect pedestrians' personal space, reduce speed during close approaches, and minimize collision probability with their predicted paths.
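The reward shaping described above combines three penalties. As a toy illustration only (the terms, thresholds, and coefficients below are placeholders, not the paper's actual reward function), the idea can be sketched as:

```python
# Toy sketch of the reward-shaping idea described in the abstract; the
# terms and coefficients are illustrative, NOT the paper's actual reward.

def shaped_reward(dist_to_ped: float, speed: float, collision_prob: float,
                  personal_space: float = 1.5, close_range: float = 3.0) -> float:
    """Combine the three penalties the abstract mentions."""
    r = 0.0
    # Penalize intruding into a pedestrian's personal space.
    if dist_to_ped < personal_space:
        r -= 1.0 * (personal_space - dist_to_ped)
    # Penalize high speed during close approaches.
    if dist_to_ped < close_range:
        r -= 0.5 * speed
    # Penalize overlap with pedestrians' predicted paths.
    r -= 2.0 * collision_prob
    return r

print(shaped_reward(dist_to_ped=1.0, speed=2.0, collision_prob=0.1))
```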

Installation

Create a virtual environment or conda environment using Python 3.9, and install the required Python packages:

pip install -r requirements.txt

Install PyTorch 1.12.1 using the instructions here:

pip install torch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1

Install OpenAI Baselines:

git clone https://github.com/openai/baselines.git
cd baselines
pip install -e .
cd ..

Overview

There are five main folders within this repository:

  • move_plan/: Contains the configuration file and policy used for the robot/AV in the simulation environment.

  • ped_pred/: Includes files for running inference with our pedestrian trajectory prediction model, the Polar Collision Grid (PCG), and its uncertainty-aware version (UAW-PCG).

  • ped_sim/: Contains the files for the simulation environment.

  • rl/: Holds the files for the DRL policy network, the PPO algorithm, and the wrapper for the prediction network.

  • trained_model/: Contains the trained models reported in our paper.

Simulation Environment

For our work on crowd navigation for autonomous vehicles (AVs) in shared spaces, we have developed a custom simulation environment. Built on the HBS dataset, which captures pedestrians' and vehicles' trajectories in shared spaces, the environment replicates real pedestrian behaviors and thereby provides realistic scenarios for effective AV training. We also use this dataset to train our data-driven pedestrian trajectory prediction module.

From the HBS dataset, we have extracted 310 scenarios corresponding to the vehicles in the dataset, which are divided into training, testing, and validation sets. Pedestrian behaviors are simulated using real-world data, while AV actions are governed by the DRL policy network. These scenarios present dynamic situations where AVs must adapt by decelerating to yield to pedestrians or accelerating to avoid collisions.

Because the simulation environment is built from real trajectory data, each scenario comes with a human-driven trajectory that can be compared against the trajectory produced by the AV's trained navigation policy. Integrated into a gym environment, our simulator, named PedMove_gym, can serve as a benchmark for training AVs in crowd navigation.
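Since PedMove_gym is integrated into a gym environment, running a scenario follows the standard reset/step rollout loop. The sketch below illustrates that loop with a minimal stand-in environment; `ToyPedEnv`, its observation dictionary, and its reward are placeholders, not the real PedMove_gym API:

```python
# Minimal stand-in showing the gym-style rollout loop a PedMove_gym
# scenario would follow. ToyPedEnv and its fields are placeholders,
# NOT the real PedMove_gym API.

class ToyPedEnv:
    """Tiny episodic environment: the 'AV' advances until a fixed horizon."""
    def __init__(self, horizon=5):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return {"av_position": 0.0}           # placeholder observation

    def step(self, action):
        self.t += 1
        obs = {"av_position": float(self.t) * action}
        reward = -abs(action)                 # placeholder reward
        done = self.t >= self.horizon
        return obs, reward, done, {}


def run_episode(env, policy):
    """Roll out one episode and return the cumulative reward."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total


if __name__ == "__main__":
    total = run_episode(ToyPedEnv(), policy=lambda obs: 1.0)
    print(total)  # 5 steps, reward -1 each -> -5.0
```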

Model training

To customize the training process for your desired model, adjust the configurations in the following two files, with key parameters outlined below:

  • Environment configurations in move_plan/configs/config.py.

    • sim.predict_method offers the following options:
      • inferred: Uses predicted trajectories of pedestrians in the DRL algorithm, relying on the PCG/UAW-PCG predictor model.
      • none: Does not use predicted trajectories of pedestrians in the DRL algorithm.
      • truth: Uses predicted trajectories of pedestrians in the DRL algorithm, relying on the ground truth prediction from the dataset.
  • Network configurations in arguments.py

    • env_name offers the following options and must align with sim.predict_method in move_plan/configs/config.py:
      • PedSimPred-v0: When using prediction (either PCG/UAW-PCG or ground truth).
      • PedSim-v0: When not using prediction.
    • uncertainty_aware is a boolean argument:
      • True: When using the uncertainty-aware Polar Collision Grid prediction model (UAW-PCG)
      • False: When using the original Polar Collision Grid prediction model (PCG)
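The alignment rule between the two config files can be made explicit with a small guard. This is only a sketch: the helper functions below do not exist in the repository, and the mapping encodes the assumption (suggested by the environment names) that the Pred-suffixed environment is the one that consumes predicted trajectories:

```python
# Sketch of a consistency check between config.py and arguments.py.
# Assumption: the Pred-suffixed env consumes predictions; the helper
# names here are illustrative and do not exist in the repository.

PREDICTION_METHODS = {"inferred", "truth", "none"}

def expected_env_name(predict_method: str) -> str:
    """Return the env_name that matches a sim.predict_method setting."""
    if predict_method not in PREDICTION_METHODS:
        raise ValueError(f"unknown predict_method: {predict_method!r}")
    # 'inferred' and 'truth' both feed predicted trajectories to the policy.
    return "PedSimPred-v0" if predict_method != "none" else "PedSim-v0"

def check_config(predict_method: str, env_name: str, uncertainty_aware: bool) -> None:
    """Raise if the settings from the two config files disagree."""
    expected = expected_env_name(predict_method)
    if env_name != expected:
        raise ValueError(
            f"env_name {env_name!r} does not match "
            f"predict_method {predict_method!r}; expected {expected!r}")
    # uncertainty_aware picks UAW-PCG vs. PCG, so it only applies when
    # the learned predictor is in use.
    if uncertainty_aware and predict_method != "inferred":
        raise ValueError("uncertainty_aware=True requires predict_method='inferred'")

check_config("inferred", "PedSimPred-v0", uncertainty_aware=True)  # passes silently
```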

Once all adjustments have been made, execute the following command:

python train.py

Model testing

To test our pre-trained models as presented in the paper and reproduce the results of Table III, adjust the arguments in lines 24-31 of the test.py file and execute the following command:

python test.py

The three main arguments to adjust are as follows:

  • test_model: Specifies the name of the trained model to test:

    • UAW_PCG_pred: DRL model trained with the UAW-PCG prediction model.

    • PCG_pred: DRL model trained with the PCG prediction model.

    • GT_pred: DRL model trained with the ground truth prediction model.

    • No_pred: DRL model trained without any prediction data.

    • No_pred_SD: The DRL model trained without any prediction data but with a speed-dependent danger penalty reward function.

    • Human_Driver: Trajectory of the human driver in the dataset.

      Note: the config.py and arguments.py in the saved models folder will be loaded, instead of those in the root directory. Therefore, there is no need to change the config and argument files of the root directory when generating the test result for each provided trained model.

  • test_case: Specifies the scenarios in the test set to test the model on:

    • -1: Runs the test on all scenarios in the test set

    • A single number in the range [248, 310]: Runs the test on the specified scenario in the test set.

      Note: Among the 310 extracted scenarios from the HBS dataset, here are the scenario numbers within each subdivided category of train, validation, and test:

      • validation: scenario numbers 0 - 47
      • train: scenario numbers 48 - 247
      • test: scenario numbers 248 - 310
  • visualize: If set to True, visualizes the simulation environment, with the GIF saved in trained_model/ColliGrid_predictor/visual/gifs.

    Note: Visualization will significantly slow down testing.
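The way test_case selects scenarios can be sketched as a small helper. The functions below are illustrative, not repository code; the split boundaries follow the list above (taking validation as 0-47 so the splits do not overlap):

```python
# Sketch of how a test_case value maps to scenarios. The split boundaries
# follow the README's listing (validation taken as 0-47 so the splits do
# not overlap); resolve_test_cases is illustrative, not repository code.

TEST_RANGE = range(248, 311)  # scenario numbers 248-310 inclusive

def split_of(scenario: int) -> str:
    """Return the dataset split a scenario number belongs to."""
    if 0 <= scenario <= 47:
        return "validation"
    if 48 <= scenario <= 247:
        return "train"
    if scenario in TEST_RANGE:
        return "test"
    raise ValueError(f"scenario {scenario} is outside the extracted scenarios")

def resolve_test_cases(test_case: int) -> list:
    """Expand the test_case argument: -1 means every test-set scenario."""
    if test_case == -1:
        return list(TEST_RANGE)
    if split_of(test_case) != "test":
        raise ValueError(f"scenario {test_case} is not in the test split")
    return [test_case]

print(len(resolve_test_cases(-1)))  # 63 test scenarios (248-310)
```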


Citation

@article{golchoubian2024uncertainty,
  title={Uncertainty-Aware DRL for Autonomous Vehicle Crowd Navigation in Shared Space},
  author={Golchoubian, Mahsa and Ghafurian, Moojan and Dautenhahn, Kerstin and Azad, Nasser Lashgarian},
  journal={IEEE Transactions on Intelligent Vehicles},
  year={2024},
  publisher={IEEE}
}

@inproceedings{golchoubian2023polar,
  title={Polar Collision Grids: Effective Interaction Modelling for Pedestrian Trajectory Prediction in Shared Space Using Collision Checks},
  author={Golchoubian, Mahsa and Ghafurian, Moojan and Dautenhahn, Kerstin and Azad, Nasser Lashgarian},
  booktitle={2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)},
  pages={791--798},
  year={2023},
  organization={IEEE}
}

Acknowledgment

This project builds upon the codebase from the CrowdNav_Prediction_AttnGraph repository.