
4. Building an autonomous robot with ROS


Intro

Over the past few years, the automotive industry has undergone a paradigm shift. New technologies such as self-driving cars and connected vehicles have brought in a new form of mobility. The way we conceive of cars has changed: a modern car can sense its surroundings and react accordingly. These vehicles carry state-of-the-art sensors and advanced control systems that identify appropriate navigation paths when given a waypoint, and many companies are adopting and implementing such technologies.

What are self-driving cars?

A self-driving car is a computer-controlled car that perceives its surroundings using a variety of sensors and drives autonomously without human intervention. In simple terms, we call them “cars with brains”.

Concept of self-driving cars

The concept of self-driving cars can be described in five parts: sensor fusion, mapping, localization, navigation, and control. Although mapping and localization can be treated as a single process, each is worth discussing in detail.

Sensor fusion is the process of combining data from multiple sensors to produce an estimate with less uncertainty than any single sensor could provide on its own. Mapping and localization are predominantly used in self-driving-car research and are governed by SLAM (Simultaneous Localization and Mapping) algorithms. Localization answers the question of where the entity is within a given world or map.
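
To make the idea concrete, here is a minimal sketch (not from the original project) of the core principle behind Kalman-style sensor fusion: two noisy readings of the same quantity are combined by inverse-variance weighting, so the fused estimate is never less certain than the best individual sensor. The sensor names and noise values are illustrative assumptions.

```python
# Minimal sensor-fusion sketch: inverse-variance weighting of two readings.
def fuse(z1, var1, z2, var2):
    """Combine two measurements of the same quantity.

    Each reading is weighted by the inverse of its variance, so the
    fused variance is always <= the variance of the better sensor.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    estimate = (w1 * z1 + w2 * z2) / (w1 + w2)
    variance = 1.0 / (w1 + w2)
    return estimate, variance

# Hypothetical example: lidar reads 2.00 m (low noise),
# an ultrasonic sensor reads 2.30 m (high noise).
est, var = fuse(2.00, 0.01, 2.30, 0.09)
print("fused distance: %.2f m, variance: %.3f" % (est, var))
```

The fused distance lands close to the lidar reading (about 2.03 m) because the lidar's variance is much smaller, which is exactly the behavior a Kalman filter generalizes to many sensors over time.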

Once the entity is localized, the next decision is where to navigate when given a waypoint; this is called path planning. The step after that is actuation, which falls under control of the entity.
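
In ROS, planning and control toward a waypoint are typically exposed through the standard move_base action interface. The sketch below sends a single goal pose; the frame and coordinates are illustrative assumptions, not values from the original project.

```python
#!/usr/bin/env python
# Sketch: send one waypoint to the ROS navigation stack via move_base.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_waypoint')

client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()  # block until the navigation stack is up

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'    # plan in the map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.5      # illustrative waypoint: 1.5 m ahead
goal.target_pose.pose.orientation.w = 1.0   # face along +x

client.send_goal(goal)       # the planner computes a path to the goal...
client.wait_for_result()     # ...and the controller actuates the base
rospy.loginfo('Navigation result state: %s', client.get_state())
```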

There are various methods to realize this. Using ROS (Robot Operating System) is one of them.
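
For readers new to ROS, a node can be very small. The following sketch (again illustrative, not the project's code) publishes velocity commands on /cmd_vel, the conventional topic that a differential-drive base driver subscribes to; the forward speed is an assumed value.

```python
#!/usr/bin/env python
# Minimal rospy node: drive a base forward by publishing Twist messages.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('drive_forward')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
rate = rospy.Rate(10)  # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.2  # assumed forward speed in m/s

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```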

Real-time implementation

Further, we validated the simulation results by building a physical robot with the same dimensions as the virtual robot we created for simulation. The key components of the robot were:

- a wooden chassis
- a Raspberry Pi 3 Model B+
- motors with encoders, driven by an L298 motor driver
- a 2D lidar
- a Kinect depth camera
- an Arduino acting as a bridge between ROS and the actual motors

We also installed hardware-specific libraries to integrate the sensors with ROS. This example uses the ROS multi-machine configuration: the host PC and the remote host are connected to the same access point with different IP addresses, the host PC acting as the master and the remote host as the slave. All visualization is done on the host PC. The Raspberry Pi 3 B+ serves as the remote host; it rides on the mobile robot and is responsible for receiving and transmitting sensor data. Communication between the Arduino and the Raspberry Pi is realized using the ros_arduino_bridge package.
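
As a sketch of the multi-machine setup described above, once both machines point at the same ROS master, topics published on the robot's Raspberry Pi become visible to nodes on the host PC. The IP addresses below are placeholders, and the listener is an illustrative example rather than the project's code.

```python
#!/usr/bin/env python
# Host-PC side of a multi-machine setup: listen to lidar scans published
# by the robot's Raspberry Pi. Before launching, each machine points at
# the master running on the host PC, e.g. in the shell:
#   host PC:      export ROS_MASTER_URI=http://192.168.1.10:11311
#   Raspberry Pi: export ROS_MASTER_URI=http://192.168.1.10:11311
#                 export ROS_IP=192.168.1.20
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    # These messages originate on the robot's Raspberry Pi.
    rospy.loginfo('received %d lidar ranges', len(scan.ranges))

rospy.init_node('scan_listener')
rospy.Subscriber('/scan', LaserScan, on_scan)
rospy.spin()
```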

Results

Having successfully mapped, localized, and navigated our virtual robot in a Gazebo simulation, we validated the simulation results on a physical robot; the following are the results we obtained in real time. They show the navigation of the bot in the real world.