
Simulation for TIAGo robot tracking a moving object.

Assignment for the SofAR class of the Robotics Engineering course at Università degli Studi di Genova. The assignment concerns the development of a software architecture in ROS 2 where, in a simulation launched in Webots, the TIAGo robot (made by @palrobotics) follows a moving item seen by its RGB-D camera.

To complete the task, we decided to take a different approach to the problem. Using computer vision libraries and simple PID controllers, we implemented a reasonable architecture in which TIAGo follows a target and tries to keep it at the center of its field of view.

Installing the package.

First of all, the simulation runs on ROS 2 Foxy. To install ROS 2, look at the following link. Then you have to build your workspace, which can be found here.
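
Once ROS 2 Foxy is installed, building the workspace follows the usual colcon flow. This is only a sketch: the ~/ros2_ws path is an assumption, so use wherever you cloned the workspace.

source /opt/ros/foxy/setup.bash
cd ~/ros2_ws            # assumption: replace with your workspace location
colcon build
source install/setup.bash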

The approach to the project was computer-vision based, so we had to decide which open-source CV library to use; we chose OpenCV. To install OpenCV:

sudo apt update
sudo apt install python3-opencv
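
As a quick sanity check (not part of the original instructions), you can verify that the Python bindings are available:

import cv2
print(cv2.__version__)  # prints the installed OpenCV version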

To develop a good and simple PID controller, we decided to use the simple-pid library. To install it:

sudo apt update
sudo apt install python3-pip
pip install simple-pid

The simulation environment we decided to use is Webots for ROS 2. Please keep in mind that Webots works only on amd64 architectures. To install it:

wget -qO- https://cyberbotics.com/Cyberbotics.asc | sudo apt-key add -
sudo apt-add-repository 'deb https://cyberbotics.com/debian/ binary-amd64/'
sudo apt-get update
sudo apt-get install webots

Our approach to the problem.

First of all, we decided to split the problem into its main aspects, breaking the macro problem down into smaller sub-problems:

  1. Finding an initial compliant simulation in which TIAGo can move around in space.
  2. Creating a package that extracts the coordinates of the rectangle (and of its centroid) on the image where the object appears. This means getting the distances too.
  3. Controlling the robot's behaviour (mobile base and head) to meet the requirements we set (distance below a threshold) with a PID controller.
  4. Adding a simple obstacle-avoidance module, which can be further improved.

Now we will explain how we managed to develop each point.

Finding a simulation.

Packages: tiagosim

First of all, we had to find a ROS 2 simulation with TIAGo in which we could move the robot around in space. We found a good solution by looking at the TIAGo Iron docs for ROS 2, here. The simulation is a room containing different obstacles to train the robot.

Creating the computer vision and image processing module.

Packages: cv_basics, tiagosim, vision_opencv.

We wanted this assignment to work not with just one kind of item, but with anything that can be seen and recognised by the OpenCV library in Python. As a step forward in our development, we decided to use trained models for detecting people (which can be found on GitHub). Please remember that the model given as input to webcam_sub.py is interchangeable: any training-model .xml can work (see the loading sketch after the list below)!

We primarily used two different models:

  • Frontal Human Face ./src/cv_basics/cv_basics/haarcascade_frontalface_alt2.xml
  • Profile Human Face ./src/cv_basics/cv_basics/haarcascade_profile.xml
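
As a minimal illustration of how interchangeable the models are, loading the two cascades above with OpenCV could look like this (the variable names are ours, not necessarily those in webcam_sub.py):

import cv2

# Any Haar cascade .xml file can be substituted for these two paths
face_cascade_front = cv2.CascadeClassifier(
    './src/cv_basics/cv_basics/haarcascade_frontalface_alt2.xml')
face_cascade_profile = cv2.CascadeClassifier(
    './src/cv_basics/cv_basics/haarcascade_profile.xml')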

We wanted the possibility to switch between profile and frontal face detection while the robot is moving, which makes face recognition much smoother:

# Try frontal-face detection first; fall back to profile detection
faces = self.face_cascade_front.detectMultiScale(gray, 1.1, 4)
if len(faces) == 0:
    faces = self.face_cascade_profile.detectMultiScale(gray, 1.1, 4)

Once we have computed the centroid of the rectangle where the face is detected, we send a request to the depth finder, which collects the centroid, answers true, and later matches the centroid with the depth image. This is important because when the depth_finder.py node receives the depth information, it has to combine the image with the right depth (in our case the depth data arrives at a higher frequency than the centroid data, because of the image-processing overhead).
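
For illustration only, looking up the depth at the centroid could be sketched as follows. It assumes the depth image is aligned with the RGB image and is converted with cv_bridge; the function name is hypothetical:

from cv_bridge import CvBridge

bridge = CvBridge()

def depth_at_centroid(depth_msg, cx, cy):
    # Convert the ROS depth image to a float32 array of distances in metres
    depth = bridge.imgmsg_to_cv2(depth_msg, desired_encoding='32FC1')
    # Index as [row, column]: cy is the vertical pixel coordinate
    return float(depth[int(cy), int(cx)])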

Controlling the robot behaviour (PID control) with a simple idea of obstacle avoidance.

Packages: tiago_nodes, Obstacle_Avoidance_ROS2.

We want the robot to keep following a target. The best way to achieve this goal is to control the robot's behaviour through simple tasks. These tasks are:

  • Keeping the centroid of the face at the center of the image (Setpoint = 320 px).
  • Keeping a fixed distance between the robot and the human (Setpoint = 2 m).

We configured the PID controllers with the simple-pid Python library. Since we had parameters to configure and an important command (the desired velocity) to send to the robot, we thought it was a good idea to use a request-process-reply design pattern.

This becomes helpful when trying different approaches to obstacle avoidance, since it is an independent node whose code can be modified as the developer wishes. There are two PID controllers: one for the differential steering of the mobile base, and one for the distance, i.e. for the linear motion. We decided to use the following gains (Proportional, Integral and Derivative):

  • Linear velocity control: K_p = -1, K_i = 0, K_d = -2.
  • Angular velocity control: K_p = 0.004, K_i = 0, K_d = 0.0008.

All the values listed were tuned empirically.
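
As a minimal sketch (not the exact node code), the two controllers can be configured with simple-pid using the gains and setpoints above; the measurement values below are illustrative:

from simple_pid import PID

# Linear velocity: hold a 2 m distance between robot and target
linear_pid = PID(-1.0, 0.0, -2.0, setpoint=2.0)

# Angular velocity: hold the face centroid at the image center (320 px)
angular_pid = PID(0.004, 0.0, 0.0008, setpoint=320.0)

# Illustrative readings from the depth finder and the face detector
measured_distance_m = 2.5
centroid_x_px = 300.0

cmd_linear = linear_pid(measured_distance_m)   # forward velocity command
cmd_angular = angular_pid(centroid_x_px)       # angular velocity command

Since simple-pid computes the error as setpoint minus measurement, the negative linear gains produce a positive forward command when the robot is farther than 2 m from the target.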

UML Diagrams.

We decided to create two different diagrams, one structural and one behavioural. For the structural diagram we chose the component diagram, while for the behavioural one we used the activity diagram.

Component Diagram

Activity Diagram
