For carnival 2019, I upgraded my R2D2 with a Raspberry Pi 3B+, a 2D Lidar, and a camera. The goal was a fully autonomous R2D2 that follows me while avoiding obstacles on its path.
Overview
Material
- RPLidar A2M8
- Microsoft Lifecam HD 3000 & wide-angle lens add-on for smartphones
- Raspberry Pi 3B+ & Cooling case
Setup
Everything runs on ROS (Robot Operating System). ROS is a framework for languages like C++ and Python which provides an easy way of communicating between different tasks (for example the joystick listener and the motor driver). Because every task (in ROS, a node) is modular and communication between nodes happens on public broadcast channels (called topics), ROS is a perfect choice for working with robotic platforms. For a great tutorial to get started with ROS (in C++), have a look at the ETH Zurich course "Introduction to ROS" on YouTube.
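To give a flavour of what that looks like in code, here is a minimal rospy node that publishes on a topic and listens to it at the same time; the node and topic names are made up for illustration and are not the ones running on R2D2.

```python
#!/usr/bin/env python
# Minimal ROS node: one publisher and one subscriber on a topic.
# Node and topic names are illustrative, not the ones on R2D2.
import rospy
from std_msgs.msg import String

def callback(msg):
    rospy.loginfo('heard: %s', msg.data)

rospy.init_node('demo_node')
pub = rospy.Publisher('chatter', String, queue_size=10)
rospy.Subscriber('chatter', String, callback)

rate = rospy.Rate(1)  # publish at 1 Hz
while not rospy.is_shutdown():
    pub.publish(String(data='beep boop'))
    rate.sleep()
```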
Modes
R2 supports three modes: JOYSTICK, FOLLOW_THE_LEADER and DRIVE_AROUND.
JOYSTICK mode
In JOYSTICK mode, R2D2 listens to the light saber's thumbstick state and moves according to its position. It basically follows the same behavior as in 2015, this time implemented with ROS on a Raspberry Pi instead of an Arduino.
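As a sketch, such a node can be as small as the following; the axis indices, velocity limits, and topic names are assumptions rather than R2D2's actual configuration.

```python
#!/usr/bin/env python
# Sketch of the JOYSTICK mode: map thumbstick axes to a velocity
# command for the differential drive. Axis mapping, limits and
# topic names are assumptions.
import rospy
from sensor_msgs.msg import Joy
from geometry_msgs.msg import Twist

MAX_LINEAR = 0.8   # m/s, assumed speed limit
MAX_ANGULAR = 2.0  # rad/s, assumed turn rate limit

def joy_callback(msg):
    cmd = Twist()
    cmd.linear.x = MAX_LINEAR * msg.axes[1]    # stick up/down
    cmd.angular.z = MAX_ANGULAR * msg.axes[0]  # stick left/right
    cmd_pub.publish(cmd)

rospy.init_node('joystick_mode')
cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
rospy.Subscriber('joy', Joy, joy_callback)
rospy.spin()
```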
DRIVE_AROUND mode
In DRIVE_AROUND mode, R2D2 tries to avoid obstacles using its 2D Lidar. This is done by creating three virtual curves stitched together, starting with radius ∞ (straight lines). If the path is blocked, the algorithm bends the curves with different radii until it finds a curve which does not intersect with any detected obstacle. The reason for modelling the path with curves is that R2 uses a differential drive, i.e. two separately controlled motors. Its motion can be decomposed into a forward velocity v and an angular velocity ω, which together trace a circular arc of radius r = v/ω in the 2D plane.
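The following sketch illustrates the idea, simplified to a single arc instead of the three joined curves described above: candidate arcs (v, ω) are checked against the Lidar points, and the first collision-free one is driven. The velocities, thresholds, candidate list, and topic names are all assumptions.

```python
#!/usr/bin/env python
# Sketch of the DRIVE_AROUND idea, simplified to a single arc: try
# arcs of increasing curvature until one clears all Lidar points.
# Velocities, thresholds and topic names are assumptions.
import math
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

ROBOT_RADIUS = 0.25  # m, assumed clearance around R2
LOOKAHEAD = 1.5      # m of arc length checked for collisions
V = 0.4              # constant forward velocity in m/s

def scan_to_points(scan):
    # Convert polar Lidar ranges into Cartesian (x, y) points.
    pts = []
    angle = scan.angle_min
    for r in scan.ranges:
        if scan.range_min < r < scan.range_max:
            pts.append((r * math.cos(angle), r * math.sin(angle)))
        angle += scan.angle_increment
    return pts

def arc_is_free(omega, pts):
    # Sample positions along the arc spanned by (V, omega) and check
    # that every obstacle point keeps a safe distance.
    for i in range(1, 21):
        s = LOOKAHEAD * i / 20.0      # arc length travelled so far
        if abs(omega) < 1e-6:         # radius = infinity: straight line
            x, y = s, 0.0
        else:
            R = V / omega             # signed turning radius r = v / omega
            x = R * math.sin(s / R)
            y = R * (1.0 - math.cos(s / R))
        if any(math.hypot(px - x, py - y) < ROBOT_RADIUS for px, py in pts):
            return False
    return True

def scan_callback(scan):
    pts = scan_to_points(scan)
    # Try straight ahead first, then bend more and more to both sides.
    for omega in (0.0, 0.3, -0.3, 0.6, -0.6, 1.0, -1.0):
        if arc_is_free(omega, pts):
            cmd = Twist()
            cmd.linear.x = V
            cmd.angular.z = omega
            cmd_pub.publish(cmd)
            return
    cmd_pub.publish(Twist())          # no free arc found: stop

rospy.init_node('drive_around')
cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
rospy.Subscriber('scan', LaserScan, scan_callback)
rospy.spin()
```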
The following video shows DRIVE_AROUND mode. Note that the white dots correspond to the currently used obstacle-free path, the red dots to the Lidar's detections, and the bright grey squares to the obstacle-free 2D map (yes, it also runs SLAM, explained later).
FOLLOW_THE_LEADER mode
While in FOLLOW_THE_LEADER mode, the camera is used to detect a known infrared LED pattern with a blob detector and to estimate its pose relative to the (calibrated) camera using the P3P algorithm. For this, I am using a great library by the ETH Zurich/University of Zurich Robotics and Perception Group (RPG), the Monocular Pose Estimator, which comes with a nice ROS wrapper. As soon as the pattern is detected, R2 tries to find a collision-free path to the leader (me) and stops 1.5 m before reaching him.
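As a sketch, the goal computation could look like the following; the topic name, the message type, and the planar frame convention (x forward, y left) are assumptions on my part, so check the wrapper's documentation for the actual interface.

```python
#!/usr/bin/env python
# Sketch: turn the leader's estimated pose into a goal point that
# stops 1.5 m short of the leader. The topic name, the message type
# and the planar frame convention (x forward, y left) are
# assumptions, not the actual interface of the pose estimator.
import math
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped, PointStamped

STOP_DISTANCE = 1.5  # m, from the description above

def pose_callback(msg):
    p = msg.pose.pose.position        # leader position, planar frame assumed
    d = math.hypot(p.x, p.y)
    if d <= STOP_DISTANCE:
        return                        # already close enough, keep still
    scale = (d - STOP_DISTANCE) / d   # shrink the vector towards the leader
    goal = PointStamped()
    goal.header = msg.header
    goal.point.x = p.x * scale
    goal.point.y = p.y * scale
    goal_pub.publish(goal)

rospy.init_node('follow_the_leader')
goal_pub = rospy.Publisher('leader_goal', PointStamped, queue_size=1)
rospy.Subscriber('estimated_pose', PoseWithCovarianceStamped, pose_callback)
rospy.spin()
```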
Often, the pattern is no longer recognized, since the chosen path may at some point face away from the target position. To overcome this problem and still reach the leader, a SLAM algorithm (hector_slam) runs in the background. Every time the leader is recognized, a target point is added to the global map (and the global map is reset regularly, due to the non-static environment at carnival events).
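A hedged sketch of anchoring that target point in the SLAM map via tf, so it remains valid while the pattern is out of view; the frame name 'map' and both topic names are assumptions that depend on the hector_slam and camera setup.

```python
#!/usr/bin/env python
# Sketch: re-express the camera-frame goal in the SLAM map frame so
# it stays valid while the LED pattern is out of view. The frame
# name 'map' and both topic names are assumptions that depend on the
# hector_slam and camera setup.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers transforms for PointStamped
from geometry_msgs.msg import PointStamped

def goal_callback(goal):
    try:
        # Transform the goal from its sensor frame into the global map.
        goal_in_map = tf_buffer.transform(goal, 'map', rospy.Duration(0.1))
        map_goal_pub.publish(goal_in_map)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        rospy.logwarn_throttle(5.0, 'map transform not available yet')

rospy.init_node('leader_goal_to_map')
tf_buffer = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(tf_buffer)  # keep a reference alive
map_goal_pub = rospy.Publisher('leader_goal_map', PointStamped, queue_size=1)
rospy.Subscriber('leader_goal', PointStamped, goal_callback)
rospy.spin()
```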