Introduction

BaleBot coordinates multiple TurtleBot2 robots to function as a single entity. It achieves this by maintaining a rigid body structure while tracking a trajectory from an initial state to a final state. This is an interesting problem because the system is nonlinear and nonholonomic: dynamics must be taken into account during planning, and control loops must be synchronized across all agents. The main application for this project would be warehouse robotics, where it could enable several smaller robots to move large objects by working collaboratively.

Design

The project was broken into three distinct subproblems: sensing, planning, and control. Sensing had to produce reasonable state estimates of all robots with respect to the goal position, planning had to generate a valid trajectory from the starting point to the goal, and control had to track the trajectory point by point.

To make state estimation as accurate as possible, we utilize an external camera with AR tags to determine the position and orientation of each robot. Although this provides a direct measurement of the state, it limits the workspace of the system to the camera's field of view and is prone to high-frequency noise as the robots move farther from the camera.

Planning had to take the system model into consideration when generating a path. The primary requirement was that the derivative of the trajectory be continuous, which allows nonholonomic systems to track it without instantaneous rotation or lateral motion. An algorithm presented by Professor Alessandro De Luca of Sapienza University of Rome was used for its simplicity and flexibility.
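
To make the planning step concrete, the sketch below shows cubic Cartesian polynomial interpolation of the kind presented in De Luca's lecture notes, which we assume is the algorithm referred to above. The tangent-scaling parameter k and the uniform sampling are illustrative choices, not the project's tuned values.

    import numpy as np

    def cubic_cartesian_path(q_i, q_f, k=1.0, n_samples=50):
        """Plan a C1 path from pose q_i = (x, y, theta) to q_f using cubic
        Cartesian polynomials in a normalized parameter s in [0, 1]."""
        x_i, y_i, th_i = q_i
        x_f, y_f, th_f = q_f

        # Boundary conditions on the path tangent fix the four coefficients;
        # k scales the tangent magnitude at both endpoints (a tuning knob).
        ax = k * np.cos(th_f) - 3.0 * x_f
        bx = k * np.cos(th_i) + 3.0 * x_i
        ay = k * np.sin(th_f) - 3.0 * y_f
        by = k * np.sin(th_i) + 3.0 * y_i

        s = np.linspace(0.0, 1.0, n_samples)
        x = (s**3 * x_f - (s - 1.0)**3 * x_i
             + ax * s**2 * (s - 1.0) + bx * s * (s - 1.0)**2)
        y = (s**3 * y_f - (s - 1.0)**3 * y_i
             + ay * s**2 * (s - 1.0) + by * s * (s - 1.0)**2)

        # The path tangent gives the heading a unicycle must hold at each
        # sample, so theta is continuous by construction.
        dx = (3.0 * s**2 * x_f - 3.0 * (s - 1.0)**2 * x_i
              + ax * (3.0 * s**2 - 2.0 * s) + bx * (3.0 * s**2 - 4.0 * s + 1.0))
        dy = (3.0 * s**2 * y_f - 3.0 * (s - 1.0)**2 * y_i
              + ay * (3.0 * s**2 - 2.0 * s) + by * (3.0 * s**2 - 4.0 * s + 1.0))
        theta = np.arctan2(dy, dx)

        return np.stack([x, y, theta], axis=1)  # one (x, y, theta) waypoint per row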


Control had to drive the system to the next state while maintaining its rigid body structure. To achieve this, we designed a hierarchical structure consisting of a single global controller and multiple local controllers. The global controller generates control signals for the global robot, which are then fed as inputs to the local controllers. Each local controller uses these to determine a feedforward signal, combined with a proportional controller, to derive the control signals for its robot.
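
As a sketch of the feedforward step under the configurations we support (local robots rigidly mounted at a lateral offset d along the global robot's axle), a global unicycle command (v, ω) maps to each local robot as shown below. The proportional gains are placeholders, not the tuned values.

    def local_feedforward(v, w, d):
        """Rigid body kinematics: a robot mounted at lateral offset d
        (positive to the left of the global robot) shares the turn rate w,
        while the rotation speeds up or slows down its forward motion."""
        return v - w * d, w

    def local_control(v, w, d, e_x, e_y, e_th, kp_x=0.5, kp_y=2.0, kp_th=1.0):
        """Feedforward plus a proportional correction of the local state
        error (e_x, e_y, e_th), expressed in the local robot's frame.
        Gains here are illustrative."""
        v_ff, w_ff = local_feedforward(v, w, d)
        v_cmd = v_ff + kp_x * e_x                 # speed up if lagging behind
        w_cmd = w_ff + kp_y * e_y + kp_th * e_th  # steer back into formation
        return v_cmd, w_cmd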


The major tradeoff in the design was maintaining the rigid body structure at the control level rather than the planning level. Although this allows the system to maintain its structure very accurately, it limits the robot configurations to those in which the global robot can still be described by a unicycle model. The alternative would be to generate a path for the global robot and then translate it for each local robot, permitting arbitrary robot configurations at the cost of a less reliable rigid body structure.

Implementation

The project was implemented in ROS Kinetic and tested with three TurtleBot2 robots. The only external packages used were ar_track_alvar for AR tracking and usb_cam as a driver for the Logitech C922x camera.


To make communication simple, we created a custom message type defined in msg/State.msg. It consists of a float tuple (x, y, θ) that fully describes the state space.
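
A plausible sketch of the message definition, assuming float64 fields:

    # msg/State.msg -- planar pose used by all nodes
    float64 x      # position along the goal frame's x-axis (m)
    float64 y      # position along the goal frame's y-axis (m)
    float64 theta  # heading relative to the goal frame (rad)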

src/frame_publisher.py is a node that updates the goal frame in case it is obstructed from view.

src/state_observer.py determines the state estimate of all robots with respect to the goal frame. It uses TF transformations and applies a rolling average before converting the result to a State message.

src/path_planner.py takes the initial state of the global robot and calculates a trajectory with respect to the goal frame. It then samples this path into State waypoints for the control node to follow, continuously monitoring the state of the global robot to decide when to publish the next waypoint.

src/motion_controller.py runs the global and local controllers. The global controller uses a proportional-integral law to derive control signals from the next waypoint published by the planner and the current state of the global robot. Each local controller interprets these commands as a feedforward term for a proportional controller that corrects the error between where the local robot currently is and where it should be with respect to the global robot.
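
The observer's inner loop might look like the sketch below: a TF lookup of each robot relative to the goal frame, smoothed with a fixed-length rolling average. The frame names and window size are illustrative, not the values used in the project.

    import collections
    import numpy as np
    import rospy
    import tf

    history = collections.deque(maxlen=10)  # rolling window; size is a tuning choice

    def smoothed_state(listener, history, goal_frame='goal', robot_frame='ar_marker_0'):
        """Look up the robot's pose in the goal frame via TF and return a
        rolling-average (x, y, theta) estimate."""
        trans, rot = listener.lookupTransform(goal_frame, robot_frame, rospy.Time(0))
        _, _, theta = tf.transformations.euler_from_quaternion(rot)
        history.append((trans[0], trans[1], theta))

        arr = np.asarray(history)
        x_avg, y_avg = arr[:, 0].mean(), arr[:, 1].mean()
        # Average the heading on the unit circle to stay wrap-safe near +/- pi.
        th_avg = np.arctan2(np.sin(arr[:, 2]).mean(), np.cos(arr[:, 2]).mean())
        return x_avg, y_avg, th_avg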

There are also several launch files to simplify execution. launch/rviz.launch starts RViz along with the AR tracker and camera driver. launch/robot.launch is used to run the system on hardware, while launch/simulator.launch runs it in simulation. These files also contain settings such as the global robot configuration, AR marker IDs, and command topics.

Results

Overall, the project met our most important design criterion: maintaining a rigid body structure while in motion. The system was able to move between states in an obstacle-free environment provided all robots remained in the camera's field of view.

The global controller was relatively robust provided it did not miss a waypoint in the path, which was prevented by appropriately selecting the distance between waypoints and the curvature of the trajectory. The system was able to reach its targets within 10 cm of translational error and 5° of rotational error.

The local controllers were robust to disturbances in initial conditions as well as deviations from the global robot. They maintained their states within 5 cm of translational error and 5° of rotational error.

Improvements

Currently, robot configurations are limited: every configuration must have a robot located at the origin of the global robot, and the local robots can only be placed along the horizontal axis. For arbitrary configurations to be possible, the state of the global frame must be estimated with reasonable accuracy using only the state estimates of the local frames. To place robots away from the horizontal axis, the controller would also have to be redesigned, since the overall system changes from a unicycle-model robot to a bicycle-model robot.

Another addition would be the use of onboard sensors to improve state estimation and enable obstacle avoidance. State estimation is currently done through an external camera and AR tags in key locations. Combining this with wheel odometry through a complementary filter would reduce sensor noise and yield a better state estimate. The TurtleBot2 platform also has a depth camera, allowing the robots to perform SLAM; this could be used to detect nearby obstacles and recompute the trajectory when it becomes infeasible.
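
A minimal sketch of the proposed fusion step, with the blend factor alpha as a hypothetical tuning parameter: odometry is trusted at high frequency, while the camera's absolute fix corrects drift at low frequency.

    import numpy as np

    def wrap(a):
        """Wrap an angle to (-pi, pi]."""
        return np.arctan2(np.sin(a), np.cos(a))

    def complementary_filter(est_prev, odom_delta, camera_meas, alpha=0.95):
        """One fusion step: propagate the previous (x, y, theta) estimate with
        the odometry increment, then pull it toward the camera measurement.
        alpha near 1 trusts odometry (smooth but drifting); alpha near 0
        trusts the camera (absolute but noisy)."""
        pred = np.asarray(est_prev, dtype=float) + np.asarray(odom_delta, dtype=float)
        meas = np.asarray(camera_meas, dtype=float)
        fused = alpha * pred + (1.0 - alpha) * meas
        # Blend the heading on the circle to avoid wraparound artifacts.
        fused[2] = wrap(pred[2] + (1.0 - alpha) * wrap(meas[2] - pred[2]))
        return fused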

Team