The Robotarium

Multi-robot testbeds are an integral part of multi-agent research, yet they are expensive to develop and operate. This makes them unaffordable for all but a select few researchers at well-endowed universities, slows the rate of progress of multi-agent research, and limits the number of educational multi-robot tools available to students. The goal of the Robotarium project is to develop a shared, remotely accessible multi-robot testbed that remedies these issues by enabling researchers to remotely access a state-of-the-art multi-robot test facility. The Robotarium hosts custom-designed miniature robots – the GRITSBots – which are tracked through an integrated overhead camera system, which also enables them to recharge autonomously. The Robotarium will ultimately facilitate swarm-robotic experiments through the following main features:

  • Large numbers of low-cost robots (on the order of hundreds).
  • Convenience features that simplify the maintenance of large collectives of robots (automatic registration of robots with the server, autonomous charging, wireless code upload to the robots, automatic sensor calibration).
  • An immersive user experience through a fully remotely accessible testbed with live video and data streaming and the option to add virtual robots to the testbed.
  • A public interface that allows users to schedule time on the testbed.

Current multi-agent robotics testbeds either carry a (prohibitively) high price tag or are highly specialized pieces of equipment that require skilled technicians to maintain a collective of robots. A typical multi-agent setup features wheeled robots (such as Khepera or e-Puck robots) and a tracking system that provides global position data to the robots (to close the feedback control loop). The Robotarium aims at replicating such a setup at a much lower cost and with significantly smaller robots, without limiting the capabilities of the individual robots.

Design Considerations

As a shared, remotely accessible, multi-robot facility, the Robotarium’s main purpose is to lower the barrier to entry into multi-agent robotics. Just as open-source software provides universal access to high-quality software, the Robotarium aims at providing universal access to state-of-the-art robotic infrastructure and enabling fast prototyping of multi-agent and swarm-robotic algorithms. The Robotarium was designed primarily as a research and educational tool that functions as a gateway to higher STEM education on the one hand and as an accessible and capable research instrument on the other. As such, it is conceivable that a pervasive robotic testbed like the Robotarium will have to exhibit some or all of the following high-level characteristics to fulfill its intended use effectively:

  • Simple and inexpensive replicability of the system – both the testbed itself and the robots it contains.
  • Intuitive interaction with, and data collection from, the testbed.
  • Tight and seamless integration of the simulation workflow for algorithm prototyping and code execution on the robots.
  • Minimization of both cost and maintenance effort while keeping the robots and testbed extensible.
  • Built-in safety and security measures to protect the system from damage and misuse.

An overview of our current instantiation of such a remote-access multi-robot testbed is shown in the figure below. Although it currently only offers 10 robots, this implementation already serves as a fully functional small-scale prototype.

Prototype Design

In this section, we outline the technical details of the current instantiation of the Robotarium. However, it should be noted that the Robotarium, by design, has to evolve over time in response to user needs in order to remain an effective research instrument rather than a static showcase. The current prototype is shown in the figure above and contains the testbed hardware, the tracking camera feedback loop, wireless communications, the coordinating server application, APIs, and simulation infrastructure. All of these components are fully functional; initial versions of the code verification and compilation modules are still under development. The following sections discuss the robot design as well as the testbed setup.

Robot Design

The Robotarium leverages the GRITSBot, a miniature robot that we introduced in [1]. The GRITSBot is an inexpensive differential-drive miniature robot featuring a modular design that allows hardware capabilities to be adapted easily to different tasks. A key feature that makes the GRITSBot the basis for the Robotarium is that it allows for a straightforward transition from typical multi-robot systems, because it closely resembles popular platforms (such as the Khepera robots) in capabilities and architecture. The robot’s main features include (i) high-resolution, high-accuracy locomotion through miniature stepper motors, (ii) range and bearing measurements through infrared distance sensing, (iii) global positioning through an overhead camera system, and (iv) communication with a global host through a wireless transceiver. This section briefly discusses these features and summarizes design changes compared to the first revision of the robot in [1].

Actuation

The actuation system of the GRITSBot is based on two miniature stepper motors, whose main advantage is the ease of acquiring accurate odometry information. Counting steps replaces the wheel encoders commonly used for odometry and obviates the need for motor speed estimation.
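
To make the odometry argument concrete, the minimal sketch below performs dead reckoning directly from raw step counts. The step resolution and wheel geometry are placeholder assumptions, not the GRITSBot’s actual specifications.

```python
import math

# Placeholder geometry -- the GRITSBot's actual step resolution and wheel
# dimensions are assumptions here, chosen only to make the example run.
STEPS_PER_REV = 2048      # motor steps per wheel revolution (hypothetical)
WHEEL_RADIUS = 0.005      # wheel radius in meters (hypothetical)
WHEEL_BASE = 0.03         # distance between the wheels in meters (hypothetical)

def update_odometry(x, y, theta, d_steps_left, d_steps_right):
    """Dead-reckoning pose update from raw step counts.

    Each step corresponds to a fixed wheel rotation, so the traveled
    distance follows directly from the count -- no encoders or motor
    speed estimation required.
    """
    dist_per_step = 2.0 * math.pi * WHEEL_RADIUS / STEPS_PER_REV
    d_left = d_steps_left * dist_per_step
    d_right = d_steps_right * dist_per_step
    d_center = 0.5 * (d_left + d_right)
    d_theta = (d_right - d_left) / WHEEL_BASE
    # Integrate along the arc, evaluated at the midpoint heading.
    x += d_center * math.cos(theta + 0.5 * d_theta)
    y += d_center * math.sin(theta + 0.5 * d_theta)
    return x, y, theta + d_theta
```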

Processing

The main processor on the GRITSBot is an ESP8266 microcontroller running at up to 160 MHz, which is fast enough to handle wireless communication, pose estimation, low-level control of the robot (including the nonlinear velocity and position controller), as well as high-level behaviors. A second microcontroller on the motor board – an ATmega88 – is responsible for motor control, i.e. ensuring the precise timing required to run the stepper motors at speeds of up to 8 rotations per second.

Communication

The main ESP8266 microcontroller doubles as a WiFi transceiver supporting the IEEE 802.11 b/g/n standards. Compared to the wireless transceivers used on the GRITSBot in [1], WiFi offers much higher bandwidth but comes at the cost of higher power consumption (on average 150 mA). To offset the reduced battery life, we have doubled the battery capacity, allowing approximately an hour of operation at full activity. The benefits of WiFi, however, far outweigh its increased power consumption: WiFi offers a reliable communication channel based on standard UDP sockets, and a single WiFi access point is able to service hundreds of clients.
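
As a concrete illustration of host-to-robot messaging over standard UDP sockets, the following sketch sends a wheel-velocity command to one robot. The JSON packet layout, IP address, and port are assumptions made purely for illustration; they are not the Robotarium’s actual wire protocol.

```python
import json
import socket

# Assumed address and packet layout -- the Robotarium's actual wire format
# is not documented here; JSON over UDP is used purely for illustration.
ROBOT_ADDRESS = ("192.168.1.42", 5005)

def send_velocity_command(sock, v_left, v_right):
    """Send one wheel-velocity command to a robot over a UDP socket."""
    packet = json.dumps({"cmd": "vel", "left": v_left, "right": v_right})
    sock.sendto(packet.encode("utf-8"), ROBOT_ADDRESS)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_velocity_command(sock, 2.0, 2.0)  # e.g. drive straight ahead
```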

Sensing

The current sensor board of the GRITSBot houses six infrared distance sensors with a range of up to 10 cm and can be equipped with a digital compass, a gyroscope, and an accelerometer. An additional battery voltage and current sensor is located on the main board. The robot’s modular architecture makes it easy to extend or change its capabilities by replacing the sensor board with a custom board or simply stacking a second sensor board on top.

The current revision of the robot is shown in the figure below – the 3D model of the robot, the actual hardware prototype, and the 3D-printed shell of the robot (from left to right).

Testbed

The design of the GRITSBot allows a single user to easily operate and maintain a large collective of robots through built-in features such as (i) automatic sensor calibration, (ii) automatic battery charging, (iii) wireless (re)programming, and (iv) automated registration of the robots with the overhead tracking system after power-up.

Camera System

The overhead tracking system is based on standard webcams (currently Microsoft LifeCam Studio HD cameras). The video stream is fed into a tag-tracking algorithm based on ArUco, a minimal library for augmented reality applications built on OpenCV. This library tracks the robots in three dimensions and recovers the homogeneous transformation from the camera to each robot’s coordinate system. Update rates of up to 30 Hz are high enough to run swarm algorithms on a team of robots as well as to remotely control the position and/or velocity of individual robots.
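
A minimal version of such a tracking loop can be written against OpenCV’s aruco module (the classic cv2.aruco API found in OpenCV contrib builds prior to 4.7) as a stand-in for the ArUco library itself. The tag dictionary, marker size, and camera intrinsics below are placeholder assumptions rather than the Robotarium’s actual calibration.

```python
import cv2
import numpy as np

# Placeholder tag dictionary, marker size, and camera intrinsics -- the
# Robotarium's actual calibration values are not reproduced here.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
camera_matrix = np.array([[800.0,   0.0, 640.0],
                          [  0.0, 800.0, 360.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)
MARKER_SIZE = 0.03  # tag edge length in meters (assumed)

cap = cv2.VideoCapture(0)  # overhead webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is not None:
        # rvecs/tvecs encode the transform from the camera frame to each
        # tag, i.e. each robot's pose in three dimensions.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, MARKER_SIZE, camera_matrix, dist_coeffs)
        for robot_id, tvec in zip(ids.flatten(), tvecs):
            print(f"robot {robot_id}: position {tvec.ravel()}")
```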

Recharging

Arguably the most crucial component of a self-sustaining and maintenance-free testbed is an automatic recharging mechanism for the robots. The GRITSBot has been designed for autonomous recharging through two extending magnetic prongs that can connect to magnetic charging strips built into the arena walls. This setup, together with global position control through the camera feedback loop, allows the GRITSBot to autonomously recharge its battery (see figure below).
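
Conceptually, the docking behavior amounts to a go-to-goal controller driven by the camera’s pose estimates. The sketch below shows one plausible proportional law producing unicycle commands (v, ω); the gains and interfaces are illustrative assumptions, not the Robotarium’s actual docking controller.

```python
import numpy as np

def docking_controller(pose, dock, k_v=0.3, k_w=1.5):
    """Drive toward a docking point on the arena wall.

    pose -- (x, y, theta) from the overhead tracking system
    dock -- (x, y) of the charging strip contact point
    Returns unicycle commands (v, omega); the gains are illustrative.
    """
    dx, dy = dock[0] - pose[0], dock[1] - pose[1]
    distance = np.hypot(dx, dy)
    heading_error = np.arctan2(dy, dx) - pose[2]
    # Wrap the heading error into [-pi, pi].
    heading_error = (heading_error + np.pi) % (2.0 * np.pi) - np.pi
    return k_v * distance, k_w * heading_error

v, omega = docking_controller((0.2, 0.1, 0.0), (0.5, 0.0))
```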

Sensor Calibration

Since the IR distance sensors of the robot measure voltages rather than distances directly, we have to establish a mapping between measured voltage and distance. Variations in sensor quality require each robot to be calibrated separately, and possibly repeatedly throughout its lifetime. We have therefore developed a calibration tool that provides an automated mechanism for calibrating the robots’ IR sensors. A detailed description of the automated calibration feature can be found in [1].
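
One plausible way to realize such a voltage-to-distance mapping is to fit a power law to voltage/distance pairs recorded during calibration. The data and model below are illustrative assumptions, not the actual procedure from [1].

```python
import numpy as np

# Hypothetical calibration data: ADC voltages recorded while a target is
# held at known distances (values invented for illustration).
distances_cm = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
voltages = np.array([2.8, 1.6, 1.1, 0.85, 0.7])

# IR intensity falls off roughly as a power of distance, so fit
# d = a * V**b by linear least squares in log-log space.
b, log_a = np.polyfit(np.log(voltages), np.log(distances_cm), 1)
a = np.exp(log_a)

def voltage_to_distance(v):
    """Map a measured sensor voltage to an estimated distance in cm."""
    return a * v ** b

print(voltage_to_distance(1.2))  # roughly 5-6 cm with the data above
```

Since sensor quality varies from unit to unit, the fit would be repeated per sensor and per robot, with the resulting coefficients stored on each robot.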

Wireless Reprogramming

The main ESP8266 microcontroller supports over-the-air (OTA) programming, which enables wireless reprogramming of individual robots, groups of robots, or even the whole swarm in a broadcast fashion. It is even possible for one robot to reprogram another, which opens up an array of research challenges in the areas of wireless security as well as evolutionary and collaborative robotics.

Simulation Interface

In addition to the hardware framework outlined above, we have developed a simulator for the Robotarium that enables more rapid prototyping than hardware alone. Simulator code can additionally be used for real hardware experiments, reducing the turnaround time for testing a prospective algorithm. More specifically, the simulator incorporates the following:

  • A communication framework that models network congestion and bit rate errors. In addition, the simulation provides a flexible format for specifying packets.
  • Sensing, including the six IR sensors as well as battery voltage and current levels.
  • Customizable global computer behavior for investigating the interaction of slow, global information with fast, local information.
  • Customizable robot behavior including a low level controller, an obstacle avoidance controller, a neighborhood manager, and a state estimator. Users are not limited to a homogeneous implementation.
  • A web-based graphical user interface that uses WebGL. A screenshot of the simulator running a rendezvous controller with 100 robots is shown in the figure below.
  • Logging for every robot at every timestep of a simulation returned in a simple data structure for analysis.

The implementation emphasizes usability by using a high-level language (Python) and incorporating a modular framework for specifying robot and global computer behavior. While usability is paramount in the simulation, reasonable levels of fidelity have been achieved without sacrificing runtime by writing a large portion of the simulation in C/C++. In particular, the physics model incorporates simple collisions and commanded wheel velocities, the communication model incorporates network congestion and bit rate errors, and actuators are subject to failure as power levels decrease. Thus, users can investigate, for example, the effects of translating a unicycle model to a differential-drive robot, the effects of scale on communication capabilities, or the effect of robots dropping out of a formation due to power constraints.
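
As an example of the first investigation, the standard mapping from unicycle commands (forward velocity v and angular velocity ω) to left and right wheel speeds of a differential-drive robot is shown below. The wheel radius and wheel base are left as parameters, since the GRITSBot’s exact dimensions are not given here.

```python
def unicycle_to_wheel_speeds(v, omega, wheel_radius, wheel_base):
    """Map unicycle commands to differential-drive wheel speeds.

    v      -- forward velocity (m/s)
    omega  -- angular velocity (rad/s)
    Returns (omega_left, omega_right) wheel angular velocities in rad/s.
    """
    omega_right = (2.0 * v + omega * wheel_base) / (2.0 * wheel_radius)
    omega_left = (2.0 * v - omega * wheel_base) / (2.0 * wheel_radius)
    return omega_left, omega_right

# Example: 0.05 m/s forward while turning at 1 rad/s, with hypothetical
# GRITSBot-scale geometry (5 mm wheel radius, 30 mm wheel base).
print(unicycle_to_wheel_speeds(0.05, 1.0, 0.005, 0.03))
```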

Code written for the simulator can be used directly to conduct an experiment on the actual hardware. In the current implementation, we achieve this through a Python API that queries the simulation’s robot controller for commanded wheel velocities and sends them to the robots. In the next iteration, user code will be verified, cross-compiled, and uploaded to the robots for execution. Users will be able to write their algorithms in a high-level language, verify them in an easy-to-use simulation, and submit them for actual hardware experiments in a tightly integrated environment.

Experiments

The following videos show experiments conducted on the Robotarium with a team of six robots.

Related Publications