ing the latest advances in artificial intelligence and
robot planning. The goal of ARIAC is to solidify the
field of robot agility, while also progressing the state
of the art.
The organizers define industrial robot agility as the ability of a robotic system (robot, controller, and sensors) to respond to a dynamic environment. This dynamic response includes handling errors, such as dropped parts, and responding to changes in orders, all without operator intervention.
The competition addresses the aspect of robot
agility that focuses on software, including knowledge
representation, planning, and decision-making.
While hardware aspects (such as different types of
grippers) can play a large role in agility, they are not
the focus of this competition. Perception and grasping have played a minimal role in the first year of the
competition, but are expected to increase in importance in future years.
The competition was held completely in simulation using the Gazebo robot simulator. Gazebo is an open-source, Linux-based simulation environment that works closely with the Robot Operating System (ROS). Gazebo was chosen because it is commonly used in academia. Additionally, it is free, so no mandatory monetary investment is required of teams.
Teams competed by submitting robot control code
and a sensor configuration for a kitting operation, as
shown in figure 1. The organizers chose kitting
because of its similarity to assembly. Unlike assem-
bly, however, kitting does not require a high-fidelity
physics engine. Teams were tasked with assembling a
kit both from bins of stationary parts and from a
moving conveyor. After the robotic system finished
the kit, the kit was placed on an autonomous guided
vehicle (AGV) and taken away.
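The bins-then-conveyor flow described above can be sketched in a few lines. This is purely illustrative: the function and variable names are invented for this example and do not correspond to the actual ARIAC interfaces, which teams accessed through ROS.

```python
# Hypothetical sketch of the kitting task flow: draw each ordered part
# from the stationary bins first, fall back to the moving conveyor,
# then hand the finished kit to the AGV. Names are illustrative only.

def build_kit(order, bins, conveyor):
    """Fill a kit for `order`, a list of part types.

    `bins` maps part type -> count of stationary parts available;
    `conveyor` is a list of part types currently reachable on the belt.
    """
    kit = []
    for part in order:
        if bins.get(part, 0) > 0:      # prefer stationary bins
            bins[part] -= 1
            kit.append(part)
        elif part in conveyor:         # otherwise pick from the conveyor
            conveyor.remove(part)
            kit.append(part)
    return kit

def dispatch_to_agv(kit, order):
    """Place the finished kit on the AGV, reporting completeness."""
    return {"complete": sorted(kit) == sorted(order), "kit": kit}
```

A real entry would, of course, replace these list operations with perception, motion planning, and error recovery; the sketch only mirrors the task structure the text describes.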
Teams faced challenges such as forced part drops and in-process order changes. Each team's system had to address these challenges and attempt to finish the kit autonomously in real time.
The scoring metrics used in the competition were
based partially on the robot agility metrics developed
by NIST (Downs, Harrison, and Schlenoff 2016).
Competition scoring took into account whether the
kit was completed (both quantitative and qualitative
metrics), how fast the kit was completed, and the cost
of the sensor configuration. Each team’s system was
given a cost based on the number and type of sensors
used. Typically, sensors that provided more information were priced higher than those that provided less. The total cost of each team's configuration factored into its score, with cheaper configurations yielding higher scores.
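The three scoring factors named above (kit completion, completion time, and sensor cost) can be combined as in the following sketch. The sensor prices, weights, and the linear combination are all assumptions made for illustration; they are not the official ARIAC values or formula.

```python
# Illustrative scoring sketch. The prices and weights below are
# invented for this example, not the official ARIAC parameters.

SENSOR_PRICES = {"camera": 500, "laser_scanner": 300, "break_beam": 100}

def sensor_cost(config):
    """Total cost of a sensor configuration (a list of sensor types)."""
    return sum(SENSOR_PRICES[s] for s in config)

def score(kit_points, completion_time, config,
          time_weight=1.0, cost_weight=0.01):
    """Higher kit points, faster completion, and a cheaper sensor
    configuration all raise the score."""
    return (kit_points
            - time_weight * completion_time
            - cost_weight * sensor_cost(config))
```

Under this toy model, two teams with identical kits and times are separated only by sensor cost, which matches the incentive the text describes.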
The competition was preceded by three qualification phases. To compete in the competition, each
Figure 1: Example of the Simulation Environment.