robots from tools to teammates (Phillips et al.
2011). There are four main research areas and
required capabilities: ( 1) OPTEMPO maneuvers1
in unstructured environments, including mobility in dynamic scenes and across rough terrain,
(2) human-robot execution of complex missions
requiring situation awareness of unstructured environments and distributed mission execution,
(3) mobile manipulation in cluttered spaces, and
(4) integrated research that combines and assesses
capabilities delivered from the other thrusts on
multiple robotic platforms. Although a number of
research interests are being addressed to advance
teaming, context plays an important role in all
of these areas. In particular, the subcategories of RCTA research that drive advances in context-driven AI include semantic perception, adaptive behavior generation, metacognition, machine learning, and a hybrid cognitive and metric world model.
Semantic perception moves robotic perception beyond simply detecting what is or is not an obstacle toward a semantic understanding of the environment similar to the way human team members would perceive or reason about it, for
example, by recognizing the types of objects and terrains of interest for a specific task, such as navigation
(Oh et al. 2015a; Oh et al. 2015b; Oh et al. 2016; Shiang
et al. 2017).
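As a rough illustration of how semantic labels can feed a downstream task such as navigation, the following sketch converts a per-cell semantic classification of a scene into traversal costs that a motion planner could consume. The class names, cost values, and grid representation are illustrative assumptions for this sketch, not the RCTA perception pipeline.

import numpy as np

# Illustrative traversal costs per semantic class (assumed values):
# lower cost means preferred terrain; np.inf marks the cell as an obstacle.
CLASS_COSTS = {
    "road": 1.0,
    "grass": 2.0,
    "gravel": 3.0,
    "vegetation": 8.0,
    "building": np.inf,
    "vehicle": np.inf,
}

def semantic_to_costmap(labels, class_costs=CLASS_COSTS, unknown_cost=5.0):
    """Map a 2-D grid of semantic labels to a navigation cost map.

    labels: 2-D array of strings naming the semantic class of each cell.
    Returns a float cost map a planner could consume directly.
    """
    cost = np.full(labels.shape, unknown_cost, dtype=float)
    for cls, c in class_costs.items():
        cost[labels == cls] = c
    return cost

if __name__ == "__main__":
    # Toy 3x4 labeled scene; a real system would obtain labels from perception.
    scene = np.array([
        ["road", "road", "grass", "building"],
        ["road", "gravel", "grass", "vegetation"],
        ["road", "road", "road", "grass"],
    ])
    print(semantic_to_costmap(scene))

In practice, such a class-to-cost mapping would itself be task- and context-dependent, for example penalizing rough terrain more heavily for one mission than another.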
Adaptive behavior generation combines previously developed robotic planning algorithms,
machine learning techniques, and semantic understanding of an environment within the context of
a high-level task. This enables robots to generate
effective mission plans in partially known and unstructured environments and to compute these plans
online whenever necessary by following natural language commands (Boularias et al. 2015; Boularias
et al. 2016; Paul et al. 2016; Paul et al. 2017; Paul
et al. 2018; Tucker et al. 2017) or navigating while
Figure 1. RCTA Multimodal Interface Visual Interface.
The display includes a semantic map (left), video from the robot’s perspective or other imagery data (top right), and the robot’s action and
health status (bottom right).