(sometimes referred to as an autonomous agent) that
exhibits deliberative behaviors (Ingrand and Ghallab
2014). More specifically, this means that at a specified
level of abstraction determined by system requirements,
the vehicle is able to behave with goals in mind.
Generally speaking, goal-based rationality requirements
can be classified along a spectrum from the purely reactive (the system is able to respond in real time to avoid
hazards) to the purely deliberative (the system is able to
choose mission goals and perform actions that accomplish these goals). Between these extremes is a
collection of tactical capabilities that allow the system
to observe and act on the world in ways that contribute
to accomplishing goals or remaining safe.
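As a rough illustration of this spectrum (a hypothetical sketch, not drawn from any particular system; the function names such as hazard_detected and plan_actions are assumptions for the example), the following Python fragment shows a single control loop in which a reactive safety check runs every cycle while the slower deliberative planner is consulted only when the current plan is exhausted.

```python
import time

def hazard_detected(sensors):
    """Reactive check: for example, an obstacle closer than a safety threshold."""
    return sensors.get("min_obstacle_distance", float("inf")) < 2.0

def avoid_hazard(sensors):
    """Immediate evasive command, computed without any planning."""
    return {"command": "brake_and_turn"}

def plan_actions(goal, sensors):
    """Deliberative step: produce a queue of actions toward the mission goal."""
    return [{"command": "waypoint", "target": goal}]

def control_loop(goal, read_sensors, actuate, cycle_s=0.05):
    plan = []
    while True:
        sensors = read_sensors()
        if hazard_detected(sensors):       # reactive end of the spectrum
            actuate(avoid_hazard(sensors))
            plan = []                      # force replanning once the hazard is cleared
        else:
            if not plan:                   # deliberative end of the spectrum
                plan = plan_actions(goal, sensors)
            actuate(plan.pop(0))
        time.sleep(cycle_s)
```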
Cognitive capabilities for autonomy include mission
and trajectory planning, observing events and features
of the world relevant to accomplishing goals, learning
and adapting to changes to the operational environment, and refining planned actions into triggers of
motor forces (Ingrand and Ghallab 2014). Autonomy
capabilities are implemented by hardware and software
that interface with a machine platform. Autonomy
software implements AI models and algorithms for
predicting, understanding, and acting on the world.
Conceptual design for autonomy requires discussion
of how the cognitive components of autonomy fit
together into an integrated system. In this discussion
we distinguish between a vehicle architecture and a
broader notion of an operational architecture. The
components of an operational architecture include
the vehicle components and organization but also
may include remote processors that reside on other
vehicles or a ground system, as well as the components that enable human inputs to the system (figure
1). For aeronautical applications especially, vehicle
architectures derive from operational architectures,
and therefore the latter take precedence in design.
The primacy of operational architecture is evident
in any reasonable definition of classes of autonomy,
such as the one in the paper by Shladover (2016) for
self-driving automobiles. Even at the highest (level 5)
class of autonomy (the car behaves like a true chauffeur, “retain[ing] full vehicle control, need[ing] no human backup and driv[ing] in all conditions”), the
human is still present and is controlling operations,
such as by setting navigation goals.
In the following, we distinguish among three issues: defining the structure and style of an architecture, determining the distribution of capabilities, and
designing for human-machine coordination.
Architecture Structure and Style
It has often been claimed that autonomous systems
consist of layers (Alami et al. 1998; Coste-Maniere and
Simmons 2000; Bayouth, Nourbakhsh, and Thorpe
1997). This means that high-level functionality can be
recursively broken down into functionally simpler
subsystems. Many architectures distinguish among
three tiers: a deliberative layer, an executive layer, and a
control layer (the terms used for these layers vary).
Figure 2 shows the placement of these generic layers, as
well as the hardware layer for the controlled vehicle. The
deliberative layer is responsible for mission planning,
including planning path trajectories and other actions.
It is also responsible for building and maintaining high-
level models of the world and the vehicle. The executive
layer acts as a bridge between the deliberative layer and
low-level control. Some of its capabilities include acti-
vating low-level behaviors, failure recognition, and
triggering replanning activities. Finally, the control layer
comprises the set of low-level behaviors and controllers
that directly interface with hardware.
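The following Python sketch makes the division of responsibilities among the three generic tiers concrete. The class and method names are illustrative assumptions for this article, not a reference design, and the hardware and world-model objects are placeholders.

```python
class ControlLayer:
    """Low-level behaviors and controllers that interface with hardware."""
    def execute(self, behavior, hardware):
        hardware.send(behavior)            # e.g., motor or actuator commands

class DeliberativeLayer:
    """Mission and trajectory planning over high-level world and vehicle models."""
    def plan(self, mission_goal, world_model):
        return [f"go_to:{wp}" for wp in world_model.route_to(mission_goal)]

    def replan(self, vehicle_state):
        return [f"recover_from:{vehicle_state}"]

class ExecutiveLayer:
    """Bridge between deliberation and control: activates behaviors,
    recognizes failures, and triggers replanning."""
    def __init__(self, control, deliberative):
        self.control = control
        self.deliberative = deliberative

    def run_plan(self, plan, hardware):
        for step in plan:
            self.control.execute(step, hardware)
            if hardware.failed():          # failure recognition
                return self.deliberative.replan(hardware.state())
        return plan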
The integration of autonomy components requires
design of the means by which components communicate with one another, sometimes referred to as the
architectural style. Examples of infrastructure for communication are client-server and publish-subscribe. Recently, more developers have been taking advantage of
the reliability, efficiency, and ease of use that externally
available communication packages such as the Robot Operating System (ROS) (Quigley et al. 2009) provide.
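A toy, in-process publish-subscribe bus gives a feel for this architectural style; in practice, middleware such as ROS provides the same pattern across processes and machines. The topic name and message contents below are made up for the example.

```python
from collections import defaultdict

class MessageBus:
    """Minimal publish-subscribe broker: publishers and subscribers never
    reference each other directly, only shared topic names."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
# The executive layer subscribes to pose estimates published by perception.
bus.subscribe("vehicle/pose", lambda msg: print("executive received", msg))
bus.publish("vehicle/pose", {"x": 10.2, "y": -3.1, "heading": 0.7})
```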
Distributing Cognitive Capabilities
Operational architectures are inherently distributed. We
look at two types of distributed operational architectures:
those that support the activities of a single agent and
those that support coordination of multiple agents. In
this section we are agnostic as to whether the capabilities
are automated or performed by a human. In the next part
we discuss human-machine distributed capabilities.
The first kind of distribution is a spatial distribution
that can arise due to restrictions on the size, weight, or
operational constraints of the vehicle. For example, because of the size and weight restrictions of sUAVs, it is common to assign more computationally intensive or memory-intensive processing to a ground processor rather than to the vehicle. An important factor in this kind of distribution of capabilities is the possible communication overhead incurred between the ground and the vehicle, as well as its effect on the responsiveness of the vehicle.
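One way to reason about this trade-off is a simple placement rule that weighs onboard computation time against ground computation plus link latency. The rule and numbers below are illustrative assumptions, not taken from the article.

```python
def choose_placement(onboard_time_s, ground_time_s, round_trip_latency_s, deadline_s):
    """Return where to run a task so that its response-time deadline is met."""
    offloaded_time = ground_time_s + round_trip_latency_s
    if onboard_time_s <= deadline_s:
        return "onboard"     # responsiveness is preserved locally
    if offloaded_time <= deadline_s:
        return "ground"      # offload only if the link is fast enough
    return "degrade"         # neither option meets the deadline

# Example: heavy vision processing with a 0.5-second deadline.
print(choose_placement(onboard_time_s=1.2, ground_time_s=0.1,
                       round_trip_latency_s=0.2, deadline_s=0.5))  # -> "ground"
```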
The second kind of distribution of capabilities arises
when it makes sense to have coordinated activity
among many agents to achieve mission goals. Some
missions require solutions that involve autonomous
agents operating as teams. In addition to the sensing,
navigation, and control capabilities of individual
vehicles, additional sensing, communication, and
planning capabilities are required to enable coordination. We may also distinguish between the middleware
for coordination and algorithms for enabling coordination, such as team formation and distributed path
planning. An example of the former is the use of a cloud-sourced database management system to enable communication of data and commands among networks of
sUAVs (Tyagi and Nanda 2016).
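As a sketch of the latter kind of coordination algorithm, the following greedy assignment of survey points to a small team of sUAVs is one simple possibility; the data and the greedy rule are assumptions for illustration, and the middleware discussed above would carry the resulting assignments between vehicles.

```python
import math

def assign_tasks(vehicle_positions, task_positions):
    """Greedily assign each task to the nearest, least-loaded vehicle."""
    assignments = {vid: [] for vid in vehicle_positions}
    for task in task_positions:
        best = min(vehicle_positions,
                   key=lambda vid: math.dist(vehicle_positions[vid], task)
                   + len(assignments[vid]))   # light load-balancing term
        assignments[best].append(task)
    return assignments

vehicles = {"uav1": (0.0, 0.0), "uav2": (100.0, 0.0)}
tasks = [(10.0, 5.0), (90.0, 10.0), (50.0, 50.0)]
print(assign_tasks(vehicles, tasks))
```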
Operational architectures for autonomy are inher-
ently coordinated between human and machine. As