these requirements can be classified into must-have capabilities (for example, the vehicle must contain a sensor
that can detect fires) and can't-have capabilities (for example, GPS and GPS-based localization, or vision-based navigation, which won't work in dark spaces).
Furthermore, constraints stemming from requirements
propagate into other constraints. For example, the requirement of navigating through cluttered spaces could
rule out some classes of sUAV platforms because of size.
The requirement for smaller platforms in turn limits the
sensing and processing that can fit onboard.
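Such constraint propagation can be sketched as successive filters over a set of candidate platforms: the navigation requirement bounds vehicle size, and the surviving platforms' payload capacity then bounds the onboard sensing. The platform specifications and thresholds below are hypothetical illustrations, not real sUAV data.

```python
# Hypothetical sketch of requirement-driven constraint propagation.
# Platform specs and thresholds are illustrative, not real sUAV data.

MAX_DIAMETER_M = 0.5   # cluttered indoor spaces rule out larger airframes
candidates = [
    {"name": "quad-small", "diameter_m": 0.30, "payload_kg": 0.4},
    {"name": "quad-large", "diameter_m": 0.90, "payload_kg": 2.5},
    {"name": "micro-uav",  "diameter_m": 0.15, "payload_kg": 0.1},
]

# Constraint 1: the cluttered-space navigation requirement bounds size.
fits_space = [p for p in candidates if p["diameter_m"] <= MAX_DIAMETER_M]

# Constraint 2 (propagated): the surviving platforms' payload capacity
# now bounds the sensing, e.g. a 0.2 kg fire-detection sensor.
SENSOR_MASS_KG = 0.2
viable = [p for p in fits_space if p["payload_kg"] >= SENSOR_MASS_KG]

print([p["name"] for p in viable])  # prints ['quad-small']
```

Note how the large platform is eliminated by the first constraint and the micro platform by the second, which only exists because the first constraint forced smaller airframes.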
Second, operational requirements induce constraints on the set of viable operational architectures.
These may include constraints on the roles humans play
in the coordination of humans and machines. In addition, operational requirements may induce constraints on the vehicle autonomy architecture.
Third, mission- or customer-centric requirements related to sizing could affect the degree of autonomy
allowed. Especially for small aircraft, such as sUAVs, the
equipment used to achieve high levels of automation
may scale disproportionately and actually begin to drive
size and performance metrics. One option for achieving
autonomy on sUAVs is to distribute the cognitive capabilities between onboard and ground processes. This
would increase the complexity of the design by requiring the addition of communication overhead between ground and vehicle; however, the need for a
cognitive capability might favor a distributed design, as
long as the required responsiveness of the system is
maintained despite the communication overhead.
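One way to make the onboard-versus-ground decision concrete is a latency-budget check: a cognitive capability may be offloaded only if ground processing plus round-trip communication still meets the required responsiveness. The decision rule and all timing figures below are hypothetical.

```python
# Hypothetical latency-budget check for distributing cognitive capabilities
# between vehicle and ground. All timing figures are illustrative.

def place_capability(deadline_ms, onboard_ms, ground_ms, round_trip_comm_ms):
    """Return 'onboard', 'ground', or 'infeasible' for one capability."""
    onboard_ok = onboard_ms <= deadline_ms
    ground_total = ground_ms + round_trip_comm_ms
    if ground_total <= deadline_ms and (not onboard_ok or ground_total < onboard_ms):
        return "ground"   # offloading meets the deadline and is no slower
    if onboard_ok:
        return "onboard"
    return "infeasible"   # neither placement meets the responsiveness requirement

# A fast obstacle-avoidance loop cannot tolerate the link latency, so it
# stays onboard; slower mission replanning can move to the ground.
print(place_capability(deadline_ms=50, onboard_ms=30, ground_ms=5,
                       round_trip_comm_ms=80))    # prints onboard
print(place_capability(deadline_ms=2000, onboard_ms=1500, ground_ms=200,
                       round_trip_comm_ms=150))   # prints ground
```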
Fourth, considerations of equipment and technologies needed for a mission raise questions about
the intended role of autonomy. For example, at NASA
it has been common to classify machine autonomy as
enabling or enhancing a mission. Autonomy is enabling if mission goals cannot be accomplished
without it; autonomy is enhancing if it offers a better
(safer, more effective, more robust) alternative to
purely manual operations. Put another way, an enabling capability is usually one for which manual operations are impossible, too dangerous, or too difficult. Similarly, a capability is typically deemed enhancing when it improves the human operator's cognitive capabilities or offers a more robust, effective, and safe alternative to manual operations. As a special case, a mission might have a
built-in requirement to test new technologies (for
example, the Remote Agent experiment on NASA's
Deep Space One mission).1 In such special cases, the
autonomy clearly becomes enabling for the vehicle.
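The enabling/enhancing distinction above can be restated as a simple decision rule. The encoding below is a hypothetical illustration of that rule, not NASA's formal criterion.

```python
# Hypothetical encoding of the enabling/enhancing distinction described above.

def classify_autonomy(manual_feasible: bool, improves_on_manual: bool) -> str:
    """Classify a machine capability for a given mission."""
    if not manual_feasible:
        return "enabling"    # mission goals cannot be met without the capability
    if improves_on_manual:
        return "enhancing"   # safer, more effective, or more robust than manual ops
    return "neither"

# Remote Agent-style experiment: testing the technology is itself a mission
# goal, so manual operation cannot substitute and the autonomy is enabling.
print(classify_autonomy(manual_feasible=False, improves_on_manual=True))  # enabling
```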
Fifth, legal restrictions could affect all aspects of
autonomy design. Anticipating the focus of the next
section by way of illustration, current FAA rules on the
operation of sUAVs restrict both human and autonomous operations and, relatedly, provide constraints
on both vehicle and operational architectures.
Finally, as noted earlier, design requirements often arise
from company best practices. Autonomy is a new suite of
technologies, however, so there may not yet be a history of
best practices associated with autonomous design. Even so, tying
autonomy development to best practices is the best way to
achieve acceptance (Bayouth, Nourbakhsh, and Thorpe
achieve acceptance (Bayouth, Nourbakhsh, and Thorpe
1997). One way to accomplish this is through a design for
preplanned product improvement (Raymer 2012), a
configuration that allows for the evolution of autonomy
capabilities over time. We see examples of preplanned
product improvement (P3I) extensively in defense industries and in automobile development.
Following the outline of the conceptual design process, once a viable operational architecture for autonomy has been selected from the set of requirements, the
next step is to conduct trade studies to determine the
best equipment (hardware and software) for implementing each component. A performance study would
decide whether a candidate component has the desired
responsiveness to inputs or whether a component
exhibits the desired resolution. Another class of study
is more related to sizing constraints: does the proposed component add too much weight, and does it fit properly
into the vehicle? Do the processing requirements make
it impractical for the capability to be onboard? A third
class of trade study involves development cost: for
example, should the software required for the capability be developed in-house, or should an open source
version of the capability be considered?
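These three classes of trade study can be combined into a simple screening pass over candidate components: hard sizing constraints eliminate infeasible candidates, and the survivors are ranked by a weighted figure of merit over performance and cost. The criteria, weights, and component data below are hypothetical.

```python
# Hypothetical trade-study screening: hard sizing constraints first,
# then a weighted score over performance and cost. All data illustrative.

WEIGHTS = {"responsiveness": 0.5, "resolution": 0.3, "cost": 0.2}

components = [
    {"name": "lidar-A",  "mass_kg": 0.25, "responsiveness": 0.90, "resolution": 0.8, "cost": 0.4},
    {"name": "lidar-B",  "mass_kg": 0.60, "responsiveness": 0.95, "resolution": 0.9, "cost": 0.3},
    {"name": "camera-C", "mass_kg": 0.10, "responsiveness": 0.60, "resolution": 0.5, "cost": 0.9},
]

MASS_BUDGET_KG = 0.3  # sizing study: over-budget components are screened out

def score(component):
    # performance and cost studies fold into one weighted figure of merit
    return sum(WEIGHTS[k] * component[k] for k in WEIGHTS)

feasible = [c for c in components if c["mass_kg"] <= MASS_BUDGET_KG]
best = max(feasible, key=score)
print(best["name"])  # prints lidar-A
```

Here the highest-performing sensor is eliminated outright by the mass budget, so the weighted ranking only ever compares the feasible candidates, mirroring the order of the studies described above.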
To summarize, conceptual design for autonomy
consists of deriving component cognitive capabilities
from mission and operational requirements. For each
capability, a component analysis determines whether
machine autonomy is enabling or enhancing. Component capabilities are combined into an architecture.
Architectural issues include identifying a communication infrastructure to tie the components together;
Figure 2. Generic Three-Tier Architecture for Autonomy.