as a high-level loop from the plan dispatcher to the
lower-level controllers. In this manner, low-level control
alternates between an IBVS for tracking and a trajectory-based controller for searching. Both consist of a control
loop that utilizes vision for estimating position (during
search) and for keeping a target of interest in view (for
tracking). Each controller sends control actions to
manage the flight modes of the sUAV and uses sensory
inputs inside their control loop. A hierarchical organization is transformed into an operational architecture by
deciding, roughly speaking, what goes where. This decision has two parts. The first allows for a determination
of the vehicle architecture, the capabilities that reside
onboard. One design trade-off here is communication
overhead and increased processing power (as well as
cost). sUAVs are intrinsically limited to small payload
capabilities as well as onboard processing power. One
solution is to migrate some of these capabilities to a
ground computer. For example, Chakrabarty et al.
(2017) describe a tracking system using an AR.Drone
with a visual search loop running on a ground computer.
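One way to picture this onboard/offboard split is as two loops joined by a wireless link: the vehicle streams camera frames down, and the ground computer runs the heavier visual search and streams commands back. The sketch below is illustrative only; the queues stand in for the actual radio link, and `ground_search_step` and the stub detector are hypothetical, not components of the AR.Drone system described by Chakrabarty et al. (2017).

```python
from queue import Queue

# Queues stand in for the wireless link between vehicle and ground station.
frames_downlink = Queue()   # vehicle -> ground: camera frames
commands_uplink = Queue()   # ground -> vehicle: control commands

def ground_search_step(detect):
    """One iteration of a visual search loop running on the ground computer:
    pull a frame, run detection on it, send back a mode command."""
    frame = frames_downlink.get()
    found = detect(frame)
    commands_uplink.put("track" if found else "keep_searching")

# Example: a stub detector that "finds" the target in the second frame.
frames_downlink.put({"id": 1, "has_target": False})
frames_downlink.put({"id": 2, "has_target": True})
for _ in range(2):
    ground_search_step(lambda f: f["has_target"])
```

The design choice this illustrates is that only frames and short commands cross the link, so the vehicle needs little onboard compute at the cost of communication latency and dependence on the link.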
The second part of the decision requires specifying
the boundaries between human and machine decision
making. In the search and track example, it may be decided that humans at a console perform the search phase
of the operation, whereas the sUAV conducts the tracking
on its own. Clearly, other hybrid designs between the
fully manual and the fully autonomous are possible.
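The fully autonomous end of this spectrum can be sketched as a small executive state machine that alternates between the two low-level controllers based on target visibility. The `Executive` class and controller callables below are hypothetical stand-ins for the actual components, a minimal sketch assuming the mode switch is driven purely by whether vision currently sees the target.

```python
from enum import Enum, auto

class Mode(Enum):
    SEARCH = auto()  # trajectory-based controller scanning for a target
    TRACK = auto()   # IBVS controller keeping the target in view

class Executive:
    """Minimal executive that alternates between search and track modes."""

    def __init__(self, search_ctrl, track_ctrl):
        self.mode = Mode.SEARCH
        self.search_ctrl = search_ctrl
        self.track_ctrl = track_ctrl

    def step(self, target_visible):
        # Switch modes on a change in target visibility.
        if self.mode is Mode.SEARCH and target_visible:
            self.mode = Mode.TRACK
        elif self.mode is Mode.TRACK and not target_visible:
            self.mode = Mode.SEARCH
        # Delegate the control action to the active controller.
        ctrl = self.track_ctrl if self.mode is Mode.TRACK else self.search_ctrl
        return ctrl(target_visible)

# Usage with dummy controllers that just name their command.
exe = Executive(lambda v: "search_cmd", lambda v: "track_cmd")
```

A hybrid design would replace one of the two controller callables with a human operator at a console, leaving the mode-switching logic unchanged.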
During Conceptual Design
In an earlier section it was noted that part of the conceptual design phase involves analyzing and optimizing
candidate designs. We close this study by
evaluating designs for autonomy, using the performance
metrics defined earlier and the search and track
example. We focus on evaluating a single
autonomous behavior: the ability of the executive component of an autonomous sUAV to alternate between
search and track so as to keep a target in view. This behavior
was chosen because performing it well requires an effective combination of inference and sensing.
Morris et al. (2017) provide a more detailed discussion.
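A natural performance metric for this behavior is the fraction of mission time the target stays in view. The function below is an illustrative sketch of such a metric, assuming a per-timestep boolean visibility log; it is not the metric definition from Morris et al. (2017).

```python
def time_in_view_fraction(visibility):
    """Fraction of sampled timesteps in which the target was in view.

    `visibility` is a sequence of booleans, one per fixed-rate sample.
    Returns 0.0 for an empty log rather than dividing by zero.
    """
    if not visibility:
        return 0.0
    return sum(visibility) / len(visibility)

# e.g. a run where the target was visible in 3 of 4 samples scores 0.75
```

Comparing this score across candidate designs (different executives, or different human/machine splits) gives a simple basis for the evaluation described here.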
First, we review some general requirements. The
overall purpose of the mission was a special case of an
autonomy technology demonstration, that is, to test
new autonomy capabilities related to search and track.
For this reason, by definition, autonomy is enabling
for this mission. Second, the interest was in a fast
development and testing cycle, so preference was
given to hardware, software, and cognitive components that
were readily available and could be integrated quickly.
First, an IBVS system (Pestana et al. 2014) and a tracker
(Nebehay and Pflugfelder 2014) were integrated as
modules in ROS (Quigley et al. 2009). A HOG-Haar
Figure 5. Simulating Motion Blur to Mimic Real Flight Data.