results, and the position of the sensors as a function of range. Deep-multimodal image fusion was
performed using the Defense Systems Information
Analysis Center Automatic Target Recognition
Algorithm Development Image Database package
(DSIAC 2018). Inherently, the multiview swarm sensing advantage (figure 8a) must be balanced with the
action complexity (figure 8b). Figure 8b highlights
the outlined objects in red boxes to demonstrate
the complex contextual variations of the sensor
(near or far) and environment (open or cluttered).
The Defense Systems Information Analysis Center
Automatic Target Recognition database provides
many images to investigate variations of scenarios
over models and methods. The combination of the theoretical models (that is, data at rest) and DL experimental results (that is, data in collect) enhances performance. Figure 8c shows the results, highlighting
that, at ranges greater than 3 kilometers, DL image fusion should be used but, as the sensor-to-object range decreases, visual imagery should be selected.
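The range-based selection rule above can be sketched as a simple decision function. The 3-kilometer threshold comes from the figure 8c result; the function name, interface, and modality labels are illustrative assumptions, not from the article.

```python
def select_modality(range_km, threshold_km=3.0):
    """Choose a sensing modality from sensor-to-object range (km).

    Illustrative sketch: beyond the threshold, DL multimodal image
    fusion is preferred; closer in, visual imagery alone is selected.
    """
    if range_km > threshold_km:
        return "dl_image_fusion"  # distant object: fuse modalities with DL
    return "visual_imagery"       # close object: visual imagery suffices
```

In a fielded system this threshold would itself be context dependent (for example, lighting and clutter), but the sketch captures the crossover behavior the experiment reports.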
For CGS multimodal sensing and action (that
is, data in motion), the user commands a desired
result and the context-based performance includes
the mathematical models, DL image analysis, and
UAV routing as a human–machine agent design
(command-guided architecture in figure 7). In a CGS
scenario, it is assumed that a user selects the desired
result such as verified object recognition from two
types of sensors. In a noncooperative sensing scenario, both UAVs pursue the target without knowledge of the other UAV. For the cooperative sensing
case, the human–machine agents guide the systems
to different positions, taking advantage of the context information. Examples include understanding
multiperspective coordination, environmental conditions, and object behavior.
The scenario presents three ideas for future context-based AI, including scenario autonomy, environmental reasoning, and situation understanding. The
first is that the human–machine teaming leverages
context-by commands for system autonomy. If the
user makes a command too early, the system does
not yet have the learned techniques to make the
appropriate decision based on data alone. However,
if the user provides a general command to maximize
object recognition, the CGS can optimize data collection in support of the mission. Context supports the system agents in achieving the goal through cooperative intelligence.
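The context-by idea, in which the user issues a general goal and the system chooses the collection action, can be sketched as goal-conditioned action selection. The goal string, candidate actions, and confidence values below are hypothetical placeholders for illustration.

```python
def plan_collection(goal, candidate_actions):
    """Pick a sensing action given a general user goal.

    candidate_actions: list of (action_name, expected_recognition_confidence).
    Rather than the user commanding each step, a general goal lets the
    system rank candidate actions itself (a context-by command sketch).
    """
    if goal == "maximize_object_recognition":
        return max(candidate_actions, key=lambda a: a[1])[0]
    raise ValueError(f"unknown goal: {goal}")

actions = [("hover", 0.62), ("reposition_uav2", 0.81), ("descend", 0.74)]
best = plan_collection("maximize_object_recognition", actions)
```

The point of the sketch is the division of labor: the human supplies the goal, and the machine agents supply the context-dependent optimization.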
The second attribute includes context-of information (Snidaro et al. 2016). A high object speed
indicates that the object is likely on a road network.
The CGS uses context-of estimates for the likely
directions of the moving target and can maneuver
platforms to optimize recognition. Other context-of cues include lighting conditions and range. Hence, incorporating such physical intelligence can improve performance robustness.
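The context-of inference described above, where a high measured speed supports the hypothesis that the object is on a road and therefore constrains its likely directions, can be sketched as follows. The speed threshold and the road-grid headings are assumed values for illustration, not figures from the article.

```python
ROAD_SPEED_KMH = 40.0              # assumed on-road speed threshold
ROAD_HEADINGS = [0, 90, 180, 270]  # headings of a hypothetical road grid

def likely_headings(speed_kmh, current_heading):
    """Return plausible future headings (degrees) for a tracked object.

    High speed -> on-road hypothesis: constrain to the nearest road
    direction, so platforms can be maneuvered to cover it.
    Low speed -> off-road: any coarse heading remains plausible.
    """
    if speed_kmh >= ROAD_SPEED_KMH:
        def angular_dist(h):
            d = abs(h - current_heading)
            return min(d, 360 - d)
        return [min(ROAD_HEADINGS, key=angular_dist)]
    return list(range(0, 360, 45))
```

Constraining the heading set is what lets the CGS reposition platforms ahead of the target rather than chasing it.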
The third attribute is context-for assessment
(Snidaro et al. 2016). Referenced entities or data
support understanding, such as the hypothesized
object intent behavior. The most probable loca-
tion provides a context-for assessment based on
the object intent from which agents can estimate
the appropriate actions. The context-for approach provides social intelligence for situation understanding.
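A context-for assessment, selecting the most probable hypothesized intent and the sensing action appropriate to it, can be sketched as below. The intents, prior probabilities, and actions are hypothetical examples, not from the article.

```python
def assess(intent_priors, intent_actions):
    """Context-for sketch: map the most probable intent to an action.

    intent_priors: {intent: probability}, hypothesized object intents.
    intent_actions: {intent: sensing_action} chosen for each intent.
    """
    best_intent = max(intent_priors, key=intent_priors.get)
    return best_intent, intent_actions[best_intent]

priors = {"transit_to_depot": 0.6, "loiter": 0.3, "evade": 0.1}
acts = {"transit_to_depot": "cover_depot_road",
        "loiter": "orbit_current_area",
        "evade": "widen_search"}
intent, action = assess(priors, acts)
```

In practice the priors would be updated from track history and social knowledge rather than fixed, but the sketch shows how a hypothesized intent drives the agents' action estimate.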
Future AI systems can leverage DL techniques and contextual knowledge to build models incorporating physical intelligence, to use historical data as social intelligence, and to afford human–machine sharing of dynamic data for robust cooperative intelligence in evolving situations. The scenario analysis presents a simple example, providing a strategy that leverages multimodal sensing and action for complex scenarios.
Many AI ML approaches are based on acquiring a large set of labeled data. DL and statistical relational learning demonstrate context-based analysis for physical intelligence. However, DL contextual understanding is limited: the statistical model is built on labeled data and does not know details external to the data, that is, the contextual label drift available from social intelligence. Incomplete, partial, or ambiguous data require information fusion as cooperative intelligence to resolve dynamic situations.
This article presents context-based AI for human–machine shared context by addressing data management concerns (that is, at rest, in collect, in transit, in motion, in use). A CGS example is presented that leverages theoretical knowledge and experimental deep multimodal image fusion results, where performance increases with context information. Improvements come from context-by goals for flexible cooperative control, context-of environment information for physical sensing, and context-for data from social knowledge for situation understanding.
Future opportunities being pursued for context-based AI include generative adversarial networks
that act as agents based on information from real or
augmented, collected or modeled, and analyzed or
simulated data coordinated with IF, OI, or CD agents.
The AI context-agent approaches expand with human inputs, accounting for interface design considerations and complex scenarios. As many AI systems are designed for only one context, distributed methods utilizing transfer learning could share representations across agent contexts to handle unknown situations and improve robustness.
Adomavicius, G.; Mobasher, B.; Ricci, F.; and Tuzhilin, A. 2011. Context-Aware Recommender Systems. AI Magazine 32(3): 67–80. doi.org/10.1609/aimag.v32i3.2364.
Amershi, S.; Cakmak, M.; Knox, W. B.; and Kulesza, T. 2014. Power to the People: The Role of Humans in Interactive Machine Learning. AI Magazine 35(4): 105–20. doi.