translated into a different pulse width or antenna
rotational speed.
Finally, the internal configuration logic is used
to provide the adaptation module with the information it needs to plan adaptive actions. The
mission of the online reconfiguration module is to
change the internal structure of the fusion solution
to satisfy the goals or to better respond to external circumstances (context). The relevant context and changes in the goals are therefore the two sources
used to decide on actions over the structure of the fusion
solution: activation/deactivation of fusion nodes
or changes in the control flow.
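The reconfiguration logic described above can be sketched as a small controller that activates or deactivates fusion nodes according to the current context and goals. This is a minimal illustrative sketch, not the article's implementation; the class names, the context keys, and the activation rule are all assumptions.

```python
# Illustrative sketch of an online reconfiguration module.
# All names (FusionNode, ReconfigurationModule, context keys) are assumed.

class FusionNode:
    """A fusion node that can be switched on or off at run time."""
    def __init__(self, name, required_context):
        self.name = name
        self.required_context = required_context  # context keys this node needs
        self.active = True

class ReconfigurationModule:
    """Decides actions over the fusion structure from context and goals."""
    def __init__(self, nodes):
        self.nodes = nodes

    def reconfigure(self, context, goals):
        """Activate nodes whose required context is available and that serve
        a current goal; deactivate the rest. Returns the actions taken."""
        actions = []
        for node in self.nodes:
            should_run = (node.required_context <= context.keys()
                          and node.name in goals)
            if should_run != node.active:
                node.active = should_run
                actions.append((node.name,
                                "activate" if should_run else "deactivate"))
        return actions

# Usage: the video-track node is switched off when its context is missing.
nodes = [FusionNode("radar_track", {"pulse_width"}),
         FusionNode("video_track", {"illumination"})]
module = ReconfigurationModule(nodes)
print(module.reconfigure({"pulse_width": 1.2}, goals={"radar_track"}))
# -> [('video_track', 'deactivate')]
```

The two decision sources named in the text (relevant context and goal changes) map directly onto the two conditions of the activation rule.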
Conclusions
The article identifies handling of CI and an appropri-
ate situation representation as key elements to develop
adaptive and explainable AI systems. Although AI
is certainly present in current applications and has
become a popular area of computer science, in part
because of business investments and media coverage
of AI success in games, several open challenges have
been discussed to motivate progress toward what has
been called the next milestone, or third wave, of AI.
Whereas previous milestones have been character-
ized by manually crafted knowledge (expert systems)
and statistical machine learning, the open challenge
now lies in learning paradigms that combine explainable
models with context adaptation to gain generalization
capability. Explainable AI can benefit from IF
developments, context enhancement, and decision
support systems.
AI must address many problems that are also current
in the area of IF, such as perception and object recognition from sensory data, and situation awareness (SAW). A basic challenge
identified for both future AI and IF systems is
understanding context: the ability to represent and
relate how relevant the context is to the inference
problems addressed, and mechanisms to adapt the
inference processes to this context. Given this parallel, the challenges include perception, reasoning, and adaptation in deploying AI and IF
systems that support knowledge representation and
situation understanding.
The preeminent challenge, common to both AI and
IF research, is context adaptation. A basic
objective is the capability to learn interpretable
models from contextual data to bind observations
with knowledge and use the semantics provided by
context. Current strategies for situation assessment
and HLIF have been reviewed and related to
ongoing research in the AI branches of computer science (compositional learning models, cognitive
architectures, common-sense reasoning, and rational agents). A general approach for AI systems
should be based on conceptual representation to
allow automatic adaptation to domain conditions.
Future breakthroughs will stem from architectures
to represent, access (through middleware), and exploit
context in IF processes, providing context-enhanced
systems.
Notes
1. See www.darpa.mil/program/explainable-artificial-intelligence.
2. Terms like information integration have been preferred
by some to connote greater generality than earlier, narrower
definitions of data fusion (and, perhaps, for distance from
old data fusion approaches and programs), but such manipulations do not contribute toward better representation or
understanding.
3. Quality is another term for which it is difficult to
achieve consensus; data quality has been written about
for general applications (Sycara et al. 2009) as well as
for IF-specific applications (Zadeh 1983). We address the
quality issue as part of the general content of this article
in various ways.
4. A definition of a priori is “formed or conceived beforehand,” and here we mean it in the sense of incorporating
contextual aspects in a design at the outset.
References
Baader, F.; Horrocks, I.; and Sattler, U. 2005. Description
Logics as Ontology Languages for the Semantic Web. In
Mechanizing Mathematical Reasoning, edited by D. Hutter
and W. Stephan, 228–48. Vol. 2605. Lecture Notes in Computer
Science. Berlin: Springer. doi.org/10.1007/978-3-540-32254-2_14.
Biran, O., and Cotton, C. V. 2017. Explanation and Justification in Machine Learning: A Survey. Paper presented at
the International Joint Conference on Artificial Intelligence
2017 Workshop on Explainable Artificial Intelligence.
Melbourne, Australia, August 20.
Blasch, E.; Kadar, I.; Salerno, J.; Kokar, M. M.; Das, S.;
Powell, G. M.; Corkill, D. D.; and Ruspini, E. H. 2006.
Issues and Challenges in Situation Assessment (Level 2
Fusion). Journal of Advances in Information Fusion 1(2):
122–39.
Blasch, E. P.; Bosse, E.; and Lambert, D. A. 2012. High-Level
Information Fusion Management and Systems Design. Norwood,
MA: Artech House.
Davis, E., and Marcus, G. 2015. Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence. Communications of the ACM 58(9): 92–103. doi.org/10.1145/2701413.
Delcroix, M.; Kinoshita, K.; Yu, C.; Ogawa, A.; Yoshioka, T.;
and Nakatani, T. 2016. Context Adaptive Deep Neural
Networks for Fast Acoustic Model Adaptation in Noisy
Conditions. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway,
NJ: IEEE.
Deng, L. 2018. Artificial Intelligence in the Rising Wave of
Deep Learning: The Historical Path and Future Outlook.
IEEE Signal Processing Magazine 35(1): 180, 173–77. doi.org/10.1109/MSP.2017.2762725.
Endsley, M. R. 1995. Toward a Theory of Situation Awareness
in Dynamic Systems. Human Factors Journal 37(1): 32–64.
doi.org/10.1518/001872095779049543.
Giunchiglia, F. 1992. Contextual Reasoning. Epistemologia
16: 345–64.