new challenges related to social interactions and the
availability of unstructured and high-dimensional data.
The speakers were Kosuke Imai (Harvard University)
and Paul Hünermund (Maastricht University).
Finally, talks on the health sciences revolved around
how causal modeling has helped clarify long-standing
issues in epidemiology, as well as the risks of bias and
the (un)fairness of predictive algorithms. The speakers
were Maria Glymour (UCSF) and Mark Cullen (
Stanford University).
The symposium also held short-talk and poster sessions for the contributed papers, which covered a broad variety of topics, such as bias analysis, causal discovery, missing data, instrumental variables, transportability, counterfactual reasoning, and data fusion.
The participants discussed how the recent growth in
popularity of machine-learning techniques in their
fields has rekindled interest in understanding their
theoretical limitations through the lens of causality,
and they exchanged ideas with experts from various
fields to put this discussion in a broader context. Participants agreed that several challenges remain to be addressed through the development of new methodological tools and should be discussed in future symposia.
Elias Bareinboim, Prasad Tadepalli, Sridhar
Mahadevan, Csaba Szepesvári, Bernhard Schölkopf,
and Judea Pearl served as cochairs of this symposium. Carlos Cinelli, Murat Kocaoglu, and Prasad
Tadepalli wrote this report.
Combining Machine
Learning with Knowledge Engineering
The AAAI 2019 Spring Symposium on Combining Machine Learning with Knowledge Engineering explored how the two fields can be fruitfully combined. Machine learning helps to solve complex tasks based on real-world data instead of pure intuition. It is most suitable for building AI systems when knowledge is not explicitly known or is tacit.
Many business cases and real-life scenarios using machine-learning methods, however, demand
explanations of results and behavior, particularly
when decisions can have serious consequences. Furthermore, application areas such as banking, insurance, and medicine are highly regulated and require
compliance with laws and regulations. This specific
application knowledge cannot be learned but needs
to be represented, which is the area of knowledge
engineering.
Knowledge engineering, on the other hand, is appropriate for representing expert knowledge that people are aware of and that must be taken into account for compliance reasons or for explanations.
Knowledge-based systems that make knowledge
explicit are often based on logic and thus can explain
their conclusions. These systems typically require a
higher initial effort during development than sys-
tems that use machine-learning approaches. However,
symbolic machine-learning and ontology-learning
approaches show promise for reducing the effort of
knowledge engineering.
Because of their complementary strengths and
weaknesses, there is an increasing demand for the
integration of knowledge engineering and machine
learning. Indeed, recent results indicate that explicitly represented application knowledge can help data-driven machine-learning approaches converge faster on sparse data and be more robust against noise.
More than 70 participants in the Combining Machine Learning with Knowledge Engineering (AAAI-MAKE) symposium contributed to intense discussion during the presentation of 28 position and full papers and four poster and demonstration sessions. Topics
covered such application domains as health care,
drug development, social networks, material sciences,
fake news detection, and product recommendations.
The presentations typically focused primarily on
either machine learning or knowledge-based systems.
However, there was a strong commitment to the
importance of combining machine learning with
knowledge bases. Focusing on only one aspect will
not exploit the full potential of AI.
The participants had the opportunity to attend
several keynotes. On the first day, Doug Lenat emphasized the need for a more expressive logic language in his keynote presentation. He gave a recap of the
Cyc knowledge base and showed ways to connect
knowledge-based systems with machine learning.
On the second day, Frank van Harmelen showed the
limitations of machine learning, particularly in areas where little knowledge is available, such as the recognition of rare diseases. He introduced the concept of boxology to represent reusable architectural patterns for combining learning and reasoning.
In the plenary session on day two, Aurona Gerber
gave a short and witty overview of the AAAI-MAKE
symposium by using an analogy to Asterix. On the
final day, cochairs Knut Hinkelmann and Andreas
Martin concluded the symposium and emphasized
that this new joint community should continue contributing to the topic of combining the two fields.
There was consensus that the topic is worth exploring in the future.
Andreas Martin, Knut Hinkelmann, Aurona Gerber,
Doug Lenat, Frank van Harmelen, and Peter Clark
were part of the organizing team of this symposium
and served as session chairs. The papers of the symposium were published as CEUR Workshop Proceedings,
Volume 2350. This report was written by Andreas
Martin and Knut Hinkelmann.
Interpretable AI for Well-
Being: Understanding Cognitive
Bias and Social Embeddedness
The AAAI 2019 Spring Symposium on Interpretable
AI for Well-Being: Understanding Cognitive Bias and
Social Embeddedness discussed interpretable AI for
well-being. Interpretable AI is a method and system