62 AI MAGAZINE
One of the important issues in understanding machine intelligence in human health and wellness is cognitive bias. Amid advances in big data and machine learning, we should not overlook new threats to enlightened thought, such as the recent trend of social media platforms and commercial recommendation systems being used to exploit people's inherent cognitive biases.
Another important issue is social embeddedness.
AI systems will be deeply embedded in society, and
we need to understand how AI is perceived at the
society level. Social embeddedness topics include the role of AI in future economics (basic income, the impact of AI on GDP) and the well-being of society (citizens' happiness, quality of life).
Our symposium included four invited talks to
provide new perspectives on interpretable AI for
well-being. Pang Wei Koh (Stanford University) gave
a talk on understanding black-box deep learning predictions with influence functions. Avanti Shrikumar
(Stanford University) discussed the issues of interpretable deep learning for genomics. Judea Pearl (UCLA)
introduced the foundations of causal inference.
Sidharth Goel (Google AI) introduced DeepVariant,
deep learning for genomic variant calling. The final
speaker was Peter Pirolli (Florida Institute for Human
and Machine Cognition), who gave a talk on interpretable AI for well-being using mobile health in the
context of cognitive science.
The symposium technical presentations included
25 papers as well as 3 posters and demonstrations. Presentation topics included explainable AI, interpretable AI, social embeddedness, cognitive bias, and
well-being AI. Takashi Kido (Preferred Networks)
presented on limitations of current technologies
based on machine learning and discussed the challenges for interpretable AI for well-being. Amy Ding
(Carnegie Mellon University) proposed a model
of unbiased and explainable algorithmic decision
making that treats everyone fairly. Umang Bhatt
(Carnegie Mellon University) proposed the idea of
temporal explanations as a medical narrative. Sadeq
Rahimi (Harvard University) discussed extended
mind, embedded AI, and the barrier of meaning.
Morteza Shahrezay and Orestis Papakyriakopoulos
(Bavarian School of Public Policy at the Technical
University of Munich) reported research on estimating the political orientation of Twitter users. Ziehui
Leng (University of Tokyo) presented a cross-lingual
analysis on culinary perceptions to understand
cross-cultural differences. Yuichi Yoda (Ritsumeikan
University) reported on a study of the basis of AI-based information systems, using the AI shogi system Ponanza as a case study.
Takashi Kido and Keiki Takadama served as
cochairs of this symposium and wrote this report. The
symposium papers will be published online as a CEUR workshop proceedings.

Privacy-Enhancing AI and Language Technologies
Privacy remains an evolving and nuanced concern
of computer users, as new technologies that use the
web, smartphones, and the Internet of Things collect
myriad personal information. Rather than viewing
AI and human language technologies as problems
for privacy, the goal of this symposium was to flip
the script and explore how AI and human language
technology can help meet a user’s desire for privacy
when interacting with computers. This event was
a successor to Privacy and Language Technologies,
a previous AAAI Symposium held in fall 2016.
We focused on two flexibly defined research questions: How can AI and human language technologies
preserve or protect privacy in challenging situations?
and, How can AI and human language technologies
help interested parties (for example, computer users,
companies, regulatory agencies) understand privacy
in the status quo and what people want?
Talks by the keynote speakers followed these two
themes. Jessica Staddon (Google) spoke on opportunities for AI and human language technologies in
security and privacy incident management and discovery, leading to a discussion on opportunities for AI
and human language technologies to improve how
companies and large institutions manage breaches
in privacy and security. Serge Egelman (ICSI) gave a
talk on empowering users to make privacy decisions
in mobile environments, which led to a discussion of
how AI and human language technologies can better
connect smartphone users with information on how
their personal data are shared and collected.
The symposium program also consisted of oral
presentations of accepted papers, discussion forums,
and a poster session. Privacy policies of apps and
websites were a major theme: several participants presented work on improving the usability of privacy
policies by extracting key information from them
automatically. Other presenters addressed privacy
in online social networks and privacy-preserving
machine learning. Additionally, the symposium
included a joint session with the AI, Autonomous
Machines, and Human Awareness symposium to
explore potential collaborations.
The symposium lead organizer was Shomir Wilson
(Pennsylvania State University), and the coorganizers
were Sepideh Ghanavati (University of Maine), Kambiz
Ghazinour (Kent State University), and Norman
Sadeh (Carnegie Mellon University). The papers of the
symposium were published as a CEUR workshop proceedings. This report was written by Shomir Wilson.
The ability to tell and understand stories draws
on many aspects of intelligence, from language