Economist David Rothschild (Microsoft Research)
proposed addressing misinformation as a market
problem, framing it in terms of outcomes such as
exposure to information and its impact on opinion formation and decision making. Rothschild noted that
research in the field tends to focus on what content
people consume rather than the more difficult matter of how they actually absorb that information. He
also questioned whether mass ignorance about a particular issue may be more harmful than the consumption of fake news about that issue. Research may also
be distorted by overreliance on Twitter data, Rothschild suggested, since Twitter may be less representative
of news consumption by the general population than
Facebook or television (Diaz et al. 2016).
Computer scientist Kazutoshi Sasahara (Nagoya
University) presented work in progress on a simple
model of online social network dynamics. The model demonstrates that online echo chambers are
inevitable under current social media mechanisms for content sharing, which tend to cluster individuals into segregated and polarized groups.
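As a rough illustration of this kind of dynamic (and not the specific model Sasahara presented), the sketch below combines two assumed ingredients, bounded-confidence social influence and rewiring away from discordant accounts, and shows how they can segregate an initially random network; all parameters are placeholders.

```python
import random

# Illustrative echo-chamber sketch (assumptions, not the workshop model):
# agents hold opinions in [-1, 1]; seeing a post from a like-minded account
# pulls their opinion closer, while a post from a too-distant account may
# trigger an unfollow and a refollow of someone closer in opinion.
N, K = 100, 10          # agents and accounts each agent follows
EPSILON = 0.4           # bounded-confidence threshold
MU = 0.3                # influence strength
P_REWIRE = 0.3          # chance of unfollowing after a discordant post
STEPS = 20000

rng = random.Random(42)
opinions = [rng.uniform(-1, 1) for _ in range(N)]
following = [set(rng.sample([j for j in range(N) if j != i], K)) for i in range(N)]

for _ in range(STEPS):
    i = rng.randrange(N)                      # a reader
    j = rng.choice(sorted(following[i]))      # an account the reader follows posts
    if abs(opinions[i] - opinions[j]) < EPSILON:
        opinions[i] += MU * (opinions[j] - opinions[i])   # social influence
    elif rng.random() < P_REWIRE:
        following[i].discard(j)               # unfollow the discordant account
        candidates = [k for k in range(N) if k != i and k != j and k not in following[i]]
        # refollow whichever sampled candidate is closest in opinion
        following[i].add(min(rng.sample(candidates, 10),
                             key=lambda k: abs(opinions[k] - opinions[i])))

# Crude segregation measure: mean opinion distance to followed accounts
dist = sum(abs(opinions[i] - opinions[j]) for i in range(N) for j in following[i]) / (N * K)
print(f"mean opinion distance to followed accounts: {dist:.3f}")
```

Comparing runs with rewiring disabled (P_REWIRE = 0) gives a sense of how much of the clustering comes from the sharing and unfollowing mechanics themselves rather than from the initial distribution of opinions.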
Workshop participants responded to an online
wiki survey and identified five major research challenges related to the question of how to best study
the cognitive, social, and technological biases that
make us vulnerable to misinformation (see table 1).
What Countermeasures Are Most
Feasible and Effective and
Who Can Best Deliver Them?
The final panel discussed countermeasures against
misinformation, as well as who can best deliver
them. Computational journalist Nick Diakopoulos
(University of Maryland) identified three relevant
actors to be considered: tech platforms, individuals,
and civil society. He discussed which combinations
of the three groups could be most effective in combating fake news. Platforms are particularly powerful,
but they raise concerns about shaping public discourse through their algorithms, a risk that could be mitigated through algorithmic transparency (Diakopoulos 2017; Diakopoulos and Koliska
2017). Civil society and individuals alone cannot
fact-check everything. Diakopoulos concluded that
the best partnership would be between civil society
and platforms.
Fact-checker David Mikkelson (Snopes.com)
argued that fake news is only as problematic as poor
journalism: the very news outlets that are supposed
to question and disprove misinformation often help
spread it (Mikkelson 2016).
Computer scientist Tim Weninger (University of
Notre Dame) addressed the lack of research on Reddit, a far larger platform than Twitter. He reported on
findings that initial votes on posts have a strong
impact on their final visibility, allowing coordinated
attacks to game the system through a snowballing
effect (Glenski and Weninger 2017). Weninger also
found that a large portion of Reddit users merely scan
headlines: most up or down votes are cast without
even viewing the content (Glenski, Pennycuff, and
Weninger 2017).
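A toy calculation makes the snowballing mechanism concrete. The simulation below is a hedged illustration, not Glenski and Weninger's analysis: it assumes later voters herd on the visible balance of earlier votes (a Pólya-urn-style rule), so a handful of coordinated early upvotes can translate into a large gap in final score and hence visibility.

```python
import random

def simulate(seed_upvotes, voters=500, rng=None):
    """Toy herding model: each voter upvotes with probability given by the
    current share of upvotes (an assumption for illustration only)."""
    rng = rng or random.Random()
    up, down = seed_upvotes, 0
    for _ in range(voters):
        p_up = (up + 1) / (up + down + 2)   # Polya-urn-style copying of the crowd
        if rng.random() < p_up:
            up += 1
        else:
            down += 1
    return up - down

rng = random.Random(1)
organic = [simulate(0, rng=rng) for _ in range(200)]   # no manipulation
gamed = [simulate(5, rng=rng) for _ in range(200)]     # 5 coordinated early upvotes
print("mean final score, organic:", sum(organic) / len(organic))
print("mean final score, gamed:  ", sum(gamed) / len(gamed))
```

Under these assumptions the seeded posts end up with a far higher average score, even though the manipulation amounts to only five votes out of more than five hundred.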
Computer scientist Cong Yu (Google Research)
described the use of semantic web annotations such
as the Schema.org ClaimReview markup to help surface fact-checks of popular claims on search engines
(Kosslyn and Yu 2017). He argued that artificial intelligence can be a powerful tool to promote quality
and trust in information. However, Yu recognized
that users play a role in the spread of misinformation, which may be the most challenging problem to
address.
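For readers unfamiliar with the markup, the snippet below sketches what a ClaimReview annotation looks like when embedded as JSON-LD in a fact-checking article; the property names follow the public Schema.org vocabulary, while the claim, URLs, and rating values are hypothetical placeholders.

```python
import json

# Hypothetical ClaimReview annotation of the kind used to surface
# fact-checks in search results; only the property names are real.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factchecker.example/reviews/12345",
    "datePublished": "2018-02-01",
    "claimReviewed": "Example claim circulating on social media",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Organization", "name": "Example outlet"},
        "datePublished": "2018-01-28",
    },
    "author": {"@type": "Organization", "name": "Example Fact-Check Org"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}

# Serialized as a JSON-LD block inside the article's HTML, this lets a
# search engine attach the "False" verdict to matching claims.
print(json.dumps(claim_review, indent=2))
```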
Communication scientist Melissa Zimdars (Merrimack College) reported on efforts to collectively categorize news sources in an open-source fashion (opensources.co). She also recounted how, ironically, her research became the target of a fake news campaign.
Workshop participants responded to an online
wiki survey and identified five major research challenges related to the question of what countermeasures are most feasible and effective and who can best deliver them (see table 1).
Conclusions
Unfortunately, AI is increasingly being exploited to
manipulate public opinion. For example, sophisti-
cated social bots can autonomously interact with
social media users in an attempt to influence them or
expose them to misinformation. Advances in
the machine generation of realistic video and voice have
already been identified as the likely next-generation
weapons in the digital misinformation arsenal (Suwajanakorn, Seitz, and Kemelmacher-Shlizerman 2017;
Thies et al. 2016).
The good news is that AI can also play an important role in defending us from attacks against the
integrity of the information space. In such an arms
race, advances in supervised and unsupervised
machine learning, representation learning, and natural language processing will be needed to help meet
the above challenges.
Another area where more AI research is needed is
the study of algorithmic bias. Social media platforms
employ sophisticated ranking, filtering, and recommendation methods that are increasingly powered
by cutting-edge AI algorithms. Unfortunately, these
algorithms are also vulnerable to manipulation due
to their focus on engagement and popularity, leading
to echo chambers and selective exposure that amplify our own cognitive and social biases. A significant
challenge will be to improve algorithms to take into
account signals of trustworthiness and reliability.
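As a purely illustrative sketch of what such a change could look like (not any platform's actual ranking function), one might blend an engagement signal with a source-reliability signal instead of optimizing engagement alone; the weights and signals below are assumptions.

```python
# Hypothetical re-ranking rule: blend engagement with source trust.
def rank_score(engagement, source_trust, alpha=0.6):
    """Both inputs normalized to [0, 1]; alpha weights trustworthiness."""
    return alpha * source_trust + (1 - alpha) * engagement

posts = [
    {"id": "viral-unreliable", "engagement": 0.9, "source_trust": 0.2},
    {"id": "modest-reliable", "engagement": 0.5, "source_trust": 0.9},
]
for p in sorted(posts, key=lambda p: rank_score(p["engagement"], p["source_trust"]), reverse=True):
    print(p["id"], round(rank_score(p["engagement"], p["source_trust"]), 2))
```

With these placeholder weights, the reliable but less engaging post outranks the viral one from an untrusted source, which is the behavioral shift this research challenge points toward.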
Finally, reporters and fact-checking organizations
are in great need of tools to help them manage the