Countering misinformation while protecting freedom of speech will require collaboration between
stakeholders across the tech industry, journalism,
and academia. To foster such collaboration, the
Workshop on Digital Misinformation was held in
conjunction with the International Conference on
Web and Social Media (ICWSM) in Montreal, on May
15, 2017. The meeting brought together more than
100 stakeholders from academe, media, and tech
companies to discuss research challenges toward a
trustworthy web.
The workshop opened with a showcase of tools for
studying digital misinformation, developed by the
Center for Complex Networks and Systems Research
and the IU Network Science Institute. These are part
of a growing suite of publicly available tools called
the Observatory on Social Media (Davis, Ciampaglia,
et al. 2016). They include Hoaxy, a system for tracking the online spread of competing claims and fact-checks (Shao et al. 2017) (figure 1); Botometer, an AI
system for detecting social bots on Twitter (Davis,
Varol, et al. 2016; Varol, Ferrara, Davis, et al. 2017)
(figure 2); and unsupervised graph-mining methods for automatically estimating the factual accuracy of claims using the DBpedia knowledge graph (Shiralkar et al. 2017) (figure 3). We then
presented empirical data showing that, on social
media, low-quality information often spreads more
virally than high-quality information (Qiu et al.
2017). Factors that can explain this finding include
the structural segregation and polarization in online
social networks (Conover et al. 2011). The resulting
echo chambers are exacerbated by algorithms that
personalize online experiences and hinder exposure
to ideologically diverse sources of information
(Nikolov et al. 2015). Other factors include information overload (Qiu et al. 2017), limited attention
(Weng et al. 2012), popularity bias (Nematzadeh et
al. 2017), and manipulation through social bots (Ferrara et al. 2016). One of the key questions raised during the discussion was how to empirically define the quality of information in modern social media.
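For readers who wish to experiment with these tools, the sketch below shows one way to query Botometer for an account's bot score using the botometer Python client. The credentials and the screen name are hypothetical placeholders, and parameter and field names may differ across versions of the public API, so treat this as a minimal sketch rather than a definitive recipe.

# Minimal sketch: querying Botometer for a bot score via its public API.
# Assumes the `botometer` Python client (pip install botometer); all
# credentials and the screen name below are hypothetical placeholders,
# and key names may differ across API versions.
import botometer

twitter_app_auth = {
    "consumer_key": "YOUR_TWITTER_CONSUMER_KEY",
    "consumer_secret": "YOUR_TWITTER_CONSUMER_SECRET",
    "access_token": "YOUR_TWITTER_ACCESS_TOKEN",
    "access_token_secret": "YOUR_TWITTER_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,            # back off when the API rate limit is hit
    rapidapi_key="YOUR_RAPIDAPI_KEY",  # placeholder API key
    **twitter_app_auth,
)

# Score a single account; the response includes bot-likelihood scores,
# such as the complete automation probability ("cap"), if present.
result = bom.check_account("@some_account")
print(result.get("cap"))

Hoaxy likewise exposes a public REST interface, so claim-tracking data can be retrieved in the same spirit with an ordinary HTTP client.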
A lightning talk session was opened by BuzzFeed
media editor Craig Silverman, who proposed a working definition of fake news as “fabricated news
intended to deceive with financial motive.” Silverman emphasized the global scope of the issue,
describing similar problems in Germany, Japan, Italy,
and Myanmar, though the social networks used to
spread such news vary. Journalists are now aware of
the scope of the problem and are reporting on fake
news stories — though not always well. Silverman
called for more collaboration between journalists and academic researchers in the study of misinformation. In support of this call to action, his team at BuzzFeed has developed a curated list of partisan news
sites that is openly shared (Silverman 2017).
Political scientist and communication scholar Leticia Bode (Georgetown University) reported on a number of findings about correcting misinformation on social media. Her work focuses on health and science communication, where the distinction between factual and opinion-based claims is clearer than in other domains such as political communication (Bode and Vraga 2015). Bode and her collaborators found that certain topics (for example, GMOs) are easier to correct than others (for example, vaccines and autism). She also found that “social” fact-checking is more effective when it links to credible sources. Based on these findings, Bode recommended that news organizations emphasize easily linked references and that corrections be made early and repeated. A new partnership model with social media platforms could satisfy these requirements, she concluded.
The perspective of a leading social media platform
was given by Áine Kerr, leader of global journalism
partnerships at Facebook. She started by sharing figures illustrating the scale at which Facebook’s newsfeed operates, with hundreds of millions of links shared weekly. Kerr noted that the quality of those links varies dramatically, and quoted Mark Zuckerberg’s call for amplifying the good effects of social
media and mitigating the bad. Facebook is pursuing
this goal with four approaches: (1) disrupting the financial incentives for fake news; (2) developing new products to curb the spread of fake news, such as allowing users or third-party fact-checkers to flag posted stories as untrue or unverified; (3) helping people make informed decisions by educating them on how to spot fake news; and (4) launching the
News Integrity Initiative, a partnership between
industry and nongovernmental organizations to promote media literacy (Mosseri 2016). Kerr noted that
Facebook regularly engages with the research community via collaborative programs and grants, but
acknowledged that there is growing demand from third-party researchers for data to tackle these problems. In its bid to meet the problem head-on, the company is continually refining its best
practices around data sharing. A lively discussion followed about the commitment of platforms to curbing the spread of misinformation. For example, it was
pointed out that more should be done to deal with
abuses that exploit social bots, fake accounts, Facebook pages, and “verified account” badges. It was
also suggested that an API for accessing public Facebook page data would be a great boon to the research
community.
Computer scientist Paul Resnick (University of
Michigan) argued that factual corrections are often
ineffective and slow: they rarely reach the people
originally influenced by the misinformation. Moreover, we are exposed to information filtered by socio-technical mechanisms that largely prioritize popularity over accuracy, such as search engines, upvotes,
and newsfeeds. To restore the balance in favor of
accuracy, Resnick called for the development of reputation-based filtering mechanisms, and reported on