suggested by previous speakers. He noted that while
spotting known rumors is easy after the fact, identifying false rumors in their early stages is difficult. He
proposed a four-part approach: (1) identifying emerging rumors early by mining search engine logs for
questions about a statement’s truthfulness; (2)
retrieving all the posts about a rumor; (3) analyzing
the impact of a rumor and its correction via visualization; and (4) predicting its future spread through
deep learning (Zhao, Resnick, and Mei 2015; Li et al.
2017; Qazvinian et al. 2011; Resnick et al. 2014).
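The first step of this pipeline, mining query logs for truth-seeking questions, can be sketched as a simple pattern matcher. This is a minimal illustration only; the patterns and log format below are assumptions, not the speaker's implementation.

```python
import re

# Hypothetical question patterns that signal a user is checking a
# statement's truthfulness (assumed for illustration).
TRUTH_SEEKING = [
    re.compile(r"^is it true (that )?(?P<claim>.+)", re.I),
    re.compile(r"^did (?P<claim>.+) really .+", re.I),
    re.compile(r"(?P<claim>.+) (hoax|rumor|debunked)\??$", re.I),
]

def extract_candidate_rumors(queries):
    """Scan a search-log sample and return claims users are questioning."""
    candidates = []
    for q in queries:
        for pattern in TRUTH_SEEKING:
            m = pattern.match(q.strip())
            if m:
                candidates.append(m.group("claim").rstrip("?").strip())
                break
    return candidates

log = [
    "is it true that the bridge collapsed",
    "weather tomorrow",
    "vaccine microchip hoax?",
]
print(extract_candidate_rumors(log))  # → ['the bridge collapsed', 'vaccine microchip']
```

A claim that suddenly attracts many such questions is a candidate emerging rumor, which the later pipeline stages would then track and model.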
Computer scientist Eni Mustafaraj (Wellesley Col-
lege) discussed an early example of fake news tactics,
namely, a Twitter bomb against Massachusetts senate
candidate Martha Coakley in 2010 (Mustafaraj and
Metaxas 2010). A group of Twitter bots used hashtags
targeted towards specific communities in a coordi-
nated campaign to spread negative information
about the candidate. Mustafaraj compared this attack
with recent ones that leverage fake Facebook accounts
to target specific Facebook groups and spread links to
fake news stories (Mustafaraj and Metaxas 2017). She
examined three motives for spreading such inaccu-
rate information: financial, political, and ideologi-
cal/cultural (including prejudices like sexism and
xenophobia). She proposed that social media plat-
forms should highlight provenance information
about sources to help users determine their intents
and trustworthiness. Finally, Mustafaraj urged plat-
forms to provide researchers with data about how
recipients of misinformation engage with it.
Information scientist Kate Starbird (University of
Washington) delineated different types of rumor in a
crisis situation, such as hoaxes, collective sense-mak-
ing, and conspiracy theories (Arif et al. 2016). She
categorized methods for detecting misinformation
based on linguistic, network, account, URL domain,
and crowdsourced features (Maddock et al. 2015;
Huang et al. 2015; Dailey and Starbird 2014).
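The feature categories Starbird listed can be made concrete with a small extractor. The field names, word lists, and example domain below are assumptions made for this sketch, not the features used in the cited systems.

```python
from urllib.parse import urlparse

# Hedging terms that often accompany unverified claims (illustrative list).
HEDGE_WORDS = {"reportedly", "allegedly", "unconfirmed", "rumor"}

def extract_features(post):
    """Return one feature per category: linguistic, network, account, URL domain."""
    text = post["text"].lower()
    words = text.split()
    domain = urlparse(post.get("url", "")).netloc
    return {
        # Linguistic: hedging language and emphatic punctuation.
        "hedge_count": sum(w.strip(".,!?") in HEDGE_WORDS for w in words),
        "exclamations": post["text"].count("!"),
        # Network: how widely the post is being reshared.
        "retweets": post.get("retweets", 0),
        # Account: very new accounts are a common automation signal.
        "account_age_days": post.get("account_age_days", 0),
        # URL domain: link points to a known low-credibility site (toy list).
        "low_cred_domain": int(domain in {"example-fakenews.com"}),
    }

post = {
    "text": "Reportedly the dam has burst!!",
    "url": "http://example-fakenews.com/story",
    "retweets": 512,
    "account_age_days": 3,
}
print(extract_features(post))
```

Feature vectors of this shape would then feed a standard classifier; crowdsourced signals, the fifth category, would arrive as additional labels rather than per-post features.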
Workshop participants responded to an online
wiki survey and identified five major research chal-
lenges related to the question of how to define and
detect misinformation (table 1).
Figure 3. An Example of Connections Identified by a Computational
Fact-Checking Algorithm to Verify a Claim about a Company and Its CEO.
A flow algorithm is used to identify relational paths in a
knowledge network. The width of an edge is proportional to the flow.
[Figure: a knowledge network whose nodes include Warren Buffett, Berkshire Hathaway, Bill Gates, Omaha, Nebraska, The Giving Pledge, and Dairy Queen, linked by labeled relations such as keyPerson, subsidiary, foundedBy, and parentCompanyOf.]
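The path-finding idea behind figure 3 can be sketched with a toy knowledge network and a breadth-first search for the shortest relational path linking the two entities in a claim. The triples below are assumptions for illustration; a fuller flow-based implementation would also score and weight the paths it finds rather than returning the first one.

```python
from collections import deque

# Toy triples echoing figure 3 (assumed for illustration).
TRIPLES = [
    ("Warren Buffett", "keyPerson", "Berkshire Hathaway"),
    ("Berkshire Hathaway", "subsidiary", "Dairy Queen"),
    ("Dairy Queen", "parentCompanyOf", "Orange Julius"),
    ("Warren Buffett", "birthplaceOf", "Omaha, Nebraska"),
    ("Bill Gates", "boardOf", "Berkshire Hathaway"),
]

def build_graph(triples):
    """Index triples as an undirected adjacency list, keeping relation labels."""
    graph = {}
    for s, rel, o in triples:
        graph.setdefault(s, []).append((rel, o))
        graph.setdefault(o, []).append((rel, s))
    return graph

def find_path(graph, start, goal):
    """Breadth-first search for the shortest relational path between entities."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None  # no connecting path: the claim finds no support

graph = build_graph(TRIPLES)
print(find_path(graph, "Warren Buffett", "Orange Julius"))
```

A short, well-supported path between the claim's subject and object counts as evidence for the claim; the absence of any path, or only long and tenuous ones, counts against it.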