Mei 2015; Ma et al. 2016). More theoretical work is
necessary to understand why people spread rumors,
as this is less well understood than the incentives for
creating rumors, particularly the financial ones.
Several statistical approaches for detecting misinformation require labeled instances to train machine
learning models. Computer scientist James Caverlee
(Texas A&M University) reported on the critical issue
of producing such ground truth. He started by
describing intuitive identification heuristics, such as the rule that any user who friends a known fake account themselves deserves scrutiny. He argued that, while powerful, these simple heuristics can address only a fraction of the problem.
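As an illustration, that friending heuristic amounts to a single pass over a follow graph. The following is a minimal sketch under assumed inputs (a set of known fake accounts and a list of directed friend edges), not Caverlee's actual method:

```python
# Minimal sketch of the guilt-by-association heuristic (hypothetical data):
# users who friend a known fake account are flagged for further scrutiny.

known_fakes = {"fake_account_1", "fake_account_2"}

# Directed "friend" edges: (follower, followed).
edges = [
    ("alice", "fake_account_1"),
    ("bob", "carol"),
    ("dave", "fake_account_2"),
]

suspicious = {follower for follower, followed in edges if followed in known_fakes}
print(suspicious)  # {'alice', 'dave'} (set order is arbitrary)
```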
When it comes to more complex claims, fact-checkers are key to identifying the ground truth. However, they cannot cope with the sheer volume and variety of misinformation. Caverlee therefore proposed exploiting aggregated signals to infer the reliability of a given piece of content; for example, a high reply-to-retweet ratio can flag controversy (Alfifi and Caverlee 2017; Kaghazgaran, Caverlee, and Alfifi 2017). A minimal version of this signal is sketched after this paragraph. Caverlee also warned that crowdsourced information can be easily manipulated; it is easy, for instance, to recruit workers for astroturfing. He noted that the Chinese government often fabricates social media posts in an attempt to prevent the public from discussing civic issues.
Caverlee called for more research on the problem of identifying the intent behind social media posts. Progress in this area could lead to tools for distinguishing organic conversations from covert coordinated campaigns on social media (Varol, Ferrara, Menczer, et al. 2017).
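As noted above, the reply-to-retweet signal can be made concrete with a short sketch. The field names and the threshold below are illustrative assumptions, not the published method of Alfifi and Caverlee (2017):

```python
# Minimal sketch: flag posts whose reply-to-retweet ratio is unusually high,
# on the assumption that many replies relative to retweets signal controversy.

def controversy_score(n_replies: int, n_retweets: int) -> float:
    """Reply-to-retweet ratio; +1 in the denominator avoids division by zero."""
    return n_replies / (n_retweets + 1)

def flag_controversial(tweets, threshold=2.0):
    """Return tweets whose score exceeds the (assumed) threshold."""
    return [t for t in tweets if controversy_score(t["replies"], t["retweets"]) > threshold]

tweets = [
    {"id": 1, "replies": 120, "retweets": 15},   # heavy pushback, little sharing
    {"id": 2, "replies": 8,   "retweets": 400},  # widely shared, few replies
]
print([t["id"] for t in flag_controversial(tweets)])  # [1]
```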
Computer scientist Qiaozhu Mei (University of
Michigan) highlighted how hacked accounts can
bypass reputation systems, challenging solutions
[Figure 2. Screenshot of the Botometer system, a supervised learning framework that calculates the likelihood that a given Twitter account is controlled by software, that is, a social bot (botometer.iuni.iu.edu).]
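The caption describes Botometer as a supervised classifier over account features. That general setup can be sketched as follows; the features, labels, and model here are illustrative assumptions, not Botometer's actual design:

```python
# Minimal sketch of supervised bot-likelihood scoring (illustrative features
# and labels only; this is not Botometer's actual feature set or model).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [tweets_per_day, follower/friend ratio, fraction of retweets].
X_train = np.array([
    [300.0, 0.01, 0.95],  # bot-like: high volume, few followers, mostly retweets
    [5.0,   1.20, 0.20],  # human-like
    [250.0, 0.05, 0.90],
    [2.0,   0.80, 0.10],
])
y_train = np.array([1, 0, 1, 0])  # 1 = bot, 0 = human (hypothetical labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

new_account = np.array([[280.0, 0.02, 0.92]])
print(clf.predict_proba(new_account)[0, 1])  # estimated likelihood of being a bot
```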