volume of digital misinformation at scale. Semantic
web technologies are reaching maturity and can be
leveraged toward the development of computational
fact-checking tools. Automatic extraction of semantic data from text provides large knowledge bases
which, in combination with inference and network
mining techniques, could accelerate the tasks of verification and news gathering, for example by connecting claims with relevant contextual information
and previous analyses (see figure 3). Special emphasis
should be placed upon interpretable results, which
can help both journalists and the general public make
sense of the information to which they are exposed.
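As a minimal illustration of this idea, the sketch below checks a claim against a toy triple store by searching for a short connecting path, in the spirit of knowledge-network fact checking. The triples, entity names, and hop budget are all illustrative assumptions, not part of any deployed system; real pipelines would extract triples automatically and use far richer inference.

```python
from collections import deque

# Toy knowledge base of (subject, predicate, object) triples, as might be
# extracted automatically from text. All entities here are illustrative.
TRIPLES = {
    ("barack_obama", "member_of", "democratic_party"),
    ("democratic_party", "type", "political_party"),
    ("barack_obama", "born_in", "honolulu"),
    ("honolulu", "located_in", "hawaii"),
    ("hawaii", "part_of", "united_states"),
}

def neighbors(entity):
    """Entities directly linked to `entity` in either direction."""
    linked = set()
    for subj, _, obj in TRIPLES:
        if subj == entity:
            linked.add(obj)
        elif obj == entity:
            linked.add(subj)
    return linked

def support_path(subject, obj, max_hops=3):
    """Breadth-first search for a short path linking subject to object.

    A short path is weak evidence that a claim relating the two entities
    is plausible; no path within the hop budget suggests the claim lacks
    support in the knowledge base.
    """
    frontier = deque([(subject, [subject])])
    seen = {subject}
    while frontier:
        node, path = frontier.popleft()
        if node == obj:
            return path
        if len(path) > max_hops:
            continue
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

# Claim: "Barack Obama was born in the United States."
print(support_path("barack_obama", "united_states"))
```

A path such as `barack_obama → honolulu → hawaii → united_states` would then be surfaced as interpretable supporting context, rather than a bare yes/no verdict.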
The ultimate goal of the workshop was to bootstrap a long-lasting initiative among various sectors
(namely industry, academia, journalism, and civil
society) with the aim of building a trustworthy web.
Follow-up discussion and further collaborative activity are currently under way through online community spaces. We believe that support from both private
foundations and federal agencies will be a key ingredient for the success of future collaborative activities,
the scope of which must include research, education,
We are grateful to Alessandro Flammini, who co-
organized the workshop. We acknowledge support
from the Network Science Institute (iuni.iu.edu) and
the School of Informatics, Computing, and Engineering (sice.indiana.edu) at Indiana University.