suading drivers not to text while driving, or an online safety expert persuading users of social media sites not to reveal too much personal information online. These examples all involve the persuader finding the right arguments to use with respect to the persuadee's knowledge, priorities, and biases. Using artificial argumentation to build automated persuaders raises several interesting research challenges that the community is starting to tackle. For example, the Framework for Computational Persuasion project18 is developing a computational model of argument for behavior change in health care. This kind of application calls for the development of the rhetorical and dialogical layers of figure 1.
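To give a flavor of the argument-selection problem just described, the following minimal Python sketch picks, from a set of candidate arguments, the one whose topic profile best matches a simple model of the persuadee's concerns. Everything in it (the arguments, topic weights, and user model) is an illustrative assumption, not the project's actual approach, which is far richer.

```python
# A minimal sketch of persuasive argument selection: score each candidate
# argument by how well its topics match the persuadee's assumed concerns,
# then present the best-scoring one. All values here are hypothetical.

candidate_arguments = {
    "Texting while driving multiplies your crash risk": {"safety": 0.9, "cost": 0.1},
    "One texting fine can cost more than a year of phone service": {"safety": 0.2, "cost": 0.8},
}

# Assumed model of the persuadee: how much this user cares about each topic.
persuadee_concerns = {"safety": 0.3, "cost": 0.7}

def best_argument(arguments, concerns):
    """Return the argument whose topic profile best matches the concerns."""
    def score(topics):
        return sum(w * concerns.get(topic, 0.0) for topic, w in topics.items())
    return max(arguments, key=lambda arg: score(arguments[arg]))

print(best_argument(candidate_arguments, persuadee_concerns))
# For this cost-sensitive persuadee, the fine-based argument wins (0.62 vs. 0.34).
```

A real persuasion dialogue would, in addition, track the persuadee's replies and update the user model turn by turn, which is where the dialogical and rhetorical layers come in.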
In the longer term, there are exciting possibilities for developing artificial agents able to use argumentation as a general pattern of interaction with other agents, just as humans argue with other humans to achieve collectively useful behaviors. Consider a situation where heterogeneous robots need to work together to survey a scene such as a large building on fire. As in a team of firefighters, each robot will have direct perception of only its local situation and will need to exchange information and coordinate actions with the other robots in a dynamic environment where, overall, information will always be incomplete and inconsistent and, consequently, goals and action plans may need to be revised at any moment. The different capabilities of the team members will also have to be taken into account. These features call for high-level arguing capabilities, applicable in a variety of contexts among heterogeneous agents whose only common property might be the capability to argue itself. In this sense, artificial argumentation promises, in the long term, to provide a sort of universal social glue for linking together, in a plug-and-play and cooperative manner, robots and any other kind of intelligent agent.
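To make the firefighting scenario more concrete, here is a minimal Python sketch of the kind of joint reasoning it calls for: arguments contributed by different robots are pooled into a single Dung-style abstract argumentation framework, and its grounded extension, the cautious core of collectively acceptable arguments, is computed. The robots, arguments, and attack relation below are hypothetical illustrations.

```python
# A minimal sketch: compute the grounded extension of an abstract
# argumentation framework by repeatedly accepting every argument all of
# whose attackers have already been defeated by accepted arguments.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of the framework (arguments, attacks)."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:  # every attacker is already out
                accepted.add(a)
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# Illustrative arguments from two robots surveying a burning building:
# A: "the east stairwell is passable" (robot 1)
# B: "the east stairwell is blocked by smoke" (robot 2)
# C: "robot 2's smoke sensor is known to be faulty" (maintenance log)
arguments = {"A", "B", "C"}
attacks = {("B", "A"), ("C", "B")}  # B attacks A; C attacks B

print(grounded_extension(arguments, attacks))  # prints {'A', 'C'} (set order may vary)
```

Under grounded semantics, the unattacked sensor-fault argument C defeats B and thereby reinstates A, so the team can provisionally treat the east stairwell as passable until new observations add further attacks to the framework.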
This work has been partially supported by EPSRC grants EP/N008294/1 and EP/N014871/1, by the EU H2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 690974 for the project MIREL: MIning and REasoning with Legal texts, and by funds provided by the Institute for Computer Science and Engineering, Universidad Nacional del Sur, Argentina.
1. “Humans argue” is a truism. Either you already believe it
or you would need to argue against it.
10. debatepedia.idebate.org/en/index.php/Debate:_Random_alcohol_breath_tests_for_drivers
11. Argument A2 has to be read as [Random breath tests to]
other drivers [can hardly be called an invasion of privacy or
an investigation without due cause as they are] a major liability to the safety and lives of other drivers.
12. The number of annotators involved should be greater than 1 in order to allow the calculation of this measure and, as a consequence, to produce a reliable resource.
13. lidia.cs.uns.edu.ar/delp_client/