fers from the base narrative. Additive counternarratives contain additional events not in the base narrative, but do not modify any of its events. For example, let A's base narrative be:
“I asked each of my fellow committee members to
express their opinion.” An additive counternarrative
would be: “A expressed his or her own opinion first.
Then he or she asked the other committee members
to express their own opinions.” (The fact that A
expressed his or her opinion first is significant if A
has social influence over the other members of the
committee.) Interpretative counternarratives do not
differ from the base narrative in their sequence of
events, but give different interpretations of the
events (for example, in terms of motivations and
emotions). For example, let A's base narrative be:
“Everyone was asked to publicly voice their opinion,
so as to give every suggestion a fair chance.” An interpretative counternarrative might be: “Because all
opinions were publicly expressed, no one supported
E’s opinion; and because E has low social capital, E
felt pressured to support the majority opinion.”
Transformative counternarratives differ factually
from the base narrative, implicitly asserting that the
base narrative contains falsehoods. For example, if A’s
base narrative contains the statement “I expressed
my opinion last,” a transformative counternarrative
could instead assert that “A expressed his or her opinion first.”
There is a close connection between social awareness and counternarrative intelligence. For example,
an agent could sincerely believe a narrative, but identify it as a counternarrative to other agents’ narratives, and deliberate on whether it would be socially
advisable to express it, or how to express it so as to
minimize social damage. This situation resembles ones
in which agents that are not rebels reason that
their behavior may appear rebellious to others.
Existing work that can provide rebel agents with
various mechanisms of counternarrative intelligence
includes Holmes and Winston’s (2016) story-enabled
hypothetical reasoning, in which narrative variants
are generated based on varied alignments, and Li et
al.’s (2014b) use of different communicative goals to
provide variation in narrative discourse and emotional content.
We argued that it is beneficial for certain AI agents to
be able to rebel for positive, defensible reasons in a
variety of situations, and speculated that AI may never
become fully socially intelligent without noncompliance
abilities. We presented an AI rebellion framework
and discussed sociocognitive dimensions
pertaining to it: rebellion awareness and counternarrative
intelligence. The framework is intended to
inspire, guide, and provide terminology for (1) the
development and study of rebel agents that serve
positive purposes, (2) systematic discussion of the
ethics of AI rebellion (for, although we argue that AI
rebellion can be positive, we recognize that it is not
necessarily so), and (3) positive reframing of the AI
noncompliance narrative within the research community
and popular culture.
We thank the editors and reviewers, our coauthors of
previous work on rebel agents, and all colleagues
who have shown interest in the topic and offered
their feedback. The Personal Assistant scenario is
based on a conversation with Jonathan Gratch. This
research was performed while Alexandra Coman held
an NRC Research Associateship award at the Naval
Research Laboratory.
1. Taken from What Is a Counternarrative?, www.reference.
Abbott, H. P. 2008. The Cambridge Introduction to Narrative.
Cambridge, UK: Cambridge University Press.
Agravante, D. J.; Cherubini, A.; Bussy, A.; and Kheddar, A.
2013. Human-Humanoid Joint Haptic Table Carrying Task
with Height Stabilization Using Vision. In Proceedings of the
2013 IEEE/RSJ International Conference on Intelligent Robots
and Systems, 4609–14. Piscataway, NJ: Institute of Electrical
and Electronics Engineers.
Apker, T.; Johnson, B.; and Humphrey, L. 2016. LTL Templates for Play-Calling Supervisory Control. In Proceedings of
the 54th AIAA Science and Technology Forum and Exposition. Red
Hook, NY: Curran Associates, Inc.
Asch, S. E. 1956. Studies of Independence and Conformity:
1. A Minority of One Against a Unanimous Majority.
Psychological Monographs: General and Applied 70(9): 1–70.
Borenstein, J., and Arkin, R. 2016. Robotic Nudges: The
Ethics of Engineering a More Socially Just Human Being.
Science and Engineering Ethics 22(1): 31–46.
Briggs, G.; McConnell, I.; and Scheutz, M. 2015. When
Robots Object: Evidence for the Utility of Verbal, but Not
Necessarily Spoken Protest. In Social Robotics: Seventh International Conference. Lecture Notes in Artificial Intelligence,
83–92. Berlin: Springer.
Briggs, G., and Scheutz, M. 2015. “Sorry, I Can’t Do That”:
Developing Mechanisms to Appropriately Reject Directives
in Human-Robot Interactions. In Artificial Intelligence for
Human-Robot Interaction: Papers from the AAAI Fall Symposium, edited by B. Hayes and M. Gombolay. Technical
Report FS-15-01. Palo Alto, CA: AAAI Press.
Cialdini, R. B., and Goldstein, N. J. 2004. Social Influence:
Compliance and Conformity. Annual Review of Psychology
55: 591–621.
Coman, A., and Aha, D. W. 2017. Cognitive Support for
Rebel Agents: Social Awareness and Counternarrative Intelligence. In Proceedings of the Fifth Conference on Advances in
Cognitive Systems. Palo Alto, CA: Cognitive Systems Foundation.