ty of Nebraska) addressed this issue using Stackelberg
game models. The papers of Arjuna Flenner (NAVAIR,
China Lake) and Deanna Needell et al. (UCLA) discussed using graphs to improve classification and
using compressed sensing techniques to obtain theoretical results on single-bit classification, respectively.
In a military context, ethics equates to an operational commander taking responsibility, which
requires formal understanding of the micro- and
macro-level aspects of autonomous vehicle tasking.
A paper given by Don Brutzman et al. (Naval Postgraduate School) addressed rational behavior using
human-robot ethical constraints and a paper given
by Bonnie Johnson (Naval Postgraduate School)
addressed a system of systems approach to battle
Traditional data sciences, including statistics,
numerical analysis, machine learning, data mining,
business intelligence, and artificial intelligence, have
evolved into big data analytics and deep models. We
discussed Amazon Web Services (AWS), the big data platform of the industry's cloud computing leader, which includes the
Snowball ingestion tool, S3 storage, the Glue Data Catalog, Kinesis analytics, and managed Hadoop/Spark.
We also discussed industrial infrastructure tools such
as GPUs and IoT, and the analytic engines of Apache
tools, Caffe, Theano, TensorFlow, Keras, and Torch.
The presentations by Wallace Bow et al. (Sandia
National Laboratories), Philip Chan et al. (University
of Maryland Baltimore County), and
Krishnendu Ghosh et al. (Miami University) and the
invited talk by Roshan Punnoose (Enlighten IT Consulting) explored these ideas in more detail.
Deep models, ML, and AI will become the
lifeblood of military applications. But with opportunity can come risk. Can AI be trusted? AI can be
weaponized and data can be poisoned. Nevertheless,
opportunities abound if we foster broader communities and collaboration. Inscrutability is
inevitable as systems of systems grow more complex
or when the human intelligence being modeled
is not easily understood. Risks also
arise when autonomous systems are turned against humans, as with
weapons of mass destruction. A paper given by
Scott Humr (Marine Corps University) discussed
how autonomous outcomes shape the future data
environment to build trust in artificial intelligence
and learning applications.
Ying Zhao, Arjuna Flenner, and Tony Kendall
served as cochairs of this symposium. The papers of
the symposium were published as AAAI Press Technical Report FS17-03.
Human-Agent Groups: Studies,
Algorithms, and Challenges
As robots and artificial agents become more prominent
in human lives, they are also increasingly
becoming parts of groups and teams. Group interaction
between humans and agents is present in a diverse
set of AI applications — for instance, a digital assistant
for the home, a social robot operating in a mall,
or a group of robots and virtual agents supporting
first responders. Despite the growing avenues for
human-agent group interaction, however, a majority
of the research on interaction between humans and
artificial agents still focuses on one human interacting
with one agent.
Research on group interactions between humans
and artificial agents (both virtual agents and physical
robots) is important, but often more challenging
than studying dyadic interactions. It requires gathering groups of humans and artificial agents, and
addressing additional factors that contribute to successful group interaction (such as intragroup dynamics). Further, while several research domains do tackle the challenges associated with group interaction,
the focus on human-agent groups has been limited.
For instance, research on multiagent systems within
AI has primarily focused on teams of artificial agents,
while research in social psychology and human factors engineering has primarily focused on human
teams. The goal of this symposium on human-agent
groups was to bring together scholars from a variety
of perspectives to discuss the state of the art and novel challenges in groups of humans and AI.
The symposium involved scholars from many
research fields, including autonomous agents and
multiagent systems, knowledge representation, conversational agents, decision support, human-in-the-loop planning, robotics, human-robot interaction,
social networks, social psychology, design, and science policy. It featured a set of six invited talks and
nine presentations from authors of contributed
papers, including empirical studies, novel algorithmic challenges, and potential solutions for human-agent groups.
Talks and discussion on empirical studies of
human teams and human-agent groups provided
foundational insights for modeling and representing
groups from an artificial intelligence perspective. For
instance, humans associate with multiple groups
(related to work, family, nationality), and group
membership drives their behavior and cooperation
within the group. Although an artificial agent engaging in group interactions could benefit from the ability to represent and identify such flexible and evolving group memberships, that ability is currently missing from
classical AI models of groups. Brian Lickel (
University of Massachusetts, Amherst) pointed out that
humans have been successfully interacting with
groups of animals, which are nonhuman agents with
different physical and cognitive capabilities — and
that insights from these group interactions might be
helpful for informing the research on human-agent
groups. Yuichiro Yoshikawa (Osaka University) pre-