design; the transformation of fashion recommendation systems; identifying key needs of experts who
use machine learning; explorations into the psychology of end-user experiences with systems that use (or
don’t use) machine learning; and issues of privacy.
Themes and topics included communication and
collaboration; automation, agency, and control; and
bias, trust, and power.
We addressed how to develop tools that support
communication and collaboration between system,
interaction, product, and service designers. How
might we support a more productive dialogue between
those who apply machine-learning techniques, and
those who understand the implications of the choices that developers and designers make in the design
of these systems? We looked at getting beyond the
black box — enabling better experiments in model
training, and in tending and pruning data.
Reciprocal knowledge sharing will move both areas
forward and will enable us to create more trusted and
trustworthy user experiences. Bringing in relevant and
inclusive case studies that reflect a range of diverse
use cases is one way to formulate better designs.
Spurred in part by the panel on autonomous vehicles, we discussed the difficulty of designing for complex ecosystems that are multidevice, multiservice,
and interconnected (or sometimes disconnected).
These ecosystems each utilize their own forms of learning and predictive modeling, making for considerable design and
user-experience complexity, and they need to work
across technical, physical, and social layers.
We also tackled more philosophical and political
issues. On the second day of the symposium, we discussed system transparency, with a call for clear
provenance models that make explicit the potential
biases in machine-learning data sets, sources, and
interactions. The recurring call was to provide
multiple points of view to mitigate issues of
bias, and to make bias itself an explicit topic of investigation.
Trust and power were key issues closing the symposium. What dialogue should systems have with
their users, and what does it mean for systems to be
personable, to have character, and to be socially
responsible? The symposium ended with a pledge to
craft a summary monograph that will be published to
augment the publications in the AAAI 2017 Spring
Symposium Technical Report.
The Designing the User Experience of Machine
Learning Systems symposium was organized by Mike
Kuniavsky, Elizabeth Churchill, and Molly Wright
Steenson. This report was written by Elizabeth
Churchill and Molly Wright Steenson. The papers
presented at the symposium were published as AAAI
Technical Report SS-17-04 in the AAAI Digital Library
and included in The 2017 AAAI Spring Symposium
Series: Technical Reports SS-17-01 – SS-17-08.
Interactive Multisensory Object
Perception for Embodied Agents
Learning to perceive and reason about objects in
terms of multiple sensory modalities remains a
long-standing challenge in robotics. Evidence from the
fields of psychology and cognitive science has
demonstrated that humans rely on multiple sensory
modalities (for example, audio, haptic, and tactile) in a
broad variety of contexts ranging from language
learning to learning manipulation skills. Nevertheless,
most object representations used by robots today
rely solely on visual input due to the difficulty of
robotic interaction. Relying on visual input alone does not
allow robots to learn or reason about nonvisual object
properties such as weight and texture. The goal of the
symposium was to investigate how multisensory object
representations can be learned and used by robots
through interaction with their environment.
The symposium brought together researchers from
a variety of fields: machine learning, developmental and cognitive robotics, assistive robotics,
robotic manipulation and control, and neuroscience.
The papers accepted to the symposium spanned a
diverse set of problems and domains in which robots
interact with the environment and utilize visual and
nonvisual object representations. The research
showed that multisensory perception can allow robots
to learn a variety of skills and tasks and that such perception can complement computer vision techniques
in situations where vision alone is insufficient.
Several speakers gave invited talks. Alexander
Stoytchev discussed how exploratory behaviors coupled with multisensory perception enable autonomous mental development in robots. Byron Boots
presented machine-learning models for state estimation and filtering in high-dimensional spaces. Charlie
Kemp highlighted the practical benefits of multisensory perception in the domain of assistive robotics.
Oliver Brock proposed a design pattern for using multimodal perception in the context of learning manipulation skills. Katherine Kuchenbecker presented
methods that enabled robots to learn haptic properties of objects. Jivko Sinapov highlighted the importance of using multisensory perception when teaching robots language. The symposium also featured
two talks from neuroscientists Allison Yamanashi
Leib and Moqian Tian.
The symposium attendees came together to address
a major question: how do we collect
large data sets from robots exploring the world with
multisensory inputs, and what algorithms can we use
to learn and act on these data? This question was broken down into three main themes: (1) representations of multimodal robot knowledge; (2) learning for
robot perception; and (3) the benefits of multisensory information and how to collect and share data
within the community. Specific challenges within
these topics include issues such as different sensors