University) discussed artificial pain in robots as the
basis for empathy, morality, and ethics in a developmental process of consciousness.
Consciousness does not arise in isolation, and
several talks addressed the relational aspects of consciousness and their connections with theory of mind,
attention, and intentionality. Paul Bello (NRL) discussed the relationships between intentional actions
and consciousness, and Susmit Jha (SRI International) analyzed the role of shared intentionality as
a critical component in conscious AI systems. The
close connection between logical AI and consciousness
dates back to a seminal 1995 paper by John McCarthy.
Selmer Bringsjord and Naveen Sundar Govindarajulu
(Rensselaer Polytechnic Institute) proposed an axiomatic theory of cognitive consciousness that may
lead to new reasoning capabilities for machines,
along with a new measure of cognitive consciousness.
A theme that spurred lively debate during the symposium was the ethical concerns surrounding conscious
AI systems operating in society. Among the many talks,
John Sullins (Sonoma State University) and John
Murray (San José State University) discussed a system
architecture for artificial wisdom in conscious AI systems. The principal argument during the plenary discussion was that the main obstacle to progress in this
field is the lack of significant funding, in part because
of cultural factors in certain countries and in part
because of the lack of clear cases for monetizing
the technologies related to conscious AI systems.
Antonio Chella (University of Palermo), David
Gamez (Middlesex University), Patrick Lincoln (SRI
International), Riccardo Manzotti (IULM University),
and Jonathan Pfautz (DARPA) served as cochairs of
this symposium. The papers of the symposium were
published as CEUR Workshop Proceedings, Volume 2287.
Antonio Chella and David Gamez wrote this report.
Verification of Neural Networks
The Verification of Neural Networks AAAI symposium was held at Stanford University, Stanford,
California, March 25 to 27, 2019. The symposium
brought together researchers interested in methods
and tools aimed at providing guarantees (formal or
otherwise) about the behaviors of neural networks
and systems built from them.
Methods based on machine learning are increasingly being deployed for a wide range of problems,
including recommender systems, machine vision,
and autonomous driving. While machine learning has
made significant contributions to such applications,
concerns remain about the lack of methods and tools
to provide formal guarantees about the behaviors of
the resulting systems.
In particular, for data-driven methods to be usable
in safety-critical applications, including autonomous
systems, robotics, cybersecurity, and cyber-physical
systems, it is essential that the behaviors generated
by neural networks are well understood and can be
predicted at design time. In the case of systems that
are learning at run time, it is desirable that any change
to the underlying system respect a given safety envelope
for the system.
Although the literature on verification of traditionally designed systems is extensive and the resulting
tools have been successful, until recently there had been
comparatively little effort devoted to verifying neural networks. The Verification of Neural Networks symposium brought
together researchers working on a range of techniques for the verification of neural networks, from
formal methods to optimization and testing. One
challenge in this emerging research area is that results
are presently being published in several research
communities, including formal verification, security
and privacy, systems, and AI. The symposium served
as a venue for these various communities to interact
and build bridges to form a cross-cutting community interested in the verification and validation of
systems based on machine learning.
A major theme of the symposium was a focus on
specific techniques and tools that have been recently
developed for verifying neural networks. Included
in this theme were two invited talks by Suman Jana
(Columbia University) and Krishnamurthy Dvijotham
(DeepMind) as well as two surveys of the field, one
by Changliu Liu (Carnegie Mellon University) and
one by Nina Narodytska (VMware). Specific tools
presented included Marabou, RecurJac, RNSVerify,
Sherlock, and Verisig. Mixed integer linear programming was mentioned as a common underlying
technique. Other common techniques included
abstraction and reachability analysis.
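As a concrete illustration of the mixed integer linear programming approach, the sketch below encodes a single ReLU neuron with the standard big-M constraints and asks a solver whether the output can exceed a threshold anywhere in an input box. It is a minimal sketch, not a reproduction of any tool presented at the symposium: the toy weights, bounds, and property are assumptions made up for the example, and the PuLP modeling library stands in for the solver back ends the actual tools use.

# Big-M MILP encoding of one ReLU neuron y = max(0, w.x + b); toy example only.
import pulp

# Illustrative neuron: z = 2*x1 - 3*x2 + 1, with inputs in [0, 1].
w, b = [2.0, -3.0], 1.0
lo, hi = 0.0, 1.0
z_lb = b + sum(min(wi * lo, wi * hi) for wi in w)   # interval bounds on z
z_ub = b + sum(max(wi * lo, wi * hi) for wi in w)

prob = pulp.LpProblem("relu_verification", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", lowBound=lo, upBound=hi) for i in range(2)]
z = pulp.LpVariable("z", lowBound=z_lb, upBound=z_ub)
y = pulp.LpVariable("y", lowBound=0)
a = pulp.LpVariable("a", cat="Binary")              # a = 1 iff the ReLU is active

prob += z == pulp.lpSum(wi * xi for wi, xi in zip(w, x)) + b
prob += y >= z                                      # y >= z
prob += y <= z - z_lb * (1 - a)                     # y <= z - l*(1 - a)
prob += y <= z_ub * a                               # y <= u*a

# Property check: can the output exceed 2.5 anywhere in the input box?
prob += y                                           # objective: maximize y
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("max output:", pulp.value(y),
      "-> property", "violated" if pulp.value(y) > 2.5 else "holds")

Real verifiers extend this idea layer by layer, tightening the interval bounds on each neuron so that the big-M constraints stay as strong as possible.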
Another theme was robustness. There has been a
lot of interest in adversarial robustness of neural networks in the machine learning community generally.
Several presentations focused on novel formal methods that can be used to provide guarantees about
robustness or to find counterexamples to robustness.
It was also stressed that robustness, and concepts derived from it, provide a basic property that is desirable for
many neural networks, particularly in vision applications. This is especially valuable in an area where it is
often difficult to obtain specifications for properties
that a given neural network should satisfy.
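As a small worked example of the kind of guarantee involved, the following sketch certifies local robustness for a linear classifier, where the worst-case loss of margin inside an L-infinity ball can be computed in closed form; certifying a full neural network requires the relaxation or search techniques presented at the symposium. The weights, input point, and epsilon are illustrative assumptions.

# Exact local robustness check for a linear classifier s_i(x) = W_i.x + b_i:
# the prediction at x0 is unchanged for all ||x - x0||_inf <= eps iff, for every
# other class j, the margin s_c(x0) - s_j(x0) exceeds eps * ||W_c - W_j||_1.
import numpy as np

W = np.array([[1.0, -2.0], [0.5, 0.5], [-1.0, 1.0]])  # 3 classes, 2 inputs
b = np.array([0.0, 0.1, -0.2])
x0, eps = np.array([1.0, 0.5]), 0.1

scores = W @ x0 + b
c = int(np.argmax(scores))                            # predicted class at x0
certified = all(
    scores[c] - scores[j] > eps * np.sum(np.abs(W[c] - W[j]))
    for j in range(len(b)) if j != c
)
print(f"class {c} is {'certified' if certified else 'not certified'} robust at eps={eps}")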
A final major theme was closed-loop systems.
Closed-loop systems are multicomponent, often
cyber-physical, systems that include a neural network
as one component. For such systems, it is important
to determine how the neural network fits into and
affects the system as a whole. Ideally, verification
techniques would guarantee the correct functioning
of the entire system, not just the neural network.
Various advances in this direction were presented.
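To make the closed-loop setting concrete, the sketch below rolls out a toy discrete-time double integrator controlled by a small fixed neural network and checks sampled trajectories against a safety envelope. This is falsification by simulation rather than verification, but it illustrates the property the presented techniques aim to guarantee exhaustively; the controller weights, dynamics, and safe set are assumptions made up for the example.

# Sampling-based safety check for a neural-network-in-the-loop system (toy example).
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)) * 0.5, np.zeros(4)      # 2-4-1 tanh controller
W2, b2 = rng.normal(size=(1, 4)) * 0.5, np.zeros(1)

def controller(state):
    return np.tanh(W2 @ np.tanh(W1 @ state + b1) + b2)[0]

def step(state, dt=0.1):
    pos, vel = state
    u = controller(state)                                  # neural network in the loop
    return np.array([pos + dt * vel, vel + dt * u])        # double-integrator dynamics

def safe(state):
    return abs(state[0]) <= 2.0 and abs(state[1]) <= 2.0   # safety envelope

violations = 0
for _ in range(1000):                                      # sampled initial states
    s = rng.uniform(-1.0, 1.0, size=2)
    for _ in range(50):                                    # finite horizon
        s = step(s)
        if not safe(s):
            violations += 1
            break
print("trajectories leaving the envelope:", violations)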
The final session of the symposium focused on establishing an effort to collect benchmarks for the community and make them available in a standard format
(similar to the SMT-LIB initiative). Armando Tacchella
(University of Genoa) and Clark Barrett (Stanford)
agreed to take a leadership role in this effort.
Clark Barrett and Alessio Lomuscio served as cochairs
of this symposium and wrote this report.