Teams that invested in building strong NLU and knowledge components had the lowest response error rates, leading to higher user ratings.
Different conversational goals call for different
response-generation techniques, suggesting that
retrieval, generative, and hybrid mechanisms may all
be required within the same system. Once a socialbot's performance has converged, generative and hybrid modules combined with a robust ranking and selection module can lead to a better conversational experience.
A response ranking and selection model greatly
impacts socialbot quality. The teams that built a strong model-selection policy saw significant improvements in ratings and in the average number of dialogue turns.
Even if a socialbot has strong response-generation and ranker modules, a lack of good NLU and DM components adversely affects user ratings.
We expected that the grand challenge of 20-
minute conversations would take many years to
achieve — the Alexa Prize was set up as a multiyear
competition to enable sustained research on this
problem. Despite the difficulty of the challenge, it is
extremely encouraging to see the work that the inaugural cohort of the Alexa Prize has achieved in year
one of the competition. We have seen significant
advancements in research, and in the quality of
socialbots as observed through the customer ratings,
but much remains to be achieved. With the help of Alexa users and the science community, Alexa Prize 2018 will continue to work toward the goal of coherent and engaging 20-minute social conversations, and to advance the state of conversational AI.
We would like to thank all the university students
and their advisors (Alexa Prize Teams 2017) who par-
ticipated in the competition. We would also like to
thank the entire Alexa Prize team (Eric King, Kate
Bland, Qing Liu, Jeff Nunn, Ming Cheng, Ashish
Nagar, Yi Pan, Han Song, SK Jayadevan, Amanda
Wartick, Anna Gottardi, Gene Hwang, Art Pettigrue,
and Nate Michel) for their contribution in making
the Alexa Prize competition a success. We would also
like to thank Amazon leadership and Alexa principals
for their vision and support through this entire pro-
gram; the marketing, public relations, and legal
departments for helping drive the right messaging
and a high volume of traffic to the Alexa Prize skill,
ensuring that the participating teams received real-
world feedback for their research; Alexa engineering
for all the support and work on enabling the Alexa
Prize skill and supporting a custom Alexa Prize ASR
model, while always maintaining operational excel-
lence; and Alexa machine learning for continued
support with NLU and data services, which allowed
us to capture user requests to initiate conversations
and also provide high-quality annotated feedback to
the teams. We also want to thank ASK leadership and
the countless teams in ASK who helped us with the
custom APIs for Alexa Prize teams, enabling beta testing of the Alexa Prize skills before they became generally available, and who further supported us with
skill management, QA, certification, marketing, oper-
ations, and solutions. We would also like to thank
the Alexa experiences organization for exemplifying
customer obsession by providing us with critical
input to share with the teams on building the best
customer experiences and driving us to track our
progress against customer feedback.
Finally, thank you to the Alexa customers who
engaged in tens of thousands of hours of conversations spanning millions of interactions with the
Alexa Prize socialbots and who provided the feedback
that helped teams improve over the course of the competition.
4. See the Alexa Prize Proceedings, developer.amazon.com/
Adewale, O.; Beatson, A.; Buniatyan, D.; Ge, J.; Khodak, M.; Lee, H.; Prasad, N.; Saunshi, N.; Seff, A.; Singh, K.; Suo, D.; Zhang, C.; and Arora, S. 2017. Pixie: A Social Chatbot. Alexa Prize Proceedings. Seattle, WA: Amazon.
Bollacker, K.; Evans, C.; Paritosh, P.; Sturge, T.; and Taylor, J. 2008. Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge. In Proceedings of the ACM SIGMOD International Conference on Management of Data, 1247–1250. New York: Association for Computing Machinery.
Bowden, K. K.; Wu, J.; Oraby, S.; Misra, A.; and Walker, M. 2017. Slugbot: An Application of a Novel and Scalable Open Domain Socialbot Framework. Alexa Prize Proceedings. Seattle, WA: Amazon.
Bunt, H.; Alexandersson, J.; Carletta, J.; Choe, J.-W.; Fang, A.
C.; Hasida, K.; Lee, K.; Petukhova, V.; Popescu-Belis, A.;
Romary, L.; Soria, C.; and Traum, D. 2010. Towards an ISO
Standard for Dialogue Act Annotation. In Proceedings of the
International Conference on Language Resources and Evaluation.
Luxembourg: European Language Resources Association.
Cervone, A.; Tortoreto, G.; Mezza, S.; Gambi, E.; and Riccardi, G. 2017. Roving Mind: A Balancing Act between Open-Domain and Engaging Dialogue Systems. Alexa Prize Proceedings. Seattle, WA: Amazon.
Fang, H.; Cheng, H.; Clark, E.; Holtzman, A.; Sap, M.; Ostendorf, M.; Choi, Y.; and Smith, N. 2017. Sounding Board: University of Washington's Alexa Prize Submission. Alexa Prize Proceedings. Seattle, WA: Amazon.