Archive for December 8th, 2017

On Evaluating and Comparing Conversational Agents

Conversational agents are exploding in popularity. However, much work remains in the area of non-goal-oriented conversations, despite significant growth in research interest over recent years. To advance the state of the art in conversational AI, Amazon launched the Alexa Prize, a 2.5-million-dollar university competition in which sixteen selected university teams built conversational agents to deliver the best social conversational experience. The Alexa Prize provided the academic community with a unique opportunity to perform research with a live system used by millions of users. The subjectivity associated with evaluating conversations is a key element underlying the challenge of building non-goal-oriented dialogue systems.

In this paper, we propose a comprehensive evaluation strategy with multiple metrics designed to reduce subjectivity by selecting metrics which correlate well with human judgment. The proposed metrics provide granular analysis of the conversational agents, which is not captured in human ratings. We show that these metrics can be used as a reasonable proxy for human judgment. We provide a mechanism to unify the metrics for selecting the top performing agents, which has also been applied throughout the Alexa Prize competition. To our knowledge, this is to date the largest setting for evaluating agents, with millions of conversations and hundreds of thousands of ratings from users. We believe that this work is a step towards an automatic evaluation process for conversational AIs.
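The abstract doesn't spell out the unification mechanism, so as a rough illustration only, the Python sketch below (with hypothetical metric names and a simple correlation-weighted combination, not the paper's actual method) shows one way several automatic metrics could be normalized and blended into a single score for ranking agents against human ratings.

```python
"""Illustrative sketch only: combine several automatic conversation metrics
into one agent-ranking score by z-normalizing each metric and weighting it
by its correlation with human ratings. Metric names are hypothetical."""

import numpy as np
from scipy.stats import pearsonr


def unified_score(metric_values: dict, human_ratings: np.ndarray) -> np.ndarray:
    """metric_values: {metric_name: per-agent values}; human_ratings: per-agent
    mean user ratings. Returns one unified score per agent."""
    weights, z_scored = {}, {}
    for name, values in metric_values.items():
        values = np.asarray(values, dtype=float)
        # z-normalize so metrics on different scales become comparable
        z_scored[name] = (values - values.mean()) / (values.std() + 1e-8)
        # weight each metric by how well it tracks human judgment
        weights[name], _ = pearsonr(values, human_ratings)
    total = sum(abs(w) for w in weights.values()) + 1e-8
    return sum((weights[n] / total) * z_scored[n] for n in metric_values)


# Hypothetical example: three agents scored on three automatic metrics
metrics = {
    "avg_turns": [12.0, 8.5, 15.2],           # conversational depth
    "topic_coherence": [0.71, 0.55, 0.80],    # fraction of on-topic responses
    "response_diversity": [0.62, 0.48, 0.69], # distinct-n style diversity
}
ratings = np.array([3.4, 2.9, 3.8])
print(unified_score(metrics, ratings))  # higher = better under this scheme
```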

READ THE PAPER:

On Evaluating and Comparing Conversational Agents
by A Venkatesh, C Khatri, A Ram, F Guo, R Gabriel, A Nagar, R Prasad, M Cheng, B Hedayatnia, A Metallinou, R Goel, S Yang, A Raju
NIPS-2017 Workshop on Conversational AI

arxiv.org/abs/1801.03625


Topic-based Evaluation for Conversational Bots

Dialog evaluation is a challenging problem, especially for non-task-oriented dialogs where conversational success is not well-defined. We propose to evaluate dialog quality using topic-based metrics that describe the ability of a conversational bot to sustain coherent and engaging conversations on a topic, and the diversity of topics that a bot can handle. To detect conversation topics per utterance, we adopt Deep Average Networks (DAN) and train a topic classifier on a variety of question and query data categorized into multiple topics. We propose a novel extension to DAN by adding a topic-word attention table that allows the system to jointly capture topic keywords in an utterance and perform topic classification. We compare our proposed topic-based metrics with the ratings provided by users and show that our metrics both correlate with and complement human judgment. Our analysis is performed on tens of thousands of real human-bot dialogs from the Alexa Prize competition and highlights user expectations for conversational bots.
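To make the DAN-plus-attention idea concrete, here is a minimal PyTorch sketch, not the authors' implementation: the vocabulary size, dimensions, and the exact way the topic-word table re-weights the word average are illustrative assumptions.

```python
"""Minimal sketch (hypothetical dimensions and pooling choice) of a Deep
Average Network whose word averaging is re-weighted by a learned topic-word
attention table before topic classification."""

import torch
import torch.nn as nn


class TopicAttnDAN(nn.Module):
    def __init__(self, vocab_size=50000, embed_dim=300, num_topics=20, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # topic-word attention table: one salience score per (word, topic) pair
        self.topic_word_table = nn.Parameter(torch.zeros(vocab_size, num_topics))
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_topics),
        )

    def forward(self, token_ids):             # token_ids: (batch, seq_len)
        emb = self.embed(token_ids)           # (batch, seq_len, embed_dim)
        # look up each token's topic saliences and pool them into one
        # attention weight per token (max over topics; a design choice here)
        word_topic = self.topic_word_table[token_ids]                 # (batch, seq, topics)
        attn = torch.softmax(word_topic.max(dim=-1).values, dim=-1)   # (batch, seq)
        # attention-weighted average instead of the plain DAN mean
        pooled = (emb * attn.unsqueeze(-1)).sum(dim=1)                # (batch, embed_dim)
        return self.classifier(pooled), word_topic  # topic logits + keyword signal


# Hypothetical usage: classify a single padded utterance of 6 tokens
model = TopicAttnDAN()
logits, topic_keywords = model(torch.randint(1, 50000, (1, 6)))
print(logits.shape)  # torch.Size([1, 20])
```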

READ THE PAPER:
Topic-based Evaluation for Conversational Bots
by F Guo, A Metallinou, C Khatri, A Raju, A Venkatesh, A Ram
NIPS-2017 Workshop on Conversational AI

arxiv.org/abs/1801.03622