Archive for December, 2017

TED: Imagine a world of AI

Ashwin Ram works on the AI behind Alexa, one of several new bots that might change the way your home and your life function within the next few years. Imagine a bot that turns on your lights, shops for you, even helps you make decisions. Learn more about a bot-enabled future that might have you saying (like Shah Rukh Khan does): “Alexa, I love you!”

 

TALK & TRANSCRIPT:
#TomorrowsWorld made easier with Artificial Intelligence. #TEDTalksIndiaNayiSoch

ted.com/talks/ashwin_ram_could_bots_make_your_life_better

BEHIND THE SCENES:
Innovator and entrepreneur Ashwin Ram believes AI will change our lives in the future. #TomorrowsWorld #TEDTalksIndiaNayiSoch

youtube.com/watch?v=kDvIsRuaq5k

FULL #TOMORROWSWORLD EPISODE:
Can you imagine what #TomorrowsWorld will be like? Shah Rukh Khan introduces.

tedtalksindianayisoch.hotstar.com/TED/episode-4.php

ALL TED TALKS INDIA NAYI SOCH:
#TEDTalksIndiaNayiSoch is a groundbreaking TV series showcasing new thinking from some of the brightest brains in India and beyond, hosted by “The King of Bollywood,” Shah Rukh Khan.

ted.com/series/ted_talks_india_nayi_soch

On Evaluating and Comparing Conversational Agents

Conversational agents are exploding in popularity. However, much work remains in the area of non-goal-oriented conversations, despite significant growth in research interest over recent years. To advance the state of the art in conversational AI, Amazon launched the Alexa Prize, a 2.5-million-dollar university competition in which sixteen selected university teams built conversational agents to deliver the best social conversational experience. The Alexa Prize provided the academic community with a unique opportunity to perform research with a live system used by millions of users. The subjectivity associated with evaluating conversations is a key element underlying the challenge of building non-goal-oriented dialogue systems.

In this paper, we propose a comprehensive evaluation strategy with multiple metrics designed to reduce subjectivity by selecting metrics which correlate well with human judgment. The proposed metrics provide granular analysis of the conversational agents, which is not captured in human ratings. We show that these metrics can be used as a reasonable proxy for human judgment. We provide a mechanism to unify the metrics for selecting the top-performing agents, which has also been applied throughout the Alexa Prize competition. To our knowledge, this is the largest setting to date for evaluating agents, with millions of conversations and hundreds of thousands of ratings from users. We believe that this work is a step towards an automatic evaluation process for conversational AIs.
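The paper's actual metrics aren't reproduced here, but the core idea of validating an automatic metric against human ratings can be sketched with standard correlation statistics. In this illustrative snippet the metric names and numbers are made up; only the validation pattern follows the paper:

```python
# Hypothetical sketch: checking how well automatic metrics track human ratings.
# The data and metric names below are illustrative, not the paper's actual metrics.
from scipy.stats import pearsonr, spearmanr

# Each conversation has a 1-5 user rating plus automatic metric values
# (e.g., number of turns, a coherence score) computed offline.
human_ratings   = [4.0, 2.5, 5.0, 3.0, 1.0, 4.5]
turns_per_convo = [12,   5,  18,   9,   3,  15]
coherence_score = [0.8, 0.4, 0.9, 0.6, 0.2, 0.7]

for name, metric in [("turns", turns_per_convo), ("coherence", coherence_score)]:
    r, _ = pearsonr(human_ratings, metric)      # linear correlation
    rho, _ = spearmanr(human_ratings, metric)   # rank correlation
    print(f"{name}: Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```

Metrics with consistently high correlation against ratings are the ones that can stand in for human judgment at scale.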

READ THE PAPER:

On Evaluating and Comparing Conversational Agents
by A Venkatesh, C Khatri, A Ram, F Guo, R Gabriel, A Nagar, R Prasad, M Cheng, B Hedayatnia, A Metallinou, R Goel, S Yang, A Raju
NIPS-2017 Workshop on Conversational AI

arxiv.org/abs/1801.03625

 

Topic-based Evaluation for Conversational Bots

Dialog evaluation is a challenging problem, especially for non-task-oriented dialogs where conversational success is not well defined. We propose to evaluate dialog quality using topic-based metrics that describe the ability of a conversational bot to sustain coherent and engaging conversations on a topic, and the diversity of topics that a bot can handle. To detect conversation topics per utterance, we adopt Deep Average Networks (DAN) and train a topic classifier on a variety of question and query data categorized into multiple topics. We propose a novel extension to DAN by adding a topic-word attention table that allows the system to jointly capture topic keywords in an utterance and perform topic classification. We compare our proposed topic-based metrics with the ratings provided by users and show that our metrics both correlate with and complement human judgment. Our analysis is performed on tens of thousands of real human-bot dialogs from the Alexa Prize competition and highlights user expectations for conversational bots.
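As a rough illustration of the classifier architecture mentioned above, here is a minimal Deep Average Network in PyTorch: word embeddings are averaged over the utterance and passed through feed-forward layers to produce topic scores. This is a sketch with made-up dimensions, not the authors' implementation, and it omits the paper's topic-word attention table, which would weight words before averaging rather than averaging them uniformly:

```python
# Minimal DAN topic classifier sketch (assumed sizes; not the paper's model).
import torch
import torch.nn as nn

class DANTopicClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_topics):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # A DAN averages word embeddings, then applies feed-forward layers.
        self.ff = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_topics),
        )

    def forward(self, token_ids):          # token_ids: (batch, seq_len)
        vecs = self.embed(token_ids)       # (batch, seq_len, embed_dim)
        avg = vecs.mean(dim=1)             # uniform average over words
        return self.ff(avg)                # unnormalized topic scores

# Example usage with hypothetical sizes: 10k vocab, 26 topics, 2 utterances of 12 tokens.
model = DANTopicClassifier(vocab_size=10_000, embed_dim=100, hidden_dim=128, num_topics=26)
batch = torch.randint(0, 10_000, (2, 12))
predicted_topics = model(batch).argmax(dim=-1)
```

Per-utterance topic predictions like these are what the topic-based metrics (topic coherence, topic diversity) are then computed over.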

READ THE PAPER:
Topic-based Evaluation for Conversational Bots
by F Guo, A Metallinou, C Khatri, A Raju, A Venkatesh, A Ram
NIPS-2017 Workshop on Conversational AI

arxiv.org/abs/1801.03622