Archive for the ‘Agents’ Category

Making The Future Possible: Conversational AI in Amazon Alexa

No longer is AI solely a subject of science fiction. Advances in AI have produced enabling technologies for computer vision, planning, decision making, robotics, and most recently spoken language understanding. These technologies are driving business growth and freeing workers to engage in more creative and valuable tasks.

I’ll talk about how we have moved from the age of the keyboard, through the age of touch, and are now entering the age of voice. Alexa is making this future possible. Amazon is committed to fostering a robust cloud-based voice service, and it is on this voice service that the innovators of today, tomorrow, and beyond will be building. It is this voice service—and the ecosystem around it—that awaits the next generation of AI talent.

Keynote at Udacity Intersect Conference, Computer History Museum, Mountain View, CA, March 8, 2017.


Announcing the Sponsored Teams for the 2016-2017 Alexa Prize

On September 29, 2016, Amazon announced the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. We received applications from leading universities across 22 countries. Each application was carefully reviewed by senior Amazon personnel against a rigorous set of criteria covering scientific contribution, technical merit, novelty, and ability to execute. Teams of scientists, engineers, user experience designers, and product managers read, evaluated, discussed, argued, and finally selected the ten teams who would be invited to participate in the competition. Wait, make that twelve; we received so many good applications from graduate and undergraduate students that we decided to sponsor two additional teams.

Today, we’re excited to announce the 12 teams selected to compete with an Amazon sponsorship.


The Alexa Prize: $2.5M to Advance Conversational AI

Artificial intelligence (AI) is becoming ubiquitous. With advances in technology, algorithms, and sheer compute power, it is now becoming practical to utilize AI techniques in many everyday applications including transportation, healthcare, gaming, productivity, and media. Yet one seemingly intuitive task for humans still eludes computers: natural conversation. Simple and natural for humans, voice communication in everyday language continues to be one of the ultimate challenges for AI.

Today, we are pleased to announce the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. Teams of university students around the world are invited to participate in the Alexa Prize (see contest rules for details). The challenge is to create a socialbot: an Alexa skill smart enough to engage in a fun, high-quality conversation with humans on popular topics for 20 minutes, coherently and engagingly.

Are you up to the challenge?


Augmented Social Cognition for Consumer Health and Wellness

In a recent Wall Street Journal essay, Marc Andreessen wrote: “Software is eating the world. Over the next 10 years, I expect many more industries to be disrupted by software. Healthcare and education are next up for fundamental software-based transformation.”

What is the impending disruption in healthcare, and what new technologies are driving it? I argue that the problem is not healthcare but health: creating new consumer-centric approaches to health and wellness that increase engagement, improve health literacy and promote behavior change.

The web is evolving from information (portals) to interaction (social/mobile) to influence: shaping attitudes and behaviors. This creates a unique opportunity to address the problem of consumer health and wellness. But, to do this effectively requires a new kind of technology: user modeling. It also requires an innovation methodology that is fundamentally about people, not technology.

At PARC, our research in Augmented Social Cognition centers on the confluence of three technologies: social, mobile, and user modeling. I discuss these technologies and explain how we leverage artificial intelligence (AI) and case-based reasoning (CBR) techniques to model users and create effective, sustainable behavior change.

Invited talk at CBR-2013 Industry Day, Saratoga Springs, NY, July 8, 2013.

Learning from Demonstration to be a Good Team Member in a Role Playing Game

We present an approach that uses learning from demonstration in a computer role playing game. We describe a behavior engine that uses case-based reasoning. The behavior engine accepts observation traces of human playing decisions and produces a sequence of actions which can then be carried out by an artificial agent within the gaming environment. Our work focuses on team-based role playing games, where the agents produced by the behavior engine act as team members within a mixed human-agent team. We present the results of a study in which we assess both the quantitative and qualitative performance differences between human-only teams and hybrid human-agent teams.
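The trace-in, action-out pipeline described above can be sketched as a minimal nearest-neighbor case-based engine: demonstration traces are stored as (state, action) cases, and the agent acts by retrieving the action of the most similar stored state. All names, the state features, and the similarity measure here are illustrative assumptions, not the paper's actual implementation.

```python
def state_distance(a, b):
    """Euclidean distance between two numeric state feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class BehaviorEngine:
    def __init__(self):
        self.cases = []  # list of (state_features, action) pairs

    def observe_trace(self, trace):
        """Ingest a demonstration trace of (state, action) decisions."""
        self.cases.extend(trace)

    def select_action(self, state):
        """Retrieve the action from the most similar stored case."""
        _, action = min(self.cases, key=lambda c: state_distance(c[0], state))
        return action

# A short human demonstration, then the agent reuses it in a new state.
engine = BehaviorEngine()
engine.observe_trace([((0.0, 0.0), "hold_position"),
                      ((5.0, 1.0), "attack"),
                      ((1.0, 9.0), "heal_ally")])
print(engine.select_action((4.5, 1.2)))  # nearest case -> "attack"
```

A real behavior engine would of course need richer state features and case adaptation, but the retrieve-and-reuse loop is the core of the case-based approach.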

Learning from Demonstration to be a Good Team Member in a Role Playing Game

by Michael Silva, Silas McCroskey, Jonathan Rubin, Michael Youngblood, Ashwin Ram

26th International FLAIRS Conference on Artificial Intelligence (FLAIRS-13).

Construction and Adaptation of AI Behaviors in Computer Games

Computer games are an increasingly popular application for artificial intelligence (AI) research, and conversely AI is an increasingly popular selling point for commercial digital games. AI for non-player characters (NPCs) in computer games tends to come from people with computing skills well beyond those of the average user. The prime reason for the lack of involvement of novice users in creating AI behaviors for NPCs in computer games is that constructing high-quality AI behaviors is a hard problem.

There are two reasons for this. First, creating a set of AI behaviors requires specialized skills in design and programming. The nature of the process restricts it to individuals with expertise in this area. There is little understanding of how the behavior authoring process can be simplified with easy-to-use authoring environments so that novice users (without programming and design experience) can carry out the behavior authoring task. Second, the constructed AI behaviors contain problems and bugs which break the player experience when the problematic behaviors repeatedly fail. It is harder for novice users to identify, modify, and correct problems with the authored behavior sets because they lack the necessary debugging and design experience.

These two issues give rise to a couple of interesting questions that need to be investigated: a) How can the AI behavior construction process be simplified so that a novice user (without programming and design experience) can easily carry out the authoring activity? and b) How can novice users be supported in identifying and correcting problems with the authored behavior sets? In this thesis, I explore these issues and propose a solution to them within an application domain named Second Mind (SM). In SM, novice users who do not have expertise in computer programming employ an authoring interface to design behaviors for intelligent virtual characters performing a service in a virtual world. These services range from shopkeepers to museum hosts. The constructed behaviors are further repaired using an AI-based approach.

To evaluate the construction and repair approach, we conducted experiments with human subjects. Based on developing and evaluating the solution, I claim that a behavior-timeline-based interaction design for behavior construction, supported by an understandable vocabulary and a reduced feature-representation formalism, enables novice users to author AI behaviors in an easy and understandable manner for NPCs performing a service in a virtual world. I further claim that an introspective reasoning approach based on comparing successful and unsuccessful execution traces can successfully identify breaks in player experience and modify the failures to improve the experience of players interacting with NPCs performing a service in a virtual world.
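The trace-comparison idea above can be sketched with a simple differential scoring rule: steps that occur disproportionately often in failing traces are flagged as likely causes of the break in player experience. The step names and the scoring rule are illustrative assumptions, not the thesis's actual introspective reasoner.

```python
from collections import Counter

def blame_scores(successes, failures):
    """Score each behavior step by how much more often it appears in
    failing execution traces than in successful ones."""
    ok = Counter(step for trace in successes for step in set(trace))
    bad = Counter(step for trace in failures for step in set(trace))
    return {s: bad[s] / len(failures) - ok[s] / len(successes)
            for s in set(ok) | set(bad)}

# Hypothetical shopkeeper-NPC traces: two successful, two failed.
successes = [["greet", "ask_item", "hand_item", "farewell"],
             ["greet", "ask_item", "hand_item", "farewell"]]
failures = [["greet", "ask_item", "wander_off"],
            ["greet", "wander_off"]]

scores = blame_scores(successes, failures)
print(max(scores, key=scores.get))  # "wander_off" is most blamed
```

Once a step is localized this way, a repair module could modify or replace it and re-test; the sketch only covers the detection half of the detect-and-repair loop.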

The work contributes in the following three ways by providing: 1) a novel introspective reasoning approach for successfully detecting and repairing failures in AI behaviors for NPCs performing a service in a virtual world; 2) an authoring environment understandable to novice users, helping them create AI behaviors for such NPCs in an easy and understandable manner; and 3) design, debugging, and testing scaffolding to help novice users modify their authored AI behaviors and achieve higher-quality modified behaviors compared to their original, unmodified ones.

Read the dissertation:

Construction and Adaptation of AI Behaviors in Computer Games

by Manish Mehta

PhD dissertation, College of Computing, Georgia Institute of Technology, August 2011.

Open Social Learning Communities

With the advent of open education resources, social networking technologies and new pedagogies for online and blended learning, we are in the early stages of a significant disruption in current models of education. The disruption is fueled by a staggering growth in demand. It is estimated that there will be 100 million students qualified to enter universities over the next decade. To educate them, a major university would need to be created every week.

Universities have responded to this need with Open Education Resources—thousands of free, high-quality courses, developed by hundreds of faculty and used by millions worldwide. Unfortunately, online courseware does not offer a supportive learning experience or the engagement needed to keep students motivated. Students read less when using e-textbooks; video lectures are boring; and retention and course completion rates are low.

Therein lies the core problem: How to engage a generation of learners who live on the Internet yet tune out of school, who seek interaction on Facebook yet find none on iTunes U, who need community yet are only offered content. We propose a new approach to this problem: open social learning communities, anchored with open content, providing an interactive online study group experience akin to sitting with study buddies on a world-wide campus quad.

This solution is enabled by state-of-the-art web technologies: really real-time collaboration technologies for a highly interactive experience; intelligent recommender systems to help learners connect with relevant content and other learners; mining and analytics to assess learner outcomes; and reputation techniques to establish social capital. We will discuss these technologies and how they can be combined to address the problem of education in a manner that is highly scalable yet interactive and engaging.

This approach can be used for other types of learning communities. We will show an application to healthcare information access to help consumers learn about their healthcare questions and needs.

Keynote talk at SIPA Conference: Entrepreneurship—Idea Wave 3.0, Mountain View, CA, November 12, 2011.
Keynote talk at the International Conference on Web Intelligence, Mining and Semantics (WIMS-11), Sogndal, Norway, May 27, 2011.
