Archive for the ‘Learning’ Category

Conversational News Experiences

News consumption has traditionally been a passive experience: reading print or online newspapers, listening to radio shows and podcasts, watching television broadcasts. News producers create, curate, and organize content which consumers absorb passively. With the advent of interactive conversational technologies, ranging from chatbots to voice-based conversational assistants such as Amazon Alexa, there is an opportunity to engage consumers in more interactive experiences around news.

At the Computation+Journalism symposium held at Northwestern University this year, Emily Withrow, editor at Quartz Bot Studio and assistant professor at Northwestern’s Medill School of Journalism, and I had a fireside chat to share recent technological developments in this area and to explore what kinds of conversational news experiences these technologies might enable.

Panel at the 2017 Computation+Journalism Symposium, Northwestern University, Evanston, IL. #cj2017 

Announcing the 2017 Alexa Prize Finalists

We’ve hit another milestone in the Alexa Prize, a $2.5 million university competition to advance conversational AI. University teams from around the world have been hard at work to create a socialbot, an AI capable of conversing coherently and engagingly with humans on popular topics and news events for 20 minutes.

I am now excited to announce the university teams that will be competing in the finals! After hundreds of thousands of conversations, the two socialbots with the highest average customer ratings during the semifinal period are Alquist from the Czech Technical University in Prague and Sounding Board from the University of Washington in Seattle. The wildcard team is What’s Up Bot from Heriot-Watt University in Edinburgh, Scotland.

READ MORE:

developer.amazon.com/blogs/alexa/post/783df492-4770-4b11-81ac-59e009669d56/announcing-the-2017-alexa-prize-finalists

 

Talking AI with Sebastian Thrun

Udacity blog: Artificial Intelligence by its very nature promises so much, and the potential seems so vast it staggers the imagination. Excitement in this field runs higher every day, as the ongoing process of translating the possible into the actual produces newer and more incredible innovations.

With this excitement come concerns, of course, and it is perhaps understandable that some people continue to see Artificial Intelligence as some sort of threat. This worry fails to take into consideration two key storylines: 1) AI is an augmentative technology; it extends our abilities, it does not replace them, and 2) AI, by assuming responsibility for repetitive and mundane tasks, frees us for more creative and fulfilling activity.

Some observers have even gone so far as to suggest that intelligent machines represent a kind of end to “human-ness” itself; meaning, those things we think of as being most human—the ability to love, to make moral and ethical decisions, to create art—are predicted to fall by the wayside before the advance of intelligent machines.

Dr. Ashwin Ram, Senior Manager of AI at Amazon Alexa, spoke to Sebastian Thrun, President and Co-Founder of Udacity, about AI, and his foundational exposure to the “human side of technology” as he pursued his PhD in AI at Yale. This is a deeply insightful conversation, and should be required viewing for anyone interested in the past, present, and future of AI, and what it all means for humanity.

Read more / View the talk:
blog.udacity.com/2017/07/ashwin-ram-sebastian-thrun-discuss-ai.html

Conversational AI: Voice-Based Intelligent Agents

As we have moved from the age of the keyboard to the age of touch, and now to the age of voice, natural conversation in everyday language remains one of the ultimate challenges for AI. This is a difficult scientific problem involving knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning, and dialog planning, as well as a complex product design problem involving user experience and conversational engagement.

I will talk about why Conversational AI is hard, how conversational agents like Amazon Alexa understand and respond to voice interactions, how you can leverage these technologies for your own applications, and the challenges that still remain.
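To make the pieces named above concrete, here is a minimal sketch of a single dialog turn: understanding an utterance, updating conversational context, and generating a reply. This is an invented toy example, not Alexa’s actual architecture; the intent patterns, responses, and default topic are assumptions for illustration, and real systems use statistical models rather than regular expressions.

```python
# Toy dialog turn: NLU (intent matching), context modeling, and
# response generation. Purely illustrative.
import re

INTENT_PATTERNS = {
    "greet": re.compile(r"\b(hi|hello|hey)\b", re.I),
    "ask_news": re.compile(r"\b(news|headlines)\b", re.I),
}

RESPONSES = {
    "greet": "Hello! What would you like to talk about?",
    "fallback": "I'm not sure I followed. Could you rephrase?",
}

def understand(utterance):
    """Natural language understanding: map an utterance to an intent."""
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(utterance):
            return intent
    return "fallback"

def respond(utterance, context):
    """One dialog turn: understand, update context, generate a reply."""
    intent = understand(utterance)
    context.setdefault("history", []).append(intent)  # context modeling
    if intent == "ask_news":
        topic = context.get("topic", "technology")    # assumed default topic
        return f"Here's a story trending in {topic}."
    return RESPONSES[intent]
```

For example, `respond("any news today?", {})` produces a topic-scoped reply while recording the recognized intent in the conversation history, so later turns can reason over what was already discussed.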

Variants of this talk have been presented at the following venues (click links for video):
 
Keynote talks at The AI Conference (2017), O’Reilly AI Conference (2017), The AI Summit (2017), Stanford ASES Summit (2017), MLconf AI Conference (2017), Global AI Conference (2016).
 
Distinguished lectures at Georgia Tech/GVU (2017), Northwestern University (2017).
 
Keynote panel at Conversational Interaction Conference (2016).
 
Lightning TED-style talks at IIT Bay Area Conference (2017), Intersect (2017).
 

Join the Alexa Prize Journey and Test the Socialbots

On September 29, 2016, Amazon announced the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. In April, university teams from around the world assembled at the appropriately named Day 1 building in Seattle for the Alexa Prize Summit. The event was a base camp for teams to share learnings and make preparations for the most challenging leg of their journey: to build and scale an AI capable of conversing coherently and engagingly with humans for 20 minutes.

As they build their “socialbots,” they will encounter esoteric problems like context modeling and dialog planning as well as exoteric problems like user experience and conversational engagement. And they will need all the help they can get.

We invite you to join the students on their journey and help them along the way. You can interact with their socialbots simply by saying, “Alexa, let’s chat” on any device with Alexa.

READ MORE:
developer.amazon.com/blogs/alexa/post/e4cc64d1-f334-4d2d-8609-5627939f9bf7/join-the-alexa-prize-journey-and-test-the-socialbots

 

Making The Future Possible: Conversational AI in Amazon Alexa

No longer is AI solely a subject of science fiction. Advances in AI have produced enabling technologies for computer vision, planning, decision making, robotics, and most recently spoken language understanding. These technologies are driving business growth and freeing workers to engage in more creative and valuable tasks.

I’ll talk about how we moved from the age of the keyboard, to the age of touch, and are now entering the age of voice. Alexa is making this future possible. Amazon is committed to fostering a robust cloud-based voice service, and it is this voice service that the innovators of today, tomorrow, and beyond will be building on. It is this voice service—and the ecosystem around it—that awaits the next generation of AI talent.

Keynote at Udacity Intersect Conference, Computer History Museum, Mountain View, CA, March 8, 2017.
 

READ MORE:

blog.udacity.com/2017/02/dr-ashwin-ram-intersect-2017-speaker.html

VIEW THE TALK:

linkedin.com/feed/update/urn:li:activity:6286681682187812864

 

Announcing the Sponsored Teams for the 2016-2017 Alexa Prize

On September 29, 2016, Amazon announced the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. We received applications from leading universities across 22 countries. Each application was carefully reviewed by senior Amazon personnel against a rigorous set of criteria covering scientific contribution, technical merit, novelty, and ability to execute. Teams of scientists, engineers, user experience designers, and product managers read, evaluated, discussed, argued, and finally selected the ten teams who would be invited to participate in the competition. Wait, make that twelve; we received so many good applications from graduate and undergraduate students that we decided to sponsor two additional teams.

Today, we’re excited to announce the 12 teams selected to compete with an Amazon sponsorship.

READ MORE:

developer.amazon.com/blogs/post/Tx1UXVV4VJTPYTL/announcing-the-sponsored-teams-for-the-2016-2017-alexa-prize

The Alexa Prize: $2.5M to Advance Conversational AI

Artificial intelligence (AI) is becoming ubiquitous. With advances in technology, algorithms, and sheer compute power, it is now becoming practical to utilize AI techniques in many everyday applications including transportation, healthcare, gaming, productivity, and media. Yet one seemingly intuitive task for humans still eludes computers: natural conversation. Simple and natural for humans, voice communication in everyday language continues to be one of the ultimate challenges for AI.

Today, we are pleased to announce the Alexa Prize, a $2.5 million university competition to advance conversational AI through voice. Teams of university students around the world are invited to participate (see contest rules for details). The challenge is to create a socialbot: an Alexa skill smart enough to engage in a fun, high-quality conversation with humans on popular topics for 20 minutes.

Are you up to the challenge?

READ MORE:

developer.amazon.com/public/community/post/Tx221UQAWNUXON3/Are-you-up-to-the-Challenge-Announcing-the-Alexa-Prize-2-5-Million-to-Advance-Co

Construction and Adaptation of AI Behaviors in Computer Games

Computer games are an increasingly popular application for Artificial Intelligence (AI) research, and conversely AI is an increasingly popular selling point for commercial digital games. AI for non-player characters (NPCs) in computer games tends to come from people with computing skills well beyond those of the average user. The prime reason for the lack of involvement of novice users in creating AI behaviors for NPCs in computer games is that constructing high-quality AI behaviors is a hard problem.

There are two reasons for this. First, creating a set of AI behaviors requires specialized skills in design and programming, which restricts the process to individuals with expertise in this area. There is little understanding of how the behavior authoring process can be simplified with easy-to-use authoring environments so that novice users (without programming and design experience) can carry out the behavior authoring task. Second, the constructed AI behaviors contain problems and bugs which cause a break in player experience when the problematic behaviors repeatedly fail. It is harder for novice users to identify, modify, and correct problems with the authored behavior sets, as they do not have the necessary debugging and design experience.

These two issues give rise to a couple of interesting questions that need to be investigated: a) How can the AI behavior construction process be simplified so that a novice user (without programming and design experience) can easily carry out the authoring activity? and b) How can novice users be supported in identifying and correcting problems with the authored behavior sets? In this thesis, I explore these issues and propose a solution to them within an application domain named Second Mind (SM). In SM, novice users who do not have expertise in computer programming employ an authoring interface to design behaviors for intelligent virtual characters performing a service in a virtual world. These services range from shopkeepers to museum hosts. The constructed behaviors are further repaired using an AI-based approach.

To evaluate the construction and repair approach, we conducted experiments with human subjects. Based on developing and evaluating the solution, I claim that a behavior-timeline-based interaction design for behavior construction, supported by an understandable vocabulary and a reduced feature-representation formalism, enables novice users to author AI behaviors in an easy and understandable manner for NPCs performing a service in a virtual world. I further claim that an introspective reasoning approach based on comparing successful and unsuccessful execution traces can successfully identify breaks in player experience and modify the failures to improve the experience of the player interacting with such NPCs.
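The introspective reasoning idea of comparing successful and unsuccessful execution traces can be sketched as follows. This is a simplified, hypothetical illustration (the trace format and step names are invented), not the dissertation’s actual implementation.

```python
def localize_failure(successful_trace, failed_trace):
    """Compare a successful and a failed execution trace (lists of
    behavior steps) and return the index and step of the first
    divergence, as a candidate site for repair."""
    for i, (good, bad) in enumerate(zip(successful_trace, failed_trace)):
        if good != bad:
            return i, bad
    if len(failed_trace) < len(successful_trace):
        # The failed run stopped early: the break is where it ended.
        return len(failed_trace), None
    return None  # traces match; no break detected

# Hypothetical shopkeeper-NPC traces: the failed run skipped give_item.
good_run = ["greet", "ask_item", "fetch_item", "give_item", "farewell"]
bad_run = ["greet", "ask_item", "fetch_item", "farewell"]
```

Here `localize_failure(good_run, bad_run)` flags step 3, where the failed run deviated from the successful one, as the place where a repair should be attempted.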

The work contributes in the following three ways by providing: 1) a novel introspective reasoning approach for successfully detecting and repairing failures in AI behaviors for NPCs performing a service in a virtual world; 2) an authoring environment understandable by novice users, to help them create AI behaviors for such NPCs in an easy and understandable manner; and 3) design, debugging, and testing scaffolding to help novice users modify their authored AI behaviors and achieve higher-quality modified behaviors compared to their original unmodified ones.

Read the dissertation:

Construction and Adaptation of AI Behaviors in Computer Games

by Manish Mehta

PhD dissertation, College of Computing, Georgia Institute of Technology, August 2011.

smartech.gatech.edu/handle/1853/42724

Augmenting Human Innovation with Social Cognition

Social Media is everywhere: photos, videos, news, blogs, art, music, games… even business, finance, healthcare, government, design, and other serious applications are going social. These social media have given rise to Social Cognition. What began with sharing has moved to creation. Consumers have become producers, and commerce has become a conversation.

Due to these conversations, individuals are no longer alone; whether you’re making a life decision, solving a critical business problem, or merely looking for a restaurant, your social graphs are available to augment your decision making process. These graphs have no geographic boundaries; professional networks are worldwide, and information streams from far corners of the globe into the palm of your hand.

Beyond media and commerce, the next big disruption is innovation. Humans everywhere want to innovate, and Social Cognition can augment human innovation in many everyday and expert domains.

I discuss three human capabilities that are amenable to social augmentation: problem solving, learning, and creativity. I illustrate them with challenge problems from my work: 1) healthcare: helping consumers find relevant health information without search; 2) energy: helping experts troubleshoot complex turbine failures; 3) learning: scaling education to a hundred million people; and 4) creativity: enabling average users to create artificial intelligence agents without programming.

These technologies blend Cognitive Systems (artificial intelligence) and Cognitive Science (human cognition) in products that both exhibit and support cognition in large-scale social communities. This research not only provides scientific insight but also creates disruptive business opportunities.

Invited talk at PARC, Palo Alto, CA, April 7, 2011.
 
Invited talk at Wright State University, Center of Excellence in Human-Centered Innovation, Dayton, OH, October 24, 2010.
 

View the slides: