Posts Tagged ‘goal-driven learning’

Conversational AI: Voice-Based Intelligent Agents

As we have moved from the age of the keyboard to the age of touch, and now to the age of voice, natural conversation in everyday language remains one of the ultimate challenges for AI. This is a difficult scientific problem involving knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning, and dialog planning, as well as a complex product design problem involving user experience and conversational engagement.

I will talk about why Conversational AI is hard, how conversational agents like Amazon Alexa understand and respond to voice interactions, how you can leverage these technologies for your own applications, and the challenges that still remain.
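
To give a concrete, deliberately simplified picture of the moving parts, here is a minimal sketch of a voice-agent pipeline, assuming a conventional speech recognition → language understanding → dialog management → language generation flow; the function names and hard-coded results are illustrative placeholders, not the Alexa APIs.

# Illustrative voice-agent pipeline (simplified sketch, not the Alexa implementation).
# Every stage below is a stub; real systems use trained models at each step.

def speech_recognition(audio):
    # Convert the audio waveform into a text transcript.
    return "what is the weather in seattle"

def language_understanding(transcript):
    # Map the transcript to an intent and slot values.
    return {"intent": "GetWeather", "slots": {"city": "seattle"}}

def dialog_manager(interpretation, context):
    # Choose the next action using the interpretation plus conversation context.
    if interpretation["intent"] == "GetWeather":
        city = interpretation["slots"].get("city") or context.get("last_city")
        return {"action": "report_weather", "city": city}
    return {"action": "clarify"}

def language_generation(decision):
    # Turn the chosen action into a natural-language response.
    if decision["action"] == "report_weather":
        return "Here is the weather for {}.".format(decision["city"])
    return "Sorry, could you say that again?"

def handle_utterance(audio, context):
    transcript = speech_recognition(audio)
    interpretation = language_understanding(transcript)
    decision = dialog_manager(interpretation, context)
    return language_generation(decision)  # text-to-speech would render this as audio

print(handle_utterance(b"...", {"last_city": "seattle"}))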

Variants of this talk presented:
 
Keynote talks at The AI Conference (2017), Stanford ASES Summit (2017), Global AI Conference (2016).
 
Keynote panel at Conversational Interaction Conference (2016).
 
Lightning TED-style talks at IIT Bay Area Conference (2017), Intersect (2017).
 

Making The Future Possible: Conversational AI in Amazon Alexa

No longer is AI solely a subject of science fiction. Advances in AI have resulted in enabling technologies for computer vision, planning, decision making, robotics, and, most recently, spoken language understanding. These technologies are driving business growth and freeing workers to engage in more creative and valuable tasks.

I’ll talk about how we have moved from the age of the keyboard to the age of touch, and are now entering the age of voice. Alexa is making this future possible. Amazon is committed to fostering a robust cloud-based voice service, and it is this voice service that the innovators of today, tomorrow, and beyond will be building on. It is this voice service, and the ecosystem around it, that awaits the next generation of AI talent.

Keynote at Udacity Intersect Conference, Computer History Museum, Mountain View, CA, March 8, 2017.
 

READ MORE:

blog.udacity.com/2017/02/dr-ashwin-ram-intersect-2017-speaker.html

Real-Time Case-Based Reasoning for Interactive Digital Entertainment


User-generated content is everywhere: photos, videos, news, blogs, art, music, and every other type of digital media on the Social Web. Games are no exception. From strategy games to immersive virtual worlds, game players are increasingly engaged in creating and sharing nearly all aspects of the gaming experience: maps, quests, artifacts, avatars, clothing, even games themselves. Yet, there is one aspect of computer games that is not created and shared by game players: the AI. Building sophisticated personalities, behaviors, and strategies requires expertise in both AI and programming, and remains outside the purview of the end user.

To understand why authoring Game AI is hard, we need to understand how it works. AI can take digital entertainment beyond scripted interactions into the arena of truly interactive systems that are responsive, adaptive, and intelligent. I will discuss examples of AI techniques for character-level AI (in embedded NPCs, for example) and game-level AI (in the drama manager, for example). These types of AI enhance the player experience in different ways. The techniques are complicated and are usually implemented by expert game designers.

I propose an alternative approach to designing Game AI: Real-Time CBR. This approach extends CBR to real-time systems that operate asynchronously during game play, planning, adapting, and learning in an online manner. Originally developed for robotic control, Real-Time CBR can be used for interactive games ranging from multiplayer strategy games to interactive believable avatars in virtual worlds.

As with any CBR technique, Real-Time CBR integrates problem solving with learning. This property can be used to address the authoring problem. I will show the first Web 2.0 application that allows average users to create AIs and challenge their friends to play them—without programming. I conclude with some thoughts about the role of CBR in AI-based Interactive Digital Entertainment.
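
For readers who want a concrete picture, here is a minimal sketch of the kind of asynchronous retrieve-adapt-execute-learn cycle described above; the class, methods, and game interface are assumptions for exposition, not the actual Real-Time CBR implementation.

# Illustrative real-time CBR loop (hypothetical names; not the actual system).
import threading
import time

class RealTimeCBRAgent:
    def __init__(self, case_base, game):
        self.case_base = case_base   # list of (situation, behavior, outcome) cases
        self.game = game             # assumed interface to the running game
        self.running = True

    def retrieve(self, state):
        # Find the stored case whose situation is most similar to the current state.
        return min(self.case_base, key=lambda case: self.game.distance(case[0], state))

    def adapt(self, case, state):
        # Placeholder: revise the retrieved behavior for the current situation.
        return case[1]

    def step(self):
        state = self.game.observe()                         # current game situation
        behavior = self.adapt(self.retrieve(state), state)  # plan by reuse
        outcome = self.game.execute(behavior)               # act while the game keeps running
        self.case_base.append((state, behavior, outcome))   # learn from the result online

    def run(self):
        # The reasoning loop runs in its own thread, asynchronously with game play.
        while self.running:
            self.step()
            time.sleep(0.05)

# Usage (with a game object providing observe/execute/distance):
# agent = RealTimeCBRAgent(initial_cases, game_interface)
# threading.Thread(target=agent.run, daemon=True).start()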

Keynote talk at the Eighteenth Conference on Pattern Recognition and Artificial Intelligence (RFIA-12), Lyon, France, February 5, 2012.
Slides and video here: rfia2012.liris.cnrs.fr/doku.php?id=pub:ram
 
Keynote talk at the Eleventh Scandinavian Conference on Artificial Intelligence (SCAI-11), Trondheim, Norway, May 25, 2011.
 
Keynote talk at the 2010 International Conference on Case-Based Reasoning (ICCBR-10), Alessandria, Italy, July 22, 2010.
 
GVU Brown Bag talk, October 14, 2010. Watch the talk here: www.gvu.gatech.edu/node/4320 
 
View the talk: www.sais.se/blog/?p=57

Meta-Level Behavior Adaptation in Real-Time Strategy Games

AI agents designed for real-time settings need to adapt themselves to changing circumstances to improve their performance and remedy their faults. Agents designed for computer games, however, typically lack this ability. This lack of adaptivity breaks the player experience when agents repeatedly fail to behave properly in circumstances unforeseen by the game designers.

We present an AI technique for game-playing agents that helps them adapt to changing game circumstances. The agents carry out runtime adaptation of their behavior sets by monitoring and reasoning about their behavior execution and using this reasoning to dynamically revise their behaviors. The evaluation of the behavior adaptation approach in a complex real-time strategy game shows that the agents adapt themselves and improve their performance by revising their behavior sets appropriately.
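
As a rough illustration of the monitor-and-revise cycle described above, here is a minimal sketch; the failure patterns, revision operators, and data structures are hypothetical stand-ins, not those used in the published system.

# Illustrative meta-level adaptation cycle (hypothetical failure patterns and operators).

FAILURE_PATTERNS = {
    "behavior_never_finishes": "add_timeout",
    "precondition_always_false": "relax_precondition",
    "goal_not_achieved": "reorder_steps",
}

def detect_failures(execution_trace):
    # Inspect the recorded behavior executions and collect observed failure patterns.
    return [entry["failure"] for entry in execution_trace if entry.get("failure")]

def apply_operator(behavior_set, operator):
    # Placeholder: a real system would rewrite the behavior structure here.
    return [dict(b, revisions=b.get("revisions", []) + [operator]) for b in behavior_set]

def revise_behaviors(behavior_set, failures):
    # Apply the revision operator associated with each detected failure pattern.
    for failure in failures:
        operator = FAILURE_PATTERNS.get(failure)
        if operator:
            behavior_set = apply_operator(behavior_set, operator)
    return behavior_set

def meta_reasoning_cycle(agent):
    # Runs alongside normal play: monitor execution, diagnose failures, revise, resume.
    failures = detect_failures(agent["execution_trace"])
    if failures:
        agent["behavior_set"] = revise_behaviors(agent["behavior_set"], failures)

agent = {"execution_trace": [{"behavior": "attack", "failure": "goal_not_achieved"}],
         "behavior_set": [{"name": "attack"}]}
meta_reasoning_cycle(agent)
print(agent["behavior_set"])   # the failed behavior now carries a revision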

Read the paper:

Meta-Level Behavior Adaptation in Real-Time Strategy Games

by Manish Mehta, Santi Ontañón, Ashwin Ram

ICCBR-10 Workshop on Case-Based Reasoning for Computer Games, Alessandria, Italy, 2010.
www.cc.gatech.edu/faculty/ashwin/papers/er-10-02.pdf

Run-Time Behavior Adaptation for Real-Time Interactive Games

Intelligent agents working in real-time domains need to adapt to changing circumstances so that they can improve their performance and avoid their mistakes. AI agents designed for interactive games, however, typically lack this ability. Game agents are traditionally implemented using static, hand-authored behaviors or scripts that are brittle to changing world dynamics and cause a break in player experience when they repeatedly fail. Furthermore, their static nature demands considerable effort from game designers, who must anticipate every circumstance the agent might encounter. The problem is exacerbated because state-of-the-art computer games have huge decision spaces, interactive user input, and real-time performance requirements, all of which make creating AI for these domains even harder.

In this paper we address the issue of non-adaptivity of game-playing agents in complex real-time domains. The agents carry out run-time adaptation of their behavior sets by monitoring and reasoning about their behavior execution and using this reasoning to dynamically revise their behaviors. The behavior adaptation approach has been instantiated in two real-time interactive game domains. The evaluation results show that the agents in both domains successfully adapt themselves by revising their behavior sets appropriately.

Read the paper:

Run-Time Behavior Adaptation for Real-Time Interactive Games

by Manish Mehta, Ashwin Ram

IEEE Transactions on Computational Intelligence and AI in Games, Vol. 1, No. 3, September 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-09-09.pdf

Goal-Driven Learning in the GILA Integrated Intelligence Architecture

Goal-Driven Learning (GDL) focuses on systems that determine by themselves what has to be learned and how to learn it. Typically, GDL systems use meta-reasoning capabilities over a base reasoner, identifying learning goals and devising strategies. In this paper we present a novel GDL technique for complex AI systems in which the meta-reasoning module has to analyze the reasoning traces of multiple components with potentially different learning paradigms. Our approach works by distributing the generation of learning strategies among the different modules instead of centralizing it in the meta-reasoner. We implemented our technique in the GILA system, which works in the airspace task orders domain, and observed an increase in performance.
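
To make the distributed idea concrete, here is a minimal sketch contrasting it with a centralized design; the module names, trace format, and methods are assumed for exposition and are not taken from GILA.

# Illustrative sketch of distributed goal-driven learning (all names are hypothetical).

class LearningModule:
    # A base reasoner with its own learning paradigm.
    def __init__(self, name):
        self.name = name

    def propose_learning_goals(self, reasoning_trace):
        # Each module inspects only the trace entries it produced and decides
        # for itself what needs to be learned.
        own_steps = [s for s in reasoning_trace if s["module"] == self.name]
        return [{"module": self.name, "goal": "explain_failure", "step": s}
                for s in own_steps if s.get("failed")]

    def devise_strategy(self, goal):
        # The learning strategy is generated locally, in the module's own paradigm.
        return {"module": self.name, "strategy": "retrain on " + goal["step"]["input"]}

def meta_reasoner(modules, reasoning_trace):
    # The meta-reasoner only collects and schedules locally generated goals and
    # strategies; it does not need to understand each module's learning paradigm.
    strategies = []
    for module in modules:
        for goal in module.propose_learning_goals(reasoning_trace):
            strategies.append(module.devise_strategy(goal))
    return strategies

trace = [{"module": "planner", "input": "order-17", "failed": True},
         {"module": "scheduler", "input": "order-17", "failed": False}]
print(meta_reasoner([LearningModule("planner"), LearningModule("scheduler")], trace))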

Read the paper:

Goal-Driven Learning in the GILA Integrated Intelligence Architecture

by Jai Radhakrishnan, Santi Ontañón, Ashwin Ram

International Joint Conference on Artificial Intelligence (IJCAI-09), Pasadena, CA, July 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-09-02.pdf

Using First Order Inductive Learning as an Alternative to a Simulator in a Game Artificial Intelligence

Currently, many game artificial intelligences attempt to determine their next moves by using a simulator to predict the effect of actions in the world. However, writing such a simulator is time-consuming, and the simulator must be changed substantially whenever a detail in the game design is modified. This research project therefore set out to determine whether a version of the first order inductive learning algorithm could be used to learn rules that could then take the place of a simulator.

We used an existing game artificial intelligence system called Darmok 2. By eliminating the need to hand-write a simulator for each game, this approach would allow the Darmok 2 project to adapt more easily to additional real-time strategy games. Over time, Darmok 2 would also be able to provide better competition for human players by training the artificial intelligences to play against the style of a specific player. Most importantly, Darmok 2 might offer a general solution for creating game artificial intelligences, which could save game development companies substantial money, time, and effort.
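
As a rough illustration of substituting learned rules for a hand-written simulator, here is a minimal sketch; the rule format and game facts are invented for exposition and are not the rules learned in the thesis.

# Illustrative sketch: learned rules standing in for a hand-written simulator.
# The rule format and game facts below are invented for exposition.

# A rule of the kind a first-order inductive learner might produce: if its
# conditions hold in the current state, the action adds and removes the listed facts.
LEARNED_RULES = [
    {"action": "build_barracks",
     "conditions": [("has_gold", 150), ("has_peasant", True)],
     "add": [("has_barracks", True)],
     "remove": [("has_gold", 150)]},
]

def predict_effects(state, action):
    # Predict the next state from learned rules instead of running a simulator.
    for rule in LEARNED_RULES:
        if rule["action"] == action and all(fact in state for fact in rule["conditions"]):
            return (state - set(rule["remove"])) | set(rule["add"])
    return state  # no applicable rule: assume the action has no effect

state = {("has_gold", 150), ("has_peasant", True)}
print(predict_effects(state, "build_barracks"))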

Read the thesis:

Using First Order Inductive Learning as an Alternative to a Simulator in a Game Artificial Intelligence

by Katie Long

Undergraduate Thesis, College of Computing, Georgia Institute of Technology, Atlanta, GA, 2009
www.cs.utexas.edu/users/katie/UgradThesis.pdf