Posts Tagged ‘goal-driven learning’

TED: Imagine a world of AI

Ashwin Ram works on the AI behind Alexa, one of several new bots that might change the way your home and your life function within the next few years. Imagine a bot that turns on your lights, shops for you, even helps you make decisions. Learn more about a bot-enabled future that might have you saying (like Shah Rukh Khan does): “Alexa, I love you!”

 

TALK & TRANSCRIPT:
#TomorrowsWorld made easier with Artificial Intelligence. #TEDTalksIndiaNayiSoch

ted.com/talks/ashwin_ram_could_bots_make_your_life_better

BEHIND THE SCENES:
Innovator and entrepreneur Ashwin Ram believes AI will change our lives in the future. #TomorrowsWorld #TEDTalksIndiaNayiSoch

youtube.com/watch?v=kDvIsRuaq5k

FULL #TOMORROWSWORLD EPISODE:
Can you imagine what #TomorrowsWorld will be like? Shah Rukh Khan introduces.

tedtalksindianayisoch.hotstar.com/TED/episode-4.php

ALL TED TALKS INDIA NAYI SOCH:
#TEDTalksIndiaNayiSoch is a groundbreaking TV series showcasing new thinking from some of the brightest brains in India and beyond, hosted by “The King of Bollywood,” Shah Rukh Khan.

ted.com/series/ted_talks_india_nayi_soch

Conversational AI: Voice-Based Intelligent Agents

As we move from the age of the keyboard to the age of touch, and now to the age of voice, natural conversation in everyday language continues to be one of the ultimate challenges for AI. This is a difficult scientific problem involving knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning, and dialog planning, as well as a complex product design problem involving user experience and conversational engagement.

I will talk about why Conversational AI is hard, how conversational agents like Amazon Alexa understand and respond to voice interactions, how you can leverage these technologies for your own applications, and the challenges that still remain.
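To make the pipeline concrete, here is a toy sketch of the stages a conversational agent runs through: language understanding, dialog management with context, and response generation. This is a deliberately naive illustration, not Alexa's actual architecture; all intents, slots, and handlers are hypothetical.

```python
# Toy voice-assistant pipeline: NLU -> context/dialog management -> NLG.
# All intents, slots, and handlers are hypothetical illustrations.

def understand(utterance):
    """Very naive NLU: map an utterance to an intent and slots."""
    text = utterance.lower()
    if "light" in text:
        action = "on" if "on" in text else "off"
        return {"intent": "SetLight", "slots": {"state": action}}
    if "buy" in text or "order" in text:
        return {"intent": "Shop", "slots": {"item": text.split()[-1]}}
    return {"intent": "Unknown", "slots": {}}

def respond(parsed, context):
    """Dialog policy + NLG: choose and phrase a response, updating context."""
    if parsed["intent"] == "SetLight":
        state = parsed["slots"]["state"]
        context["light"] = state
        return f"Okay, turning the light {state}."
    if parsed["intent"] == "Shop":
        item = parsed["slots"]["item"]
        context.setdefault("cart", []).append(item)
        return f"Added {item} to your cart."
    return "Sorry, I didn't catch that."

context = {}
reply = respond(understand("turn on the light"), context)
```

A real system replaces each of these stages with learned models (and adds speech recognition and synthesis at either end), but the division of labor is the same.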

Variants of this talk presented (click links for video):
 
Keynote talks at The AI Conference (2017), O’Reilly AI Conference (2017), The AI Summit (2017), Stanford ASES Summit (2017), MLconf AI Conference (2017), Global AI Conference (2016).
 
Distinguished lectures at Georgia Tech/GVU (2017), Northwestern University (2017).
 
Keynote panel at Conversational Interaction Conference (2016).
 
Lightning TED-style talks at IIT Bay Area Conference (2017), Intersect (2017).
 

Making The Future Possible: Conversational AI in Amazon Alexa

No longer is AI solely a subject of science fiction. Advances in AI have resulted in enabling technologies for computer vision, planning, decision making, robotics, and most recently spoken language understanding. These technologies are driving business growth and freeing workers to engage in more creative and valuable tasks.

I’ll talk about how we moved from the age of the keyboard to the age of touch, and are now entering the age of voice. Alexa is making this future possible. Amazon is committed to fostering a robust cloud-based voice service, and it is this voice service that the innovators of today, tomorrow, and beyond will build on. It is this voice service, and the ecosystem around it, that awaits the next generation of AI talent.

Keynote at Udacity Intersect Conference, Computer History Museum, Mountain View, CA, March 8, 2017.
 

READ MORE:

blog.udacity.com/2017/02/dr-ashwin-ram-intersect-2017-speaker.html

VIEW THE TALK:

linkedin.com/feed/update/urn:li:activity:6286681682187812864

 

Real-Time Case-Based Reasoning for Interactive Digital Entertainment


User-generated content is everywhere: photos, videos, news, blogs, art, music, and every other type of digital media on the Social Web. Games are no exception. From strategy games to immersive virtual worlds, game players are increasingly engaged in creating and sharing nearly all aspects of the gaming experience: maps, quests, artifacts, avatars, clothing, even games themselves. Yet, there is one aspect of computer games that is not created and shared by game players: the AI. Building sophisticated personalities, behaviors, and strategies requires expertise in both AI and programming, and remains outside the purview of the end user.

To understand why authoring Game AI is hard, we need to understand how it works. AI can take digital entertainment beyond scripted interactions into the arena of truly interactive systems that are responsive, adaptive, and intelligent. I will discuss examples of AI techniques for character-level AI (in embedded NPCs, for example) and game-level AI (in the drama manager, for example). These types of AI enhance the player experience in different ways. The techniques are complicated and are usually implemented by expert game designers.

I propose an alternative approach to designing Game AI: Real-Time CBR. This approach extends CBR to real-time systems that operate asynchronously during game play, planning, adapting, and learning in an online manner. Originally developed for robotic control, Real-Time CBR can be used for interactive games ranging from multiplayer strategy games to interactive believable avatars in virtual worlds.

As with any CBR technique, Real-Time CBR integrates problem solving with learning. This property can be used to address the authoring problem. I will show the first Web 2.0 application that allows average users to create AIs and challenge their friends to play them—without programming. I conclude with some thoughts about the role of CBR in AI-based Interactive Digital Entertainment.
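The CBR cycle underlying this approach can be sketched in a few lines. Below is a minimal retrieve-reuse-retain loop, written sequentially; the Real-Time CBR described above additionally runs these phases asynchronously during game play. The state features and cases are hypothetical stand-ins for game situations.

```python
# Minimal case-based reasoning cycle: retrieve the most similar stored
# case, reuse its action, and retain the new episode for future reuse.
# State features and cases are hypothetical game situations.

def distance(a, b):
    """Similarity metric: sum of absolute feature differences."""
    return sum(abs(a[k] - b[k]) for k in a)

class CaseBase:
    def __init__(self, cases):
        self.cases = list(cases)  # (situation, action) pairs

    def retrieve(self, situation):
        """Find the stored case most similar to the current situation."""
        return min(self.cases, key=lambda c: distance(c[0], situation))

    def retain(self, situation, action):
        """Learning step: store the new experience as a case."""
        self.cases.append((situation, action))

cb = CaseBase([
    ({"enemies": 5, "health": 20}, "retreat"),
    ({"enemies": 1, "health": 90}, "attack"),
])

now = {"enemies": 2, "health": 85}
_, action = cb.retrieve(now)   # reuse the closest case's action
cb.retain(now, action)         # case base grows as the agent plays
```

Because problem solving (retrieve/reuse) and learning (retain) share one loop, every episode of play enlarges the case base, which is what makes the authoring-by-demonstration idea above possible.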

Keynote talk at the Eighteenth Conference on Pattern Recognition and Artificial Intelligence (RFIA-12), Lyon, France, February 5, 2012.
Slides and video here: rfia2012.liris.cnrs.fr/doku.php?id=pub:ram
 
Keynote talk at the Eleventh Scandinavian Conference on Artificial Intelligence (SCAI-11), Trondheim, Norway, May 25, 2011.
 
Keynote talk at the 2010 International Conference on Case-Based Reasoning (ICCBR-10), Alessandria, Italy, July 22, 2010.
 
GVU Brown Bag talk, October 14, 2010. Watch the talk here: www.gvu.gatech.edu/node/4320 
 
View the talk:
www.sais.se/blog/?p=57

Meta-Level Behavior Adaptation in Real-Time Strategy Games

AI agents designed for real-time settings need to adapt themselves to changing circumstances to improve their performance and remedy their faults. Agents designed for computer games, however, typically lack this ability. This lack of adaptivity breaks the player experience when agents repeatedly fail to behave properly in circumstances unforeseen by the game designers.

We present an AI technique for game-playing agents that helps them adapt to changing game circumstances. The agents carry out runtime adaptation of their behavior sets by monitoring and reasoning about their behavior execution and using this reasoning to dynamically revise their behaviors. The evaluation of the behavior adaptation approach in a complex real-time strategy game shows that the agents adapt themselves and improve their performance by revising their behavior sets appropriately.
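The monitor-reason-revise loop described above can be sketched as follows. This is a simplified illustration, not the paper's implementation: the behavior names, the failure-count trigger, and the one-for-one behavior swap are all hypothetical simplifications of the meta-level reasoning over execution traces.

```python
# Sketch of meta-level behavior adaptation: execute behaviors, monitor
# outcomes in a trace, and revise the behavior set when a behavior
# repeatedly fails. Names and the failure threshold are hypothetical.

class AdaptiveAgent:
    def __init__(self, behaviors, fail_limit=2):
        self.behaviors = dict(behaviors)  # behavior -> revised replacement
        self.trace = []                   # execution trace for meta-reasoning
        self.fail_limit = fail_limit

    def record(self, behavior, succeeded):
        """Monitoring step: log each behavior execution outcome."""
        self.trace.append((behavior, succeeded))
        self.adapt(behavior)

    def adapt(self, behavior):
        """Meta-level step: reason over the trace and revise behaviors."""
        failures = sum(1 for b, ok in self.trace if b == behavior and not ok)
        if failures >= self.fail_limit and behavior in self.behaviors:
            revised = self.behaviors.pop(behavior)
            self.behaviors[revised] = None  # swap in the revised behavior

agent = AdaptiveAgent({"rush_attack": "harass_then_expand"})
agent.record("rush_attack", False)
agent.record("rush_attack", False)  # second failure triggers revision
```

The key point is the separation of levels: the base level executes behaviors, while the meta level only observes the trace and edits the behavior set.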

Read the paper:

Meta-Level Behavior Adaptation in Real-Time Strategy Games

by Manish Mehta, Santi Ontañon, Ashwin Ram

ICCBR-10 Workshop on Case-Based Reasoning for Computer Games, Alessandria, Italy, 2010.
www.cc.gatech.edu/faculty/ashwin/papers/er-10-02.pdf

Run-Time Behavior Adaptation for Real-Time Interactive Games

Intelligent agents working in real-time domains need to adapt to changing circumstances so that they can improve their performance and avoid their mistakes. AI agents designed for interactive games, however, typically lack this ability. Game agents are traditionally implemented using static, hand-authored behaviors or scripts that are brittle to changing world dynamics and break the player experience when they repeatedly fail. Furthermore, their static nature demands considerable effort from game designers, who must anticipate every circumstance the agent might encounter. The problem is exacerbated in state-of-the-art computer games, whose huge decision spaces, interactive user input, and real-time performance requirements make creating AI for these domains even harder.

In this paper we address the non-adaptivity of game-playing agents in complex real-time domains. The agents carry out run-time adaptation of their behavior sets by monitoring and reasoning about their behavior execution and dynamically revising their behaviors. The behavior adaptation approach has been instantiated in two real-time interactive game domains. The evaluation results show that the agents in both domains successfully adapt themselves by revising their behavior sets appropriately.

Read the paper:

Run-Time Behavior Adaptation for Real-Time Interactive Games

by Manish Mehta, Ashwin Ram

IEEE Transactions on Computational Intelligence and AI in Games, Vol. 1, No. 3, September 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-09-09.pdf

Goal-Driven Learning in the GILA Integrated Intelligence Architecture

Goal-Driven Learning (GDL) focuses on systems that determine for themselves what has to be learned and how to learn it. Typically, GDL systems use meta-reasoning capabilities over a base reasoner, identifying learning goals and devising strategies. In this paper we present a novel GDL technique for complex AI systems in which the meta-reasoning module must analyze the reasoning traces of multiple components with potentially different learning paradigms. Our approach distributes the generation of learning strategies among the different modules instead of centralizing it in the meta-reasoner. We implemented our technique in the GILA system, which operates in the airspace task orders domain, showing an increase in performance.
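The distributed idea can be sketched as follows: the meta-reasoner identifies learning goals, but each module proposes strategies only for the goals it can address, rather than one central component devising strategies for every paradigm. The module names, goals, and strategy strings below are hypothetical illustrations, not GILA's actual components.

```python
# Sketch of distributed goal-driven learning: the meta-reasoner
# identifies learning goals; each module devises strategies for the
# goals it can address. Module names and goals are hypothetical.

class Module:
    def __init__(self, name, handles):
        self.name = name
        self.handles = set(handles)  # learning goals this module can address

    def propose(self, goal):
        """Each module devises its own strategy for goals it understands."""
        if goal in self.handles:
            return f"{self.name}:learn({goal})"
        return None

def plan_learning(modules, goals):
    """Meta-reasoner only identifies goals and collects module proposals."""
    return [s for g in goals for m in modules
            if (s := m.propose(g)) is not None]

modules = [Module("planner", {"refine_plan"}),
           Module("case_learner", {"acquire_case"})]
strategies = plan_learning(modules, ["refine_plan", "acquire_case"])
```

The payoff is that the meta-reasoner never needs to understand every module's learning paradigm; it only needs a shared vocabulary of learning goals.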

Read the paper:

Goal-Driven Learning in the GILA Integrated Intelligence Architecture

by Jai Radhakrishnan, Santi Ontañón, Ashwin Ram

International Joint Conference on Artificial Intelligence (IJCAI-09), Pasadena, CA, July 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-09-02.pdf

Using First Order Inductive Learning as an Alternative to a Simulator in a Game Artificial Intelligence

Currently many game artificial intelligences attempt to determine their next moves by using a simulator to predict the effect of actions in the world. However, writing such a simulator is time-consuming, and the simulator must be changed substantially whenever a detail in the game design is modified. As such, this research project set out to determine if a version of the first order inductive learning algorithm could be used to learn rules that could then be used in place of a simulator.

We used an existing game artificial intelligence system called Darmok 2. By eliminating the need to write a simulator for each game by hand, the entire Darmok 2 project could more easily adapt to additional real-time strategy games. Over time, Darmok 2 would also be able to provide better competition for human players by training the artificial intelligences to play against the style of a specific player. Most importantly, Darmok 2 might also be able to create a general solution for creating game artificial intelligences, which could save game development companies a substantial amount of money, time, and effort.
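The core idea of replacing a hand-written simulator with learned action-effect rules can be sketched as below. The rules here are hand-specified stand-ins for what a FOIL-style learner would induce from game traces; the actions, state features, and costs are hypothetical, not Darmok 2's actual rule language.

```python
# Sketch: learned action-effect rules stand in for a simulator. Each
# rule pairs an action with a precondition and an effect over a simple
# state dict. Actions, features, and costs are hypothetical.

learned_rules = [
    ("harvest",
     lambda s: s["workers"] > 0,
     lambda s: {**s, "gold": s["gold"] + 10}),
    ("train_worker",
     lambda s: s["gold"] >= 50,
     lambda s: {**s, "gold": s["gold"] - 50, "workers": s["workers"] + 1}),
]

def predict(state, action):
    """Use learned rules instead of a simulator to predict an effect."""
    for name, pre, eff in learned_rules:
        if name == action and pre(state):
            return eff(state)
    return state  # no applicable rule: assume no change

s0 = {"gold": 50, "workers": 1}
s1 = predict(s0, "harvest")        # workers > 0, so gold increases
s2 = predict(s1, "train_worker")   # gold >= 50, so spend it on a worker
```

When a game design changes, only the rules need to be re-learned from new traces, rather than a simulator rewritten by hand, which is the adaptability argument made above.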

Read the thesis:

Using First Order Inductive Learning as an Alternative to a Simulator in a Game Artificial Intelligence

by Katie Long

Undergraduate Thesis, College of Computing, Georgia Institute of Technology, Atlanta, GA, 2009
www.cs.utexas.edu/users/katie/UgradThesis.pdf

New Directions in Goal-Driven Learning

Goal-Driven Learning (GDL) views learning as a strategic process in which the learner attempts to identify and satisfy its learning needs in the context of its tasks and goals. This is modeled as a planful process where the learner analyzes its reasoning traces to identify learning goals, and composes a set of learning strategies (modeled as planning operators) into a plan to learn by satisfying those learning goals.

Traditional GDL frameworks were based on traditional planners. However, modern AI systems often deal with real-time scenarios where learning and performance happen in a reactive real-time fashion, or are composed of multiple agents that use different learning and reasoning paradigms. In this talk, I discuss new GDL frameworks that handle such problems, incorporating reactive and multi-agent planning techniques in order to manage learning in these kinds of AI systems.

About this talk:

New Directions in Goal-Driven Learning

by Ashwin Ram

Invited keynote at International Conference on Machine Learning (ICML-08) Workshop on Planning to Learn, Helsinki, Finland, July 2008

Introspective Multistrategy Learning: On the Construction of Learning Strategies

A central problem in multistrategy learning systems is the selection and sequencing of machine learning algorithms for particular situations. This is typically done by the system designer who analyzes the learning task and implements the appropriate algorithm or sequence of algorithms for that task. We propose a solution to this problem which enables an AI system with a library of machine learning algorithms to select and sequence appropriate algorithms autonomously. Furthermore, instead of relying on the system designer or user to provide a learning goal or target concept to the learning system, our method enables the system to determine its learning goals based on analysis of its successes and failures at the performance task.

The method involves three steps: Given a performance failure, the learner examines a trace of its reasoning prior to the failure to diagnose what went wrong (blame assignment); given the resultant explanation of the reasoning failure, the learner posts explicitly represented learning goals to change its background knowledge (deciding what to learn); and given a set of learning goals, the learner uses nonlinear planning techniques to assemble a sequence of machine learning algorithms, represented as planning operators, to achieve the learning goals (learning-strategy construction). In support of these operations, we define the types of reasoning failures, a taxonomy of failure causes, a second-order formalism to represent reasoning traces, a taxonomy of learning goals that specify desired change to the background knowledge of a system, and a declarative task-formalism representation of learning algorithms.
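The three steps just described can be sketched as a pipeline: blame assignment maps a failure to a cause, the cause yields explicit learning goals, and each goal is achieved by a learning algorithm treated as a planning operator. The failure types, goals, and operator names below are hypothetical simplifications of Meta-AQUA's formalism.

```python
# Sketch of the three-step method: blame assignment over a failure,
# posting learning goals, and assembling learning algorithms (as
# operators) into a plan. All names are hypothetical simplifications.

# Step 1: blame assignment -- map an observed failure to a cause.
failure_causes = {"wrong_prediction": "incorrect_domain_rule",
                  "impasse": "missing_knowledge"}

# Step 2: deciding what to learn -- map a cause to learning goals.
goals_for_cause = {"incorrect_domain_rule": ["revise_rule"],
                   "missing_knowledge": ["acquire_concept", "index_case"]}

# Step 3: strategy construction -- each operator achieves one goal.
operators = {"revise_rule": "explanation_based_refinement",
             "acquire_concept": "similarity_based_learning",
             "index_case": "index_learning"}

def learning_plan(failure):
    """Compose a sequenced learning strategy for a reasoning failure."""
    cause = failure_causes[failure]
    goals = goals_for_cause[cause]
    return [operators[g] for g in goals]

plan = learning_plan("impasse")
```

In the full method the third step uses nonlinear planning, so operator orderings are chosen to avoid the negative interactions between learning algorithms that the evaluation below demonstrates; this sketch only shows the goal-to-operator mapping.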

We present the Meta-AQUA system, an implemented multistrategy learner that operates in the domain of story understanding. Extensive empirical evaluations of Meta-AQUA show that it performs significantly better in a deliberative, planful mode than in a reflexive mode in which learning goals are ablated and, furthermore, that the arbitrary ordering of learning algorithms can lead to worse performance than no learning at all. We conclude that explicit representation and sequencing of learning goals is necessary for avoiding negative interactions between learning algorithms that can lead to less effective learning.

Read the paper:

Introspective Multistrategy Learning: On the Construction of Learning Strategies

by Mike Cox, Ashwin Ram

Artificial Intelligence, 112:1-55, 1999
www.cc.gatech.edu/faculty/ashwin/papers/er-99-01.pdf