Posts Tagged ‘meta-reasoning’

Learning from Demonstration to be a Good Team Member in a Role Playing Game

We present an approach that uses learning from demonstration in a computer role playing game. We describe a behavior engine that uses case-based reasoning. The behavior engine accepts observation traces of human playing decisions and produces a sequence of actions which can then be carried out by an artificial agent within the gaming environment. Our work focuses on team-based role playing games, where the agents produced by the behavior engine act as team members within a mixed human-agent team. We present the results of a study we conducted, where we assess both the quantitative and qualitative performance difference between human-only teams compared with hybrid human-agent teams.
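The paper itself does not include code, but the core idea, storing human play traces as cases and retrieving the closest one at run time, can be sketched roughly as follows; the state features, distance metric, and action names here are illustrative assumptions, not the authors' actual representation.

```python
# Minimal sketch of a case-based learning-from-demonstration loop.
# State features and the distance metric are illustrative assumptions,
# not the representation used in the paper.

from dataclasses import dataclass
import math

@dataclass
class Case:
    state: dict      # observed game-state features (hypothetical)
    action: str      # action the human demonstrator took

class DemonstrationBehaviorEngine:
    def __init__(self):
        self.case_base: list[Case] = []

    def observe(self, state: dict, action: str) -> None:
        """Store one step of a human demonstration trace as a case."""
        self.case_base.append(Case(state, action))

    def _distance(self, a: dict, b: dict) -> float:
        """Euclidean distance over shared numeric features (assumed metric)."""
        keys = set(a) & set(b)
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in keys))

    def act(self, current_state: dict) -> str:
        """Retrieve the most similar demonstrated state and reuse its action."""
        best = min(self.case_base, key=lambda c: self._distance(c.state, current_state))
        return best.action

# Toy usage: the agent imitates the demonstrator's healing behavior.
engine = DemonstrationBehaviorEngine()
engine.observe({"ally_hp": 0.3, "enemy_dist": 5.0}, "cast_heal")
engine.observe({"ally_hp": 0.9, "enemy_dist": 2.0}, "attack")
print(engine.act({"ally_hp": 0.25, "enemy_dist": 6.0}))  # -> cast_heal
```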

Learning from Demonstration to be a Good Team Member in a Role Playing Game

by Michael Silva, Silas McCroskey, Jonathan Rubin, Michael Youngblood, Ashwin Ram

26th International FLAIRS Conference on Artificial Intelligence (FLAIRS-13).
www.cc.gatech.edu/faculty/ashwin/papers/er-13-01.pdf

Construction and Adaptation of AI Behaviors in Computer Games

Computer games are an increasingly popular application for Artificial Intelligence (AI) research, and conversely AI is an increasingly popular selling point for commercial digital games. AI for non-player characters (NPCs) in computer games tends to come from people with computing skills well beyond those of the average user. The prime reason for the lack of involvement of novice users in creating AI behaviors for NPCs in computer games is that constructing high-quality AI behaviors is a hard problem.

There are two reasons for this. First, creating a set of AI behaviors requires specialized design and programming skills, which restricts the activity to individuals with expertise in this area. There is little understanding of how the behavior authoring process can be simplified with easy-to-use authoring environments so that novice users (without programming and design experience) can carry out the behavior authoring task. Second, the constructed AI behaviors contain problems and bugs that cause a break in player experience when the problematic behaviors repeatedly fail. It is harder for novice users to identify, modify, and correct problems with the authored behavior sets, as they do not have the necessary debugging and design experience.

These two issues give rise to a couple of interesting questions that need to be investigated: a) How can the AI behavior construction process be simplified so that a novice user (without programming and design experience) can easily carry out the authoring activity? and b) How can novice users be supported in identifying and correcting problems with the authored behavior sets? In this thesis, I explore these issues and propose a solution within an application domain named Second Mind (SM). In SM, novice users who do not have expertise in computer programming employ an authoring interface to design behaviors for intelligent virtual characters performing a service in a virtual world. These services range from shopkeepers to museum hosts. The constructed behaviors are further repaired using an AI-based approach.

To evaluate the construction and repair approach, we conduct experiments with human subjects. Based on developing and evaluating the solution, I claim that a behavior-timeline-based interaction design for behavior construction, supported by an understandable vocabulary and a reduced feature-representation formalism, enables novice users to author AI behaviors in an easy and understandable manner for NPCs performing a service in a virtual world. I further claim that an introspective reasoning approach based on comparing successful and unsuccessful execution traces can successfully identify breaks in player experience and modify the failures to improve the experience of a player interacting with NPCs performing a service in a virtual world.

The work contributes in three ways by providing: 1) a novel introspective reasoning approach for successfully detecting and repairing failures in AI behaviors for NPCs performing a service in a virtual world; 2) an authoring environment, understandable to novice users, that helps them create AI behaviors for such NPCs in an easy and understandable manner; and 3) design, debugging, and testing scaffolding that helps novice users modify their authored AI behaviors and achieve higher-quality modified behaviors compared to their original unmodified behaviors.
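As a rough illustration of the trace-comparison idea behind the repair claim, the sketch below flags where a failed execution diverges from successful ones; the event names and the simple divergence test are assumptions made for this sketch, not Second Mind's actual introspective reasoner.

```python
# Sketch: flag the point where a failed execution trace diverges from
# successful ones, as a candidate location for behavior repair.
# Trace events and the divergence heuristic are illustrative assumptions.

from typing import Optional

def divergence_point(successful_traces: list, failed_trace: list) -> Optional[int]:
    """Return the index of the first step in the failed trace whose event never
    occurs at that step in any successful trace, or None if there is no divergence."""
    for i, event in enumerate(failed_trace):
        expected = {t[i] for t in successful_traces if i < len(t)}
        if expected and event not in expected:
            return i
    return None

# Toy usage: two successful service interactions and one failed one.
ok_runs = [["greet", "ask_item", "fetch_item", "hand_over"],
           ["greet", "ask_item", "fetch_item", "hand_over"]]
bad_run = ["greet", "ask_item", "idle", "idle"]
print(divergence_point(ok_runs, bad_run))  # -> 2: the NPC idled instead of fetching
```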

Read the dissertation:

Construction and Adaptation of AI Behaviors in Computer Games

by Manish Mehta

PhD dissertation, College of Computing, Georgia Institute of Technology, August 2011.

smartech.gatech.edu/handle/1853/42724

Real-Time Case-Based Reasoning for Interactive Digital Entertainment


User-generated content is everywhere: photos, videos, news, blogs, art, music, and every other type of digital media on the Social Web. Games are no exception. From strategy games to immersive virtual worlds, game players are increasingly engaged in creating and sharing nearly all aspects of the gaming experience: maps, quests, artifacts, avatars, clothing, even games themselves. Yet, there is one aspect of computer games that is not created and shared by game players: the AI. Building sophisticated personalities, behaviors, and strategies requires expertise in both AI and programming, and remains outside the purview of the end user.

To understand why authoring Game AI is hard, we need to understand how it works. AI can take digital entertainment beyond scripted interactions into the arena of truly interactive systems that are responsive, adaptive, and intelligent. I will discuss examples of AI techniques for character-level AI (in embedded NPCs, for example) and game-level AI (in the drama manager, for example). These types of AI enhance the player experience in different ways. The techniques are complicated and are usually implemented by expert game designers.

I propose an alternative approach to designing Game AI: Real-Time CBR. This approach extends CBR to real-time systems that operate asynchronously during game play, planning, adapting, and learning in an online manner. Originally developed for robotic control, Real-Time CBR can be used for interactive games ranging from multiplayer strategy games to interactive believable avatars in virtual worlds.

As with any CBR technique, Real-Time CBR integrates problem solving with learning. This property can be used to address the authoring problem. I will show the first Web 2.0 application that allows average users to create AIs and challenge their friends to play them—without programming. I conclude with some thoughts about the role of CBR in AI-based Interactive Digital Entertainment.
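A minimal sketch of the asynchronous flavor of this idea, with the CBR cycle running on a background thread while the game loop keeps acting on whatever plan is currently available; the case contents, similarity measure, and timing are assumptions for illustration, not the actual system.

```python
# Sketch: a CBR reasoner running asynchronously alongside a game loop.
# The game never blocks on retrieval; it always acts on the latest plan produced.
# Case contents, similarity, and timing are illustrative assumptions.

import threading
import time
import random

class RealTimeCBR:
    def __init__(self, case_base):
        self.case_base = case_base          # (situation, plan) pairs
        self.current_plan = "wait"
        self.latest_state = 0.0
        self._stop = threading.Event()

    def reasoning_loop(self):
        """Retrieve-and-adapt cycle, running continuously off the game thread."""
        while not self._stop.is_set():
            state = self.latest_state
            _, plan = min(self.case_base, key=lambda c: abs(c[0] - state))
            self.current_plan = plan        # publish the plan for the game loop
            time.sleep(0.05)                # reasoning is slower than the frame rate

    def stop(self):
        self._stop.set()

cbr = RealTimeCBR([(0.2, "defend"), (0.8, "attack")])
threading.Thread(target=cbr.reasoning_loop, daemon=True).start()

for frame in range(5):                      # simplified game loop
    cbr.latest_state = random.random()      # game updates the shared state
    time.sleep(0.02)
    print(frame, cbr.current_plan)          # game acts on whatever plan is ready
cbr.stop()
```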

Keynote talk at the Eighteenth Conference on Pattern Recognition and Artificial Intelligence (RFIA-12), Lyon, France, February 5, 2012.
Slides and video here: rfia2012.liris.cnrs.fr/doku.php?id=pub:ram
 
Keynote talk at the Eleventh Scandinavian Conference on Artificial Intelligence (SCAI-11), Trondheim, Norway, May 25, 2011.
 
Keynote talk at the 2010 International Conference on Case-Based Reasoning (ICCBR-10), Alessandria, Italy, July 22, 2010.
 
GVU Brown Bag talk, October 14, 2010. Watch the talk here: www.gvu.gatech.edu/node/4320 
 
View the talk:
www.sais.se/blog/?p=57


Meta-Level Behavior Adaptation in Real-Time Strategy Games

AI agents designed for real-time settings need to adapt to changing circumstances to improve their performance and remedy their faults. Agents designed for computer games, however, typically lack this ability. This lack of adaptivity causes a break in player experience when agents repeatedly fail to behave properly in circumstances unforeseen by the game designers.

We present an AI technique for game-playing agents that helps them adapt to changing game circumstances. The agents carry out runtime adaptation of their behavior sets by monitoring and reasoning about their behavior execution and using this reasoning to dynamically revise their behaviors. The evaluation of the behavior adaptation approach in a complex real-time strategy game shows that the agents adapt themselves and improve their performance by revising their behavior sets appropriately.
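Roughly, the monitor-then-revise loop described above might look like the following sketch; the failure test and the revision operator (swapping in a fallback behavior) are placeholders, not the paper's actual adaptation operators.

```python
# Sketch: monitor behavior executions, count failures, and revise the
# behavior set when a behavior keeps failing. The failure test and the
# revision (swapping in a fallback) are illustrative assumptions.

from collections import Counter

class AdaptiveAgent:
    def __init__(self, behaviors, fallbacks, failure_threshold=3):
        self.behaviors = dict(behaviors)        # name -> callable returning success bool
        self.fallbacks = dict(fallbacks)        # name -> replacement behavior
        self.failures = Counter()
        self.failure_threshold = failure_threshold

    def execute(self, name, game_state):
        success = self.behaviors[name](game_state)
        if not success:
            self.failures[name] += 1
            if self.failures[name] >= self.failure_threshold and name in self.fallbacks:
                # Revision step: replace the repeatedly failing behavior.
                self.behaviors[name] = self.fallbacks[name]
                self.failures[name] = 0
        return success

# Toy usage: "rush" keeps failing against a defended base, so the agent swaps it out.
agent = AdaptiveAgent(
    behaviors={"rush": lambda s: not s["base_defended"]},
    fallbacks={"rush": lambda s: True},     # stand-in for a safer behavior
)
for _ in range(4):
    agent.execute("rush", {"base_defended": True})
print(agent.execute("rush", {"base_defended": True}))  # -> True after revision
```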

Read the paper:

Meta-Level Behavior Adaptation in Real-Time Strategy Games

by Manish Mehta, Santi Ontañón, Ashwin Ram

ICCBR-10 Workshop on Case-Based Reasoning for Computer Games, Alessandria, Italy, 2010.
www.cc.gatech.edu/faculty/ashwin/papers/er-10-02.pdf

User-Generated AI for Interactive Digital Entertainment

CMU Seminar

User-generated content is everywhere: photos, videos, news, blogs, art, music, and every other type of digital media on the Social Web. Games are no exception. From strategy games to immersive virtual worlds, game players are increasingly engaged in creating and sharing nearly all aspects of the gaming experience: maps, quests, artifacts, avatars, clothing, even games themselves. Yet, there is one aspect of computer games that is not created and shared by game players: the AI. Building sophisticated personalities, behaviors, and strategies requires expertise in both AI and programming, and remains outside the purview of the end user.

To understand why Game AI is hard, we need to understand how it works. AI can take digital entertainment beyond scripted interactions into the arena of truly interactive systems that are responsive, adaptive, and intelligent. I discuss examples of AI techniques for character-level AI (in embedded NPCs, for example) and game-level AI (in the drama manager, for example). These types of AI enhance the player experience in different ways. The techniques are complicated and are usually implemented by expert game designers.

I argue that User-Generated AI is the next big frontier in the rapidly growing Social Gaming area. From Sims to Risk to World of Warcraft, end users want to create, modify, and share not only the appearance but the “minds” of their characters. I present my recent research on intelligent technologies to assist Game AI authors, and show the first Web 2.0 application that allows average users to create AIs and challenge their friends to play them—without programming. I conclude with some thoughts about the future of AI-based Interactive Digital Entertainment.

CMU Robotics & Intelligence Seminar, September 28, 2009
Carnegie-Mellon University, Pittsburgh, PA.
MIT Media Lab Colloquium, January 25, 2010
Massachusetts Institute of Technology, Cambridge, MA.
Stanford Media X Philips Seminar, February 1, 2010
Stanford University, Stanford, CA.
Pixar Research Seminar, February 2, 2010

View the talk:
www.sais.se/blog/?p=57


Run-Time Behavior Adaptation for Real-Time Interactive Games

Intelligent agents working in real-time domains need to adapt to changing circumstances so that they can improve their performance and avoid their mistakes. AI agents designed for interactive games, however, typically lack this ability. Game agents are traditionally implemented using static, hand-authored behaviors or scripts that are brittle to changing world dynamics and cause a break in player experience when they repeatedly fail. Furthermore, their static nature places a heavy burden on game designers, who must anticipate every circumstance the agent might encounter. The problem is exacerbated because state-of-the-art computer games have huge decision spaces, interactive user input, and real-time performance requirements, all of which make creating AI approaches for these domains harder.

In this paper we address the issue of non-adaptivity of game-playing agents in complex real-time domains. The agents carry out run-time adaptation of their behavior sets by monitoring and reasoning about their behavior execution and using this reasoning to dynamically revise their behaviors. The behavior adaptation approach has been instantiated in two real-time interactive game domains. The evaluation results show that the agents in the two domains successfully adapt themselves by revising their behavior sets appropriately.

Read the paper:

Run-Time Behavior Adaptation for Real-Time Interactive Games

by Manish Mehta, Ashwin Ram

IEEE Transactions on Computational Intelligence and AI in Games, Vol. 1, No. 3, September 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-09-09.pdf

Using Meta-Reasoning to Improve the Performance of Case-Based Planning

Case-based planning (CBP) systems are based on the idea of reusing past successful plans for solving new problems. Previous research has shown the ability of meta-reasoning approaches to improve the performance of CBP systems. In this paper we present a new meta-reasoning approach for autonomously improving the performance of CBP systems that operate in real-time domains.

Our approach uses failure patterns to detect anomalous behaviors, and it learns from experience which of the detected failures are important enough to be fixed. Finally, it can exploit both successful and failed executions for meta-reasoning.

We illustrate its benefits with experimental results from a system implementing our approach called Meta-Darmok in a real-time strategy game. The evaluation of Meta-Darmok shows that the system successfully adapts itself and its performance improves through appropriate revision of the case base.
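One way to picture the "learn which failures matter" step is to correlate each detected failure pattern with game outcomes and only mark patterns strongly associated with losses for repair; the pattern names, outcomes, and scoring rule below are assumptions for this sketch, not Meta-Darmok's actual mechanism.

```python
# Sketch: estimate how strongly each detected failure pattern is associated
# with losing, and only mark high-scoring patterns for repair.
# Pattern names, outcomes, and the scoring rule are illustrative assumptions.

from collections import defaultdict

def patterns_worth_fixing(episodes, min_loss_rate=0.75):
    """episodes: list of (detected_failure_patterns, won) pairs from past games."""
    stats = defaultdict(lambda: [0, 0])              # pattern -> [losses, occurrences]
    for patterns, won in episodes:
        for p in patterns:
            stats[p][1] += 1
            stats[p][0] += 0 if won else 1
    return {p: losses / total for p, (losses, total) in stats.items()
            if losses / total >= min_loss_rate}

episodes = [({"idle_peasants", "late_attack"}, False),
            ({"idle_peasants"}, True),
            ({"late_attack"}, False)]
print(patterns_worth_fixing(episodes))   # -> {'late_attack': 1.0}
```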

Read the paper:

Using Meta-Reasoning to Improve the Performance of Case-Based Planning

by Manish Mehta, Santi Ontañón, Ashwin Ram

International Conference on Case-Based Reasoning (ICCBR-09), Seattle, July 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-09-06.pdf

An Ensemble Learning and Problem Solving Architecture for Airspace Management

In this paper we describe the application of a novel learning and problem solving architecture to the domain of airspace management, where multiple requests for the use of airspace need to be reconciled and managed automatically. The key feature of our “Generalized Integrated Learning Architecture” (GILA) is a set of integrated learning and reasoning (ILR) systems coordinated by a central meta-reasoning executive (MRE). Each ILR learns independently from the same training example and contributes to problem-solving in concert with other ILRs as directed by the MRE. Formal evaluations show that our system performs as well as or better than humans after learning from the same training data. Further, GILA outperforms any individual ILR run in isolation, thus demonstrating the power of the ensemble architecture for learning and problem solving.
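The ensemble structure can be sketched as several independent learner/reasoners each proposing a solution, with an executive selecting among the proposals; the confidence scores and toy sub-problems below are assumptions, not GILA's actual coordination protocol.

```python
# Sketch of the ensemble idea: each integrated learning-and-reasoning (ILR)
# module proposes a solution to a sub-problem, and a meta-reasoning executive
# (MRE) selects the best-scoring proposal. Scores and sub-problems are
# illustrative assumptions.

class ILR:
    def __init__(self, name, solver):
        self.name, self.solver = name, solver

    def propose(self, subproblem):
        solution, confidence = self.solver(subproblem)
        return self.name, solution, confidence

def meta_reasoning_executive(ilrs, subproblems):
    plan = {}
    for sp in subproblems:
        proposals = [ilr.propose(sp) for ilr in ilrs]
        name, solution, _ = max(proposals, key=lambda p: p[2])   # pick most confident
        plan[sp] = (solution, name)
    return plan

ilrs = [ILR("symbolic", lambda sp: (f"route-A for {sp}", 0.6)),
        ILR("case-based", lambda sp: (f"route-B for {sp}", 0.8))]
print(meta_reasoning_executive(ilrs, ["airspace-request-1", "airspace-request-2"]))
```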

Read the paper:

An Ensemble Learning and Problem Solving Architecture for Airspace Management

by XS Zhang et al.

International Conference on Innovative Applications of Artificial Intelligence (IAAI-09), Pasadena, CA, July 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-09-03.pdf

Goal-Driven Learning in the GILA Integrated Intelligence Architecture

Goal-Driven Learning (GDL) focuses on systems that determine by themselves what has to be learned and how to learn it. Typically, GDL systems use meta-reasoning capabilities over a base reasoner, identifying learning goals and devising strategies. In this paper we present a novel GDL technique for complex AI systems in which the meta-reasoning module has to analyze the reasoning traces of multiple components with potentially different learning paradigms. Our approach works by distributing the generation of learning strategies among the different modules instead of centralizing it in the meta-reasoner. We implemented our technique in the GILA system, which works in the airspace task-orders domain, and show an increase in performance.
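The distributed variant described here can be pictured as each module inspecting its own reasoning trace and proposing its own learning goal, with the meta-reasoner merely collecting them; the module names and goal texts below are hypothetical.

```python
# Sketch: distributed goal-driven learning. Each module analyzes its own
# reasoning trace and proposes a learning goal; the meta-reasoner only
# aggregates the goals rather than generating strategies centrally.
# Module names and goal texts are hypothetical.

def planner_module(trace):
    if "constraint_violation" in trace:
        return "learn tighter airspace constraints"
    return None

def scheduler_module(trace):
    if "deadline_missed" in trace:
        return "learn faster conflict-resolution heuristics"
    return None

def collect_learning_goals(modules, traces):
    goals = []
    for module, trace in zip(modules, traces):
        goal = module(trace)        # each module decides for itself what to learn
        if goal:
            goals.append(goal)
    return goals

print(collect_learning_goals(
    [planner_module, scheduler_module],
    [["constraint_violation"], ["ok"]]))   # -> ['learn tighter airspace constraints']
```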

Read the paper:

Goal-Driven Learning in the GILA Integrated Intelligence Architecture

by Jai Radhakrishnan, Santi Ontañón, Ashwin Ram

International Joint Conference on Artificial Intelligence (IJCAI-09), Pasadena, CA, July 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-09-02.pdf

Emotional Memory and Adaptive Personalities

Believable agents designed for long-term interaction with human users need to adapt to them in a way which appears emotionally plausible while maintaining a consistent personality. For short-term interactions in restricted environments, scripting and state machine techniques can create agents with emotion and personality, but these methods are labor intensive, hard to extend, and brittle in new environments. Fortunately, research in memory, emotion and personality in humans and animals points to a solution to this problem. Emotions focus an animal’s attention on things it needs to care about, and strong emotions trigger enhanced formation of memory, enabling the animal to adapt its emotional response to the objects and situations in its environment. In humans this process becomes reflective: emotional stress or frustration can trigger re-evaluating past behavior with respect to personal standards, which in turn can lead to setting new strategies or goals.

To aid the authoring of adaptive agents, we present an artificial intelligence model inspired by these psychological results in which an emotion model triggers case-based emotional preference learning and behavioral adaptation guided by personality models. Our tests of this model on robot pets and embodied characters show that emotional adaptation can extend the range and increase the behavioral sophistication of an agent without the need for authoring additional hand-crafted behaviors.
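A toy rendering of the "strong emotion triggers enhanced memory formation" idea: a case is stored only when arousal exceeds a threshold, and stored preferences then bias later behavior; the emotion scale and threshold are assumptions, not the model presented in the chapter.

```python
# Sketch: emotion-gated memory formation. A case is stored only when the
# emotional response is strong, and later retrievals bias the agent away
# from stimuli it has learned to dislike. The arousal scale and threshold
# are illustrative assumptions.

class EmotionalMemory:
    def __init__(self, arousal_threshold=0.7):
        self.cases = {}                       # stimulus -> learned valence
        self.arousal_threshold = arousal_threshold

    def experience(self, stimulus, valence, arousal):
        """Strong emotions (high arousal) trigger enhanced memory formation."""
        if arousal >= self.arousal_threshold:
            self.cases[stimulus] = valence

    def preference(self, stimulus):
        """Neutral (0.0) for unfamiliar stimuli; learned valence otherwise."""
        return self.cases.get(stimulus, 0.0)

pet = EmotionalMemory()
pet.experience("vacuum_cleaner", valence=-0.9, arousal=0.95)   # frightening: remembered
pet.experience("new_rug", valence=-0.1, arousal=0.2)           # mild: not stored
print(pet.preference("vacuum_cleaner"), pet.preference("new_rug"))  # -> -0.9 0.0
```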

Read the paper:

Emotional Memory and Adaptive Personalities

by Anthony Francis, Manish Mehta, Ashwin Ram

Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence, IGI Global, 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-08-10.pdf