Archive for the ‘Learning’ Category

Learning from Human Demonstrations for Real-Time Case-Based Planning

One of the main bottlenecks in deploying case-based planning systems is authoring the case base of plans. In this paper we present a collection of algorithms that can be used to automatically learn plans from human demonstrations. Our algorithms are built around the idea of a plan dependency graph, a graph that captures the dependencies among the actions in a plan. These algorithms are implemented in a system called Darmok 2 (D2), a case-based planning system capable of general game playing with a focus on real-time strategy (RTS) games. We evaluate D2 on three different games with promising results.
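
The core idea can be illustrated with a minimal sketch. The data model below is hypothetical (D2's actual representation is richer): each action has preconditions and effects, and an edge a → b is added when b consumes a fact that a produced earlier in the demonstration.

```python
# A minimal sketch of a plan dependency graph built from a demonstration
# trace. The Action model and example facts are illustrative, not D2's
# actual representation. An edge (a, b) means action b depends on an
# effect produced by action a.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    preconditions: set = field(default_factory=set)
    effects: set = field(default_factory=set)

def plan_dependency_graph(trace):
    """Build dependency edges from an ordered demonstration trace."""
    edges = []
    for i, consumer in enumerate(trace):
        for need in consumer.preconditions:
            # Link to the most recent earlier action that produced `need`.
            for producer in reversed(trace[:i]):
                if need in producer.effects:
                    edges.append((producer.name, consumer.name))
                    break
    return edges

# Example: a demonstration that harvests gold, builds a barracks, and
# trains a soldier that requires the barracks.
demo = [
    Action("harvest-gold", effects={"has-gold"}),
    Action("build-barracks", preconditions={"has-gold"}, effects={"has-barracks"}),
    Action("train-soldier", preconditions={"has-barracks"}, effects={"has-soldier"}),
]
print(plan_dependency_graph(demo))
# -> [('harvest-gold', 'build-barracks'), ('build-barracks', 'train-soldier')]
```

Once the graph is extracted, actions with no dependency path between them can be reordered or executed in parallel, which is what makes the learned plans reusable in new situations.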

Read the paper:

Learning from Human Demonstrations for Real-Time Case-Based Planning

by Santi Ontañón, Kane Bonnette, Praful Mahindrakar, Marco Gómez-Martin, Katie Long, Jai Radhakrishnan, Rushabh Shah, Ashwin Ram

IJCAI-09 Workshop on Learning Structural Knowledge from Observations, Pasadena, CA, July 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-09-04.pdf

Emotional Memory and Adaptive Personalities

Believable agents designed for long-term interaction with human users need to adapt to them in a way which appears emotionally plausible while maintaining a consistent personality. For short-term interactions in restricted environments, scripting and state machine techniques can create agents with emotion and personality, but these methods are labor intensive, hard to extend, and brittle in new environments. Fortunately, research in memory, emotion and personality in humans and animals points to a solution to this problem. Emotions focus an animal’s attention on things it needs to care about, and strong emotions trigger enhanced formation of memory, enabling the animal to adapt its emotional response to the objects and situations in its environment. In humans this process becomes reflective: emotional stress or frustration can trigger re-evaluating past behavior with respect to personal standards, which in turn can lead to setting new strategies or goals.

To aid the authoring of adaptive agents, we present an artificial intelligence model inspired by these psychological results in which an emotion model triggers case-based emotional preference learning and behavioral adaptation guided by personality models. Our tests of this model on robot pets and embodied characters show that emotional adaptation can extend the range and increase the behavioral sophistication of an agent without the need for authoring additional hand-crafted behaviors.
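
The gating role of emotion in memory formation can be sketched in a few lines. This toy model is illustrative only (the chapter's model is considerably richer): episodes are stored only when arousal crosses a threshold, and stored memories shift the agent's preference for a stimulus.

```python
# A toy illustration of emotion-gated memory formation, not the chapter's
# actual model: only emotionally arousing episodes are stored, and stored
# memories determine the agent's learned preference for a stimulus.

class EmotionalMemory:
    def __init__(self, arousal_threshold=0.5):
        self.threshold = arousal_threshold
        self.memories = []  # (stimulus, valence) pairs

    def experience(self, stimulus, valence, arousal):
        # Strong emotion triggers enhanced memory formation.
        if arousal >= self.threshold:
            self.memories.append((stimulus, valence))

    def preference(self, stimulus):
        # Average remembered valence; neutral (0.0) if nothing is remembered.
        vals = [v for s, v in self.memories if s == stimulus]
        return sum(vals) / len(vals) if vals else 0.0

pet = EmotionalMemory()
pet.experience("vacuum-cleaner", valence=-0.8, arousal=0.9)  # frightening: stored
pet.experience("sofa", valence=0.1, arousal=0.1)             # mundane: forgotten
print(pet.preference("vacuum-cleaner"))  # -> -0.8
print(pet.preference("sofa"))            # -> 0.0
```

The pet now avoids the vacuum cleaner without any hand-authored "fear the vacuum" behavior, which is the adaptation effect the chapter describes.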

Read the paper:

Emotional Memory and Adaptive Personalities

by Anthony Francis, Manish Mehta, Ashwin Ram

Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence, IGI Global, 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-08-10.pdf

Using First Order Inductive Learning as an Alternative to a Simulator in a Game Artificial Intelligence

Currently, many game artificial intelligences attempt to determine their next moves by using a simulator to predict the effects of actions in the world. However, writing such a simulator is time-consuming, and it must be changed substantially whenever a detail of the game design is modified. This research project therefore set out to determine whether a version of the first order inductive learning algorithm could be used to learn rules that could then be used in place of a simulator.

We used an existing game artificial intelligence system called Darmok 2. By eliminating the need to write a simulator for each game by hand, this approach would let the Darmok 2 project adapt more easily to additional real-time strategy games. Over time, Darmok 2 could also provide better competition for human players by training its artificial intelligences against the style of a specific player. Most importantly, Darmok 2 might offer a general solution for creating game artificial intelligences, which could save game development companies a substantial amount of money, time, and effort.
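
The replacement idea can be sketched simply. The rule format and facts below are hypothetical (the thesis uses rules induced by a FOIL-style learner from gameplay traces): a learned rule predicts which facts an action adds and deletes, so candidate moves can be evaluated without a hand-written simulator.

```python
# A simplified sketch of using learned effect rules in place of a
# simulator. The rules and state facts are illustrative, not the
# thesis implementation; in practice such rules would be induced by a
# FOIL-style algorithm from observed gameplay.

# Hypothetical learned rule: if the preconditions hold in the current
# state, the action adds/deletes the listed facts.
LEARNED_RULES = {
    "train-soldier": {
        "preconditions": {"has-barracks", "has-gold"},
        "adds": {"has-soldier"},
        "deletes": {"has-gold"},
    },
}

def predict(state, action):
    """Predict the successor state using learned rules instead of a simulator."""
    rule = LEARNED_RULES.get(action)
    if rule is None or not rule["preconditions"] <= state:
        return state  # no applicable rule: assume the state is unchanged
    return (state - rule["deletes"]) | rule["adds"]

state = {"has-barracks", "has-gold"}
print(sorted(predict(state, "train-soldier")))
# -> ['has-barracks', 'has-soldier']
```

When the game design changes, the rules can simply be re-learned from new traces instead of rewriting simulator code by hand.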

Read the thesis:

Using First Order Inductive Learning as an Alternative to a Simulator in a Game Artificial Intelligence

by Katie Long

Undergraduate Thesis, College of Computing, Georgia Institute of Technology, Atlanta, GA, 2009
www.cs.utexas.edu/users/katie/UgradThesis.pdf

On-Line Case-Based Planning

Some domains, such as real-time strategy (RTS) games, pose several challenges to traditional planning and machine learning techniques. In this paper, we present a novel on-line case-based planning architecture that addresses some of these problems, including plan acquisition, on-line plan execution, interleaved planning and execution, and on-line plan adaptation. We also introduce the Darmok system, which implements this architecture in order to play Wargus (an open-source clone of the well-known RTS game Warcraft II). We present an empirical evaluation of the performance of Darmok and show that it successfully learns to play the Wargus game.
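
The retrieval step at the heart of such an architecture can be sketched as follows. The state features and similarity measure are illustrative, not Darmok's actual ones: the system picks the stored case whose game state is closest to the current one and reuses its plan.

```python
# A minimal sketch of case retrieval in on-line case-based planning.
# Feature names, case contents, and the similarity measure are
# illustrative assumptions, not Darmok's actual representation.

def similarity(a, b):
    """Inverse Manhattan distance over the current state's numeric features."""
    return 1.0 / (1.0 + sum(abs(a[k] - b[k]) for k in a))

# Each case pairs a game state with the plan that worked in it.
case_base = [
    ({"gold": 100, "soldiers": 0}, "plan: build economy"),
    ({"gold": 500, "soldiers": 5}, "plan: attack enemy base"),
]

def retrieve(current_state):
    """Return the plan of the most similar stored case."""
    return max(case_base, key=lambda c: similarity(current_state, c[0]))[1]

print(retrieve({"gold": 450, "soldiers": 4}))  # -> plan: attack enemy base
```

In the full architecture this retrieval is interleaved with execution, and the retrieved plan is adapted on-line as the game state changes.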

Read the paper:

On-Line Case-Based Planning

by Santi Ontañón, Neha Sugandh, Kinshuk Mishra, Ashwin Ram

Computational Intelligence, 26(1):84-119, 2010.
www.cc.gatech.edu/faculty/ashwin/papers/er-09-08.pdf
www3.interscience.wiley.com/journal/123263882/abstract

An Intelligent IDE for Behavior Authoring in Real-Time Strategy Games

Behavior authoring for computer games involves writing behaviors in a programming language and then iteratively refining them as issues are detected. The main bottlenecks are a) the effort required to author the behaviors and b) the revision cycle, since for most games it is practically impossible to write a behavior for the computer game AI in a single attempt. The main problem is that current development environments (IDEs) are typically mere text editors that can only help the author by pointing out syntactical errors.

In this paper we present an intelligent IDE (iIDE) that has the following capabilities: it allows the author to program initial versions of the behaviors through demonstration, presents visualizations of behavior execution for revision, lets the author define failure conditions on the existing behavior set, and select appropriate fixes for the failure conditions to correct the behaviors. We describe the underlying techniques that support these capabilities inside our implemented iIDE and the future steps that need to be carried out to improve the iIDE. We also provide details on a preliminary user study showing how the new features inside the iIDE can help authors in behavior authoring and debugging in a real-time strategy game.

Read the paper:

An Intelligent IDE for Behavior Authoring in Real-Time Strategy Games

by Manish Mehta, Suhas Virmani, Yatin Kanetkar, Santi Ontañón, Ashwin Ram

4th Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE-08), Stanford, CA, October 2008
www.cc.gatech.edu/faculty/ashwin/papers/er-08-08.pdf

New Directions in Goal-Driven Learning

Goal-Driven Learning (GDL) views learning as a strategic process in which the learner attempts to identify and satisfy its learning needs in the context of its tasks and goals. This is modeled as a planful process where the learner analyzes its reasoning traces to identify learning goals, and composes a set of learning strategies (modeled as planning operators) into a plan to learn by satisfying those learning goals.

Traditional GDL frameworks were based on traditional planners. However, modern AI systems often deal with real-time scenarios where learning and performance happen in a reactive real-time fashion, or are composed of multiple agents that use different learning and reasoning paradigms. In this talk, I discuss new GDL frameworks that handle such problems, incorporating reactive and multi-agent planning techniques in order to manage learning in these kinds of AI systems.

About this talk:

New Directions in Goal-Driven Learning

by Ashwin Ram

Invited keynote at International Conference on Machine Learning (ICML-08) Workshop on Planning to Learn, Helsinki, Finland, July 2008

Argumentation-Based Information Exchange in Prediction Markets

We investigate how argumentation processes among a group of agents may affect the outcome of group judgments. In particular we focus on prediction markets (also called information markets), and investigate how the existence of social networks, which allow agents to argue with one another to improve their individual predictions, affects group judgments.

Social networks allow agents to exchange information about the group judgment by arguing about the most likely choice based on their individual experience. We develop an argumentation-based deliberation process by which the agents acquire new and relevant information. Finally, we experimentally assess how different degrees of social network connectivity affect group judgment.
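
The effect being studied can be illustrated with a toy simulation. This is not the paper's deliberation protocol, just a crude stand-in: an agent adopts a neighbor's prediction when the neighbor backs it with higher confidence, and the group then votes.

```python
# A toy sketch of information exchange over a social network changing a
# group judgment. The "argumentation" here is a deliberately crude
# stand-in (higher confidence wins), not the paper's actual protocol.

from collections import Counter

# agent -> (predicted outcome, confidence in [0, 1])
beliefs = {"a": ("rain", 0.9), "b": ("sun", 0.4), "c": ("sun", 0.3)}
network = [("a", "b"), ("b", "c")]  # who can argue with whom

def deliberate(beliefs, network):
    beliefs = dict(beliefs)
    for x, y in network:
        # The more confident side of each exchange persuades the other.
        winner = max((x, y), key=lambda n: beliefs[n][1])
        for n in (x, y):
            beliefs[n] = beliefs[winner]
    return beliefs

def group_judgment(beliefs):
    votes = Counter(p for p, _ in beliefs.values())
    return votes.most_common(1)[0][0]

print(group_judgment(beliefs))               # -> sun  (majority before arguing)
print(group_judgment(deliberate(beliefs, network)))  # -> rain (after deliberation)
```

Even this crude model shows the phenomenon of interest: the connectivity of the network determines how far a well-supported individual prediction can propagate and flip the group judgment.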

Read the paper:

Argumentation-based Information Exchange in Prediction Markets

by Santi Ontañón and Enric Plaza

ArgMAS 2008, pp. 181–196
www.cc.gatech.edu/faculty/ashwin/papers/er-08-12.pdf

Case-Based Reasoning for Game AI

Computer games are an increasingly popular application for Artificial Intelligence (AI) research, and conversely AI is an increasingly popular selling point for commercial games. Although games are typically associated with entertainment applications, there are many “serious” applications of gaming, including military, corporate, and advertising applications. There are also so-called “humane” gaming applications: interactive tools for medical training, educational games, and games that reflect social consciousness or advocate for a cause. Game AI is the effort of taking computer games beyond scripted interactions, however complex, into the arena of truly interactive systems that are responsive, adaptive, and intelligent. Such systems learn about the player(s) during game play, adapt their own behaviors beyond the pre-programmed set provided by the game author, and interactively develop and provide a richer experience to the player(s).

In this talk, I discuss a range of CBR approaches for Game AI. I discuss differences and similarities between character-level AI (in embedded NPCs, for example) and game-level AI (in the drama manager or game director, for example). I explain why the AI must reason at multiple levels, including reactive, tactical, strategic, rhetorical, and meta, and propose a CBR architecture that lets us design and coordinate real-time AIs operating asynchronously at all these levels. I conclude with a brief discussion on the very idea of Game AI: is it feasible? realistic? and would we call it “intelligence” if we could implement all this stuff?

View the talk:

Google Tech Talk: Case-Based Reasoning for Game AI

by Ashwin Ram

Google Tech Talk, Mountain View, CA, April 2008
www.youtube.com/watch?v=s9G7DRTuB5s

Adaptive Computer Games: Easing the Authorial Burden

Game designers usually create AI behaviors by writing scripts that describe the reactions to all imaginable circumstances within the confines of the game world; the AI Game Programming Wisdom series provides a good overview of the scripting techniques currently used in the industry. Scripting, however, is expensive and hard to plan. On one hand, creating an AI with a rich behavior set requires a great deal of engineering effort on the part of game developers. On the other hand, the rich and dynamic nature of game worlds makes it hard to imagine and plan for all possible scenarios. As a result, behaviors can become repetitive, breaking the atmosphere, or can fail to achieve their desired purpose; worse, when behaviors fail, the game AI is unable to identify the failure and continues executing them. The techniques described in this article deal specifically with these issues.

Behavior (or script) creation for computer games typically involves two steps: a) generating a first version of the behaviors in a programming language, and b) debugging and adapting the behaviors through experimentation. In this article we present techniques that relieve the author of carrying out these two steps manually: behavior learning and behavior adaptation.

In the behavior learning process, game developers can specify the AI behavior by demonstrating it to the system instead of coding it in a programming language. The system extracts behaviors from these expert demonstrations and stores them. Then, at performance time, the system retrieves the appropriate behaviors observed from the expert and revises them in response to the current situation it is dealing with (i.e., the current game state).

In the behavior adaptation process, the system monitors the performance of the learned behaviors at runtime. It keeps track of the status of the executing behaviors, infers from their execution trace what might be wrong, and performs appropriate adaptations once the game is over. This approach enables the game AI to reflect on issues in the behaviors learned from expert demonstration and to revise them after a post-game analysis of what went wrong. Together, these techniques allow non-AI experts to define behaviors through demonstration that can then be adapted to different situations, thereby reducing the development effort required to address all contingencies in a complex game.
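
The post-game adaptation loop can be sketched schematically. The failure condition, fix, and behavior representation below are all illustrative placeholders, not the article's actual operators: the system scans the execution trace for author-defined failure conditions and applies the matching fix to the stored behavior.

```python
# A schematic sketch of post-game behavior adaptation. The failure
# condition, its fix, and the list-of-steps behavior format are
# hypothetical placeholders, not the article's actual representation.

# Author-defined failure conditions mapped to behavior-transforming fixes.
FIXES = {
    # If units sat idle, strip the superfluous "wait" steps.
    "unit-idle-too-long": lambda steps: [s for s in steps if s != "wait"],
}

def adapt(behavior, trace):
    """Revise a behavior after the game, based on failures seen in its trace."""
    for failure in trace["failures"]:
        fix = FIXES.get(failure)
        if fix:
            behavior = fix(behavior)
    return behavior

learned = ["harvest", "wait", "build-barracks", "wait", "train-soldier"]
trace = {"failures": ["unit-idle-too-long"]}
print(adapt(learned, trace))
# -> ['harvest', 'build-barracks', 'train-soldier']
```

The adapted behavior replaces the original in the library, so the same failure does not recur in the next game.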

Read the paper:

Adaptive Computer Games: Easing the Authorial Burden

by Manish Mehta, Santi Ontañón, Ashwin Ram

AI Game Programming Wisdom 4 (AIGPW4), Steve Rabin (editor), Charles River Media, 2008
www.cc.gatech.edu/faculty/ashwin/papers/er-08-03.pdf

Semantic Annotation and Inference for Medical Knowledge Discovery

We describe our vision for a new-generation medical knowledge annotation and acquisition system called SENTIENT-MD (Semantic Annotation and Inference for Medical Knowledge Discovery). Key aspects of our vision include deep Natural Language Processing techniques that abstract the text into a more semantically meaningful representation guided by a domain ontology. In particular, we introduce a notion of semantic fitness to model the optimal level of abstraction for representing a text fragment given a domain ontology, and we apply this notion to appropriately condense and merge nodes in semantically annotated syntactic parse trees. The transformed trees are more amenable to analysis and inference for abstract knowledge discovery, such as automatically inferring general medical rules to enhance an expert system for nuclear cardiology. This work is part of a long-term research effort on continuously mining the medical literature for automatic clinical decision support.
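
One condensing rule can be sketched concretely. This is an illustrative simplification, not SENTIENT-MD's actual algorithm or ontology: a node is merged with its only child when both carry the same ontology concept, yielding a more abstract tree.

```python
# An illustrative sketch of condensing a semantically annotated parse
# tree: merge a node with its single child when both map to the same
# ontology concept. The concept labels and tree format are hypothetical,
# not SENTIENT-MD's actual algorithm or ontology.

def condense(node):
    """node = (concept, text, children); returns a condensed copy."""
    concept, text, children = node
    children = [condense(c) for c in children]  # condense bottom-up
    if len(children) == 1 and children[0][0] == concept:
        # Same concept on parent and only child: merge into one node.
        _, child_text, grandchildren = children[0]
        return (concept, text + " " + child_text, grandchildren)
    return (concept, text, children)

# "perfusion" and "defect" both annotated with the same (assumed)
# ontology concept, so they collapse into a single node.
tree = ("Finding", "perfusion", [("Finding", "defect", [])])
print(condense(tree))
# -> ('Finding', 'perfusion defect', [])
```

The condensed tree exposes "perfusion defect" as a single semantic unit, which is the granularity at which rule inference over medical text becomes tractable.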

Read the paper:

Semantic Annotation and Inference for Medical Knowledge Discovery

by Saurav Sahay, Eugene Agichtein, Baoli Li, Ernie Garcia, Ashwin Ram

NSF Symposium on Next Generation of Data Mining (NGDM-07), Baltimore, MD, October 2007
www.cc.gatech.edu/faculty/ashwin/papers/er-07-16.pdf