Archive for January, 2007

Transfer Learning in Real-Time Strategy Games Using Hybrid CBR/RL

The goal of transfer learning is to use the knowledge acquired in a set of source tasks to improve performance in a related but previously unseen target task. In this paper, we present a multi-layered architecture named CAse-Based Reinforcement Learner (CARL). It uses a novel combination of Case-Based Reasoning (CBR) and Reinforcement Learning (RL) to achieve transfer while playing against the Game AI across a variety of scenarios in MadRTS, a commercial Real-Time Strategy game. Our experiments demonstrate that CARL not only performs well on individual tasks but also exhibits significant performance gains when allowed to transfer knowledge from previous tasks.
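The architecture itself is multi-layered and is described in the paper; purely as a rough, hypothetical illustration of how CBR and RL can be combined (not the authors' actual implementation), the sketch below keeps a library of cases that each pair a state description with learned Q-values, retrieves the nearest case for the current state, and applies a standard Q-learning update to it. All class names and parameters here are invented for the example.

import math, random

class Case:
    """A stored situation: a feature vector plus learned Q-values per action."""
    def __init__(self, features, actions):
        self.features = features
        self.q = {a: 0.0 for a in actions}

class CaseBasedQLearner:
    """Minimal hybrid CBR/RL sketch: retrieve the nearest case for the current
    state, choose an action from its Q-values, and apply a Q-learning backup."""
    def __init__(self, actions, threshold=0.5, alpha=0.3, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.cases = []                     # the case library
        self.threshold = threshold          # max distance before a new case is created
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def _distance(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def retrieve(self, features):
        """Return the closest stored case, creating a new one if none is close enough."""
        if self.cases:
            best = min(self.cases, key=lambda c: self._distance(c.features, features))
            if self._distance(best.features, features) <= self.threshold:
                return best
        case = Case(features, self.actions)
        self.cases.append(case)
        return case

    def act(self, features):
        """Epsilon-greedy action selection over the retrieved case's Q-values."""
        case = self.retrieve(features)
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(case.q, key=case.q.get)

    def update(self, features, action, reward, next_features):
        """Standard Q-learning update applied to the retrieved cases."""
        case, next_case = self.retrieve(features), self.retrieve(next_features)
        target = reward + self.gamma * max(next_case.q.values())
        case.q[action] += self.alpha * (target - case.q[action])

In this style, transfer amounts to starting the target task with the case library learned on the source tasks rather than with an empty one.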

Read the paper:

Transfer Learning in Real-Time Strategy Games Using Hybrid CBR/RL

by Manu Sharma, Michael Holmes, Juan Santamaria, Arya Irani, Charles Isbell, Ashwin Ram

International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, January 2007
www.cc.gatech.edu/faculty/ashwin/papers/er-07-01.pdf

Towards Runtime Behavior Adaptation for Embodied Characters

Typically, autonomous believable agents are implemented using static, hand-authored reactive behaviors or scripts. This hand-authoring allows designers to craft expressive behavior for characters, but it can impose an excessive authorial burden and result in characters that are brittle to changing world dynamics.

In this paper we present an approach for the runtime adaptation of reactive behaviors for autonomous believable characters. Extending transformational planning, our system allows autonomous characters to monitor and reason about their behavior execution, and to use this reasoning to dynamically rewrite their behaviors. In our evaluation, we transplant two characters from a sample tag game out of the world they were originally written for and into a different one, resulting in behavior that violates the author-intended personality. The reasoning layer successfully adapts the characters' behaviors so as to bring their long-term behavior back into agreement with their intended personalities.
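As a loose, hypothetical illustration of the general idea of monitoring execution and rewriting behaviors (not the paper's actual transformational-planning machinery), the sketch below tracks how often each of a character's behaviors fires and nudges its selection weights whenever the observed long-term mix drifts away from an author-specified personality profile. All names and numbers are invented for illustration.

from collections import Counter

class BehaviorAdapter:
    """Toy reasoning layer: watch which behaviors a character executes and,
    when long-term statistics drift from the author's personality profile,
    rewrite (re-weight) the offending behaviors."""
    def __init__(self, personality, tolerance=0.1, step=0.2):
        # personality: desired long-run frequency of each behavior, e.g.
        # {"chase": 0.7, "flee": 0.3} for an aggressive character in a tag game
        self.personality = personality
        self.weights = dict(personality)     # current behavior-selection weights
        self.history = Counter()
        self.tolerance = tolerance
        self.step = step

    def record(self, behavior):
        """Called by the execution layer each time a behavior fires."""
        self.history[behavior] += 1

    def adapt(self):
        """Periodically compare observed behavior with the personality profile
        and rewrite the weights of behaviors that have drifted too far."""
        total = sum(self.history.values())
        if total == 0:
            return
        for behavior, target in self.personality.items():
            observed = self.history[behavior] / total
            error = target - observed
            if abs(error) > self.tolerance:
                # Rewrite: nudge the behavior's weight back toward the intended mix.
                self.weights[behavior] = max(0.0, self.weights[behavior] + self.step * error)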

Read the paper:

Towards Runtime Behavior Adaptation for Embodied Characters

by Peng Zang, Manish Mehta, Michael Mateas, Ashwin Ram

International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, January 2007
www.cc.gatech.edu/faculty/ashwin/papers/er-07-02.pdf

Case-Based Learning from Proactive Communication

We present a proactive communication approach that allows CBR agents to gauge the strengths and weaknesses of other CBR agents. The communication protocol allows CBR agents to learn from communicating with other CBR agents in such a way that each agent is able to retain certain cases provided by other agents that improve its individual performance (without the need to disclose the full contents of each case base). The selection and retention of cases is modeled as a case bartering process, where each individual CBR agent autonomously decides which cases it offers for bartering and which offered barters it accepts. Experimental evaluations show that the sum of all these individual decisions results in a clear improvement in individual CBR agent performance with only a moderate increase in individual case-base size.
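As a rough, hypothetical sketch of the bartering idea (not the actual protocol from the paper), the agent below offers a small sample of its cases and retains an offered case only if adding it improves 1-nearest-neighbour accuracy on its own validation problems. The interface and parameters are invented for illustration.

import random

def nn_predict(case_base, features):
    """1-nearest-neighbour prediction over cases stored as (features, label) pairs."""
    best = min(case_base, key=lambda c: sum((x - y) ** 2 for x, y in zip(c[0], features)))
    return best[1]

class BarteringAgent:
    def __init__(self, case_base, validation_set):
        self.case_base = list(case_base)          # (features, label) pairs
        self.validation_set = list(validation_set)

    def accuracy(self, case_base):
        """Accuracy of a candidate case base on this agent's own validation problems."""
        hits = sum(1 for f, label in self.validation_set
                   if nn_predict(case_base, f) == label)
        return hits / len(self.validation_set)

    def offer(self, n=3):
        """Offer a small sample of cases for bartering, without disclosing the rest."""
        return random.sample(self.case_base, min(n, len(self.case_base)))

    def decide(self, offered):
        """Autonomously retain only the offered cases that improve performance."""
        retained = []
        for case in offered:
            if self.accuracy(self.case_base + [case]) > self.accuracy(self.case_base):
                self.case_base.append(case)
                retained.append(case)
        return retained

Each agent applies decide() to the cases offered by its partner, so both case bases grow only by the cases that demonstrably help their new owner.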

Read the paper:

Case-Based Learning from Proactive Communication

by Santi Ontañón and Enric Plaza

International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, January 2007, pp. 999-1004
www.cc.gatech.edu/faculty/ashwin/papers/er-07-18.pdf