We present an approach that uses learning from demonstration in a computer role-playing game. We describe a behavior engine that uses case-based reasoning. The behavior engine accepts observation traces of human playing decisions and produces a sequence of actions that can then be carried out by an artificial agent within the gaming environment. Our work focuses on team-based role-playing games, where the agents produced by the behavior engine act as team members within a mixed human-agent team. We present the results of a study in which we assess the quantitative and qualitative performance differences between human-only teams and hybrid human-agent teams.
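The core of such a behavior engine is case retrieval: pairs of observed game state and human-chosen action are stored, and at run time the agent retrieves the action from the most similar stored case. Below is a minimal, illustrative sketch of that retrieval step; the feature encoding, similarity measure, and all names are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of case-based action retrieval for a behavior engine.
# Each case pairs a game-state description (a set of features) with the
# action a human demonstrator took in that state.

def similarity(state_a, state_b):
    """Jaccard similarity between two feature-set state descriptions."""
    union = set(state_a) | set(state_b)
    return len(set(state_a) & set(state_b)) / max(len(union), 1)

def retrieve_action(case_base, current_state):
    """Return the action of the case most similar to the current state."""
    best = max(case_base, key=lambda case: similarity(case[0], current_state))
    return best[1]

# Toy case base extracted from an observation trace of a human player.
case_base = [
    ({"enemy_near", "low_health"}, "retreat"),
    ({"enemy_near", "full_health"}, "attack"),
    ({"no_enemy"}, "patrol"),
]
print(retrieve_action(case_base, {"enemy_near", "low_health"}))  # -> retreat
```

In a full system the retrieved action would be adapted to the current situation before execution; this sketch shows only nearest-case retrieval.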
Creating AI for complex computer games requires a great deal of technical knowledge as well as engineering effort on the part of game developers. This paper focuses on case-based reasoning techniques that enable end users to create AI for games without requiring technical knowledge.
AI creation for computer games typically involves two steps: a) generating a first version of the AI, and b) debugging and adapting it via experimentation. We will use the domain of real-time strategy games to illustrate how case-based reasoning can address both steps.
Read the paper:
Case-Based Reasoning and User-Generated AI for Real-Time Strategy Games
by Santi Ontañón and Ashwin Ram. In P. Gonzáles-Calero & M. Gomez-Martín (eds.), AI for Games: State of the Practice, 2011.
Computer games are excellent domains for research and evaluation of AI and CBR techniques. The main drawback is the effort needed to connect AI systems to existing games. This paper presents MMPM, a middleware platform that supports easy connection of AI techniques with games. We describe the MMPM architecture and compare it with related systems such as TIELT.
Read the paper:
MMPM: A Generic Platform for Case-Based Planning Research
by Pedro Pablo Gómez-Martín, David Llansó, Marco Antonio Gómez-Martín, Santiago Ontañón, and Ashwin Ram. ICCBR-2010 Workshop on Case-Based Reasoning for Computer Games.
Behavior authoring for computer games typically involves writing behaviors in a programming language. This method is cumbersome, requires substantial programming effort to author the behavior sets, and restricts behavior authoring to people who are experts in programming.
This paper describes our approach to designing a system that allows a user to demonstrate behaviors, which the system then uses to learn behavior sets for a game domain. With learning from demonstration, we aim to remove the requirement that the user be an expert in programming, requiring only that they be an expert in the game. The approach has been integrated into an easy-to-use visual interface and instantiated in two domains: a real-time strategy game and an interactive drama.
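A first step in learning from demonstration is segmenting the recorded trace into reusable behaviors. One simple scheme, sketched below with assumed names and a toy trace, is to split the trace wherever the annotated goal changes, so each behavior groups the consecutive actions that served one goal:

```python
# Illustrative sketch (all names assumed): split a demonstration trace
# into behaviors, one per maximal run of actions sharing the same goal.

def segment_trace(trace):
    """trace: list of (goal_annotation, action) pairs.
    Returns a list of (goal, [actions]) behaviors."""
    behaviors = []
    for goal, action in trace:
        if behaviors and behaviors[-1][0] == goal:
            behaviors[-1][1].append(action)  # extend the current behavior
        else:
            behaviors.append((goal, [action]))  # goal changed: new behavior
    return behaviors

trace = [
    ("GatherGold", "MoveToMine"),
    ("GatherGold", "Harvest"),
    ("DefendBase", "MoveToBase"),
    ("DefendBase", "AttackIntruder"),
]
print(segment_trace(trace))
# -> [('GatherGold', ['MoveToMine', 'Harvest']),
#     ('DefendBase', ['MoveToBase', 'AttackIntruder'])]
```

Real systems must also infer the goal annotations or elicit them through the visual interface; here they are given directly for brevity.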
Read the paper:
Authoring Behaviors for Games using Learning from Demonstration
by Manish Mehta, Santiago Ontañón, Tom Amundsen, and Ashwin Ram. ICCBR-09 Workshop on Case-Based Reasoning for Computer Games, Seattle, July 2009.
One of the main bottlenecks in deploying case-based planning systems is authoring the case base of plans. In this paper we present a collection of algorithms that can be used to automatically learn plans from human demonstrations. Our algorithms are based on the idea of a plan dependency graph, a graph that captures the dependencies among the actions in a plan. These algorithms are implemented in Darmok 2 (D2), a case-based planning system capable of general game playing with a focus on real-time strategy (RTS) games. We evaluate D2 on three different games with promising results.
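The plan dependency graph idea can be sketched concretely: add an edge from an earlier action to a later one when the later action consumes something the earlier action produced. The code below is a minimal illustration under assumed pre/postcondition sets, not D2's actual representation:

```python
# Hedged sketch of a plan dependency graph: edge (i, j) means action j
# depends on an effect of action i. Pre/postconditions are toy feature sets.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    consumes: frozenset  # preconditions
    produces: frozenset  # effects

def plan_dependency_graph(plan):
    """Return the set of dependency edges (i, j) over plan indices."""
    edges = set()
    for j, later in enumerate(plan):
        for need in later.consumes:
            # link to the most recent earlier action producing this need
            for i in range(j - 1, -1, -1):
                if need in plan[i].produces:
                    edges.add((i, j))
                    break
    return edges

plan = [
    Action("BuildBarracks", frozenset({"peasant"}), frozenset({"barracks"})),
    Action("TrainSoldier", frozenset({"barracks"}), frozenset({"soldier"})),
    Action("Attack", frozenset({"soldier"}), frozenset({"enemy_damaged"})),
]
print(sorted(plan_dependency_graph(plan)))  # -> [(0, 1), (1, 2)]
```

Once the dependencies are known, actions with no path between them can be reordered or executed in parallel, which is what makes the graph useful for extracting flexible plans from a linear demonstration.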
Read the paper:
Learning from Human Demonstrations for Real-Time Case-Based Planning
by Santi Ontañón, Kane Bonnette, Praful Mahindrakar, Marco Gómez-Martin, Katie Long, Jai Radhakrishnan, Rushabh Shah, and Ashwin Ram. IJCAI-09 Workshop on Learning Structural Knowledge from Observations, Pasadena, CA, July 2009.
Using First Order Inductive Learning as an Alternative to a Simulator in a Game Artificial Intelligence
Currently, many game artificial intelligences determine their next moves by using a simulator to predict the effects of actions in the world. However, writing such a simulator is time-consuming, and it must be changed substantially whenever a detail of the game design is modified. This research project therefore set out to determine whether a version of the first-order inductive learning (FOIL) algorithm could learn rules that could then be used in place of a simulator.
We used an existing game artificial intelligence system called Darmok 2. By eliminating the need to write a simulator for each game by hand, the entire Darmok 2 project could more easily adapt to additional real-time strategy games. Over time, Darmok 2 would also be able to provide better competition for human players by training the artificial intelligences to play against the style of a specific player. Most importantly, Darmok 2 might also be able to create a general solution for creating game artificial intelligences, which could save game development companies a substantial amount of money, time, and effort.
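The spirit of replacing a hand-written simulator with learned rules can be illustrated with a much simpler propositional stand-in: from logged transitions, learn each action's add-effects and apply them to predict the next state. A real FOIL implementation induces first-order clauses with variables; this sketch, with assumed names and toy data, only intersects observed effects per action:

```python
# Hedged sketch: learn action-effect rules from logged (before, action,
# after) transitions, then use them to predict outcomes instead of a
# hand-written simulator. Propositional simplification of FOIL-style learning.

def learn_effect_rules(transitions):
    """Map each action to the add-effects seen in every one of its examples."""
    observed = {}
    for before, action, after in transitions:
        observed.setdefault(action, []).append(after - before)
    return {action: frozenset.intersection(*map(frozenset, effects))
            for action, effects in observed.items()}

def predict(rules, state, action):
    """Predict the next state by applying the learned add-effects."""
    return state | rules.get(action, frozenset())

log = [
    ({"peasant"}, "BuildBarracks", {"peasant", "barracks"}),
    ({"peasant", "gold"}, "BuildBarracks", {"peasant", "gold", "barracks"}),
]
rules = learn_effect_rules(log)
print(sorted(predict(rules, {"peasant"}, "BuildBarracks")))
# -> ['barracks', 'peasant']
```

Learned rules like these are only as good as the logged examples, which is why the thesis evaluates the learned model against the game itself rather than assuming it is exact.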
Read the thesis: