
Adaptive Computer Games: Easing the Authorial Burden

Game designers usually create AI behaviors by writing scripts that describe the reactions to all imaginable circumstances within the confines of the game world. The AI Game Programming Wisdom series provides a good overview of the scripting techniques currently used in the game industry. Scripting, however, is expensive and hard to plan: behaviors can become repetitive (breaking the atmosphere) or fail to achieve their intended purpose. On one hand, creating AI with a rich behavior set requires a great deal of engineering effort on the part of game developers. On the other hand, the rich and dynamic nature of game worlds makes it hard to imagine and plan for all possible scenarios. Worse, when behaviors fail to achieve their intended purpose, the game AI cannot detect the failure and simply continues executing them. The techniques described in this article specifically address these issues.

Behavior (or script) creation for computer games typically involves two steps: (a) generating a first version of the behaviors in a programming language, and (b) debugging and adapting the behaviors through experimentation. In this article we present techniques that relieve the author from carrying out these two steps manually: behavior learning and behavior adaptation.

In the behavior learning process, game developers specify AI behavior by demonstrating it to the system instead of coding it in a programming language. The system extracts behaviors from these expert demonstrations and stores them. Then, at performance time, it retrieves the stored behaviors most appropriate to the situation at hand (i.e., the current game state) and revises them to fit it.
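As a rough illustration of this learn-by-demonstration cycle, the sketch below stores demonstrated (state, actions) pairs and retrieves the closest match at performance time. It is a minimal Python sketch under our own assumptions; the class names, similarity measure, and revision step are invented for illustration and are not the paper's actual representation.

from dataclasses import dataclass, field

@dataclass
class BehaviorCase:
    game_state: dict   # features observed when the expert acted
    actions: list      # action sequence the expert demonstrated

@dataclass
class BehaviorLibrary:
    cases: list = field(default_factory=list)

    def learn_from_demonstration(self, trace):
        # Split an annotated expert trace into (state, actions) cases.
        for state, actions in trace:
            self.cases.append(BehaviorCase(state, actions))

    def retrieve(self, current_state):
        # Return the stored behavior whose recorded state best matches now.
        def similarity(case):
            shared = set(case.game_state) & set(current_state)
            return sum(case.game_state[k] == current_state[k] for k in shared)
        return max(self.cases, key=similarity, default=None)

    def revise(self, case, current_state):
        # Placeholder revision: keep only actions whose (illustrative)
        # precondition tag is satisfied in the current game state.
        return [a for a in case.actions
                if a.get("requires") is None or current_state.get(a["requires"])]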

In the behavior adaptation process, the system monitors the performance of the learned behaviors at runtime. It keeps track of the status of the executing behaviors, infers from their execution trace what might have gone wrong, and performs appropriate adaptations to the behaviors once the game is over. This approach to behavior transformation lets the game AI reflect on problems in the behaviors learned from expert demonstration and revise them after analyzing what went wrong during the game. Together, these techniques allow non-AI experts to define behaviors through demonstration that can then be adapted to different situations, reducing the development effort required to address all contingencies in a complex game.
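To make this monitor-diagnose-adapt loop concrete, here is a minimal Python sketch, again under assumptions of our own: the trace format, the failure test, and the repair operator are stand-ins for the paper's actual machinery.

def diagnose(trace):
    # Return the names of behaviors whose runs never achieved their goal.
    return [name for name, runs in trace.items()
            if runs and not any(run["goal_achieved"] for run in runs)]

def adapt(behaviors, trace):
    # Revise each failing behavior once the game is over. As a stand-in
    # repair, drop the step recorded as active when the behavior last failed.
    for name in diagnose(trace):
        failed_step = trace[name][-1]["active_step"]
        behaviors[name] = [s for s in behaviors[name] if s != failed_step]
    return behaviors

# Example usage with a toy trace:
behaviors = {"attack": ["scout", "build_army", "charge"]}
trace = {"attack": [{"goal_achieved": False, "active_step": "charge"}]}
print(adapt(behaviors, trace))   # {'attack': ['scout', 'build_army']}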

Read the paper:

Adaptive Computer Games: Easing the Authorial Burden

by Manish Mehta, Santi Ontañón, Ashwin Ram

AI Game Programming Wisdom 4 (AIGPW4), Steve Rabin (editor), Charles River Media, 2008
www.cc.gatech.edu/faculty/ashwin/papers/er-08-03.pdf

Emotionally Driven Natural Language Generation for Personality Rich Characters in Interactive Games

Natural language generation (NLG) for personality-rich characters is one of the important directions for believable-agents research. The typical approach to interactive NLG is to hand-author textual responses to different situations. In this paper we address NLG for interactive games. Specifically, we present a novel template-based system that provides two distinct advantages over existing systems. First, our system not only handles dialogue, but lets a character's personality and emotional state influence the feel of the utterance. Second, our templates are reusable across characters, decreasing the burden on the game author. We briefly describe our system and present results of a preliminary evaluation study.
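As an illustration of how personality and emotion could drive a template-based generator, here is a small Python sketch; the template table, trait names, and selection scheme are hypothetical, not the system described in the paper.

import random

TEMPLATES = {
    "greet": {
        ("extrovert", "happy"): ["Hey {name}, great to see you!"],
        ("extrovert", "angry"): ["Oh. It's you, {name}."],
        ("introvert", "happy"): ["Hello, {name}."],
        ("introvert", "angry"): ["...hi, {name}."],
    },
}

def generate(intent, personality, emotion, **slots):
    # Choose a template for the intent conditioned on the character's
    # personality trait and current emotion, then fill in its slots.
    options = TEMPLATES[intent][(personality, emotion)]
    return random.choice(options).format(**slots)

# The same template table serves any character; only the traits differ.
print(generate("greet", "extrovert", "happy", name="Ada"))
print(generate("greet", "introvert", "angry", name="Ada"))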

Read the paper:

Emotionally Driven Natural Language Generation for Personality Rich Characters in Interactive Games

by Christina Strong, Kinshuk Mishra, Manish Mehta, Alistair Jones, Ashwin Ram

Third Conference on Artificial Intelligence for Interactive Digital Entertainment (AIIDE-07), Stanford, CA, June 2007
www.cc.gatech.edu/faculty/ashwin/papers/er-07-09.pdf

Towards Runtime Behavior Adaptation for Embodied Characters

Typically, autonomous believable agents are implemented using static, hand-authored reactive behaviors or scripts. Hand-authoring allows designers to craft expressive behavior for characters, but it can impose an excessive authorial burden and result in characters that are brittle to changing world dynamics.

In this paper we present an approach to the runtime adaptation of reactive behaviors for autonomous believable characters. Extending transformational planning, our system allows autonomous characters to monitor and reason about their behavior execution, and to use this reasoning to dynamically rewrite their behaviors. In our evaluation, we transplant two characters in a sample tag game from the world they were originally written for into a different one, resulting in behavior that violates their author-intended personalities. The reasoning layer successfully adapts each character's behaviors so as to bring its long-term behavior back into agreement with its personality.
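To give a flavor of such a reasoning layer, the Python sketch below monitors a chaser's recent actions against an "aggressive" personality and rewrites its behavior when it has mostly been fleeing. The constraint, the statistic, and the rewrite rule are invented stand-ins for the transformational-planning operators used in the paper.

def personality_violation(history, intended):
    # E.g. an "aggressive" chaser that has mostly been fleeing violates
    # its author-intended personality.
    if not history:
        return False
    chase_ratio = sum(a == "chase" for a in history) / len(history)
    return intended == "aggressive" and chase_ratio < 0.5

def rewrite_behavior(behavior):
    # Toy transformation: swap a "flee" step for a "chase" step.
    return ["chase" if step == "flee" else step for step in behavior]

def reasoning_layer(behavior, history, intended="aggressive"):
    # Monitor execution history and adapt the behavior at runtime.
    if personality_violation(history, intended):
        behavior = rewrite_behavior(behavior)
    return behavior

print(reasoning_layer(["flee", "hide"], ["flee", "flee", "chase", "flee"]))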

Read the paper:

Towards Runtime Behavior Adaptation for Embodied Characters

by Peng Zang, Manish Mehta, Michael Mateas, Ashwin Ram

International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, January 2007
www.cc.gatech.edu/faculty/ashwin/papers/er-07-02.pdf