Posts Tagged ‘virtual worlds’

TED: Imagine a world of AI

Ashwin Ram works on the AI behind Alexa, one of several new bots that might change the way your home and your life function within the next few years. Imagine a bot that turns on your lights, shops for you, even helps you make decisions. Learn more about a bot-enabled future that might have you saying (like Shah Rukh Khan does): “Alexa, I love you!”

 

TALK & TRANSCRIPT:
#TomorrowsWorld made easier with Artificial Intelligence. #TEDTalksIndiaNayiSoch

ted.com/talks/ashwin_ram_could_bots_make_your_life_better

BEHIND THE SCENES:
Innovator and entrepreneur Ashwin Ram believes AI will change our lives in the future. #TomorrowsWorld #TEDTalksIndiaNayiSoch

youtube.com/watch?v=kDvIsRuaq5k

FULL #TOMORROWSWORLD EPISODE:
Can you imagine what #TomorrowsWorld will be like? Shah Rukh Khan introduces.

tedtalksindianayisoch.hotstar.com/TED/episode-4.php

ALL TED TALKS INDIA NAYI SOCH:
#TEDTalksIndiaNayiSoch is a groundbreaking TV series showcasing new thinking from some of the brightest brains in India and beyond, hosted by “The King of Bollywood,” Shah Rukh Khan.

ted.com/series/ted_talks_india_nayi_soch

Construction and Adaptation of AI Behaviors in Computer Games

Computer games are an increasingly popular application for Artificial Intelligence (AI) research, and conversely AI is an increasingly popular selling point for commercial digital games. AI for non-player characters (NPCs) in computer games tends to come from people with computing skills well beyond those of the average user. The prime reason novice users are not involved in creating AI behaviors for NPCs is that constructing high-quality AI behaviors is a hard problem.

There are two reasons for this. First, creating a set of AI behaviors requires specialized skills in design and programming, which restricts the process to individuals with expertise in this area. There is little understanding of how the behavior authoring process can be simplified with easy-to-use authoring environments so that novice users (without programming and design experience) can carry out the behavior authoring task. Second, the constructed AI behaviors contain problems and bugs that cause a break in player experience when the problematic behaviors repeatedly fail. It is harder for novice users to identify, modify, and correct problems with the authored behavior sets, as they lack the necessary debugging and design experience.

These two issues give rise to two questions that need to be investigated: a) How can the AI behavior construction process be simplified so that a novice user (without programming and design experience) can easily carry out the authoring activity? and b) How can novice users be supported in identifying and correcting problems with the authored behavior sets? In this thesis, I explore these issues and propose a solution to them within an application domain named Second Mind (SM). In SM, novice users without expertise in computer programming employ an authoring interface to design behaviors for intelligent virtual characters performing a service in a virtual world; these services range from shopkeepers to museum hosts. The constructed behaviors are then repaired using an AI-based approach.

To evaluate the construction and repair approach, we conduct experiments with human subjects. Based on developing and evaluating the solution, I claim that a design solution combining a behavior-timeline-based interaction design for behavior construction with an understandable vocabulary and a reduced feature representation formalism enables novice users to author AI behaviors easily and understandably for NPCs performing a service in a virtual world. I further claim that an introspective reasoning approach based on comparing successful and unsuccessful execution traces can successfully identify breaks in player experience and modify the failures to improve the experience of the player interacting with such NPCs.
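
The trace-comparison idea can be illustrated with a minimal sketch (the function name, trace representation, and threshold below are hypothetical, not the thesis implementation): steps that occur often in failed executions but rarely in successful ones are flagged as likely causes of the break in player experience.

```python
# Hypothetical sketch of introspective trace comparison: flag behavior
# steps whose failure-trace frequency far exceeds their success-trace
# frequency. Each trace is a list of behavior-step names.
from collections import Counter

def suspicious_steps(successful_traces, failed_traces, threshold=0.5):
    ok = Counter(step for trace in successful_traces for step in set(trace))
    bad = Counter(step for trace in failed_traces for step in set(trace))
    n_ok, n_bad = len(successful_traces), len(failed_traces)
    flagged = []
    for step, count in bad.items():
        fail_rate = count / n_bad
        ok_rate = ok.get(step, 0) / n_ok if n_ok else 0.0
        if fail_rate - ok_rate >= threshold:
            flagged.append(step)  # present in failures, absent in successes
    return flagged

# A step seen only in the failed trace is singled out for repair:
print(suspicious_steps([["greet", "serve"]], [["greet", "idle_loop"]]))
```

A repair module would then target only the flagged steps, leaving behavior that works in both kinds of traces untouched.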

The work makes three contributions: 1) a novel introspective reasoning approach for detecting and repairing failures in AI behaviors for NPCs performing a service in a virtual world; 2) an authoring environment that novice users can understand, helping them create AI behaviors for such NPCs in an easy and understandable manner; and 3) design, debugging, and testing scaffolding that helps novice users modify their authored AI behaviors and achieve higher-quality behaviors compared to their original unmodified ones.

Read the dissertation:

Construction and Adaptation of AI Behaviors in Computer Games

by Manish Mehta

PhD dissertation, College of Computing, Georgia Institute of Technology, August 2011.

smartech.gatech.edu/handle/1853/42724

Real-Time Case-Based Reasoning for Interactive Digital Entertainment


User-generated content is everywhere: photos, videos, news, blogs, art, music, and every other type of digital media on the Social Web. Games are no exception. From strategy games to immersive virtual worlds, game players are increasingly engaged in creating and sharing nearly all aspects of the gaming experience: maps, quests, artifacts, avatars, clothing, even games themselves. Yet, there is one aspect of computer games that is not created and shared by game players: the AI. Building sophisticated personalities, behaviors, and strategies requires expertise in both AI and programming, and remains outside the purview of the end user.

To understand why authoring Game AI is hard, we need to understand how it works. AI can take digital entertainment beyond scripted interactions into the arena of truly interactive systems that are responsive, adaptive, and intelligent. I will discuss examples of AI techniques for character-level AI (in embedded NPCs, for example) and game-level AI (in the drama manager, for example). These types of AI enhance the player experience in different ways. The techniques are complicated and are usually implemented by expert game designers.

I propose an alternative approach to designing Game AI: Real-Time CBR. This approach extends CBR to real-time systems that operate asynchronously during game play, planning, adapting, and learning in an online manner. Originally developed for robotic control, Real-Time CBR can be used for interactive games ranging from multiplayer strategy games to interactive believable avatars in virtual worlds.
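
The core CBR cycle behind this approach can be sketched in a few lines (a simplified illustration, not the published system; the class and distance function are hypothetical): cases pair a numeric situation vector with an action, new cases are retained during play, and decisions reuse the nearest stored case.

```python
# Illustrative sketch of one retrieve-and-reuse step of a case-based
# reasoning cycle that could run each game tick.

def distance(a, b):
    # Squared Euclidean distance between two situation vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

class RealTimeCBR:
    def __init__(self):
        self.cases = []  # list of (situation, action) pairs

    def retain(self, situation, action):
        self.cases.append((situation, action))

    def decide(self, situation, default="idle"):
        # Retrieve the nearest stored case and reuse its action.
        if not self.cases:
            return default
        _, action = min(self.cases, key=lambda c: distance(c[0], situation))
        return action

agent = RealTimeCBR()
agent.retain((0.9, 0.1), "attack")   # hypothetical case: enemy close
agent.retain((0.1, 0.8), "retreat")  # hypothetical case: heavy damage
print(agent.decide((0.8, 0.2)))      # nearest case suggests "attack"
```

A real-time version runs retrieval, adaptation, and retention asynchronously so the game loop is never blocked; the sketch shows only the synchronous core of one decision.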

As with any CBR technique, Real-Time CBR integrates problem solving with learning. This property can be used to address the authoring problem. I will show the first Web 2.0 application that allows average users to create AIs and challenge their friends to play them—without programming. I conclude with some thoughts about the role of CBR in AI-based Interactive Digital Entertainment.

Keynote talk at the Eighteenth Conference on Pattern Recognition and Artificial Intelligence (RFIA-12), Lyon, France, February 5, 2012.
Slides and video here: rfia2012.liris.cnrs.fr/doku.php?id=pub:ram
 
Keynote talk at the Eleventh Scandinavian Conference on Artificial Intelligence (SCAI-11), Trondheim, Norway, May 25, 2011.
 
Keynote talk at the 2010 International Conference on Case-Based Reasoning (ICCBR-10), Alessandria, Italy, July 22, 2010.
 
GVU Brown Bag talk, October 14, 2010. Watch the talk here: www.gvu.gatech.edu/node/4320 
 
View the talk:
www.sais.se/blog/?p=57


User-Generated AI for Interactive Digital Entertainment

CMU Seminar

User-generated content is everywhere: photos, videos, news, blogs, art, music, and every other type of digital media on the Social Web. Games are no exception. From strategy games to immersive virtual worlds, game players are increasingly engaged in creating and sharing nearly all aspects of the gaming experience: maps, quests, artifacts, avatars, clothing, even games themselves. Yet, there is one aspect of computer games that is not created and shared by game players: the AI. Building sophisticated personalities, behaviors, and strategies requires expertise in both AI and programming, and remains outside the purview of the end user.

To understand why Game AI is hard, we need to understand how it works. AI can take digital entertainment beyond scripted interactions into the arena of truly interactive systems that are responsive, adaptive, and intelligent. I discuss examples of AI techniques for character-level AI (in embedded NPCs, for example) and game-level AI (in the drama manager, for example). These types of AI enhance the player experience in different ways. The techniques are complicated and are usually implemented by expert game designers.

I argue that User-Generated AI is the next big frontier in the rapidly growing Social Gaming area. From Sims to Risk to World of Warcraft, end users want to create, modify, and share not only the appearance but the “minds” of their characters. I present my recent research on intelligent technologies to assist Game AI authors, and show the first Web 2.0 application that allows average users to create AIs and challenge their friends to play them—without programming. I conclude with some thoughts about the future of AI-based Interactive Digital Entertainment.

CMU Robotics & Intelligence Seminar, September 28, 2009
Carnegie-Mellon University, Pittsburgh, PA.
MIT Media Lab Colloquium, January 25, 2010
Massachusetts Institute of Technology, Cambridge, MA.
Stanford Media X Philips Seminar, February 1, 2010
Stanford University, Stanford, CA.
Pixar Research Seminar, February 2, 2010


Run-Time Behavior Adaptation for Real-Time Interactive Games

Intelligent agents working in real-time domains need to adapt to changing circumstances so that they can improve their performance and avoid repeating their mistakes. AI agents designed for interactive games, however, typically lack this ability. Game agents are traditionally implemented using static, hand-authored behaviors or scripts that are brittle to changing world dynamics and cause a break in player experience when they repeatedly fail. Furthermore, their static nature demands considerable effort from game designers, who must anticipate every circumstance the agent might encounter. The problem is exacerbated because state-of-the-art computer games have huge decision spaces, interactive user input, and real-time performance requirements that make creating AI approaches for these domains harder.

In this paper we address the non-adaptivity of game-playing agents in complex real-time domains. The agents carry out run-time adaptation of their behavior sets by monitoring and reasoning about their behavior execution in order to dynamically revise the behaviors. The behavior adaptation approach has been instantiated in two real-time interactive game domains. The evaluation results show that the agents in both domains successfully adapt by revising their behavior sets appropriately.

Read the paper:

Run-Time Behavior Adaptation for Real-Time Interactive Games

by Manish Mehta, Ashwin Ram

IEEE Transactions on Computational Intelligence and AI in Games, Vol. 1, No. 3, September 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-09-09.pdf

Emotional Memory and Adaptive Personalities

Believable agents designed for long-term interaction with human users need to adapt to them in a way which appears emotionally plausible while maintaining a consistent personality. For short-term interactions in restricted environments, scripting and state machine techniques can create agents with emotion and personality, but these methods are labor intensive, hard to extend, and brittle in new environments. Fortunately, research in memory, emotion and personality in humans and animals points to a solution to this problem. Emotions focus an animal’s attention on things it needs to care about, and strong emotions trigger enhanced formation of memory, enabling the animal to adapt its emotional response to the objects and situations in its environment. In humans this process becomes reflective: emotional stress or frustration can trigger re-evaluating past behavior with respect to personal standards, which in turn can lead to setting new strategies or goals.

To aid the authoring of adaptive agents, we present an artificial intelligence model inspired by these psychological results in which an emotion model triggers case-based emotional preference learning and behavioral adaptation guided by personality models. Our tests of this model on robot pets and embodied characters show that emotional adaptation can extend the range and increase the behavioral sophistication of an agent without the need for authoring additional hand-crafted behaviors.
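
The central mechanism described above, strong emotions gating what gets remembered, can be illustrated with a small sketch (the class, thresholds, and example stimuli are hypothetical, not the paper's model): each experience is stored with a strength tied to its emotional intensity, mild experiences leave no trace, and recall returns the strongest stored response.

```python
# Hedged illustration of emotion-gated memory formation: only
# sufficiently intense experiences are retained, and stronger
# emotions overwrite weaker memories of the same stimulus.

class EmotionalMemory:
    def __init__(self, store_threshold=0.3):
        self.store_threshold = store_threshold
        self.memories = {}  # stimulus -> (response, strength)

    def experience(self, stimulus, response, intensity):
        if intensity < self.store_threshold:
            return  # too mild to form a lasting memory
        _, old_strength = self.memories.get(stimulus, (None, 0.0))
        if intensity >= old_strength:
            self.memories[stimulus] = (response, intensity)

    def recall(self, stimulus, default="neutral"):
        return self.memories.get(stimulus, (default, 0.0))[0]

pet = EmotionalMemory()
pet.experience("vacuum cleaner", "fear", 0.9)  # strong emotion: remembered
pet.experience("red ball", "joy", 0.1)         # mild emotion: forgotten
print(pet.recall("vacuum cleaner"))            # learned emotional response
print(pet.recall("red ball"))                  # falls back to default
```

In this toy version the robot pet now reacts fearfully to the vacuum cleaner without any hand-authored behavior for it, which is the adaptation effect the model is after.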

Read the paper:

Emotional Memory and Adaptive Personalities

by Anthony Francis, Manish Mehta, Ashwin Ram

Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence, IGI Global, 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-08-10.pdf

Creating Behavior Authoring Environments for Everyday Users

The design of interactive experiences is increasingly important in our society. Examples include interactive media, computer games, and interactive portals. There is increasing interest in modes of interaction with virtual characters, as they represent a natural way for humans to interact. Creating such characters is a complex task, requiring both creative skills (to design personalities, emotions, gestures, behaviors) and programming skills (to code these in a scripting or programming language). There is little understanding of how the behavior authoring process can be simplified with easy-to-use authoring environments that can support the cognitive needs of everyday users and help them at every step to easily carry out this creative task.

Our research focuses on behavior authoring environments that not only make it easy for novices/everyday users to create characters but also provide them scaffolding in designing these interactive experiences. In this paper we present results from a user study with a paper prototype of an authoring environment that is aimed to allow everyday users to create virtual characters. The study aims at determining whether typical computer users are able to create character personalities in specific scenarios and think about characters’ mental states, and if so, then what kinds of user interfaces would be suitable for this authoring environment.

Read the paper:

Creating Behavior Authoring Environments for Everyday Users

by Manish Mehta, Christina Lacey, Iulian Radu, Abhishek Jain, Ashwin Ram

International Conference on Computer Games, Multimedia, and Allied Technologies (CGAT-09), Singapore, May 2009
www.cc.gatech.edu/faculty/ashwin/papers/er-09-01.pdf

Adaptive Computer Games: Easing the Authorial Burden

Game designers usually create AI behaviors by writing scripts that describe the reactions to all imaginable circumstances within the confines of the game world. The AI Programming Wisdom series provides a good overview of current scripting techniques used in the game industry. Scripting is expensive, and it is hard to plan for every eventuality, so behaviors can be repetitive (breaking the atmosphere) or can fail to achieve their desired purpose. On one hand, creating AI with a rich behavior set requires a great deal of engineering effort on the part of game developers. On the other hand, the rich and dynamic nature of game worlds makes it hard to imagine and plan for all possible scenarios. When behaviors fail to achieve their desired purpose, the game AI is unable to identify such failures and will continue executing them. The techniques described in this article deal specifically with these issues.

Behavior (or script) creation for computer games typically involves two steps: a) generating a first version of the behaviors using a programming language, and b) debugging and adapting the behaviors through experimentation. In this article we present two techniques that relieve the author of carrying out these steps manually: behavior learning and behavior adaptation.

In the behavior learning process, the game developers can specify the AI behavior by demonstrating it to the system instead of having to code the behavior using a programming language. The system extracts behaviors from these expert demonstrations and stores them. Then, at performance time, the system retrieves appropriate behaviors observed from the expert and revises them in response to the current situation it is dealing with (i.e., to the current game state).
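
A minimal sketch of this store-and-retrieve idea (the class, similarity measure, and example states below are hypothetical, not the article's system): the expert's play is logged as (game state, action) pairs, and at performance time the pair whose recorded state best matches the current state supplies the behavior.

```python
# Hypothetical sketch of behavior learning from demonstration:
# expert play is stored as (state, action) examples, and the action
# of the most similar stored state is retrieved at performance time.

def state_similarity(a, b):
    # Fraction of shared state features on which the two states agree.
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(a[k] == b[k] for k in shared) / len(shared)

class DemonstrationLibrary:
    def __init__(self):
        self.examples = []  # list of (game_state, action) pairs

    def record(self, game_state, action):
        self.examples.append((game_state, action))

    def retrieve(self, current_state):
        best = max(self.examples,
                   key=lambda ex: state_similarity(ex[0], current_state))
        return best[1]

lib = DemonstrationLibrary()
lib.record({"gold": "low", "enemies": "near"}, "build_defenses")
lib.record({"gold": "high", "enemies": "far"}, "expand_base")
print(lib.retrieve({"gold": "high", "enemies": "far"}))
```

The revision step mentioned above would then adapt the retrieved behavior to the differences between the stored state and the current one; this sketch shows only the retrieval half.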

In the behavior adaptation process, the system monitors the performance of these learned behaviors at runtime. It keeps track of the status of the executing behaviors, infers from their execution trace what might be wrong, and performs appropriate adaptations to the behaviors once the game is over. This approach to behavior transformation enables the game AI to reflect on issues in the behaviors learned from expert demonstration and revise them after analyzing what went wrong during the game. Together, these techniques allow non-AI experts to define behaviors through demonstration that can then be adapted to different situations, reducing the development effort required to address all contingencies in a complex game.

Read the paper:

Adaptive Computer Games: Easing the Authorial Burden

by Manish Mehta, Santi Ontañón, Ashwin Ram

AI Game Programming Wisdom 4 (AIGPW4), Steve Rabin (editor), Charles River Media, 2008
www.cc.gatech.edu/faculty/ashwin/papers/er-08-03.pdf

Emotionally Driven Natural Language Generation for Personality Rich Characters in Interactive Games

Natural Language Generation (NLG) for personality-rich characters represents one of the important directions for believable agents research. The typical approach to interactive NLG is to hand-author textual responses to different situations. In this paper we address NLG for interactive games. Specifically, we present a novel template-based system that provides two distinct advantages over existing systems. First, our system not only handles dialogue but also enables a character’s personality and emotional state to influence the feel of the utterance. Second, our templates are reusable across characters, thus decreasing the burden on the game author. We briefly describe our system and present results of a preliminary evaluation study.
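
The template idea can be made concrete with a small sketch (the template, character profile, and slot names are invented for illustration, not the paper's formalism): one reusable template is filled with emotion-specific phrasing drawn from a character's personality profile, so the same template serves many characters.

```python
# Hypothetical sketch of emotion-conditioned template NLG: the
# character's profile maps each emotional state to phrasing choices
# that fill the slots of a character-independent template.

TEMPLATE = "{exclamation} You {verb} the {object}."

def generate(character, emotion, obj):
    style = character[emotion]
    return TEMPLATE.format(exclamation=style["exclamation"],
                           verb=style["verb"],
                           object=obj)

grumpy_shopkeeper = {
    "angry":   {"exclamation": "Hey!", "verb": "broke"},
    "pleased": {"exclamation": "Ah,",  "verb": "found"},
}

print(generate(grumpy_shopkeeper, "angry", "vase"))
# "Hey! You broke the vase."
```

Swapping in a different character profile changes the feel of the utterance without touching the template, which is what makes the templates reusable across characters.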

Read the paper:

Emotionally Driven Natural Language Generation for Personality Rich Characters in Interactive Games

by Christina Strong, Kinshuk Mishra, Manish Mehta, Alistair Jones, Ashwin Ram

Third Conference on Artificial Intelligence for Interactive Digital Entertainment (AIIDE-07), Stanford, CA, June 2007
www.cc.gatech.edu/faculty/ashwin/papers/er-07-09.pdf

Towards Runtime Behavior Adaptation for Embodied Characters

Typically, autonomous believable agents are implemented using static, hand-authored reactive behaviors or scripts. This hand-authoring allows designers to craft expressive behavior for characters, but can lead to excessive authorial burden, as well as result in characters that are brittle to changing world dynamics.

In this paper we present an approach for the runtime adaptation of reactive behaviors for autonomous believable characters. Extending transformational planning, our system allows autonomous characters to monitor and reason about their behavior execution, and to use this reasoning to dynamically rewrite their behaviors. In our evaluation, we transplant two characters in a sample tag game from the world they were originally written for into a different one, resulting in behavior that violates the author-intended personality. The reasoning layer successfully adapts each character’s behaviors so as to bring its long-term behavior back into agreement with its personality.

Read the paper:

Towards Runtime Behavior Adaptation for Embodied Characters

by Peng Zang, Manish Mehta, Michael Mateas, Ashwin Ram

International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, January 2007
www.cc.gatech.edu/faculty/ashwin/papers/er-07-02.pdf