Posts Tagged ‘goal-driven learning’

Using First Order Inductive Learning as an Alternative to a Simulator in a Game Artificial Intelligence

Currently, many game artificial intelligences attempt to determine their next moves by using a simulator to predict the effects of actions in the world. However, writing such a simulator is time-consuming, and the simulator must be changed substantially whenever a detail of the game design is modified. This research project therefore set out to determine whether a version of the first order inductive learning algorithm could be used to learn rules that could then take the place of a simulator.
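As a rough illustration of the idea, the sketch below shows the core greedy loop of first order inductive learning (FOIL) in a simplified, propositional form. The domain features, the "attack" action, and the example data are all invented for illustration; this is not Darmok 2's actual code, and real FOIL operates over first-order literals rather than boolean features.

```python
# Illustrative sketch: learn an "effect rule" for one game action from
# observed transitions, using FOIL's greedy, gain-driven literal selection.
# The learned rule body can then predict the action's effect in place of
# a hand-written simulator. All feature names and data are hypothetical.
import math

def foil_gain(pos, neg, literal):
    """FOIL's information-gain measure for adding `literal` to the rule body."""
    p0, n0 = len(pos), len(neg)
    p1 = sum(1 for ex in pos if ex[literal])
    n1 = sum(1 for ex in neg if ex[literal])
    if p1 == 0:
        return -1.0  # literal covers no positives; useless
    before = math.log2(p0 / (p0 + n0))
    after = math.log2(p1 / (p1 + n1))
    return p1 * (after - before)

def learn_rule(pos, neg, literals):
    """Greedily add literals until the rule covers no negative examples."""
    body = []
    while neg and literals:
        best = max(literals, key=lambda l: foil_gain(pos, neg, l))
        body.append(best)
        pos = [ex for ex in pos if ex[best]]
        neg = [ex for ex in neg if ex[best]]
        literals = [l for l in literals if l != best]
    return body

# Observed pre-action states for an "attack" action:
# pos = the target unit died afterwards, neg = it survived.
pos = [{"in_range": True, "target_weak": True,  "has_ammo": True},
       {"in_range": True, "target_weak": True,  "has_ammo": True}]
neg = [{"in_range": True, "target_weak": False, "has_ammo": True},
       {"in_range": False, "target_weak": True, "has_ammo": True}]

rule = learn_rule(pos, neg, ["in_range", "target_weak", "has_ammo"])
```

On this toy data the learner keeps only the literals that separate kills from misses, discarding `has_ammo`, which is true in every example and so carries no gain.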

We used an existing game artificial intelligence system called Darmok 2. By eliminating the need to write a simulator for each game by hand, the Darmok 2 project could more easily adapt to additional real-time strategy games. Over time, Darmok 2 could also provide better competition for human players by training its artificial intelligences to play against the style of a specific player. Most importantly, this approach might offer a general solution for creating game artificial intelligences, which could save game development companies substantial money, time, and effort.

Read the thesis:

Using First Order Inductive Learning as an Alternative to a Simulator in a Game Artificial Intelligence

by Katie Long

Undergraduate Thesis, College of Computing, Georgia Institute of Technology, Atlanta, GA, 2009

New Directions in Goal-Driven Learning

Goal-Driven Learning (GDL) views learning as a strategic process in which the learner attempts to identify and satisfy its learning needs in the context of its tasks and goals. This is modeled as a planful process where the learner analyzes its reasoning traces to identify learning goals, and composes a set of learning strategies (modeled as planning operators) into a plan to learn by satisfying those learning goals.

Traditional GDL frameworks were based on traditional planners. However, modern AI systems often deal with real-time scenarios where learning and performance happen in a reactive real-time fashion, or are composed of multiple agents that use different learning and reasoning paradigms. In this talk, I discuss new GDL frameworks that handle such problems, incorporating reactive and multi-agent planning techniques in order to manage learning in these kinds of AI systems.

About this talk:

New Directions in Goal-Driven Learning

by Ashwin Ram

Invited keynote at International Conference on Machine Learning (ICML-08) Workshop on Planning to Learn, Helsinki, Finland, July 2008

Introspective Multistrategy Learning: On the Construction of Learning Strategies

A central problem in multistrategy learning systems is the selection and sequencing of machine learning algorithms for particular situations. This is typically done by the system designer who analyzes the learning task and implements the appropriate algorithm or sequence of algorithms for that task. We propose a solution to this problem which enables an AI system with a library of machine learning algorithms to select and sequence appropriate algorithms autonomously. Furthermore, instead of relying on the system designer or user to provide a learning goal or target concept to the learning system, our method enables the system to determine its learning goals based on analysis of its successes and failures at the performance task.

The method involves three steps: Given a performance failure, the learner examines a trace of its reasoning prior to the failure to diagnose what went wrong (blame assignment); given the resultant explanation of the reasoning failure, the learner posts explicitly represented learning goals to change its background knowledge (deciding what to learn); and given a set of learning goals, the learner uses nonlinear planning techniques to assemble a sequence of machine learning algorithms, represented as planning operators, to achieve the learning goals (learning-strategy construction). In support of these operations, we define the types of reasoning failures, a taxonomy of failure causes, a second-order formalism to represent reasoning traces, a taxonomy of learning goals that specify desired change to the background knowledge of a system, and a declarative task-formalism representation of learning algorithms.
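The third step above, learning-strategy construction, can be sketched in miniature: learning algorithms become operators with preconditions and effects over a knowledge state, and a planner chains them to satisfy a posted learning goal. The operator names and facts below are invented for illustration (this is not Meta-AQUA's code), and a breadth-first search stands in for the nonlinear planner described in the paper.

```python
# Illustrative sketch: learning algorithms as planning operators over a
# set of knowledge-state facts, chained by breadth-first search until a
# learning goal is satisfied. Operator and fact names are hypothetical.
from collections import deque

OPERATORS = {
    # name: (preconditions, effects), each a set of knowledge-state facts
    "explain_failure": ({"failure_trace"}, {"failure_explained"}),
    "generalize_case": ({"failure_explained"}, {"concept_generalized"}),
    "index_new_case":  ({"concept_generalized"}, {"case_indexed"}),
}

def plan_learning(initial, goal):
    """Return a sequence of operator names whose effects cover `goal`."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for name, (pre, eff) in OPERATORS.items():
            if pre <= state:
                nxt = frozenset(state | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None  # no sequence of learning algorithms achieves the goal

plan = plan_learning({"failure_trace"}, {"case_indexed"})
# plan == ["explain_failure", "generalize_case", "index_new_case"]
```

The point of the sketch is the representation, not the search: because each learning algorithm declares what it needs and what it changes, the planner can order them so that one algorithm's effects satisfy the next one's preconditions, avoiding the harmful arbitrary orderings discussed in the evaluation.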

We present the Meta-AQUA system, an implemented multistrategy learner that operates in the domain of story understanding. Extensive empirical evaluations of Meta-AQUA show that it performs significantly better in a deliberative, planful mode than in a reflexive mode in which learning goals are ablated and, furthermore, that the arbitrary ordering of learning algorithms can lead to worse performance than no learning at all. We conclude that explicit representation and sequencing of learning goals is necessary for avoiding negative interactions between learning algorithms that can lead to less effective learning.

Read the paper:

Introspective Multistrategy Learning: On the Construction of Learning Strategies

by Mike Cox, Ashwin Ram

Artificial Intelligence, 112:1-55, 1999

Invention as an Opportunistic Enterprise

This paper identifies goal handling processes that begin to account for the kinds of processes involved in invention. We identify new kinds of goals with special properties and mechanisms for processing such goals, as well as means of integrating opportunism, deliberation, and social interaction into goal/plan processes. We focus on invention goals, which address significant enterprises associated with an inventor. Invention goals represent “seed” goals of an expert, around which the expert’s whole knowledge gets reorganized and grows more or less opportunistically. Invention goals reflect the idiosyncrasy of thematic goals among experts. They constantly increase an individual’s sensitivity to particular events that might contribute to their satisfaction.

Our exploration is based on a well-documented example: the invention of the telephone by Alexander Graham Bell. We propose mechanisms to explain: (1) how Bell’s early thematic goals gave rise to the new goals to invent the multiple telegraph and the telephone, and (2) how the new goals interacted opportunistically. Finally, we describe our computational model, ALEC, that accounts for the role of goals in invention.

Invention as an Opportunistic Enterprise

by Marin Simina, Janet Kolodner, Ashwin Ram, Michael Gorman

19th Annual Conference of the Cognitive Science Society, Stanford, CA, August 1997

Case-Based Planning to Learn

Learning can be viewed as a problem of planning a series of modifications to memory. We adopt this view of learning and propose the applicability of the case-based planning methodology to the task of planning to learn. We argue that relatively simple, fine-grained primitive inferential operators are needed to support flexible planning. We show that it is possible to obtain the benefits of case-based reasoning within a planning to learn framework.
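The view of learning as planned modifications to memory can be made concrete with a toy example. Here memory is a simple mapping from concepts to feature sets, each primitive inferential operator is a small edit, and a "learning plan" is just an ordered sequence of such operators; all names and data are hypothetical and this is only a sketch of the idea, not the paper's system.

```python
# Illustrative sketch: "planning to learn" treats learning as a plan
# composed of fine-grained primitive operators that modify memory.
# Memory here is a dict of concept -> feature set; operator and concept
# names are invented for illustration.

def add_feature(memory, concept, feature):
    """Primitive operator: assert a feature of a concept."""
    memory.setdefault(concept, set()).add(feature)

def drop_feature(memory, concept, feature):
    """Primitive operator: retract a feature of a concept."""
    memory[concept].discard(feature)

def generalize(memory, concept_a, concept_b, new_concept):
    """Primitive operator: create a generalization of two concepts,
    keeping only their shared features."""
    memory[new_concept] = memory[concept_a] & memory[concept_b]

memory = {"sparrow": {"flies", "small", "brown"},
          "robin":   {"flies", "small", "red_breast"}}

# A learning plan: an ordered sequence of primitive inferential operators.
plan = [
    (generalize,  ("sparrow", "robin", "songbird")),
    (add_feature, ("songbird", "sings")),
]
for op, args in plan:
    op(memory, *args)

# memory["songbird"] == {"flies", "small", "sings"}
```

Because the operators are this fine-grained, whole learning episodes can be stored, retrieved, and adapted as cases, which is what makes the case-based planning methodology applicable to the learning task itself.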

Read the paper:

Case-Based Planning to Learn

by Bill Murdock, Gordon Shippey, Ashwin Ram

2nd International Conference on Case-Based Reasoning (ICCBR-97), Providence, RI, July 1997

Multi-Plan Retrieval and Adaptation in an Experience-Based Agent

The real world has many properties that present challenges for the design of intelligent agents: it is dynamic, unpredictable, and independent; it poses poorly structured problems; and it places bounds on the resources available to agents. Agents that operate in real worlds need a wide range of capabilities to deal with these challenges: memory, situation analysis, situativity, resource-bounded cognition, and opportunism.

We propose a theory of experience-based agency which specifies how an agent with the ability to richly represent and store its experiences could remember those experiences with a context-sensitive, asynchronous memory, incorporate those experiences into its reasoning on demand with integration mechanisms, and usefully direct memory and reasoning through the use of a utility-based metacontroller. We have implemented this theory in an architecture called NICOLE and have used it to address the problem of merging multiple plans during the course of case-based adaptation in least-commitment planning.

Read the paper:

Multi-Plan Retrieval and Adaptation in an Experience-Based Agent

by Ashwin Ram, Anthony Francis

In Case-Based Reasoning: Experiences, Lessons, and Future Directions, D.B. Leake, editor, AAAI Press, 1996

The Role of Student Tasks in Accessing Cognitive Media Types

We believe that identifying media by their cognitive roles (e.g., definition, explanation, pseudo-code, visualization) can improve comprehension and usability in hypermedia systems designed for learning. We refer to media links organized around their cognitive role as cognitive media types (Recker, Ram, Shikano, Li, & Stasko, 1995). Our hypothesis is that the goals that students bring to the learning task will affect how they will use the hypermedia support system (Ram & Leake, 1995).

We explored student use of a hypermedia system based on cognitive media types in which students performed different orienting tasks: undirected browsing, browsing in order to answer specific questions, problem-solving, and problem-solving with prompted self-explanations. We found significant differences in usage behavior between the problem-solving and browsing students, though no differences in learning outcomes.

Read the paper:

The Role of Student Tasks in Accessing Cognitive Media Types

by Mike Byrne, Mark Guzdial, Preetha Ram, Rich Catrambone, Ashwin Ram, John Stasko, Gordon Shippey, Florian Albrecht

Second International Conference on the Learning Sciences (ICLS-96), Evanston, IL, July 1996