Posts Tagged ‘meta-reasoning’

Towards Runtime Behavior Adaptation for Embodied Characters

Typically, autonomous believable agents are implemented using static, hand-authored reactive behaviors or scripts. This hand-authoring allows designers to craft expressive behavior for characters, but it can impose an excessive authorial burden and result in characters that are brittle to changing world dynamics.

In this paper we present an approach to the runtime adaptation of reactive behaviors for autonomous believable characters. Extending transformational planning, our system allows autonomous characters to monitor and reason about their behavior execution, and to use this reasoning to dynamically rewrite their behaviors. In our evaluation, we transplant two characters in a sample tag game from the world they were originally written for into a different one, resulting in behavior that violates the author-intended personality. The reasoning layer successfully adapts each character's behaviors so as to bring its long-term behavior back into agreement with its personality.
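
To make this concrete, below is a minimal Python sketch of a monitor-and-rewrite reasoning layer of this general shape. Everything in it (the AdaptiveCharacter class, the tag-game constraint, the rewrite rule) is an invented illustration, not the paper's actual architecture or behavior language.

    from typing import Callable, Dict, List, Tuple

    # Illustrative sketch only; names and structure are invented.
    Trace = List[str]  # history of executed behavior names

    class AdaptiveCharacter:
        """A reactive character plus a reasoning layer that monitors its own
        execution trace and rewrites behaviors when long-term behavior
        drifts from the author-intended personality."""

        def __init__(self, behaviors: Dict[str, Callable[[], None]]):
            self.behaviors = behaviors
            self.trace: Trace = []
            # (violation predicate over the trace, behavior to rewrite, rewrite)
            self.repairs: List[Tuple[Callable[[Trace], bool], str,
                                     Callable[[Callable], Callable]]] = []

        def add_personality_constraint(self, violated, target, rewrite) -> None:
            self.repairs.append((violated, target, rewrite))

        def step(self, name: str) -> None:
            self.behaviors[name]()          # run one reactive behavior
            self.trace.append(name)
            self._monitor()

        def _monitor(self) -> None:
            # Reason over the execution trace; rewrite offending behaviors.
            for violated, target, rewrite in self.repairs:
                if violated(self.trace):
                    self.behaviors[target] = rewrite(self.behaviors[target])

    # Toy tag-game use: a character written to be an eager chaser. If, in a
    # new world, its trace shows it mostly doing something else, the 'idle'
    # behavior is rewritten to chase instead.
    char = AdaptiveCharacter({"chase": lambda: None, "idle": lambda: None})
    char.add_personality_constraint(
        violated=lambda t: len(t) >= 10 and t.count("chase") / len(t) < 0.5,
        target="idle",
        rewrite=lambda old: (lambda: char.behaviors["chase"]()))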

Read the paper:

Towards Runtime Behavior Adaptation for Embodied Characters

by Peng Zang, Manish Mehta, Michael Mateas, Ashwin Ram

International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, January 2007
www.cc.gatech.edu/faculty/ashwin/papers/er-07-02.pdf

Introspective Multistrategy Learning: On the Construction of Learning Strategies

A central problem in multistrategy learning systems is the selection and sequencing of machine learning algorithms for particular situations. This is typically done by the system designer, who analyzes the learning task and implements the appropriate algorithm or sequence of algorithms for that task. We propose a solution to this problem that enables an AI system with a library of machine learning algorithms to select and sequence appropriate algorithms autonomously. Furthermore, instead of relying on the system designer or user to provide a learning goal or target concept to the learning system, our method enables the system to determine its learning goals based on an analysis of its successes and failures at the performance task.

The method involves three steps: Given a performance failure, the learner examines a trace of its reasoning prior to the failure to diagnose what went wrong (blame assignment); given the resultant explanation of the reasoning failure, the learner posts explicitly represented learning goals to change its background knowledge (deciding what to learn); and given a set of learning goals, the learner uses nonlinear planning techniques to assemble a sequence of machine learning algorithms, represented as planning operators, to achieve the learning goals (learning-strategy construction). In support of these operations, we define the types of reasoning failures, a taxonomy of failure causes, a second-order formalism to represent reasoning traces, a taxonomy of learning goals that specify desired change to the background knowledge of a system, and a declarative task-formalism representation of learning algorithms.
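
As a schematic illustration (not Meta-AQUA's actual taxonomies or planner), the following Python sketch wires the three steps together with toy tables; the failure causes, learning goals, and operators are invented stand-ins, and the one-operator-per-goal "planner" merely hints at what the nonlinear planner does.

    from dataclasses import dataclass
    from typing import Dict, List

    # Toy illustration; causes, goals, and operators are invented.
    @dataclass
    class Explanation:
        cause: str                       # diagnosed cause of the failure

    @dataclass
    class LearningGoal:
        desired_change: str              # desired change to background knowledge

    def assign_blame(reasoning_trace: List[str]) -> Explanation:
        # Step 1 (blame assignment): examine the trace of reasoning prior
        # to the failure and explain what went wrong.
        if "expectation-violated" in reasoning_trace:
            return Explanation(cause="incorrect-domain-knowledge")
        return Explanation(cause="missing-association")

    GOALS_FOR_CAUSE: Dict[str, List[LearningGoal]] = {
        "incorrect-domain-knowledge": [LearningGoal("revise-faulty-rule")],
        "missing-association": [LearningGoal("index-new-case"),
                                LearningGoal("generalize-explanation")],
    }

    def decide_what_to_learn(expl: Explanation) -> List[LearningGoal]:
        # Step 2 (deciding what to learn): post explicit learning goals
        # for the diagnosed cause.
        return GOALS_FOR_CAUSE[expl.cause]

    # Each learning algorithm is treated as an operator achieving a goal.
    OPERATOR_ACHIEVES: Dict[str, str] = {
        "explanation-based-generalization": "generalize-explanation",
        "case-indexing": "index-new-case",
        "rule-revision": "revise-faulty-rule",
    }

    def construct_strategy(goals: List[LearningGoal]) -> List[str]:
        # Step 3 (learning-strategy construction): assemble a sequence of
        # learning algorithms; a stand-in for the nonlinear planner, which
        # would also order operators to avoid negative interactions.
        return [op for goal in goals
                for op, achieved in OPERATOR_ACHIEVES.items()
                if achieved == goal.desired_change]

    strategy = construct_strategy(
        decide_what_to_learn(assign_blame(["expectation-violated"])))
    print(strategy)  # ['rule-revision']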

We present the Meta-AQUA system, an implemented multistrategy learner that operates in the domain of story understanding. Extensive empirical evaluations of Meta-AQUA show that it performs significantly better in a deliberative, planful mode than in a reflexive mode in which learning goals are ablated and, furthermore, that the arbitrary ordering of learning algorithms can lead to worse performance than no learning at all. We conclude that explicit representation and sequencing of learning goals is necessary for avoiding negative interactions between learning algorithms that can lead to less effective learning.

Read the paper:

Introspective Multistrategy Learning: On the Construction of Learning Strategies

by Mike Cox, Ashwin Ram

Artificial Intelligence, 112:1-55, 1999
www.cc.gatech.edu/faculty/ashwin/papers/er-99-01.pdf

Introspective Multistrategy Learning: Constructing a Learning Strategy under Reasoning Failure

The thesis put forth by this dissertation is that introspective analyses facilitate the construction of learning strategies. Furthermore, learning is much like nonlinear planning and problem solving. Like problem solving, it can be specified by a set of explicit learning goals (i.e., desired changes to the reasoner’s knowledge); these goals can be achieved by constructing a plan from a set of operators (the learning algorithms) that execute in a knowledge space. However, in order to specify learning goals and to avoid negative interactions between operators, a reasoner requires a model of its reasoning processes and knowledge.

With such a model, the reasoner can declaratively represent the events and causal relations of its mental world in the same manner that it represents events and relations in the physical world. This representation enables introspective self-examination, which contributes to learning by providing a basis for identifying what needs to be learned when reasoning fails. A multistrategy system possessing several learning algorithms can decide what to learn, and which algorithm(s) to apply, by analyzing the model of its reasoning. This introspective analysis therefore allows the learner to understand its reasoning failures, to determine the causes of the failures, to identify needed knowledge repairs to avoid such failures in the future, and to build a learning strategy (plan).

Thus, the research goal is to develop both a content theory and a process theory of introspective multistrategy learning and to establish the conditions under which such an approach is fruitful. Empirical experiments provide results that support the claims herein. The theory was implemented in a computational model called Meta-AQUA that attempts to understand simple stories. The system uses case-based reasoning to explain reasoning failures and to generate sets of learning goals, and it uses a standard nonlinear planner to achieve these goals.

Evaluating Meta-AQUA with and without learning goals produced results indicating that computational introspection facilitates the learning process. In particular, the results lead to the conclusion that posting learning goals is a necessary stage if negative interactions between learning methods are to be avoided and if learning is to remain effective.

Read the thesis:

Introspective Multistrategy Learning: Constructing a Learning Strategy under Reasoning Failure

by Michael T. Cox

PhD Thesis, Technical Report GIT-CC-96/06, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1996
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-96-06.pdf

Goal-Driven Learning in Multistrategy Reasoning and Learning Systems

This chapter presents a computational model of introspective multistrategy learning, a deliberative or strategic learning process in which a reasoner introspects about its own performance to decide what to learn and how to learn it. The reasoner examines its performance on a reasoning task, assigns credit or blame for that performance, identifies what it needs to learn to improve, formulates learning goals to acquire the required knowledge, and pursues those goals using multiple learning strategies. Our theory models a process of learning that is active, experiential, opportunistic, diverse, and introspective. This chapter also describes two computer systems that implement our theory: one that learns diagnostic knowledge during a troubleshooting task, and one that learns multiple kinds of causal and explanatory knowledge during a story understanding task.

Read the paper:

Goal-Driven Learning in Multistrategy Reasoning and Learning Systems

by Ashwin Ram, Mike Cox, S. Narayanan

In A. Ram & D. Leake (eds.), Goal-Driven Learning, chapter 18, MIT Press/Bradford Books, 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-04.pdf

Learning, Goals, and Learning Goals

In cognitive science, artificial intelligence, psychology, and education, a growing body of research supports the view that the learning process is strongly influenced by the learner’s goals. Investigators in each of these areas have independently pursued the common issues of how learning goals arise, how they affect learner decisions of when and what to learn, and how they guide the learning process. The fundamental tenet of goal-driven learning is that learning is largely an active and strategic process in which the learner, human or machine, attempts to identify and satisfy its information needs in the context of its tasks and goals, its prior knowledge, its capabilities, and environmental opportunities for learning.

This chapter discusses fundamental questions for goal-driven learning: the motivations for adopting a goal-driven model of learning, the basic goal-driven learning framework, the specific issues raised by the framework that a theory of goal-driven learning must address, the types of goals that can influence learning, the types of influences those goals can have on learning, and the pragmatic implications of the goal-driven learning model.

Read the paper:

Learning, Goals, and Learning Goals

by Ashwin Ram, David Leake

In A. Ram & D. Leake (eds.), Goal-Driven Learning, chapter 1, MIT Press/Bradford Books, 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-03.pdf

Goal-Driven Learning

In cognitive science, artificial intelligence, psychology, and education, a growing body of research supports the view that the learning process is strongly influenced by the learner’s goals. The fundamental tenet of goal-driven learning is that learning is largely an active and strategic process in which the learner, human or machine, attempts to identify and satisfy its information needs in the context of its tasks and goals, its prior knowledge, its capabilities, and environmental opportunities for learning. This book brings together a diversity of research on goal-driven learning to establish a broad, interdisciplinary framework that describes the goal-driven learning process. It collects and solidifies existing results on this important issue in machine and human learning and presents a theoretical framework for future investigations.

The book opens with an overview of goal-driven learning research and computational and cognitive models of the goal-driven learning process. This introduction is followed by a collection of fourteen recent research articles addressing fundamental issues of the field, including psychological and functional arguments for modeling learning as a deliberative process; experimental evaluation of the benefits of utility-based analysis to guide decisions about what to learn; case studies of computational models in which learning is driven by reasoning about learning goals; psychological evidence for human goal-driven learning; and the ramifications of goal-driven learning in educational contexts.

The second part of the book presents six position papers reflecting ongoing research and current issues in goal-driven learning. Issues discussed include methods for pursuing psychological studies of goal-driven learning, frameworks for the design of active and multistrategy learning systems, and methods for selecting and balancing the goals that drive learning.

Find the book:

Goal-Driven Learning

edited by Ashwin Ram, David Leake

MIT Press/Bradford Books, Cambridge, MA, 1995, ISBN 978-0-262-18165-5
mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=8349

Preview the book: books.google.com/books?id=5vo9zMJRnMwC

Table of Contents

Preface by Professor Tom Mitchell
Editors’ Preface
Chapter 1: Learning, Goals, and Learning Goals, Ram, Leake

Part I: Current state of the field

Chapter 2: Planning to Learn, Hunter
Chapter 3: Quantitative Results Concerning the Utility of Explanation-Based Learning, Minton
Chapter 4: The Use of Explicit Goals for Knowledge to Guide Inference and Learning, Ram, Hunter
Chapter 5: Deriving Categories to Achieve Goals, Barsalou
Chapter 6: Harpoons and Long Sticks: The Interaction of Theory and Similarity in Rule Induction, Wisniewski, Medin
Chapter 7: Introspective Reasoning using Meta-Explanations for Multistrategy Learning, Ram, Cox
Chapter 8: Goal-Directed Learning: A Decision-Theoretic Model for Deciding What to Learn Next, desJardins
Chapter 9: Goal-Based Explanation Evaluation, Leake
Chapter 10: Planning to Perceive, Pryor, Collins
Chapter 11: Learning and Planning in PRODIGY: Overview of an Integrated Architecture, Carbonell, Etzioni, Gil, Joseph, Knoblock, Minton, Veloso
Chapter 12: A Learning Model for the Selection of Problem Solving Strategies in Continuous Physical Systems, Xia, Yeung
Chapter 13: Explicitly Biased Generalization, Gordon, Perlis
Chapter 14: Three Levels of Goal Orientation in Learning, Ng, Bereiter
Chapter 15: Characterising the Application of Computer Simulations in Education: Instructional Criteria, van Berkum, Hijne, de Jong, van Joolingen, Njoo

Part II: Current research and recent directions

Chapter 16: Goal-Driven Learning: Fundamental Issues and Symposium Report, Leake, Ram
Chapter 17: Storage Side Effects: Studying Processing to Understand Learning, Barsalou
Chapter 18: Goal-Driven Learning in Multistrategy Reasoning and Learning Systems, Ram, Cox, Narayanan
Chapter 19: Inference to the Best Plan: A Coherence Theory of Decision, Thagard, Millgram
Chapter 20: Towards Goal-Driven Integration of Explanation and Action, Leake
Chapter 21: Learning as Goal-Driven Inference, Michalski, Ram

Learning to Troubleshoot: Multistrategy Learning of Diagnostic Knowledge for a Real-World Problem Solving Task

This article presents a computational model of the learning of diagnostic knowledge, based on observations of human operators engaged in a real-world troubleshooting task. We present a model of problem solving and learning in which the reasoner introspects about its own performance on the problem-solving task, identifies what it needs to learn to improve its performance, formulates learning goals to acquire the required knowledge, and pursues its learning goals using multiple learning strategies. The model is implemented in a computer system that provides a case study based on observations of troubleshooting operators and protocol analysis of data gathered in the test area of an operational electronics manufacturing plant. The model is intended as a computational model of human learning; in addition, it is computationally justified as a uniform, extensible framework for multistrategy learning.

Read the paper:

Learning to Troubleshoot: Multistrategy Learning of Diagnostic Knowledge for a Real-World Problem Solving Task

by Ashwin Ram, S. Narayanan, Mike Cox

Cognitive Science, 19(3):289-340, 1995
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-93-67.pdf

Interacting Learning-Goals: Treating Learning as a Planning Task

This research examines the metaphor of goal-driven planning as a tool for integrating multiple learning algorithms. In case-based reasoning systems, several learning techniques may apply to a given situation. In a failure-driven learning environment, the problem of strategy construction is to choose and order the best set of learning algorithms, or strategies, for recovering from a processing failure, and to use those strategies to modify the system's background knowledge so that the failure will not be repeated in similar future situations.

A solution to this problem is to treat learning-strategy construction as a planning problem with its own set of goals. Learning goals, as opposed to ordinary goals, specify desired states in the background knowledge of the learner rather than desired states in the external environment of the planner. But as with traditional goal-based planners, the management and pursuit of these learning goals becomes a central issue in learning. Example interactions of learning goals are presented from a multistrategy learning system called Meta-AQUA, which combines a case-based approach to learning with nonlinear planning to achieve goals in a knowledge space.
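
The following Python sketch illustrates this framing under invented names: each learning algorithm is cast as a STRIPS-style operator whose preconditions and effects are states of the learner's background knowledge (BK) rather than of the external world, and a trivial forward-chainer stands in for the nonlinear planner.

    from dataclasses import dataclass
    from typing import FrozenSet, List, Set

    # Invented operators; a stand-in for Meta-AQUA's actual planner.
    @dataclass(frozen=True)
    class LearningOperator:
        name: str
        preconditions: FrozenSet[str]   # required facts about the BK
        effects: FrozenSet[str]         # resulting changes to the BK

    OPERATORS = [
        LearningOperator("explain-failure",
                         frozenset({"have-reasoning-trace"}),
                         frozenset({"have-explanation"})),
        LearningOperator("generalize-explanation",
                         frozenset({"have-explanation"}),
                         frozenset({"bk-has-general-rule"})),
        LearningOperator("index-case",
                         frozenset({"have-explanation"}),
                         frozenset({"bk-case-retrievable"})),
    ]

    def plan(state: Set[str], goals: Set[str]) -> List[str]:
        # Forward-chain to the learning goals; a real nonlinear planner
        # would also resolve interactions between operators.
        steps: List[str] = []
        while not goals <= state:
            op = next(o for o in OPERATORS
                      if o.preconditions <= state and not o.effects <= state)
            state |= op.effects
            steps.append(op.name)
        return steps

    # Learning goals name desired states of the BK, not of the world:
    print(plan({"have-reasoning-trace"},
               {"bk-has-general-rule", "bk-case-retrievable"}))
    # -> ['explain-failure', 'generalize-explanation', 'index-case']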

Read the paper:

Interacting Learning-Goals: Treating Learning as a Planning Task

by Mike Cox, Ashwin Ram

In J.-P. Haton, M. Keane, & M. Manago (eds.), Advances in Case-Based Reasoning (Lecture Notes in Artificial Intelligence), 60-74, Springer-Verlag, 1995. An earlier version was presented at the Second European Workshop on Case-Based Reasoning (EWCBR-94), Chantilly, France, 1994.
www.cc.gatech.edu/faculty/ashwin/papers/er-95-09.ps

Failure-Driven Learning as Input Bias

Self-selection of input examples on the basis of performance failure is a powerful bias for learning systems. The definition of what constitutes a learning bias, however, has typically been restricted to the bias provided by the input language, the hypothesis language, and preference criteria between competing concept hypotheses. But if bias is taken in the broader sense as any basis that provides a preference for one concept change over another, then the paradigm of failure-driven processing indeed provides a bias: examples of failure are selected from the input stream, and successful performance is filtered out. We show that there are fewer degrees of freedom in failure-driven learning than in success-driven learning, and that learning is facilitated because of this constraint. We also broaden the definition of failure, provide a novel taxonomy of failure causes, and illustrate the interaction of both in a multistrategy learning system called Meta-AQUA.
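
A minimal Python sketch of this input bias, with an invented toy task: the learner's stream is filtered so that only examples on which the current theory fails ever reach the concept-change step.

    from typing import Callable, Iterable, Iterator, List, Tuple

    # Invented toy task; the point is the filter, not the learner.
    Example = Tuple[List[int], str]      # (features, correct label)

    def failure_filter(stream: Iterable[Example],
                       predict: Callable[[List[int]], str]) -> Iterator[Example]:
        # Yield only the examples the current theory gets wrong;
        # successful performance is filtered out and never reaches
        # the learner.
        for features, label in stream:
            if predict(features) != label:   # a performance failure
                yield (features, label)

    stream = [([1, 0], "dog"), ([0, 1], "cat"), ([1, 1], "dog")]
    always_dog = lambda features: "dog"      # a deliberately poor theory
    print(list(failure_filter(stream, always_dog)))  # [([0, 1], 'cat')]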

Read the paper:

Failure-Driven Learning as Input Bias

by Mike Cox, Ashwin Ram

Sixteenth Annual Conference of the Cognitive Science Society, Atlanta, GA, August 1994
www.cc.gatech.edu/faculty/ashwin/papers/er-94-09.pdf

Introspective Reasoning using Meta-Explanations for Multistrategy Learning

In order to learn effectively, a reasoner must not only possess knowledge about the world and be able to improve that knowledge, but must also introspectively reason about how it performs a given task and about which particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires declarative representations of meta-knowledge: of the reasoning performed by the system during the performance task, of the system's knowledge, and of the organization of this knowledge.

This paper presents a taxonomy of possible reasoning failures that can occur during a performance task, declarative representations of these failures, and associations between failures and particular learning strategies. The theory is based on Meta-XPs, which are explanation structures that help the system identify failure types, formulate learning goals, and choose appropriate learning strategies in order to avoid similar mistakes in the future. The theory is implemented in a computer model of an introspective reasoner that performs multistrategy learning during a story understanding task.
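
The following Python sketch shows the flavor of such an association under invented names and contents: a small library of declarative meta-explanation structures, each linking a failure type to suggested learning goals and to the strategies that achieve them. It is not the paper's actual Meta-XP formalism.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class MetaXP:
        # A declarative meta-explanation linking a failure type to the
        # learning goals it suggests and strategies that achieve them.
        # Contents here are invented illustrations.
        failure_type: str
        learning_goals: List[str]
        strategies: List[str]

    META_XP_LIBRARY: Dict[str, MetaXP] = {
        "expectation-failure": MetaXP(
            "expectation-failure",
            ["correct the faulty expectation"],
            ["explanation-based refinement"]),
        "retrieval-failure": MetaXP(
            "retrieval-failure",
            ["make the relevant case retrievable"],
            ["re-indexing"]),
    }

    def introspect(failure_type: str) -> MetaXP:
        # Recognize the failure, then read the learning goals and the
        # learning strategies off the matching meta-explanation.
        return META_XP_LIBRARY[failure_type]

    xp = introspect("retrieval-failure")
    print(xp.learning_goals, xp.strategies)
    # ['make the relevant case retrievable'] ['re-indexing']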

Read the paper:

Introspective Reasoning using Meta-Explanations for Multistrategy Learning

by Ashwin Ram, Mike Cox

In Machine Learning: A Multistrategy Approach, Vol. IV, R.S. Michalski and G. Tecuci (eds.), 349-377, Morgan Kaufmann, 1994
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-19.pdf