Goal-Driven Learning in Multistrategy Reasoning and Learning Systems

This chapter presents a computational model of introspective multistrategy learning, a deliberative or strategic learning process in which a reasoner introspects about its own performance to decide what to learn and how to learn it. The reasoner monitors its performance on a reasoning task, assigns credit or blame for that performance, identifies what it needs to learn to improve, formulates learning goals to acquire the required knowledge, and pursues those goals using multiple learning strategies. Our theory models a process of learning that is active, experiential, opportunistic, diverse, and introspective. This chapter also describes two computer systems that implement our theory: one that learns diagnostic knowledge during a troubleshooting task, and one that learns multiple kinds of causal and explanatory knowledge during a story understanding task.
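The cycle summarized above can be rendered schematically. The sketch below is our own illustrative invention with hypothetical names, not the systems described in the chapter; it shows one pass of the cycle: perform the task, assign blame for a failure, formulate an explicit learning goal, and apply a learning strategy to the background knowledge.

```python
# Illustrative sketch of one introspective learning cycle (hypothetical
# names; not the chapter's implementation). Knowledge is a simple dict
# mapping questions to believed answers.

def introspect_and_learn(knowledge, question, expected):
    answer = knowledge.get(question)                  # perform the reasoning task
    if answer == expected:
        return None                                   # success: nothing to learn
    # blame assignment: was the relevant knowledge absent or incorrect?
    blame = "missing-knowledge" if answer is None else "incorrect-knowledge"
    goal = {"acquire": question, "because": blame}    # explicit learning goal
    # strategy selection would be keyed on the blame; in this toy model
    # both failures reduce to a single knowledge update
    knowledge[question] = expected
    return goal

kb = {"sky-color": "green"}
goal = introspect_and_learn(kb, "sky-color", "blue")
```

After the call, the learning goal records both what was acquired and why, which is what lets a multistrategy learner choose among repair strategies.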

Read the paper:

Goal-Driven Learning in Multistrategy Reasoning and Learning Systems

by Ashwin Ram, Mike Cox, S. Narayanan

In A. Ram & D. Leake (eds.), Goal-Driven Learning, chapter 18, MIT Press/Bradford Books, 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-04.pdf

Learning, Goals, and Learning Goals

In cognitive science, artificial intelligence, psychology, and education, a growing body of research supports the view that the learning process is strongly influenced by the learner’s goals. Investigators in each of these areas have independently pursued the common issues of how learning goals arise, how they affect learner decisions of when and what to learn, and how they guide the learning process. The fundamental tenet of goal-driven learning is that learning is largely an active and strategic process in which the learner, human or machine, attempts to identify and satisfy its information needs in the context of its tasks and goals, its prior knowledge, its capabilities, and environmental opportunities for learning.

This chapter discusses fundamental questions for goal-driven learning: the motivations for adopting a goal-driven model of learning, the basic goal-driven learning framework, the specific issues raised by the framework that a theory of goal-driven learning must address, the types of goals that can influence learning, the types of influences those goals can have on learning, and the pragmatic implications of the goal-driven learning model.

Read the paper:

Learning, Goals, and Learning Goals

by Ashwin Ram, David Leake

In A. Ram & D. Leake (eds.), Goal-Driven Learning, chapter 1, MIT Press/Bradford Books, 1995

www.cc.gatech.edu/faculty/ashwin/papers/er-95-03.pdf

Goal-Driven Learning

In cognitive science, artificial intelligence, psychology, and education, a growing body of research supports the view that the learning process is strongly influenced by the learner’s goals. The fundamental tenet of goal-driven learning is that learning is largely an active and strategic process in which the learner, human or machine, attempts to identify and satisfy its information needs in the context of its tasks and goals, its prior knowledge, its capabilities, and environmental opportunities for learning. This book brings together a diversity of research on goal-driven learning to establish a broad, interdisciplinary framework that describes the goal-driven learning process. It collects and solidifies existing results on this important issue in machine and human learning and presents a theoretical framework for future investigations.

The book opens with an overview of goal-driven learning research and computational and cognitive models of the goal-driven learning process. This introduction is followed by a collection of fourteen recent research articles addressing fundamental issues of the field, including psychological and functional arguments for modeling learning as a deliberative process; experimental evaluation of the benefits of utility-based analysis to guide decisions about what to learn; case studies of computational models in which learning is driven by reasoning about learning goals; psychological evidence for human goal-driven learning; and the ramifications of goal-driven learning in educational contexts.

The second part of the book presents six position papers reflecting ongoing research and current issues in goal-driven learning. Issues discussed include methods for pursuing psychological studies of goal-driven learning, frameworks for the design of active and multistrategy learning systems, and methods for selecting and balancing the goals that drive learning.

Find the book:

Goal-Driven Learning

edited by Ashwin Ram, David Leake

MIT Press/Bradford Books, Cambridge, MA, 1995, ISBN 978-0-262-18165-5
mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=8349

Preview the book: books.google.com/books?id=5vo9zMJRnMwC

Table of Contents

Preface by Professor Tom Mitchell
Editors’ Preface
Chapter 1: Learning, Goals, and Learning Goals, Ram, Leake

Part I: Current state of the field

Chapter 2: Planning to Learn, Hunter
Chapter 3: Quantitative Results Concerning the Utility of Explanation-Based Learning, Minton
Chapter 4: The Use of Explicit Goals for Knowledge to Guide Inference and Learning, Ram, Hunter
Chapter 5: Deriving Categories to Achieve Goals, Barsalou
Chapter 6: Harpoons and Long Sticks: The Interaction of Theory and Similarity in Rule Induction, Wisniewski, Medin
Chapter 7: Introspective Reasoning using Meta-Explanations for Multistrategy Learning, Ram, Cox
Chapter 8: Goal-Directed Learning: A Decision-Theoretic Model for Deciding What to Learn Next, desJardins
Chapter 9: Goal-Based Explanation Evaluation, Leake
Chapter 10: Planning to Perceive, Pryor, Collins
Chapter 11: Learning and Planning in PRODIGY: Overview of an Integrated Architecture, Carbonell, Etzioni, Gil, Joseph, Knoblock, Minton, Veloso
Chapter 12: A Learning Model for the Selection of Problem Solving Strategies in Continuous Physical Systems, Xia, Yeung
Chapter 13: Explicitly Biased Generalization, Gordon, Perlis
Chapter 14: Three Levels of Goal Orientation in Learning, Ng, Bereiter
Chapter 15: Characterising the Application of Computer Simulations in Education: Instructional Criteria, van Berkum, Hijne, de Jong, van Joolingen, Njoo

Part II: Current research and recent directions

Chapter 16: Goal-Driven Learning: Fundamental Issues and Symposium Report, Leake, Ram
Chapter 17: Storage Side Effects: Studying Processing to Understand Learning, Barsalou
Chapter 18: Goal-Driven Learning in Multistrategy Reasoning and Learning Systems, Ram, Cox, Narayanan
Chapter 19: Inference to the Best Plan: A Coherence Theory of Decision, Thagard, Millgram
Chapter 20: Towards Goal-Driven Integration of Explanation and Action, Leake
Chapter 21: Learning as Goal-Driven Inference, Michalski, Ram

A Comparative Utility Analysis of Case-Based Reasoning and Control-Rule Learning Systems

The utility problem in learning systems occurs when knowledge learned in an attempt to improve a system’s performance degrades performance instead. We present a methodology for the analysis of utility problems which uses computational models of problem solving systems to isolate the root causes of a utility problem, to detect the threshold conditions under which the problem will arise, and to design strategies to eliminate it. We present models of case-based reasoning and control-rule learning systems and compare their performance with respect to the swamping utility problem. Our analysis suggests that case-based reasoning systems are more resistant to the utility problem than control-rule learning systems.
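The swamping tradeoff described above can be illustrated with a toy cost model (our own invention, with made-up numbers, not the computational models analyzed in the paper): every learned item adds a fixed matching cost to each problem, while the savings it produces eventually stop growing, so past a threshold further learning degrades performance.

```python
# Hypothetical cost model of the swamping utility problem: retrieval cost
# grows with every learned item, but savings saturate after max_useful items.

def total_cost(n_items, base_cost=100.0, match_cost=1.0,
               savings_per_item=8.0, max_useful=10):
    """Expected cost of solving one problem given n_items learned items."""
    retrieval = match_cost * n_items                       # every item is matched
    savings = savings_per_item * min(n_items, max_useful)  # diminishing returns
    return base_cost + retrieval - savings

costs = [total_cost(n) for n in range(31)]
# the threshold condition: the item count beyond which learning starts to hurt
threshold = min(range(31), key=lambda n: costs[n])
```

With these toy parameters each item nets a saving up to the tenth, after which each additional item contributes only matching overhead, so cost turns upward at the threshold.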

Read the paper:

A Comparative Utility Analysis of Case-Based Reasoning and Control-Rule Learning Systems

by Anthony Francis, Ashwin Ram

8th European Conference on Machine Learning (ECML-95), Crete, Greece, April 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-02.pdf

Learning to Troubleshoot: Multistrategy Learning of Diagnostic Knowledge for a Real-World Problem Solving Task

This article presents a computational model of the learning of diagnostic knowledge based on observations of human operators engaged in a real-world troubleshooting task. We present a model of problem solving and learning in which the reasoner introspects about its own performance on the problem solving task, identifies what it needs to learn to improve its performance, formulates learning goals to acquire the required knowledge, and pursues its learning goals using multiple learning strategies. The model is implemented in a computer system which provides a case study based on observations of troubleshooting operators and protocol analysis of the data gathered in the test area of an operational electronics manufacturing plant. The model is intended as a computational model of human learning; in addition, it is computationally justified as a uniform, extensible framework for multistrategy learning.

Read the paper:

Learning to Troubleshoot: Multistrategy Learning of Diagnostic Knowledge for a Real-World Problem Solving Task

by Ashwin Ram, S. Narayanan, Mike Cox

Cognitive Science journal, 19(3):289-340, 1995
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-93-67.pdf

Foundations of Foundations of Artificial Intelligence

Foundations of Artificial Intelligence (edited by David Kirsh, MIT Press, 1992) presents a number of chapters from major players in artificial intelligence (AI), including Kirsh, Nilsson, Birnbaum, Hewitt, Gasser, Brooks, Lenat & Feigenbaum, Smith, Rosenbloom and the Soar team, and Norman. These chapters discuss fundamental assumptions underlying the dominant approaches to AI today. Perhaps the best parts of the book are the critiques: each chapter is followed by an in-depth critique that evaluates the utility of those assumptions in pursuing the goal of AI.

But what is the goal of AI? Although several chapters propose definitions of the AI enterprise, there seems to be little agreement even at this fundamental level. Kirsh discusses the following definition in his introduction:

  • A theory in AI is a specification of the knowledge underpinning a cognitive skill. (p. 5)

While there appears to be a broad consensus (with some dissension from Brooks) that knowledge specification is an important part of the practice of AI, there seems to be little agreement that knowledge specification by itself constitutes a theory in AI. Indeed, while Lenat and Feigenbaum take this position seriously, Nilsson focusses on the language for the specification of such knowledge (rather than the knowledge itself); Hewitt on communication between agents; Rosenbloom, Laird, Newell, and McCarl on architectural issues in lieu of knowledge; and Brooks eschews explicit representations of knowledge altogether.

This lack of consensus is both the principal strength and weakness of the book. […] In our view, a theory of intelligent behavior should have a descriptive part and an explanatory part. The descriptive part specifies the computational mechanisms of the theory, and makes clear how the program instantiates those mechanisms. Computational mechanisms can be described under the following headings:

  • Knowledge: both the content of the relevant knowledge and the representation language used to express that knowledge.
  • Processes: the algorithms or mechanisms that produce the intelligent behavior.
  • Architecture: the “cognitive architecture” on which the algorithms execute.
  • Machine architecture: the physical hardware, if this happens to be theoretically relevant.

A theory of intelligent behavior also has an explanatory part, which justifies the computational mechanisms of the theory by explaining the way in which they are a good account of the behavior. The explanation provides a functional or teleological basis for the design decisions underlying the computational model, such as the choice of representational primitives and formalisms, and architectural and algorithmic commitments. The explanation should also make clear how the computer implementation exemplifies this account.

Read the full review:

Foundations of Foundations of Artificial Intelligence

by Ashwin Ram, Eric Jones

Philosophical Psychology, 8(2):193-199, 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-08.html

Understanding the Creative Mind

Margaret Boden, a master at bringing ideas from artificial intelligence and cognitive science to the masses, has done it again. In The Creative Mind: Myths and Mechanisms (published by Routledge, 2003), she has produced a well-written, well-argued review and synthesis of current computational theories relevant to creativity. This book seems appropriately pitched for students in survey courses and for the intelligent lay public. And if ever there were a topic suitable for bridging the gap between researchers and the layperson, this is surely it: What is creativity, and how is it possible? Or, in computational terms (the terms that Boden argues ought to be applied), what are the processes of creativity?

We believe that in order to analyze creative reasoning, one needs a theoretical framework in which to model thinking. To this end, we propose using a computational approach rooted in case-based reasoning. This paradigm is fundamentally concerned with memory issues, such as remindings from partial matches at varying levels of representation and the formation of analogical maps between seemingly disparate situations—exactly the kinds of phenomena that researchers up to, and including, Boden have highlighted as central to creativity.

Our research suggests that creativity is not a process in itself that can be turned on or off; rather, it arises from the confluence and complex interaction of inferences using multiple kinds of knowledge in the context of a task or problem and in the context of a specific situation. Much of what we think of as “creativity” arises from interesting strategic control of these inferences and their integration in the context of a task and situation.

These five aspects—inferences, knowledge, task, situation, and control—are not special or unique to creativity but are part of normal everyday thinking. They determine the thinkable, the thoughts the reasoner might normally have when addressing a problem or performing a task. In a specific individual, more creative thoughts will likely result when these pieces come together in a novel way to yield unexplored and unexpected paths that go “beyond the thinkable”.

Read the full review:

Understanding the Creative Mind

by Ashwin Ram, Linda Wills, Eric Domeshek, Nancy Nersessian, Janet Kolodner

Artificial Intelligence journal, 79(1):111-128, 1995
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-94-13.pdf

Cognitive Media Types for Multimedia Information Access

Multimedia repositories, libraries, and databases offer the potential for providing students with access to a wide variety of interconnected information resources. However, in order to realize this potential, multimedia systems should provide access to information and activities that support effective knowledge construction and learning by students. This article proposes a theoretical framework for organizing information and activities in educational hypermedia systems. We show that such systems should not be characterized primarily in terms of the kinds of physical media types that can be accessed; instead, the important aspect is the content that can be represented within a physical medium, rather than the medium itself.

We propose a theory of “cognitive media types” based on the inferential and learning processes of human users. The theory highlights specific media characteristics that facilitate specific problem solving actions, which in turn are enabled by specific kinds of physical media. We present an implemented computer system, called AlgoNet, that supports hypermedia information access and constructive learning activities for self-paced learning in computer and engineering disciplines. Extensive empirical evaluations with undergraduate students suggest that self-paced interactive learning environments, coupled with multimedia information access and constructive activities organized into cognitive media types, can support and help students develop deep intuitions about important concepts in a given domain.

Read the paper:

Cognitive Media Types for Multimedia Information Access

by Mimi Recker, Ashwin Ram, Terry Shikano, George Li, John Stasko

Journal of Educational Multimedia and Hypermedia, 4(2/3):185-210, 1995. Earlier version presented at the Annual Meeting of the American Educational Research Association (AERA), San Francisco, 1995.
www.cc.gatech.edu/faculty/ashwin/papers/er-95-07.pdf

Interacting Learning-Goals: Treating Learning as a Planning Task

This research examines the metaphor of goal-driven planning as a tool for performing the integration of multiple learning algorithms. In case-based reasoning systems, several learning techniques may apply to a given situation. In a failure-driven learning environment, the problems of strategy construction are to choose and order the best set of learning algorithms or strategies that recover from a processing failure and to use those strategies to modify the system’s background knowledge so that the failure will not be repeated in similar future situations.

A solution to this problem is to treat learning-strategy construction as a planning problem with its own set of goals. Learning goals, as opposed to ordinary goals, specify desired states in the background knowledge of the learner, rather than desired states in the external environment of the planner. But as with traditional goal-based planners, management and pursuit of these learning goals becomes a central issue in learning. Example interactions of learning-goals are presented from a multistrategy learning system called Meta-AQUA that combines a case-based approach to learning with nonlinear planning to achieve goals in a knowledge space.
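The core distinction above — goals over a knowledge space rather than the external world — can be sketched as data. The example below is a hypothetical simplification of ours (the strategy names and effects are invented stand-ins, not Meta-AQUA's): each learning goal names a desired state of the background knowledge, and a planner selects and orders strategies whose effects achieve those states, skipping goals already satisfied by an earlier choice.

```python
# Hypothetical sketch: learning goals as planning goals in a knowledge space.
from dataclasses import dataclass

@dataclass(frozen=True)
class LearningGoal:
    desired_state: str  # a state of the background knowledge, not of the world

# invented strategy -> effect table standing in for a real strategy library
STRATEGY_EFFECTS = {
    "explanation-based-generalization": "concept-refined",
    "index-learning": "retrieval-improved",
    "abstraction": "concept-refined",
}

def plan_strategies(goals):
    """Order one strategy per distinct goal; detect already-achieved goals."""
    plan, achieved = [], set()
    for goal in goals:
        if goal.desired_state in achieved:
            continue  # goal interaction: satisfied as a side effect, skip it
        for strategy, effect in STRATEGY_EFFECTS.items():
            if effect == goal.desired_state:
                plan.append(strategy)
                achieved.add(goal.desired_state)
                break
    return plan
```

Even this toy planner exhibits one of the goal interactions the paper discusses: a later goal that duplicates an earlier one is recognized as already achieved rather than re-planned.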

Read the paper:

Interacting Learning-Goals: Treating Learning as a Planning Task

by Mike Cox, Ashwin Ram

In J.-P. Haton, M. Keane, & M. Manago (editors), Advances in Case-Based Reasoning (Lecture Notes in Artificial Intelligence), 60-74, Springer-Verlag, 1995. Earlier version presented at the Second European Workshop on Case-Based Reasoning (EWCBR-94), Chantilly, France, 1994.
www.cc.gatech.edu/faculty/ashwin/papers/er-95-09.ps

Failure-Driven Learning as Input Bias

Self-selection of input examples on the basis of performance failure is a powerful bias for learning systems. The definition of what constitutes a learning bias, however, has been typically restricted to bias provided by the input language, hypothesis language, and preference criteria between competing concept hypotheses. But if bias is taken in the broader context as any basis that provides a preference for one concept change over another, then the paradigm of failure-driven processing indeed provides a bias. Bias is exhibited by the selection of examples from an input stream that are examples of failure; successful performance is filtered out. We show that the degrees of freedom are less in failure-driven learning than in success-driven learning and that learning is facilitated because of this constraint. We also broaden the definition of failure, provide a novel taxonomy of failure causes, and illustrate the interaction of both in a multistrategy learning system called Meta-AQUA.
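The selection bias described above amounts to a filter over the input stream. The following toy sketch (our own, with an invented stand-in performance element; not Meta-AQUA) makes the point concrete: successes are discarded, so only failure examples reach the learner.

```python
# Toy illustration of failure-driven selection as an input bias.

def predict(example):
    # stand-in performance element: a deliberately over-general rule
    return "bird" if example["has_wings"] else "not-bird"

def failure_filter(stream):
    """Keep only the examples the current knowledge mislabels."""
    return [ex for ex in stream if predict(ex) != ex["label"]]

stream = [
    {"has_wings": True,  "label": "bird"},      # success: filtered out
    {"has_wings": True,  "label": "not-bird"},  # failure (e.g. a bat): kept
    {"has_wings": False, "label": "not-bird"},  # success: filtered out
]
training_set = failure_filter(stream)
```

Only the failure survives the filter, so every example the learner sees already points at a flaw in the background knowledge — the constraint that, as the paper argues, reduces the degrees of freedom in deciding what to change.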

Read the paper:

Failure-Driven Learning as Input Bias

by Mike Cox, Ashwin Ram

Sixteenth Annual Conference of the Cognitive Science Society, Atlanta, GA, August 1994
www.cc.gatech.edu/faculty/ashwin/papers/er-94-09.pdf