Archive for the ‘Learning’ Category

Multi-Plan Retrieval and Adaptation in an Experience-Based Agent

The real world has many properties that present challenges for the design of intelligent agents: it is dynamic, unpredictable, and independent, poses poorly structured problems, and places bounds on the resources available to agents. Agents that operate in real worlds need a wide range of capabilities to deal with them: memory, situation analysis, situativity, resource-bounded cognition, and opportunism.

We propose a theory of experience-based agency which specifies how an agent with the ability to richly represent and store its experiences could remember those experiences with a context-sensitive, asynchronous memory, incorporate those experiences into its reasoning on demand with integration mechanisms, and usefully direct memory and reasoning through the use of a utility-based metacontroller. We have implemented this theory in an architecture called NICOLE and have used it to address the problem of merging multiple plans during the course of case-based adaptation in least-commitment planning.
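To make the metacontroller idea concrete, here is a minimal sketch of how a utility-based controller might arbitrate between remembering and reasoning. It is purely illustrative and is not drawn from NICOLE itself; the task queue, the utility numbers, and the task names are all our own assumptions.

```python
import heapq
from dataclasses import dataclass, field
from typing import Callable, List

# Illustrative only, not NICOLE's actual code: a utility-based
# metacontroller keeps a queue of pending memory and reasoning tasks
# and always runs the one with the highest estimated utility.

@dataclass(order=True)
class Task:
    neg_utility: float   # heapq is a min-heap, so store -utility
    name: str = field(compare=False)
    run: Callable[[], List["Task"]] = field(compare=False)

class Metacontroller:
    def __init__(self):
        self.queue: List[Task] = []

    def post(self, name: str, utility: float, run: Callable[[], List[Task]]):
        heapq.heappush(self.queue, Task(-utility, name, run))

    def step(self) -> bool:
        """Run the highest-utility pending task; it may post follow-ups
        (e.g., an asynchronous retrieval posting an integration task)."""
        if not self.queue:
            return False
        task = heapq.heappop(self.queue)
        for follow_up in task.run():
            heapq.heappush(self.queue, follow_up)
        return True

# Usage: a retrieval that completes later posts an integration task
# whose utility reflects how relevant the remembered plan looks now.
mc = Metacontroller()
mc.post("retrieve-similar-plan", 0.4,
        lambda: [Task(-0.9, "integrate-retrieved-plan", lambda: [])])
mc.post("continue-planning", 0.7, lambda: [])
while mc.step():
    pass
```

The asynchronous flavor comes from retrievals posting follow-up integration tasks when they complete, so remembered experience is folded into ongoing reasoning only when it looks worth the effort.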

Read the paper:

Multi-Plan Retrieval and Adaptation in an Experience-Based Agent

by Ashwin Ram, Anthony Francis

In Case-Based Reasoning: Experiences, Lessons, and Future Directions, D.B. Leake, editor, AAAI Press, 1996
www.cc.gatech.edu/faculty/ashwin/papers/er-96-06.pdf

Dynamically Adjusting Concepts to Accommodate Changing Contexts

In concept learning, objects in a domain are grouped together based on similarity as determined by the attributes used to describe them. Existing concept learners require that this set of attributes be known in advance and presented in its entirety before learning begins. Additionally, most systems do not possess mechanisms for altering the attribute set after concepts have been learned. Consequently, a veridical attribute set relevant to the task for which the concepts are to be used must be supplied at the outset of learning, and in turn, the usefulness of the concepts is limited to the task for which the attributes were originally selected.

In order to efficiently accommodate changing contexts, a concept learner must be able to alter the set of descriptors without discarding its prior knowledge of the domain. We introduce the notion of attribute-incrementation, the dynamic modification of the attribute set used to describe instances in a problem domain. We have implemented this capability in a concept learning system that has been evaluated along several dimensions, using an existing concept formation system for comparison.
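To give a rough feel for what attribute-incrementation involves, the sketch below simplifies aggressively and is not the authors' system: each concept summarizes instances with per-attribute value counts, so a new attribute can be introduced later without discarding the statistics already gathered over the old ones.

```python
from collections import defaultdict

# Illustrative sketch (not the authors' system): a concept keeps
# per-attribute value counts; a new attribute can be added later
# without discarding what was learned over the old attributes.

class Concept:
    def __init__(self, attributes):
        self.attributes = list(attributes)
        self.counts = {a: defaultdict(int) for a in self.attributes}

    def add_instance(self, instance):
        for a in self.attributes:
            if a in instance:                 # tolerate missing values
                self.counts[a][instance[a]] += 1

    def add_attribute(self, attribute):
        """Attribute-incrementation: extend the description language.
        Old statistics are kept; the new attribute starts empty and
        fills in as further instances arrive."""
        if attribute not in self.counts:
            self.attributes.append(attribute)
            self.counts[attribute] = defaultdict(int)

    def similarity(self, instance):
        """Fraction of described attributes whose modal value matches."""
        described = [a for a in self.attributes if self.counts[a]]
        if not described:
            return 0.0
        matches = sum(
            1 for a in described
            if a in instance
            and instance[a] == max(self.counts[a], key=self.counts[a].get))
        return matches / len(described)

bird = Concept(["covering", "locomotion"])
bird.add_instance({"covering": "feathers", "locomotion": "flies"})
bird.add_attribute("diet")   # context changed: diet now matters
bird.add_instance({"covering": "feathers", "locomotion": "flies",
                   "diet": "seeds"})
print(bird.similarity({"covering": "feathers", "locomotion": "swims",
                       "diet": "seeds"}))   # -> 2/3
```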

Read the paper:

Dynamically Adjusting Concepts to Accommodate Changing Contexts

by Mark Devaney, Ashwin Ram

ICML-96 Workshop on Learning in Context Sensitive Domains, Bari, Italy, July 1996
www.cc.gatech.edu/faculty/ashwin/papers/er-96-07.pdf

Introspective Multistrategy Learning: Constructing a Learning Strategy under Reasoning Failure

The thesis put forth by this dissertation is that introspective analyses facilitate the construction of learning strategies. Furthermore, learning is much like nonlinear planning and problem solving. Like problem solving, it can be specified by a set of explicit learning goals (i.e., desired changes to the reasoner’s knowledge); these goals can be achieved by constructing a plan from a set of operators (the learning algorithms) that execute in a knowledge space. However, in order to specify learning goals and to avoid negative interactions between operators, a reasoner requires a model of its reasoning processes and knowledge.

With such a model, the reasoner can declaratively represent the events and causal relations of its mental world in the same manner that it represents events and relations in the physical world. This representation enables introspective self-examination, which contributes to learning by providing a basis for identifying what needs to be learned when reasoning fails. A multistrategy system possessing several learning algorithms can decide what to learn, and which algorithm(s) to apply, by analyzing the model of its reasoning. This introspective analysis therefore allows the learner to understand its reasoning failures, to determine the causes of the failures, to identify needed knowledge repairs to avoid such failures in the future, and to build a learning strategy (plan).

Thus, the research goal is to develop both a content theory and a process theory of introspective multistrategy learning and to establish the conditions under which such an approach is fruitful. Empirical experiments provide results that support the claims herein. The theory was implemented in a computational model called Meta-AQUA that attempts to understand simple stories. The system uses case-based reasoning to explain reasoning failures and to generate sets of learning goals, and it uses a standard nonlinear planner to achieve these goals.

Evaluations of Meta-AQUA with and without learning goals produced results indicating that computational introspection facilitates the learning process. In particular, the results support the conclusion that the stage that posts learning goals is necessary if negative interactions between learning methods are to be avoided and if learning is to remain effective.
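The learning-as-planning framing can be caricatured in a few lines of code. The toy below is not Meta-AQUA: it renders learning algorithms as STRIPS-style operators over a knowledge state, with learning goals as the conditions a plan must achieve; the operator names and knowledge-state flags are invented for illustration.

```python
# A toy rendering (not Meta-AQUA) of "learning as planning": learning
# goals are desired changes to the knowledge state, and learning
# algorithms are operators with preconditions and effects. Posting
# goals first lets the planner order operators so they do not clobber
# each other's effects.

LEARNING_OPERATORS = [
    # (name, preconditions, effects) over knowledge-state flags
    ("explain-failure",        {"failure-trace"},     {"failure-explained"}),
    ("generalize-explanation", {"failure-explained"}, {"rule-generalized"}),
    ("index-new-case",         {"failure-explained"}, {"case-indexed"}),
]

def plan(goals, state):
    """Naive forward chaining over the operator set: repeatedly apply
    any operator whose preconditions hold until all goals are met."""
    state, steps = set(state), []
    while not goals <= state:
        for name, pre, eff in LEARNING_OPERATORS:
            if pre <= state and not eff <= state:
                steps.append(name)
                state |= eff
                break
        else:
            raise ValueError("learning goals unreachable: %s" % (goals - state))
    return steps

# A reasoning failure posts learning goals; the planner builds a
# learning strategy that achieves them in a consistent order.
print(plan({"rule-generalized", "case-indexed"}, {"failure-trace"}))
# -> ['explain-failure', 'generalize-explanation', 'index-new-case']
```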

Read the thesis:

Introspective Multistrategy Learning: Constructing a Learning Strategy under Reasoning Failure

by Michael T. Cox

PhD Thesis, Technical Report GIT-CC-96/06, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1996
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-96-06.pdf

Learning Adaptive Reactive Agents

An autonomous agent is an intelligent system that has an ongoing interaction with a dynamic external world. It can perceive and act on the world through a set of limited sensors and effectors. Its most important characteristic is that it is forced to make decisions sequentially, one after another, during its entire “life”. The main objective of this dissertation is to study algorithms by which autonomous agents can learn, using their own experience, to perform sequential decision-making efficiently and autonomously. The dissertation describes a framework for studying autonomous sequential decision-making consisting of three main elements: the agent, the environment, and the task. The agent attempts to control the environment by perceiving it and choosing actions in a sequential fashion. The environment is a dynamic system characterized by a state and its dynamics, a function that describes the evolution of the state given the agent’s actions. A task is a declarative description of the desired behavior the agent should exhibit as it interacts with the environment. The ultimate goal of the agent is to learn a policy or strategy for selecting actions that maximizes its expected benefit as defined by the task.

The dissertation focuses on sequential decision-making when the environment is characterized by continuous states and actions, and the agent has imperfect perception, incomplete knowledge, and limited computational resources. The main characteristic of the approach proposed in this dissertation is that the agent uses its previous experiences to improve estimates of the long-term benefit associated with the execution of specific actions. The agent uses these estimates to evaluate how desirable it is to execute alternative actions and to select the one that best balances short- and long-term consequences, giving special consideration to the expected benefit of actions that accomplish new learning while making progress on the task.

The approach is based on novel methods that are specifically designed to address the problems associated with continuous domains, imperfect perception, incomplete knowledge, and limited computational resources. The approach is implemented using case-based techniques and extensively evaluated in simulated and real systems including autonomous mobile robots, pendulum swinging and balancing controllers, and other non-linear dynamic system controllers.
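As an illustration of the general flavor of case-based value estimation in continuous spaces, and not the dissertation's actual algorithms, consider the sketch below: cases store a state, an action, and an estimated long-term return; a candidate action is scored by a distance-weighted average over nearby cases, with a bonus when little nearby experience exists (favoring actions that accomplish new learning).

```python
import math

# Illustrative sketch, not the dissertation's actual algorithm.
class CaseBasedValue:
    def __init__(self, bandwidth=0.5, novelty_bonus=1.0):
        self.cases = []              # list of (state, action, return)
        self.bandwidth = bandwidth
        self.novelty_bonus = novelty_bonus

    def add_case(self, state, action, ret):
        self.cases.append((state, action, ret))

    def estimate(self, state, action):
        """Distance-weighted average return over stored cases, plus an
        exploration bonus that shrinks as nearby evidence accumulates."""
        weights, value = 0.0, 0.0
        for s, a, r in self.cases:
            d2 = sum((x - y) ** 2 for x, y in zip(s + (a,), state + (action,)))
            w = math.exp(-d2 / (2 * self.bandwidth ** 2))
            weights += w
            value += w * r
        if weights == 0.0:
            return self.novelty_bonus   # nothing known: pure exploration value
        return value / weights + self.novelty_bonus / (1.0 + weights)

    def best_action(self, state, candidates):
        return max(candidates, key=lambda a: self.estimate(state, a))

# Usage on a pendulum-like task: states are (angle, velocity) tuples.
q = CaseBasedValue()
q.add_case((0.0, 0.1), -1.0, ret=0.2)
q.add_case((0.0, 0.1), +1.0, ret=0.9)
print(q.best_action((0.0, 0.1), [-1.0, 0.0, 1.0]))   # -> 1.0
```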

Read the thesis:

Learning Adaptive Reactive Agents

by Juan Carlos Santamaria

PhD Thesis, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1996
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-97-08.ps.Z

Learning as Goal-Driven Inference

Developing an adequate and general computational model of adaptive, multistrategy, and goal-oriented learning is a fundamental long-term objective for machine learning research for both theoretical and pragmatic reasons. We outline a proposal for developing such a model based on two key ideas. First, we view learning as an active process involving the formulation of learning goals during the performance of a reasoning task, the prioritization of learning goals, and the pursuit of learning goals using multiple learning strategies. The second key idea is to model learning as a kind of inference in which the system augments and reformulates its knowledge using various types of primitive inferential actions, known as knowledge transmutations.
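A schematic example may help fix the idea of transmutations as primitive inferential actions. The representation and dispatch below are our own invention; only the operator vocabulary (generalize, specialize) follows the chapter's terminology.

```python
# A schematic sketch of "knowledge transmutations" as primitive
# inferential actions; the toy taxonomy and fact encoding are
# invented for illustration.

def generalize(fact):
    # "robins fly" -> "birds fly" (climb one level in a toy taxonomy)
    taxonomy = {"robin": "bird", "bird": "animal"}
    subj, pred = fact
    return (taxonomy.get(subj, subj), pred)

def specialize(fact, subtype):
    return (subtype, fact[1])

TRANSMUTATIONS = {"generalize": generalize, "specialize": specialize}

def pursue_learning_goal(goal, knowledge):
    """Apply the transmutation named by the learning goal and add the
    resulting reformulated knowledge to the knowledge base."""
    op, fact, *args = goal
    new_fact = TRANSMUTATIONS[op](fact, *args)
    knowledge.add(new_fact)
    return new_fact

kb = {("robin", "flies")}
print(pursue_learning_goal(("generalize", ("robin", "flies")), kb))
# -> ('bird', 'flies'); the knowledge base now covers other birds too
```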

Read the paper:

Learning as Goal-Driven Inference

by Ryszard Michalski, Ashwin Ram

In A. Ram & D. Leake (eds.), Goal-Driven Learning, chapter 21, MIT Press/Bradford Books, 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-05.pdf

Goal-Driven Learning in Multistrategy Reasoning and Learning Systems

This chapter presents a computational model of introspective multistrategy learning, which is a deliberative or strategic learning process in which a reasoner introspects about its own performance to decide what to learn and how to learn it. The reasoner introspects about its own performance on a reasoning task, assigns credit or blame for its performance, identifies what it needs to learn to improve its performance, formulates learning goals to acquire the required knowledge, and pursues its learning goals using multiple learning strategies. Our theory models a process of learning that is active, experiential, opportunistic, diverse, and introspective. This chapter also describes two computer systems that implement our theory, one that learns diagnostic knowledge during a troubleshooting task and one that learns multiple kinds of causal and explanatory knowledge during a story understanding task.
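The introspective cycle can be compressed into a toy dispatch table. The failure taxonomy and the strategy mapping below are stand-ins we invented for illustration, not the actual knowledge structures of the two systems described in the chapter.

```python
# Purely illustrative rendering of the introspective cycle:
# characterize the failure, assign blame to a knowledge deficit,
# formulate a learning goal, and choose a strategy to pursue it.

BLAME = {
    "wrong-explanation": "incorrect-domain-rule",
    "no-explanation":    "missing-causal-knowledge",
    "wrong-index":       "misindexed-case",
}

STRATEGY = {
    "incorrect-domain-rule":    "explanation-based refinement",
    "missing-causal-knowledge": "explanation-based generalization",
    "misindexed-case":          "index learning",
}

def introspect(failure):
    deficit = BLAME[failure]              # assign blame for the failure
    goal = f"acquire/repair: {deficit}"   # formulate a learning goal
    return goal, STRATEGY[deficit]        # choose a learning strategy

goal, strategy = introspect("no-explanation")
print(goal)       # acquire/repair: missing-causal-knowledge
print(strategy)   # explanation-based generalization
```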

Read the paper:

Goal-Driven Learning in Multistrategy Reasoning and Learning Systems

by Ashwin Ram, Mike Cox, S. Narayanan

In A. Ram & D. Leake (eds.), Goal-Driven Learning, chapter 18, MIT Press/Bradford Books, 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-04.pdf

Learning, Goals, and Learning Goals

In cognitive science, artificial intelligence, psychology, and education, a growing body of research supports the view that the learning process is strongly influenced by the learner’s goals. Investigators in each of these areas have independently pursued the common issues of how learning goals arise, how they affect learner decisions of when and what to learn, and how they guide the learning process. The fundamental tenet of goal-driven learning is that learning is largely an active and strategic process in which the learner, human or machine, attempts to identify and satisfy its information needs in the context of its tasks and goals, its prior knowledge, its capabilities, and environmental opportunities for learning.

This chapter discusses fundamental questions for goal-driven learning: the motivations for adopting a goal-driven model of learning, the basic goal-driven learning framework, the specific issues raised by the framework that a theory of goal-driven learning must address, the types of goals that can influence learning, the types of influences those goals can have on learning, and the pragmatic implications of the goal-driven learning model.

Read the paper:

Learning, Goals, and Learning Goals

by Ashwin Ram, David Leake

In A. Ram & D. Leake (eds.), Goal-Driven Learning, chapter 1, MIT Press/Bradford Books, 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-03.pdf

Goal-Driven Learning

In cognitive science, artificial intelligence, psychology, and education, a growing body of research supports the view that the learning process is strongly influenced by the learner’s goals. The fundamental tenet of goal-driven learning is that learning is largely an active and strategic process in which the learner, human or machine, attempts to identify and satisfy its information needs in the context of its tasks and goals, its prior knowledge, its capabilities, and environmental opportunities for learning. This book brings together a diversity of research on goal-driven learning to establish a broad, interdisciplinary framework that describes the goal-driven learning process. It collects and solidifies existing results on this important issue in machine and human learning and presents a theoretical framework for future investigations.

The book opens with an overview of goal-driven learning research and of computational and cognitive models of the goal-driven learning process. This introduction is followed by a collection of fourteen recent research articles addressing fundamental issues of the field, including psychological and functional arguments for modeling learning as a deliberative process; experimental evaluation of the benefits of utility-based analysis to guide decisions about what to learn; case studies of computational models in which learning is driven by reasoning about learning goals; psychological evidence for human goal-driven learning; and the ramifications of goal-driven learning in educational contexts.

The second part of the book presents six position papers reflecting ongoing research and current issues in goal-driven learning. Issues discussed include methods for pursuing psychological studies of goal-driven learning, frameworks for the design of active and multistrategy learning systems, and methods for selecting and balancing the goals that drive learning.

Find the book:

Goal-Driven Learning

edited by Ashwin Ram, David Leake

MIT Press/Bradford Books, Cambridge, MA, 1995, ISBN 978-0-262-18165-5
mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=8349

Preview the book: books.google.com/books?id=5vo9zMJRnMwC

Table of Contents

Preface by Professor Tom Mitchell
Editors’ Preface
Chapter 1: Learning, Goals, and Learning Goals, Ram, Leake

Part I: Current state of the field

Chapter 2: Planning to Learn, Hunter
Chapter 3: Quantitative Results Concerning the Utility of Explanation-Based Learning, Minton
Chapter 4: The Use of Explicit Goals for Knowledge to Guide Inference and Learning, Ram, Hunter
Chapter 5: Deriving Categories to Achieve Goals, Barsalou
Chapter 6: Harpoons and Long Sticks: The Interaction of Theory and Similarity in Rule Induction, Wisniewski, Medin
Chapter 7: Introspective Reasoning using Meta-Explanations for Multistrategy Learning, Ram, Cox
Chapter 8: Goal-Directed Learning: A Decision-Theoretic Model for Deciding What to Learn Next, desJardins
Chapter 9: Goal-Based Explanation Evaluation, Leake
Chapter 10: Planning to Perceive, Pryor, Collins
Chapter 11: Learning and Planning in PRODIGY: Overview of an Integrated Architecture, Carbonell, Etzioni, Gil, Joseph, Knoblock, Minton, Veloso
Chapter 12: A Learning Model for the Selection of Problem Solving Strategies in Continuous Physical Systems, Xia, Yeung
Chapter 13: Explicitly Biased Generalization, Gordon, Perlis
Chapter 14: Three Levels of Goal Orientation in Learning, Ng, Bereiter
Chapter 15: Characterising the Application of Computer Simulations in Education: Instructional Criteria, van Berkum, Hijne, de Jong, van Joolingen, Njoo

Part II: Current research and recent directions

Chapter 16: Goal-Driven Learning: Fundamental Issues and Symposium Report, Leake, Ram
Chapter 17: Storage Side Effects: Studying Processing to Understand Learning, Barsalou
Chapter 18: Goal-Driven Learning in Multistrategy Reasoning and Learning Systems, Ram, Cox, Narayanan
Chapter 19: Inference to the Best Plan: A Coherence Theory of Decision, Thagard, Millgram
Chapter 20: Towards Goal-Driven Integration of Explanation and Action, Leake
Chapter 21: Learning as Goal-Driven Inference, Michalski, Ram

A Comparative Utility Analysis of Case-Based Reasoning and Control-Rule Learning Systems

The utility problem in learning systems occurs when knowledge learned in an attempt to improve a system’s performance degrades performance instead. We present a methodology for the analysis of utility problems which uses computational models of problem solving systems to isolate the root causes of a utility problem, to detect the threshold conditions under which the problem will arise, and to design strategies to eliminate it. We present models of case-based reasoning and control-rule learning systems and compare their performance with respect to the swamping utility problem. Our analysis suggests that case-based reasoning systems are more resistant to the utility problem than control-rule learning systems.
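A minimal cost model conveys the style of this analysis, though the formula and numbers below are our own toy assumptions rather than the paper's models: holding the hit rate fixed, the per-item matching cost of learned knowledge eventually swamps the problem-solving time that learned items save.

```python
# Toy cost model in the spirit of the methodology (not the paper's
# actual models): average solution time with n learned items is the
# cost of matching them all plus the expected cost of either reusing
# a retrieved solution or solving from scratch.

def avg_solve_time(n_items, match_cost, hit_rate, base_cost, reuse_cost):
    matching = n_items * match_cost
    solving = hit_rate * reuse_cost + (1 - hit_rate) * base_cost
    return matching + solving

# Threshold condition: learning helps only while the total matching
# cost n * match_cost stays below the expected savings
# hit_rate * (base_cost - reuse_cost). (A fixed hit rate is a strong
# simplification; in practice it varies with n.)
base, reuse, match, hits = 100.0, 10.0, 0.05, 0.6
threshold = hits * (base - reuse) / match
print(round(threshold))   # -> 1080 items before matching swamps savings
```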

Read the paper:

A Comparative Utility Analysis of Case-Based Reasoning and Control-Rule Learning Systems

by Anthony Francis, Ashwin Ram

8th European Conference on Machine Learning (ECML-95), Crete, Greece, April 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-02.pdf

Learning to Troubleshoot: Multistrategy Learning of Diagnostic Knowledge for a Real-World Problem Solving Task

This article presents a computational model of the learning of diagnostic knowledge based on observations of human operators engaged in a real-world troubleshooting task. We present a model of problem solving and learning in which the reasoner introspects about its own performance on the problem solving task, identifies what it needs to learn to improve its performance, formulates learning goals to acquire the required knowledge, and pursues its learning goals using multiple learning strategies. The model is implemented in a computer system which provides a case study based on observations of troubleshooting operators and protocol analysis of the data gathered in the test area of an operational electronics manufacturing plant. The model is intended as a computational model of human learning; in addition, it is computationally justified as a uniform, extensible framework for multistrategy learning.

Read the paper:

Learning to Troubleshoot: Multistrategy Learning of Diagnostic Knowledge for a Real-World Problem Solving Task

by Ashwin Ram, S. Narayanan, Mike Cox

Cognitive Science journal, 19(3):289-340, 1995
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-93-67.pdf