Posts Tagged ‘goal-driven learning’

Interacting Learning-Goals: Treating Learning as a Planning Task

This research examines the metaphor of goal-driven planning as a tool for integrating multiple learning algorithms. In case-based reasoning systems, several learning techniques may apply to a given situation. In a failure-driven learning environment, the problems of strategy construction are to choose and order the best set of learning algorithms or strategies to recover from a processing failure, and to use those strategies to modify the system’s background knowledge so that the failure will not be repeated in similar future situations.

A solution to this problem is to treat learning-strategy construction as a planning problem with its own set of goals. Learning goals, as opposed to ordinary goals, specify desired states in the background knowledge of the learner, rather than desired states in the external environment of the planner. But as with traditional goal-based planners, the management and pursuit of these learning goals becomes a central issue in learning. Example interactions of learning goals are presented from a multistrategy learning system called Meta-AQUA that combines a case-based approach to learning with nonlinear planning to achieve goals in a knowledge space.
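As a rough sketch of the idea (not code from the paper, and not Meta-AQUA’s actual representations), one can treat each learning strategy as a planning operator whose effects are changes to the background knowledge, and let a simple planner chain strategies backward from a learning goal. All class, goal, and strategy names below are illustrative.

```python
# Hypothetical sketch: learning-strategy construction as planning over a knowledge space.
# Names (LearningGoal, Strategy, plan_learning) are illustrative, not Meta-AQUA's.
from dataclasses import dataclass

@dataclass(frozen=True)
class LearningGoal:
    """A desired state of the background knowledge, e.g. 'index case X under cue Y'."""
    condition: str

@dataclass
class Strategy:
    """A learning algorithm viewed as a planning operator on the knowledge base."""
    name: str
    preconditions: frozenset  # conditions that must already hold in the knowledge base
    effects: frozenset        # conditions the strategy establishes

def plan_learning(goals, strategies, knowledge):
    """Greedy backward chaining over learning strategies until all goals hold."""
    plan, state = [], set(knowledge)
    pending = [g.condition for g in goals if g.condition not in state]
    while pending:
        cond = pending.pop()
        s = next((s for s in strategies if cond in s.effects), None)
        if s is None:
            raise ValueError(f"no strategy achieves {cond!r}")
        pending.extend(p for p in s.preconditions if p not in state)
        plan.append(s)
        state |= s.effects
    return list(reversed(plan))  # order so preconditions are met before use

if __name__ == "__main__":
    strategies = [
        Strategy("explanation-based-generalization",
                 frozenset({"explanation-of-failure"}),
                 frozenset({"generalized-schema"})),
        Strategy("index-new-case",
                 frozenset({"generalized-schema"}),
                 frozenset({"case-indexed-under-failure-cue"})),
    ]
    goals = [LearningGoal("case-indexed-under-failure-cue")]
    for step in plan_learning(goals, strategies, {"explanation-of-failure"}):
        print(step.name)
```

The point of the sketch is only that strategy choice and ordering fall out of ordinary goal-directed planning once the goals are stated over the knowledge base rather than the external world.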

Read the paper:

Interacting Learning-Goals: Treating Learning as a Planning Task

by Mike Cox, Ashwin Ram

In J.-P. Haton, M. Keane, & M. Manago (editors), Advances in Case-Based Reasoning (Lecture Notes in Artificial Intelligence), 60-74, Springer-Verlag, 1995. Earlier version presented at the Second European Workshop on Case-Based Reasoning (EWCBR-94), Chantilly, France, 1994.
www.cc.gatech.edu/faculty/ashwin/papers/er-95-09.ps

AQUA: Questions that Drive the Explanation Process

Editors’ Introduction:

In the doctoral dissertation from which this chapter is drawn, Ashwin Ram presents an alternative perspective on the processes of story understanding, explanation, and learning. The issues that Ram explores in that dissertation are similar to those explored by the other authors in this book, but the angle that Ram takes on these issues is somewhat different. His exploration of these processes is organized around the central theme of question asking. For him, understanding a story means identifying the questions that the story raises and the questions that it answers.

Question asking also serves as a lens through which each of the sub-processes of explanation is viewed: the retrieval of stored explanations, for instance, is driven by a library of what Ram calls “XP retrieval questions”; likewise, evaluation is driven by another set of questions, called “hypothesis verification questions”.

The AQUA program, which is Ram’s implementation of this question-based theory of understanding, is a very complex system, probably the most complex among the programs described in this book. AQUA covers a great deal of ground; it implements the entire case-based explanation process in a question-based manner. In this chapter, Ram focuses on a high-level description of the questions the program asks, especially the questions it asks when constructing and evaluating explanations of volitional actions.
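The following toy sketch (hypothetical, not AQUA’s implementation) conveys the flavor of question-driven processing: each kind of question, such as an “XP retrieval question” or a “hypothesis verification question”, is dispatched to the processing step that can answer it.

```python
# Hypothetical sketch of question-driven processing; the handlers and memory layout
# are illustrative, not AQUA's actual code.
class Question:
    def __init__(self, kind, concept):
        self.kind = kind        # e.g. "xp-retrieval", "hypothesis-verification"
        self.concept = concept  # what the question is about

HANDLERS = {}

def handles(kind):
    """Register a processing step as the answerer for one kind of question."""
    def register(fn):
        HANDLERS[kind] = fn
        return fn
    return register

@handles("xp-retrieval")
def retrieve_explanation(q, memory):
    """Look up stored explanation patterns (XPs) indexed by the puzzling concept."""
    return memory.get(q.concept, [])

@handles("hypothesis-verification")
def verify_hypothesis(q, memory):
    """Check a candidate hypothesis against what the story has established so far."""
    return q.concept in memory.get("established-facts", [])

def understand(questions, memory):
    """Drive processing by dispatching each question to the step that answers it."""
    return {q: HANDLERS[q.kind](q, memory) for q in questions}

if __name__ == "__main__":
    memory = {"religious-zealot": ["XP: zealots sacrifice themselves for a cause"],
              "established-facts": ["the boy volunteered"]}
    qs = [Question("xp-retrieval", "religious-zealot"),
          Question("hypothesis-verification", "the boy volunteered")]
    for q, answer in understand(qs, memory).items():
        print(q.kind, q.concept, "->", answer)
```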

Read the paper:

AQUA: Questions that Drive the Explanation Process

by Ashwin Ram

In Inside Case-Based Explanation, R.C. Schank, A. Kass, and C.K. Riesbeck (eds.), 207-261, Lawrence Erlbaum, 1994.
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-93-47.pdf

Introspective Reasoning using Meta-Explanations for Multistrategy Learning

In order to learn effectively, a reasoner must not only possess knowledge about the world and be able to improve that knowledge; it must also reason introspectively about how it performs a given task and about which particular pieces of knowledge it needs in order to improve its performance at the current task. Introspection requires declarative representations of meta-knowledge: of the reasoning performed by the system during the performance task, of the system’s knowledge, and of the organization of this knowledge.

This paper presents a taxonomy of possible reasoning failures that can occur during a performance task, declarative representations of these failures, and associations between failures and particular learning strategies. The theory is based on Meta-XPs, which are explanation structures that help the system identify failure types, formulate learning goals, and choose appropriate learning strategies in order to avoid similar mistakes in the future. The theory is implemented in a computer model of an introspective reasoner that performs multistrategy learning during a story understanding task.
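A minimal illustration of the failure-to-strategy association, in the spirit of Meta-XPs but with made-up taxonomy entries, learning goals, and strategy names:

```python
# Hypothetical sketch: associating reasoning-failure types with learning goals and
# learning strategies, roughly in the spirit of Meta-XPs. The taxonomy entries and
# strategy names are illustrative, not the paper's actual ones.
from dataclasses import dataclass

@dataclass
class MetaXP:
    """Explains a reasoning failure and points to how to learn from it."""
    failure_type: str      # e.g. "novel-situation", "incorrect-background-knowledge"
    learning_goal: str     # desired change to the background knowledge
    strategy: str          # learning algorithm chosen to achieve that goal

TAXONOMY = [
    MetaXP("novel-situation",
           "acquire a schema covering the unexpected event",
           "explanation-based generalization"),
    MetaXP("incorrect-background-knowledge",
           "revise the faulty belief that produced the wrong expectation",
           "abductive explanation plus belief revision"),
    MetaXP("mis-indexed-memory",
           "re-index the relevant case under the cues present at retrieval time",
           "index learning"),
]

def diagnose(failure_type):
    """Select learning goals and strategies appropriate to an observed failure type."""
    return [m for m in TAXONOMY if m.failure_type == failure_type]

if __name__ == "__main__":
    for m in diagnose("mis-indexed-memory"):
        print(m.learning_goal, "->", m.strategy)
```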

Read the paper:

Introspective Reasoning using Meta-Explanations for Multistrategy Learning

by Ashwin Ram, Mike Cox

In Machine Learning: A Multistrategy Approach, Vol. IV, R.S. Michalski and G. Tecuci (eds.), 349-377, Morgan Kaufmann, 1994
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-19.pdf

The Use of Explicit Goals for Knowledge to Guide Inference and Learning

Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the set of inferences that can be drawn from a reasoner’s knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference. Not all inferences are equally useful to a given reasoning system. Any reasoning system that has goals (or any form of a utility function) and acts based on its beliefs indirectly assigns utility to its beliefs. Given limits on the process of inference, and variation in the utility of inferences, it is clear that a reasoner ought to draw the inferences that will be most valuable to it.

This paper presents an approach to this problem that makes the utility of a (potential) belief an explicit part of the inference process. The method is to generate explicit desires for knowledge. The question of focus of attention is thereby transformed into two related problems: How can explicit desires for knowledge be used to control inference and facilitate resource-constrained goal pursuit in general? and, Where do these desires for knowledge come from? We present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience.
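A minimal sketch of the control idea, with hypothetical names and a toy matching criterion: a knowledge goal is an explicit specification of desired knowledge, and only candidate inferences that serve some active goal are drawn, within a fixed inferential budget.

```python
# Hypothetical sketch: using explicit knowledge goals to decide which of many
# candidate inferences to actually draw. Names and matching are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class KnowledgeGoal:
    """A desire for knowledge: a predicate over candidate conclusions."""
    description: str
    matches: Callable[[str], bool]

def select_inferences(candidates, goals, budget):
    """Draw only the candidate inferences that serve an active knowledge goal,
    up to a fixed inferential budget."""
    useful = [c for c in candidates if any(g.matches(c) for g in goals)]
    return useful[:budget]

if __name__ == "__main__":
    goals = [KnowledgeGoal("why was the car bombed?",
                           lambda c: "motive" in c or "bombing" in c)]
    candidates = [
        "infer motive of the bomber",
        "infer the color of the car",
        "infer who owned the bombing target",
        "infer the weather that day",
    ]
    print(select_inferences(candidates, goals, budget=2))
```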

Read the paper:

The Use of Explicit Goals for Knowledge to Guide Inference and Learning

by Ashwin Ram, Larry Hunter

Applied Intelligence journal, 2(1):47-73, 1992
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-04.pdf

Interest-based Information Filtering and Extraction in Natural Language Understanding Systems

Given the vast amount of information available to the average person, there is a growing need for mechanisms that can select relevant or useful information based on some specification of the interests of a user. Furthermore, experience with natural language understanding and reasoning programs in artificial intelligence has demonstrated that the combinatorial explosion of possible conclusions that can be drawn from any input is a serious computational bottleneck in the design of computer programs that process information automatically.

This paper presents a theory of interestingness that serves as the basis for two story understanding programs, one that can filter and extract information likely to be relevant or interesting to a user, and another that can formulate and pursue its own interests based on an analysis of the information necessary to carry out the tasks it is pursuing. We discuss the basis for our theory of interestingness, heuristics for interest-based processing of information, and the process used to filter and extract relevant information from the input.
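The sketch below is a toy illustration, not either of the programs described in the paper: each sentence is scored against a user’s stated interests plus a couple of generic heuristics (novelty, anomaly cue words), and only high-scoring sentences are kept. The heuristics and weights are made up for the example.

```python
# Hypothetical sketch of interest-based filtering: score each sentence against a
# user's stated interests plus generic heuristics, and keep the high scorers.
def interest_score(sentence, interests, seen_topics):
    words = set(sentence.lower().split())
    score = 0.0
    score += 2.0 * len(words & interests)          # matches explicit user interests
    if words - seen_topics:                        # novelty: mentions something new
        score += 1.0
    if any(w in words for w in ("unusual", "unexpected", "surprising")):
        score += 1.0                               # anomaly cue words
    return score

def filter_interesting(sentences, interests, threshold=2.0):
    seen, kept = set(), []
    for s in sentences:
        if interest_score(s, interests, seen) >= threshold:
            kept.append(s)
        seen |= set(s.lower().split())
    return kept

if __name__ == "__main__":
    interests = {"earthquake", "casualties"}
    story = [
        "An earthquake struck the coast early on Monday.",
        "Local schools were closed for a holiday.",
        "Officials reported casualties in an unusual aftershock pattern.",
    ]
    print(filter_interesting(story, interests))
```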

Read the paper:

Interest-based Information Filtering and Extraction in Natural Language Understanding Systems

by Ashwin Ram

Bellcore Workshop on High-Performance Information Filtering, Morristown, NJ, November 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-05.pdf

Evaluation of Explanatory Hypotheses

Abduction is often viewed as inference to the “best” explanation. However, the evaluation of the goodness of candidate hypotheses remains an open problem. Most artificial intelligence research addressing this problem has concentrated on syntactic criteria, applied uniformly regardless of the explainer’s intended use for the explanation. We demonstrate that syntactic approaches are insufficient to capture important differences in explanations, and propose instead that choice of the “best” explanation should be based on explanations’ utility for the explainer’s purpose. We describe two classes of goals motivating explanation: knowledge goals reflecting internal desires for information, and goals to accomplish tasks in the external world. We describe how these goals impose requirements on explanations, and discuss how we apply those requirements to evaluate hypotheses in two computer story understanding systems.
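As a hedged illustration (not the story understanding systems described in the paper), candidate explanations can be ranked by how many of the explainer’s active knowledge goals and task goals each one would serve; the data and scoring below are illustrative.

```python
# Hypothetical sketch: ranking candidate explanations by their utility for the
# explainer's goals rather than by syntactic criteria alone.
from dataclasses import dataclass

@dataclass
class Explanation:
    text: str
    answers: set      # knowledge goals (questions) this explanation would answer
    supports: set     # external tasks this explanation would help accomplish

def utility(expl, knowledge_goals, task_goals):
    """Score = how many active goals of each class the explanation serves."""
    return len(expl.answers & knowledge_goals) + len(expl.supports & task_goals)

def best_explanation(candidates, knowledge_goals, task_goals):
    return max(candidates, key=lambda e: utility(e, knowledge_goals, task_goals))

if __name__ == "__main__":
    candidates = [
        Explanation("the driver intended the crash",
                    answers={"why did the car swerve?"},
                    supports={"assign blame"}),
        Explanation("the road was icy",
                    answers={"why did the car swerve?"},
                    supports={"prevent future accidents", "assign blame"}),
    ]
    print(best_explanation(candidates,
                           knowledge_goals={"why did the car swerve?"},
                           task_goals={"prevent future accidents"}).text)
```

The same two explanations would tie under a purely syntactic criterion; the purpose of the explainer is what breaks the tie.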

Read the paper:

Evaluation of Explanatory Hypotheses

by Ashwin Ram, David Leake

13th Annual Conference of the Cognitive Science Society, 867-871, Chicago, IL, August 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-03.pdf

A Goal-based Approach to Intelligent Information Retrieval

Intelligent information retrieval (IIR) requires inference. The number of inferences that can be drawn by even a simple reasoner is very large, and the inferential resources available to any practical computer system are limited. This problem is one long faced by AI researchers. In this paper, we present a method used by two recent machine learning programs for control of inference that is relevant to the design of IIR systems.

The key feature of the approach is the use of explicit representations of desired knowledge, which we call knowledge goals. Our theory addresses the representation of knowledge goals, methods for generating and transforming these goals, and heuristics for selecting among potential inferences in order to feasibly satisfy such goals. In this view, IIR becomes a kind of planning: decisions about what to infer, how to infer and when to infer are based on representations of desired knowledge, as well as internal representations of the system’s inferential abilities and current state.

The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel newspaper stories, and a differential diagnosis program that improves its accuracy with experience. We conclude by making several suggestions on how this machine learning framework can be integrated with existing information retrieval methods.
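The sketch below (hypothetical names, not either program’s code) illustrates retrieval as goal pursuit: a knowledge goal is a specification of desired knowledge, and when no stored item satisfies it directly, the goal is transformed, here by generalizing its terms through a small concept taxonomy, and retrieval is retried.

```python
# Hypothetical sketch: retrieval driven by knowledge goals, with a simple goal
# transformation (generalize the query) when nothing matches directly.
def retrieve(goal_spec, index):
    """Return items whose index terms cover the goal specification."""
    return [item for terms, item in index if goal_spec <= terms]

def generalize(goal_spec, taxonomy):
    """Transform the goal by replacing terms with their parents in a concept taxonomy."""
    return frozenset(taxonomy.get(t, t) for t in goal_spec)

def satisfy(goal_spec, index, taxonomy):
    hits = retrieve(goal_spec, index)
    if hits:
        return hits
    return retrieve(generalize(goal_spec, taxonomy), index)  # retry with a weaker goal

if __name__ == "__main__":
    index = [
        (frozenset({"vehicle", "failure", "report"}), "doc-17: vehicle failure reports"),
        (frozenset({"engine", "maintenance"}),        "doc-42: engine maintenance guide"),
    ]
    taxonomy = {"car": "vehicle", "breakdown": "failure"}
    goal = frozenset({"car", "breakdown", "report"})
    print(satisfy(goal, index, taxonomy))
```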

Read the paper:

A Goal-based Approach to Intelligent Information Retrieval

by Ashwin Ram, Larry Hunter

Eighth International Workshop on Machine Learning (ICML-91), Chicago, IL, June 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-02.pdf

A Theory of Questions and Question Asking

This article focuses on knowledge goals, that is, the goals of a reasoner to acquire or reorganize knowledge. Knowledge goals, often expressed as questions, arise when the reasoner’s model of the domain is inadequate in some reasoning situation. This leads the reasoner to focus on the knowledge it needs, to formulate questions to acquire this knowledge, and to learn by pursuing its questions. I develop a theory of questions and of question asking, motivated both by cognitive and computational considerations, and I discuss the theory in the context of the task of story understanding. I present a computer model of an active reader that learns about novel domains by reading newspaper stories.
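A toy sketch of the active-reader idea, with illustrative names rather than the model’s actual code: when the reader cannot explain a fact, it posts a question indexed by the concepts involved; later sentences that mention those concepts cause the pending questions to resurface for answering.

```python
# Hypothetical sketch of an 'active reader' that learns by posting and pursuing
# questions. The indexing scheme and names are illustrative.
from collections import defaultdict

class ActiveReader:
    def __init__(self, known_facts):
        self.known = set(known_facts)
        self.pending = defaultdict(list)   # concept -> open questions

    def read(self, sentence, concepts):
        # Resurface questions already posted under the concepts this sentence mentions.
        answered = [q for c in concepts for q in self.pending.pop(c, [])]
        # If the sentence itself is unexplained, post a question about it.
        if sentence not in self.known:
            question = f"why: {sentence}?"
            for c in concepts:
                self.pending[c].append(question)
        return answered

if __name__ == "__main__":
    reader = ActiveReader(known_facts=[])
    reader.read("a teenager drove a car bomb into the embassy", {"bombing", "teenager"})
    answered = reader.read("the teenager had been promised a reward", {"teenager", "reward"})
    print(answered)   # the earlier 'why' question resurfaces when 'teenager' recurs
```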

Read the paper:

A Theory of Questions and Question Asking

by Ashwin Ram

The Journal of the Learning Sciences, 1(3&4):273-318, 1991
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-02.pdf

Knowledge Goals: A Theory of Interestingness

Combinatorial explosion of inferences has always been one of the classic problems in AI. Resources are limited, and inferences potentially infinite; a reasoner needs to be able to determine which inferences are useful to draw from a given piece of text. But unless one considers the goals of the reasoner, it is very difficult to give a principled definition of what it means for an inference to be “useful.”

This paper presents a theory of inference control based on the notion of interestingness. We introduce knowledge goals, the goals of a reasoner to acquire some piece of knowledge required for a reasoning task, as the focusing criteria for inference control. We argue that knowledge goals correspond to the interests of the reasoner, and present a theory of interestingness that is functionally motivated by consideration of the needs of the reasoner. Although we use story understanding as the reasoning task, many of the arguments carry over to other cognitive tasks as well.
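A compact, hypothetical illustration of a functionally motivated interestingness measure: a concept is worth inferring about if it serves an active knowledge goal, and anomalies carry some intrinsic interest because they are worth explaining. The scoring scheme is made up for the example, not the paper’s.

```python
# Hypothetical sketch of a functionally motivated interestingness measure.
def interestingness(concept, knowledge_goals, anomalies):
    """Concepts that serve a knowledge goal are useful; anomalies are worth explaining."""
    score = 0
    if any(goal in concept for goal in knowledge_goals):
        score += 2      # functional interest: helps satisfy an active knowledge goal
    if concept in anomalies:
        score += 1      # general heuristic: unexpected events deserve explanation
    return score

if __name__ == "__main__":
    goals = {"motive"}
    anomalies = {"the bus took an unannounced detour"}
    for concept in ["the bus arrived on schedule",
                    "the driver's motive for the detour",
                    "the bus took an unannounced detour"]:
        print(concept, "->", interestingness(concept, goals, anomalies))
```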

Read the paper:

Knowledge Goals: A Theory of Interestingness

by Ashwin Ram

Twelfth Annual Conference of the Cognitive Science Society, 206-214, Cambridge, MA, July 1990
www.cc.gatech.edu/faculty/ashwin/papers/er-90-02.pdf