Posts Tagged ‘meta-reasoning’

An Explicit Representation of Forgetting

A pervasive, yet often overlooked, factor in the analysis of processing failures is the problem of misorganized knowledge. If a system’s knowledge is not indexed or organized correctly, it may make an error not because it lacks the general capability or the specific knowledge to solve a problem, but because that knowledge is not organized well enough for the appropriate knowledge structures to be brought to bear on the problem at the appropriate time. In such cases, the system can be said to have “forgotten” the knowledge, if only in this context. This is the problem of forgetting, or retrieval failure.
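To make the idea concrete, here is a minimal sketch (not from the paper; the IndexedMemory class and the cues are invented for this example) of how mis-indexed knowledge produces a retrieval failure: the knowledge is present in memory, but the cue available at reasoning time never reaches it.

```python
# Minimal sketch (not from the paper): a toy indexed memory in which
# knowledge filed under the wrong index cannot be retrieved, even though
# it is present in the store.

class IndexedMemory:
    def __init__(self):
        self.store = {}  # index cue -> list of knowledge structures

    def add(self, cue, knowledge):
        self.store.setdefault(cue, []).append(knowledge)

    def retrieve(self, cue):
        # An empty result is a retrieval failure, not absence of knowledge.
        return self.store.get(cue, [])

memory = IndexedMemory()
# The explanation is indexed under "bomb", but the story cue is "explosion".
memory.add("bomb", "explanation: a concealed device caused the blast")

print(memory.retrieve("explosion"))  # [] -- the system has "forgotten"
print(memory.retrieve("bomb"))       # the knowledge was there all along
```

In this toy setting, learning to avoid the error means repairing the index (filing the explanation under the cue that actually arises), not acquiring new knowledge.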

This research presents an analysis, along with a declarative representation, of several types of forgetting errors. Such representations can extend the capabilities of introspective, failure-driven learning systems, allowing them to reduce the likelihood of repeating such errors. Examples are presented from the Meta-AQUA program, which learns to improve its performance on a story understanding task through an introspective meta-analysis of its knowledge, the organization of that knowledge, and its reasoning processes.

Read the paper:

An Explicit Representation of Forgetting

by Mike Cox, Ashwin Ram

6th International Conference on Systems Research, Informatics and Cybernetics, Baden-Baden, Germany, August 1992.
www.cc.gatech.edu/faculty/ashwin/papers/er-92-06.pdf

The Use of Explicit Goals for Knowledge to Guide Inference and Learning

Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the set of inferences that can be drawn from a reasoner’s knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference. Not all inferences are equally useful to a given reasoning system. Any reasoning system that has goals (or any form of utility function) and acts based on its beliefs indirectly assigns utility to its beliefs. Given the limits on the process of inference and the variation in the utility of inferences, it is clear that a reasoner ought to draw the inferences that will be most valuable to it.

This paper presents an approach to this problem that makes the utility of a (potential) belief an explicit part of the inference process. The method is to generate explicit desires for knowledge. The question of focus of attention is thereby transformed into two related problems: how can explicit desires for knowledge be used to control inference and facilitate resource-constrained goal pursuit in general, and where do these desires for knowledge come from? We present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. The theory is illustrated with two case studies: a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience.
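As a rough illustration of the idea, here is a minimal sketch (not the paper’s implementation; the rule base and the pursue function are invented for this example) of inference controlled by explicit knowledge goals: rather than forward-chaining over every rule, the reasoner pursues only the inferences that answer an outstanding knowledge goal, spawning subgoals for missing premises.

```python
# Minimal sketch (not the paper's implementation): inference controlled
# by explicit knowledge goals. Only rules whose conclusions answer an
# outstanding goal are pursued; everything else is ignored.

# Hypothetical rule base: (premises, conclusion) pairs.
RULES = [
    ({"smoke"}, "fire"),
    ({"fire"}, "danger"),
    ({"rain"}, "wet_ground"),  # irrelevant to the goal below; never pursued
]

def pursue(goal, facts, in_progress=frozenset()):
    """Try to establish `goal`, spawning knowledge goals for missing premises."""
    if goal in facts:
        return True
    if goal in in_progress:  # cycle guard
        return False
    for premises, conclusion in RULES:
        if conclusion == goal and all(
            pursue(p, facts, in_progress | {goal}) for p in premises
        ):
            facts.add(conclusion)
            return True
    return False

facts = {"smoke"}
pursue("danger", facts)  # the explicit knowledge goal
print(facts)             # {'smoke', 'fire', 'danger'}; 'wet_ground' is
                         # never inferred because no knowledge goal needs it
```

The point of the sketch is only that an explicit goal for knowledge gives the reasoner a principled filter over the (potentially infinite) space of inferences it could draw.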

Read the paper:

The Use of Explicit Goals for Knowledge to Guide Inference and Learning

by Ashwin Ram, Larry Hunter

Applied Intelligence, 2(1):47-73, 1992.
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-04.pdf