Posts Tagged ‘planning’

The Use of Explicit Goals for Knowledge to Guide Inference and Learning

Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the set of inferences that can be drawn from a reasoner’s knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference. Not all inferences are equally useful to a given reasoning system. Any reasoning system that has goals (or any form of a utility function) and acts based on its beliefs indirectly assigns utility to its beliefs. Given limits on the process of inference, and variation in the utility of inferences, it is clear that a reasoner ought to draw the inferences that will be most valuable to it.

This paper presents an approach to this problem that makes the utility of a (potential) belief an explicit part of the inference process. The method is to generate explicit desires for knowledge. The question of focus of attention is thereby transformed into two related problems: how can explicit desires for knowledge be used to control inference and facilitate resource-constrained goal pursuit in general, and where do these desires for knowledge come from? We present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. The theory is illustrated using two case studies: a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience.
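The core idea — ranking potential inferences by the utility of the knowledge goals they would satisfy, under a fixed inferential budget — can be sketched in a few lines. This is a minimal illustration of the general technique, not code from the paper; all names, the utility weights, and the story-inspired goals are hypothetical.

```python
import heapq

class KnowledgeGoal:
    """A hypothetical representation of a desire for knowledge:
    a query the reasoner wants answered, weighted by its utility."""
    def __init__(self, query, utility):
        self.query = query
        self.utility = utility

def select_inferences(goals, candidate_inferences, budget):
    """Score each candidate inference by the total utility of the
    knowledge goals it would satisfy, then draw only as many
    inferences as the inferential budget allows."""
    scored = []
    for inf in candidate_inferences:
        score = sum(g.utility for g in goals if g.query in inf["answers"])
        heapq.heappush(scored, (-score, inf["name"]))
    chosen = []
    for _ in range(min(budget, len(scored))):
        neg_score, name = heapq.heappop(scored)
        if neg_score < 0:  # skip inferences that serve no goal
            chosen.append(name)
    return chosen

goals = [KnowledgeGoal("why-kidnap", 5), KnowledgeGoal("who-paid", 2)]
candidates = [
    {"name": "motive-inference", "answers": {"why-kidnap"}},
    {"name": "ransom-inference", "answers": {"who-paid"}},
    {"name": "weather-inference", "answers": set()},
]
print(select_inferences(goals, candidates, budget=2))
```

The point of the sketch is the control decision: the zero-utility "weather-inference" is never drawn, even when the budget would permit it, because no explicit knowledge goal wants its result.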

Read the paper:

The Use of Explicit Goals for Knowledge to Guide Inference and Learning

by Ashwin Ram, Larry Hunter

Applied Intelligence journal, 2(1):47-73, 1992
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-04.pdf

A Goal-based Approach to Intelligent Information Retrieval

Intelligent information retrieval (IIR) requires inference. The number of inferences that can be drawn by even a simple reasoner is very large, and the inferential resources available to any practical computer system are limited. This problem is one long faced by AI researchers. In this paper, we present a method used by two recent machine learning programs for control of inference that is relevant to the design of IIR systems.

The key feature of the approach is the use of explicit representations of desired knowledge, which we call knowledge goals. Our theory addresses the representation of knowledge goals, methods for generating and transforming these goals, and heuristics for selecting among potential inferences in order to feasibly satisfy such goals. In this view, IIR becomes a kind of planning: decisions about what to infer, how to infer, and when to infer are based on representations of desired knowledge, as well as internal representations of the system’s inferential abilities and current state.
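The "retrieval as planning" view can be made concrete with a small sketch: when a knowledge goal cannot be satisfied directly from the index, the system applies goal transformations (here, a single query-generalization step) and recurses. This is an illustrative assumption-laden toy, not the paper's implementation; the index contents, goal names, and the `generalize` transform are all hypothetical.

```python
def plan_retrieval(goal, index, transforms):
    """Treat retrieval as planning: try to satisfy the knowledge
    goal directly; if no document answers it, apply goal
    transformations and pursue the resulting subgoals."""
    hits = index.get(goal, [])
    if hits:
        return hits
    for transform in transforms:
        subgoal = transform(goal)
        if subgoal and subgoal != goal:
            hits = plan_retrieval(subgoal, index, transforms)
            if hits:
                return hits
    return []

# Hypothetical index and a single generalization transform:
# nothing is filed under the specific goal, but a document is
# filed under its generalization.
index = {
    "treatment-for-influenza": [],
    "treatment-for-viral-infection": ["doc-17"],
}
generalize = {"treatment-for-influenza": "treatment-for-viral-infection"}.get
print(plan_retrieval("treatment-for-influenza", index, [generalize]))
```

The design choice mirrors the abstract: the decision of *when* and *how* to relax a query is driven by an explicit representation of the desired knowledge, not by a fixed keyword-matching pipeline.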

The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel newspaper stories, and a differential diagnosis program that improves its accuracy with experience. We conclude by making several suggestions on how this machine learning framework can be integrated with existing information retrieval methods.

Read the paper:

A Goal-based Approach to Intelligent Information Retrieval

by Ashwin Ram, Larry Hunter

Eighth International Workshop on Machine Learning (ICML-91), Chicago, IL, June 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-02.pdf

Decision Models: A Theory of Volitional Explanation

This paper presents a theory of motivational analysis, the construction of volitional explanations to describe the planning behavior of agents. We discuss both the content of such explanations and the process by which an understander builds them. Explanations are constructed from decision models, which describe the planning process that an agent goes through when considering whether to perform an action. Decision models are represented as explanation patterns, which are standard patterns of causality based on previous experiences of the understander. We discuss the nature of explanation patterns, their use in representing decision models, and the process by which they are retrieved, used, and evaluated.
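The retrieve-and-evaluate cycle for explanation patterns can be sketched as a lookup followed by a precondition check. This is a minimal toy under my own assumptions, not the paper's representation; the stored patterns, precondition vocabulary, and explanation strings are invented for illustration.

```python
# Hypothetical explanation patterns: stereotyped causal chains
# linking an agent's situation and goals to a decision to act.
EXPLANATION_PATTERNS = {
    "strike": {
        "preconditions": {"agent-is-employee", "wages-are-low"},
        "explanation": "agent strikes to pressure employer into raising wages",
    },
    "boycott": {
        "preconditions": {"agent-is-consumer", "disapproves-of-seller"},
        "explanation": "agent boycotts to penalize the seller economically",
    },
}

def explain_decision(action, observed_facts):
    """Retrieve the stored pattern for the action and evaluate it
    against what has been observed: return the volitional
    explanation if the preconditions hold, otherwise the missing
    facts the understander would still have to verify."""
    pattern = EXPLANATION_PATTERNS.get(action)
    if pattern is None:
        return None, None  # no pattern retrieved for this action
    missing = pattern["preconditions"] - observed_facts
    if not missing:
        return pattern["explanation"], set()
    return None, missing

explanation, gaps = explain_decision(
    "strike", {"agent-is-employee", "wages-are-low"}
)
print(explanation)
```

When evaluation fails, the returned `missing` set plays the role of a follow-up question for the understander — the same move the two preceding abstracts describe as generating a knowledge goal.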

Read the paper:

Decision Models: A Theory of Volitional Explanation

by Ashwin Ram

Twelfth Annual Conference of the Cognitive Science Society (CogSci-90), Cambridge, MA, July 1990
www.cc.gatech.edu/faculty/ashwin/papers/er-90-03.pdf