An Explicit Representation of Forgetting

A pervasive, yet much-ignored, factor in the analysis of processing failures is the problem of misorganized knowledge. If a system’s knowledge is not indexed or organized correctly, it may make an error not because it lacks the general capability or the specific knowledge to solve a problem, but because that knowledge is not organized so that the appropriate knowledge structures are brought to bear on the problem at the appropriate time. In such cases, the system can be said to have “forgotten” the knowledge, if only in this context. This is the problem of forgetting, or retrieval failure.

This research presents an analysis along with a declarative representation of a number of types of forgetting errors. Such representations can extend the capability of introspective failure-driven learning systems, allowing them to reduce the likelihood of repeating such errors. Examples are presented from the Meta-AQUA program, which learns to improve its performance on a story understanding task through an introspective meta-analysis of its knowledge, its organization of its knowledge, and its reasoning processes.

Read the paper:

An Explicit Representation of Forgetting

by Mike Cox, Ashwin Ram

6th International Conference on Systems Research, Informatics and Cybernetics, Baden-Baden, Germany, August 1992.
www.cc.gatech.edu/faculty/ashwin/papers/er-92-06.pdf

The Use of Explicit Goals for Knowledge to Guide Inference and Learning

Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the set of inferences that can be drawn from a reasoner’s knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference. Not all inferences are equally useful to a given reasoning system. Any reasoning system that has goals (or any form of a utility function) and acts based on its beliefs indirectly assigns utility to its beliefs. Given limits on the process of inference, and variation in the utility of inferences, it is clear that a reasoner ought to draw the inferences that will be most valuable to it.

This paper presents an approach to this problem that makes the utility of a (potential) belief an explicit part of the inference process. The method is to generate explicit desires for knowledge. The question of focus of attention is thereby transformed into two related problems: How can explicit desires for knowledge be used to control inference and facilitate resource-constrained goal pursuit in general? and, Where do these desires for knowledge come from? We present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience.

Read the paper:

The Use of Explicit Goals for Knowledge to Guide Inference and Learning

by Ashwin Ram, Larry Hunter

Applied Intelligence, 2(1):47-73, 1992
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-04.pdf

Interest-based Information Filtering and Extraction in Natural Language Understanding Systems

Given the vast amount of information available to the average person, there is a growing need for mechanisms that can select relevant or useful information based on some specification of the interests of a user. Furthermore, experience with natural language understanding and reasoning programs in artificial intelligence has demonstrated that the combinatorial explosion of possible conclusions that can be drawn from any input is a serious computational bottleneck in the design of computer programs that process information automatically.

This paper presents a theory of interestingness that serves as the basis for two story understanding programs, one that can filter and extract information likely to be relevant or interesting to a user, and another that can formulate and pursue its own interests based on an analysis of the information necessary to carry out the tasks it is pursuing. We discuss the basis for our theory of interestingness, heuristics for interest-based processing of information, and the process used to filter and extract relevant information from the input.

Read the paper:

Interest-based Information Filtering and Extraction in Natural Language Understanding Systems

by Ashwin Ram

Bellcore Workshop on High-Performance Information Filtering, Morristown, NJ, November 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-05.pdf

Evaluation of Explanatory Hypotheses

Abduction is often viewed as inference to the “best” explanation. However, the evaluation of the goodness of candidate hypotheses remains an open problem. Most artificial intelligence research addressing this problem has concentrated on syntactic criteria, applied uniformly regardless of the explainer’s intended use for the explanation. We demonstrate that syntactic approaches are insufficient to capture important differences in explanations, and propose instead that choice of the “best” explanation should be based on explanations’ utility for the explainer’s purpose. We describe two classes of goals motivating explanation: knowledge goals reflecting internal desires for information, and goals to accomplish tasks in the external world. We describe how these goals impose requirements on explanations, and discuss how we apply those requirements to evaluate hypotheses in two computer story understanding systems.

Read the paper:

Evaluation of Explanatory Hypotheses

by Ashwin Ram, David Leake

13th Annual Conference of the Cognitive Science Society, 867-871, Chicago, IL, August 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-03.pdf

A Goal-based Approach to Intelligent Information Retrieval

Intelligent information retrieval (IIR) requires inference. The number of inferences that can be drawn by even a simple reasoner is very large, and the inferential resources available to any practical computer system are limited. This problem is one long faced by AI researchers. In this paper, we present a method used by two recent machine learning programs for control of inference that is relevant to the design of IIR systems.

The key feature of the approach is the use of explicit representations of desired knowledge, which we call knowledge goals. Our theory addresses the representation of knowledge goals, methods for generating and transforming these goals, and heuristics for selecting among potential inferences in order to feasibly satisfy such goals. In this view, IIR becomes a kind of planning: decisions about what to infer, how to infer and when to infer are based on representations of desired knowledge, as well as internal representations of the system’s inferential abilities and current state.
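The idea of knowledge goals acting as a filter on candidate inferences can be sketched roughly as follows. This is a minimal illustration with invented names and data, not code from the Ram and Hunter programs: the reasoner keeps explicit records of the knowledge it desires, and draws only those inferences whose conclusions serve an active goal.

```python
# Hypothetical sketch: explicit knowledge goals controlling inference.
# Names and rules here are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class KnowledgeGoal:
    concept: str  # the kind of knowledge the reasoner wants to acquire

# Candidate inferences: (premise, rule applied, concept of the conclusion).
CANDIDATES = [
    ("patient has fever", "infection-rule", "diagnosis"),
    ("story mentions airport", "travel-rule", "destination"),
    ("patient has fever", "weather-rule", "season"),
]

def select_inferences(goals, candidates):
    """Draw only inferences whose conclusions serve an active knowledge goal."""
    wanted = {g.concept for g in goals}
    return [c for c in candidates if c[2] in wanted]
```

With an active goal to establish a diagnosis, only the infection rule fires; the other candidate inferences, however valid, are never drawn. This captures, in miniature, the shift from uncontrolled forward inference to inference as goal-directed planning.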

The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel newspaper stories, and a differential diagnosis program that improves its accuracy with experience. We conclude by making several suggestions on how this machine learning framework can be integrated with existing information retrieval methods.

Read the paper:

A Goal-based Approach to Intelligent Information Retrieval

by Ashwin Ram, Larry Hunter

Eighth International Workshop on Machine Learning (ICML-91), Chicago, IL, June 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-02.pdf

Learning Indices for Schema Selection

In addition to learning new knowledge, a system must be able to learn when that knowledge is likely to be applicable. An index is a piece of information that, when identified in a given situation, triggers the relevant piece of knowledge (or schema) in the system’s memory. We discuss how indices may be learned automatically in the context of a story understanding task, and present a program that can learn new indices for existing explanatory schemas. We describe two methods by which the system can identify the relevant schema even when the input does not directly match an existing index, and learn a new index that allows it to retrieve the schema more efficiently in the future.
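The retrieve-then-index cycle can be sketched roughly as follows. This is an invented minimal illustration, not the Bhatta and Ram implementation: when a cue has no index, the system falls back to an expensive search over schema features, and then caches a new index so the same cue retrieves the schema directly next time.

```python
# Hypothetical sketch: learning a new index after a fallback schema search.
# Class and feature names are invented for illustration.

class SchemaMemory:
    def __init__(self):
        self.schemas = {}  # schema name -> set of features it explains
        self.index = {}    # cue -> schema name

    def add_schema(self, name, features, cues):
        self.schemas[name] = set(features)
        for cue in cues:
            self.index[cue] = name

    def retrieve(self, cue, situation_features):
        # Fast path: the cue directly triggers an indexed schema.
        if cue in self.index:
            return self.index[cue]
        # Slow path: search all schemas for the best feature overlap
        # with the current situation.
        best, best_overlap = None, 0
        for name, feats in self.schemas.items():
            overlap = len(feats & set(situation_features))
            if overlap > best_overlap:
                best, best_overlap = name, overlap
        if best is not None:
            # Learn a new index so this cue retrieves the schema
            # directly in the future.
            self.index[cue] = best
        return best
```

The first retrieval via an unindexed cue pays the cost of the search; every later retrieval with that cue is a direct lookup.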

Read the paper:

Learning Indices for Schema Selection

by Sam Bhatta, Ashwin Ram

Florida Artificial Intelligence Research Symposium (FLAIRS-91), 226-231, Cocoa Beach, FL, April 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-01.pdf

A Theory of Questions and Question Asking

This article focuses on knowledge goals, that is, the goals of a reasoner to acquire or reorganize knowledge. Knowledge goals, often expressed as questions, arise when the reasoner’s model of the domain is inadequate in some reasoning situation. This leads the reasoner to focus on the knowledge it needs, to formulate questions to acquire this knowledge, and to learn by pursuing its questions. I develop a theory of questions and of question-asking, motivated by both cognitive and computational considerations, and I discuss the theory in the context of the task of story understanding. I present a computer model of an active reader that learns about novel domains by reading newspaper stories.

Read the paper:

A Theory of Questions and Question Asking

by Ashwin Ram

The Journal of the Learning Sciences, 1(3&4):273-318, 1991
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-02.pdf

Decision Models: A Theory of Volitional Explanation

This paper presents a theory of motivational analysis, the construction of volitional explanations to describe the planning behavior of agents. We discuss both the content of such explanations and the process by which an understander builds them. Explanations are constructed from decision models, which describe the planning process that an agent goes through when considering whether to perform an action. Decision models are represented as explanation patterns, which are standard patterns of causality based on previous experiences of the understander. We discuss the nature of explanation patterns, their use in representing decision models, and the process by which they are retrieved, used, and evaluated.

Read the paper:

Decision Models: A Theory of Volitional Explanation

by Ashwin Ram

Twelfth Annual Conference of the Cognitive Science Society (CogSci-90), Cambridge, MA, July 1990
www.cc.gatech.edu/faculty/ashwin/papers/er-90-03.pdf

Knowledge Goals: A Theory of Interestingness

Combinatorial explosion of inferences has always been one of the classic problems in AI. Resources are limited, and inferences potentially infinite; a reasoner needs to be able to determine which inferences are useful to draw from a given piece of text. But unless one considers the goals of the reasoner, it is very difficult to give a principled definition of what it means for an inference to be “useful.”

This paper presents a theory of inference control based on the notion of interestingness. We introduce knowledge goals, the goals of a reasoner to acquire some piece of knowledge required for a reasoning task, as the focusing criteria for inference control. We argue that knowledge goals correspond to the interests of the reasoner, and present a theory of interestingness that is functionally motivated by consideration of the needs of the reasoner. Although we use story understanding as the reasoning task, many of the arguments carry over to other cognitive tasks as well.

Read the paper:

Knowledge Goals: A Theory of Interestingness

by Ashwin Ram

Twelfth Annual Conference of the Cognitive Science Society, 206-214, Cambridge, MA, July 1990
www.cc.gatech.edu/faculty/ashwin/papers/er-90-02.pdf

How To Present A Paper

(Paraphrased from my hazy memory of what Drew McDermott taught me many years ago.)

Many students present a paper, especially one authored by someone else, by talking through it section by section or page by page. The student reads out the definitions and points the audience to the figures. Anything in italics is read out. The student works through the paper linearly, taking great care not to miss anything the author wrote that might possibly be relevant. This approach is not useful: the student is simply reading the paper aloud, forgetting that the audience is perfectly capable of reading the paper themselves and, in most cases, already has. Here is a different approach.

If you’re presenting the paper:

  • Read the paper ahead of time, and decide what you think of the ideas presented in the paper. In particular, decide whether you think the paper has some good ideas or whether it belongs in the recycling bin. Keep in mind that very few papers have no worthwhile ideas whatsoever; however, if you’re convinced that your paper belongs in this category, follow the steps listed below for critiquing a paper.
  • Next, decide which idea is the best idea (or a small cluster of related ideas) in the paper. “Best” may mean most novel, most central, most relevant, most clever, most important, and so on. Write down this idea, preferably in your own words, and a one-line justification for why this idea is the best one. (This step is particularly important when the paper you’re presenting is your own.)
  • Now comes the crucial step: Figure out how to get your audience as quickly as possible to the point where they can understand this idea.
  • Next, if necessary, elaborate the idea and fill in the details. Explain things like how the idea came about, how it was fleshed out in the paper, how it was proven, what benefit it had, what difference it made, what alternative ideas might have been pursued instead, and so on.

If you’re critiquing the paper:

  • Read the paper ahead of time, and decide what you think of the ideas presented in the paper.
  • Next, determine what you think is the central fallacy or bad idea (or a small cluster of related ideas) in the paper. Don’t pick something tangential; you want a novel, central, relevant, clever, important idea (similar to the kind of idea you’d pick if you were presenting the paper) but one that is, in your mind, simply wrong. Write down this idea, preferably in your own words, and a short “bottom line” reason explaining why this idea is wrong.
  • Now comes the crucial step: Figure out how to get your audience as quickly as possible to the point where they can understand the fallacy or bad idea.
  • Next, if necessary, elaborate the idea and fill in the details. Explain things like how the idea came about, how it was fleshed out in the paper, what problems it raised, why the proof was inadequate, what alternative ideas might have been pursued instead, and so on.