Posts Tagged ‘semantic memory’

Indexing, Elaboration and Refinement: Incremental Learning of Explanatory Cases

This article describes how a reasoner can improve its understanding of an incompletely understood domain by applying what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner’s memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good “lessons” to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices.

We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.
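
As a rough illustration of the kind of mechanism the paper describes, here is a minimal sketch of a case library that learns a new case when retrieval fails, installs new indices so the case can be found from fresh cues, and refines a retrieved case with each situation it is used to explain. The code and all names in it (Case, CaseLibrary, understand) are invented for this sketch and are not the paper’s implementation.

    # Minimal illustrative sketch of incremental case learning: store, index, refine.
    class Case:
        def __init__(self, explanation, features):
            self.explanation = explanation      # the "lesson" the case carries
            self.features = set(features)       # situation features it covers

        def refine(self, new_features):
            # (c) Refinement: fold in features from a new situation
            # that this case was used to explain.
            self.features |= set(new_features)

    class CaseLibrary:
        def __init__(self):
            self.index = {}                     # feature -> cases indexed by it

        def store(self, case):
            for f in case.features:
                self.index.setdefault(f, []).append(case)

        def add_index(self, case, feature):
            # (b) Index learning: make the case retrievable from a new cue.
            self.index.setdefault(feature, []).append(case)

        def retrieve(self, features):
            # Return the indexed case sharing the most features, if any.
            found = {id(c): c for f in features for c in self.index.get(f, [])}
            if not found:
                return None
            return max(found.values(), key=lambda c: len(c.features & set(features)))

    def understand(library, features, explanation_if_novel):
        case = library.retrieve(features)
        if case is None:
            case = Case(explanation_if_novel, features)   # (a) learn a new case
            library.store(case)
        else:
            for f in features:                            # (b) learn missing indices
                if case not in library.index.get(f, []):
                    library.add_index(case, f)
            case.refine(features)                         # (c) refine with experience
        return case.explanation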

Read the paper:

Indexing, Elaboration and Refinement: Incremental Learning of Explanatory Cases

by Ashwin Ram

Machine Learning, 10:201-248, 1993
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-03.pdf

An Explicit Representation of Forgetting

A pervasive, yet much ignored, factor in the analysis of processing failures is the problem of misorganized knowledge. If a system’s knowledge is not indexed or organized correctly, it may make an error, not because it does not have either the general capability or specific knowledge to solve a problem, but rather because it does not have the knowledge sufficiently organized so that the appropriate knowledge structures are brought to bear on the problem at the appropriate time. In such cases, the system can be said to have “forgotten” the knowledge, if only in this context. This is the problem of forgetting or retrieval failure.

This research presents an analysis along with a declarative representation of a number of types of forgetting errors. Such representations can extend the capability of introspective failure-driven learning systems, allowing them to reduce the likelihood of repeating such errors. Examples are presented from the Meta-AQUA program, which learns to improve its performance on a story understanding task through an introspective meta-analysis of its knowledge, its organization of its knowledge, and its reasoning processes.
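
Purely as an illustration of what a declarative record of a forgetting error might look like (Meta-AQUA’s actual representation is richer and is defined in the paper), a retrieval failure can be reified as data that the learner can later inspect and repair:

    from dataclasses import dataclass

    # Illustrative sketch, not Meta-AQUA's representation: the needed knowledge
    # was in memory, but the cue used at retrieval time was not linked to it,
    # so nothing relevant was brought to bear -- the structure was "forgotten".
    @dataclass
    class RetrievalFailure:
        cue: str            # the index actually used at retrieval time
        retrieved: object   # what memory returned (None if nothing)
        needed: object      # the structure later found to be required
        reason: str = "structure present but not indexed by this cue"

    def repair(memory_index, failure):
        # Failure-driven learning: reorganize memory so the same cue now
        # retrieves the structure that was missed in this context.
        memory_index.setdefault(failure.cue, []).append(failure.needed)

    memory_index = {"animal-attack": ["explanation-of-dog-bite"]}
    failure = RetrievalFailure(cue="dog-barks-at-luggage",
                               retrieved=None,
                               needed="explanation-of-contraband-detection")
    repair(memory_index, failure)
    # The same cue now retrieves the relevant explanation instead of failing.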

Read the paper:

An Explicit Representation of Forgetting

by Mike Cox, Ashwin Ram

6th International Conference on Systems Research, Informatics and Cybernetics, Baden-Baden, Germany, August 1992
www.cc.gatech.edu/faculty/ashwin/papers/er-92-06.pdf

Evaluation of Explanatory Hypotheses

Abduction is often viewed as inference to the “best” explanation. However, the evaluation of the goodness of candidate hypotheses remains an open problem. Most artificial intelligence research addressing this problem has concentrated on syntactic criteria, applied uniformly regardless of the explainer’s intended use for the explanation. We demonstrate that syntactic approaches are insufficient to capture important differences in explanations, and propose instead that choice of the “best” explanation should be based on explanations’ utility for the explainer’s purpose. We describe two classes of goals motivating explanation: knowledge goals reflecting internal desires for information, and goals to accomplish tasks in the external world. We describe how these goals impose requirements on explanations, and discuss how we apply those requirements to evaluate hypotheses in two computer story understanding systems.
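
As a schematic illustration of purpose-directed evaluation (the names and scoring below are invented for this sketch, not those of the systems in the paper), candidate hypotheses can be ranked by how many of the explainer’s active goals they would satisfy rather than by a purely syntactic score:

    # Sketch: prefer the hypothesis that serves the explainer's goals,
    # breaking ties with a simple plausibility estimate.
    def evaluate(candidates, goals):
        def utility(explanation):
            answered = sum(1 for g in goals if g in explanation["answers"])
            return (answered, explanation["plausibility"])
        return max(candidates, key=utility)

    candidates = [
        {"answers": {"why-did-agent-act"},                  "plausibility": 0.6},
        {"answers": {"why-did-agent-act", "will-it-recur"}, "plausibility": 0.4},
    ]
    # A goal of anticipating recurrence makes the second hypothesis the "best"
    # explanation, even though the first is judged more plausible in isolation.
    best = evaluate(candidates, goals={"why-did-agent-act", "will-it-recur"})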

Read the paper:

Evaluation of Explanatory Hypotheses

by Ashwin Ram, David Leake

13th Annual Conference of the Cognitive Science Society, 867-871, Chicago, IL, August 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-03.pdf

Learning Indices for Schema Selection

In addition to learning new knowledge, a system must be able to learn when that knowledge is likely to be applicable. An index is a piece of information which, when identified in a given situation, triggers the relevant piece of knowledge (or schema) in the system’s memory. We discuss how indices may be learned automatically in the context of a story understanding task, and present a program that can learn new indices for existing explanatory schemas. We discuss two methods by which the system can identify the relevant schema even when the input does not directly match an existing index, and then learn a new index that allows it to retrieve the schema more efficiently in the future.
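
The following sketch conveys the general idea of index learning; the fallback search and the names used are simplifications invented here, not the specific methods of the paper. When the direct index misses, a costlier search through the schemas’ applicability conditions finds the relevant one, and the new cue is then installed as an index so future retrieval is direct.

    # Sketch: learn a new index when a schema is found only by fallback search.
    schemas = {
        "xp-detection": {"applies_to": {"dog-barks-at-luggage", "dog-sniffs-package"}},
    }
    index = {"dog-barks-at-luggage": "xp-detection"}   # existing indices

    def retrieve_schema(cue):
        name = index.get(cue)                 # direct index match
        if name:
            return name
        for name, schema in schemas.items():  # fallback: search applicability
            if cue in schema["applies_to"]:
                index[cue] = name             # learn a new index for next time
                return name
        return None

    retrieve_schema("dog-sniffs-package")     # found by search; index learned
    retrieve_schema("dog-sniffs-package")     # now retrieved directly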

Read the paper:

Learning Indices for Schema Selection

by Sam Bhatta, Ashwin Ram

Florida Artificial Intelligence Research Symposium (FLAIRS-91), 226-231, Cocoa Beach, FL, April 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-01.pdf

Decision Models: A Theory of Volitional Explanation

This paper presents a theory of motivational analysis, the construction of volitional explanations that describe the planning behavior of agents. We discuss both the content of such explanations and the process by which an understander builds them. Explanations are constructed from decision models, which describe the planning process that an agent goes through when considering whether to perform an action. Decision models are represented as explanation patterns, which are standard patterns of causality based on previous experiences of the understander. We discuss the nature of explanation patterns, their use in representing decision models, and the process by which they are retrieved, used, and evaluated.
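
For a schematic sense of the representation (the fields and the example below are invented for illustration and are not the paper’s actual decision-model vocabulary), an explanation pattern for a decision can be thought of as a reusable causal template that is retrieved by the action to be explained and applied only when its assumed considerations hold:

    from dataclasses import dataclass, field

    # Illustrative sketch of a decision model stored as an explanation pattern.
    @dataclass
    class ExplanationPattern:
        action: str           # the kind of decision being explained
        considerations: list  # factors the agent is assumed to weigh
        conclusion: str       # the volitional explanation the pattern yields
        bindings: dict = field(default_factory=dict)

        def instantiate(self, agent, facts):
            # Apply the pattern only if its considerations are supported
            # by what is known about this agent and situation.
            if all(c in facts for c in self.considerations):
                self.bindings = {"agent": agent}
                return f"{agent} {self.conclusion}"
            return None       # retrieved but not applicable; try another pattern

    xp_library = {
        "go-on-strike": ExplanationPattern(
            action="go-on-strike",
            considerations=["wants-higher-pay", "expects-strike-to-force-concession"],
            conclusion="decided to strike because the expected gain outweighed the cost"),
    }

    facts = {"wants-higher-pay", "expects-strike-to-force-concession"}
    explanation = xp_library["go-on-strike"].instantiate("the union", facts)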

Read the paper:

Decision Models: A Theory of Volitional Explanation

by Ashwin Ram

Twelfth Annual Conference of the Cognitive Science Society (CogSci-90), Cambridge, MA, July 1990
www.cc.gatech.edu/faculty/ashwin/papers/er-90-03.pdf

Knowledge Goals: A Theory of Interestingness

Combinatorial explosion of inferences has always been one of the classic problems in AI. Resources are limited, and inferences potentially infinite; a reasoner needs to be able to determine which inferences are useful to draw from a given piece of text. But unless one considers the goals of the reasoner, it is very difficult to give a principled definition of what it means for an inference to be “useful.”

This paper presents a theory of inference control based on the notion of interestingness. We introduce knowledge goals, the goals of a reasoner to acquire some piece of knowledge required for a reasoning task, as the focusing criteria for inference control. We argue that knowledge goals correspond to the interests of the reasoner, and present a theory of interestingness that is functionally motivated by consideration of the needs of the reasoner. Although we use story understanding as the reasoning task, many of the arguments carry over to other cognitive tasks as well.
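
As a bare-bones illustration of inference control by knowledge goals (the structures below are invented for this sketch, not those of the paper), a reasoner can filter the inferences a text could trigger down to those that would contribute to a goal it is actively pursuing:

    # Sketch: draw only the inferences that serve an active knowledge goal.
    def interesting(inference, knowledge_goals):
        return any(goal in inference["provides"] for goal in knowledge_goals)

    def control_inferences(candidates, knowledge_goals):
        return [inf for inf in candidates if interesting(inf, knowledge_goals)]

    knowledge_goals = {"why-did-the-dog-bark"}
    candidates = [
        {"rule": "barking-can-signal-detection", "provides": {"why-did-the-dog-bark"}},
        {"rule": "luggage-has-a-handle",         "provides": {"physical-detail"}},
    ]
    # Only the first inference is pursued; the second, though valid, answers
    # no question the reasoner cares about.
    to_draw = control_inferences(candidates, knowledge_goals)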

Read the paper:

Knowledge Goals: A Theory of Interestingness

by Ashwin Ram

Twelfth Annual Conference of the Cognitive Science Society, 206-214, Cambridge, MA, July 1990
www.cc.gatech.edu/faculty/ashwin/papers/er-90-02.pdf