Archive for the ‘Learning’ Category

An Explicit Representation of Forgetting

A pervasive yet often ignored factor in the analysis of processing failures is the problem of misorganized knowledge. If a system’s knowledge is not indexed or organized correctly, it may make an error not because it lacks the general capability or the specific knowledge to solve a problem, but because its knowledge is not organized well enough for the appropriate knowledge structures to be brought to bear on the problem at the appropriate time. In such cases, the system can be said to have “forgotten” the knowledge, if only in this context. This is the problem of forgetting, or retrieval failure.

This research presents an analysis, along with a declarative representation, of a number of types of forgetting errors. Such representations can extend the capability of introspective failure-driven learning systems, allowing them to reduce the likelihood of repeating such errors. Examples are presented from the Meta-AQUA program, which learns to improve its performance on a story understanding task through an introspective meta-analysis of its knowledge, the organization of that knowledge, and its reasoning processes.
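
As a rough illustration of the retrieval-failure idea (a sketch with invented names, not Meta-AQUA’s implementation), knowledge can sit in memory yet be “forgotten” in any context whose cues never map to its index; the error record mirrors the kind of declarative trace an introspective learner could inspect and repair:

```python
# Illustrative sketch: forgetting as retrieval failure in an indexed memory.
# The names (Memory, ForgettingError, the story cues) are hypothetical and
# are not taken from Meta-AQUA itself.

from dataclasses import dataclass

@dataclass
class ForgettingError:
    """Declarative record of a retrieval failure: the knowledge existed,
    but no index linked the current cues to it."""
    cues: tuple
    missing_schema: str

class Memory:
    def __init__(self):
        self.schemas = {}   # name -> knowledge structure
        self.index = {}     # cue  -> schema name

    def store(self, name, schema, cues):
        self.schemas[name] = schema
        for cue in cues:
            self.index[cue] = name

    def retrieve(self, cues):
        for cue in cues:
            if cue in self.index:
                return self.schemas[self.index[cue]]
        return None         # retrieval failure: "forgotten" in this context

mem = Memory()
mem.store("hidden-contraband", {"type": "explanation"}, cues=["smuggling"])

# The schema is in memory, but these cues never reach it:
if mem.retrieve(["dog-sniffing", "airport"]) is None:
    # An introspective learner can record the failure declaratively and
    # repair the organization by installing the missing index.
    error = ForgettingError(cues=("dog-sniffing", "airport"),
                            missing_schema="hidden-contraband")
    mem.index["dog-sniffing"] = "hidden-contraband"
```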

Read the paper:

An Explicit Representation of Forgetting

by Mike Cox, Ashwin Ram

6th International Conference on Systems Research, Informatics and Cybernetics, Baden-Baden, Germany, August 1992
www.cc.gatech.edu/faculty/ashwin/papers/er-92-06.pdf

The Use of Explicit Goals for Knowledge to Guide Inference and Learning

Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the set of inferences that can be drawn from a reasoner’s knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference. Not all inferences are equally useful to a given reasoning system. Any reasoning system that has goals (or any form of a utility function) and acts based on its beliefs indirectly assigns utility to its beliefs. Given limits on the process of inference, and variation in the utility of inferences, it is clear that a reasoner ought to draw the inferences that will be most valuable to it.

This paper presents an approach to this problem that makes the utility of a (potential) belief an explicit part of the inference process. The method is to generate explicit desires for knowledge. The question of focus of attention is thereby transformed into two related problems: how explicit desires for knowledge can be used to control inference and facilitate resource-constrained goal pursuit in general, and where these desires for knowledge come from. We present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. The theory is illustrated using two case studies: a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience.
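
As a minimal sketch of the idea, assuming a toy representation of inferences and goals (none of the names below come from the paper), candidate inferences can be ranked by how well they serve pending knowledge goals, so a limited inference budget is spent on the most valuable inferences first:

```python
# Hedged sketch: ranking candidate inferences by utility to pending
# knowledge goals. All names are illustrative, not the paper's mechanisms.

import heapq

def utility(inference, knowledge_goals):
    """Score an inference by how many open knowledge goals its conclusion
    would satisfy (a stand-in for any richer utility estimate)."""
    return sum(1 for goal in knowledge_goals if goal(inference))

def draw_inferences(candidates, knowledge_goals, budget):
    """Draw at most `budget` inferences, highest-utility first."""
    ranked = [(-utility(c, knowledge_goals), i, c)
              for i, c in enumerate(candidates)]
    heapq.heapify(ranked)
    drawn = []
    while ranked and len(drawn) < budget:
        _, _, inference = heapq.heappop(ranked)
        drawn.append(inference)
    return drawn

# A knowledge goal expressed as a predicate over candidate conclusions:
wants_cause = lambda concl: concl.startswith("cause:")
candidates = ["cause: short-circuit", "color: red", "cause: overload"]
print(draw_inferences(candidates, [wants_cause], budget=2))
# -> ['cause: short-circuit', 'cause: overload']; the low-utility
#    inference is never drawn.
```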

Read the paper:

The Use of Explicit Goals for Knowledge to Guide Inference and Learning

by Ashwin Ram, Larry Hunter

Applied Intelligence, 2(1):47-73, 1992
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-04.pdf

A Goal-based Approach to Intelligent Information Retrieval

Intelligent information retrieval (IIR) requires inference. The number of inferences that can be drawn by even a simple reasoner is very large, and the inferential resources available to any practical computer system are limited. This problem is one long faced by AI researchers. In this paper, we present a method used by two recent machine learning programs for control of inference that is relevant to the design of IIR systems.

The key feature of the approach is the use of explicit representations of desired knowledge, which we call knowledge goals. Our theory addresses the representation of knowledge goals, methods for generating and transforming these goals, and heuristics for selecting among potential inferences in order to feasibly satisfy such goals. In this view, IIR becomes a kind of planning: decisions about what to infer, how to infer, and when to infer are based on representations of desired knowledge, as well as on internal representations of the system’s inferential abilities and current state.
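
To make the “retrieval as planning” view concrete, here is a schematic sketch (the representation below is invented for illustration, not the paper’s actual formalism): a knowledge goal pairs a specification of the desired knowledge with the task that needs it, and retrieval decisions are made against those goals under a resource budget:

```python
# Schematic sketch of retrieval planning driven by knowledge goals.
# The field names are hypothetical, chosen only to mirror the prose.

from dataclasses import dataclass

@dataclass
class KnowledgeGoal:
    concept_spec: dict   # what knowledge is desired, as feature constraints
    task: str            # the reasoning task the knowledge would serve
    priority: int        # how urgently that task needs it

def matches(item, spec):
    return all(item.get(k) == v for k, v in spec.items())

def plan_retrieval(goals, memory, budget):
    """Decide what to retrieve and when: serve goals in priority order,
    charging each retrieval attempt against a limited inference budget."""
    results = {}
    for goal in sorted(goals, key=lambda g: -g.priority):
        if budget <= 0:
            break
        results[goal.task] = [item for item in memory
                              if matches(item, goal.concept_spec)]
        budget -= 1
    return results

memory = [{"topic": "hijacking", "kind": "explanation"},
          {"topic": "hijacking", "kind": "statistic"}]
goal = KnowledgeGoal({"topic": "hijacking", "kind": "explanation"},
                     task="explain-story-event", priority=2)
print(plan_retrieval([goal], memory, budget=3))
```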

The theory is illustrated using two case studies: a natural language understanding program that learns by reading novel newspaper stories, and a differential diagnosis program that improves its accuracy with experience. We conclude by making several suggestions on how this machine learning framework can be integrated with existing information retrieval methods.

Read the paper:

A Goal-based Approach to Intelligent Information Retrieval

by Ashwin Ram, Larry Hunter

Eighth International Workshop on Machine Learning (ICML-91), Chicago, IL, June 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-02.pdf

Learning Indices for Schema Selection

In addition to learning new knowledge, a system must be able to learn when that knowledge is likely to be applicable. An index is a piece of information which, when identified in a given situation, triggers the relevant piece of knowledge (or schema) in the system’s memory. We discuss how indices may be learned automatically in the context of a story understanding task, and present a program that can learn new indices for existing explanatory schemas. We discuss two methods by which the system can identify the relevant schema even when the input does not directly match an existing index, and then learn a new index that allows it to retrieve the schema more efficiently in the future.
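
The index-learning loop can be pictured roughly as follows (a sketch with hypothetical names and a crude feature-overlap rule standing in for the paper’s actual identification methods): when no stored index fires, the system falls back to a more expensive search over schemas and, on success, installs a new index so the same schema is retrieved cheaply next time:

```python
# Illustrative index learning for schema selection; class names and the
# feature-overlap rule are hypothetical stand-ins for the paper's methods.

class SchemaMemory:
    def __init__(self):
        self.schemas = []   # each schema: {"name": str, "pattern": set}
        self.indices = {}   # feature -> schema

    def add_schema(self, schema, index_feature):
        self.schemas.append(schema)
        self.indices[index_feature] = schema

    def select(self, features):
        # Fast path: an existing index fires directly on an input feature.
        for f in features:
            if f in self.indices:
                return self.indices[f]
        # Slow path: search all schemas for a partial match.
        for schema in self.schemas:
            shared = schema["pattern"] & set(features)
            if shared:
                # Learn: install the identifying features as new indices
                # so this schema is found by direct lookup next time.
                for f in shared:
                    self.indices[f] = schema
                return schema
        return None

mem = SchemaMemory()
mem.add_schema({"name": "kidnapping", "pattern": {"ransom", "abduction"}},
               index_feature="ransom")
mem.select(["abduction"])          # slow path; learns "abduction" as an index
assert "abduction" in mem.indices  # next time, retrieval is a direct lookup
```

The point of the sketch is the asymmetry: index lookup is cheap, schema search is expensive, and learning a new index converts the second into the first.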

Read the paper:

Learning Indices for Schema Selection

by Sam Bhatta, Ashwin Ram

Florida Artificial Intelligence Research Symposium (FLAIRS-91), 226-231, Cocoa Beach, FL, April 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-01.pdf

A Theory of Questions and Question Asking

This article focuses on knowledge goals, that is, the goals of a reasoner to acquire or reorganize knowledge. Knowledge goals, often expressed as questions, arise when the reasoner’s model of the domain is inadequate in some reasoning situation. This leads the reasoner to focus on the knowledge it needs, to formulate questions to acquire this knowledge, and to learn by pursuing its questions. I develop a theory of questions and of question asking, motivated by both cognitive and computational considerations, and discuss the theory in the context of the task of story understanding. I present a computer model of an active reader that learns about novel domains by reading newspaper stories.
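
Procedurally, the question-asking loop might be sketched like this (illustrative only; the article’s model is considerably richer): when input cannot be accounted for by the current domain model, the reader poses a question, keeps it pending, and checks later input against it:

```python
# Rough sketch of an "active reader" that learns by posing questions.
# The structures here are illustrative, not the article's actual model.

def understand(story, model):
    """Read sentences in order; pose a question whenever the model cannot
    account for a sentence, and answer open questions from later input."""
    questions = []
    for sentence in story:
        text = sentence.lower()
        # Does this sentence bear on any pending question?
        for q in questions[:]:
            if q["topic"] in text:
                model[q["topic"]] = sentence   # learn: fill the gap
                questions.remove(q)
        # If the model has nothing on this sentence's topic, ask.
        topic = text.split()[0]
        if topic not in model:
            questions.append({"topic": topic,
                              "text": f"Why/how: '{sentence}'?"})
    return model, questions

model, open_questions = understand(
    ["Terrorists hijacked a plane.",
     "Terrorists demanded the release of prisoners."],
    model={})
# The first sentence raises a question about "terrorists"; the second,
# which mentions them again, supplies material toward an answer.
```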

Read the paper:

A Theory of Questions and Question Asking

by Ashwin Ram

The Journal of the Learning Sciences, 1(3&4):273-318, 1991
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-02.pdf

Evaluating Text-Mining Strategies for Interpreting DNA Microarray Expression Profiles

To facilitate the interpretation of the large data sets generated by DNA microarray studies, we are (1) developing a text mining system to extract keywords from MEDLINE abstracts associated with individual gene names, and (2) investigating several clustering algorithms to determine relationships between genes based on shared keywords. The basic mechanisms of our keyword extraction algorithm were described previously (Soc Neurosci Abstr 2001, 557.4). Here we report recent progress in evaluating the performance of this algorithm through precision-recall calculations, and in using extracted keywords to accurately cluster predefined groups of genes.
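
As a hedged sketch of the pipeline’s shape (simplified throughout; the scoring, stop-word handling, and clustering below are placeholders for the methods actually under study), keywords are scored per gene across its abstracts, genes are compared by keyword overlap, and clusters are evaluated with precision and recall:

```python
# Simplified sketch of the keyword-and-clustering idea; real MEDLINE
# processing, stop-word handling, and the clustering algorithms under
# study are all considerably more involved.

from collections import Counter

def keywords_for_gene(abstracts, top_n=5):
    """Rank words by frequency across a gene's abstracts (a crude
    stand-in for the actual keyword-scoring algorithm)."""
    counts = Counter(word.lower().strip(".,;")
                     for a in abstracts for word in a.split())
    return {word for word, _ in counts.most_common(top_n)}

def shared_keyword_similarity(kw_a, kw_b):
    """Jaccard overlap of two genes' keyword sets, as a basis for
    clustering genes that share keywords."""
    union = kw_a | kw_b
    return len(kw_a & kw_b) / len(union) if union else 0.0

def precision_recall(predicted_cluster, predefined_group):
    """Evaluate a predicted gene cluster against a predefined group."""
    tp = len(predicted_cluster & predefined_group)
    precision = tp / len(predicted_cluster) if predicted_cluster else 0.0
    recall = tp / len(predefined_group) if predefined_group else 0.0
    return precision, recall
```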

Read the paper:

Evaluating Text-Mining Strategies for Interpreting DNA Microarray Expression Profiles

by Brian Ciliax, Ying Liu, Jorge Civera, Ashwin Ram, Sham Navathe, Ray Dingledine

Annual Meeting of the Society for Neuroscience (Soc Neurosci Abstr), Orlando, FL, September 2002
www.cc.gatech.edu/faculty/ashwin/papers/er-02-01.pdf