Archive for the ‘Language’ Category

Introspective Reasoning using Meta-Explanations for Multistrategy Learning

In order to learn effectively, a reasoner must not only possess knowledge about the world and be able to improve that knowledge, but must also introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires declarative representations of meta-knowledge of the reasoning performed by the system during the performance task, of the system’s knowledge, and of the organization of this knowledge.

This paper presents a taxonomy of possible reasoning failures that can occur during a performance task, declarative representations of these failures, and associations between failures and particular learning strategies. The theory is based on Meta-XPs, which are explanation structures that help the system identify failure types, formulate learning goals, and choose appropriate learning strategies in order to avoid similar mistakes in the future. The theory is implemented in a computer model of an introspective reasoner that performs multistrategy learning during a story understanding task.

Read the paper:

Introspective Reasoning using Meta-Explanations for Multistrategy Learning

by Ashwin Ram, Mike Cox

In Machine Learning: A Multistrategy Approach, Vol. IV, R.S. Michalski and G. Tecuci (eds.), 349-377, Morgan Kaufmann, 1994
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-19.pdf

Creative Conceptual Change

Creative conceptual change involves (a) the construction of new concepts and of coherent belief systems, or theories, relating these concepts, and (b) the modification and extrapolation of existing concepts and theories in novel situations. The first kind of process involves reformulating perceptual, sensorimotor, or other low-level information into higher-level abstractions. The second kind of process involves a temporary suspension of disbelief and the extension or adaptation of existing concepts to create a conceptual model of a new situation that may be very different from previous real-world experience.

We discuss these and other types of conceptual change, and present computational models of constructive and extrapolative processes in creative conceptual change. The models have been implemented as computer programs in two very different “everyday” task domains: (a) SINS is an autonomous robotic navigation system that learns to navigate in an obstacle-ridden world by constructing sensorimotor concepts that represent navigational strategies, and (b) ISAAC is a natural language understanding system that reads short stories from the science fiction genre, a task that requires a deep understanding of concepts that may be very different from those the system is already familiar with.

Read the paper:

Creative Conceptual Change

by Ashwin Ram, Kenneth Moorman, Juan Carlos Santamaria

Invited talk at the 15th Annual Conference of the Cognitive Science Society, Boulder, CO, June 1993. Long version published as Technical Report GIT-CC-96/07, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1996.
www.cc.gatech.edu/faculty/ashwin/papers/er-93-04.pdf

Interest-based Information Filtering and Extraction in Natural Language Understanding Systems

Given the vast amount of information available to the average person, there is a growing need for mechanisms that can select relevant or useful information based on some specification of the interests of a user. Furthermore, experience with natural language understanding and reasoning programs in artificial intelligence has demonstrated that the combinatorial explosion of possible conclusions that can be drawn from any input is a serious computational bottleneck in the design of computer programs that process information automatically.

This paper presents a theory of interestingness that serves as the basis for two story understanding programs, one that can filter and extract information likely to be relevant or interesting to a user, and another that can formulate and pursue its own interests based on an analysis of the information necessary to carry out the tasks it is pursuing. We discuss the basis for our theory of interestingness, heuristics for interest-based processing of information, and the process used to filter and extract relevant information from the input.

Read the paper:

Interest-based Information Filtering and Extraction in Natural Language Understanding Systems

by Ashwin Ram

Bellcore Workshop on High-Performance Information Filtering, Morristown, NJ, November 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-05.pdf

Evaluation of Explanatory Hypotheses

Abduction is often viewed as inference to the “best” explanation. However, the evaluation of the goodness of candidate hypotheses remains an open problem. Most artificial intelligence research addressing this problem has concentrated on syntactic criteria, applied uniformly regardless of the explainer’s intended use for the explanation. We demonstrate that syntactic approaches are insufficient to capture important differences in explanations, and propose instead that choice of the “best” explanation should be based on explanations’ utility for the explainer’s purpose. We describe two classes of goals motivating explanation: knowledge goals reflecting internal desires for information, and goals to accomplish tasks in the external world. We describe how these goals impose requirements on explanations, and discuss how we apply those requirements to evaluate hypotheses in two computer story understanding systems.

Read the paper:

Evaluation of Explanatory Hypotheses

by Ashwin Ram, David Leake

13th Annual Conference of the Cognitive Science Society, 867-871, Chicago, IL, August 1991
www.cc.gatech.edu/faculty/ashwin/papers/er-91-03.pdf

A Theory of Questions and Question Asking

This article focuses on knowledge goals, that is, the goals of a reasoner to acquire or reorganize knowledge. Knowledge goals, often expressed as questions, arise when the reasoner’s model of the domain is inadequate in some reasoning situation. This leads the reasoner to focus on the knowledge it needs, to formulate questions to acquire this knowledge, and to learn by pursuing its questions. I develop a theory of questions and of question-asking, motivated both by cognitive and computational considerations, and I discuss the theory in the context of the task of story understanding. I present a computer model of an active reader that learns about novel domains by reading newspaper stories.

Read the paper:

A Theory of Questions and Question Asking

by Ashwin Ram

The Journal of the Learning Sciences, 1(3&4):273-318, 1991
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-02.pdf

Decision Models: A Theory of Volitional Explanation

This paper presents a theory of motivational analysis, the construction of volitional explanations to describe the planning behavior of agents. We discuss both the content of such explanations and the process by which an understander builds the explanations. Explanations are constructed from decision models, which describe the planning process that an agent goes through when considering whether to perform an action. Decision models are represented as explanation patterns, which are standard patterns of causality based on previous experiences of the understander. We discuss the nature of explanation patterns, their use in representing decision models, and the process by which they are retrieved, used, and evaluated.
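To make the idea concrete, the abstract's notion of a decision model represented as an explanation pattern can be sketched as a simple data structure. This is a hypothetical illustration, not the paper's actual representation; the fields and the example scenario are invented.

```python
# Hypothetical sketch: a decision model stored as an explanation
# pattern -- a reusable causal template an understander retrieves to
# explain why an agent chose to perform an action.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ExplanationPattern:
    action: str                 # the observed act to be explained
    considerations: List[str]   # goals and beliefs the agent weighed
    decision: str               # the volitional conclusion the pattern supplies

    def matches(self, observed_action: str) -> bool:
        # Retrieval is sketched here as a simple match on the action;
        # evaluation of the instantiated pattern would follow.
        return self.action == observed_action

xp = ExplanationPattern(
    action="go on strike",
    considerations=["goal: higher wages", "belief: employer will negotiate"],
    decision="agent judges the expected benefit to outweigh the risk",
)
```

An understander would keep a library of such patterns, retrieve those whose action matches the observed behavior, and evaluate the instantiated considerations against what is known about the agent.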

Read the paper:

Decision Models: A Theory of Volitional Explanation

by Ashwin Ram

Twelfth Annual Conference of the Cognitive Science Society (CogSci-90), Cambridge, MA, July 1990
www.cc.gatech.edu/faculty/ashwin/papers/er-90-03.pdf

Knowledge Goals: A Theory of Interestingness

Combinatorial explosion of inferences has always been one of the classic problems in AI. Resources are limited, and inferences potentially infinite; a reasoner needs to be able to determine which inferences are useful to draw from a given piece of text. But unless one considers the goals of the reasoner, it is very difficult to give a principled definition of what it means for an inference to be “useful.”

This paper presents a theory of inference control based on the notion of interestingness. We introduce knowledge goals, the goals of a reasoner to acquire some piece of knowledge required for a reasoning task, as the focusing criteria for inference control. We argue that knowledge goals correspond to the interests of the reasoner, and present a theory of interestingness that is functionally motivated by consideration of the needs of the reasoner. Although we use story understanding as the reasoning task, many of the arguments carry over to other cognitive tasks as well.
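The abstract's central mechanism, using knowledge goals as focusing criteria so that only "interesting" inferences are pursued, can be illustrated with a minimal sketch. This is not the paper's system; the goal representation and the story fragments are invented for illustration.

```python
# Illustrative sketch: knowledge goals as inference control. A candidate
# inference is pursued only if it bears on some active knowledge goal
# of the reasoner; everything else is pruned, bounding the otherwise
# combinatorial space of possible conclusions.

from dataclasses import dataclass
from typing import List, Set

@dataclass(frozen=True)
class KnowledgeGoal:
    topic: str  # a concept the reasoner wants to know more about

def interesting(inference: dict, goals: List[KnowledgeGoal]) -> bool:
    """An inference is interesting iff it addresses an active goal."""
    return any(g.topic in inference["concepts"] for g in goals)

# The reasoner's current knowledge goal while reading a story:
goals = [KnowledgeGoal("smuggling")]

# Candidate inferences a story understander could draw from one input:
candidates = [
    {"conclusion": "the courier hid contraband", "concepts": {"smuggling", "courier"}},
    {"conclusion": "the airport has two terminals", "concepts": {"airport"}},
]

pursued = [c["conclusion"] for c in candidates if interesting(c, goals)]
# Only the goal-relevant inference survives the filter.
```

The filter is deliberately trivial; the point is the control structure: the reasoner's goals, not syntactic properties of the input, determine which inferences are worth drawing.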

Read the paper:

Knowledge Goals: A Theory of Interestingness

by Ashwin Ram

Twelfth Annual Conference of the Cognitive Science Society, 206-214, Cambridge, MA, July 1990
www.cc.gatech.edu/faculty/ashwin/papers/er-90-02.pdf

Evaluating Text-Mining Strategies for Interpreting DNA Microarray Expression Profiles

To facilitate the interpretation of large data sets generated by DNA microarray studies, we are 1) developing a text mining system to extract keywords from MEDLINE abstracts associated with individual gene names and 2) investigating several clustering algorithms to determine relationships between genes based on shared keywords. The basic mechanism of our keyword extraction algorithm was described previously (Soc Neurosci Abstr 2001, 557.4). Recent progress in evaluating the performance of this algorithm through Precision-Recall calculations and in using extracted keywords to accurately cluster predefined groups of genes is reported here.
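The two steps the abstract describes, relating genes by shared keywords and scoring keyword extraction with precision and recall, can be sketched in a few lines. This is a minimal illustration under invented data, not the authors' implementation: the gene names, keywords, similarity threshold, and greedy grouping scheme are all assumptions.

```python
# Illustrative sketch: grouping genes by overlap of their MEDLINE-derived
# keyword sets, plus a precision/recall check on extracted keywords.

def jaccard(a: set, b: set) -> float:
    """Keyword-set similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_genes(keywords: dict, threshold: float = 0.3) -> list:
    """Greedy single-link grouping: a gene joins the first cluster
    containing a gene whose keyword overlap meets the threshold."""
    clusters = []
    for gene, kws in keywords.items():
        for cluster in clusters:
            if any(jaccard(kws, keywords[g]) >= threshold for g in cluster):
                cluster.append(gene)
                break
        else:
            clusters.append([gene])
    return clusters

def precision_recall(extracted: set, gold: set) -> tuple:
    """Score extracted keywords against a hand-curated gold set."""
    tp = len(extracted & gold)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Invented example data: two glutamate-receptor genes and one unrelated gene.
keywords = {
    "GRIN1":  {"glutamate", "receptor", "synapse"},
    "GRIN2A": {"glutamate", "receptor", "channel"},
    "TP53":   {"tumor", "apoptosis", "suppressor"},
}
clusters = cluster_genes(keywords)  # → [['GRIN1', 'GRIN2A'], ['TP53']]
```

The actual system uses more sophisticated clustering algorithms and real MEDLINE abstracts; the sketch only shows the shape of the "shared keywords → gene relationships" idea and the evaluation metric.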

Read the paper:

Evaluating Text-Mining Strategies for Interpreting DNA Microarray Expression Profiles

by Brian Ciliax, Ying Liu, Jorge Civera, Ashwin Ram, Sham Navathe, Ray Dingledine

Annual Meeting of the Society for Neuroscience (Soc Neurosci Abstr), Orlando, FL, September 2002
www.cc.gatech.edu/faculty/ashwin/papers/er-02-01.pdf