Posts Tagged ‘text cbr’

Structuring On-The-Job Troubleshooting Performance to Aid Learning

This paper describes a methodology for aiding the learning of troubleshooting tasks in the course of an engineer’s work. The approach supports learning in the context of actual, on-the-job troubleshooting while simultaneously supporting performance of the troubleshooting task itself. This approach has been implemented in a computer tool called WALTS (Workspace for Aiding and Learning Troubleshooting).

This method aids learning by helping the learner structure his or her task into the conceptual components necessary for troubleshooting, giving advice about how to proceed, suggesting candidate hypotheses and solutions, and automatically retrieving cognitively relevant media. WALTS includes three major components: a structured dynamic workspace for representing knowledge about the troubleshooting process and the device being diagnosed; an intelligent agent that facilitates the troubleshooting process by offering advice; and an intelligent media retrieval tool that automatically presents candidate hypotheses and solutions, relevant cases, and various other media. WALTS creates resources for future learning and aiding of troubleshooting by storing completed troubleshooting instances in a self-populating database of troubleshooting cases.
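
Of these components, the self-populating case database is perhaps the easiest to make concrete. The Python sketch below illustrates the general idea of storing completed troubleshooting instances and retrieving them by symptom similarity; it is only an illustration of that idea, not the WALTS implementation, and all class, field, and function names are assumptions.

from dataclasses import dataclass, field


@dataclass
class TroubleshootingCase:
    symptoms: set[str]                 # observed symptoms of the faulty device
    hypotheses: list[str]              # candidate faults that were considered
    solution: str                      # the fix that resolved the problem
    media: list[str] = field(default_factory=list)  # pointers to relevant documents


class CaseLibrary:
    """Stores completed troubleshooting instances and retrieves similar ones."""

    def __init__(self):
        self.cases: list[TroubleshootingCase] = []

    def store(self, case: TroubleshootingCase) -> None:
        # Called when a troubleshooting episode is completed, so the library
        # "self-populates" as the tool is used.
        self.cases.append(case)

    def retrieve(self, symptoms: set[str], k: int = 3) -> list[TroubleshootingCase]:
        # Rank stored cases by symptom overlap, a stand-in for whatever
        # relevance measure a real system would use.
        ranked = sorted(self.cases, key=lambda c: len(c.symptoms & symptoms), reverse=True)
        return ranked[:k]


library = CaseLibrary()
library.store(TroubleshootingCase(
    symptoms={"no network", "link light off"},
    hypotheses=["cable unplugged", "faulty network card"],
    solution="reseat the network cable"))
for case in library.retrieve({"no network"}):
    print(case.solution)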

The methodology described in this paper is partly based on research in problem-based learning, learning by doing, case-based reasoning, intelligent tutoring systems, and the transition from novice to expert. The tool is currently implemented in the domain of remote computer troubleshooting.

Read the paper:

Structuring On-The-Job Troubleshooting Performance to Aid Learning

by Brian Minsk, Hari Balakrishnan, Ashwin Ram

World Conference on Engineering Education, Minneapolis, MN, October 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-06.pdf

AQUA: Questions that Drive the Explanation Process

Editors’ Introduction:

In the doctoral dissertation from which this chapter is drawn, Ashwin Ram presents an alternative perspective on the processes of story understanding, explanation, and learning. The issues that Ram explores in that dissertation are similar to those that are explored by the other authors in this book, but the angle that Ram takes on these issues is somewhat different. His exploration of these processes is organized around the central theme of question asking. For him, understanding a story means identifying the questions that the story raises and the questions that it answers.

Question asking also serves as a lens through which each of the sub-processes of explanation is viewed: the retrieval of stored explanations, for instance, is driven by a library of what Ram calls “XP retrieval questions”; likewise, evaluation is driven by another set of questions, called “hypothesis verification questions”.
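
To make the question-driven framing concrete, here is a rough Python sketch of sub-processes that are each driven by their own questions: a retrieval question that asks memory for stored explanations, and verification questions that ask whether known facts support a candidate. The data structures, question wordings, and toy lookup are invented for the sketch and are not drawn from AQUA itself.

from dataclasses import dataclass


@dataclass
class Question:
    text: str
    answer: str | None = None


def retrieve_explanations(action: str) -> tuple[Question, list[str]]:
    # An "XP retrieval question" asks memory for stored explanations that
    # might apply to the observed action; the lookup here is a toy dictionary.
    question = Question(f"What stored explanations apply to '{action}'?")
    stored = {"bought a one-way ticket": ["the actor is moving away",
                                          "the actor is fleeing"]}
    candidates = stored.get(action, [])
    question.answer = "; ".join(candidates) if candidates else None
    return question, candidates


def verify_hypothesis(hypothesis: str, known_facts: set[str]) -> Question:
    # A "hypothesis verification question" asks whether what is known about
    # the actor supports the candidate explanation.
    question = Question(f"Do the known facts support '{hypothesis}'?")
    question.answer = "yes" if hypothesis in known_facts else "unknown"
    return question


retrieval_question, candidates = retrieve_explanations("bought a one-way ticket")
print(retrieval_question)
for hypothesis in candidates:
    print(verify_hypothesis(hypothesis, {"the actor is moving away"}))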

The AQUA program, which is Ram’s implementation of this question-based theory of understanding, is a very complex system, probably the most complex among the programs described in this book. AQUA covers a great deal of ground; it implements the entire case-based explanation process in a question-based manner. In this chapter, Ram focuses on the high-level description of the questions the program asks, especially the questions it asks when constructing and evaluating explanations of volitional actions.

Read the paper:

AQUA: Questions that Drive the Explanation Process

by Ashwin Ram

In Inside Case-Based Explanation, R.C. Schank, A. Kass, and C.K. Riesbeck (eds.), 207-261, Lawrence Erlbaum, 1994.
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-93-47.pdf

Indexing, Elaboration and Refinement: Incremental Learning of Explanatory Cases

This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner’s memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good “lessons” to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices.

We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.
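
A minimal sketch of the three operations listed above, under assumed names, might look like the following Python: the memory (a) learns a new explanatory case when none exists, (b) indexes it so it can be retrieved later, and (c) refines it after applying it to a new situation. This illustrates the shape of the idea only, not the program described in the paper.

from dataclasses import dataclass


@dataclass
class ExplanatoryCase:
    explanation: str                   # the causal story the case encodes
    confidence: float = 0.5            # revised as the case is reused


class CaseMemory:
    def __init__(self):
        self._by_index: dict[str, list[ExplanatoryCase]] = {}

    def learn(self, index: str, explanation: str) -> ExplanatoryCase:
        # (a) no adequate case exists, so create one, and (b) index it in memory.
        case = ExplanatoryCase(explanation)
        self._by_index.setdefault(index, []).append(case)
        return case

    def retrieve(self, index: str) -> list[ExplanatoryCase]:
        return self._by_index.get(index, [])

    def refine(self, case: ExplanatoryCase, succeeded: bool) -> None:
        # (c) incrementally revise the case after using it on a new situation.
        case.confidence += 0.1 if succeeded else -0.1


memory = CaseMemory()
case = memory.learn("unexplained-action", "the actor was acting on a hidden goal")
memory.refine(case, succeeded=True)
print(memory.retrieve("unexplained-action")[0])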

Read the paper:

Indexing, Elaboration and Refinement: Incremental Learning of Explanatory Cases

by Ashwin Ram

Machine Learning journal, 10:201-248, 1993
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-03.pdf

A Theory of Questions and Question Asking

This article focuses on knowledge goals, that is, the goals of a reasoner to acquire or reorganize knowledge. Knowledge goals, often expressed as questions, arise when the reasoner’s model of the domain is inadequate in some reasoning situation. This leads the reasoner to focus on the knowledge it needs, to formulate questions to acquire this knowledge, and to learn by pursuing its questions. I develop a theory of questions and of question-asking, motivated both by cognitive and computational considerations, and I discuss the theory in the context of the task of story understanding. I present a computer model of an active reader that learns about novel domains by reading newspaper stories.
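
The following Python sketch illustrates the core notion of a knowledge goal: when the reader’s domain model cannot account for an event, it posts a question, and a later story that answers the question updates the model. The class names, the toy domain model, and the example event are assumptions made for illustration, not the model described in the article.

from dataclasses import dataclass


@dataclass
class KnowledgeGoal:
    question: str                      # what the reasoner wants to find out
    raised_by: str                     # the story event that exposed the gap
    answer: str | None = None


class Reader:
    def __init__(self, domain_model: dict[str, str]):
        self.domain_model = domain_model
        self.knowledge_goals: list[KnowledgeGoal] = []

    def read(self, event: str) -> None:
        if event not in self.domain_model:
            # The model cannot account for the event, so a question is posted.
            self.knowledge_goals.append(
                KnowledgeGoal(question=f"Why did '{event}' happen?", raised_by=event))

    def learn(self, event: str, explanation: str) -> None:
        # Pursuing a question: a later story may answer it and update the model.
        for goal in self.knowledge_goals:
            if goal.raised_by == event and goal.answer is None:
                goal.answer = explanation
                self.domain_model[event] = explanation


reader = Reader(domain_model={})
reader.read("the suspect confessed without being charged")
reader.learn("the suspect confessed without being charged",
             "the confession was made to protect someone else")
print(reader.knowledge_goals[0])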

Read the paper:

A Theory of Questions and Question Asking

by Ashwin Ram

The Journal of the Learning Sciences, 1(3&4):273-318, 1991
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-02.pdf

Decision Models: A Theory of Volitional Explanation

This paper presents a theory of motivational analysis, the construction of volitional explanations to describe the planning behavior of agents. We discuss both the content of such explanations and the process by which an understander builds them. Explanations are constructed from decision models, which describe the planning process that an agent goes through when considering whether to perform an action. Decision models are represented as explanation patterns, which are standard patterns of causality based on previous experiences of the understander. We discuss the nature of explanation patterns, their use in representing decision models, and the process by which they are retrieved, used, and evaluated.
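
As a rough illustration of the retrieve-use-evaluate cycle described above, the Python sketch below represents an explanation pattern as a stored pattern of causality with preconditions, retrieves one for an observed action, and accepts its explanation only if the preconditions are consistent with known facts. All names and the example pattern are invented; this is not the representation used in the paper.

from dataclasses import dataclass


@dataclass
class ExplanationPattern:
    action: str                        # the kind of action the pattern explains
    preconditions: list[str]           # what must hold for the decision to make sense
    explanation: str                   # the decision the pattern attributes to the actor


PATTERNS = [
    ExplanationPattern(
        action="quit job",
        preconditions=["has another offer"],
        explanation="the actor decided the new position was preferable"),
]


def explain(action: str, known_facts: set[str]) -> str | None:
    # Retrieve a pattern for the action, then evaluate it: accept its
    # explanation only if the preconditions are consistent with what is known.
    for pattern in PATTERNS:
        if pattern.action == action and all(p in known_facts for p in pattern.preconditions):
            return pattern.explanation
    return None


print(explain("quit job", {"has another offer"}))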

Read the paper:

Decision Models: A Theory of Volitional Explanation

by Ashwin Ram

Twelfth Annual Conference of the Cognitive Science Society (CogSci-90), Cambridge, MA, July 1990
www.cc.gatech.edu/faculty/ashwin/papers/er-90-03.pdf