Posts Tagged ‘problem solving’

Systematic Evaluation of Design Decisions in Case-Based Reasoning Systems

Two important goals in the evaluation of artificial intelligence systems are to assess the merit of alternative design decisions in the performance of an implemented computer system and to analyze the impact on performance when the system faces problem domains with different characteristics. Achieving these objectives enables us to understand the behavior of the system in terms of the theory and design of the computational model, to select the best system configuration for a given domain, and to predict how the system will behave when the characteristics of the domain or problem change. In addition, for case-based reasoning (CBR) and other machine learning systems, it is important to evaluate the improvement in the performance of the system with experience (or with learning), to show that this improvement is statistically significant, to show that the variability in performance decreases with experience (convergence), and to analyze the impact of the design decisions on this improvement in performance.

We present a methodology for the evaluation of CBR and other AI systems through systematic empirical experimentation over a range of system configurations and environmental conditions, coupled with rigorous statistical analysis of the results of the experiments. We illustrate this methodology with a case study in which we evaluate a multistrategy case-based and reinforcement learning system which performs autonomous robotic navigation. In this case study, we evaluate a range of design decisions that are important in CBR systems, including configuration parameters of the system (e.g., overall size of the case library, size or extent of the individual cases), problem characteristics (e.g., problem difficulty), knowledge representation decisions (e.g., choice of representational primitives or vocabulary), algorithmic decisions (e.g., choice of adaptation method), and amount of prior experience (e.g., learning or training). We show how our methodology can be used to evaluate the impact of these decisions on the performance of the system and, in turn, to make the appropriate choices for a given problem domain and verify that the system does behave as predicted.
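To give a concrete feel for the kind of experimental protocol this methodology calls for, here is a minimal sketch (ours, not the paper's code): sweep one design decision against one problem characteristic, replicate each cell, and test the effect for statistical significance. The grid values and the run_trial function are hypothetical stand-ins; run_trial here just simulates noisy scores so the sketch executes end to end.

from itertools import product
from random import Random
from statistics import mean, stdev
from scipy.stats import f_oneway  # one-way ANOVA across configurations

case_library_sizes = [10, 50, 100]       # design decision under study
problem_difficulties = ["easy", "hard"]  # environmental condition
N_TRIALS = 30                            # replications per experimental cell

def run_trial(library_size, difficulty, seed):
    # Stand-in for one run of the system under a fixed configuration;
    # simulates a noisy performance score so the sketch runs end to end.
    rng = Random(f"{library_size}-{difficulty}-{seed}")
    base = library_size ** 0.5 - (4.0 if difficulty == "hard" else 0.0)
    return base + rng.gauss(0, 2)

results = {}
for size, difficulty in product(case_library_sizes, problem_difficulties):
    scores = [run_trial(size, difficulty, s) for s in range(N_TRIALS)]
    results[(size, difficulty)] = scores
    print(size, difficulty, round(mean(scores), 2), round(stdev(scores), 2))

# Does library size significantly affect performance on hard problems?
groups = [results[(size, "hard")] for size in case_library_sizes]
f_stat, p_value = f_oneway(*groups)
print(f"ANOVA across library sizes: F={f_stat:.2f}, p={p_value:.4f}")

In a real evaluation, run_trial would invoke the actual system, and the same loop extends naturally to the other factors discussed above, such as case size, representational vocabulary, adaptation method, and amount of training.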

Read the paper:

Systematic Evaluation of Design Decisions in Case-Based Reasoning Systems

by Juan Carlos Santamaria, Ashwin Ram

In Case-Based Reasoning: Experiences, Lessons, and Future Directions, D.B. Leake, editor, AAAI Press, 1996
www.cc.gatech.edu/faculty/ashwin/papers/er-96-05.pdf

Multi-Plan Retrieval and Adaptation in an Experience-Based Agent

The real world has many properties that present challenges for the design of intelligent agents: it is dynamic, unpredictable, and independent; it poses poorly structured problems; and it places bounds on the resources available to agents. Agents that operate in such worlds need a wide range of capabilities to deal with them: memory, situation analysis, situativity, resource-bounded cognition, and opportunism.

We propose a theory of experience-based agency which specifies how an agent with the ability to richly represent and store its experiences could remember those experiences with a context-sensitive, asynchronous memory, incorporate those experiences into its reasoning on demand with integration mechanisms, and usefully direct memory and reasoning through the use of a utility-based metacontroller. We have implemented this theory in an architecture called NICOLE and have used it to address the problem of merging multiple plans during the course of case-based adaptation in least-commitment planning.
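As a toy illustration of the asynchronous-memory and metacontrol ideas (our sketch, not NICOLE itself; names such as Experience, request, and baseline_utility are assumptions for illustration): retrieval requests are posted without blocking, remindings surface whenever the evolving context matches stored cues, and a utility estimate decides whether a reminding is worth integrating into ongoing reasoning.

from dataclasses import dataclass

@dataclass
class Experience:
    cues: set    # features of the situation in which this experience arose
    plan: str    # the plan fragment the experience suggests

class AsynchronousMemory:
    # Retrieval requests are posted without blocking; matches surface later,
    # whenever the evolving context happens to touch the right cues.
    def __init__(self, store):
        self.store = store
        self.pending = set()

    def request(self, cues):
        self.pending |= set(cues)   # register interest and return immediately

    def poll(self, context):
        watched = self.pending | set(context)
        return [e for e in self.store if e.cues & watched]

def metacontrol_step(memory, context, baseline_utility=1):
    # Utility-based metacontrol: integrate a remembered experience only if
    # its estimated relevance beats continuing the current line of reasoning.
    candidates = memory.poll(context)
    if candidates:
        best = max(candidates, key=lambda e: len(e.cues & set(context)))
        if len(best.cues & set(context)) > baseline_utility:
            return ("integrate", best.plan)
    return ("keep-reasoning", None)

memory = AsynchronousMemory([
    Experience({"door", "locked"}, "use-key-plan"),
    Experience({"window", "open"}, "climb-in-plan"),
])
memory.request({"locked"})                           # posted early, answered later
print(metacontrol_step(memory, {"door", "locked"}))  # ('integrate', 'use-key-plan')

The design choice mirrored here is that memory never stalls the reasoner: remembering runs alongside reasoning, and the metacontroller arbitrates by expected utility.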

Read the paper:

Multi-Plan Retrieval and Adaptation in an Experience-Based Agent

by Ashwin Ram, Anthony Francis

In Case-Based Reasoning: Experiences, Lessons, and Future Directions, D.B. Leake, editor, AAAI Press, 1996
www.cc.gatech.edu/faculty/ashwin/papers/er-96-06.pdf

Structuring On-The-Job Troubleshooting Performance to Aid Learning

This paper describes a methodology for aiding the learning of troubleshooting tasks in the course of an engineer’s work. The approach supports learning in the context of actual, on-the-job troubleshooting and, at the same time, supports performance of the troubleshooting task itself. This approach has been implemented in a computer tool called WALTS (Workspace for Aiding and Learning Troubleshooting).

This method aids learning by helping the learner structure his or her task into the conceptual components necessary for troubleshooting, giving advice about how to proceed, suggesting candidate hypotheses and solutions, and automatically retrieving cognitively relevant media. WALTS includes three major components: a structured dynamic workspace for representing knowledge about the troubleshooting process and the device being diagnosed; an intelligent agent that facilitates the troubleshooting process by offering advice; and an intelligent media retrieval tool that automatically presents candidate hypotheses and solutions, relevant cases, and various other media. WALTS creates resources for future learning and aiding of troubleshooting by storing completed troubleshooting instances in a self-populating database of troubleshooting cases.
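To give a concrete feel for the self-populating case database, here is a minimal sketch under assumed field names (ours, not the WALTS implementation): each completed episode is stored as a structured case and later retrieved by symptom overlap.

from dataclasses import dataclass

@dataclass
class TroubleshootingCase:
    device: str
    symptoms: set      # observed behavior of the malfunctioning device
    hypotheses: list   # candidate faults considered during the episode
    solution: str      # what actually resolved the problem

class CaseDatabase:
    def __init__(self):
        self.cases = []

    def add(self, case):
        # Storing each completed episode is what makes the database
        # self-populating: today's fix becomes tomorrow's retrieved case.
        self.cases.append(case)

    def retrieve(self, symptoms, k=3):
        # Rank stored cases by symptom overlap and return the top k matches.
        ranked = sorted(self.cases,
                        key=lambda c: len(c.symptoms & symptoms),
                        reverse=True)
        return [c for c in ranked[:k] if c.symptoms & symptoms]

db = CaseDatabase()
db.add(TroubleshootingCase("workstation", {"no-boot", "fan-spins"},
                           ["dead power supply", "unseated RAM"],
                           "reseat RAM"))
for case in db.retrieve({"no-boot", "beeps"}):
    print(case.solution)   # suggested by a similar past episode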

The methodology described in this paper is partly based on research in problem-based learning, learning by doing, case-based reasoning, intelligent tutoring systems, and the transition from novice to expert. The tool is currently implemented in the domain of remote computer troubleshooting.

Read the paper:

Structuring On-The-Job Troubleshooting Performance to Aid Learning

by Brian Minsk, Hari Balakrishnan, Ashwin Ram

World Conference on Engineering Education, Minneapolis, MN, October 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-06.pdf

Goal-Driven Learning in Multistrategy Reasoning and Learning Systems

This chapter presents a computational model of introspective multistrategy learning, which is a deliberative or strategic learning process in which a reasoner introspects about its own performance to decide what to learn and how to learn it. The reasoner introspects about its own performance on a reasoning task, assigns credit or blame for its performance, identifies what it needs to learn to improve its performance, formulates learning goals to acquire the required knowledge, and pursues its learning goals using multiple learning strategies. Our theory models a process of learning that is active, experiential, opportunistic, diverse, and introspective. This chapter also describes two computer systems that implement our theory, one that learns diagnostic knowledge during a troubleshooting task and one that learns multiple kinds of causal and explanatory knowledge during a story understanding task.
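The loop described above can be sketched schematically as follows. This is our illustration under assumed names (Strategy, blamed_knowledge, and the two example strategies are placeholders), not the chapter's actual systems: failures in a reasoning trace are blamed on knowledge deficits, each deficit becomes a learning goal, and a strategy capable of achieving that goal is dispatched.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Strategy:
    name: str
    applicable: Callable   # can this strategy achieve a given learning goal?
    learn: Callable        # produce new knowledge for that goal

def introspective_cycle(trace, knowledge, strategies):
    # One pass over a reasoning trace: blame failures on knowledge deficits,
    # turn each deficit into a learning goal, and dispatch a strategy for it.
    for step in trace:
        if step["succeeded"]:
            continue
        goal = {"acquire": step["blamed_knowledge"]}   # credit/blame -> goal
        for strategy in strategies:
            if strategy.applicable(goal):
                knowledge.update(strategy.learn(goal, step))
                break
    return knowledge

strategies = [
    Strategy("explanation-based",
             applicable=lambda g: g["acquire"] == "causal-rule",
             learn=lambda g, step: {"causal-rule": "explains " + step["task"]}),
    Strategy("case-storage",   # fallback: simply remember the episode
             applicable=lambda g: True,
             learn=lambda g, step: {"case:" + step["task"]: step}),
]

trace = [{"task": "diagnose-board-7", "succeeded": False,
          "blamed_knowledge": "causal-rule"}]
print(introspective_cycle(trace, {}, strategies))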

Read the paper:

Goal-Driven Learning in Multistrategy Reasoning and Learning Systems

by Ashwin Ram, Mike Cox, S. Narayanan

In A. Ram & D. Leake (eds.), Goal-Driven Learning, chapter 18, MIT Press/Bradford Books, 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-04.pdf

Learning to Troubleshoot: Multistrategy Learning of Diagnostic Knowledge for a Real-World Problem Solving Task

This article presents a computational model of the learning of diagnostic knowledge, based on observations of human operators engaged in a real-world troubleshooting task. We present a model of problem solving and learning in which the reasoner introspects about its own performance on the problem-solving task, identifies what it needs to learn to improve its performance, formulates learning goals to acquire the required knowledge, and pursues its learning goals using multiple learning strategies. The model is implemented in a computer system and is grounded in a case study based on observations of troubleshooting operators and protocol analysis of data gathered in the test area of an operational electronics manufacturing plant. The model is intended as a computational model of human learning; in addition, it is computationally justified as a uniform, extensible framework for multistrategy learning.

Read the paper:

Learning to Troubleshoot: Multistrategy Learning of Diagnostic Knowledge for a Real-World Problem Solving Task

by Ashwin Ram, S. Narayanan, Mike Cox

Cognitive Science, 19(3):289–340, 1995
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-93-67.pdf

The Utility Problem in Case-Based Reasoning

Case-based reasoning systems may suffer from the utility problem, which occurs when knowledge learned in an attempt to improve a system’s performance degrades performance instead. One of the primary causes of the utility problem is the slowdown of conventional memories as the number of stored items increases. Unrestricted learning algorithms can swamp their memory systems, producing a slowdown that outweighs the average speedup provided by individual learned rules.

Massive parallelism is often offered as a solution to this problem. However, most theoretical parallel models indicate that parallel solutions to the utility problem fail to scale up to large problem sizes, and hardware implementations across a wide class of machines and technologies back up these predictions.

Short of an ideal concurrent-write parallel random-access machine, the solution to the utility problem lies in coping strategies, such as restricting learning to extremely high-utility items or restricting the amount of memory searched. Case-based reasoning provides an excellent framework for implementing and testing a wide range of methods and policies for coping with the utility problem.
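One such coping strategy, selective retention, reduces to a simple cost-benefit test. The sketch below is our illustration with a deliberately crude cost model (all parameter names and values are assumptions): a learned case is worth keeping only if its expected per-query benefit exceeds the marginal search cost it adds to every future lookup.

def worth_retaining(avg_time_saved, reuse_rate, per_item_search_cost):
    # Expected per-query benefit of keeping the item must exceed the marginal
    # search cost it imposes on every future query, hit or miss.
    return avg_time_saved * reuse_rate > per_item_search_cost

# A case that saves 50 ms when it fires, fires on 2% of queries, and adds
# 0.5 ms to every memory search: 1.0 ms expected benefit vs. 0.5 ms cost.
print(worth_retaining(avg_time_saved=50.0, reuse_rate=0.02,
                      per_item_search_cost=0.5))   # True: retain it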

Read the paper:

The Utility Problem in Case-Based Reasoning

by Anthony Francis, Ashwin Ram

AAAI-93 Workshop on Case-Based Reasoning, Washington, DC, July 1993
www.cc.gatech.edu/faculty/ashwin/papers/er-93-08.pdf

Creative Conceptual Change

Creative conceptual change involves (a) the construction of new concepts and of coherent belief systems, or theories, relating these concepts, and (b) the modification and extrapolation of existing concepts and theories in novel situations. The first kind of process involves reformulating perceptual, sensorimotor, or other low-level information into higher-level abstractions. The second kind of process involves a temporary suspension of disbelief and the extension or adaptation of existing concepts to create a conceptual model of a new situation which may be very different from previous real-world experience.

We discuss these and other types of conceptual change, and present computational models of constructive and extrapolative processes in creative conceptual change. The models have been implemented as computer programs in two very different “everyday” task domains: (a) SINS is an autonomous robotic navigation system that learns to navigate in an obstacle-ridden world by constructing sensorimotor concepts that represent navigational strategies, and (b) ISAAC is a natural language understanding system that reads short stories in the science fiction genre, a task that requires a deep understanding of concepts that may be very different from those the system is already familiar with.

Read the paper:

Creative Conceptual Change

by Ashwin Ram, Kenneth Moorman, Juan Carlos Santamaria

Invited talk at the 15th Annual Conference of the Cognitive Science Society, Boulder, CO, June 1993. Long version published as Technical Report GIT-CC-96/07, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1996.
www.cc.gatech.edu/faculty/ashwin/papers/er-93-04.pdf