Posts Tagged ‘case-based reasoning’

Multi-Plan Retrieval and Adaptation in an Experience-Based Agent

The real world has many properties that present challenges for the design of intelligent agents: it is dynamic, unpredictable, and independent, poses poorly structured problems, and places bounds on the resources available to agents. Agents that operate in real worlds need a wide range of capabilities to deal with them: memory, situation analysis, situativity, resource-bounded cognition, and opportunism.

We propose a theory of experience-based agency which specifies how an agent with the ability to richly represent and store its experiences could remember those experiences with a context-sensitive, asynchronous memory, incorporate those experiences into its reasoning on demand with integration mechanisms, and usefully direct memory and reasoning through the use of a utility-based metacontroller. We have implemented this theory in an architecture called NICOLE and have used it to address the problem of merging multiple plans during the course of case-based adaptation in least-commitment planning.

Read the paper:

Multi-Plan Retrieval and Adaptation in an Experience-Based Agent

by Ashwin Ram, Anthony Francis

In Case-Based Reasoning: Experiences, Lessons, and Future Directions, D.B. Leake, editor, AAAI Press, 1996
www.cc.gatech.edu/faculty/ashwin/papers/er-96-06.pdf

The Role of Ontology in Creative Understanding

Successful creative understanding requires that a reasoner be able to manipulate known concepts in order to understand novel ones. A major problem arises, however, when one considers exactly how these manipulations are to be bounded. If a bound is imposed which is too loose, the reasoner is likely to create bizarre understandings rather than useful creative ones. On the other hand, if the bound is too tight, the reasoner will not have the flexibility needed to deal with a wide range of creative understanding experiences. Our approach is to make use of a principled ontology as one source of reasonable bounding. This allows our creative understanding theory to have good explanatory power about the process while allowing the computer implementation of the theory (the ISAAC system) to be flexible without being bizarre in the task domain of reading science fiction short stories.

Read the paper:

The Role of Ontology in Creative Understanding

by Kenneth Moorman, Ashwin Ram

18th Annual Conference of the Cognitive Science Society (CogSci-96), San Diego, CA, July 1996
www.cc.gatech.edu/faculty/ashwin/papers/er-96-01.pdf

Introspective Multistrategy Learning: Constructing a Learning Strategy under Reasoning Failure

The thesis put forth by this dissertation is that introspective analyses facilitate the construction of learning strategies. Furthermore, learning is much like nonlinear planning and problem solving. Like problem solving, it can be specified by a set of explicit learning goals (i.e., desired changes to the reasoner’s knowledge); these goals can be achieved by constructing a plan from a set of operators (the learning algorithms) that execute in a knowledge space. However, in order to specify learning goals and to avoid negative interactions between operators, a reasoner requires a model of its reasoning processes and knowledge.

With such a model, the reasoner can declaratively represent the events and causal relations of its mental world in the same manner that it represents events and relations in the physical world. This representation enables introspective self-examination, which contributes to learning by providing a basis for identifying what needs to be learned when reasoning fails. A multistrategy system possessing several learning algorithms can decide what to learn, and which algorithm(s) to apply, by analyzing the model of its reasoning. This introspective analysis therefore allows the learner to understand its reasoning failures, to determine the causes of the failures, to identify needed knowledge repairs to avoid such failures in the future, and to build a learning strategy (plan).

Thus, the research goal is to develop both a content theory and a process theory of introspective multistrategy learning and to establish the conditions under which such an approach is fruitful. Empirical experiments provide results that support the claims herein. The theory was implemented in a computational model called Meta-AQUA that attempts to understand simple stories. The system uses case-based reasoning to explain reasoning failures and to generate sets of learning goals, and it uses a standard non-linear planner to achieve these goals.
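The learning-as-planning idea can be sketched in miniature (the operator names and knowledge-state predicates here are invented for illustration, not Meta-AQUA's actual vocabulary): learning goals are desired states of the knowledge base, learning algorithms are operators with preconditions and effects over that knowledge space, and a simple backward-chainer orders the operators so that each one's preconditions are met before it runs:

```python
# Knowledge space: predicates over the reasoner's background knowledge.
KB = {"failure-explained": True,
      "explanation-indexed": False,
      "concept-generalized": False}

# Learning operators: (name, preconditions, effects). Names are illustrative.
OPERATORS = [
    ("index-new-explanation", {"failure-explained": True},
     {"explanation-indexed": True}),
    ("generalize-concept", {"explanation-indexed": True},
     {"concept-generalized": True}),
]

def plan(goals, kb):
    """Greedy backward chaining over learning operators; returns an
    ordered learning strategy (a list of operator names)."""
    strategy, state = [], dict(kb)
    open_goals = [g for g in goals if not state.get(g)]
    while open_goals:
        g = open_goals.pop(0)
        op = next((o for o in OPERATORS if o[2].get(g)), None)
        if op is None:
            raise ValueError(f"no operator achieves {g}")
        name, pre, eff = op
        # Achieve unmet preconditions first; this ordering is what
        # prevents one operator from clobbering another's prerequisites.
        unmet = [p for p, v in pre.items() if state.get(p) != v]
        if unmet:
            open_goals = unmet + [g] + open_goals
            continue
        strategy.append(name)
        state.update(eff)
        open_goals = [x for x in open_goals if not state.get(x)]
    return strategy

print(plan(["concept-generalized"], KB))
# -> ['index-new-explanation', 'generalize-concept']
```

The point of the sketch is the analogy itself: the planner reasons about states of the learner's knowledge exactly as a classical planner reasons about states of the world.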

Evaluating Meta-AQUA with and without learning goals produced results indicating that computational introspection facilitates the learning process. In particular, the results lead to the conclusion that the stage that posts learning goals is necessary if negative interactions between learning methods are to be avoided and if learning is to remain effective.

Read the thesis:

Introspective multistrategy learning: Constructing a learning strategy under reasoning failure

by Michael T. Cox

PhD Thesis, Technical Report GIT-CC-96/06, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1996
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-96-06.pdf

Learning Adaptive Reactive Agents

An autonomous agent is an intelligent system that has an ongoing interaction with a dynamic external world. It can perceive and act on the world through a set of limited sensors and effectors. Its most important characteristic is that it is forced to make decisions sequentially, one after another, during its entire “life”. The main objective of this dissertation is to study algorithms by which autonomous agents can learn, using their own experience, to perform sequential decision-making efficiently and autonomously. The dissertation describes a framework for studying autonomous sequential decision-making consisting of three main elements: the agent, the environment, and the task. The agent attempts to control the environment by perceiving the environment and choosing actions in a sequential fashion. The environment is a dynamic system characterized by a state and its dynamics, a function that describes the evolution of the state given the agent’s actions. A task is a declarative description of the desired behavior the agent should exhibit as it interacts with the environment. The ultimate goal of the agent is to learn a policy or strategy for selecting actions that maximizes its expected benefit as defined by the task.

The dissertation focuses on sequential decision-making when the environment is characterized by continuous states and actions, and the agent has imperfect perception, incomplete knowledge, and limited computational resources. The main characteristic of the approach proposed in this dissertation is that the agent uses its previous experiences to improve estimates of the long-term benefit associated with the execution of specific actions. The agent uses these estimates to evaluate how desirable it is to execute alternative actions and to select the one that best balances the short- and long-term consequences, taking special consideration of the expected benefit associated with actions that accomplish new learning while making progress on the task.

The approach is based on novel methods that are specifically designed to address the problems associated with continuous domains, imperfect perception, incomplete knowledge, and limited computational resources. The approach is implemented using case-based techniques and extensively evaluated in simulated and real systems including autonomous mobile robots, pendulum swinging and balancing controllers, and other non-linear dynamic system controllers.
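One simple way to picture case-based value estimation in a continuous state-action space (a sketch under assumed details, with invented function names; the dissertation's actual methods are more sophisticated) is to store experienced (state, action, long-term-benefit) cases and estimate the benefit of a candidate action by distance-weighted averaging over the nearest stored cases:

```python
import math

cases = []  # each case: (state, action, value), with state and action as floats

def remember(state, action, value):
    """Store one experience: the long-term benefit observed for this
    state-action pair."""
    cases.append((state, action, value))

def estimate(state, action, k=3):
    """Distance-weighted k-nearest-neighbour estimate of long-term benefit,
    interpolating between stored cases in the continuous space."""
    if not cases:
        return 0.0
    by_dist = sorted(cases, key=lambda c: (c[0] - state) ** 2 + (c[1] - action) ** 2)
    total_w = total = 0.0
    for s, a, v in by_dist[:k]:
        w = 1.0 / (1e-6 + math.hypot(s - state, a - action))
        total_w += w
        total += w * v
    return total / total_w

def choose_action(state, candidates):
    """Pick the candidate action with the highest estimated benefit."""
    return max(candidates, key=lambda a: estimate(state, a))

remember(0.0, -1.0, 0.2)   # e.g. pushing left from upright was mediocre
remember(0.0, +1.0, 0.9)   # pushing right from upright paid off
remember(0.5, +1.0, 0.4)

print(choose_action(0.05, [-1.0, 0.0, +1.0]))  # -> 1.0
```

Interpolating over stored cases is one way to cope with continuous states and actions: the agent never needs a discrete table, only experiences near the query.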

Read the thesis:

Learning Adaptive Reactive Agents

by Juan Carlos Santamaria

PhD Thesis, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1996
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-97-08.ps.Z

Structuring On-The-Job Troubleshooting Performance to Aid Learning

This paper describes a methodology for aiding the learning of troubleshooting tasks in the course of an engineer’s work. The approach supports learning in the context of actual, on-the-job troubleshooting and, in addition, supports performance of the troubleshooting task in tandem. This approach has been implemented in a computer tool called WALTS (Workspace for Aiding and Learning Troubleshooting).

This method aids learning by helping the learner structure his or her task into the conceptual components necessary for troubleshooting, giving advice about how to proceed, suggesting candidate hypotheses and solutions, and automatically retrieving cognitively relevant media. WALTS includes three major components: a structured dynamic workspace for representing knowledge about the troubleshooting process and the device being diagnosed; an intelligent agent that facilitates the troubleshooting process by offering advice; and an intelligent media retrieval tool that automatically presents candidate hypotheses and solutions, relevant cases, and various other media. WALTS creates resources for future learning and aiding of troubleshooting by storing completed troubleshooting instances in a self-populating database of troubleshooting cases.

The methodology described in this paper is partly based on research in problem-based learning, learning by doing, case-based reasoning, intelligent tutoring systems, and the transition from novice to expert. The tool is currently implemented in the domain of remote computer troubleshooting.

Read the paper:

Structuring On-The-Job Troubleshooting Performance to Aid Learning

by Brian Minsk, Hari Balakrishnan, Ashwin Ram

World Conference on Engineering Education, Minneapolis, MN, October 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-06.pdf

A Comparative Utility Analysis of Case-Based Reasoning and Control-Rule Learning Systems

The utility problem in learning systems occurs when knowledge learned in an attempt to improve a system’s performance degrades performance instead. We present a methodology for the analysis of utility problems which uses computational models of problem solving systems to isolate the root causes of a utility problem, to detect the threshold conditions under which the problem will arise, and to design strategies to eliminate it. We present models of case-based reasoning and control-rule learning systems and compare their performance with respect to the swamping utility problem. Our analysis suggests that case-based reasoning systems are more resistant to the utility problem than control-rule learning systems.
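A toy cost model makes the swamping effect concrete (the function and all parameter values are illustrative assumptions, not the paper's models): each stored item adds linear retrieval cost, while the savings from reusing learned knowledge show diminishing returns, so the expected time to solve a problem falls at first and then rises as memory is swamped:

```python
def solve_time(n, base=100.0, r=0.05, s=40.0, hit=0.9):
    """Expected time to solve one problem with n learned items in memory.
    Retrieval cost grows linearly with n; savings saturate as coverage
    of the problem space stops improving."""
    retrieval = r * n                       # linear memory search cost
    savings = s * hit * (1 - 1 / (1 + n))   # diminishing returns
    return base + retrieval - savings

best_n = min(range(2001), key=solve_time)
print(best_n)                            # -> 26: the interior optimum
print(solve_time(2000) > solve_time(0))  # -> True: unrestricted learning
                                         #    eventually hurts performance
```

The threshold behavior the paper analyzes corresponds to the point where the marginal retrieval cost of one more item exceeds its marginal savings; past that point, learning degrades rather than improves performance.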

Read the paper:

A Comparative Utility Analysis of Case-Based Reasoning and Control-Rule Learning Systems

by Anthony Francis, Ashwin Ram

8th European Conference on Machine Learning (ECML-95), Crete, Greece, April 1995
www.cc.gatech.edu/faculty/ashwin/papers/er-95-02.pdf

Understanding the Creative Mind

Margaret Boden, a master at bringing ideas from artificial intelligence and cognitive science to the masses, has done it again. In The Creative Mind: Myths and Mechanisms (published by Routledge, 2003), she has produced a well-written, well-argued review and synthesis of current computational theories relevant to creativity. This book seems appropriately pitched for students in survey courses and for the intelligent lay public. And if ever there were a topic suitable for bridging the gap between researchers and the layperson, this is surely it: What is creativity, and how is it possible? Or, in computational terms (the terms that Boden argues ought to be applied), what are the processes of creativity?

We believe that in order to analyze creative reasoning, one needs a theoretical framework in which to model thinking. To this end, we propose using a computational approach rooted in case-based reasoning. This paradigm is fundamentally concerned with memory issues, such as remindings from partial matches at varying levels of representation and the formation of analogical maps between seemingly disparate situations—exactly the kinds of phenomena that researchers up to, and including, Boden have highlighted as central to creativity.

Our research suggests that creativity is not a process in itself that can be turned on or off; rather, it arises from the confluence and complex interaction of inferences using multiple kinds of knowledge in the context of a task or problem and in the context of a specific situation. Much of what we think of as “creativity” arises from interesting strategic control of these inferences and their integration in the context of a task and situation.

These five aspects—inferences, knowledge, task, situation, and control—are not special or unique to creativity but are part of normal everyday thinking. They determine the thinkable, the thoughts the reasoner might normally have when addressing a problem or performing a task. In a specific individual, more creative thoughts will likely result when these pieces come together in a novel way to yield unexplored and unexpected paths that go “beyond the thinkable”.

Read the full review:

Understanding the Creative Mind

by Ashwin Ram, Linda Wills, Eric Domeshek, Nancy Nersessian, Janet Kolodner

Artificial Intelligence journal, 79(1):111-128, 1995
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-94-13.pdf

Interacting Learning-Goals: Treating Learning as a Planning Task

This research examines the metaphor of goal-driven planning as a tool for performing the integration of multiple learning algorithms. In case-based reasoning systems, several learning techniques may apply to a given situation. In a failure-driven learning environment, the problems of strategy construction are to choose and order the best set of learning algorithms or strategies that recover from a processing failure and to use those strategies to modify the system’s background knowledge so that the failure will not be repeated in similar future situations.

A solution to this problem is to treat learning-strategy construction as a planning problem with its own set of goals. Learning goals, as opposed to ordinary goals, specify desired states in the background knowledge of the learner, rather than desired states in the external environment of the planner. But as with traditional goal-based planners, management and pursuit of these learning goals becomes a central issue in learning. Example interactions of learning-goals are presented from a multistrategy learning system called Meta-AQUA that combines a case-based approach to learning with nonlinear planning to achieve goals in a knowledge space.

Read the paper:

Interacting Learning-Goals: Treating Learning as a Planning Task

by Mike Cox, Ashwin Ram

In J.-P. Haton, M. Keane, & M. Manago (editors), Advances in Case-Based Reasoning (Lecture Notes in Artificial Intelligence), 60-74, Springer-Verlag, 1995. Earlier version presented at the Second European Workshop on Case-Based Reasoning (EWCBR-94), Chantilly, France, 1994.
www.cc.gatech.edu/faculty/ashwin/papers/er-95-09.ps

AQUA: Questions that Drive the Explanation Process

Editors’ Introduction:

In the doctoral dissertation from which this chapter is drawn, Ashwin Ram presents an alternative perspective on the processes of story understanding, explanation, and learning. The issues that Ram explores in that dissertation are similar to those that are explored by the other authors in this book, but the angle that Ram takes on these issues is somewhat different. His exploration of these processes is organized around the central theme of question asking. For him, understanding a story means identifying questions that the story raises, and questions that it answers.

Question asking also serves as a lens through which each of the sub-processes of understanding is viewed: the retrieval of stored explanations, for instance, is driven by a library of what Ram calls “XP retrieval questions”; likewise, evaluation is driven by another set of questions, called “hypothesis verification questions”.

The AQUA program, which is Ram’s implementation of this question-based theory of understanding, is a very complex system, probably the most complex among the programs described in this book. AQUA covers a great deal of ground; it implements the entire case-based explanation process in a question-based manner. In this chapter, Ram focuses on a high-level description of the questions the program asks, especially the questions it asks when constructing and evaluating explanations of volitional actions.

Read the paper:

AQUA: Questions that Drive the Explanation Process

by Ashwin Ram

In Inside Case-Based Explanation, R.C. Schank, A. Kass, and C.K. Riesbeck (eds.), 207-261, Lawrence Erlbaum, 1994.
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-93-47.pdf

The Utility Problem in Case-Based Reasoning

Case-based reasoning systems may suffer from the utility problem, which occurs when knowledge learned in an attempt to improve a system’s performance degrades performance instead. One of the primary causes of the utility problem is the slowdown of conventional memories as the number of stored items increases. Unrestricted learning algorithms can swamp their memory system, causing the system to slow down more than the average speedup provided by individual learned rules.

Massive parallelism is often offered as a solution to this problem. However, most theoretical parallel models indicate that parallel solutions to the utility problem fail to scale up to large problem sizes, and hardware implementations across a wide class of machines and technologies back up these predictions.

Failing the creation of an ideal concurrent-write parallel random access machine, the only solution to the utility problem lies in a number of coping strategies, such as restricting learning to extremely high utility items or restricting the amount of memory searched. Case-based reasoning provides an excellent framework for the implementation and testing of a wide range of methods and policies for coping with the utility problem.
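One of the coping strategies mentioned above, restricting learning to extremely high utility items, can be sketched as a retention filter (the utility formula, constants, and function names are illustrative assumptions, not taken from the paper):

```python
RETRIEVAL_COST_PER_CASE = 0.5   # cost each retained case adds to every search
EXPECTED_USES = 10              # future problems over which savings amortize

def utility(savings_per_use, hit_rate):
    """Net expected benefit of keeping a case: amortized adaptation
    savings minus the retrieval cost it imposes on every future lookup."""
    benefit = savings_per_use * hit_rate * EXPECTED_USES
    cost = RETRIEVAL_COST_PER_CASE * EXPECTED_USES
    return benefit - cost

def retain(case_library, candidate, savings, hit_rate, threshold=0.0):
    """Store the case only if its estimated utility clears the threshold."""
    if utility(savings, hit_rate) > threshold:
        case_library.append(candidate)
        return True
    return False

library = []
print(retain(library, "case-1", savings=4.0, hit_rate=0.5))  # True:  20 - 5 > 0
print(retain(library, "case-2", savings=1.0, hit_rate=0.2))  # False:  2 - 5 < 0
```

Case-based reasoning makes this kind of policy easy to test because retention decisions are explicit and per-case, unlike learned control rules that interact globally.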

Read the paper:

The Utility Problem in Case-Based Reasoning

by Anthony Francis, Ashwin Ram

AAAI-93 Workshop on Case-Based Reasoning, Washington, DC, July 1993
www.cc.gatech.edu/faculty/ashwin/papers/er-93-08.pdf