Archive for the ‘Learning’ Category

Interacting Learning-Goals: Treating Learning as a Planning Task

This research examines the metaphor of goal-driven planning as a tool for integrating multiple learning algorithms. In case-based reasoning systems, several learning techniques may apply to a given situation. In a failure-driven learning environment, the problem of strategy construction is to choose and order the set of learning algorithms or strategies best suited to recovering from a processing failure, and to use those strategies to modify the system’s background knowledge so that the failure will not be repeated in similar future situations.

A solution to this problem is to treat learning-strategy construction as a planning problem with its own set of goals. Learning goals, as opposed to ordinary goals, specify desired states in the background knowledge of the learner rather than desired states in the external environment of the planner. But as with traditional goal-based planners, the management and pursuit of these learning goals become a central issue in learning. Example interactions of learning goals are presented from a multistrategy learning system called Meta-AQUA that combines a case-based approach to learning with nonlinear planning to achieve goals in a knowledge space.
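
To make the learning-goal idea concrete, here is a rough Python sketch that treats learning strategies as planning operators over a knowledge space. All names are illustrative; this is a toy stand-in for Meta-AQUA’s nonlinear planner, not its implementation.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class LearningGoal:
        """A desired state of the background knowledge, e.g. 'index case X under cue Y'."""
        description: str

    @dataclass
    class LearningStrategy:
        """A planning operator over knowledge states rather than world states."""
        name: str
        achieves: set = field(default_factory=set)  # learning goals this strategy satisfies
        requires: set = field(default_factory=set)  # goals that must already hold

    def plan_learning(goals, strategies):
        """Greedily choose and order strategies so each step's prerequisites are
        met by earlier steps -- a stand-in for full nonlinear planning."""
        plan, satisfied, pending = [], set(), set(goals)
        while pending:
            step = next((s for s in strategies
                         if s.achieves & pending and s.requires <= satisfied), None)
            if step is None:
                raise ValueError(f"no strategy achieves {pending}")
            plan.append(step.name)
            satisfied |= step.achieves
            pending -= step.achieves
        return plan

    # Interaction example: generalizing a faulty concept only makes sense
    # after the offending case has been re-indexed.
    g_index = LearningGoal("re-index case under new cue")
    g_gen = LearningGoal("generalize faulty concept")
    strategies = [LearningStrategy("index_learning", achieves={g_index}),
                  LearningStrategy("abstraction", achieves={g_gen}, requires={g_index})]
    print(plan_learning({g_index, g_gen}, strategies))  # -> ['index_learning', 'abstraction']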

Read the paper:

Interacting Learning-Goals: Treating Learning as a Planning Task

by Mike Cox, Ashwin Ram

In J.-P. Haton, M. Keane, & M. Manago (editors), Advances in Case-Based Reasoning (Lecture Notes in Artificial Intelligence), 60-74, Springer-Verlag, 1995. Earlier version presented at the Second European Workshop on Case-Based Reasoning (EWCBR-94), Chantilly, France, 1994.
www.cc.gatech.edu/faculty/ashwin/papers/er-95-09.ps

Failure-Driven Learning as Input Bias

Self-selection of input examples on the basis of performance failure is a powerful bias for learning systems. The definition of what constitutes a learning bias, however, has typically been restricted to bias provided by the input language, hypothesis language, and preference criteria between competing concept hypotheses. But if bias is taken in the broader context as any basis that provides a preference for one concept change over another, then the paradigm of failure-driven processing indeed provides a bias. Bias is exhibited by the selection from an input stream of those examples that are examples of failure; successful performance is filtered out. We show that there are fewer degrees of freedom in failure-driven learning than in success-driven learning, and that learning is facilitated because of this constraint. We also broaden the definition of failure, provide a novel taxonomy of failure causes, and illustrate the interaction of both in a multistrategy learning system called Meta-AQUA.
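
A minimal Python sketch of failure as an input bias (illustrative names, not the Meta-AQUA code): the learner attempts concept change only on examples where its own prediction fails, so successful performance never reaches the learning algorithms.

    def failure_driven_updates(stream, predict, learn):
        """Yield only the examples that trigger learning: those on which the
        current theory's prediction disagrees with the observed outcome."""
        for example, outcome in stream:
            if predict(example) != outcome:  # performance failure detected
                learn(example, outcome)      # a concept change is attempted
                yield example, outcome       # successes are filtered out

    # Usage: a trivial theory that expects every action to go smoothly.
    theory = {}
    stream = [("open-door", "ok"), ("pour-coffee", "spill"), ("read-paper", "ok")]
    failures = list(failure_driven_updates(
        stream,
        predict=lambda ex: theory.get(ex, "ok"),
        learn=lambda ex, out: theory.__setitem__(ex, out)))
    print(failures)  # -> [('pour-coffee', 'spill')]; the two successes were ignored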

Read the paper:

Failure-Driven Learning as Input Bias

by Mike Cox, Ashwin Ram

Sixteenth Annual Conference of the Cognitive Science Society, Atlanta, GA, August 1994.
www.cc.gatech.edu/faculty/ashwin/papers/er-94-09.pdf

AQUA: Questions that Drive the Explanation Process

Editors’ Introduction:

In the doctoral dissertation from which this chapter is drawn, Ashwin Ram presents an alternative perspective on the processes of story understanding, explanation, and learning. The issues that Ram explores in that dissertation are similar to those that are explored by the other authors in this book, but the angle that Ram takes on these issues is somewhat different. His exploration of these processes is organized around the central theme of question asking. For him, understanding a story means identifying the questions that the story raises and the questions that it answers.

Question asking also serves as a lens through which each of the sub-processes of explanation is viewed: the retrieval of stored explanations, for instance, is driven by a library of what Ram calls “XP retrieval questions”; likewise, evaluation is driven by another set of questions, called “hypothesis verification questions”.

The AQUA program, which is Ram’s implementation of this question-based theory of understanding, is a very complex system, probably the most complex among the programs described in this book. AQUA covers a great deal of ground; it implements the entire case-based explanation process in a question-based manner. In this chapter, Ram focuses on a high-level description of the questions the program asks, especially the questions it asks when constructing and evaluating explanations of volitional actions.
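
The control structure the editors describe can be caricatured in a few lines of Python: understanding maintains an agenda of open questions, and answering one question may raise others. The stubs below are hypothetical; they stand in for AQUA’s XP retrieval and hypothesis verification questions.

    from collections import deque

    def understand(story, raise_questions, try_answer):
        """Process a story by working through an agenda of open questions."""
        agenda = deque(raise_questions(story))
        answered = {}
        while agenda:
            q = agenda.popleft()
            answer = try_answer(q, story)
            if answer is None:
                answered[q] = "open"  # suspended until a later story answers it
            else:
                answered[q] = answer
                agenda.extend(raise_questions(answer))  # answers raise new questions
        return answered

    # Toy stubs standing in for the question libraries:
    qs = lambda text: ["why did the actor do that?"] if text == "a story" else []
    ans = lambda q, story: "retrieved explanation" if q.startswith("why") else None
    print(understand("a story", qs, ans))
    # -> {'why did the actor do that?': 'retrieved explanation'}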

Read the paper:

AQUA: Questions that Drive the Explanation Process

by Ashwin Ram

In Inside Case-Based Explanation, R.C. Schank, A. Kass, and C.K. Riesbeck (eds.), 207-261, Lawrence Erlbaum, 1994.
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-93-47.pdf

Introspective Reasoning using Meta-Explanations for Multistrategy Learning

In order to learn effectively, a reasoner must not only possess knowledge about the world and be able to improve that knowledge, but must also introspectively reason about how it performs a given task and which particular pieces of knowledge it needs in order to improve its performance on that task. Introspection requires declarative representations of meta-knowledge: of the reasoning performed by the system during the performance task, of the system’s knowledge, and of the organization of this knowledge.

This paper presents a taxonomy of possible reasoning failures that can occur during a performance task, declarative representations of these failures, and associations between failures and particular learning strategies. The theory is based on Meta-XPs, which are explanation structures that help the system identify failure types, formulate learning goals, and choose appropriate learning strategies in order to avoid similar mistakes in the future. The theory is implemented in a computer model of an introspective reasoner that performs multistrategy learning during a story understanding task.
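
The association between failure types, learning goals, and learning strategies can be sketched as a lookup table. The entries and the diagnosis heuristic below are illustrative stand-ins, not the paper’s Meta-XP structures:

    FAILURE_TAXONOMY = {
        # failure type           learning goal                 learning strategy
        "novel-situation":     ("acquire missing knowledge",  "case acquisition"),
        "incorrect-belief":    ("revise faulty concept",      "generalization/abstraction"),
        "mis-indexed-memory":  ("reorganize memory indices",  "index learning"),
    }

    def diagnose(retrieved_something, expectation_violated):
        """Crude failure identification, a stand-in for Meta-XP matching;
        it is only called once a reasoning failure has been detected."""
        if not retrieved_something:
            return "novel-situation"
        return "incorrect-belief" if expectation_violated else "mis-indexed-memory"

    failure = diagnose(retrieved_something=True, expectation_violated=True)
    goal, strategy = FAILURE_TAXONOMY[failure]
    print(failure, "->", goal, "->", strategy)
    # -> incorrect-belief -> revise faulty concept -> generalization/abstraction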

Read the paper:

Introspective Reasoning using Meta-Explanations for Multistrategy Learning

by Ashwin Ram, Mike Cox

In Machine Learning: A Multistrategy Approach, Vol. IV, R.S. Michalski and G. Tecuci (eds.), 349-377, Morgan Kaufmann, 1994.
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-19.pdf

Using Genetic Algorithms to Learn Reactive Control Parameters for Autonomous Robotic Navigation

This paper explores the application of genetic algorithms to the learning of local robot navigation behaviors for reactive control systems. Our approach evolves reactive control systems in various environments, thus creating sets of “ecological niches” that can be used in similar environments. The use of genetic algorithms as an unsupervised learning method for a reactive control architecture greatly reduces the effort required to configure a navigation system. Unlike standard genetic algorithms, our method uses a floating-point gene representation. The system is fully implemented and has been evaluated through extensive computer simulations of robot navigation through various types of environments.
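
A rough Python sketch of the floating-point gene representation (the parameter names, operators, and toy fitness function are illustrative, not the paper’s):

    import random

    GENE_BOUNDS = [(0.0, 5.0),  # e.g. obstacle-avoidance gain
                   (0.0, 5.0),  # e.g. goal-attraction gain
                   (0.0, 2.0)]  # e.g. wander-noise persistence

    def random_genome():
        return [random.uniform(lo, hi) for lo, hi in GENE_BOUNDS]

    def mutate(genome, sigma=0.1):
        """Gaussian perturbation of real-valued genes, clipped to bounds --
        the floating-point analogue of bit-flip mutation."""
        return [min(hi, max(lo, g + random.gauss(0, sigma)))
                for g, (lo, hi) in zip(genome, GENE_BOUNDS)]

    def crossover(a, b):
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    def evolve(fitness, pop_size=20, generations=50):
        pop = [random_genome() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]  # truncation selection
            pop = parents + [mutate(crossover(random.choice(parents),
                                              random.choice(parents)))
                             for _ in range(pop_size - len(parents))]
        return max(pop, key=fitness)

    # A toy fitness function standing in for a navigation simulation:
    best = evolve(lambda g: -sum((x - 1.0) ** 2 for x in g))
    print(best)  # genes drift toward the optimum at 1.0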

Read the paper:

Using Genetic Algorithms to Learn Reactive Control Parameters for Autonomous Robotic Navigation

by Ashwin Ram, Ron Arkin, Gary Boone, Michael Pearce

Adaptive Behavior, 2(3):277-305, 1994
www.cc.gatech.edu/faculty/ashwin/papers/er-94-01.pdf

The Utility Problem in Case-Based Reasoning

Case-based reasoning systems may suffer from the utility problem, which occurs when knowledge learned in an attempt to improve a system’s performance degrades performance instead. One of the primary causes of the utility problem is the slowdown of conventional memories as the number of stored items increases. Unrestricted learning algorithms can swamp their memory system, causing the system to slow down by more than the average speedup that individual learned rules provide.

Massive parallelism is often offered as a solution to this problem. However, most theoretical parallel models indicate that parallel solutions to the utility problem fail to scale up to large problem sizes, and hardware implementations across a wide class of machines and technologies back up these predictions.

Failing the creation of an ideal concurrent-write parallel random access machine, the only solution to the utility problem lies in a number of coping strategies, such as restricting learning to extremely high utility items or restricting the amount of memory searched. Case-based reasoning provides an excellent framework for the implementation and testing of a wide range of methods and policies for coping with the utility problem.
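
One of those coping strategies, restricting learning to extremely high utility items, reduces to a simple cost-benefit test. The cost model and the numbers below are illustrative, not taken from the paper:

    def worth_retaining(case_speedup, application_rate, per_item_search_cost):
        """Expected net utility of storing one more case: the average savings
        when it applies, minus the search cost it adds to every retrieval."""
        return case_speedup * application_rate - per_item_search_cost

    # A case that saves 50 ms but applies to 1 problem in 1000 is a net loss
    # if every stored item adds 0.1 ms to each memory search:
    print(worth_retaining(50.0, 0.001, 0.1))  # -> -0.05, so discard the case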

Read the paper:

The Utility Problem in Case-Based Reasoning

by Anthony Francis, Ashwin Ram

AAAI-93 Workshop on Case-Based Reasoning, Washington, DC, July 1993.
www.cc.gatech.edu/faculty/ashwin/papers/er-93-08.pdf

Knowledge Compilation and Speedup Learning in Continuous Task Domains

Many techniques for speedup learning and knowledge compilation focus on the learning and optimization of macro-operators or control rules in task domains that can be characterized using a problem-space search paradigm. However, such a characterization is a poor fit for the class of task domains in which the problem solver is required to perform in a continuous manner. For example, in many robotic domains, the problem solver is required to monitor real-valued perceptual inputs and vary its motor control parameters in a continuous, on-line manner to successfully accomplish its task. In such domains, discrete symbolic states and operators are difficult to define.

To improve its performance in continuous problem domains, a problem solver must learn, modify, and use “continuous operators” that continuously map input sensory information to appropriate control outputs. Additionally, the problem solver must learn the contexts in which those continuous operators are applicable. We propose a learning method that can compile sensorimotor experiences into continuous operators, which can then be used to improve the performance of the problem solver. The method not only speeds up task performance but also improves the quality of the resulting solutions. The method is implemented in a robotic navigation system, which is evaluated through extensive experimentation.
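
A toy Python sketch of a continuous operator paired with the sensory context in which it applies (the representation is illustrative, not the paper’s):

    import math

    class ContinuousOperator:
        def __init__(self, context_center, control_fn):
            self.context_center = context_center  # prototypical sensor vector
            self.control_fn = control_fn          # sensors -> motor parameters

        def applicability(self, sensors):
            """Closeness of the current sensory context to this operator's."""
            return 1.0 / (1.0 + math.dist(sensors, self.context_center))

    def act(operators, sensors):
        """Apply the operator whose learned context best matches the input."""
        op = max(operators, key=lambda o: o.applicability(sensors))
        return op.control_fn(sensors)

    # Toy operators: creep and turn near obstacles, else head for the goal.
    ops = [ContinuousOperator([0.9, 0.1], lambda s: {"turn": 0.8, "speed": 0.2}),
           ContinuousOperator([0.1, 0.9], lambda s: {"turn": 0.0, "speed": 1.0})]
    print(act(ops, [0.8, 0.3]))  # -> {'turn': 0.8, 'speed': 0.2}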

Read the paper:

Knowledge Compilation and Speedup Learning in Continuous Task Domains

by Juan Carlos Santamaria, Ashwin Ram

ICML-93 Workshop on Knowledge Compilation and Speedup Learning, Amherst, MA, June 1993.
www.cc.gatech.edu/faculty/ashwin/papers/er-93-07.pdf

Creative Conceptual Change

Creative conceptual change involves (a) the construction of new concepts and of coherent belief systems, or theories, relating these concepts, and (b) the modification and extrapolation of existing concepts and theories in novel situations. The first kind of process involves reformulating perceptual, sensorimotor, or other low-level information into higher-level abstractions. The second kind of process involves a temporary suspension of disbelief and the extension or adaptation of existing concepts to create a conceptual model of a new situation which may be very different from previous real-world experience.

We discuss these and other types of conceptual change, and present computational models of constructive and extrapolative processes in creative conceptual change. The models have been implemented as computer programs in two very different “everyday” task domains: (a) SINS is an autonomous robotic navigation system that learns to navigate in an obstacle-ridden world by constructing sensorimotor concepts that represent navigational strategies, and (b) ISAAC is a natural language understanding system that reads short stories from the science fiction genre, a task that requires a deep understanding of concepts that may be very different from those the system is already familiar with.
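
The two kinds of process can be caricatured in a few lines of Python; the representations below are illustrative and correspond to neither SINS nor ISAAC:

    def construct_concept(observations):
        """Constructive change: reformulate low-level observations into a
        higher-level abstraction, here simply their feature-wise average."""
        keys = observations[0].keys()
        return {k: sum(o[k] for o in observations) / len(observations) for k in keys}

    def extrapolate_concept(concept, overrides):
        """Extrapolative change: suspend disbelief and override the features
        that the novel (e.g. science-fictional) situation contradicts."""
        return {**concept, **overrides}

    swerve = construct_concept([{"speed": 0.2, "turn": 0.9},
                                {"speed": 0.3, "turn": 0.8}])
    alien_dog = extrapolate_concept({"legs": 4, "barks": True}, {"legs": 6})
    print(swerve, alien_dog)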

Read the paper:

Creative Conceptual Change

by Ashwin Ram, Kenneth Moorman, Juan Carlos Santamaria

Invited talk at the 15th Annual Conference of the Cognitive Science Society, Boulder, CO, June 1993. Long version published as Technical Report GIT-CC-96/07, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1996.
www.cc.gatech.edu/faculty/ashwin/papers/er-93-04.pdf

Indexing, Elaboration and Refinement: Incremental Learning of Explanatory Cases

This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner’s memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good “lessons” to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices.

We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.
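
The three learning behaviors can be sketched as follows (the structures and the merge rule are illustrative, not the program’s):

    class CaseLibrary:
        def __init__(self):
            self.cases = {}  # index cue -> explanatory case

        def understand(self, cue, situation):
            case = self.cases.get(cue)
            if case is None:                       # (a) no case exists: learn one
                case = {"lesson": dict(situation), "uses": 0}
                self.cases[cue] = case             # (b) index it under this cue
            else:                                  # (c) refine it through reuse
                case["uses"] += 1
                case["lesson"] = {k: v for k, v in case["lesson"].items()
                                  if situation.get(k) == v}  # keep confirmed features
            return case

    lib = CaseLibrary()
    lib.understand("kidnapping", {"actor": "stranger", "motive": "ransom"})
    print(lib.understand("kidnapping", {"actor": "relative", "motive": "ransom"}))
    # -> {'lesson': {'motive': 'ransom'}, 'uses': 1}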

Read the paper:

Indexing, Elaboration and Refinement: Incremental Learning of Explanatory Cases

by Ashwin Ram

Machine Learning journal, 10:201-248, 1993.
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-92-03.pdf

Multistrategy Learning in Reactive Control Systems for Autonomous Robotic Navigation

This paper presents a self-improving reactive control system for autonomous robotic navigation. The navigation module uses a schema-based reactive control system to perform the navigation task. The learning module combines case-based reasoning and reinforcement learning to continuously tune the navigation system through experience. The case-based reasoning component perceives and characterizes the system’s environment, retrieves an appropriate case, and uses the recommendations of the case to tune the parameters of the reactive control system. The reinforcement learning component refines the content of the cases based on the current experience. Together, the learning components perform on-line adaptation, resulting in improved performance as the reactive control system tunes itself to the environment, as well as on-line case learning, resulting in an improved library of cases that capture environmental regularities necessary to perform on-line adaptation. The system is extensively evaluated through simulation studies using several performance metrics and system configurations.
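
A rough Python sketch of the division of labor between the two learning components (the names and the update rule are illustrative stand-ins, not the paper’s system):

    import math, random

    class NavigationCase:
        def __init__(self, environment, params):
            self.environment = environment  # characteristic sensor signature
            self.params = params            # recommended reactive-control gains

    def retrieve(cases, observed_env):
        """Case-based component: nearest stored environment characterization."""
        return min(cases, key=lambda c: math.dist(c.environment, observed_env))

    def reinforce(case, reward, alpha=0.1):
        """Reinforcement component: nudge the case's recommendation, scaling
        the adjustment by the reward it earned (a crude stand-in)."""
        for k in case.params:
            case.params[k] += alpha * reward * random.uniform(-1.0, 1.0)

    cases = [NavigationCase([0.9, 0.1], {"obstacle_gain": 1.5, "goal_gain": 0.5}),
             NavigationCase([0.1, 0.9], {"obstacle_gain": 0.3, "goal_gain": 1.2})]
    case = retrieve(cases, observed_env=[0.8, 0.2])  # a cluttered environment
    reinforce(case, reward=0.7)                      # one on-line adaptation step
    print(case.params)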

Read the paper:

Multistrategy Learning in Reactive Control Systems for Autonomous Robotic Navigation

by Ashwin Ram, Juan Carlos Santamaria

Informatica, 17(4):347-369, 1993.
www.cc.gatech.edu/faculty/ashwin/papers/er-93-09.pdf