Posts Tagged ‘multistrategy learning’

A Functional Theory of Creative Reading: Process, Knowledge, and Evaluation

Reading is a complex cognitive behavior, making use of dozens of tasks to achieve comprehension. As such, it represents an important aspect of general cognition; the benefits of having a theory of reading would be far-reaching. Additionally, there is an aspect of reading which has been largely ignored by the research: reading appears to encompass a creative process. In this dissertation, I present a theory capable of explaining creative reading. There are not separate reading behaviors, some mundane and some creative; instead, all of reading must be understood as a creative process. Therefore, a comprehensive theory of reading and creativity is needed. Unfortunately, although the scientific study of reading has been undertaken for almost a century, it is often done in a piecemeal fashion; that is, the research has often concentrated on a narrow aspect of reading behavior. This is due, to some degree, to the fact that reading is a vast process; however, it is my belief that failing to consider the complete reading process will limit the research. Thus, in my work, I identify a set of tasks which sufficiently covers the reading process for short narratives. Together, these tasks form the basis of a functional theory of reading.

Using the reading framework to support the research, I produced a theory of creative understanding, the process by which novel concepts come to be understood by a reasoner. To accomplish this, I created a taxonomy of novelty types; I produced a knowledge representation and ontology flexible enough to permit the representation of a wide range of conceptual forms; and I created an interlocking set of four tasks which act together to produce the behavior: memory retrieval, analogical mapping, base-constructive analogy, and problem reformulation. My technique for base-constructive analogy is one of the more distinctive features of my work; it permits existing concepts to be combined in ways which enable novel concepts to be understood. In addition, the theory bounds the process of creative understanding through a set of heuristics associated with the ontology, keeping the process tractable while greatly reducing the possibility of non-useful understandings.

The theory of creative reading is instantiated in a computer model, the ISAAC system, which reads and comprehends short science fiction stories. The model has allowed me to perform empirical evaluation, providing an important stage in the overall theory revision cycle. The evaluation demonstrated that ISAAC can answer independently generated comprehension questions about a set of science fiction stories with skill comparable to that of a group of college students. This result, along with an analysis of the internal workings of the model, enables me to claim that my theory of creative reading is sufficient to explain important aspects of the behavior.

Read the thesis:

A functional theory of creative reading: Process, knowledge, and evaluation

by Kenneth Moorman

PhD Thesis, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1997

Continuous Case-Based Reasoning

Case-based reasoning systems have traditionally been used to perform high-level reasoning in problem domains that can be adequately described using discrete, symbolic representations. However, many real-world problem domains, such as autonomous robotic navigation, are better characterized using continuous representations. Such problem domains also require continuous performance, such as on-line sensorimotor interaction with the environment, and continuous adaptation and learning during the performance task.

This article introduces a new method for continuous case-based reasoning, and discusses its application to the dynamic selection, modification, and acquisition of robot behaviors in an autonomous navigation system, SINS (Self-Improving Navigation System). The computer program and the underlying method are systematically evaluated through statistical analysis of results from several empirical studies. The article concludes with a general discussion of case-based reasoning issues addressed by this research.
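The central idea can be illustrated with a small sketch. In continuous case-based reasoning, a case is not a discrete symbolic structure but a short trajectory of continuous sensorimotor readings paired with the control values that were used; retrieval compares the robot's current trajectory against stored ones on every control cycle. The representation and distance measure below are hypothetical simplifications, not the actual SINS formulation:

```python
import math

def distance(traj_a, traj_b):
    """Euclidean distance between two equal-length sensor trajectories."""
    return math.sqrt(sum((a - b) ** 2
                         for va, vb in zip(traj_a, traj_b)
                         for a, b in zip(va, vb)))

def retrieve(case_library, current_traj):
    """Return the (trajectory, control) case most similar to what the
    robot is sensing right now; retrieval runs continuously, every cycle."""
    return min(case_library, key=lambda case: distance(case[0], current_traj))

# Two toy cases: each maps a 3-step trajectory of 2 sensor readings
# to a control setting (here, a turn rate).
library = [
    ([(0.9, 0.1), (0.8, 0.1), (0.7, 0.2)], {"turn": -0.3}),  # wall on left
    ([(0.1, 0.9), (0.1, 0.8), (0.2, 0.7)], {"turn": +0.3}),  # wall on right
]

now = [(0.85, 0.15), (0.75, 0.12), (0.7, 0.25)]
_, control = retrieve(library, now)
print(control)  # the nearest case suggests turning away from the left wall
```

Because retrieval, adaptation, and execution all operate over the same continuous trajectories, the same loop also supports on-line case acquisition: a trajectory that matches nothing in the library can simply be stored as a new case.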

Read the paper:

Continuous Case-Based Reasoning

by Ashwin Ram, Juan Carlos Santamaria

Artificial Intelligence journal, 90(1-2):25-77, 1997

Learning Adaptive Reactive Controllers

Reactive controllers have been widely used in mobile robots since they are able to achieve successful performance in real time. However, the configuration of a reactive controller depends highly on the operating conditions of the robot and the environment; thus, a reactive controller configured for one class of environments may not perform adequately in another. This paper presents a formulation of learning adaptive reactive controllers. Adaptive reactive controllers inherit all the advantages of traditional reactive controllers, but in addition they are able to adjust themselves to the current operating conditions of the robot and the environment in order to improve task performance. Furthermore, learning adaptive reactive controllers can learn when and how to adapt the reactive controller so as to achieve effective performance under different conditions.

The paper presents an algorithm for a learning adaptive reactive controller that combines ideas from case-based reasoning and reinforcement learning to construct a mapping between the operating conditions of a controller and the appropriate controller configuration; this mapping is in turn used to adapt the controller configuration dynamically. As a case study, the algorithm is implemented in a robotic navigation system that controls a Denning MRV-III mobile robot. The system is extensively evaluated using statistical methods to verify its learning performance and to understand the impact of different design parameters on the performance of the system.
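The combination described above can be sketched in a few lines: keep a library of cases mapping an environment signature to a controller configuration, select the case matching the current conditions, and adjust the stored configuration with a reinforcement signal after each trial. The signatures, the single-gain configuration, and the toy feedback rule below are all hypothetical stand-ins for the actual system:

```python
class AdaptiveController:
    def __init__(self):
        # Case library: environment signature -> controller configuration
        # (here a single behavior gain per signature).
        self.cases = {"open": 1.0, "cluttered": 1.0}

    def select(self, signature):
        """Retrieve the configuration stored for the current conditions."""
        return self.cases[signature]

    def reinforce(self, signature, reward, lr=0.1):
        """Reinforcement-style update of the stored configuration."""
        self.cases[signature] += lr * reward

ctrl = AdaptiveController()
# Pretend trials in a cluttered environment reward a lower gain.
for _ in range(5):
    gain = ctrl.select("cluttered")
    reward = -1.0 if gain > 0.65 else +1.0   # toy environment feedback
    ctrl.reinforce("cluttered", reward)

print(round(ctrl.cases["cluttered"], 2))
```

The stored configuration for "cluttered" drifts downward until the feedback turns positive, while the "open" configuration is left untouched; this is the sense in which the mapping from operating conditions to configurations is adapted dynamically.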

Read the paper:

Learning Adaptive Reactive Controllers

by Juan Carlos Santamaria, Ashwin Ram

Technical Report GIT-CC-97/05, College of Computing, Georgia Institute of Technology, Atlanta, GA, January 1997

Systematic Evaluation of Design Decisions in Case-Based Reasoning Systems

Two important goals in the evaluation of artificial intelligence systems are to assess the merit of alternative design decisions in the performance of an implemented computer system and to analyze the impact on performance when the system faces problem domains with different characteristics. Achieving these objectives enables us to understand the behavior of the system in terms of the theory and design of the computational model, to select the best system configuration for a given domain, and to predict how the system will behave when the characteristics of the domain or problem change. In addition, for case-based reasoning and other machine learning systems, it is important to evaluate the improvement in the performance of the system with experience (or with learning), to show that this improvement is statistically significant, to show that the variability in performance decreases with experience (convergence), and to analyze the impact of the design decisions on this improvement in performance.

We present a methodology for the evaluation of CBR and other AI systems through systematic empirical experimentation over a range of system configurations and environmental conditions, coupled with rigorous statistical analysis of the results of the experiments. We illustrate this methodology with a case study in which we evaluate a multistrategy case-based and reinforcement learning system which performs autonomous robotic navigation. In this case study, we evaluate a range of design decisions that are important in CBR systems, including configuration parameters of the system (e.g., overall size of the case library, size or extent of the individual cases), problem characteristics (e.g., problem difficulty), knowledge representation decisions (e.g., choice of representational primitives or vocabulary), algorithmic decisions (e.g., choice of adaptation method), and amount of prior experience (e.g., learning or training). We show how our methodology can be used to evaluate the impact of these decisions on the performance of the system and, in turn, to make the appropriate choices for a given problem domain and verify that the system does behave as predicted.
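The shape of such an experiment can be sketched briefly: vary one configuration parameter (here, case-library size), run many trials per condition, and compare the resulting samples statistically. The simulator and the two library sizes below are invented for illustration; a real study would run the actual navigation system and follow the summary statistics with a significance test:

```python
import random
import statistics

random.seed(42)

def simulated_performance(library_size):
    """Toy stand-in for one navigation trial; larger libraries score higher."""
    base = {10: 0.60, 50: 0.75}[library_size]
    return base + random.gauss(0, 0.05)

def run_condition(library_size, trials=30):
    """Collect repeated performance measurements for one configuration."""
    return [simulated_performance(library_size) for _ in range(trials)]

small = run_condition(10)
large = run_condition(50)

# Per-condition summary statistics; a t-test on these samples would
# establish whether the difference is statistically significant.
print(f"size=10: mean={statistics.mean(small):.2f} sd={statistics.stdev(small):.2f}")
print(f"size=50: mean={statistics.mean(large):.2f} sd={statistics.stdev(large):.2f}")
```

Crossing several such factors (problem difficulty, adaptation method, amount of training) yields the factorial designs the methodology calls for.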

Read the paper:

Systematic Evaluation of Design Decisions in Case-Based Reasoning Systems

by Juan Carlos Santamaria, Ashwin Ram

In Case-Based Reasoning: Experiences, Lessons, and Future Directions, D.B. Leake, editor, AAAI Press, 1996

Introspective Multistrategy Learning: Constructing a Learning Strategy under Reasoning Failure

The thesis put forth by this dissertation is that introspective analyses facilitate the construction of learning strategies. Furthermore, learning is much like nonlinear planning and problem solving. Like problem solving, it can be specified by a set of explicit learning goals (i.e., desired changes to the reasoner’s knowledge); these goals can be achieved by constructing a plan from a set of operators (the learning algorithms) that execute in a knowledge space. However, in order to specify learning goals and to avoid negative interactions between operators, a reasoner requires a model of its reasoning processes and knowledge.

With such a model, the reasoner can declaratively represent the events and causal relations of its mental world in the same manner that it represents events and relations in the physical world. This representation enables introspective self-examination, which contributes to learning by providing a basis for identifying what needs to be learned when reasoning fails. A multistrategy system possessing several learning algorithms can decide what to learn, and which algorithm(s) to apply, by analyzing the model of its reasoning. This introspective analysis therefore allows the learner to understand its reasoning failures, to determine the causes of the failures, to identify needed knowledge repairs to avoid such failures in the future, and to build a learning strategy (plan).

Thus, the research goal is to develop both a content theory and a process theory of introspective multistrategy learning and to establish the conditions under which such an approach is fruitful. Empirical experiments provide results that support the claims herein. The theory was implemented in a computational model called Meta-AQUA that attempts to understand simple stories. The system uses case-based reasoning to explain reasoning failures and to generate sets of learning goals, and it uses a standard nonlinear planner to achieve these goals.

Evaluating Meta-AQUA with and without learning goals produced results indicating that computational introspection facilitates the learning process. In particular, the results lead to the conclusion that the stage that posts learning goals is necessary if negative interactions between learning methods are to be avoided and if learning is to remain effective.
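The planning view of learning described above can be made concrete with a small sketch: learning algorithms are operators with preconditions and effects over a knowledge state, and a learning goal is a desired change to that state. The operator names and the trivial forward search below are invented stand-ins for the actual operators and the nonlinear planner used in Meta-AQUA:

```python
OPERATORS = {
    # name: (preconditions, effects) over knowledge-state flags
    "explain_failure": ({"failure_trace"}, {"failure_explained"}),
    "generalize_case": ({"failure_explained"}, {"rule_generalized"}),
    "reindex_memory":  ({"rule_generalized"}, {"memory_reindexed"}),
}

def plan(state, goal):
    """Return an ordered list of learning operators that achieves the goal,
    or None if the goal is unreachable from this knowledge state."""
    steps = []
    state = set(state)
    while goal not in state:
        for name, (pre, eff) in OPERATORS.items():
            if pre <= state and not eff <= state:
                steps.append(name)
                state |= eff
                break
        else:
            return None
    return steps

# Learning goal: get memory reindexed so the same failure is not repeated.
print(plan({"failure_trace"}, "memory_reindexed"))
```

Sequencing the operators through an explicit planner is what lets the learner detect and avoid negative interactions between learning methods, rather than firing each algorithm independently.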

Read the thesis:

Introspective multistrategy learning: Constructing a learning strategy under reasoning failure

by Michael T. Cox

PhD Thesis, Technical Report GIT-CC-96/06, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1996

Learning as Goal-Driven Inference

Developing an adequate and general computational model of adaptive, multistrategy, and goal-oriented learning is a fundamental long-term objective for machine learning research for both theoretical and pragmatic reasons. We outline a proposal for developing such a model based on two key ideas. First, we view learning as an active process involving the formulation of learning goals during the performance of a reasoning task, the prioritization of learning goals, and the pursuit of learning goals using multiple learning strategies. The second key idea is to model learning as a kind of inference in which the system augments and reformulates its knowledge using various types of primitive inferential actions, known as knowledge transmutations.

Read the paper:

Learning as Goal-Driven Inference

by Ryszard Michalski, Ashwin Ram

In A. Ram & D. Leake (eds.), Goal-Driven Learning, chapter 21, MIT Press/Bradford Books, 1995

Goal-Driven Learning in Multistrategy Reasoning and Learning Systems

This chapter presents a computational model of introspective multistrategy learning, which is a deliberative or strategic learning process in which a reasoner introspects about its own performance to decide what to learn and how to learn it. The reasoner introspects about its own performance on a reasoning task, assigns credit or blame for its performance, identifies what it needs to learn to improve its performance, formulates learning goals to acquire the required knowledge, and pursues its learning goals using multiple learning strategies. Our theory models a process of learning that is active, experiential, opportunistic, diverse, and introspective. This chapter also describes two computer systems that implement our theory, one that learns diagnostic knowledge during a troubleshooting task and one that learns multiple kinds of causal and explanatory knowledge during a story understanding task.

Read the paper:

Goal-Driven Learning in Multistrategy Reasoning and Learning Systems

by Ashwin Ram, Mike Cox, S. Narayanan

In A. Ram & D. Leake (eds.), Goal-Driven Learning, chapter 18, MIT Press/Bradford Books, 1995

Learning, Goals, and Learning Goals

In cognitive science, artificial intelligence, psychology, and education, a growing body of research supports the view that the learning process is strongly influenced by the learner’s goals. Investigators in each of these areas have independently pursued the common issues of how learning goals arise, how they affect learner decisions of when and what to learn, and how they guide the learning process. The fundamental tenet of goal-driven learning is that learning is largely an active and strategic process in which the learner, human or machine, attempts to identify and satisfy its information needs in the context of its tasks and goals, its prior knowledge, its capabilities, and environmental opportunities for learning.

This chapter discusses fundamental questions for goal-driven learning: the motivations for adopting a goal-driven model of learning, the basic goal-driven learning framework, the specific issues raised by the framework that a theory of goal-driven learning must address, the types of goals that can influence learning, the types of influences those goals can have on learning, and the pragmatic implications of the goal-driven learning model.

Read the paper:

Learning, Goals, and Learning Goals

by Ashwin Ram, David Leake

In A. Ram & D. Leake (eds.), Goal-Driven Learning, chapter 1, MIT Press/Bradford Books, 1995

Learning to Troubleshoot: Multistrategy Learning of Diagnostic Knowledge for a Real-World Problem Solving Task

This article presents a computational model of the learning of diagnostic knowledge based on observations of human operators engaged in a real-world troubleshooting task. We present a model of problem solving and learning in which the reasoner introspects about its own performance on the problem solving task, identifies what it needs to learn to improve its performance, formulates learning goals to acquire the required knowledge, and pursues its learning goals using multiple learning strategies. The model is implemented in a computer system which provides a case study based on observations of troubleshooting operators and protocol analysis of the data gathered in the test area of an operational electronics manufacturing plant. The model is intended as a computational model of human learning; in addition, it is computationally justified as a uniform, extensible framework for multistrategy learning.

Read the paper:

Learning to Troubleshoot: Multistrategy Learning of Diagnostic Knowledge for a Real-World Problem Solving Task

by Ashwin Ram, S. Narayanan, Mike Cox

Cognitive Science journal, 19(3):289-340, 1995

Interacting Learning-Goals: Treating Learning as a Planning Task

This research examines the metaphor of goal-driven planning as a tool for performing the integration of multiple learning algorithms. In case-based reasoning systems, several learning techniques may apply to a given situation. In a failure-driven learning environment, the problems of strategy construction are to choose and order the best set of learning algorithms or strategies that recover from a processing failure and to use those strategies to modify the system’s background knowledge so that the failure will not be repeated in similar future situations.

A solution to this problem is to treat learning-strategy construction as a planning problem with its own set of goals. Learning goals, as opposed to ordinary goals, specify desired states in the background knowledge of the learner, rather than desired states in the external environment of the planner. But as with traditional goal-based planners, management and pursuit of these learning goals becomes a central issue in learning. Example interactions of learning goals are presented from a multistrategy learning system called Meta-AQUA that combines a case-based approach to learning with nonlinear planning to achieve goals in a knowledge space.

Read the paper:

Interacting Learning-Goals: Treating Learning as a Planning Task

by Mike Cox, Ashwin Ram

In J.-P. Haton, M. Keane, & M. Manago (editors), Advances in Case-Based Reasoning (Lecture Notes in Artificial Intelligence), 60-74, Springer-Verlag, 1995. Earlier version presented at the Second European Workshop on Case-Based Reasoning (EWCBR-94), Chantilly, France, 1994.