Posts Tagged ‘multistrategy learning’

Case-Based Learning from Proactive Communication

We present a proactive communication approach that allows CBR agents to gauge the strengths and weaknesses of other CBR agents. The communication protocol allows CBR agents to learn from communicating with one another, in such a way that each agent can retain cases provided by other agents that improve its individual performance, without needing to disclose the full contents of any case base. The selection and retention of cases is modeled as a case bartering process, in which each individual CBR agent autonomously decides which cases to offer for bartering and which offered barters to accept. Experimental evaluations show that the sum of all these individual decisions results in a clear improvement in individual CBR agent performance with only a moderate increase in the size of individual case bases.
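As a toy illustration (not the paper's actual protocol), a single round of case bartering might look like the sketch below; the `Agent` fields and the `estimate_gain` heuristic are invented for the example:

```python
# Hypothetical sketch of one case-bartering round between two CBR agents.
# Each agent autonomously decides which offered cases to retain, based on
# a (toy) estimate of how much each case would improve its performance.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    case_base: set = field(default_factory=set)

    def estimate_gain(self, case) -> float:
        """Estimated performance improvement from retaining `case`.
        Toy rule: a fixed gain for unseen cases, zero for duplicates."""
        return 0.0 if case in self.case_base else 1.0

    def offer(self):
        """Cases the agent is willing to barter (here: all of them)."""
        return set(self.case_base)

    def accept(self, offered, threshold=0.5):
        """Retain only the offered cases expected to improve performance."""
        return {c for c in offered if self.estimate_gain(c) > threshold}

a = Agent("A", {"case1", "case2"})
b = Agent("B", {"case2", "case3"})

# Each agent keeps only what it judges useful; neither sees the other's
# full case base, only what was offered.
a.case_base |= a.accept(b.offer())
b.case_base |= b.accept(a.offer())
print(sorted(a.case_base))  # ['case1', 'case2', 'case3']
```

In the real protocol the gain estimate would come from the agent's measured competence, not a constant, but the shape of the decision is the same: offer, evaluate, retain selectively.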

Read the paper:

Case-Based Learning from Proactive Communication

by Santi Ontañón and Enric Plaza

International Joint Conference on Artificial Intelligence (IJCAI 2007), pp. 999-1004
www.cc.gatech.edu/faculty/ashwin/papers/er-07-18.pdf

Introspective Multistrategy Learning: On the Construction of Learning Strategies

A central problem in multistrategy learning systems is the selection and sequencing of machine learning algorithms for particular situations. This is typically done by the system designer who analyzes the learning task and implements the appropriate algorithm or sequence of algorithms for that task. We propose a solution to this problem which enables an AI system with a library of machine learning algorithms to select and sequence appropriate algorithms autonomously. Furthermore, instead of relying on the system designer or user to provide a learning goal or target concept to the learning system, our method enables the system to determine its learning goals based on analysis of its successes and failures at the performance task.

The method involves three steps: Given a performance failure, the learner examines a trace of its reasoning prior to the failure to diagnose what went wrong (blame assignment); given the resultant explanation of the reasoning failure, the learner posts explicitly represented learning goals to change its background knowledge (deciding what to learn); and given a set of learning goals, the learner uses nonlinear planning techniques to assemble a sequence of machine learning algorithms, represented as planning operators, to achieve the learning goals (learning-strategy construction). In support of these operations, we define the types of reasoning failures, a taxonomy of failure causes, a second-order formalism to represent reasoning traces, a taxonomy of learning goals that specify desired change to the background knowledge of a system, and a declarative task-formalism representation of learning algorithms.
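The three steps above can be sketched as a pipeline; the failure taxonomy, learning goals, and operator library below are toy stand-ins, not Meta-AQUA's actual knowledge structures:

```python
# Illustrative pipeline for the three steps: blame assignment, deciding
# what to learn, and learning-strategy construction. The taxonomies and
# operators here are invented placeholders.

def assign_blame(trace):
    """Step 1: diagnose the reasoning failure from a trace of reasoning."""
    if trace["expected"] != trace["actual"]:
        return "incorrect-background-knowledge"
    return None

def post_learning_goal(failure_cause):
    """Step 2: turn the failure explanation into an explicit learning goal."""
    goals = {"incorrect-background-knowledge": "revise-concept-definition"}
    return goals.get(failure_cause)

def construct_strategy(goal, operators):
    """Step 3: backward-chain from the learning goal, sequencing learning
    algorithms (planning operators) so each operator's preconditions are
    achieved by an earlier operator in the plan."""
    plan, needed = [], [goal]
    while needed:
        want = needed.pop()
        op = next((o for o in operators if o["effect"] == want), None)
        if op is None:
            return None
        plan.insert(0, op["name"])
        needed.extend(op.get("preconditions", []))
    return plan

operators = [
    {"name": "index-learning", "effect": "revise-concept-definition",
     "preconditions": ["generalize-rule"]},
    {"name": "explanation-based-generalization", "effect": "generalize-rule"},
]
trace = {"expected": "bark", "actual": "meow"}
goal = post_learning_goal(assign_blame(trace))
print(construct_strategy(goal, operators))
# ['explanation-based-generalization', 'index-learning']
```

The point of the planning step is visible even in this toy: the operator ordering is derived from goals and preconditions rather than fixed by the designer, which is what lets the system avoid arbitrary (and potentially harmful) algorithm orderings.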

We present the Meta-AQUA system, an implemented multistrategy learner that operates in the domain of story understanding. Extensive empirical evaluations of Meta-AQUA show that it performs significantly better in a deliberative, planful mode than in a reflexive mode in which learning goals are ablated and, furthermore, that the arbitrary ordering of learning algorithms can lead to worse performance than no learning at all. We conclude that explicit representation and sequencing of learning goals is necessary for avoiding negative interactions between learning algorithms that can lead to less effective learning.

Read the paper:

Introspective Multistrategy Learning: On the Construction of Learning Strategies

by Mike Cox, Ashwin Ram

Artificial Intelligence, 112:1-55, 1999
www.cc.gatech.edu/faculty/ashwin/papers/er-99-01.pdf

Experiments with Reinforcement Learning in Problems with Continuous State and Action Spaces

A key element in the solution of reinforcement learning problems is the value function, whose purpose is to measure the long-term utility or value of any given state. The function is important because an agent can use this measure to decide what to do next. A common problem when reinforcement learning is applied to systems with continuous state and action spaces is that the value function must operate over real-valued variables, which means it must be able to represent the value of infinitely many state and action pairs. For this reason, function approximators are used to represent the value function when a closed-form solution for the optimal policy is not available.

In this paper, we extend a previously proposed reinforcement learning algorithm so that it can be used with function approximators that generalize the value of individual experiences across both state and action spaces. In particular, we discuss the benefits of using sparse coarse-coded function approximators to represent value functions and describe in detail three implementations: CMAC, instance-based, and case-based. Additionally, we discuss how function approximators having different degrees of resolution in different regions of the state and action spaces may influence the performance and learning efficiency of the agent.
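To make "sparse coarse coding" concrete, here is a minimal tile-coding (CMAC-style) value approximator over a joint 1-D state and 1-D action; the tile width, number of tilings, and learning rate are illustrative choices, not the paper's:

```python
# Minimal sparse coarse-coded (tile coding / CMAC-style) approximator for
# Q(state, action). Several offset tilings each contribute one weight; an
# update is split across the active tiles, so nearby (state, action)
# pairs share tiles and therefore generalize.

from collections import defaultdict

class TileCodedQ:
    def __init__(self, n_tilings=8, tile_width=0.5):
        self.n_tilings = n_tilings
        self.tile_width = tile_width
        self.weights = defaultdict(float)   # sparse weight table

    def _active_tiles(self, state, action):
        tiles = []
        for t in range(self.n_tilings):
            offset = t * self.tile_width / self.n_tilings  # staggered tilings
            s_idx = int((state + offset) // self.tile_width)
            a_idx = int((action + offset) // self.tile_width)
            tiles.append((t, s_idx, a_idx))
        return tiles

    def value(self, state, action):
        return sum(self.weights[t] for t in self._active_tiles(state, action))

    def update(self, state, action, target, alpha=0.1):
        """Move the estimate toward `target`, splitting the step across
        the active tiles."""
        error = target - self.value(state, action)
        step = alpha * error / self.n_tilings
        for t in self._active_tiles(state, action):
            self.weights[t] += step

q = TileCodedQ()
for _ in range(50):
    q.update(0.3, 0.7, target=1.0)
print(round(q.value(0.3, 0.7), 3))   # approaches 1.0
print(q.value(0.35, 0.72) > 0.5)     # nearby pair generalizes: True
```

Making the tile width smaller in important regions of the space is one way to realize the non-uniform resolution discussed below, at the cost of more weights there.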

We propose a simple and modular technique that can be used to implement function approximators with non-uniform degrees of resolution so that it can represent the value function with higher accuracy in important regions of the state and action spaces. We performed extensive experiments in the double integrator and pendulum swing up systems to demonstrate the proposed ideas.

Read the paper:

Experiments with Reinforcement Learning in Problems with Continuous State and Action Spaces

by Juan Santamaria, Rich Sutton, Ashwin Ram

Adaptive Behavior, 6(2):163-217, 1997
www.cc.gatech.edu/faculty/ashwin/papers/er-98-02.pdf

A Functional Theory of Creative Reading: Process, Knowledge, and Evaluation

Reading is a complex cognitive behavior, making use of dozens of tasks to achieve comprehension. As such, it represents an important aspect of general cognition; the benefits of having a theory of reading would be far-reaching. Additionally, there is an aspect of reading which has been largely ignored by the research: reading appears to encompass a creative process. In this dissertation, I present a theory capable of explaining creative reading. There are not separate reading behaviors, some mundane and some creative; instead, all of reading must be understood as a creative process. Therefore, a comprehensive theory of reading and creativity is needed. Unfortunately, although the scientific study of reading has been undertaken for almost a century, it has often been done in a piecemeal fashion; that is, the research has often concentrated on a narrow aspect of reading behavior. This is due, to some degree, to the fact that reading is a huge process; however, it is my belief that failing to consider the complete reading process will limit the research. Thus, in my work, I identify a set of tasks which sufficiently covers the reading process for short narratives. Together, these tasks form the basis of a functional theory of reading.

Using the reading framework to support the research, I produced a theory of creative understanding, the process by which novel concepts come to be understood by a reasoner. To accomplish this, I created a taxonomy of novelty types, produced a knowledge representation and ontology flexible enough to permit the representation of a wide range of conceptual forms, and created an interlocking set of four tasks which act together to produce the behavior: memory retrieval, analogical mapping, base-constructive analogy, and problem reformulation. My technique for base-constructive analogy is one of the most distinctive features of my work; it permits existing concepts to be combined in ways which enable novel concepts to be understood. In addition, the theory reasonably bounds the process of creative understanding through a set of heuristics associated with the ontology, greatly reducing the possibility of non-useful understandings.

The theory of creative reading is instantiated in a computer model, the ISAAC system, which reads and comprehends short science fiction stories. The model has allowed me to perform empirical evaluation, providing an important stage in the overall theory revision cycle. The evaluation demonstrated that ISAAC can answer independently-generated comprehension questions about a set of science fiction stories with skill comparable to a group of college students. This result, along with an analysis of the internal workings of the model enables me to claim that my theory of creative reading is sufficient to explain important aspects of the behavior.

Read the thesis:

A functional theory of creative reading: Process, knowledge, and evaluation

by Kenneth Moorman

PhD Thesis, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1997
www.cs.transy.edu/kmoorman/Dissertation/

Continuous Case-Based Reasoning

Case-based reasoning systems have traditionally been used to perform high-level reasoning in problem domains that can be adequately described using discrete, symbolic representations. However, many real-world problem domains, such as autonomous robotic navigation, are better characterized using continuous representations. Such problem domains also require continuous performance, such as on-line sensorimotor interaction with the environment, and continuous adaptation and learning during the performance task.

This article introduces a new method for continuous case-based reasoning, and discusses its application to the dynamic selection, modification, and acquisition of robot behaviors in an autonomous navigation system, SINS (Self-Improving Navigation System). The computer program and the underlying method are systematically evaluated through statistical analysis of results from several empirical studies. The article concludes with a general discussion of case-based reasoning issues addressed by this research.
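A toy sketch of what "continuous" retrieval can mean in this setting: cases are time series of real-valued sensor readings, and the system matches the robot's recent history against every position in every stored case. The window size, distance metric, and case contents below are invented for illustration, not SINS's actual design:

```python
# Toy continuous case retrieval: cases are traces of real-valued sensor
# readings; the best match to the recent history is found by sliding a
# window over every stored trace.

def window_distance(a, b):
    """Mean squared distance between two equal-length windows."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def retrieve(recent, case_library, window=3):
    """Return the (case_id, offset) whose window best matches `recent`."""
    best = None
    for case_id, trace in case_library.items():
        for i in range(len(trace) - window + 1):
            d = window_distance(recent, trace[i:i + window])
            if best is None or d < best[0]:
                best = (d, case_id, i)
    return best[1], best[2]

library = {
    "wall-follow": [0.9, 0.8, 0.8, 0.7, 0.7],
    "open-space":  [0.1, 0.1, 0.2, 0.1, 0.2],
}
recent = [0.8, 0.75, 0.7]             # last three range readings
print(retrieve(recent, library)[0])   # wall-follow
```

Because retrieval can happen at every time step against any position in a case, the same machinery supports the on-line, continuous performance the abstract describes, rather than one-shot symbolic lookup.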

Read the paper:

Continuous Case-Based Reasoning

by Ashwin Ram, Juan Carlos Santamaria

Artificial Intelligence journal, (90)1-2:25-77, 1997
www.cc.gatech.edu/faculty/ashwin/papers/er-97-06.pdf

Learning Adaptive Reactive Controllers

Reactive controllers have been widely used in mobile robots since they are able to achieve successful performance in real time. However, the configuration of a reactive controller depends highly on the operating conditions of the robot and the environment; thus, a reactive controller configured for one class of environments may not perform adequately in another. This paper presents a formulation of learning adaptive reactive controllers. Adaptive reactive controllers inherit all the advantages of traditional reactive controllers, but in addition they are able to adjust themselves to the current operating conditions of the robot and the environment in order to improve task performance. Furthermore, learning adaptive reactive controllers can learn when and how to adapt the reactive controller so as to achieve effective performance under different conditions.

The paper presents an algorithm for a learning adaptive reactive controller that combines ideas from case-based reasoning and reinforcement learning to construct a mapping between the operating conditions of a controller and the appropriate controller configuration; this mapping is in turn used to adapt the controller configuration dynamically. As a case study, the algorithm is implemented in a robotic navigation system that controls a Denning MRV-III mobile robot. The system is extensively evaluated using statistical methods to verify its learning performance and to understand the relevance of different design parameters on the performance of the system.
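The mapping the abstract describes can be caricatured in a few lines: retrieve the stored controller configuration whose operating conditions best match the current ones (the case-based part), then refine that configuration's quality estimate from task reward (the reinforcement learning part). The condition features, gains, and update rule below are invented for illustration:

```python
# Hypothetical sketch: a case stores operating conditions, a controller
# configuration, and a learned estimate of how well it works. Retrieval
# is nearest-neighbor over conditions; reward refines the estimate.

def nearest(conditions, cases):
    """Retrieve the case whose stored operating conditions are closest."""
    def dist(case):
        return sum((a - b) ** 2 for a, b in zip(conditions, case["conditions"]))
    return min(cases, key=dist)

def reinforce(case, reward, alpha=0.2):
    """RL-style update of the expected utility of this configuration."""
    case["value"] += alpha * (reward - case["value"])

cases = [
    {"conditions": (0.9, 0.1),    # cluttered, slow
     "config": {"obstacle_gain": 2.0, "goal_gain": 0.5}, "value": 0.0},
    {"conditions": (0.1, 0.9),    # open, fast
     "config": {"obstacle_gain": 0.5, "goal_gain": 2.0}, "value": 0.0},
]

case = nearest((0.8, 0.2), cases)   # robot currently in clutter
print(case["config"])               # the high-obstacle-gain configuration
reinforce(case, reward=1.0)
print(round(case["value"], 2))      # 0.2
```

In the actual system the value estimates would also drive when to switch configurations and when to acquire new cases, not just which existing one to reuse.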

Read the paper:

Learning Adaptive Reactive Controllers

by Juan Carlos Santamaria, Ashwin Ram

Technical Report GIT-CC-97/05, College of Computing, Georgia Institute of Technology, Atlanta, GA, January 1997
www.cc.gatech.edu/faculty/ashwin/papers/git-cc-97-05.pdf

Systematic Evaluation of Design Decisions in Case-Based Reasoning Systems

Two important goals in the evaluation of artificial intelligence systems are to assess the merit of alternative design decisions in the performance of an implemented computer system and to analyze the impact in the performance when the system faces problem domains with different characteristics. Achieving these objectives enables us to understand the behavior of the system in terms of the theory and design of the computational model, to select the best system configuration for a given domain, and to predict how the system will behave when the characteristics of the domain or problem change. In addition, for case-based reasoning and other machine learning systems, it is important to evaluate the improvement in the performance of the system with experience (or with learning), to show that this improvement is statistically significant, to show that the variability in performance decreases with experience (convergence), and to analyze the impact of the design decisions on this improvement in performance.

We present a methodology for the evaluation of CBR and other AI systems through systematic empirical experimentation over a range of system configurations and environmental conditions, coupled with rigorous statistical analysis of the results of the experiments. We illustrate this methodology with a case study in which we evaluate a multistrategy case-based and reinforcement learning system which performs autonomous robotic navigation. In this case study, we evaluate a range of design decisions that are important in CBR systems, including configuration parameters of the system (e.g., overall size of the case library, size or extent of the individual cases), problem characteristics (e.g., problem difficulty), knowledge representation decisions (e.g., choice of representational primitives or vocabulary), algorithmic decisions (e.g., choice of adaptation method), and amount of prior experience (e.g., learning or training). We show how our methodology can be used to evaluate the impact of these decisions on the performance of the system and, in turn, to make the appropriate choices for a given problem domain and verify that the system does behave as predicted.
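A bare-bones version of this experimental design: run repeated trials for every (configuration, condition) cell of a grid, then compare cell means with a significance statistic. The simulated "system" below is a stand-in for an actual CBR run, and the Welch t statistic is just one reasonable choice of test:

```python
# Systematic experimentation sketch: a grid over a design decision
# (case library size) and an environmental condition (problem
# difficulty), with repeated trials per cell and a Welch t statistic
# comparing two cells. run_trial is a toy simulator, not a real system.

import random
import statistics as st

def run_trial(library_size, difficulty, rng):
    """Stand-in for one run of the system; returns a performance score."""
    skill = min(1.0, library_size / 50)
    return rng.gauss(skill * (1 - difficulty), 0.05)

def welch_t(xs, ys):
    """Welch's t statistic for two independent samples."""
    vx, vy = st.variance(xs) / len(xs), st.variance(ys) / len(ys)
    return (st.mean(xs) - st.mean(ys)) / (vx + vy) ** 0.5

rng = random.Random(0)
results = {}
for library_size in (10, 50):          # design decision under test
    for difficulty in (0.2, 0.6):      # environmental condition
        results[(library_size, difficulty)] = [
            run_trial(library_size, difficulty, rng) for _ in range(30)
        ]

t = welch_t(results[(50, 0.2)], results[(10, 0.2)])
print(f"t = {t:.1f}")  # large |t|: library size matters at this difficulty
```

Crossing every configuration with every condition, rather than varying one factor at a time from a single baseline, is what lets the methodology expose interactions between design decisions and domain characteristics.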

Read the paper:

Systematic Evaluation of Design Decisions in Case-Based Reasoning Systems

by Juan Carlos Santamaria, Ashwin Ram

In Case-Based Reasoning: Experiences, Lessons, and Future Directions, D.B. Leake, editor, AAAI Press, 1996
www.cc.gatech.edu/faculty/ashwin/papers/er-96-05.pdf