Posts Tagged ‘real-time cbr’

Knowledge Compilation and Speedup Learning in Continuous Task Domains

Many techniques for speedup learning and knowledge compilation focus on learning and optimizing macro-operators or control rules in task domains that can be characterized using a problem-space search paradigm. However, such a characterization does not fit well with the class of task domains in which the problem solver must perform in a continuous manner. For example, in many robotic domains the problem solver must monitor real-valued perceptual inputs and vary its motor control parameters in a continuous, on-line manner to accomplish its task successfully. In such domains, discrete symbolic states and operators are difficult to define.

To improve its performance in continuous problem domains, a problem solver must learn, modify, and use “continuous operators” that continuously map input sensory information to appropriate control outputs. Additionally, the problem solver must learn the contexts in which those continuous operators are applicable. We propose a learning method that compiles sensorimotor experiences into continuous operators, which can then be used to improve the problem solver’s performance. The method not only speeds up task performance but also improves the quality of the resulting solutions. The method is implemented in a robotic navigation system, which is evaluated through extensive experimentation.
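
To make the idea concrete, here is a minimal Python sketch, not taken from the paper: the class name, the exemplar store, and the distance-weighted averaging scheme are all illustrative assumptions. It shows one way a continuous operator could compile sensorimotor experiences into a mapping from real-valued sensory input to control output, along with an applicability score for the current context.

# Illustrative sketch only (not the authors' implementation): a "continuous
# operator" stored as a set of sensorimotor exemplars.  Control outputs are
# produced by distance-weighted averaging over nearby exemplars, and the
# operator reports how applicable it is to the current sensory input.
import numpy as np

class ContinuousOperator:
    def __init__(self, bandwidth=0.5):
        self.bandwidth = bandwidth          # kernel width for local averaging
        self.sensors = []                   # recorded sensory inputs
        self.controls = []                  # control outputs paired with them

    def compile_experience(self, sensor_input, control_output):
        """Store one sensorimotor experience as an exemplar."""
        self.sensors.append(np.asarray(sensor_input, dtype=float))
        self.controls.append(np.asarray(control_output, dtype=float))

    def applicability(self, sensor_input):
        """Return a 0-1 score: how close the input is to stored experience."""
        if not self.sensors:
            return 0.0
        d = np.linalg.norm(np.array(self.sensors) - np.asarray(sensor_input), axis=1)
        return float(np.exp(-(d.min() ** 2) / (2 * self.bandwidth ** 2)))

    def control(self, sensor_input):
        """Map a sensory input to a control output by locally weighted averaging."""
        d = np.linalg.norm(np.array(self.sensors) - np.asarray(sensor_input), axis=1)
        w = np.exp(-(d ** 2) / (2 * self.bandwidth ** 2))
        w /= w.sum()
        return w @ np.array(self.controls)

# Example: compile a few (range-sensor, steering) experiences, then query the operator.
op = ContinuousOperator()
op.compile_experience([1.0, 0.2], [0.3])
op.compile_experience([0.8, 0.5], [0.1])
print(op.applicability([0.9, 0.3]), op.control([0.9, 0.3]))

In this reading, "knowledge compilation" amounts to accumulating exemplars, and the applicability score plays the role of the learned context in which the operator may be used.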

Read the paper:

Knowledge Compilation and Speedup Learning in Continuous Task Domains

by Juan Carlos Santamaria, Ashwin Ram

ICML-93 Workshop on Knowledge Compilation and Speedup Learning, Amherst, MA, June 1993
www.cc.gatech.edu/faculty/ashwin/papers/er-93-07.pdf

Multistrategy Learning in Reactive Control Systems for Autonomous Robotic Navigation

This paper presents a self-improving reactive control system for autonomous robotic navigation. The navigation module uses a schema-based reactive control system to perform the navigation task. The learning module combines case-based reasoning and reinforcement learning to continuously tune the navigation system through experience. The case-based reasoning component perceives and characterizes the system’s environment, retrieves an appropriate case, and uses the case’s recommendations to tune the parameters of the reactive control system. The reinforcement learning component refines the content of the cases based on the current experience. Together, the learning components perform on-line adaptation, which improves performance as the reactive control system tunes itself to its environment, and on-line case learning, which yields an improved library of cases capturing the environmental regularities needed for that adaptation. The system is extensively evaluated through simulation studies using several performance metrics and system configurations.
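
As a rough illustration of this adaptation loop, the Python fragment below is an assumption-laden sketch rather than the published system: the case representation, the distance-based retrieval, and the reward-weighted update rule are hypothetical simplifications. It shows cases that map an environment characterization to recommended schema parameters, and a reinforcement step that refines the retrieved case after an episode.

# Illustrative sketch only (not the published system): a case maps an
# environment characterization to recommended schema parameters; the nearest
# case tunes the reactive controller, and a simple reinforcement signal nudges
# the retrieved case's parameters toward values that worked well.
import numpy as np

class Case:
    def __init__(self, env_features, schema_params):
        self.env_features = np.asarray(env_features, dtype=float)
        self.schema_params = np.asarray(schema_params, dtype=float)

class CBRTuner:
    def __init__(self, learning_rate=0.1):
        self.cases = []
        self.learning_rate = learning_rate

    def retrieve(self, env_features):
        """Return the case whose environment characterization is closest."""
        env = np.asarray(env_features, dtype=float)
        return min(self.cases, key=lambda c: np.linalg.norm(c.env_features - env))

    def adapt(self, case, reward, tried_params):
        """Reinforcement step: move the case's recommendation toward parameter
        values that earned positive reward, away from those that did not."""
        case.schema_params += self.learning_rate * reward * (tried_params - case.schema_params)

# Example: two cases for open vs. cluttered environments; the tuner retrieves
# the closer one and refines it after a navigation episode.
tuner = CBRTuner()
tuner.cases.append(Case([0.9, 0.1], [1.0, 0.2]))   # open space: high goal gain, low avoid gain
tuner.cases.append(Case([0.2, 0.8], [0.4, 0.9]))   # cluttered: low goal gain, high avoid gain
case = tuner.retrieve([0.3, 0.7])
tried = case.schema_params + np.random.normal(0, 0.05, size=2)  # exploratory tweak
tuner.adapt(case, reward=+1.0, tried_params=tried)

The case-based component here stands in for on-line adaptation (picking parameters suited to the perceived environment), while the reward-driven update stands in for on-line case learning (refining what each case recommends).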

Read the paper:

Multistrategy Learning in Reactive Control Systems for Autonomous Robotic Navigation

by Ashwin Ram, Juan Carlos Santamaria

Informatica, 17(4):347-369, 1993

www.cc.gatech.edu/faculty/ashwin/papers/er-93-09.pdf