Archive for the ‘Robotics’ Category

Knowledge Compilation and Speedup Learning in Continuous Task Domains

Many techniques for speedup learning and knowledge compilation focus on learning and optimizing macro-operators or control rules in task domains that can be characterized using a problem-space search paradigm. However, such a characterization is a poor fit for the class of task domains in which the problem solver must perform continuously. For example, in many robotic domains the problem solver must monitor real-valued perceptual inputs and vary its motor control parameters in a continuous, on-line manner to accomplish its task. In such domains, discrete symbolic states and operators are difficult to define.

To improve its performance in continuous problem domains, a problem solver must learn, modify, and use “continuous operators” that continuously map input sensory information to appropriate control outputs. Additionally, it must learn the contexts in which those continuous operators are applicable. We propose a learning method that compiles sensorimotor experiences into continuous operators, which the problem solver can then use to improve its performance. The method both speeds up task performance and improves the quality of the resulting solutions. It is implemented in a robotic navigation system, which is evaluated through extensive experimentation.
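As a rough illustration of the idea (a sketch, not the paper’s algorithm), the code below compiles stored sensorimotor samples into a “continuous operator” that maps sensor vectors to control vectors by distance-weighted interpolation, and exposes an applicability test based on proximity to prior experience. The class name, the kernel scheme, and all parameters are illustrative assumptions.

```python
# Hypothetical sketch: a "continuous operator" compiled from sensorimotor
# experience. It maps real-valued sensor readings to control outputs by
# Gaussian-kernel-weighted interpolation over stored samples, and reports
# whether the current input is close enough to prior experience to apply.
import numpy as np

class ContinuousOperator:
    def __init__(self, bandwidth=0.5, applicability_radius=1.0):
        self.bandwidth = bandwidth                    # kernel width for interpolation
        self.applicability_radius = applicability_radius
        self.sensor_log = []                          # compiled sensorimotor experiences
        self.control_log = []

    def compile_experience(self, sensors, controls):
        """Store one (sensor vector, control vector) sample."""
        self.sensor_log.append(np.asarray(sensors, dtype=float))
        self.control_log.append(np.asarray(controls, dtype=float))

    def applicable(self, sensors):
        """The operator applies only near previously compiled experience."""
        if not self.sensor_log:
            return False
        dists = np.linalg.norm(np.array(self.sensor_log) - np.asarray(sensors), axis=1)
        return bool(dists.min() <= self.applicability_radius)

    def control(self, sensors):
        """Map the current sensor reading to a control output by
        kernel-weighted averaging over the stored experiences."""
        S = np.array(self.sensor_log)
        C = np.array(self.control_log)
        d2 = np.sum((S - np.asarray(sensors)) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * self.bandwidth ** 2))
        return (w[:, None] * C).sum(axis=0) / (w.sum() + 1e-12)

# Example: compile a few (range-sensor, steering) samples, then query on-line.
op = ContinuousOperator()
op.compile_experience([1.0, 0.2], [0.3])
op.compile_experience([0.8, 0.5], [0.1])
if op.applicable([0.9, 0.3]):
    print(op.control([0.9, 0.3]))
```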

Read the paper:

Knowledge Compilation and Speedup Learning in Continuous Task Domains

by Juan Carlos Santamaria, Ashwin Ram

ICML-93 Workshop on Knowledge Compilation and Speedup Learning, Amherst, MA, June 1993
www.cc.gatech.edu/faculty/ashwin/papers/er-93-07.pdf

Creative Conceptual Change

Creative conceptual change involves (a) the construction of new concepts and of coherent belief systems, or theories, relating these concepts, and (b) the modification and extrapolation of existing concepts and theories in novel situations. The first kind of process involves reformulating perceptual, sensorimotor, or other low-level information into higher-level abstractions. The second kind of process involves a temporary suspension of disbelief and the extension or adaptation of existing concepts to build a conceptual model of a new situation that may be very different from previous real-world experience.

We discuss these and other types of conceptual change, and present computational models of the constructive and extrapolative processes in creative conceptual change. The models have been implemented as computer programs in two very different “everyday” task domains: (a) SINS, an autonomous robotic navigation system that learns to navigate an obstacle-ridden world by constructing sensorimotor concepts that represent navigational strategies, and (b) ISAAC, a natural language understanding system that reads short stories from the science fiction genre, which requires a deep understanding of concepts that may be very different from those the system is already familiar with.

Read the paper:

Creative Conceptual Change

by Ashwin Ram, Kenneth Moorman, Juan Carlos Santamaria

Invited talk at the 15th Annual Conference of the Cognitive Science Society, Boulder, CO, June 1993. Long version published as Technical Report GIT-CC-96/07, College of Computing, Georgia Institute of Technology, Atlanta, GA, 1996.
www.cc.gatech.edu/faculty/ashwin/papers/er-93-04.pdf

Multistrategy Learning in Reactive Control Systems for Autonomous Robotic Navigation

This paper presents a self-improving reactive control system for autonomous robotic navigation. The navigation module uses a schema-based reactive control system to perform the navigation task. The learning module combines case-based reasoning and reinforcement learning to continuously tune the navigation system through experience. The case-based reasoning component perceives and characterizes the system’s environment, retrieves an appropriate case, and uses the case’s recommendations to tune the parameters of the reactive control system. The reinforcement learning component refines the content of the cases based on current experience. Together, the learning components provide on-line adaptation, which improves performance as the reactive control system tunes itself to its environment, and on-line case learning, which builds an improved library of cases capturing the environmental regularities needed for that adaptation. The system is extensively evaluated through simulation studies using several performance metrics and system configurations.
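To make the loop concrete, here is a minimal sketch of the case-based plus reinforcement-learning cycle the abstract describes: a case maps an environment characterization to recommended schema parameters, the closest case tunes the reactive controller, and a simple reward-weighted update refines that case’s recommendations. The class names, feature and parameter choices, and the update rule are assumptions for illustration, not the system described in the paper.

```python
# Hypothetical sketch of one adaptation cycle: retrieve the case whose
# environment characterization best matches the current situation, apply its
# recommended schema parameters (with a little exploration), observe a reward,
# and nudge the case's recommendation toward what worked.
import numpy as np

class Case:
    def __init__(self, env_features, schema_params):
        self.env_features = np.asarray(env_features, dtype=float)    # e.g., clutter, goal distance
        self.schema_params = np.asarray(schema_params, dtype=float)  # e.g., [goal_gain, avoid_gain, noise_gain]

class CaseBasedTuner:
    def __init__(self, cases, learning_rate=0.1):
        self.cases = cases
        self.learning_rate = learning_rate

    def retrieve(self, env_features):
        """Retrieve the case whose environment characterization is closest."""
        env = np.asarray(env_features, dtype=float)
        return min(self.cases, key=lambda c: np.linalg.norm(c.env_features - env))

    def reinforce(self, case, applied_params, reward):
        """Shift the case's recommendation toward parameter settings that
        earned positive reward (a crude reward-weighted update)."""
        case.schema_params += self.learning_rate * reward * (applied_params - case.schema_params)

# One control cycle: characterize the environment, tune the reactive controller,
# act, observe a reward (e.g., progress toward the goal minus collisions), refine.
tuner = CaseBasedTuner([Case([0.8, 2.0], [0.9, 0.4, 0.1]),
                        Case([0.2, 5.0], [0.5, 0.8, 0.2])])
env = [0.7, 2.5]
case = tuner.retrieve(env)
applied = case.schema_params + np.random.normal(0.0, 0.05, size=3)  # small exploration
reward = 0.6                                                        # observed this cycle
tuner.reinforce(case, applied, reward)
```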

Read the paper:

Multistrategy Learning in Reactive Control Systems for Autonomous Robotic Navigation

by Ashwin Ram, Juan Carlos Santamaria

Informatica, 17(4):347-369, 1993
www.cc.gatech.edu/faculty/ashwin/papers/er-93-09.pdf