Abstract
A hierarchical sensory-information-processing model that achieves sensor fusion is proposed. In this hierarchy, autonomous processing units are interconnected in levels above the sensors, which obtain information from the physical world, and the actuators, which act on it. Each processing unit consists of three basic modules: a recognition module, a motor module, and a sensory-motor fusion module. The focus is on a sensory-motor fusion model using neural networks, which enables the recognition system and the motor system to be tightly coupled. To demonstrate the effectiveness of the processing model, a visual control system architecture for a two-dimensional manipulator is developed, and computer simulation results for a target-holding task are described.
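The three-module structure of a processing unit described above can be illustrated with a minimal sketch. All names and the single-neuron fusion stage are illustrative assumptions, not the paper's implementation; the actual neural-network fusion model is developed in the body of the paper.

```python
import math

def recognition(sensor_reading):
    # Recognition module: map a raw sensor value to a recognized
    # feature (here simply squashed through tanh as a stand-in).
    return math.tanh(sensor_reading)

def motor(command):
    # Motor module: map a motor command to an actuator signal.
    return math.tanh(command)

def fusion(sensory_feature, motor_state, w_s=0.6, w_m=0.4):
    # Sensory-motor fusion module: a single-neuron stand-in for the
    # neural network that couples sensory and motor information.
    # The weights w_s, w_m are hypothetical.
    return math.tanh(w_s * sensory_feature + w_m * motor_state)

def processing_unit(sensor_reading, command):
    # One autonomous processing unit: recognition and motor outputs
    # are combined by the fusion module into a coupled signal.
    s = recognition(sensor_reading)
    m = motor(command)
    return fusion(s, m)
```

In this sketch the fusion output stays in (-1, 1) and is zero when both inputs are zero, reflecting only the structural idea of tightly coupling the recognition and motor pathways.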