Abstract
Modeling and simulating human directional and spatial hearing has long been difficult because of the complexity of the auditory system. The authors approach the problem from the perspective of artificial neural networks, which open new possibilities for developing advanced models of directional and spatial hearing that can learn desired forms of behavior. They discuss the general framework of the topic, possible approaches and model implementations, and interesting applications of such models. Experimental results show that a combination of a dummy head, an auditory preprocessor, and a neural network can learn directional discrimination that in simple cases outperforms human listeners, while also showing some ability to generalize.
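The pipeline summarized above (dummy head → auditory preprocessor → neural network) can be illustrated with a toy sketch. This is not the authors' implementation: the dummy-head and preprocessor stages are replaced here by an assumed spherical-head approximation that yields two classic binaural cues, interaural time difference (ITD, via Woodworth's formula) and a simplified interaural level difference (ILD), and a small numpy network learns left/right discrimination from them.

```python
import numpy as np

# Hedged sketch, not the paper's system: synthetic binaural cues stand in
# for the dummy-head recording and auditory preprocessing stages.
rng = np.random.default_rng(0)

HEAD_RADIUS = 0.0875    # m, assumed spherical head radius
SPEED_OF_SOUND = 343.0  # m/s

def binaural_cues(azimuth_rad):
    """Woodworth's spherical-head ITD plus a crude sinusoidal ILD proxy."""
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + np.sin(azimuth_rad))
    ild = 6.0 * np.sin(azimuth_rad)          # dB, simplified level difference
    return np.stack([itd * 1e3, ild], axis=-1)  # ITD in ms for similar scale

# Synthetic data: azimuths in (-90 deg, 90 deg); label 1 = source on the right
az = rng.uniform(-np.pi / 2, np.pi / 2, size=2000)
X = binaural_cues(az)
y = (az > 0).astype(float)

# One-hidden-layer network trained with plain batch gradient descent
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(500):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    g = (p - y[:, None]) / len(y)             # cross-entropy output gradient
    gh = (g @ W2.T) * (1.0 - h ** 2)          # backprop through tanh
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

acc = ((p[:, 0] > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.3f}")
```

Left/right discrimination from the sign of the ITD is an easy, nearly linearly separable task, so even this tiny network learns it quickly; the paper's harder claims (outperforming listeners, generalization) concern richer cues and finer angular resolution than this sketch models.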