We study how the human visual system processes visual information to support successful interaction with the environment. Our approach combines computational methods and behavioral studies to identify the visual features that establish the mapping between vision and action. Contrary to the commonly held assumption that perception and action stem from separate visual mechanisms, we propose that they form a coordinated system: perception informs action about the state of the world and, in turn, action shapes perception by signaling when it is faulty. Through this integrated approach, quite distinct from most investigations of perception and motor control, we seek to understand:
Visual encoding of 3D object properties: Which visual signals reflect these properties? How are these signals analyzed across disparate viewing conditions? What computational mechanisms model these processes, and how well do such models predict perceptual performance (a toy model of this last question is sketched after this list)?
Visuomotor mapping: How is the 3D visual encoding transformed into motor actions? What is the role of feedback during an action?
Plasticity of the motor mapping and of the visual encoding of 3D information: How do motor actions adapt to faulty visual information? Is it sufficient to change the visuomotor mapping, or does the very perceptual interpretation of the environment need to be reshaped (see the adaptation sketch below)?
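To make the first question concrete, the sketch below implements a textbook maximum-likelihood cue-combination model of the kind often used to predict 3D perceptual performance. This is a generic illustration, not our specific model; the slant value, the choice of cues, and the noise levels are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical generative model: two noisy cues (e.g., binocular disparity
# and texture) each give an unbiased estimate of surface slant, with
# cue-specific noise that depends on viewing conditions. Values illustrative.
true_slant = 30.0       # degrees
sigma_disparity = 2.0   # disparity assumed reliable at this viewing distance
sigma_texture = 5.0     # texture assumed less reliable here

n_trials = 10_000
disparity_est = true_slant + sigma_disparity * rng.standard_normal(n_trials)
texture_est = true_slant + sigma_texture * rng.standard_normal(n_trials)

# Maximum-likelihood combination: weight each cue by its inverse variance.
w_d = sigma_texture**2 / (sigma_disparity**2 + sigma_texture**2)
combined_est = w_d * disparity_est + (1 - w_d) * texture_est

# The model's prediction for the combined estimate's precision:
sigma_pred = np.sqrt((sigma_disparity**2 * sigma_texture**2)
                     / (sigma_disparity**2 + sigma_texture**2))

print(f"disparity-alone SD: {disparity_est.std():.2f} deg")
print(f"texture-alone SD:   {texture_est.std():.2f} deg")
print(f"combined SD:        {combined_est.std():.2f} deg "
      f"(predicted {sigma_pred:.2f})")
```

Under this scheme the combined estimate is more precise than either cue alone, which is exactly the kind of quantitative prediction about perceptual performance that behavioral studies can test.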
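The plasticity question can likewise be phrased in terms of a standard error-driven state-space model of visuomotor adaptation. Again this is a hedged sketch with illustrative parameters, not a description of our experiments or our model.

```python
import numpy as np

# Minimal state-space model of trial-by-trial visuomotor adaptation to a
# visual perturbation (e.g., a rotated cursor). Parameter values illustrative.
A = 0.98             # retention: fraction of learned compensation kept per trial
B = 0.15             # learning rate: fraction of each visual error corrected
perturbation = 10.0  # degrees of perturbation introduced at trial 0

n_trials = 100
x = 0.0                            # internal compensation (degrees)
compensation = np.empty(n_trials)
for t in range(n_trials):
    error = perturbation - x       # visual error observed on this reach
    x = A * x + B * error          # update the visuomotor mapping from the error
    compensation[t] = x

# With retention A < 1 the model asymptotes below full compensation.
print(f"compensation after {n_trials} trials: {compensation[-1]:.2f} deg "
      f"(asymptote {B * perturbation / (1 - A + B):.2f})")
```

Such a model captures adaptation as a change in the visuomotor mapping alone; whether resolving the residual error instead requires reshaping the perceptual interpretation itself is the open question posed above.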