Research

I study the neural systems behind human visual learning. With a better understanding of these systems, we can develop methods, whether human-focused (such as better training paradigms) or machine-focused (such as computer assistance), to improve human performance.

Broadly, I am interested in leveraging recent advances in machine learning, computational modeling, computing resources, and increasingly accurate brain imaging and data collection to pursue a deeper understanding of how the brain functions. I am also fascinated by the interplay between AI and human intelligence, and by what we can learn from the quest to reproduce our intelligence in non-biological entities.

Vision

As perhaps our most prominent sense, vision plays a role in most of what we do. It underlies many everyday tasks and supports some of the most dangerous activities we perform daily (e.g., driving). As a sensory system, the output of our visual system is also the input to many other complex systems (such as category learning). Using eye tracking and VR, I seek to understand how these higher-level systems interact with vision and how reliance on visual information can affect performance.

Category Learning

Every object we see requires categorization. From your morning cup of coffee (poison or edible?) to whether to feed your dog (hungry or not?), we are continually categorizing objects and acting on those categories. When this process goes well we often aren't even aware of the judgment we have made. But when an object is categorized incorrectly (for example, a tumor as normal tissue, or a toy gun as a real gun) the results can be dire. By studying how people learn categories, we can develop better training paradigms that improve accuracy and reduce the amount of training required to become an expert categorizer.
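
To make this concrete, one simple formal account of categorization is a prototype model: a new object is assigned to the category whose prototype (average member) it most resembles. Below is a minimal sketch in Python; the two stimulus dimensions and the prototype values are invented purely for illustration.

    import numpy as np

    # Hypothetical two-dimensional stimuli (e.g., size and brightness);
    # prototype locations are invented for illustration.
    prototype_a = np.array([2.0, 8.0])  # average member of category A
    prototype_b = np.array([7.0, 3.0])  # average member of category B

    def categorize(stimulus: np.ndarray) -> str:
        """Assign the stimulus to the category with the nearest prototype."""
        dist_a = np.linalg.norm(stimulus - prototype_a)
        dist_b = np.linalg.norm(stimulus - prototype_b)
        return "A" if dist_a < dist_b else "B"

    print(categorize(np.array([3.0, 7.0])))  # near prototype A -> "A"
    print(categorize(np.array([6.5, 2.0])))  # near prototype B -> "B"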

Visual Perceptual Learning

Visual perceptual learning (VPL) is often defined as a long-term improvement in the ability to perform a perceptual task as a result of perceptual experience. Common VPL tasks include line-orientation discrimination, dot-motion direction judgments, and contrast detection. While VPL has traditionally been treated as a low-level form of visual learning, I am interested in how it operates for more complex objects (such as shapes) and how it interacts with other systems (such as category learning or attention).
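
As an illustration of what this long-term improvement typically looks like, perceptual learning curves are often well described by a power law: performance improves quickly at first, then more slowly. The sketch below (in Python; the session counts, threshold values, and noise level are invented for illustration) fits a power-law curve to simulated orientation-discrimination thresholds.

    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(session, gain, rate, asymptote):
        """Power-law learning curve: threshold falls toward an asymptote."""
        return asymptote + gain * session ** (-rate)

    rng = np.random.default_rng(0)
    sessions = np.arange(1, 11)
    # Simulated discrimination thresholds (degrees), invented for illustration.
    observed = power_law(sessions, gain=4.0, rate=0.7, asymptote=1.0)
    observed = observed + rng.normal(0.0, 0.1, size=sessions.size)

    params, _ = curve_fit(power_law, sessions, observed, p0=[3.0, 0.5, 1.0])
    print("fitted gain, rate, asymptote:", params)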

Computational Cognitive Neuroscience

To study visual learning, I use a combination of mathematical models and behavioral data within the Computational Cognitive Neuroscience (CCN) framework. CCN, developed extensively by my advisor Greg Ashby, ties mathematical models to both neuroscience data and human behavior, producing models that are falsifiable as well as predictive. Briefly, the CCN approach follows four principles (from the paper cited below):

1. A CCN model should not make any assumptions that are known to contradict the current neuroscience literature.

2. No extra neuroscientific detail should be added to the model unless there are data to test this component of the model or the model cannot function without this detail.

3. Once set, the architecture of the network and the models of each individual unit should remain fixed throughout all applications.

4. A CCN model should provide good accounts of behavioral data and at least some neuroscience data.

For a great tutorial on CCN written by Prof. Greg Ashby, see A tutorial on computational cognitive neuroscience: Modeling the neurodynamics of cognition.
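
For a sense of the level at which these models operate, below is a minimal sketch of a single Izhikevich (2003) model neuron, a biologically plausible spiking unit of the kind such models are often built from. The regular-spiking parameters and the constant input current are standard illustrative values, not taken from any particular model.

    # Izhikevich (2003) model neuron:
    #   dv/dt = 0.04*v^2 + 5*v + 140 - u + I
    #   du/dt = a*(b*v - u)
    #   when v >= 30 mV: v -> c, u -> u + d (a spike)
    a, b, c, d = 0.02, 0.2, -65.0, 8.0  # standard regular-spiking parameters
    dt = 0.1                            # integration time step (ms)
    v, u = c, b * c                     # initial membrane potential and recovery
    I = 10.0                            # constant input current (illustrative)

    spike_times = []
    for step in range(int(1000 / dt)):  # simulate 1 s with Euler integration
        v += dt * (0.04 * v ** 2 + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spike_times.append(step * dt)
            v, u = c, u + d

    print(f"{len(spike_times)} spikes in 1 s of simulated time")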

Task Difficulty

Some tasks are more difficult than others, but why? And what can the difficulty of a task tell us about the system performing it? These are the kinds of questions I ask as I seek to understand the brain by figuring out why it fails.

Working on something similar and interested in collaborating (or simply discussing an interesting idea)? Email me at luke_rosedahl@brown.edu