How do we make sense of the visual world around us? Our brain takes a pattern of photons hitting the retina and continually constructs a coherent representation of what we see – detecting objects and landmarks – rather than merely registering an array of pixels. This image processing allows us to perform a range of visual tasks, such as recognizing a friend’s face, finding our way to the grocery store, and catching a frisbee.
However, how the neural circuitry of the visual system achieves these computational feats is largely unknown. Furthermore, visual processing does not occur in isolation, but depends on behavioral state, task demands, and interaction with the world. Strikingly, the underlying neural circuitry is wired up by a range of cellular processes, such as arbor growth, synapse formation, and activity-dependent plasticity, and thus these developmental mechanisms effectively determine how we see the world.
Our research is focused on understanding how neural circuits perform the image processing that underlies complex visual behaviors, and how these circuits are assembled during development. We use in vivo recording techniques, including high-density extracellular recording and widefield and two-photon imaging, along with molecular genetic tools to dissect neural circuits. We have also implemented behavioral tasks for mice so we can perform quantitative psychophysics to measure the animal’s perception, and we use theoretical models to understand the general computational principles being instantiated. Recently, we have extended these approaches to study visual perception in the context of natural behaviors and complex environments, in an effort to understand how the visual system functions in real-world conditions.