From images to meaning: What have deep neural networks taught us about the ventral stream?

When and Where

Wednesday, March 04, 2026 12:15 pm to 1:30 pm
Psychology Lounge; Room 4043
Sidney Smith Hall
100 St. George Street

Speakers

Marieke Mur, Western University

Description

How does the ventral visual stream turn patterns of light into meaningful objects? In my lab, we combine experimental and computational approaches to characterize how object information is represented in human high-level visual cortex. We have shown that the representation is at once categorical and continuous: response patterns encode visual features of intermediate complexity (e.g., eye, round) that are diagnostic of ecologically relevant categories such as faces. Feedforward deep neural networks approximate aspects of this representational geometry, suggesting that relatively simple linear and nonlinear transformations of visual input can produce category-aligned structure. At the same time, our work reveals systematic divergences between artificial networks and human vision, particularly under more challenging and naturalistic conditions. I will argue that deep neural networks are most powerful not as end-point models of the ventral stream, but as controlled systems for probing the learning pressures and constraints that shape human visual representations.


Alternate locations:

Mississauga: CCT 4034

Scarborough: SW 403

Rotman Research Institute: Room 748


Online: https://utoronto.zoom.us/j/85899245173

