Professor Uses Deep Neural Networks to Understand Visual Learning

Aaron Seitz, a professor of psychology.

Visual perceptual learning, or VPL, refers to changes in the ability to detect or discriminate visual stimuli through training or experience. VPL has been argued to give rise to plasticity in a diversity of brain systems, and there is currently much controversy over which parameters of training do, or do not, produce changes in these systems. To date, no models exist that are detailed enough to estimate the distribution of learning across these systems while accounting for both behavioral changes and learning at the level of individual neuronal units.

Aaron Seitz, a professor of psychology at UCR, is a coauthor of a paper, “Deep neural networks for modeling visual perceptual learning,” published in the Journal of Neuroscience, that uses a deep neural network, or DNN (currently one of the most widely used model classes in machine learning), in a novel way to serve as a model system of VPL. The advantage of this DNN is that it can be studied at many levels of analysis.

“We trained a deep neural network model in a similar way that humans and animals had been trained for perceptual learning studies so that we could compare changes in the model to what have been found behaviorally and in the brain,” said Seitz, who is the director of the UCR Brain Game Center.

DNNs closely simulate human behavior and neural data from early visual areas and the inferior temporal cortex – the area of the brain that is crucial for visual object recognition – and can be flexibly adapted to different tasks, stimulus types, and training programs.

“We found our model showed solutions very similar to what we found in the brain and could also be used to resolve key controversies in the field where different labs had found different brain changes related to learning,” Seitz said.

Seitz and first author Li Wenliang, a graduate student at University College London, were able to resolve these controversies by showing that different parameters of training between those studies led to the differing patterns of results when the artificial network was trained.

“The striking similarities with many studies suggest that DNNs may provide solutions to learning and representation problems faced by biological systems and as such may be useful in generating testable predictions to constrain and guide perceptual learning research within living systems,” Seitz said.

Unlike existing VPL models, the DNN Seitz and Wenliang used was pre-trained on natural images to recognize objects with high precision. It was not designed specifically for VPL.

“Yet, it fulfilled predictions of some existing theories, and reproduced findings related to neurons of the primate visual areas,” Seitz said. “We, therefore, submit that this model can provide ways of studying VPL – from behavior to physiology.”
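The paper's exact network architecture, stimuli, and training protocol are not detailed in this article, but the general approach described above, taking a network pre-trained on natural images for object recognition and then fine-tuning it on a perceptual-learning-style task, can be sketched roughly as follows. The choice of torchvision's VGG-16, the grating orientation-discrimination task, and all hyperparameters here are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only: the network (torchvision's VGG-16), the grating
# orientation-discrimination task, and every hyperparameter below are assumptions
# chosen to convey the general idea, not the study's actual setup.
import torch
import torch.nn as nn
import torchvision.models as models

def make_grating_batch(n, size=224, base_deg=45.0, delta_deg=1.0):
    # Oriented sinusoidal gratings; label 0 = base orientation, 1 = base + delta.
    xs = torch.linspace(-1.0, 1.0, size)
    y, x = torch.meshgrid(xs, xs, indexing="ij")
    labels = torch.randint(0, 2, (n,))
    imgs = []
    for lab in labels:
        theta = torch.deg2rad(torch.tensor(base_deg + lab.item() * delta_deg))
        grating = torch.sin(2 * torch.pi * 8 * (x * torch.cos(theta) + y * torch.sin(theta)))
        imgs.append(grating.expand(3, size, size))  # replicate to 3 channels
    return torch.stack(imgs), labels

# Start from a network pre-trained on natural images for object recognition,
# then fine-tune it on a fine orientation-discrimination task, loosely mimicking
# a perceptual-learning training regime.
net = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, 2)  # two-choice task

optimizer = torch.optim.SGD(net.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):  # stand-in for training "sessions"
    imgs, labels = make_grating_batch(16)
    loss = loss_fn(net(imgs), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Layer activations to trained vs. untrained stimuli, recorded before and after
# fine-tuning, can then be compared to ask where in the hierarchy the
# learning-related changes occur.

In a sketch like this, the comparison of interest is not only the network's accuracy on the trained task but how the responses at different layers change with training, which is the kind of layer-by-layer analysis that makes such a model useful for linking behavior to physiology.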

The research was supported by the Gatsby Charitable Foundation and National Institutes of Health.

Iqbal Pittalwala

