Led by Dartmouth's James Haxby, Neuroscientists Unlock Shared Brain Codes


Dartmouth College Press Release
Contact the Office of Public Affairs
603-646-3661 / 603-646-2850 (fax)
office.of.public.affairs@dartmouth.edu

A team of neuroscientists at Dartmouth College has shown that different individuals' brains use the same, common neural code to recognize complex visual images.

Dartmouth neuroscientist James Haxby studies the encoding of visual images in the brain. (photo by Joseph Mehling '69)

Their paper is published in the October 20, 2011, issue of the journal Neuron. The paper's lead author is James Haxby, the Evans Family Distinguished Professor of Cognitive Neuroscience at Dartmouth. Haxby is also the director of the Cognitive Neuroscience Center at Dartmouth and a professor in the Center for Mind/Brain Sciences at the University of Trento in Italy. Swaroop Guntupalli, a graduate student in Haxby's laboratory, developed software for the project's methods and ran the tests of their validity.

Haxby developed a new method called hyperalignment to create this common code and the parameters that transform an individual's brain activity patterns into the code.

The parameters are a set of numbers that act like a combination that unlocks that individual's brain's code, Haxby said, allowing activity patterns in that person's brain to be decoded, specifying the visual images that evoked those patterns, by comparing them to patterns in other people's brains.

"For example, patterns of brain activity evoked by viewing a movie can be decoded to identify precisely which part of the movie an individual was watching by comparing his or her brain activity to the brain activity of other people watching the same movie," said Haxby.
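The core idea of aligning one person's activity patterns to another's can be illustrated with a small numerical sketch. The following is a minimal, hypothetical example, not the authors' published pipeline: it assumes each subject's responses form a time-by-voxel matrix and uses an orthogonal (Procrustes) transformation, solved via SVD, as the "combination" that maps one subject's patterns into a reference space.

```python
# Minimal sketch of hyperalignment-style mapping (hypothetical data, not
# the published method): find an orthogonal transform that rotates one
# subject's response patterns into a reference subject's space.
import numpy as np

rng = np.random.default_rng(0)

# Rows = time points (e.g., moments in a movie), columns = voxels.
reference = rng.standard_normal((200, 50))

# Simulate a second subject whose patterns are a rotated, noisy copy.
rotation = np.linalg.qr(rng.standard_normal((50, 50)))[0]
subject = reference @ rotation + 0.1 * rng.standard_normal((200, 50))

# Orthogonal Procrustes solution: R minimizes ||subject @ R - reference||.
U, _, Vt = np.linalg.svd(subject.T @ reference)
R = U @ Vt
aligned = subject @ R

def mean_corr(a, b):
    """Average row-wise correlation between two pattern matrices."""
    return float(np.mean([np.corrcoef(x, y)[0, 1] for x, y in zip(a, b)]))

print("before alignment:", round(mean_corr(subject, reference), 2))
print("after alignment: ", round(mean_corr(aligned, reference), 2))
```

Before alignment the two subjects' patterns are essentially uncorrelated; after applying the learned transform they match closely, which is what makes cross-subject decoding possible.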

When someone looks at the world, visual images are encoded into patterns of brain activity that capture all of the subtleties that make it possible to recognize an unlimited variety of objects, animals, and actions.

"Although the goal of this work was to find the common code, these methods can now be used to see how brain codes vary across individuals because of differences in visual experience due to training, such as that for air traffic controllers or radiologists, to cultural background, or to factors such as genetics and clinical disorders," he said.

Because of variability in brain anatomy, brain decoding had required separate analysis of each individual. Although detailed analysis of an individual could break that person's brain code, it didn't say anything about the brain code for a different person. In the paper, Haxby shows that all individuals use a common code for visual recognition, making it possible to identify specific patterns of brain activity for a wide range of visual images that are the same in all brains.

The team showed that a pattern of brain activity in one individual can be decoded by finding the picture or movie that evoked the same pattern in other individuals.

Participants in the study watched the movie Raiders of the Lost Ark while their patterns of brain activity were measured using fMRI. In two separate experiments, they viewed still images of seven categories of faces and objects (male and female human faces, monkey faces, dog faces, shoes, chairs, and houses) or six animal species (squirrel monkeys, ring-tailed lemurs, mallards, yellow-throated warblers, ladybugs, and luna moths). Analysis of the brain activity patterns evoked by the movie produced the common code. Once the brain patterns were translated into the common code, including responses that were not evoked by the movie, the researchers detected distinct patterns that were shared across individuals yet specific to fine distinctions, such as monkey versus dog faces, or squirrel monkeys versus lemurs.

This work is part of a five-year collaboration with signal processing scientists at Princeton University.

Latarsha Gatlin