Stanford researchers find that kids see words and faces differently from adults

Image credit: Getty Images

By Nathan Collins

Young children literally see words and faces differently from adults. Whereas adults comprehend a word most easily by looking at it straight on, children need to look a bit up and to the left. For faces, they need to look a bit up and to the right.

What’s more, those differences are accompanied by previously undetected changes in the brain circuits responsible for processing words and faces, researchers report Feb. 23 in Nature Communications.

“Kids’ window onto the world is different from adults,” said Jesse Gomez, a graduate student in the Stanford Neurosciences PhD Program and the lead author of the new study. Studying that window could help researchers better understand how children learn to read and recognize faces – and perhaps better understand dyslexia and autism as well.

What are you looking at?

Intuitively, if you want to get a good look at something – a word, a face, or pretty much anything else – you ought to look straight at it, and indeed that’s basically what adults do. After all, visual resolution is highest at the center of gaze, where light falls on a part of the retina called the fovea, so we get the clearest images by looking at something or someone straight on.

Yet even in adults that description is a bit of a simplification, because vision isn’t just about resolution. When it comes to recognizing words and faces, it’s also about how the brain pools and processes information from different parts of the visual field – the entire range of things we can see, not just the center. Meanwhile, researchers know only a little about visual processing in children and how that processing changes as kids grow up.

To tease things apart and to start to explore how visual processing develops in the brain over time, Gomez, his adviser Kalanit Grill-Spector, a professor of psychology and a member of Stanford Bio-X and the Stanford Neurosciences Institute, and colleagues invited 26 children between the ages of 5 and 12 and 26 young adults between the ages of 22 and 28 into the lab.

There, each participant lay in an fMRI brain scanner and, without moving their eyes, watched a bar sweep across different places on a screen. By correlating where the bar was with the regions that lit up in the fMRI images, the team mapped how the visual world is represented in the brain.

In a separate scan, the same participants looked at various images, including words and faces, to identify which regions of the brain process faces, words and other objects. Combining the two scans revealed which parts of the visual field the face and word regions were most sensitive to – for example, where in the visual field the brain was looking to find words.
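
To make the logic of that combination concrete, here is a minimal toy sketch in Python (not the study’s actual analysis pipeline): each voxel in a hypothetical category-selective region gets a receptive-field center from the retinotopy scan and a word-selectivity weight from the localizer scan, and a selectivity-weighted average of the centers gives a rough estimate of where in the visual field that region “looks.” All names and values below are invented for illustration.

```python
import numpy as np

# Toy illustration, not the study's pipeline: combine two hypothetical measurements
# per voxel -- a receptive-field center (from the sweeping-bar retinotopy scan) and
# a word-selectivity weight (from the localizer scan) -- into one coverage estimate.

rng = np.random.default_rng(0)
n_voxels = 200

# Hypothetical receptive-field centers (x, y) in degrees of visual angle,
# loosely clustered to the right of and below fixation.
rf_centers = rng.normal(loc=[1.0, -0.5], scale=1.5, size=(n_voxels, 2))

# Hypothetical word-selectivity weights (responses to words vs. other images),
# clipped so that only positively selective voxels contribute.
word_selectivity = np.clip(rng.normal(loc=0.5, scale=0.3, size=n_voxels), 0, None)

# Selectivity-weighted average of the receptive-field centers: a rough summary
# of which part of the visual field this region samples.
coverage_center = np.average(rf_centers, axis=0, weights=word_selectivity)

print(f"Estimated coverage center (deg): "
      f"x = {coverage_center[0]:+.2f}, y = {coverage_center[1]:+.2f}")
```

A coverage center that sits below and to the right of fixation, for example, would correspond to the kind of shift the researchers report for children’s word circuits: to bring words into that region’s coverage, a child would have to look a bit up and to the left.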

Kids see differently

In adults, the visual circuits for both words and faces looked for those things straight ahead – just as the scientists expected.

But the situation is different in children. For one thing, children’s circuits for words process a different region of the visual field, one that is shifted down and to the right, compared to adults. That means that in order to process words most efficiently, kids would need to look a bit up and to the left.

There are also some intriguing differences between the vision circuits on the left and right sides of the brain, Gomez said. In children, both sides respond fairly similarly to words and faces. But by young adulthood, the left side is more responsive to words, while the right is more responsive to faces, especially when those things are in the center of vision.

That, Gomez said, suggests a kind of competition exists for the prime real estate in the brain region that processes the center of the visual field. “If they both use central vision, you might think they’d be fighting,” Gomez said. The solution seems to be this: face circuits get a larger expanse in the right hemisphere, while word circuits get their pick in the left hemisphere.

Those results could help researchers better understand disorders associated with processing words (such as dyslexia) or faces (such as autism). The results could also help researchers better understand how kids learn to read or recognize faces, although Gomez cautioned that much more research is needed before reaching any conclusions.

“There could be a more optimal strategy if we catered to the differences children have from adults,” Gomez said, or “it could be that this is the natural course of development,” in which case adjusting how we teach kids to read, for example, could be counterproductive.

Additional Stanford authors include Vaidehi Natu, Brianna Jeska, and Michael Barnett. The research was funded by grants from the National Science Foundation and the National Institutes of Health.