Infants’ holistic but inefficient image perception
One of the reasons we get faster at tasks with practice is that our perception of the activity becomes more efficient. During our first attempt at, say, using a tool, we're not sure which details matter to our success, so we mentally wade through all of them. With experience, we pare down the list of things we pay attention to, even filtering in lessons from indirectly related experiences, which lets us work within a much more focused framework than before. Babies, of course, sit at the opposite end of this spectrum: to them, just about everything is new, so each little detail needs to be noticed and weighed, which means they're actually taking in far more detail than adults do.
Comparing objects versus conditions
A milestone in our perception is when we start to appreciate perceptual constancy. It's a little like object permanence, the understanding that just because you can't see an object doesn't mean it has stopped existing and can't be rediscovered. Rather than dealing with what's unseen, though, perceptual constancy is what lets us mentally attach attributes to what we do see, and understand that a change in appearance doesn't necessarily mean a change in the object itself. A red shirt put under a blue light looks different, but it's still the same shirt; adult brains understand that only the circumstances changed. After the first few months of life we start to take this kind of thing for granted, but researchers have found that it also comes at the cost of noticing a variety of other details.
Tiny babies, however, can still get caught up in such details. In the study, infants were shown sets of images of 3D-rendered objects to see which one they'd find novel as the "odd one out." While adults generally settled on one grouping, the babies indicated, through their sustained attention and interest, that other, stronger similarities better grouped the images. The researchers couldn't see what the babies were seeing, of course, but they could design and seed the images with such differences in the computer before rendering them. So while adults got hung up on two objects looking glossy and a third looking matte, the babies looked past those concepts of surface and fixated on shared pixel intensity.
Evaluating every color
Pixel intensity is a term from image analysis that, unsurprisingly, rarely comes up in everyday speech. As a mental shortcut, adults usually focus on the objects within an image and on what's happening to those objects, judged against life experience. Pixel intensity instead comes from surveying all the color and light values in an image at once: the more pixels share a single, discrete value, the more heavily that value weighs in the image's overall makeup. So in the experiment above, while the adults were looking at a 3D object, the babies were comparing the color ratios across the whole image. They basically see the world through histograms.
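The histogram idea can be made concrete with a small sketch. This is not code from the study, just a minimal illustration of what a purely histogram-based comparison does: it tallies how often each intensity value appears and ignores where in the image the pixels sit. The images here are synthetic stand-ins.

```python
from collections import Counter
import random

def intensity_histogram(pixels):
    # Tally how often each discrete intensity value (0-255) occurs,
    # then normalize the counts into a distribution.
    counts = Counter(pixels)
    total = len(pixels)
    return {value: n / total for value, n in counts.items()}

def histogram_distance(pixels_a, pixels_b):
    # L1 distance between two intensity distributions;
    # 0 means both images contain the same mix of values.
    hist_a, hist_b = intensity_histogram(pixels_a), intensity_histogram(pixels_b)
    values = set(hist_a) | set(hist_b)
    return sum(abs(hist_a.get(v, 0) - hist_b.get(v, 0)) for v in values)

rng = random.Random(0)
scene = [rng.randrange(256) for _ in range(4096)]   # stand-in "rendered image"
shuffled = scene[:]
rng.shuffle(shuffled)                               # same pixels, rearranged
brighter = [min(v + 40, 255) for v in scene]        # same scene, lit differently

# Rearranging pixels leaves the histogram untouched, so this comparison
# cannot tell the two images apart -- spatial structure is invisible to it:
print(histogram_distance(scene, shuffled))   # 0.0
# Changing the lighting shifts the whole distribution, so the distance grows:
print(histogram_distance(scene, brighter))
```

Note the contrast with adult-style perception: the shuffled image would look like noise to us, yet its histogram is identical to the original, while the brighter image, which we'd recognize as "the same scene," differs sharply by this measure.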
As neat as this kind of perception may seem, it's not the easiest way to interact with the world. So as babies' brains develop, they start pruning back the information they rely on, leaning on experience-based mental models to speed up their perception. It's one of several known shifts in perception, another being the ear we start with for every potential spoken phoneme before narrowing down to the "useful list" of sounds our immediate family makes.
Source: What Little Babies See That You No Longer Can by Susana Martinez-Conde, Scientific American