Monitoring the synced mental activity of speakers and listeners
The qualities of a good conversation may seem somewhat ephemeral, but new technology now allows us to observe when two people really understand each other. This is possible largely thanks to how brains process information: to describe something, we activate some of the same regions of the brain that listeners use to ‘decode’ what we’re saying. The trick is then to see which parts of the brain are active, but until recently that’s required a large, noisy fMRI machine, which isn’t exactly conducive to conversation…
Fortunately, for brain-scanning on the go, researchers at Drexel and Princeton have been working on functional near-infrared spectroscopy (fNIRS). The wearable headband sends near-infrared light at multiple wavelengths through the head, tuned to pass through skin, tissue and bone. Hemoglobin does absorb those wavelengths, though, so by measuring how much light scatters back to its detectors, the device can track the concentration of oxygenated blood around the brain. More neural activity requires more blood flow, allowing us to map which part of the brain is working at any given moment. Unlike fMRI, fNIRS can only capture the outer layers of the brain, but that’s still sufficient to watch someone listening.
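To make that a bit more concrete, here’s a rough sketch of the arithmetic behind an fNIRS reading, based on the modified Beer-Lambert law that this kind of analysis typically relies on. The wavelengths, extinction coefficients and path lengths below are placeholder assumptions for illustration, not the numbers used by the actual headband:

```python
import numpy as np

# Rough sketch of the modified Beer-Lambert law that fNIRS analysis is
# typically built on. Every number below is an illustrative assumption,
# not a value from the device or the study.

def optical_density_change(intensity, baseline):
    """How much dimmer the returning light is compared to a resting baseline."""
    return -np.log(np.asarray(intensity) / np.asarray(baseline))

# Assumed extinction coefficients [HbO, HbR] for two near-infrared
# wavelengths (arbitrary illustrative units): deoxygenated hemoglobin
# absorbs more around 760 nm, oxygenated hemoglobin more around 850 nm.
EPSILON = np.array([
    [1.5, 3.8],   # ~760 nm
    [2.5, 1.8],   # ~850 nm
])
PATH_LENGTH = 3.0            # source-detector separation in cm (assumed)
DPF = np.array([6.0, 5.0])   # differential pathlength factors (assumed)

def hemoglobin_changes(intensity, baseline):
    """Estimate changes in oxy- and deoxyhemoglobin concentration from
    light intensities measured at the two wavelengths."""
    delta_od = optical_density_change(intensity, baseline)
    # Modified Beer-Lambert law: delta_OD = (EPSILON @ delta_C) * d * DPF,
    # solved here as a 2x2 linear system for delta_C = [dHbO, dHbR].
    lhs = EPSILON * (PATH_LENGTH * DPF)[:, None]
    return np.linalg.solve(lhs, delta_od)

# Example: slightly less light returns at both wavelengths than at rest,
# consistent with more oxygenated blood arriving in that patch of cortex.
d_hbo, d_hbr = hemoglobin_changes(intensity=[0.95, 0.92], baseline=[1.0, 1.0])
print(f"ΔHbO ≈ {d_hbo:+.4f}, ΔHbR ≈ {d_hbr:+.4f}")
```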
As a test, researchers had people tell a story in English and in Turkish, both in front of a live audience and on video. English-speaking participants then listened to these stories while their brain activity was monitored and compared with the storytellers’. When listening to English, attentive listeners essentially synced up with the speakers, both in recordings and in person. Hearing the story in Turkish didn’t have the same effect, indicating that the key element was being able to hear and understand the words being spoken.
Shared sentiments
Overall, the data support the idea that listeners process the words they hear in much the same way as the people saying them. This state of ‘alignment’ fits with similar fMRI data from other studies, as well as with work mapping how language processing is spread across the brain.
Interestingly, there was one key difference between listeners and speakers: listeners’ brains were slightly delayed in their activation. Even in tightly aligned brains, a bit of lag is introduced by the listener hearing and processing the speaker’s words. This isn’t an inconsistency in the model, though; it actually supports the idea that listeners take in information from the speaker to rebuild the described mental image; they just need a moment to unpack it all first.
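The exact analysis isn’t described here, but one common way to quantify this kind of speaker-listener coupling, including the delay, is to correlate the two activity traces over a range of time lags and look for the peak. The toy sketch below does this with synthetic signals, so the sampling rate, delay and noise level are all illustrative assumptions:

```python
import numpy as np

# Toy illustration (not the study's actual analysis pipeline): measure how
# well a listener's activity tracks a speaker's, and at what delay, by
# correlating the two signals at a range of time lags.

rng = np.random.default_rng(0)
sample_rate = 10.0                       # samples per second (assumed)
t = np.arange(0, 60, 1 / sample_rate)    # one minute of "brain activity"

# Synthetic speaker signal, and a listener who follows it ~1.5 s later.
speaker = np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.standard_normal(t.size)
listener = np.roll(speaker, 15) + 0.3 * rng.standard_normal(t.size)

def lagged_correlation(x, y, max_lag):
    """Pearson correlation between x[t] and y[t + lag] for lags 0..max_lag;
    a peak at a positive lag means y trails x by that many samples."""
    lags = np.arange(max_lag + 1)
    corrs = []
    for lag in lags:
        n = x.size - lag
        corrs.append(np.corrcoef(x[:n], y[lag:])[0, 1])
    return lags, np.array(corrs)

lags, corrs = lagged_correlation(speaker, listener, max_lag=50)
best = lags[np.argmax(corrs)]
print(f"Peak speaker-listener coupling at a lag of {best / sample_rate:.1f} s "
      f"(r = {corrs.max():.2f})")
```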
Source: New Findings Show How Our Brains 'Align' When We Communicate by David Nield, Science Alert