On May 27th, 2018 we learned about

Our memories and imagination may make nouns the most mentally demanding part of speech

Some things are hard to say, but according to one easily quantifiable metric, “I’m sorry” and “I love you” are easy. While sharing one’s feelings can be emotionally difficult, none of those words is all that hard to produce, because none of them is a common noun. A study of human speech across nine languages from around the world found that people are most likely to show a bit of cognitive strain before saying nouns, more than before any other part of speech. In a weird way, the issue may come down to our brain’s attempt to familiarize itself with what we’re saying before we say it.

The study analyzed recorded conversations from Mexico, Siberia, the Himalayas, the Amazon and the Kalahari Desert. While the specific grammar and vocabulary of these languages obviously differed, a commonality turned out to be how much people hesitate before saying a noun in the middle of a sentence. No matter if someone paused silently or with an “uh” or “um,” they were 60 percent more likely to take an extra moment before saying a noun than a verb. Even difficult or unfamiliar verbs weren’t this likely to require a pause, suggesting that there was something about how a noun is handled in the brain that makes us take an extra moment.

Stopping to see what we’re saying

Researchers suspect that these pauses may be due to our brains trying to conceptualize nouns as we try to say them. When we think of something, our brain brings that information into our working memory, often “seeing” it in our mind’s eye. So when we say “dog,” our brain will give us at least an abstract image of a dog, and that moment of internal observation may cost us enough time that we need to break the rhythm of a spoken statement. As further evidence of the extra effort nouns require, researchers point out that we often avoid restating actual nouns in spoken conversation, replacing them with pronouns like “it” and “that” as much as possible. Verbs apparently aren’t so taxing, as they don’t cause us to pause, even if we have to keep explicitly using a verb over and over.

Some of this may eventually be confirmed by observing the brain activity of people during casual conversations, looking at how much one’s own speech activates memory and the visual cortex. The fact that these pauses were so widespread suggests such a pattern will turn up. Having analyzed 288,848 words in nine languages, researchers are confident that these pre-noun pauses are something universal to human cognition, rather than weird tics of a specific culture’s customs or grammar.

Source: Why You Say 'Um' Before Certain Words by Mindy Weisberger, Live Science

On May 9th, 2018 we learned about

By five months old, babies prefer the sound of each other’s babble over any words adults can offer

Once a child learns to speak, it’s only a matter of time before they start parroting their parents’ favorite phrases. Even things you didn’t know you had a habit of saying are suddenly returned to your ears, albeit in a cuter, squeakier voice. It sort of feels like hearing your own voice in a recording (“Do I really say that?”) but it’s also slightly flattering, as you realize how you’re influencing the thoughts and actions of this new little person. It turns out that this influence may be overstated though, at least if your child has other babies to listen to.

Preference for the sounds of their peers

Many studies have investigated babies’ interest in speech at around seven months old, since that’s often when they’re just beginning to experiment with speech in the form of babbling. We know that the sing-song “baby voice” adults often use with kids helps these tiny listeners engage, although they benefit from hearing normal conversations as well. These interactions may all be compromises though, as it seems that what babies really love to hear is the chatter of other babies, even over the sound of their own parents’ voices.

That wouldn’t be surprising in the context of teenagers, but this preference for the sounds of one’s peers apparently starts as early as five months old. In an experiment, babies could trigger recordings of vocalizations by turning to look at a checkerboard pattern on a television. They spent more time looking, and therefore listening, when the sounds they triggered were from another baby.

Seeking the secret ingredient in babies’ speech

Presumably those babies aren’t exactly amazing conversationalists, so researchers assume that there’s something about the quality of their voices that holds their peers’ attention. Even when an adult woman matches the exact pitch of a baby’s vocalization, it’s not as interesting to a listening child. With the help of a special synthesizer built to imitate human speech, researchers are hoping to deduce exactly what aspect of a baby’s proto-babble makes it so fascinating. If adults can somehow command that kind of attention, we may be able to help children’s language development. Or better yet, we may learn how to better hold our kids’ attention when we’re trying to talk to them.

Source: From the mouths of babes: Infants really enjoy hearing from their peers, EurekAlert!

On December 3rd, 2017 we learned about

Using languages invented in a lab to reveal the brain’s linguistic biases

From a scientific perspective, the problem with languages is the cultures that help create them. There are over 7,000 languages spoken on Earth today, but they’re all so closely tied to cultural learning that it’s hard for researchers to be sure which aspects of language come from history, and which come from our brains. To study how human brain structure influences language, researchers needed a language that was somehow free of the baggage that comes from being spoken, read and shared for thousands of years. Since such a language obviously wasn’t in use anywhere around the world, the only solution was to make up some new ones for the lab.

The hypothesis was that while human languages have been shaped and steered by events in the outside world, they were probably also shaped by the structure and functionality of our brains. These traits, called linguistic universals, would then be the foundation for all human language, even if they were modified or dressed up later on. If people could be tested using new languages that were designed to be free of these modifications, the underlying mental mechanisms that come straight from our gray matter would hopefully be revealed.

Keeping associated words close together

To test how brains handle a culture-free language, English speakers without other linguistic experience were taught two synthetic languages over the course of three days. The languages were different from both English and each other, making it easier to compare how people worked with these mental frameworks. Once proficient at these new ways of speaking, test participants were asked to explain a task in the synthetic languages, forcing their brains to make use of the new words and rules. The first pattern to emerge was related to word order, or more specifically, word dependencies.

Word dependency is when one word is partnered with another to complete an idea. For example, a sentence with just “29” doesn’t tell you nearly as much as “November 29.” All languages, including the two synthetic languages in this study, use this concept, but this experiment revealed a bias towards shorter dependencies. When speaking in either synthetic language, test participants regularly tried to pair dependent terms as closely in a sentence as possible, rather than allow them to drift apart through the sentence.
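The bias toward short dependencies can be quantified: counting how many word positions separate each head from its dependent gives a rough measure of the memory load a sentence imposes. Here is a toy sketch of that measure; the function name and the example position pairs are invented for illustration, not taken from the study.

```python
# Toy illustration (not from the study): measure how far apart each
# head word and its dependent sit in a sentence. Dependencies are
# given as (head, dependent) pairs of zero-based word positions.

def total_dependency_length(dependencies):
    """Sum the distance between each head and its dependent."""
    return sum(abs(head - dep) for head, dep in dependencies)

# "November 29" style: each dependent sits right next to its head.
close_pairs = [(0, 1), (2, 3)]
# The same pairings, but the dependents have drifted apart.
far_pairs = [(0, 3), (1, 2)]

print(total_dependency_length(close_pairs))  # 2
print(total_dependency_length(far_pairs))    # 4
```

The first ordering totals 2 (each pair is adjacent), the second totals 4, which is the kind of extra distance the test participants tended to avoid.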

Aiming for cognitive efficiency

This may seem intuitive because we’re so used to it, but that’s probably the result of how much our brains prefer that grammatical construction. Researchers suspect that by pairing dependent words together as much as possible, our brains may be reducing the cognitive load to parse the sentence’s meaning. When the dependent terms are joined together, they can share the same space in our memory, vs. taking up multiple “slots” in what’s a fairly limited mental resource. This word order isn’t universal among the 7,000+ languages humans speak today, but it’s more common than it should be if it were strictly coincidental, indicating that the basis for many linguistic rules may stem from our brains’ cognitive abilities.

Source: Why do we see similarities across languages? Human brain may be responsible, Medical Xpress

On November 14th, 2017 we learned about

The evolution of empty space in how we read and write

It’s a safe bet that you’re reading this text silently, because that’s how text works, right? You read the words presented, process their meaning in your mind, and then maybe relate them to others verbally or by sharing the text directly (hint hint). At this point, I’m going to guess that you’re wondering why this is even being spelled out to you: this is how reading works, right? As it turns out, this concept of reading is relatively new. For the majority of human history, people had a very different relationship with words on a page, starting with all the parts that they didn’t even bother to transcribe in the first place.

Skipping vowels or spaces

While alphabets have existed for nearly 4,000 years, not all of them recorded language the way a modern English speaker is used to. Languages related to Aramaic, like Hebrew and Phoenician, had vowel sounds but didn’t write them down. Instead, consonants were strung together, and readers had to use context to figure out what word the author intended. As difficult as that may sound, some modern languages, including Arabic, still use writing systems that skip most vowels. European languages did eventually start incorporating vowels thanks to the ancient Greeks, although when they started writing them, they dropped some critical punctuation as a trade-off.

By the time The Iliad was written down, Greeks had more or less given up on spaces between words, adopting a system now known as scriptio continua. Each word was spelled out in full, but entiresentencesweremashedtogether. It seems like this would just convince the Greeks to declare “too long, didn’t read,” and to an extent they did. This is because the purpose of writing at that time was primarily to record someone’s oral statements so that they could be recited for a new audience. Mashing words together without spaces made reading a bit more cumbersome, but that was fine when the point of a written scroll was to read words aloud to other people. The reader could see what sounds to pronounce, and so plowing through an endless line of phonemes worked fine.

Adding spaces for accessibility

As human societies came into more and more contact with each other, scriptio continua started causing problems. For hundreds of years, the few people in Europe who could read were satisfied with this kind of writing, since most listeners were never going to see the text itself. However, in the ninth century, Irish scribes hit a snag, as the manuscripts they were copying proved extremely difficult to parse. As native Celtic speakers, the scribes couldn’t simply “hear” the gaps in the words the way speakers of Romance languages like Italian or French could, and so they started incorporating other cues into the writing to help make sense of things. They started with line breaks, giving each sentence its own line on a page. That was followed by spaces between words, with the Book of Mulling being the first volume transcribed in a way that wouldn’t immediately scare off a modern reader.

Adding spaces to a page did more than make sentences more intelligible. As the practice spread across Europe, it started to influence people’s relationship with writing. This new writing no longer put the emphasis on sounds and speech, allowing the reader to become the primary audience instead of a performer. With that change in focus, making a page pleasing to the reader sparked new ideas in graphic design, since those efforts would help readers and writers alike. The printing press and subsequent accessibility of reading materials obviously cemented the idea of reading as something people could do by themselves, but the simple act of putting spaces between words was a key step towards making reading a goal unto itself. Even so, it’s certainly not time to think that our relationship with the written word has been settled.

Trading space for more speed

The amount of writing in the world today is unmatched in history. More people are expected to read more than ever before, and huge amounts of that text are found on electronic devices that are starting to sever our relationship with the physical media text once appeared on. Modern graphic design loves white space more than ever, and many people would advise that the most accessible writing is a list of short phrases, as opposed to “walls of text” that will scare readers away. However, as more writing appears on screens, there’s a chance that we could give up spacing in an entirely new way, displaying only one word at a time on your screen.

Instead of scanning your eyes across a sentence, text can be animated, with each word changing in the center of your screen so your eyes can focus on one physical location. It’s a significantly faster way to read, as moving your focus across a page or screen slows you down just a bit. As we consume more writing on a daily basis, reading at 600 to 1,000 words per minute may start to sound pretty attractive, as much as that would baffle the ancient Greeks.
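This one-word-at-a-time technique is usually called rapid serial visual presentation (RSVP), and the timing behind it is simple arithmetic: at a target reading speed, each word stays on screen for 60 divided by the words-per-minute rate. A minimal terminal sketch of that timing; the `rsvp` function and its defaults are my own illustration, not something from the article.

```python
import time

def rsvp(text, wpm=600):
    """Show one word at a time in a fixed screen position,
    pausing 60/wpm seconds per word; returns the word count."""
    words = text.split()
    delay = 60.0 / wpm  # e.g. 600 wpm -> 0.1 seconds per word
    for word in words:
        # '\r' redraws the same line, so the eye never has to move.
        print(f"\r{word:<24}", end="", flush=True)
        time.sleep(delay)
    print()
    return len(words)

rsvp("Reading one word at a time trades spacing for speed", wpm=600)
```

At 600 words per minute, each word gets 60/600 = 0.1 seconds on screen, which is why eliminating eye movement matters so much at these speeds.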

Source: No Word Unspoken by Daniel Zalewski, Lingua Franca

On July 20th, 2017 we learned about

Babies pick up second languages better when spoken to in “parentese”

Baby brains are language sponges. Among all the lessons the world offers a tiny child, from gravity’s effect on spoons to how expressive adults can be about picking those spoons up, babies are also doing their best to make sense of everything people around them are saying. This is true even when more than one language is spoken in the baby’s household, which is how most bilingual people pick up their first and second languages. Beyond the practical upside of being able to speak with a wider range of people in the world, being bilingual has been linked to a number of cognitive benefits. Since not everyone has a chance to grow up in a bilingual household, researchers have been looking at ways to help more children learn more languages.

Fortunately, while the ability to learn a language seems to be part of human genes, the specific language you learn is not. That bit of cultural information can be shared from anyone, and often babies will learn a second language from child care providers, schools, or the local population in general if parents speak a different language at home. There are also programs and classes, even for the very young, to help pick up a second language. As an experiment, a sort of minimal language course was established to help Spanish speakers in Spain learn English, and the results show that full immersion isn’t necessary for a child to make progress.

Learning languages in just one hour a week

The experimental class lasted 18 weeks, but it only met for one hour a week. During that hour, instructors catered to the type of interaction the participating 7- to 33-month-olds would receive at home. This meant that rather than more formal practice and instruction in English, the babies and toddlers were taught with something closer to the baby talk their parents were using at home to teach them Spanish. The idea was that those types of interactions aren’t just fun for the babies, but they’re actually something human brains have evolved to expect in order to learn to speak.

To evaluate the effectiveness of this “parentese” approach, children were outfitted with special vests that recorded their speech, both in the experimental class and in a more traditional English class offered by the Madrid school system. Researchers then tallied up how many English words the kids used, and how often they used them. To see what stuck, their English was evaluated again 18 weeks after the instruction ended. The results showed that parentese worked better with these young kids: their English was stronger at the end of 36 weeks, with those kids having retained around 74 words compared to the 18 words retained by more traditional students. Importantly, this held true even for kids from very different backgrounds, with children from both higher- and lower-income neighborhoods responding well to this type of instruction.

This doesn’t necessarily prove that these kids will all speak English fluently in a couple of years, but it does show that learning a second language doesn’t require intense investments of energy. Most likely, it means that the best way to teach a language is to use the tools evolution has created for a baby’s or toddler’s brain during this crucial developmental period when they’re primed to learn.

Source: New Study Shows How Exposure To A Foreign Language Ignites Infants’ Learning, Scienmag

On February 2nd, 2017 we learned about

Even brief exposure to a language can offer bilingual benefits later in life

When my daughter was around one-and-a-half, she started going to a daycare that was run by a native Russian speaker. Many of the other families at the daycare were Russian, and so it was a place of daily exposure to a second language. At home, while we could speak a little Russian, we spoke English, leaving our toddler to try and figure out the two languages at the same time. Russian words understandably lagged behind English, but eventually my daughter was able to speak with a pretty good accent, at least to my non-native ear. Once she left the daycare though, my daughter’s second language quickly faded, and she now says she doesn’t remember more than one or two words, despite having once been capable of simple conversations.

New research suggests that some of that Russian, or at least the sounds required to listen and speak, is likely retained in my daughter’s brain. Rather than tracking my daughter, the studies looked at children born in Korea who then moved to the Netherlands as babies or toddlers. Nobody expected these kids to retain any vocabulary, but tests found that they did hold on to Korean pronunciation when they needed it.

Saying specific sounds

With a pool of both Korean-born and non-Korean-born Dutch speakers, participants were asked to take Korean pronunciation tests. While many languages have some overlapping sounds, or phonemes, it’s not unusual that some sounds don’t intersect. In these tests, there was particular interest in “p”, “t” and “k” sounds, which have three variations each in Korean, two in English, but only one in Dutch. Test participants were asked to vocalize these phonemes, and then had their pronunciation rated by native Korean speakers.

The tests showed that children who had had some exposure to native speakers as babies, even for as little as three months, could pronounce the Korean-specific sounds better than people who were first exposed to them later in life. This contradicts some language-acquisition research that had indicated these sounds were slowly amassed and mastered by a baby’s brain, only becoming ready for use at around 12 months of age, when many people say their first real words. Instead, it seems that even brief exposure to different sounds can make a lasting impression, and that that information can be retrieved later in life.

So if my daughter ever gets back into it, she’ll probably be able to speak Russian with a better accent than I ever will. Even if she doesn’t end up passing for a native speaker, she’ll still have some benefits from having listened to them at an early age.

My three-year-old said: A goat is a коза!

Oh right— he goes to the same daycare, so he’s in the middle of all this as well. We’ll see if he sticks with it or is English-only by Kindergarten too.

Source: Infants Exposed to Languages Can Retain Them Later in Life by Greg Uyeno, Live Science

On January 13th, 2016 we learned about

Making sense of our language’s lack of scents

If vocabulary reflects a culture’s interests, then we really don’t care about smells. English has multitudes of terms for how things look, and can even offer a variety of ways to see, gaze, stare, observe, gape, gawk or witness them. Smells seem to take a back seat to our other senses, with only a small handful of words that apply exclusively to how things smell, like stinky, fragrant or musty, and even stinky gets co-opted for other purposes now and then. Otherwise, we just compare smells to their sources without getting into much more nuance than that. This isn’t surprising considering how sophisticated and primary human vision is, but two Southeast Asian populations show that this bias may be cultural instead of physiological.

Specifying smells

The Jahai people of Malaysia and the Maniq of Thailand both have a wider variety of terms used exclusively to describe smells. While these vocabularies don’t necessarily rival the number of visual experiences in English, they’re enough to help shape the thoughts and experiences of their speakers. With as few as 15 words for different smells, these peoples can organize and categorize types of smells in ways that English speakers would never consider. For instance, the word itpit describes the smell of an otter-like animal called a binturong, some soap, flowers and the durian fruit. English would best approximate the smell by saying it’s “like” popcorn, but to a native speaker itpit is as elemental as saying something is red in English.

This degree of sophistication in language has shaped the Jahai and Maniq peoples’ lives to a degree. Everything from customs to cooking is influenced by this heightened odor-awareness, with greater concerns over when smells from foods or people might be mixing in an unwanted way. Tests also indicate that both populations are better at identifying smells than Westerners, possibly just as a result of paying more attention to them.

Classifications from chemistry

Beyond cultural differences, these terms for smells may offer clues about chemistry as well. In the same way that potentially harmful substances are often lumped together as having bad or gross smells, pleasant smells might unintentionally be groupings of other key ingredients. Itpit, the popcorny, binturong smell, is largely used to describe flowers and plants with medicinal qualities. If a common ingredient can be found in each item, our idea of what constitutes a ‘pleasant’ smell may have originated from something as utilitarian as medicinal qualities of flowers and herbs.

Source: Why Do Most Languages Have So Few Words for Smells? by Ed Yong, The Atlantic