
Developing Meaning


This is the third essay in a five-part series. Read the previous essay here.


How do infants learn the meanings of their first words? Since infants can’t look them up in a dictionary, the mechanism by which they learn words offers an escape from the infinite circularity of meaning: an entry point into its origin.


Infants’ altriciality and plasticity enable them to fine-tune their inherited representations

As a human fetus begins to form, its brain develops according to the genetic plan that was forged by evolution. Neurons assemble to selectively respond to basic sensory patterns, like the geometric orientation of lines and the pitch contours of sounds. Some of these neuronal ensembles combine to detect higher-order patterns that serve as foundational representations. We’re born with a visual module that preferentially responds to shapes arranged in an inverted triangle (a “face”), as well as an auditory template that categorically differentiates the basic units of speech (“phonemes”).


Many of these foundational representations are not unique to humans; they’ve been highly conserved across animal species. Examples of this “core knowledge” include number approximations, object representations, and in some species, even an implicit theory of mind.


What is unique to humans is that our language links our system for representation with our system for communication. In doing so, it allows us to endlessly combine and recombine representations, creating new ones that can themselves be endlessly combined and recombined. We build abstraction upon abstraction and pack them into words like “freedom,” “justice,” and “civilization.” This is how language serves as our psychological escape velocity.


But how did our systems for representation and communication become linked in the first place? Because development and evolution share many parallels, each provides a window into the other, mutually shedding light on the mechanisms that support how we understand and create meaning.


Our capacity for language depends upon two hallmarks of our species: our altriciality, or relative immaturity at birth, and our neural plasticity. Infants’ altriciality requires that their caretakers keep them in close proximity, which means they’re surrounded by social signals from birth. Combined with their plasticity, this social immersion guides infants to tune into the faces and voices around them, as well as the words they produce, to an extreme degree.


While visual tuning can’t begin until the infant is born, auditory tuning begins before birth. Because coarse-grained aspects of auditory input, like rhythm, pass through the womb, infants prefer the rhythmic patterns of their native language at birth. In their first months of life, they begin tuning into more fine-grained features of their native language, including its phonemic repertoire. And by the second half of their first year, they’ve mastered reference—the foundation of semantic meaning—for a handful of words.


How do infants get from perceptual preferences for speech to the foundations for meaning in less than a year? This is another way of asking how our communication and representational systems became linked. The answer, at least in part, has to do with the extreme degree of infants’ developmental tuning. Far more than in other species, these tuning processes significantly modify many of our evolutionarily inherited representations. Moreover, they become intertwined in a series of developmental cascades and positive feedback loops that hint at the evolutionary origins of various human capacities, including language itself.


Infants’ developmental tuning reorganizes their brain to learn language

Language development is at the center of these cascades and feedback loops. Infants' extensive language exposure across modalities (i.e., faces, voices, and gestures) refines and elaborates upon our inherited capacity for primate-general vocalizations, allowing us to perceive and eventually produce an incredibly complex speech signal. The complexity and precision of our speech, in turn, refines and elaborates upon our inherited representations (i.e., core knowledge), allowing us to model the world in wholly new ways. For example, language allows us to get beyond approximate number representations to invent things like calculus, statistics, and information theory—all building blocks for modern science and technology.


Infants’ tuning processes involve a neural reorganization that makes them increasingly better at recognizing and discriminating the sounds of their native language and, at the same time, increasingly worse at recognizing and discriminating foreign sounds. This perceptual trade-off enables infants to more efficiently process the sounds of words that make up their native language.


Infants’ increasing efficiency in processing the sounds of words enables them to begin systematically linking these sounds with their real-world referents. In turn, infants’ growing number of links between words and their referents enables them to begin generalizing these links to a range of abstract categories of meanings (e.g., nouns, verbs, adjectives).
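To make this word-referent linking concrete, here is a minimal sketch of cross-situational learning: a toy co-occurrence model, not a claim about the underlying research. The scenes, word lists, and object labels are invented for illustration; across many ambiguous scenes, the referent that co-occurs most reliably with a word wins the mapping.

```python
from collections import defaultdict

def cross_situational_learner(scenes):
    """Toy learner: count how often each heard word co-occurs with each
    visible object, then map each word to its most frequent companion."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, objects in scenes:
        for w in words:
            for o in objects:
                counts[w][o] += 1
    return {w: max(refs, key=refs.get) for w, refs in counts.items()}

# Hypothetical "scenes": the words an infant hears paired with the objects in view.
scenes = [
    (["look", "doggy"], ["dog", "ball"]),
    (["doggy", "run"],  ["dog", "tree"]),
    (["nice", "ball"],  ["ball", "dog"]),
    (["the", "ball"],   ["ball", "cup"]),
]

print(cross_situational_learner(scenes))
```

After only four scenes, “doggy” already maps to the dog and “ball” to the ball, while incidental words like “look” remain ambiguous: a crude analogue of how consistent co-occurrence, accumulated over time, can stabilize word-referent links.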


Consider the following illustration. When an infant repeatedly hears the word “doggy” in a variety of contexts, most of which include the presence of their family dog, their brain changes in several ways.


First, the auditory neurons that correspond to the phonemes (ˈdɔːɡɪ) begin to consistently fire together. The regularity with which these neurons fire together leads them to wire together. This new wiring of neurons forms a “perceptual placeholder”—a firing pattern whose consistency enables it to become linked with neurons that encode other perceptual representations that co-occur with the word, such as the dog’s wet nose and pink tongue, its soft fur and loud bark, and the gentle breeze of its wagging tail.


Second, the increasing interconnections among neurons within the perceptual placeholder further stabilize its selective responsiveness. When the infant hears the word “doggy,” their brain conjures the various representations (encoded across neuronal ensembles) associated with their family dog. These representations function like a unit, increasing the probability that activation of any one of them (e.g., those that correspond to the sound of the word “doggy”) activates the rest (e.g., those that correspond to the dog’s scent).


Third, when the infant’s caretakers point to other “doggies” on neighborhood strolls, the infant encodes the similarities between their dog and the other dogs, ignoring the differences. The word “doggy” becomes linked with increasingly abstract representations of dogs, such as “furry animals with long faces that wag their tails.” Conversely, when the infant’s caretakers call out to their family dog, Boris, as he plays with the other dogs in the park, the infant encodes details that differentiate Boris from the other dogs. Connections among neurons that encode the specific details about Boris are strengthened.


In other words, when the same word is consistently applied to different objects, it functions as an invitation to form a category, whereas when different words are applied to different objects, they function as invitations to individuate. (To be clear, the millennia-old debate about how we form concepts and categories still persists, but there’s strong evidence for these generalizations.)
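The “fire together, wire together” dynamic behind the perceptual placeholder described above can be caricatured in a few lines of Hebbian learning. This is a deliberately minimal sketch; the feature units and learning rate are assumptions made for illustration, not a model taken from the essay.

```python
import numpy as np

# Illustrative units: the word form plus percepts that co-occur with the family dog.
units = ["word_doggy", "wet_nose", "soft_fur", "loud_bark", "wagging_tail"]
n = len(units)
W = np.zeros((n, n))   # connection strengths, initially unwired
rate = 0.1             # assumed learning rate

# Each encounter with the dog co-activates all of these units;
# the Hebbian update strengthens connections between co-active units.
encounter = np.ones(n)
for _ in range(20):
    W += rate * np.outer(encounter, encounter)
np.fill_diagonal(W, 0)  # no self-connections

# Later, hearing "doggy" alone activates only the word unit...
activation = np.zeros(n)
activation[units.index("word_doggy")] = 1.0

# ...and the learned connections spread that activation to the associated percepts.
recalled = W @ activation
for unit, strength in zip(units, recalled):
    print(f"{unit:12s} {strength:.1f}")
```

The point of the toy model is only that repeated co-activation leaves behind a connectivity pattern through which the word alone can partially reinstate its associated percepts, mirroring the unit-like behavior described above.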


When the infant encounters a new object that is subsequently named for them, the infant perceives the object in a new way, even in the absence of the newly learned word. That is, the infant’s brain responses to an object before and after learning a name for the object are different.


Collectively, this evidence indicates that words “flip” what might be thought of as “neural switches” that alter how we perceive things in the world. We perceive a world full of nameless entities fundamentally differently than a world full of named ones.


Infants’ language learning leads them to infer the existence of other minds

Humans aren’t the only social species that have evolved intricate communication systems. But the linkage between our words and our thoughts—our systems for communication and representation—has led us to create new concepts, like the concept of “mind.” These evolutionarily recent concepts alter how we interact within our social environments.


As infants tune into the faces and voices around them—intrinsically rewarding signals that reinforce infants’ attention toward them—their brains detect temporal synchronies: Talking faces contain a lot of redundant, multisensory information. This redundancy scaffolds language development, permitting infants to parse the speech stream and then link individual words to their real-world referents.


Infants’ increasing comprehension of words is also rewarding, which further reinforces their tuning into the faces and voices producing them. This not only strengthens the perceptual-motor neural connections that underlie speech production, but it also leads infants to manipulate their caregivers’ attention to solicit even more information about the world by jointly focusing on the objects and events around them.


This “joint attention,” combined with infants’ extreme social-perceptual tuning, leads to the critical shift that enables us to acquire language (and many other things unique to our species): the spontaneous inference of a “mind.”


In other words, infants infer a hidden cause to explain the perceptual redundancies of talking faces, including the power of these talking faces to predict co-occurrences of objects and events in infants’ environment. This hidden cause reduces the entropy, or uncertainty, of those perceptual signals by forming an implicit placeholder for what will ultimately become a rich, explicit representation of other minds.


Simply put, minds simplify things.
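“Minds simplify things” can be given a rough information-theoretic reading: conditioning on a hidden cause lowers the entropy, or uncertainty, of what is observed. The sketch below uses made-up probabilities and a two-state “intention” variable purely for illustration; it is not a model drawn from the essay.

```python
import math
from collections import Counter

def entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy joint distribution over (hidden intention, observed cue pair).
# When the caretaker intends the dog, gaze and speech usually both point to it.
joint = {
    ("dog",  ("gaze_dog",  "say_doggy")): 0.45,
    ("dog",  ("gaze_dog",  "say_ball")):  0.05,
    ("ball", ("gaze_ball", "say_ball")):  0.45,
    ("ball", ("gaze_ball", "say_doggy")): 0.05,
}

# Entropy of the observable cues when the hidden cause is ignored.
obs_marginal = Counter()
for (cause, obs), p in joint.items():
    obs_marginal[obs] += p
H_obs = entropy(obs_marginal.values())

# Conditional entropy of the cues once the hidden cause is known.
H_obs_given_cause = 0.0
for cause in {c for c, _ in joint}:
    cond = {obs: p for (c, obs), p in joint.items() if c == cause}
    p_cause = sum(cond.values())
    H_obs_given_cause += p_cause * entropy(p / p_cause for p in cond.values())

print(f"H(cues)        = {H_obs:.2f} bits")
print(f"H(cues | mind) = {H_obs_given_cause:.2f} bits")
```

With these invented numbers, positing the latent intention lowers the uncertainty about the incoming cues from roughly 1.5 bits to 0.5 bits, a cartoon of how a single hidden cause can compress, and thereby simplify, a redundant perceptual stream.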


Once infants have a placeholder for their caretakers’ minds, they are able to intentionally manipulate them to learn more words. Infants first learn words that refer to concrete, observable objects in the world around them (i.e., nouns), then words that describe relationships between and features of these objects (i.e., verbs and adjectives), and eventually more abstract words (e.g., concepts for time) that are grounded in more concrete ones (e.g., concepts for space).


Abstract words are a particularly good example of how meaning is created by combining and recombining other concepts. But this combining and recombining of concepts to create new ones would be too fragile to enable us to flexibly create meaning if it weren’t for another power of words, above and beyond reference.


Continue with Semantic Meaning.

