
Semantic Meaning


This is the fourth essay in a five-part series. Read the previous essay here.


When we have a word for a given concept, we are much more likely to encode, retrieve, and link it with other concepts. Words stabilize concepts and the many ways in which we combine them. A seemingly arbitrary assortment of concepts can become integrated into a story cued by a single word. The word creates a new level of abstraction that neatly packs together these concepts, allowing us to discover previously hidden patterns and forever expand our conceptual horizons. As philosopher Ludwig Wittgenstein said, “The limits of my language mean the limits of my world.”


Words stabilize our thoughts

In stabilizing our thoughts, words trigger a positive feedback loop. We increasingly attend to and remember signals in the environment for which we have names; we increasingly ignore those for which we don’t. We build a web of concepts from the named signals that readily activate one another, furthering their stability and interconnectedness. Potential signals that remain unnamed are likely to be rendered as noise, undifferentiated from one another and therefore incapable, except by random chance, of eliciting our attention.


This brings us to how semantic meaning is made in the mind. The meaning of a concept is its relative position within its interconnected network of associations distributed across the brain. Semantic meaning, like its sister concept of value, is inherently relative. Once again, Wittgenstein's words are prescient: “[In most cases] the meaning of a word is its use in the language.”


We’ve seen how some conceptual associations were stabilized over the course of our evolution. These inherited concepts, like our schematic representation of faces and our categorical perception of phonemes, were important for our survival and are therefore “given” to us. While our experience tunes these concepts, they remain relatively stable over the course of development.


We’ve also seen how the human brain is not limited to building models of the world as a function of its evolutionary history. Unlike those of other species, the vast majority of our concepts do not unfold invariably as we develop, subject only to minimal tuning. Human learning isn’t constrained in this way.


Instead, language enables us to dynamically construct a potentially infinite set of new concepts and to continually revise them. As language evolved in our species, it progressively rewired our brains, which enabled language itself to become progressively more complex, which in turn enabled it to continue rewiring our brains over the course of our development.


Therefore, while animals can learn a few arbitrary associations (e.g., Pavlov’s salivating dogs and Skinner’s superstitious pigeons), humans can build entire cathedrals of associations. It’s these conceptual cathedrals that provide the scaffolding for our minds and our communities.


Semantic meaning is probabilistic

But concepts, unlike cathedrals, aren’t set in stone. They’re set in the brain, a prediction machine that calculates probabilities. This means that semantic meaning is probabilistic. The neural network that underlies a given concept is defined by interconnections of various strengths, or “weights.” These weights are slightly modified every time a concept is activated as a function of its current context.
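

To make this concrete, here is a minimal sketch in Python of a Hebbian-style update, in which every activation of a concept nudges its association weights toward the current context. The vectors, indices, and learning rate are invented for illustration; this is a toy model of the idea, not a claim about how the brain implements it.

```python
import numpy as np

# A toy "concept" as a vector of association weights to other concepts.
# Indices 0..3 might stand for, say, "dog", "bark", "leash", "loyal".
weights = np.array([0.9, 0.7, 0.4, 0.6])

def activate(weights, context, learning_rate=0.05):
    """Activate the concept in a context and nudge its weights.

    `context` marks which associated concepts are currently active.
    Each activation shifts the weights slightly toward the context,
    so the concept's meaning drifts with use.
    """
    return weights + learning_rate * (context - weights)

# Repeatedly activating "dog" in contexts that emphasize "leash"
# gradually strengthens that association at the expense of others.
context = np.array([1.0, 0.0, 1.0, 0.0])
for _ in range(10):
    weights = activate(weights, context)
print(weights.round(2))
```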


To illustrate, think about where a concept goes when it’s not activated. Of course, it doesn’t “go” anywhere; it simply exists as a potential pattern of activation defined by the relative weights distributed across its neural connections. The concept only “comes to mind”—the meaning only comes into existence—when its associated neural network is activated.


The same is true for memory. We remember facts and events by reconstructing them each time we “bring them to mind.” And yes—they, too, change over time. The only reason we are able to intentionally “bring to mind” concepts and memories in the first place is because we have words, which stabilize them enough for us to cue their recall.


Because the activation of a neural network is slightly different every time, semantic meaning (like memory) is created and recreated on the spot. Semantic meaning is just the act of realizing one potential instantiation of a neural network.
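

One way to picture “realizing one potential instantiation” is to sample a concrete activation pattern from the concept’s weights, so that no two retrievals come out exactly alike. The sketch below, continuing the toy Python model from above (all values illustrative), treats each weight as the probability that its association fires on a given retrieval.

```python
import numpy as np

rng = np.random.default_rng()

def instantiate(weights):
    """Sample one concrete activation pattern from a concept's weights.

    Each weight is read as the probability that the corresponding
    association fires on this particular retrieval, so the "same"
    concept is realized slightly differently every time.
    """
    return (rng.random(len(weights)) < weights).astype(int)

weights = np.array([0.9, 0.7, 0.4, 0.6])
for _ in range(3):
    print(instantiate(weights))  # e.g. [1 1 0 1], then [1 0 0 1], ...
```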


Consider a rather meta example. The words “concept,” “thought,” “idea,” “representation,” “model,” and “pattern” have all been used more or less interchangeably throughout this series of essays. If they all mean the same thing, then why do we have so many synonyms?


Despite appearing perfectly synonymous in one context, they aren’t in every context. Throughout this essay, “representation,” “model,” and “pattern” mean approximately the same thing, but in an art studio, they mean entirely different things. Instead of being independently stable, their specific meanings exist as context-dependent probabilities of interconnected representations; namely, those with which they have co-occurred in the past.
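

A rough way to operationalize this is distributional: a word’s meaning-in-context can be approximated by its past co-occurrences, re-weighted by whatever is active now. The Python toy below uses invented counts for the word “model” purely for illustration; the boost factor is an arbitrary assumption.

```python
# Invented co-occurrence counts for "model" across two settings.
cooccurrence = {
    "model": {
        # essay-like contexts
        "brain": 12, "prediction": 9, "representation": 8,
        # art-studio contexts
        "runway": 7, "pose": 6, "portrait": 5,
    },
}

def meaning_in_context(word, context_words, boost=3):
    """Return the word's associations re-weighted by the current context.

    Associations matching the context dominate, so the "same" word
    resolves to different meanings in different settings.
    """
    assoc = cooccurrence[word]
    boosted = {k: v * (boost if k in context_words else 1)
               for k, v in assoc.items()}
    total = sum(boosted.values())
    return {k: round(v / total, 2) for k, v in
            sorted(boosted.items(), key=lambda kv: -kv[1])}

print(meaning_in_context("model", {"brain", "prediction"}))
print(meaning_in_context("model", {"runway", "pose"}))
```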


Ironically, it’s exactly because words stabilize our probabilistic thoughts and direct our attention that we assume they have stable meanings. For example, we assume that racial categories are biologically homogeneous because we use words like “White,” “Black,” “Asian,” and “Hispanic.” But these categories are socially constructed, not biological.


The paradox of how meaning is made in the brain is that no two people will ever represent even the most basic concepts in the same way. Meaning is relative to each individual person’s particular history of experiences, which tune our neural networks. In this way, every person is an island.


Of course, that doesn’t stop us from building bridges.


Language spontaneously emerges in communities

If language scaffolds our minds, what happens when you bring together a group of people who speak different languages? History’s many examples all tell the same story. Whether these individuals were slaves from different countries brought to the same plantation or deaf children from different homes brought to the same school, communities that don’t already have a common system for communication inevitably create one to converge on common semantic meanings.


The birth of any new language follows the same developmental trajectory. First, when a founding generation that lacks a common language comes together, they create a pidgin. When children are exposed to this pidgin, they turn it into a creole. And when the next generation of children is exposed to this creole, they turn it into a full-fledged language.


This progressive increase in grammatical structure reflects a remarkable feature of the developing human mind, even before it has learned language: It is highly structured, and it superimposes this structure onto the information it receives and expresses, detecting gaps, partial regularities, and redundancies in the communicative signal to make it more complete, systematic, and efficient. How the prelinguistic human mind became structured to acquire language in the first place is still an open question in the scientific community—one that’s been dubbed among the hardest in all of science.


What we do know is this: The ability to link our minds together—to attend to the same signals in a noisy environment and co-create models of the world—significantly enhances our capacity for cooperation and, therefore, the predictive accuracy of our models. Our cooperation allows us to build better tools for farming and thinking alike. We leverage each other’s minds to “divide and conquer” skills and knowledge: Somebody invented the wheel and then shared it with others so that they didn’t have to reinvent it. Each generation builds upon the insights of the last.


No matter how innovative an individual might be, however, cultural innovations (including language) can’t emerge in an individual mind. Instead, they emerge as a function of the need to invent new concepts that align our minds. Semantic meaning is not only distributed across the brain as a network of associations, then; it’s also distributed across individuals. And we can only build cathedrals of meaning—both semantic and existential—because we belong to communities that are motivated to link their minds together by co-creating new models of the world.


Culture is a “cognitive toolkit” that co-evolves with our changing models of the world to continually align our minds and maintain a shared identity. While language is the most powerful tool in this toolkit, others range from cultural artifacts like recipes, fashion, and music to social protocols like how to greet somebody, whom to marry, and what to believe. Like language, culture both reflects and shapes our models of the world. But in a globalized, multicultural, 21st-century world, how can humans converge on a shared existential meaning?


Continue with Existential Meaning.

