Language Limits
The language we use largely defines what we can think about, and how. Color is a good example: when a language lacks a word for a certain color, its native speakers have measurable trouble telling it apart from similar shades. The ancient Greeks, for instance, described both the sea and wine with the same color word. It sounds absurd, but that is how strangely our perception works.
Over the last couple of centuries, though, science has progressed faster than everyday language could keep up. Specialized terminology patched the gap, but only for experts. People unfamiliar with a field either fail to understand it at all or misinterpret it, missing the key ideas. And even for those who know the terms, thinking remains constrained by the structure of the language itself. If we could somehow break free of these inherited patterns, we might grasp things we currently can't even imagine.
This fits the Sapir–Whorf hypothesis, also known as linguistic relativity: language doesn't just reflect how we understand reality, it shapes it. A more modern example is programming languages. Before computers, people didn't think in terms of data streams, logic branches, or abstract computation. Now, thanks to those languages, millions of people think algorithmically, even if they never write code. New languages brought new ways of seeing.
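To make "thinking in data streams and logic branches" concrete, here is a tiny, purely illustrative sketch in Python (the function names are invented for this example): an endless stream of numbers, filtered by conditional branches. Before programming languages, there was no everyday vocabulary for mental objects like "a lazy, infinite sequence you consume on demand."

```python
def numbers():
    """A 'data stream': an endless lazy sequence 0, 1, 2, ..."""
    n = 0
    while True:
        yield n
        n += 1

def first_evens(stream, limit):
    """Walk the stream, branching on each item."""
    out = []
    for n in stream:
        if len(out) == limit:   # logic branch: stop once we have enough
            break
        if n % 2 == 0:          # logic branch: keep only even numbers
            out.append(n)
    return out

print(first_evens(numbers(), 5))  # → [0, 2, 4, 6, 8]
```

The stream itself is infinite; it only "exists" as far as we choose to read it. That idea, obvious to any programmer, is exactly the kind of category the language itself makes thinkable.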
Today's language models already use their own internal representations for optimization, but that's just the beginning. The big changes will come with LSMs. They will create new languages and teach them to us, and whoever learns those languages will be able to think in new categories.
It's a bit like Arrival, the film in which learning an alien language lets scientists perceive time differently. That's science fiction, of course. Or maybe not: perhaps even time is just a flawed linguistic construct, and once we switch to a different language we might see sideways, or glimpse other dimensions, or simply understand the universe on a new level.
Whatever it turns out to be, it will be interesting, important, and fundamental.