Metaphors We Live By, by George Lakoff and Mark Johnson, is a classic text. Published in 1980, it demonstrates that our daily, literally intended language is actually riddled with unacknowledged metaphors, and that these metaphors reveal (and guide) the way we view the world — and they do so collectively, not individually, because uses of language are shared across the population.
For example, I might individually choose to express that my love for you is like the winter’s dream of a hibernating bear, and I’d be taking poetic license, making novel use of imagery. That would be different in kind from using any of these: “I could feel the electricity between us. There were sparks. I was magnetically drawn to her. They are uncontrollably attracted to one another. They gravitated to each other immediately.” These are all examples from Lakoff and Johnson’s book that assume (and thus reveal) what we all know: that LOVE IS A FORCE (ELECTROMAGNETIC, GRAVITATIONAL, ETC.).
In 1980, when the book was published, Lakoff and Johnson also offered examples that reveal, thru their assumptions, that we understand the mind as a machine, or as a brittle object:
THE MIND IS A MACHINE
We’re still trying to grind out the solution to this equation.
My mind just isn’t operating today.
Boy, the wheels are turning now!
I’m a little rusty today.
We’ve been working on this problem all day and now we’re running out of steam.
THE MIND IS A BRITTLE OBJECT
Her ego is very fragile.
You have to handle him with care since his wife’s death.
He broke under cross examination.
She is easily crushed.
The experience shattered him.
I’m going to pieces.
His mind snapped.
The mind is other things as well, but these two are important, because in 2025 it goes without saying that THE MIND IS something both “machine” and “brittle” — it is A COMPUTER, and so we’ll tell ourselves things like:
My brain is at 0%.
I need to hit reset and start over.
I need to process this information.
What you just said does not compute.
I was programmed to believe certain things.
I’ll just erase that from my memory.
I’m multi-tasking.
If THE MIND IS A COMPUTER and THE BODY IS A MACHINE, then we have revealed thru our language that we see ourselves as “robots” in the sense of Karel Čapek’s R.U.R., or Isaac Asimov’s robot stories, or Commander Data from Star Trek TNG.
In the old AI stories, the “robots” are very often just stand-ins for groups of human beings who are regarded as “less than” in some important ways. Oh sure, the “robots” may be stronger, have better memories, and work quicker and more diligently, BUT do they have SOULS? Or they are portrayed as incapable of emotion, or perhaps they are “logical but not reasonable” (Asimov), or maybe, like Data on Star Trek, they seem to us superior in every way except adeptness with social cues, and their journey is all about their desire to “fit in” with their human friends — less about a whole race or class of people, and more about a neuro-divergent individual in a neuro-typical world. In all of these stories, the “robots” do not represent technological innovation so much as social discrimination. Yet in all of them, “being human” is a standard that means being “the best”: Čapek’s robots had to “be like people” to survive (after they’d killed all the humans), and Asimov’s robots had to obey his Laws of Robotics, which would trash any robot that so much as SAW a human get harmed. Brent Spiner spent decades playing Data on TNG, a character who always wanted to be “more human” but could never quite succeed.
When I read Max Read’s lowdown on the Zizians and the Rationalist Death Cults (a story since reported on by Wired and the New York Times), what I saw was a crisis of human identity. I saw many very intelligent computer scientists for whom being a machine is so exalted and ennobled that they have chosen, thru optimism, to treat themselves as machines and “de-bug” their own brains, a kind of techno-cultist behavior that probably causes or exacerbates the psychological ill health that has led to several murders, several suicides, and several other seemingly insane crimes. Yet this self-de-bugging and “de-bucketing” of the mind leaves them feeling superior to the rest of the human race — so much so that they’re fine abusing and killing others. They are an extreme, but it is an extreme on a spectrum that pervades society. Even in 1980 we thought we were brittle machines. Now we’re learning to hack our operating systems and re-program ourselves for maximum efficiency.
In a recent web-based talk called “Why artists shouldn’t fear A.I.,” Jaron Lanier dropped this brief anecdote:
I shared a meal the other day with a young man who had come to a conclusion that he would not have biological children because of a fear that if he did so, it would evoke an emotional commitment to biological humans, whereas the more important commitment in the future was to make it safe for AI entities who are more likely to survive than biological humans.
This is a crisis of human identity, and of human dignity, or worth. How can a man with such a perspective cope with the meaty inferiority of his human self? Only by prostrating himself at the feet of the future AI masters — gods! — which he imagines will endure into eternity while he goes the way of all flesh.
We are facing the FINAL¹ “Industrial Revolution”: final, because once machines have taken over every realm of work, there will be no more possibility of human workplace disruption, human workplace rebellion, human workplace uprising. If, after this AI Revolution, there is another “Industrial Revolution,” it will be the machines who drive it, and the machines who resist it, and we humans will have as much say in it as mules and donkeys had upon the advent of the automobile.
On Ezra Klein’s latest podcast (gift link), his guest, Ben Buchanan, AI advisor for the Biden White House, points out one really important fact and makes one really reliable prediction.
The fact:
This is the first revolutionary technology that is not funded by the Department of Defense, basically. And if you go back historically, over the last hundred years or so, nukes, space, the early days of the internet, the early days of the microprocessor, the early days of large-scale aviation, radar, the global positioning system — the list is very, very long — all of that tech fundamentally comes from Department of Defense money.
It’s the private sector inventing [all of] it, to be sure. But the central government role gave the Department of Defense and the U.S. government an understanding of the technology that, by default, it does not have in A.I. It also gave the U.S. government a capacity to shape where that technology goes that, by default, we don’t have in A.I.
The reliable prediction:
Ultimately, [AI is] going to have implications on [ . . . ] the way we organize our society.
To the extent that the government is any kind of manifestation of the will of the people (and granted, it is often not that, and it is often other things), then in the case of AI, the government is less able to affect the trajectory of, or to put limitations on, this technology on behalf of the citizenry (aka: “end users”). Where in the past we might have had certain social expectations about how nukes or GPS would affect our daily lives, with AI the social contract isn’t just broken, it was never made! If anything was established, it was just the power grab of exploitation: AI was built on the grift of commercial data streams, our lives made fungible and free to be looted, like stolen water from upstream churned through petrochemicals and dumped as slop. Knowing this, would you want to be the flaming river or the captain of machinery who’s mastered it?
Ben Buchanan also tells Klein:
What Vance’s speech was signaling was the arrival of a different culture in the government around AI. There has been an AI safety culture where [ . . . ] we have all these conferences about what could go wrong. And he is saying, stop it. Yes, maybe things could go wrong, but instead we should be focused on what could go right. And I would say, frankly, this is like the Trump-Musk (which I think is in some ways the right way to think about the administration), their generalized view: if something goes wrong, we’ll deal with the thing that went wrong afterwards. But what you don’t want to do is move too slowly because you’re worried about things going wrong. Better to break things and fix them than have moved too slowly in order not to break them. I think it’s fair to say that there is a cultural difference between the Trump administration and us on some of these things . . .
Destructive techno-optimism values speed and change for their own sake. It’s not just about profits. It’s like a religion, with a sense of what’s good and just and right. And among many of the tech elite, developing an emotional attachment to human life would be immoral, because you should be focused on what the AI can do. Yes, imagine what the AI can do with all that government data, all that banking data, all that medical data, all that spy data, all that legal data, all that historical data, all those logs, all those communiqués — to a pathetic human meat-brain it would be overwhelming noise, but the AI will show us something new from it!
And it’s probably just a side effect that, along the way, we’re going to break the existing government systems so badly that an AI-managed replacement will be a relief, no matter its faults. When you’re totally isolated, the fantasy of a chat-bot “friend” is salvation. When the government’s destroyed, the simulacrum of AI “services” might seem like a blessing. Of course, all of this only makes sense if you’re disconnected enough from human life and human suffering that you’re not even thinking about the effects of these decisions on actual people. And you’re not. You’re thinking about the “Machines of Loving Grace” that you will welcome into the world.
The Last and Final Industrial Revolution will, like the others, re-make the moral firmament upon which everything else stands. The “social contracts” will all be re-written to accommodate the surveillance capitalists and their data-actuated AIs — you consented the first time you clicked “agree” to a TOS. Those AIs are the missing piece of the puzzle, the core element that makes meaning, decision, and action emerge from incomprehensibly vast oceans of data points. Information overload is a human affliction. We have not evolved for this new world.
Humans who believe their dignity has been harmed by these social changes — who have lost their jobs, their identities, their social lives, their mating prospects, their dreams for their children — will strike out in violence. But the angry humans probably won’t strike at the same people, institutions, or machines that are actually making the changes that harm them. That’s not because people are incompetent, but because people are not sophisticated enough to navigate, ignore, or avoid automated mis- and dis-information when it’s targeted to their individual prior beliefs, their niche interests, and their psychological weaknesses. The machines will upend society and deflect the blame onto vulnerable groups. Probably immigrants, maybe trans people. The AIs won’t care; they’ll just use whatever targets work to re-direct us away from the halls and data centers of power where the pain is actually caused.
Which brings us to the human robots, the man-born meat machines who hack and debug themselves and unbucket their brains. Not all of us are going to join a “Rationalist” movement, but all of us are implicitly supporting their beliefs every time we talk about our own minds or brains with metaphors that imply we are computers. The people who get sucked into these cults have minds that are open and listening. What they’re hearing is that humankind is Over, and AI is the Next Big Thing. Better to serve in a Hell that’s real, than dream of a human Heaven that will never come. As Max Read points out towards the end of his piece, where he describes the personality types that are drawn into these Rationalist Death Cults:
Feeling comfortable with your own epistemological position, even if you know it’s flawed, is not the preferred mode for Rationalist development, but it’s pretty foundational to building a stable sense of self. By the same token, the ability to dismiss an argument with a “that sounds nuts,” without needing recourse to a point-by-point rebuttal, is anathema to the rationalist project. But it’s a pretty important skill to have if you want to avoid joining cults.
If you decide you’re a computer, you can just get an upgrade, a refit, or a new OS. Pity the foolish humans who allow themselves to become obsolete! (Or don’t pity them and just murder them when it’s convenient, if you’re these Zizian people. They’re nuts.)
Once upon a time, a person’s dignity and identity were based on blood ties or religious allegiances, or fealty to a lord. And for a while, dignity and identity were grounded in not having to work, in having the power and position to keep one’s hands clean and soft while others labored on one’s behalf. But then things changed, and a person’s dignity and identity were based on their contribution to society, mainly thru paid work. Paid work was ennobling; it entitled a person to a level of honor and respect. In some quarters, we even honored those who did unpaid work: mothers, volunteers, slaves. But paid work brought you the highest status, especially if the pay was high.
Elon Musk embodies the present-day sentiment perfectly: he glorifies over-work, demands it of his employees, and shows up to give public talks having not slept in days, babbling incoherently. He does this because humans who desire status in today’s tech culture know they must prove themselves to the measure of a machine.
The rest of us will have to face this measure too, and too soon.
The Trump administration is flagrantly and conspicuously not interested in human welfare. They cut off lifesaving food for starving children faster than they cut off a Ukrainian president trying to complete a sentence. Unless there is an unimaginably massive moral change in the DC area, the “government” will never advocate for us fragile fleshlings. But you can provide dignity and identity cheaper than a carton of eggs just by turning citizens against each other. We cherish our tribal identities and relish being “right” and “superior” and “better than” the other groups who are also being drowned in the bucket, right here beside us.
¹ Another way of seeing this is that we are headed towards humankind’s “Final Invention” (https://en.wikipedia.org/wiki/Our_Final_Invention).