Are AI stories really ABA stories?
Perhaps we fear not the "Other," but becoming the "Other-One"?
Hi everyone! I’ve been reading a lot about technology and teaching a course on AI in Fiction, and I’ve decided to start putting some of these ideas together in a meaningful way. I hope you’ll enjoy these musings the next time you’re watching Severance or Altered Carbon or Black Mirror, or taking in any of the other stories that raise questions about the line between person and program, man and machine.
So believe it or not, the oldest reference I’ve found to Big Data is from 1872 (!!!) — no joke. It’s in Samuel Butler’s Erewhon. In this fictional story, a fictional protagonist goes to a fictional land where modern technologies like steam engines are banished, and, while there, he takes the time to read and partially translate The Book of the Machines, the fictional treatise that persuaded the Erewhonians’ ancestors to nip certain technologies in the bud, before those technologies could evolve to threaten the dominance of Man.
Erewhon was published, initially anonymously, a dozen years after Darwin’s On the Origin of Species, and Butler’s primary innovation in this section was applying evolutionary theory to the progress of machines — but he didn’t stop there. The Book of the Machines also discusses the relationship between man and machine by exploring the power dynamic between slaves and masters. And in an argument dismissing the seeming predictability of machines compared to Man, Butler points out that our inability to predict human behavior is a limitation of information, not evidence that mankind is unpredictable. If only we had enough data (and the machines to process it?), we could certainly predict how masses of people would behave, and perhaps even how an individual would.
…they say that fire applied to dry shavings, and well fed with oxygen gas, will always produce a blaze, but that a coward brought into contact with a terrifying object will not always result in a man running away. Nevertheless, if there be two cowards perfectly similar in every respect, and if they be subjected in a perfectly similar way to two terrifying agents, which are themselves perfectly similar, there are few who will not expect a perfect similarity in the running away, even though a thousand years intervene between the original combination and its being repeated.
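Butler is describing, in modern terms, what a programmer would call a pure, deterministic function: same inputs, same outputs, no matter when you run it, even a thousand years later. Here’s a toy sketch of his coward (the “model” and its names are mine, not Butler’s):

```python
# Butler's coward as a deterministic function: identical inputs always
# produce identical outputs, even if "a thousand years intervene."

def coward_response(courage: float, terror: float) -> str:
    # No randomness, no hidden state: the outcome is fully determined
    # by the inputs, which is exactly Butler's claim about people.
    return "runs away" if terror > courage else "stands firm"

# Two "perfectly similar" cowards facing "perfectly similar" terrifying agents:
print(coward_response(0.2, 0.9))  # runs away
print(coward_response(0.2, 0.9))  # runs away (a perfect similarity in the running away)
```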
Shoshana Zuboff, in The Age of Surveillance Capitalism, explains that “guaranteed outcomes” is the promise of the surveillance economy. What Google and Facebook are selling is predicated on the idea that if they know enough about people, they can predict people, and thereby affect their behavior. It’s just a question of having “all the world’s information.”
The fictional author of The Book of the Machines explains:
The only reason why we cannot see the future as plainly as the past, is because we know too little of the actual past and actual present; these things are too great for us, otherwise the future, in its minutest details, would lie spread out before our eyes . . . The more the past and present are known, the more the future can be predicted.
In fact, he argues, to believe otherwise is intolerable:
The future must be a lottery to those who think that the same combinations can sometimes precede one set of results, and sometimes another.
Isaac Asimov wrote frequently of a similar idea in his Foundation novels — “psychohistory” — the idea that the movements of whole societies can be computed mathematically. There are no surprises, just events we failed to predict because we lacked data, or computing power, or both.
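The statistical kernel of the idea is real enough: even if each individual is a coin flip, the aggregate converges to something predictable. A toy simulation (mine, emphatically not Asimov’s math):

```python
# Noisy individuals, predictable societies: the intuition behind "psychohistory."
import random

random.seed(1872)  # the year Erewhon was published

def individual_choice() -> int:
    # One person's unpredictable yes/no decision.
    return random.randint(0, 1)

for population in (10, 1_000, 100_000):
    yes_votes = sum(individual_choice() for _ in range(population))
    print(f"population {population:>7,}: fraction choosing yes = {yes_votes / population:.3f}")

# The lone individual is a coin flip; the society of 100,000 lands
# within a fraction of a percent of 0.5 every time.
```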
Zuboff recounts the history of Behaviorism, “a scientific psychology” that “restrict[s] its interests to the social and therefore visible behaviors of” what it called “the Other-One,” the movement’s term for a human being viewed this way.
More from Zuboff:
The logical consequences of the new viewpoint necessitated a reinterpretation of the higher order human experiences that we call “freedom” and “will.” [Max] Meyer echoed [Max] Planck in positing that “freedom of action in the animal world signifies the same that is meant by accidents in the world of physics.” Such accidents are simply phenomena for which there are insufficient information and understanding… Meyer wrote: “The Other-One’s conduct is free, uncaused, only in the same sense in which the issue of a disease, the outcome of a war, the weather, the crops are free and uncaused; that is, in the sense of general human ignorance of the particular causes of the particular outcome.”
So human behavior is as subject to universal laws of motion and cause-effect as anything else. Therefore, if we know enough, we should be able to predict all.
The fictional author of The Book of the Machines is arguing, in part, that machines can and will evolve to out-compete mankind; but this fictional author is also arguing that men are machines; that the deep distinctions of kind a person might feel they observe between their son and a steam engine, or a pal and their pocket watch, are actually mere distinctions of degree. After all, we accept that in biology we are related to dogs, mice, nematodes, really to all life on Earth.
Even a potato in a dark cellar has a certain low cunning about him which serves him in excellent stead. He knows perfectly well what he wants and how to get it. He sees the light coming from the cellar window and sends his shoots crawling straight thereto: they will crawl along the floor and up the wall and out at the cellar window; if there be a little earth anywhere on the journey he will find it and use it for his own ends…
No man is a potato any more than he’s a robot, but in our stories the crisis we feel at cunning potatoes and emoting machines taps into a similar fear: the fear that we ourselves are not in control of our actions.
In Severance “innies” control the employees’ bodies while they are at work, and the workers leave at the end of the day with no memories of having been there. They are armed with the human knowledge and intuition (emotion!) they need to do their jobs, but they are divorced from their own identities, families, histories. They have no cause to believe themselves free, and they are quickly punished if they “misbehave.” They are cunning potatoes in an underground Skinner box, uniquely disarmed against the gross manipulations of reward and punishment.
The word “robot” comes from the 1920 play R.U.R. by Czech author Karel Čapek. In this play, “robots” are made in a factory, from batter, according to a secret family recipe that no one alive really understands. These fleshy beings are used in commerce and industry and even in war. The word “robot” in Czech suggests a servant, slave, or serf. What they are is a universal working class that rises up together and kills all humans. All work and no play makes for some very angry robots.
In Asimov’s robot stories, the robots cannot rise up and kill all humans, because Asimov subjected them all to his Three Laws of Robotics:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov’s stories portrayed these laws as being so fundamental to a robot’s programming that purposeful violation is impossible and even accidental violation would leave the robot totaled. Seriously, even witnessing a human be harmed would wreck them. Man, these robots are really not going to rise up and kill all humans!
But at what cost comes this comfort humans take in their robot slaves? If I see someone is about to be killed — hit by a car, for example — and I see that I could somehow save the person, but that doing so would cost me my life, I could act; I could choose to sacrifice my life to save that other person. But one of Asimov’s robots would have no choice. It would have to sacrifice its life to save that human, no matter what.
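In code terms, the Laws form a lexicographic priority: any First Law violation outweighs everything below it, and the robot’s “choice” falls out of the ordering automatically. A minimal sketch of that ordering (the Action class and choose function are my toy inventions, not Asimov’s spec):

```python
# Toy sketch: the Three Laws as a strict priority ordering.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # First Law: injures a human, or lets one come to harm
    disobeys_order: bool = False  # Second Law: violates a human order
    destroys_self: bool = False   # Third Law: fails to protect its own existence

def choose(options: list[Action]) -> Action:
    # Lexicographic ordering: a First Law violation outweighs any number of
    # Second Law violations, which in turn outweigh any Third Law violation.
    return min(options, key=lambda a: (a.harms_human, a.disobeys_order, a.destroys_self))

# The rescue scenario from the text: saving the human destroys the robot,
# while standing by lets the human die. The ordering leaves the robot no choice.
decision = choose([
    Action("dive in front of the car", destroys_self=True),
    Action("stand by and watch", harms_human=True),
])
print(decision.name)  # dive in front of the car
```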
What a grotesque oppression to place on any thinking being: always you must die for us.
In WarGames (1983), a young Matthew Broderick wants to play “Global Thermonuclear War,” and the NORAD computer he accidentally taps into takes him seriously. As the crisis escalates, they start referring to the computer as “Joshua” — which was the secret back-door password, the name of the original programmer’s dead son. Setting aside the really weak password choice, why would you refer to your computer by your password? Weird, yes, but it sure does help personify a bleeping box of bits into an innocent child who’s been handed the world’s most dangerous weapons.
And that is how the story progresses: endless games of tic-tac-toe make the child-machine come to understand the futility of a “game” no one can win. “The only winning move is not to play,” it says at the end — out of the mouths of babes! Or machines!
In WarGames the machine held the power of life and death over the humans, but it didn’t threaten to become one of them. That had already been done, in the 1968 classic 2001: A Space Odyssey — the HAL 9000 was probably the most interesting character in the story, seeming to have nuance and gentleness, as well as homicidal urges and the ability to deceive, while the humans were all fairly flat, just mission-focused behavior-bots going about their routines. In early drafts, Arthur C. Clarke wrote HAL as an artificial intelligence that had to be raised like a child, one that threw toddler tantrums and went through a rebellious phase. A different early draft had Dave going rogue and stealing a pod (in this version the sleeping members of the crew also survived) so that he (not any of the other humans) would be the one to encounter the monolith and evolve into a higher form of life. Imagine if HAL had gotten that chance.
The 2009 film Moon, which plays off the aesthetic of 2001, goes a step further in two ways: first, by making the AI, GERTY (voiced by Kevin Spacey doing his best HAL impression), actually turn out to be a good comrade after creeping us out for 72 minutes; and second, by making it undeniable that the human is the machine. He’s a clone who’s been pre-programmed and manipulated by this controlled moon-base environment. He’s a “spare part” — a necessary cog in a profit-seeking system, routinely worn down (sickness), disposed of (killed), and replaced (next clone, please!). The humans don’t care about the life of a clone. Why should they? It’s not a person, any more than a potato is.
If it be urged that the action of the potato is chemical and mechanical only, and that it is due to the chemical and mechanical effects of light and heat, the answer would seem to lie in an inquiry whether every sensation is not chemical and mechanical in its operation? whether those things which we deem most purely spiritual are anything but disturbances of equilibrium in an infinite series of levers, beginning with those that are too small for microscopic detection, and going up to the human arm and the appliances which it makes use of? whether there be not a molecular action of thought, whence a dynamical theory of the passions shall be deducible? Whether strictly speaking we should not ask what kind of levers a man is made of rather than what is his temperament? How are they balanced? How much of such and such will it take to weigh them down so as to make him do so and so?
What will it take to make you do it?
What will it take to make you not?
In the Black Mirror episode “White Christmas” (2014) we’re introduced to the idea of “cookies”: conscious beings made of code, created by inserting a chip at a person’s temple, where it can shadow or mirror the consciousness long enough to become a second self. That second self, which entirely believes that it is THE self, can then be extracted and manipulated — through control of the environment it experiences and its experience of the passage of time — into confessing a crime, or becoming a domestic servant for the person it thinks it is: a smart-home management system that knows exactly what temperature you prefer and exactly how you like your coffee, because it’s you. At the end of the episode, after extracting their prisoner’s confession from his cookie, the police inform the real man (he’s in a cell, having himself confessed to nothing), then turn the dial on the cookie up to “1000 years a minute” and leave to enjoy the holidays.
But probably the most popular Black Mirror episode is the one that treads closer to reality: “Nosedive” (2016), in which people participate in a social-media ratings system that affects what price they pay for an apartment, what seats they get on an airplane, what car they can rent… and, more subtly, who will talk to them and what kinds of relationships they can have — raising the question of whether they can have real relationships with other people at all, or just mutually ratings-benefiting ones. Are their lives real, or a performance for “stars”? Here is where AI in fiction must ultimately take us, because the machines being manipulated are now not even digital copies of us or clones of us; they are simply us, responding to the social stimulus that follows us wherever we go and defines the way we must behave to thrive in society, or really to participate in society at all.
That’s why, oddly enough, the most interesting novel about “AI” to me contains no computers at all. The “conscious machines” are all humans. It is B.F. Skinner’s novel Walden Two, in which he describes a utopia that some readers back in the ’40s and ’50s mistook for a dystopia, because it depicts a society engineered for everyone’s happiness and success, and that means no one is free to fuck up and fail; no one’s accomplishments are truly their own; and individual success is not evidence of great personal merit. Readers found this disturbing. But Skinner didn’t mean it to be disturbing — he meant it to be inspiring, a model for how humans could efficiently cohabitate in peace through intelligently engineered behavior management: through applied behavior analysis, the ABA of this post’s title.
And so to readers it is a disturbing little paradise, because we read it as artificially intelligent control, even if there isn’t a computer named Joshua calling the shots. This is a utopia that makes us fear for our freedom, fear that we are under outside control.
Seen through this lens, every AI story turns out to be really about us (which we might have guessed): our desire to be free from control, to prove our autonomy and worth, to be part of a human network without becoming a tool of the network, to use the machines without becoming one of them, to be the masters and not the robots.
And likewise, these stories are about our fear that — despite our love of freedom — we are being controlled; that our autonomy is a fantasy; that our “worth” is merely an “accident” physics hasn’t calculated yet; that every human in the network is a tool of the network; that we are all machines, and “masters vs. robots” is a distinction without a difference; that we are wired together seemingly chaotically and incomprehensibly, with no known party in control — and that if such a party exists, their motives, their morals, are entirely beyond our ken.
Everything is fine.
🔥 🐶 ☕ 🔥
[ to be continued ]