“You’re in a desert, walking along in the sand, when all of a sudden you look down and you see a tortoise … You reach down and you flip the tortoise on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t. Not without your help. But you’re not helping.”
Perhaps nothing is more emblematic of Ridley Scott’s 1982 dystopian film Blade Runner than the Voight-Kampff test administered by the movie’s titular law enforcers, including Harrison Ford as Rick Deckard. The questions in the fictional test, such as the one above, are designed to distinguish humans from replicants by provoking a physiological response that indicates empathy. Only true humans, not replicants, feel that emotion. Deckard’s charge is to deal with replicants that start disobeying orders. He and others use the test to decide whether or not to “retire”—kill—the replicants.
Not only do these rebellious androids pose a threat to humans, but in this world, they don’t have any legal rights to protection. How could they, when they’re not considered human?
It’s such an engaging quandary that the story will continue in the long-awaited sequel Blade Runner 2049. Part of the reason for the original movie’s enduring popularity is Deckard’s personal struggle, one that plays out similarly in movies like Her and shows like Westworld: Who or what counts as human, especially in a world of advanced technology?
And to understand it, we have to turn to some very old philosophers.
***
For the ancient Greeks, machines made by gods or exceptionally talented humans often fooled people into believing the androids were authentic, writes Adrienne Mayor in Aeon. King Nabis of Sparta owned a robotic version of his wife, her breast secretly studded with nails; he used the machine to hug citizens who disobeyed him, their flesh pierced by the hidden weapons. And in China, a 10th-century B.C. automaton made by the inventor Yan Shi looked so humanlike, singing and winking at ladies, that the king became enraged at it. Then he learned the truth and marveled at a machine that even had mechanical organs. As the scholar Nigel Wheale writes, “In all periods, ‘human-Things’ have been imagined as entities which test or define the contemporary sense of human value.”
All this is to say that concern over how to distinguish flesh-and-blood humans from machines that merely look human (and over whether those machines pose a threat to us Homo sapiens) isn’t limited to modern times. We’ve always wondered whether all humans really are what they seem to be—which is why Enlightenment philosophers spent so much time dissecting the question of what makes a human, human.
René Descartes, a 17th-century French philosopher who traveled widely across Europe, deeply considered the question of what made us human. It’s no coincidence that his most famous quote is repeated by one of the replicants in Blade Runner: “I think, therefore I am.” And if all that isn’t enough proof of his connection to the film, consider the names: Descartes and Deckard.
As philosopher Andrew Norris points out, Descartes suspected there might someday be a need for a test of whether something was human or machine. “If there were machines bearing images of our bodies, and capable of imitating our actions as far as it is morally possible, there would still remain two most certain tests whereby to know that they were not therefore really men,” Descartes wrote. So he created his own tests, which relied on linguistic ability and flexibility of behavior.
Replicants speak and behave just as humans do, meaning they would pass Descartes’ tests. But there’s another reason Deckard struggles to disprove their humanity: Replicants also have implanted memories. For English philosopher John Locke, what gives a person a sense of self is the continuity of their memories. The human body changes with time, but memories remain, offering a foundation for a stable identity. “As far as this consciousness can be extended backwards to any past Action or Thought, so far reaches the Identity of that Person,” Locke wrote.
So for Blade Runner’s Rachael, the most advanced replicant yet developed, it doesn’t matter that she might only be a few years old; her memories stretch back much further, giving her the impression of having lived much longer. That’s what makes Rachael such a tragic figure: “her” memories don’t belong to her. They come from her inventor’s niece.
“That’s a heartbreaking thing, but you can imagine [the memories] are still special to her even after she learns they’re not truly hers,” says Susan Schneider, professor of philosophy at the University of Connecticut and member of the Ethics and Technology group at Yale. “It’s like finding out you’re the uploaded copy, not the individual doing the uploading. But you still have some special relationship to them. Like a parent.”
But it’s not just memories or rationality that make a human in Blade Runner. Most important of all, according to the Voight-Kampff test, is empathy. Since we can’t read minds or observe them directly, thinkers like the German philosopher Theodor Lipps have argued that empathy is how we perceive that others feel and act as we do.
“The Blade Runner must, ironically enough, test the empathy of others—not, however, in Lipps’ sense, but in that of their sensitivity to a now perished natural world populated by non-human animals,” Norris writes in his paper on the philosophy of the film. This is where the famous tortoise-trapped-on-its-back-in-the-desert question comes from.
“Emotions themselves will never be a perfect test of humanity: sociopaths are human, too, after all,” Deborah Knight, a professor of philosophy at Queen’s University, said by email. “But emotions are more than non-cognitive responses. They help us to make judgments about what we should do and who we should aspire to be.”
This is especially clear in the case of replicant Roy Batty, played by Rutger Hauer. Roy feels human-like emotions and has aspirations, but doesn’t get a human lifespan, Knight said. Roy is aware that, like the other replicants, he has been built to die after a mere four years, which understandably enrages him.
So replicants arguably do feel emotions, and they have memories. Does that make them human? For Schneider, a definitive answer doesn’t necessarily matter. The replicants share enough qualities with humans that they deserve protection. “It’s a very strong case for treating [a non-human] with the same legal rights we give a human. We wouldn’t call [Rachael] a human, but maybe a person,” she says.
For Eric Schwitzgebel, professor of philosophy at University of California at Riverside, the conclusion is even more dramatic. “If we someday create robots with human-like cognitive and emotional capacities, we owe them more moral consideration than we would normally owe to otherwise similar human beings,” he writes in Aeon. “We will have been their creators and designers. We are thus directly responsible both for their existence and for their happy or unhappy state.”
***
Blade Runner is only a movie, and humans still haven’t managed to create replicants. But we’ve made plenty of advances in artificial intelligence, from self-driving cars that learn to adapt to human error to neural networks that argue with each other to get smarter. That’s why, for Schneider, the questions the film poses about the nature of humanity and how we might treat androids have important real-world implications.
“One of the things I’ve been doing is thinking about whether it will ever feel like anything to be an AI. Will there ever be a Rachael?” says Schneider, who uses Blade Runner in her class on philosophy in science fiction. This year, Schneider published a paper on a test she developed with the astrophysicist Edwin Turner to discover whether a mechanical being might actually be conscious. Like the Voight-Kampff test, it is based on a series of questions, but instead of looking for empathy—feelings directed toward another—it looks for feelings about being a self. The test, called the AI Consciousness Test, is in the process of being patented at Princeton.
The test differs from the more famous Turing test, proposed by the mathematician Alan Turing in 1950. In that earlier test, a judge engages in a typed conversation with a hidden participant (much like what you’d experience today in a chatroom), asking questions to discern whether the respondent is human or a machine. But as Schneider points out in her paper, scientists can develop programs that pass the Turing test without being conscious beings. The Turing test is concerned with assessing how closely a machine’s responses resemble a human’s, not with determining whether the machine is sentient. Like the Voight-Kampff test, Schneider’s AI Consciousness Test is about trying to understand what’s happening inside the machine.
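To see concretely what Turing’s setup does and doesn’t measure, here is a minimal sketch in Python. It is an illustration under assumptions, not Turing’s or Schneider’s actual procedure: the scripted_bot, its canned replies, and the sample questions are all hypothetical stand-ins. The structural point is simply that the judge sees only text and must guess what produced it.

```python
import random

def scripted_bot(question: str) -> str:
    # Hypothetical stand-in for a machine respondent: it produces
    # plausible-sounding text without understanding the question.
    canned = [
        "That's an interesting question. Why do you ask?",
        "I'd rather hear what you think first.",
        "Honestly, I've never given it much thought.",
    ]
    return random.choice(canned)

def human_respondent(question: str) -> str:
    # A real person types an answer at the keyboard.
    return input(f"(answer as the human) {question}\n> ")

def imitation_game(questions):
    # Blind the judge: pick a respondent at random, so the judge
    # sees only typed replies, never their source.
    respondent = random.choice([scripted_bot, human_respondent])
    for question in questions:
        print(f"Judge: {question}")
        print(f"Respondent: {respondent(question)}")
    verdict = input("Judge's verdict (human/machine)? ").strip().lower()
    truth = "human" if respondent is human_respondent else "machine"
    print(f"You said {verdict}; it was actually a {truth}.")

if __name__ == "__main__":
    imitation_game([
        "Describe a childhood memory that still matters to you.",
        "How do you know that you are conscious?",
    ])
```

Notice that a verdict of “human” here tells the judge only that the replies were convincing; nothing in the exchange probes whether anything was felt, which is exactly the gap Schneider’s test is meant to address.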
Work like this is urgent, she says, because humanity is not ethically prepared to deal with the repercussions of creating sentient life. What will make judging our creations even harder is the human reliance on anthropomorphism to indicate what should count as a being worthy of moral consideration. “Some [robots] look human, or they’re cute and fluffy, so we think of our cats and dogs,” Schneider says. “It makes us believe that they feel. We’re very gullible. It may turn out that only biological systems can be conscious, or that the smartest AIs are the conscious ones, those things that don’t look human.”
It’s important for scientists to confer with philosophers—which many already do, Schneider says—but also for members of the public to think through the repercussions of this type of technology. And, she adds, not all philosophers agree on the nature of consciousness, so there are no easy answers.
Maybe Hollywood films like Blade Runner 2049 will bring us one step closer to engaging in those conversations. But if they don’t, we’ll have to take on the work of entering the ethical quagmire on our own, sooner rather than later, or we’ll end up with a problem like the replicants and no idea how to respond.