It’s been many years since I’ve read Philip K. Dick’s masterful science fiction novel, Do Androids Dream of Electric Sheep? I’ve watched the movie version, Blade Runner, at least a dozen times, and have fond memories—albeit through nostalgia’s goggles—of the very awesome ’90s point-and-click computer game. I’ve often managed to get details confused among the three. Between the theatrical edit of the movie and the original book, I can recall one controversial change, which involved the protagonist, Rick Deckard, being an android himself—a replicant hunting replicants. The movie removed that part, though in the far better director’s cut, with the voice-over bits thankfully snipped, we see that it’s quite possible Deckard’s memories were planted. This would indicate that he too is a replicant—perhaps the same model as Rachael.
What I’d like to look at, though, is the title of the book itself: Do Androids Dream of Electric Sheep? It’s likely a question meant to ask: Do androids experience consciousness like us? We dream of sheep, metaphorically. We have dialogues that occur within our unconscious minds that layer a deeper reality into our existence—not quite a soul, but a core being. That buried mechanic is what supposedly separates the living from the automaton—biological from mere animated machine. For an amoeba it’d be a pretty simple conversation—something closer to pure instinct. For a human the process would be far more complicated.
Deckard’s world in both the book and the movie seems to have settled on the idea that machines, despite their outward displays of intelligence, belong to a lower class. They’re almost like animals, except you don’t shoot your pet when it escapes. Society and industry have come to a strong conclusion on the matter and protect that conclusion by giving androids built-in expiry dates. They also saddle androids with jobs that people might not otherwise want (like off-world mining) and push them into military duty, as Roy Batty (Rutger Hauer) so eloquently explains in his “attack ships on fire” monologue at the end of Blade Runner.
Though Deckard’s future has made its decision, the title’s query still bothers me, because it should be a question easily resolved. They’re asking whether androids dream as a way of asking if they’re real beyond circuitry and programming—real like us, and by definition meeting the standards of personhood. Replicants could rightly ask us to prove our own existence with the same question. Do you, human, dream? If so, can you really prove it? Are your neural pathways more than highly evolved circuits?
Here, though, dreaming is a benchmark built off memories. Even animals experience original dreams. When my dog kicks and whimpers, I wake her up because I know she’s having a bad dream, which she couldn’t have without her own reference material to draw upon. It’s not a nebulous thing we’re looking to see in the brain of an android—just the larger act of being alive by creating, through original experiences and memories, dreams of one’s own.
The title of the book also raises a broader question, one already asked about the rights of machines once they reach that level of intelligence and understanding. I don’t know how differently we as a human species will deal with androids when confronted with them—our existing track record in matters of humane treatment is pretty shitty.
Animals such as chimps and dolphins have shown amazing levels of intelligence. Dogs are our lifelong compatriots, cats tolerate us, and cows develop close friends. We’ve even seen elephants and whales that mourn. But aside from loosely enforced animal protection laws, they’re still treated as just animals—or simply other. They’re food, pets, and fun things to hunt. They’re kept crowded in industrial farms and slaughtered in great numbers—looking at cows alone, to the tune of 41 million head per year. Add in all forms of table-worthy animals and you’re looking at an estimated 9.7 billion animals killed for food per annum. Despite what we know about them, they’re not given rights or personhood, so maybe it makes sense for Deckard’s future world to come to the same conclusion about androids.
Some of the memories argument could be sidestepped if the androids in Deckard’s world were allowed to build their own foundation of memories, by being grown from a young age (at a time of lower intelligence), treated as children, taught gradually, and transferred from smaller bodies to larger ones as they grew in intellect—much like Motoko Kusanagi in the anime Ghost in the Shell. In the original movie and its various episodic offshoots, we see Motoko struggle regularly with her own consciousness, better known as her “ghost,” and with the fear that in losing it she would become something other than human. With large parts of her society functioning through the use of cybernetic brains, it must have been a conundrum most enhanced people dealt with. Weirdly, it’s a problem moving in the opposite direction from Deckard’s. In Deckard’s world, synthetic intelligence is growing toward inevitable consciousness; in Motoko’s, humans are growing away from it—so much so that consciousness is treated in a pseudo-spiritual way. Sadly, when confronted by the Puppet Master, an artificial intelligence seeking rights and asylum, they can’t recognize their own philosophical proximity to it.
Ultimately, I believe the truth about replicants’ rights lies in Roy’s final rooftop talk with Deckard. The book doesn’t address this in the same way, because Rutger Hauer famously improvised the speech for the film (based loosely on the script). In it, Roy talks about things that only he has seen and known, and how in his death the memories of those things would be lost forever. It’s a brilliant, moving speech that can really tug at the heartstrings. For the sake of our argument and the book’s title, I would say that this fear, highly irrational for a mere machine to have, undoubtedly proves his consciousness.
So, yes, androids do dream of electric sheep.
Featured image credit: Digital Spy