The Yowl! in the machine


Image: (c) Franco Brambilla. Cover for City, Urania Collezione 157. Used with the kind permission of the artist.

Lord, make me the kind of man my dog thinks I am. — Anonymous

Why are our stories of the future so often dark? Tolstoy put his finger on it when he wrote that all happy families are the same, but unhappy families are miserable in their own unique way. Sameness is dull, difference captures our attention. It’s how our brains are wired to work. The familiar is usually not a threat, but a novelty entering our field of view might be. So we focus on the new, senses on alert for danger, at least subliminally. The lizard brain is also hard to overrule when it comes to our Netflix streaming choices. Stories of the Apocalypse, and the shenanigans of various and sundry incarnations of the Four Horsemen — Death, Plague, War, and Famine — hold our attention. Stories of pleasant idylls are boring.

But, it’s best that we don’t fool ourselves into thinking our love of sci-fi disaster stories means they are predictive, rather than just entertaining. Fun as they are, if popcorn-fueled carnage and mayhem are all we use to feed our hungry imaginations, then perhaps it’s not surprising that we often feel anxious about future prospects. This would count as a failure of storytelling, that very human activity our species has used for millennia as a tool to find and create meaning in radically new circumstances. Consider the rise of the robot. When doing the serious work of ruminating about the future, perhaps we’d feel less anxiety about living in a world full of intelligent machines if we first looked for a model of it in our shared lives with animal companions.

The 1952 science fiction novel City, by Clifford Simak, is a story cycle of the far future told by dogs to their pups about a mythical being called ‘Man’ who once lived in ‘cities’ and practiced ‘war’. In the distant past, according to these canine legends, cities had depopulated from fear of nuclear attack. Human scientists, feeling lonely as the only species capable of speech, engineered an increase in the number of our chatty companions, creating augmentations that widened the range of our conversation, nudging the evolution of animals and machines in directions guided by human desire. Centuries later, the dogs could now talk while robots, gentle-souled, ancient and venerable, pursued lives of service in memory of their human designers. The legends told around the fire at night relate how humans then aspired to become like gods and left in search of transcendence on distant Jupiter. ‘Man’ has now passed into myth, an Olympian forever out of reach whose current abode is a moving point of light in the sky. Meanwhile, the ants are up to something mysterious on the outskirts of town.

I resonated with City when I first read it as a teenager, and the disorientation this engendered for me was delicious. It dissolved into fluid motion things that had seemed solid and fixed: the possible shifting of species boundaries, the upending of social hierarchies. And it sparked the idea that there might be a far-distant time when human beings have passed from the scene, but the creation of meaning continues. These stories awakened me to the idea that everything, ultimately, is in play.

Simak’s book was a prescient intuition about what has come to be called post-humanism. As a term of art in the academy, post-humanism refers not to the end of humanity, but instead the attempt to move beyond the five-hundred-year Enlightenment project of humanism, a conceptual and ethical framework that centers humanity as the sole locus of meaning and value in the world to the exclusion of all others. Humanism was a reaction to the rigid theocracies of earlier ages, which found the sole source of meaning in their particular and varied understandings of God, and brooked no dissent. For these belief systems, Man was at the pinnacle of God’s creation, but it was God who determined what it all means. Humanism dethroned God, and put Man in God’s place as the arbiter of values. Post-humanism does not reject this emancipatory impulse, but instead attempts to enlarge the frame by placing people within a spectrum that stretches from animals on the one end to machines on the other, seeing all of us as part of nature, engaged in one large co-evolution as companion beings.

On one far end of the spectrum, wild animals pursue their own agendas separate from human concerns, and their evolution is a pure Darwinian drama. Closer to us along the spectrum, domesticated animals have been shaped by human tinkering; for them, co-evolution is by design. Close to us on the machine end of the spectrum we have cyborgs, the name I’ll use for human-machine hybrids. Those with hearing aids or other implants are cyborgs of one flavor or another. And then there are pure machines. On the cyborg/pure-machine end of the spectrum, the pace of co-evolution is accelerating. The emergence of post-humanism at this time is no mere happenstance. The deepening awareness of our genetic, emotional, and perhaps even cognitive kinship with other non-human beings comes at the same moment that we fear machines might soon outrun us.

At the center of my ironic faith, my blasphemy, is the image of the cyborg. — Donna Haraway, A Cyborg Manifesto

Designs give materiality to our deepest desires. I intuited this from an early age, but could only articulate it much later. I was a cheeky child, full of wild ideas and curious energies, fretful, reading widely in search of ways to escape the grayness of my everyday life. In elementary school in the 1960s, I once gave a presentation about the need for self-driving cars. Perhaps this was motivated by having seen a fatal accident on the New Jersey Turnpike. It happened in a rest-stop parking lot when a trucker, hungry for a quick lunch, thought he’d set the brakes on his eighteen-wheeler. He hadn’t. As it began to roll away from the curb he leaped to get back into the cab and got caught in the jackknife. His neck snapped, body broken, he lay in the warm sunshine on the black pavement while stunned travelers stood nearby. My father saw the whole thing, but I saw only the aftermath and the broken body, and it was the first time in my life I saw my father physically shaken. It was all so quick, and the setting so mundane. Perhaps my urge to study science and technology grew from fears implanted that day: fears that life could be so fragile, and that mindless machines loose in the world can be dangerous.

A few years later, as a college student in the 1970s, one of my classes made a trek to the computer science department to view a demonstration. Our professor was excited to show us a new marvel: a machine that could converse in text form. As the computer booted up, the class waited in hushed anticipation, watching the blinking cursor on a television screen. “Hello,” appeared. The professor smiled, and he wrote: “Hello,” in response. There was a pause, then the machine asked something like: “What can I help you with?” A student suggested: “How are you feeling today?” The professor typed in the query, and the cursor blinked. Once. Twice. Then stopped. We had crashed the computer, and we never got back to the man-machine conversation that day. AI didn’t seem very dangerous, almost comical instead.

At that point in my life, however, I was a huge science fiction fan, and so constantly thinking beyond the here-and-now into some dimly perceived future that was sure to be awesome, if not terrifying. The computers of the day were clunky and hidden away in mainframe data centers. Robots were toys. But the glimmer of vast potentials could be discerned, if you had a little imagination. Perhaps too much imagination in my case. I recall having an urgent conversation with a student friend where we both agreed that our society needed to wake up right now: thinking robots were just around the corner, and we needed to settle upon their place in our ethical philosophy and their legal status before they arrived. How should they be treated? Would they have rights? If not, why not? 

The field of robot ethics is now hot because autonomous cars are on the verge of wide deployment. Since we live in a complicated and dangerous world, every now and then they will inevitably be making life and death decisions, engaged in endless variations of the Trolley Problem. This is a favorite thought experiment used by philosophy professors to torment undergraduates. A trolley is rolling down the track and if it stays on its current course it will kill five people. If it switches to another track, it will kill just one. Should the conductor make the decision to kill the one in order to save the five? What if the five are escaping war criminals? What if that one person was the conductor’s mother? Or a child who might be the next Einstein? There are endless variations, each designed to tease out those normally unarticulated reflexes, to reveal how we assess the relative value of persons, and how much responsibility we are willing to take for actions that cause harm to some people, even if those actions are intended to avoid harming others.

Robots unleashed on the highways will be called upon to solve trolley problems when full collision or damage avoidance is not viable. Like any fallible human in the same circumstances, the robot is thrown back on the limited sensor information available to it in the moment, judging relative value from superficial appearances, and it must invoke some algorithm for choosing the least bad option. But least bad as assessed by whom? According to what ethical value system? The philosopher Derek Leben argues that humans are really quite bad at these thought experiments. Our decisions are driven by a host of unexamined assumptions and irrational predilections. Bertrand Russell said that we have not yet invented an ethical system that is both rigorous and humane. If, after twenty-five hundred years, the philosophers have failed us in this vital task, is it any surprise that we’ve really only just begun to think seriously about how to program robots to carry out real-world and real-time moral reasoning? Robot ethics is therefore an urgent and practical problem. But what about the question of robot rights?
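To make the “assessed by whom?” problem concrete, here is a minimal, purely illustrative Python sketch. It is not anything a real vehicle uses, and every name and number in it is hypothetical; the point is only that the same two maneuvers rank differently depending on which value function we plug into the chooser.

```python
# Illustrative toy only: a "least bad option" chooser for a hypothetical
# vehicle. The algorithm is trivial; all the moral weight hides in the
# value function we hand it.

from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str             # e.g. "stay in lane", "swerve left"
    expected_injuries: float  # predicted harm to others, from noisy sensors
    occupant_risk: float      # risk to the vehicle's own passengers, 0..1

def utilitarian(o: Outcome) -> float:
    # Minimize total expected harm; passengers count like everyone else.
    return o.expected_injuries + o.occupant_risk

def self_protective(o: Outcome) -> float:
    # Weight the occupants far more heavily: a very different ethics.
    return o.expected_injuries + 10.0 * o.occupant_risk

def least_bad(outcomes, value_fn):
    return min(outcomes, key=value_fn)

options = [
    Outcome("stay in lane", expected_injuries=2.0, occupant_risk=0.1),
    Outcome("swerve left",  expected_injuries=0.5, occupant_risk=0.6),
]

print(least_bad(options, utilitarian).maneuver)      # "swerve left"
print(least_bad(options, self_protective).maneuver)  # "stay in lane"
```

Swap the value function and the “right answer” flips. Choosing those weights is exactly the unsolved problem the philosophers have left us.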

If my 1970s sophomore angst about robot rights seems far-fetched, first note that we were already treating corporations as legal persons back then. These are legal fictions that live only in our collective imaginations; we conjure them into existence through a kind of incantation called the “articles of incorporation.” Corporations take on the mantle of existence because large numbers of people act as though they are real. The 2010 Citizens United v. Federal Election Commission ruling, where the US Supreme Court recognized speech rights for corporations, is only one in a long line of court decisions stretching back to the 19th century that establish the legal status of corporate ‘persons’, entities that can own property, sue other legal persons, etc. Those who believe Citizens United was decided correctly argue that corporations are really just conglomerations of people. But, business decision-making is increasingly driven by algorithms. So, would it really be so strange to grant machines some form of personhood status?

Google has already argued in court that search results are speech acts, and therefore they should be protected under the First Amendment of the US Constitution. So that horse is not just out of the barn, it’s crossed the paddock, jumped the fence, and gone free range. In 2012, they commissioned a white paper by UCLA law professor Eugene Volokh that lays out the full argument. This is still unsettled law, but very much in play, and there is a lot at stake. But, this raises the question: speech acts by whom? Google Search is an algorithm running on computers distributed across vast cloud server farms. Does the simulacrum of intelligence that the Google AI exhibits pass the threshold for us to consider it truly worthy of speech rights just because Google finds it useful to its business model? If so, why not Siri or the Amazon Echo? Why not a more humble speech-to-text algorithm running on my local computer? Once we start granting rights to smart machines, where would it stop?

YouTube, by contrast, chooses a hands-off approach to videos posted to its site, to avoid stepping beyond the role of humble “platform provider” into the more significant role of “content editor.” This decision was taken because of concerns over exposure to the legal liabilities that would follow from accepting even limited editorial responsibility for videos posted to the site. Taken together, Google’s assertion of rights and YouTube’s avoidance of responsibilities suggest that the thin edge of a wedge has already been inserted like a shiv into the body politic. It’s as if our algorithms have more protections — but less oversight — than a rabid dog let loose in the village square.

Tools extend our reach, but they can also deform us. We will have to adapt if we are to thrive in a world shaped by ubiquitous tech. At the advancing edge of science and technology things will always be unsettled. For example, looking ahead, it is pretty clear that cloud computing is here to stay, and AI will continue to improve rapidly. The Internet of Things, long promised but slow to arrive, may now finally be ready for prime time. The 2020s promise to be a decade of disruptions: electric vehicles, solar farms, agile robots. But instead of worrying about the unlikely ‘Skynet scenario’, where an AI Colossus awakens and our web-enabled coffee maker betrays us, it’s important to focus our worries where the real threats lie. We need to keep reminding ourselves that impressive as AI might be, it’s still limited to very specialized applications. AI grandmasters in chess and Go are impressive, but learning to play board games, a circumscribed universe with well-defined rules, is far simpler than the computational complexity a squirrel in my backyard has to deal with day-in, day-out: navigating a shifting landscape in search of scarce food, hunted by that red-tailed hawk that appears out of nowhere, dodging those big iron things called cars to get back to its family in the tree across the way. When a robot can scamper like that, and live off the land by its wits, it means trouble. If they can also self-reproduce, we’re doomed.

More seriously: while the performance advances of AI are impressive, humans and machines have much better problem-solving skills working together than either does separately. This will remain true for a very long time because humans and AIs think differently. In fact, it’s human performance enhancement that’s the real threat AI presents, because the human capacity for ruthlessness in pursuit of power and wealth will now be augmented by machine intelligence. This is a very real darkness on our immediate horizon, and something worth worrying about, not some robotic apocalypse of the distant future.

To think creatively about this from a post-humanist perspective, we need only look to our historical relationship with our animal companions. For centuries we have hunted effectively with trained dogs and falcons, even though our senses of smell and sight are inferior to theirs. Therefore, as AIs grow in power and nuance, working with them will become more and more like working with a companion intelligence, similar to our relationships with animals: a relationship that recognizes compensating strengths and weaknesses, not a simple question of superiority or inferiority relative to some artificial performance measure.

How can we move the conversation along in more productive directions, away from imagined terrors that are unlikely to arise, so we can focus on the more realistic threats, and also capture the very real potential benefits? How can we use machine intelligence as a means to create more humane societies? Viewing our relations with technology as part of a spectrum that connects us with animals is helpful in this kind of creative work. Some AI researchers, like Margaret Boden, Professor of Cognitive Science at the University of Sussex and a researcher in computational psychology, argue that when we claim humans are special because we feel emotions and machines cannot, we are probably missing the point. A post-humanist would sound the alarm that the humanist instinct to single us out from other types of minds has snuck into the conversation. Boden believes it’s likely that intelligence itself, true intelligence, not just brute force computation, requires something like emotions.

Looking now at the other end of the spectrum, with our animal companions: my friend and colleague Barbara King has studied communication among the higher primates, along with outward forms of grief and even spirituality exhibited by them. These are species with whom we have shared this planet over the long arc of deep time, and our brains have evolved similar structures that suggest kinship even in some aspects of our emotional makeup. One commonality we share is the central role of emotion in memory formation and learning: it is emotions that flag experiences as important, and worthy of attention. Otherwise we can be overwhelmed by the flood of sensory data we swim in all day long, trying to find patterns to guide our actions. Given that it is truly a flood of data, machine learning theorists would recognize this as what they call the ‘curse of dimensionality’.
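For the curious, here is a small Python sketch of one classic symptom of that curse, assuming nothing beyond numpy: as the number of dimensions grows, the nearest and farthest of a set of random points become nearly the same distance away from a query point, so raw similarity measures lose their power to flag what matters.

```python
# A tiny numerical illustration of the "curse of dimensionality":
# as dimension grows, the gap between the nearest and farthest random
# points shrinks relative to their magnitude, so naive distance-based
# pattern-matching becomes less and less informative.

import numpy as np

rng = np.random.default_rng(0)

for dim in (2, 10, 100, 1000, 10_000):
    points = rng.random((500, dim))   # 500 random points in the unit cube
    query = rng.random(dim)           # one more point to compare against
    dists = np.linalg.norm(points - query, axis=1)
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"dim={dim:>6}  nearest/farthest contrast = {contrast:.3f}")
```

Run it and the contrast collapses from several-fold in two dimensions toward a few percent in ten thousand. Emotions, on this view, are one of evolution’s answers to that collapse: a way of flagging which few dimensions deserve attention.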

To navigate the high-dimensional landscapes of data space, even an AI will need intuition and a silicon-based simulacrum of a ‘gut’ feeling; otherwise, combinatoric overload results. AI researchers like Professor Melanie Mitchell, of Portland State, argue that true intelligence requires embodiment. That is: it’s not a simple matter of finding patterns in numerical arrays of data. The types of intelligence that we take for granted — what we sometimes call ‘common sense’ — in fact require an extraordinary amount of complex information processing. Finding patterns that are fixed and stable, not mere accidental fits, requires context and an internal model of the world, something that we, like all other mammals, build up in early childhood through play and interaction with others. Human-machine interaction, the study of how we respond emotionally to machines that appear to exhibit agency and intelligence, is the focus of researchers like Kate Darling at MIT. The importance of such work will only increase in coming decades, as more agile and ever more intelligent machines intrude into our daily lives.
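As a small illustration of the gap between stable patterns and accidental fits, here is a hedged numpy sketch: a model with enough free parameters matches its noisy training points essentially perfectly, yet a more modest model typically generalizes better to data it has never seen.

```python
# A sketch of an "accidental fit": with enough free parameters, a model
# can match every training point while learning nothing stable about
# the underlying pattern. Fresh data exposes the accident.

import numpy as np

rng = np.random.default_rng(1)

x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(10)

# Degree 9: ten coefficients for ten points, a near-perfect training fit.
wiggly = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)
# Degree 3: fewer knobs, forced toward the stable underlying shape.
modest = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)

x_new = rng.random(200)            # unseen inputs in [0, 1)
y_new = np.sin(2 * np.pi * x_new)  # the true pattern, noise-free

for name, model in (("degree 9", wiggly), ("degree 3", modest)):
    rms = np.sqrt(np.mean((model(x_new) - y_new) ** 2))
    print(f"{name}: RMS error on unseen data = {rms:.3f}")
```

The flexible model memorizes the noise; the constrained one captures the shape. Context and a world model play the constraining role for us.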

The science fiction writer Ted Chiang explores these ideas in stories like his 2010 novella, The Lifecycle of Software Objects. Chiang is notable for the humaneness of his vision. In Lifecycle, machine intelligences have to be nurtured through a kind of childhood, and they serve as surrogate children for ‘parents’ who don’t have kids of their own. The nurturant aspect of the story is not far-fetched. After all, Google’s DeepMind and IBM’s Watson are trained by presenting the machines with isolated examples drawn from the human world, through a kind of tutoring. Chiang’s story The Great Silence is told by a gray parrot, wondering why humans have built large radio telescopes in the search for alien conversations, when we live on a planet already teeming with alien minds that carry on conversations all around us, and we’ve never listened to them until recently, when it is nearly too late. Annalee Newitz’s short story When Robot and Crow Saved East St. Louis also explores cross-species communication between machines, humans, and other animals. In Robot and Crow, a public health monitoring machine continues with its mission to help people even after the CDC goes offline and leaves it stranded. Where others write techno-thrillers about killer-bots, these stories seek ways to humanize our technology, and open up conversations with animals, too.

Let’s return to my sophomoric question of the 1970s, which is now gaining in urgency: Should robots be granted some form of ‘personhood’? The philosophical debate over whether a machine can be conscious, or whether it can pass the Turing Test, is the wrong way to frame the issue. Or rather, we aren’t yet ready to tackle that fundamental problem. We don’t understand human consciousness, so why talk about the possibility of self-aware machines? This only clutters up the discussion of machine intelligence by foregrounding a question that is fundamentally unanswerable with our current science, and it distracts from more fruitful and urgent lines of inquiry. Let’s put the question of machine consciousness aside for the time being.

Thinking again in a post-humanist vein, instead of asking whether machines can have a mind, let’s turn the question around and ask why humans are so willing to assign a theory of mind to other animals, and whether that willingness serves a vital social role that could be adapted to our future life with smart machines. This ability to detect agents in the world is a cognitive process that evolved over aeons, one that presumably conferred a selection advantage by helping us to create ‘theories of mind’ for other humans. It also gets called up whenever we encounter an animal that is recognizably mammalian, with a face and nose and two ears, and especially when that other being seems to like being near us, and wants to share our day. Humans are quite willing to assign agency to other non-human beings; indeed, that generosity toward other minds is one of the things many of us find likable in other people. That ability to connect is why so many people love to share their world with animals, and why so many of us find that life is a bit gray if we don’t share a daily snuggle with a feline or canine friend. These cross-species relationships are vitally important to the psychological health of many people, and yet this ability to form cross-species bonds of affection is a bit mysterious from an evolutionary perspective. It speaks to something deep in our psychological makeup, and yet it can be hard to fit within a simply imagined theory of the selfish gene, nature red in tooth and claw, and all that.

Traditional creation stories are full of non-human actors, talking animals and plants. These reflect a very humane intuition that the entire world is aware, that everything has its story, that animals have an interior life, intentions, and thoughts. We seem primed to live our lives in a way that generously grants agency to other beings, and to believe they have minds, too. The notion that only humans have a story is a recent invention, in historical and evolutionary terms. The more traditional sensibility suggests that our future relationship with intelligent machines could be more positive than we think, if we play to this strength in our cognitive makeup. Perhaps by designing smart machines that seem to like us, after centuries of alienation we’ll finally start to feel more at home in the built world, those urban landscapes where most of us spend our lives. Perhaps we’d also feel less anxiety about the prospect of intelligent machines if we knew in our hearts that while attempting to create this new kind of thinker, we will be as much concerned with the machine’s welfare as our own. Not because we’ve agreed to some metaphysical position on whether machines can be sensate or conscious, but because we value so greatly the aspects of our own humanity that feel such impulses.

The fear of our annihilation by a rogue AI is a form of projection: a worry that our creations will turn out to be just as ruthless as we can be. While much of the focus in the popular press around AI is on the growing power of machines to carry out what we would consider intelligent tasks, in fact they are still highly specialized. The AI apocalypse is not yet around the corner, but it’s worth worrying about, provided we worry about the right things. Economic disruptions are coming, with social upheavals in train. We shouldn’t worry about a super-intelligent machine ‘waking up’ one day, and deciding to take over the planet. This is far-fetched. But, we already see the growing use of facial recognition technology, coupled with ubiquitous surveillance cameras. How is this information being used? And by whom? Toward what purpose? It’s not the machines we should be worried about, but machines used by human beings who lack empathy or a strong moral center. AI can be a great amplifier of human sociopathy, and it can easily tilt the political playing field toward authoritarianism. This speaks to the fundamental need to move beyond intelligent machines, to aspire to something more ambitious.

There is a critically important distinction between intelligence, empathy, and compassion. I have ordered these in increasing levels of complexity, with intelligence the easiest to mimic using a machine, and compassion the hardest of all. Oversimplifying to make a point, here’s what I mean by these words: intelligence is the ability to detect patterns in complex data. Empathy concerns the ability to detect and understand another’s emotional state. Compassion concerns the felt desire to reduce another’s suffering. Nearly all of our research efforts in AI to date are bent toward enhancing machine intelligence, with relatively little work on developing machines that can detect and classify human emotional states, for example through facial image analysis or vocal modulation, primarily looking for ways to tell when human operators of machines are tired or distracted. There is a lively interest in the topic of human-machine intimacy. This covers not only robotic sex, which tends to get most of the press, but also robots used in caregiving for the elderly and infirm, or as companions for people with autism.
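As one concrete taste of that operator-fatigue work, here is a hedged sketch of the “eye aspect ratio,” a simple geometric cue from the driver-monitoring literature. The face-landmark extraction step is assumed to come from some upstream vision model, and the coordinates below are hypothetical, chosen only to illustrate the arithmetic.

```python
# A sketch of one common fatigue cue: the "eye aspect ratio" (EAR),
# computed from six landmarks traced around the eye. Landmark detection
# itself is assumed; the coordinates here are made-up illustrations.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmarks ordered around the eye contour.
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). It drops toward zero as
    # the eye closes, so a low EAR sustained over many video frames is a
    # simple drowsiness signal.
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

open_eye   = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]

print(eye_aspect_ratio(open_eye))    # ~0.67: eyes open
print(eye_aspect_ratio(closed_eye))  # ~0.07: eyes nearly closed
```

Notice how thin this is: a ratio of distances, thresholded over time. It detects a state, but it understands nothing about why the driver is tired, which is precisely the gap between intelligence and empathy, let alone compassion.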

Intelligence and empathy alone are not enough. Together, they can make one a superb torturer. They are also the easiest problems to solve, although we tend to revere intelligence most of all in the Western academic tradition. Logic is easy to turn into code; love and friendship are not. But, as machines gain in power and subtlety, they will need to develop compassion too, otherwise the entire artificial intelligence project will not deliver on its promise to improve human well-being. Instead it risks becoming the favorite tool of totalitarian societies. Considering that its importance will only grow in coming decades, not nearly enough work has been done on robot compassion.

Promoting greater compassion for the Other has been at the heart of all religious traditions. The robot compassion project hasn’t even really begun. It will require an enormous leap in complexity over playing Go or chess, because compassion requires not only the ability to sort patterns and analyze data, but also to model the interior universe of the Other, to faithfully sense what might be causing their suffering, and to decide what steps might be taken to alleviate it. Think about how bad human beings are at this, and now think about how far we have to go to create machines that are better. It requires a full theory of mind, along with an understanding of large parts of the external world, a theory that reflects the full variety and diversity of the human experience. Perhaps taking on this grand task will make us better humans.

But, of course, creating a true machine companion is not an end state, but a transition to something after that. Evolution never ends, even if we now believe that human cultural evolution dominates biological evolution. We co-evolve with our machines, and the self-aware machine, or the digital upload of consciousness, if it ever happens, will be an end only for part of humanity as we know it, an end of what many of us find meaningful. But, as Simak’s City showed me in my adolescence: it would not be an end of life, and it need not be the end of meaning. The story never ends.

Acknowledgements: Thanks to my friends and colleagues Barbara King for a long conversation which set some of these ideas in motion, Deborah Morse who first alerted me to the field of animal studies, and Kelly Joyce who did the same for the question of how algorithms reflect our ethical values.


The text is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.