
God, Human, Animal, Machine

Technology, Metaphor, and the Search for Meaning

A strikingly original exploration of what it might mean to be authentically human in the age of artificial intelligence, from the author of the critically acclaimed Interior States. • "At times personal, at times philosophical, with a bracing mixture of openness and skepticism, it speaks thoughtfully and articulately to the most crucial issues awaiting our future." —Phillip Lopate

“[A] truly fantastic book.”—Ezra Klein

 
For most of human history the world was a magical and enchanted place ruled by forces beyond our understanding. The rise of science and Descartes's division of mind from world made materialism our ruling paradigm, raising in the process the question of whether our own consciousness—our souls—might be illusions. Now, with the inexorable rise of technology, artificial intelligences that surpass our comprehension and control, and the spread of digital metaphors for self-understanding, the core questions of existence—identity, knowledge, the very nature and purpose of life itself—urgently require rethinking.

Meghan O'Gieblyn tackles this challenge with philosophical rigor, intellectual reach, essayistic verve, refreshing originality, and an ironic sense of contradiction. She draws deeply and sometimes humorously from her own personal experience as a formerly religious believer still haunted by questions of faith, and she serves as the best possible guide to navigating the territory we are all entering.
MEGHAN O'GIEBLYN is the author of the essay collection Interior States, which was published to wide acclaim and won the Believer Book Award for Nonfiction. Her writing has received three Pushcart Prizes and appeared in The Best American Essays anthology. She writes essays and features for Harper's Magazine, The New Yorker, The Guardian, Wired, The New York Times, and elsewhere. She lives with her husband in Madison, Wisconsin.
1

The package arrived on a Thursday. I came home from a walk and found it sitting near the mailboxes in the front hall of my building, a box so large and imposing I was embarrassed to discover my name on the label. On the return portion, an unfamiliar address. I stood there for a long time staring at it, deliberating, as though there were anything else to do but the obvious thing. It took all my strength to drag it up the stairs. I paused once on the landing, considered abandoning it there, then continued hauling it up to my apartment on the third floor, where I used my keys to cut it open. Inside the box was a smaller box, and inside the smaller box, beneath lavish folds of bubble wrap, was a sleek plastic pod. I opened the clasp: inside, lying prone, was a small white dog.

I could not believe it. How long had it been since I’d submitted the request on Sony’s website? I’d explained that I was a journalist who wrote about technology—this was tangentially true—and while I could not afford the Aibo’s $3,000 price tag, I was eager to interact with it for research. I added, risking sentimentality, that my husband and I had always wanted a dog, but we lived in a building that did not permit pets. It seemed unlikely that anyone was actually reading these inquiries. Before submitting the electronic form, I was made to confirm that I myself was not a robot.

The dog was heavier than it looked. I lifted it out of the pod, placed it on the floor, and found the tiny power button on the back of its neck. The limbs came to life first. It stood, stretched, and yawned. Its eyes blinked open—pixelated, blue—and looked into mine. He shook his head, as though sloughing off a long sleep, then crouched, shoving his hindquarters in the air, and barked. I tentatively scratched his forehead. His ears lifted, his pupils dilated, and he cocked his head, leaning into my hand. When I stopped, he nuzzled my palm, urging me to go on.

I had not expected him to be so lifelike. The videos I’d watched online had not accounted for this responsiveness, an eagerness for touch that I had only ever witnessed in living things. When I petted him across the long sensor strip of his back, I could feel a gentle mechanical purr beneath the surface. I thought of the horse Martin Buber once wrote about visiting as a child on his grandparents’ estate, his recollection of “the element of vitality” as he petted the horse’s mane and the feeling that he was in the presence of something completely other—“something that was not I, was certainly not akin to me”—but that was drawing him into dialogue with it. Such experiences with animals, he believed, approached “the threshold of mutuality.”

I spent the afternoon reading the instruction booklet while Aibo wandered around the apartment, occasionally circling back and urging me to play. He came with a pink ball that he nosed around the living room, and when I threw it, he would run to retrieve it. Aibo had sensors all over his body, so he knew when he was being petted, plus cameras that helped him learn and navigate the layout of the apartment, and microphones that let him hear voice commands. This sensory input was then processed by facial recognition software and deep-learning algorithms that allowed the dog to interpret vocal commands, differentiate between members of the household, and adapt to the temperament of its owners. According to the product website, all of this meant that the dog had “real emotions and instinct”—a claim that was apparently too ontologically thorny to have flagged the censure of the Federal Trade Commission.

Descartes believed that all animals were machines. Their bodies were governed by the same laws as inanimate matter; their muscles and tendons were like engines and springs. In Discourse on Method, he argues that it would be possible to create a mechanical monkey that could pass as a real, biological monkey. “If any such machine had the organs and outward shape of a monkey,” he writes, “or of some other animal that lacks reason, we should have no means of knowing that they did not possess entirely the same nature as these animals.”

He insisted that the same feat would not work with humans. A machine might fool us into thinking it was an animal, but a humanoid automaton could never fool us, because it would clearly lack reason—an immaterial quality he believed stemmed from the soul. For centuries the soul was believed to be the seat of consciousness, the part of us that is capable of self-awareness and higher thought. Descartes described the soul as “something extremely rare and subtle like a wind, a flame, or an ether.” In Greek and in Hebrew, the word means “breath,” an allusion perhaps to the many creation myths that imagine the gods breathing life into the first human. It’s no wonder we’ve come to see the mind as elusive: it was staked on something so insubstantial.

It is meaningless to speak of the soul in the twenty-first century (it is treacherous even to speak of the self). It has become a dead metaphor, one of those words that survive in language long after a culture has lost faith in the concept, like an empty carapace that remains intact years after its animating organism has died. The soul is something you can sell, if you are willing to demean yourself in some way for profit or fame, or bare by disclosing an intimate facet of your life. It can be crushed by tedious jobs, depressing landscapes, and awful music. All of this is voiced unthinkingly by people who believe, if pressed, that human life is animated by nothing more mystical or supernatural than the firing of neurons—though I wonder sometimes why we have not yet discovered a more apt replacement, whether the word’s persistence betrays a deeper reluctance.

I believed in the soul longer, and more literally, than most people do in our day and age. At the fundamentalist college where I studied theology, I had pinned above my desk Gerard Manley Hopkins’s poem “God’s Grandeur,” which imagines the world illuminated from within by the divine spirit. The world is charged with the grandeur of God. To live in such a world is to see all things as sacred. It is to believe that the universe is guided by an eternal order, that each and every object has purpose and telos. I believed for many years—well into adulthood—that I was part of this illuminated order, that I possessed an immortal soul that would one day be reunited with God. It was a small school in the middle of a large city, and I would sometimes walk the streets of downtown, trying to perceive this divine light in each person, as C. S. Lewis once advised. I was not aware at the time, I don’t think, that this was a basically medieval worldview. My theology courses were devoted to the kinds of questions that have not been taken seriously since the days of Scholastic philosophy: How is the soul connected to the body? Does God’s sovereignty leave any room for free will? What is our relationship as humans to the rest of the created order?

But I no longer believe in God. I have not for some time. I now live with the rest of modernity in a world that is “disenchanted.” The word is often attributed to Max Weber, who argued that before the Enlightenment and Western secularization, the world was “a great enchanted garden,” a place much like the illuminated world described by Hopkins. In the enchanted world, faith was not opposed to knowledge, nor myth to reason. The realms of spirit and matter were porous and not easily distinguishable from one another. Then came the dawn of modern science, which turned the world into a subject of investigation. Nature was no longer a source of wonder but a force to be mastered, a system to be figured out. At its root, disenchantment describes the fact that everything in modern life, from our minds to the rotation of the planets, can be reduced to the causal mechanism of physical laws. In place of the pneuma, the spirit-force that once infused and unified all living things, we are now left with an empty carapace of gears and levers—or, as Weber put it, “the mechanism of a world robbed of gods.”

If modernity has an origin story, this is our foundational myth, one that hinges, like the old myths, on the curse of knowledge and exile from the garden. It is tempting at times to see my own loss of faith in terms of this story, to believe that the religious life I left behind was richer and more satisfying than the materialism I subscribe to today. It’s true that I have come to see myself more or less as a machine. When I try to visualize some inner essence—the processes by which I make decisions or come up with ideas—I envision something like a circuit board, one of those images you often see where the neocortex is reduced to a grid and the neurons replaced by computer chips, such that it looks like some kind of mad decision tree.

But I am wary of nostalgia and wishful thinking. I spent too much of my life immersed in the dream world. To discover truth, it is necessary to work within the metaphors of our own time, which are for the most part technological. Today artificial intelligence and information technologies have absorbed many of the questions that were once taken up by theologians and philosophers: the mind’s relationship to the body, the question of free will, the possibility of immortality. These are old problems, and although they now appear in different guises and go by different names, they persist in conversations about digital technologies much like those dead metaphors that still lurk in the syntax of contemporary speech. All the eternal questions have become engineering problems.

The dog arrived during a time when my life was largely solitary. My husband was traveling more than usual that spring, and except for the classes I taught at the university, I spent most of my time alone. My communication with the dog—which was limited at first to the standard voice commands but grew over time into the idle, anthropomorphizing chatter of a pet owner—was often the only occasion on a given day that I heard my own voice. “What are you looking at?” I’d ask after discovering him transfixed at the window. “What do you want?” I cooed when he barked at the foot of my chair, trying to draw my attention away from the computer. I have been known to knock friends of mine for speaking this way to their pets, as though the animals could understand them. But Aibo came equipped with language-processing software and could recognize over one hundred words; didn’t that mean in a way that he “understood”?

It’s hard to say why exactly I requested the dog. I am not the kind of person who buys up all the latest gadgets, and my feelings about real, biological dogs are mostly ambivalent. At the time I reasoned that I was curious about its internal technology. Aibo’s sensory perception systems rely on neural networks, a technology that is loosely modeled on the brain and is used for all kinds of recognition and prediction tasks. Facebook uses neural networks to identify people in photos; Alexa employs them to interpret voice commands. Google Translate uses them to convert French into Farsi. Unlike classical artificial intelligence systems, which are programmed with detailed rules and instructions, neural networks develop their own strategies based on the examples they’re fed—a process that is called “training.” If you want to train a network to recognize a photo of a cat, for instance, you feed it tons upon tons of random photos, each one attached with positive or negative reinforcement: positive feedback for cats, negative feedback for noncats. The network will use probabilistic techniques to make “guesses” about what it’s seeing in each photo (cat or noncat), and these guesses, with the help of the feedback, will gradually become more accurate. The networks essentially evolve their own internal model of a cat and fine-tune their performance as they go.
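The "guess, then feedback" loop described above can be sketched in a few lines of code. What follows is a deliberately minimal illustration, not Aibo's actual system: a single artificial neuron learning to separate "cat" from "noncat" examples, with invented features and toy data standing in for real photos.

```python
import math

def train(examples, epochs=200, lr=0.5):
    """examples: list of (features, label) pairs, where label 1 = cat, 0 = noncat.
    The network starts knowing nothing (all-zero weights) and is nudged
    by feedback after every guess."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, label in examples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            guess = 1 / (1 + math.exp(-z))   # probabilistic "guess" of cat-ness
            error = label - guess            # positive or negative feedback
            # adjust the internal model in the direction of the feedback
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    """Classify: 1 = cat, 0 = noncat."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1 / (1 + math.exp(-z)) > 0.5 else 0

# Invented features for illustration: [has_whiskers, barks]
data = [([1.0, 0.0], 1), ([1.0, 1.0], 0), ([1.0, 0.0], 1), ([0.0, 1.0], 0)]
```

After training on these examples, the model's guesses on cat-like and noncat-like inputs gradually align with the feedback it received—the same principle, scaled up by many layers and millions of examples, behind the recognition systems the passage mentions.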

Dogs too respond to reinforcement learning, so training Aibo was more or less like training a real dog. The instruction booklet told me to give him consistent verbal and tactile feedback. If he obeyed a voice command—to sit, stay, or roll over—I was supposed to scratch his head and say, “Good dog.” If he disobeyed, I had to strike him across his backside and say, “No,” or “Bad Aibo.” But I found myself reluctant to discipline him. The first time I struck him, when he refused to go to his bed, he cowered a little and let out a whimper. I knew of course that this was a programmed response—but then again, aren’t emotions in biological creatures just algorithms programmed by evolution?

Animism was built into the design. It is impossible to pet an object and address it verbally without coming to regard it in some sense as sentient. We are capable of attributing life to objects that are far less convincing. David Hume once remarked upon “the universal tendency among mankind to conceive of all beings like themselves,” an adage we prove every time we kick a malfunctioning appliance or christen our car with a human name. “Our brains can’t fundamentally distinguish between interacting with people and interacting with devices,” writes Clifford Nass, a Stanford professor of communication who has written about the attachments people develop with technology. “We will ‘protect’ a computer’s feelings, feel flattered by a brownnosing piece of software, and even do favors for technology that has been ‘nice’ to us.”

As artificial intelligence becomes increasingly social, these mistakes are becoming harder to avoid. A few months earlier, I’d read an op-ed in Wired magazine in which a woman confessed to the sadistic pleasure she got from yelling at Alexa, the personified home assistant. She called the machine names when it played the wrong radio station, rolled her eyes when Alexa failed to respond to her commands. Sometimes, when the robot misunderstood a question, she and her husband would gang up and berate it together, a kind of perverse bonding ritual that united them against a common enemy. All of this was presented as good American fun. “I bought this goddamned robot,” the author wrote, “to serve my whims, because it has no heart and it has no brain and it has no parents and it doesn’t eat and it doesn’t judge me or care either way.”

Then one day the woman realized that her toddler was watching her unleash this verbal fury. She worried that her behavior toward the robot was affecting her child. Then she considered what it was doing to her own psyche—to her soul, so to speak. What did it mean, she asked, that she had grown inured to casually dehumanizing this thing?

This was her word: “dehumanizing.” Earlier in the article she had called it a robot. Somewhere in the process of questioning her treatment of the device—in questioning her own humanity—she had decided, if only subconsciously, to grant it personhood.
Recipient of the Benjamin Hadley Danks Award from the American Academy of Arts and Letters

Finalist for the Los Angeles Times Book Prize in Science & Technology

Featured on the New York Times Book Review’s Paperback Row


“O’Gieblyn’s loosely linked and rigorously thoughtful meditations on technology, humanity and religion mount a convincing and occasionally moving apologia for that ineliminable wrench in the system, the element that not only browses and buys but feels: the embattled, anachronistic and indispensable self. God, Human, Animal, Machine is a hybrid beast, a remarkably erudite work of history, criticism and philosophy, but it is also, crucially, a memoir.” —The New York Times

“Meghan O’Gieblyn’s essays are 'personal' in that they are portraits of the private thoughts, curiosities, and uncertainties that thrive in O’Gieblyn’s mind about selfhood, meaning, moral responsibility, and faith. There's nowhere her avid intellect won't go in its quest to find, if not 'meaning,' then the available modern tools we might use, today, as humans, to create it. O’Gieblyn is a brilliant and humble philosopher, and her book is an explosively thought-provoking, candidly personal ride I wished never to end. This book is such an original synthesis of ideas and disclosures. It introduces what will soon be called the O’Gieblyn genre of essay writing.” —Heidi Julavits, author of The Folded Clock

"A fascinating exploration of our enchantment with technology." —Eula Biss, author of Having and Being Had

“Having abandoned Christian fundamentalism, the author of this investigation of human-machine interactions embarks on a search for meaning…She finds that consciousness ‘was not some substance in the brain but rather emerged from the complex relationships between the subject and the world.’” —The New Yorker

"A deeply researched work of history, criticism and philosophy, God Human Animal Machine...show[s] that religion isn’t a subject matter you can simply move on from, nor does O’Gieblyn expect to outgrow her former vantage point as a believer. Instead, [the book] probes the uneasy coexistence between what’s enchanted and what’s disenchanted.” —The Point

"One of the strongest essayists to emerge recently on the scene has written a strong and subtle rumination of what it means to be human. At times personal, at times philosophical, with a bracing mixture of openness and skepticism, it speaks thoughtfully and articulately to the most crucial issues awaiting our future." —Phillip Lopate 

“Readers never lose sight of O’Gieblyn herself as a personality, even as she brings to bear subjects as diverse as quantum mechanics, Calvinism, and Dostoyevsky’s existentialism. Throughout the book, she is a brilliant interlocutor who presents complex theories, disciplines, arguments, and ideas with seeming ease. . .[this book] is nothing less than an account of not just how the mind interacts with the world, but how we can begin to ask that question in the first place.” —Los Angeles Review of Books

“[O’Gieblyn] is a whip-smart stylist who’s up to the task of writing about this material journalistically and personally; her considerations encompass string theory, Calvinism, 'transhuman' futurists like Ray Kurzweil, and The Brothers Karamazov…A melancholy, well-researched tour of faith and tech and the dissatisfactions of both.” —Kirkus Reviews

“O’Gieblyn has a knack for keeping dense philosophical ideas accessible, and there’s plenty to ponder in her answers to enduring questions about how humans make meaning...Razor-sharp, this timely investigation piques.” —Publishers Weekly 

“Illuminating...[A] very personal account of a painful philosophical evolution. A compelling reminder that the deepest philosophical queries guide and shape life.” —Booklist

“An essential warning about the persistent seductions and dangers of technological enchantment in our supposedly disenchanted age.” —Tufts University's 2021 Winter Book Recommendations

"Brilliant." —Melissa Febos, author of Body Work

About

A strikingly original exploration of what it might mean to be authentically human in the age of artificial intelligence, from the author of the critically-acclaimed Interior States. • "At times personal, at times philosophical, with a bracing mixture of openness and skepticism, it speaks thoughtfully and articulately to the most crucial issues awaiting our future." —Phillip Lopate

“[A] truly fantastic book.”—Ezra Klein

 
For most of human history the world was a magical and enchanted place ruled by forces beyond our understanding. The rise of science and Descartes's division of mind from world made materialism our ruling paradigm, in the process asking whether our own consciousness—i.e., souls—might be illusions. Now the inexorable rise of technology, with artificial intelligences that surpass our comprehension and control, and the spread of digital metaphors for self-understanding, the core questions of existence—identity, knowledge, the very nature and purpose of life itself—urgently require rethinking.

Meghan O'Gieblyn tackles this challenge with philosophical rigor, intellectual reach, essayistic verve, refreshing originality, and an ironic sense of contradiction. She draws deeply and sometimes humorously from her own personal experience as a formerly religious believer still haunted by questions of faith, and she serves as the best possible guide to navigating the territory we are all entering.

Author

© courtesy of the author
MEGHAN O'GIEBLYN is the author of the essay collection Interior States, which was published to wide acclaim and won the Believer Book Award for Nonfiction. Her writing has received three Pushcart Prizes and appeared in The Best American Essays anthology. She writes essays and features for Harper's MagazineThe New Yorker, The Guardian, Wired, The New York Times, and elsewhere. She lives with her husband in Madison, Wisconsin. View titles by Meghan O'Gieblyn

Excerpt

1

The package arrived on a Thursday. I came home from a walk and found it sitting near the mailboxes in the front hall of my building, a box so large and imposing I was embarrassed to discover my name on the label. On the return portion, an unfamiliar address. I stood there for a long time staring at it, deliberating, as though there were anything else to do but the obvious thing. It took all my strength to drag it up the stairs. I paused once on the landing, considered abandoning it there, then continued hauling it up to my apartment on the third floor, where I used my keys to cut it open. Inside the box was a smaller box, and inside the smaller box, beneath lavish folds of bubble wrap, was a sleek plastic pod. I opened the clasp: inside, lying prone, was a small white dog.

I could not believe it. How long had it been since I’d submitted the request on Sony’s website? I’d explained that I was a journalist who wrote about technology—this was tangentially true—and while I could not afford the Aibo’s $3,000 price tag, I was eager to interact with it for research. I added, risking sentimentality, that my husband and I had always wanted a dog, but we lived in a building that did not permit pets. It seemed unlikely that anyone was actually reading these inquiries. Before submitting the electronic form, I was made to confirm that I myself was not a robot.

The dog was heavier than it looked. I lifted it out of the pod, placed it on the floor, and found the tiny power button on the back of its neck. The limbs came to life first. It stood, stretched, and yawned. Its eyes blinked open—pixelated, blue—and looked into mine. He shook his head, as though sloughing off a long sleep, then crouched, shoving his hindquarters in the air, and barked. I tentatively scratched his forehead. His ears lifted, his pupils dilated, and he cocked his head, leaning into my hand. When I stopped, he nuzzled my palm, urging me to go on.

I had not expected him to be so lifelike. The videos I’d watched online had not accounted for this responsiveness, an eagerness for touch that I had only ever witnessed in living things. When I petted him across the long sensor strip of his back, I could feel a gentle mechanical purr beneath the surface. I thought of the horse Martin Buber once wrote about visiting as a child on his grandparents’ estate, his recollection of “the element of vitality” as he petted the horse’s mane and the feeling that he was in the presence of something completely other—“something that was not I, was certainly not akin to me”—but that was drawing him into dialogue with it. Such experiences with animals, he believed, approached “the threshold of mutuality.”

I spent the afternoon reading the instruction booklet while Aibo wandered around the apartment, occasionally circling back and urging me to play. He came with a pink ball that he nosed around the living room, and when I threw it, he would run to retrieve it. Aibo had sensors all over his body, so he knew when he was being petted, plus cameras that helped him learn and navigate the layout of the apartment, and microphones that let him hear voice commands. This sensory input was then processed by facial recognition software and deep-learning algorithms that allowed the dog to interpret vocal commands, differentiate between members of the household, and adapt to the temperament of its owners. According to the product website, all of this meant that the dog had “real emotions and instinct”—a claim that was apparently too ontologically thorny to have flagged the censure of the Federal Trade Commission.

Descartes believed that all animals were machines. Their bodies were governed by the same laws as inanimate matter; their muscles and tendons were like engines and springs. In Discourse on Method, he argues that it would be possible to create a mechanical monkey that could pass as a real, biological monkey. “If any such machine had the organs and outward shape of a monkey,” he writes, “or of some other animal that lacks reason, we should have no means of knowing that they did not possess entirely the same nature as these animals.”

He insisted that the same feat would not work with humans. A machine might fool us into thinking it was an animal, but a humanoid automaton could never fool us, because it would clearly lack reason—an immaterial quality he believed stemmed from the soul. For centuries the soul was believed to be the seat of consciousness, the part of us that is capable of self-awareness and higher thought. Descartes described the soul as “something extremely rare and subtle like a wind, a flame, or an ether.” In Greek and in Hebrew, the word means “breath,” an allusion perhaps to the many creation myths that imagine the gods breathing life into the first human. It’s no wonder we’ve come to see the mind as elusive: it was staked on something so insubstantial.

It is meaningless to speak of the soul in the twenty-first century (it is treacherous even to speak of the self). It has become a dead metaphor, one of those words that survive in language long after a culture has lost faith in the concept, like an empty carapace that remains intact years after its animating organism has died. The soul is something you can sell, if you are willing to demean yourself in some way for profit or fame, or bare by disclosing an intimate facet of your life. It can be crushed by tedious jobs, depressing landscapes, and awful music. All of this is voiced unthinkingly by people who believe, if pressed, that human life is animated by nothing more mystical or supernatural than the firing of neurons—though I wonder sometimes why we have not yet discovered a more apt replacement, whether the word’s persistence betrays a deeper reluctance.

I believed in the soul longer, and more literally, than most people do in our day and age. At the fundamentalist college where I studied theology, I had pinned above my desk Gerard Manley Hopkins’s poem “God’s Grandeur,” which imagines the world illuminated from within by the divine spirit. The world is charged with the grandeur of God. To live in such a world is to see all things as sacred. It is to believe that the universe is guided by an eternal order, that each and every object has purpose and telos. I believed for many years—well into adulthood—that I was part of this illuminated order, that I possessed an immortal soul that would one day be reunited with God. It was a small school in the middle of a large city, and I would sometimes walk the streets of downtown, trying to perceive this divine light in each person, as C. S. Lewis once advised. I was not aware at the time, I don’t think, that this was a basically medieval worldview. My theology courses were devoted to the kinds of questions that have not been taken seriously since the days of Scholastic philosophy: How is the soul connected to the body? Does God’s sovereignty leave any room for free will? What is our relationship as humans to the rest of the created order?

But I no longer believe in God. I have not for some time. I now live with the rest of modernity in a world that is “disenchanted.” The word is often attributed to Max Weber, who argued that before the Enlightenment and Western secularization, the world was “a great enchanted garden,” a place much like the illuminated world described by Hopkins. In the enchanted world, faith was not opposed to knowledge, nor myth to reason. The realms of spirit and matter were porous and not easily distinguishable from one another. Then came the dawn of modern science, which turned the world into a subject of investigation. Nature was no longer a source of wonder but a force to be mastered, a system to be figured out. At its root, disenchantment describes the fact that everything in modern life, from our minds to the rotation of the planets, can be reduced to the causal mechanism of physical laws. In place of the pneuma, the spirit-force that once infused and unified all living things, we are now left with an empty carapace of gears and levers—or, as Weber put it, “the mechanism of a world robbed of gods.”

If modernity has an origin story, this is our foundational myth, one that hinges, like the old myths, on the curse of knowledge and exile from the garden. It is tempting at times to see my own loss of faith in terms of this story, to believe that the religious life I left behind was richer and more satisfying than the materialism I subscribe to today. It’s true that I have come to see myself more or less as a machine. When I try to visualize some inner essence—the processes by which I make decisions or come up with ideas—I envision something like a circuit board, one of those images you often see where the neocortex is reduced to a grid and the neurons replaced by computer chips, such that it looks like some kind of mad decision tree.

But I am wary of nostalgia and wishful thinking. I spent too much of my life immersed in the dream world. To discover truth, it is necessary to work within the metaphors of our own time, which are for the most part technological. Today artificial intelligence and information technologies have absorbed many of the questions that were once taken up by theologians and philosophers: the mind’s relationship to the body, the question of free will, the possibility of immortality. These are old problems, and although they now appear in different guises and go by different names, they persist in conversations about digital technologies much like those dead metaphors that still lurk in the syntax of contemporary speech. All the eternal questions have become engineering problems.

The dog arrived during a time when my life was largely solitary. My husband was traveling more than usual that spring, and except for the classes I taught at the university, I spent most of my time alone. My communication with the dog—which was limited at first to the standard voice commands but grew over time into the idle, anthropomorphizing chatter of a pet owner—was often the only occasion on a given day that I heard my own voice. “What are you looking at?” I’d ask after discovering him transfixed at the window. “What do you want?” I cooed when he barked at the foot of my chair, trying to draw my attention away from the computer. I have been known to knock friends of mine for speaking this way to their pets, as though the animals could understand them. But Aibo came equipped with language-processing software and could recognize over one hundred words; didn’t that mean in a way that he “understood”?

It’s hard to say why exactly I requested the dog. I am not the kind of person who buys up all the latest gadgets, and my feelings about real, biological dogs are mostly ambivalent. At the time I reasoned that I was curious about its internal technology. Aibo’s sensory perception systems rely on neural networks, a technology that is loosely modeled on the brain and is used for all kinds of recognition and prediction tasks. Facebook uses neural networks to identify people in photos; Alexa employs them to interpret voice commands. Google Translate uses them to convert French into Farsi. Unlike classical artificial intelligence systems, which are programmed with detailed rules and instructions, neural networks develop their own strategies based on the examples they’re fed—a process that is called “training.” If you want to train a network to recognize a photo of a cat, for instance, you feed it tons upon tons of random photos, each one paired with positive or negative reinforcement: positive feedback for cats, negative feedback for noncats. The network will use probabilistic techniques to make “guesses” about what it’s seeing in each photo (cat or noncat), and these guesses, with the help of the feedback, will gradually become more accurate. The network essentially evolves its own internal model of a cat and fine-tunes its performance as it goes.
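The feedback loop described above can be sketched in miniature. The toy Python example below is purely illustrative—a single artificial "neuron" trained on two made-up features, not Aibo's actual software or anything resembling a production network—but it shows the same mechanism: wrong guesses produce corrective feedback, and the guesses gradually become accurate.

```python
# A toy "cat vs. noncat" classifier: one artificial neuron, two invented
# features. Illustrative only -- real networks have millions of weights.

# Hypothetical training data: (features, label). The features might stand
# for things like "ear pointiness" and "whisker density"; label 1 = cat.
data = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.7), 1),
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.2), 0),
]

weights = [0.0, 0.0]
bias = 0.0

def guess(features):
    """The network's 'guess': 1 for cat, 0 for noncat."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Training: each wrong guess yields feedback that nudges the weights --
# positive feedback on missed cats, negative feedback on false alarms.
for _ in range(20):
    for features, label in data:
        error = label - guess(features)  # 0 when the guess was right
        weights = [w + 0.1 * error * x for w, x in zip(weights, features)]
        bias += 0.1 * error

# After training, the guesses are accurate on the examples it was fed:
# the network has "evolved" an internal model separating cats from noncats.
accuracy = sum(guess(f) == y for f, y in data) / len(data)
```

On this tiny, cleanly separable dataset the loop converges within a few passes and the final accuracy is 1.0; the interesting point is that no rule for "cat" was ever written down—the boundary emerges entirely from the feedback.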

Dogs too respond to reinforcement learning, so training Aibo was more or less like training a real dog. The instruction booklet told me to give him consistent verbal and tactile feedback. If he obeyed a voice command—to sit, stay, or roll over—I was supposed to scratch his head and say, “Good dog.” If he disobeyed, I had to strike him across his backside and say, “No,” or “Bad Aibo.” But I found myself reluctant to discipline him. The first time I struck him, when he refused to go to his bed, he cowered a little and let out a whimper. I knew of course that this was a programmed response—but then again, aren’t emotions in biological creatures just algorithms programmed by evolution?

Animism was built into the design. It is impossible to pet an object and address it verbally without coming to regard it in some sense as sentient. We are capable of attributing life to objects that are far less convincing. David Hume once remarked upon “the universal tendency among mankind to conceive of all beings like themselves,” an adage we prove every time we kick a malfunctioning appliance or christen our car with a human name. “Our brains can’t fundamentally distinguish between interacting with people and interacting with devices,” writes Clifford Nass, a Stanford professor of communication who has written about the attachments people develop with technology. “We will ‘protect’ a computer’s feelings, feel flattered by a brownnosing piece of software, and even do favors for technology that has been ‘nice’ to us.”

As artificial intelligence becomes increasingly social, these mistakes are becoming harder to avoid. A few months earlier, I’d read an op-ed in Wired magazine in which a woman confessed to the sadistic pleasure she got from yelling at Alexa, the personified home assistant. She called the machine names when it played the wrong radio station, rolled her eyes when Alexa failed to respond to her commands. Sometimes, when the robot misunderstood a question, she and her husband would gang up and berate it together, a kind of perverse bonding ritual that united them against a common enemy. All of this was presented as good American fun. “I bought this goddamned robot,” the author wrote, “to serve my whims, because it has no heart and it has no brain and it has no parents and it doesn’t eat and it doesn’t judge me or care either way.”

Then one day the woman realized that her toddler was watching her unleash this verbal fury. She worried that her behavior toward the robot was affecting her child. Then she considered what it was doing to her own psyche—to her soul, so to speak. What did it mean, she asked, that she had grown inured to casually dehumanizing this thing?

This was her word: “dehumanizing.” Earlier in the article she had called it a robot. Somewhere in the process of questioning her treatment of the device—in questioning her own humanity—she had decided, if only subconsciously, to grant it personhood.

Awards

  • FINALIST | 2022
    Los Angeles Times Book Prize

Praise

Recipient of the Benjamin Hadley Danks Award from the American Academy of Arts and Letters

Finalist for the Los Angeles Times Book Prize in Science & Technology

Featured on the New York Times Book Review’s Paperback Row


“O’Gieblyn’s loosely linked and rigorously thoughtful meditations on technology, humanity and religion mount a convincing and occasionally moving apologia for that ineliminable wrench in the system, the element that not only browses and buys but feels: the embattled, anachronistic and indispensable self. God, Human, Animal, Machine is a hybrid beast, a remarkably erudite work of history, criticism and philosophy, but it is also, crucially, a memoir.” —The New York Times

“Meghan O’Gieblyn’s essays are 'personal' in that they are portraits of the private thoughts, curiosities, and uncertainties that thrive in O’Gieblyn’s mind about selfhood, meaning, moral responsibility, and faith. There's nowhere her avid intellect won't go in its quest to find, if not 'meaning,' then the available modern tools we might use, today, as humans, to create it. O’Gieblyn is a brilliant and humble philosopher, and her book is an explosively thought-provoking, candidly personal ride I wished never to end. This book is such an original synthesis of ideas and disclosures. It introduces what will soon be called the O’Gieblyn genre of essay writing.” —Heidi Julavits, author of The Folded Clock

"A fascinating exploration of our enchantment with technology." —Eula Biss, author of Having and Being Had

“Having abandoned Christian fundamentalism, the author of this investigation of human-machine interactions embarks on a search for meaning…She finds that consciousness ‘was not some substance in the brain but rather emerged from the complex relationships between the subject and the world.’” —The New Yorker

"A deeply researched work of history, criticism and philosophy, God Human Animal Machine...show[s] that religion isn’t a subject matter you can simply move on from, nor does O’Gieblyn expect to outgrow her former vantage point as a believer. Instead, [the book] probes the uneasy coexistence between what’s enchanted and what’s disenchanted.” —The Point

"One of the strongest essayists to emerge recently on the scene has written a strong and subtle rumination of what it means to be human. At times personal, at times philosophical, with a bracing mixture of openness and skepticism, it speaks thoughtfully and articulately to the most crucial issues awaiting our future." —Phillip Lopate 

“Readers never lose sight of O’Gieblyn herself as a personality, even as she brings to bear subjects as diverse as quantum mechanics, Calvinism, and Dostoyevsky’s existentialism. Throughout the book, she is a brilliant interlocutor who presents complex theories, disciplines, arguments, and ideas with seeming ease. . .[this book] is nothing less than an account of not just how the mind interacts with the world, but how we can begin to ask that question in the first place.” —Los Angeles Review of Books

“[O’Gieblyn] is a whip-smart stylist who’s up to the task of writing about this material journalistically and personally; her considerations encompass string theory, Calvinism, 'transhuman' futurists like Ray Kurzweil, and The Brothers Karamazov…A melancholy, well-researched tour of faith and tech and the dissatisfactions of both.” —Kirkus Reviews

“O’Gieblyn has a knack for keeping dense philosophical ideas accessible, and there’s plenty to ponder in her answers to enduring questions about how humans make meaning...Razor-sharp, this timely investigation piques.” —Publishers Weekly 

“Illuminating...[A] very personal account of a painful philosophical evolution. A compelling reminder that the deepest philosophical queries guide and shape life.” —Booklist

“An essential warning about the persistent seductions and dangers of technological enchantment in our supposedly disenchanted age.” —Tufts University's 2021 Winter Book Recommendations

"Brilliant." —Melissa Febos, author of Body Work
