The Ideology Behind AI Discourse: An Interview With Meghan O’Gieblyn

CJLC editors Campbell Campbell and Thomas Mar Wee interview Meghan O’Gieblyn on her new nonfiction book, God, Human, Animal, Machine (Doubleday, 2021), an expansive and rich text that traces the connections between the history of religious discourse and the current discourse around artificial intelligence.

Meghan O’Gieblyn is also the author of Interior States (Anchor, 2018), which won the 2018 Believer Book Award for nonfiction. Her essays have appeared in The New Yorker, Harper’s Magazine, Wired, The Guardian, The New York Times, Bookforum, n+1, The Believer, The Paris Review, and elsewhere. She is the recipient of three Pushcart Prizes and her work has been anthologized in The Best American Essays 2017 and The Contemporary American Essay (2021). She also writes the “Cloud Support” advice column for Wired.

This interview has been edited for clarity and brevity.

July 25th, 2021, 6pm.

Campbell Campbell: We’d like to start with a few questions to give our readers a sense of the topics in your book. What drew you to the connections between religious thinking and artificial intelligence discourse? How do you see religious thinking resurfacing in contemporary language around technological advancements such as artificial intelligence? 

Meghan O’Gieblyn: I grew up evangelical and went to a small fundamentalist Bible college, where I studied theology. Two years into my program, I had a crisis of faith and left the school, and a few years later I renounced that whole worldview and became an atheist. I was living in Chicago, working at a bar, and one of my coworkers introduced me to Ray Kurzweil’s book The Age of Spiritual Machines. It’s one of the landmark texts of transhumanism, a form of West Coast utopianism that arose in the 80s and 90s and insists that humans can use technology to further their evolution into another species. Transhumanists helped popularize a lot of futuristic possibilities like nanotechnology and mind-uploading–topics that now seem a bit dated, though some of their ideas, like brain-computer interfaces, are now being developed. They basically wanted to achieve immortality, eradicate suffering, and transcend human nature to become something “beyond the human.” 

I’d never heard of this movement before, but I quickly became obsessed with it. I wasn’t able to articulate at the time why I was so attracted to it, but it’s clear to me now that transhumanism was making the same essential promises you find in Christian prophecies. I grew up in this very millenarian strain of Evangelicalism that was focused on the “end times” and Christ’s return. We believed this would happen in our lifetime, that we would become immortal, get raptured into the sky, and live forever with Christ in a state of perfection. Most transhumanists are not religious people–they are by and large atheists–yet they are drawing on hopes that have long played a role in Western culture.

So that was one initial seed that the book grew out of. The other was the machine-learning revolution, which was generating a lot of buzz around the time I started writing the book. This was around 2016 or 2017, right after AlphaGo beat the world champion of Go. These advanced AI systems have since been incorporated into the justice system, policing, financial institutions, and medicine. Because they rely on deep learning, most of them are black-box technologies, meaning it’s impossible to know how they arrive at their outputs. At the time, a lot of tech criticism was drawing on religious language to describe these algorithms. The popular refrain was that they are unfathomable the way that God is unfathomable. We have to take their answers on faith. One Harper’s critic drew on the Book of Job, which was a book I struggled with when I was studying theology. Most of the critics making these comparisons didn’t have a background in theology, so I was interested in unpacking that tech criticism and thinking about how these religious ideas were mirrored in emerging technologies. 

CC: I loved the humor and earnestness in the first chapter, when you’re training your Aibo dog, and thought it was a great entry point for the reader into the discussion of artificial intelligence. How do you see people turning to technology for emotional intimacy, and where do you see this going in the next twenty years? 

MO: I don’t know that we’re seeking emotional intimacy in technology so much as it’s being thrust on us by companies. I never had the urge to have a phone equipped with Siri, but these features now come along with the technology. We’re increasingly forced to interact with social AI, these programs that speak to us, respond to us, joke with us–that have some of the trappings of human interaction. All of us anthropomorphize: we attribute human qualities to things that are not actually human, or that are not actually alive. And I think corporations know that they can maximize engagement if people begin to emotionally bond with their products. I think that will be the trend going forward. 

During the pandemic, there were a lot of stories about people who were so lonely they started talking to chatbots. I actually downloaded one of these apps around the beginning of the pandemic because I kept seeing these stories about them, and I was curious. The newer ones are eerily intuitive–it feels at times like you’re talking to a real person. And the technology is going to get better, especially with the recent developments in natural language processing. 

CC: I find myself wondering what need capitalism will create next. I couldn’t help but wonder: are these tech companies creating a need or resolving a need with the creation of emotional technology? 

MO: I’m interested in the extent to which technology is now trying to solve problems that technology created. The most obvious example is the app that blocks you from checking other apps, or from checking your email during certain hours. A lot of people have made this point in terms of social media. Technology has isolated us and alienated us from real face-to-face human interaction, and now we turn to social platforms because we’re hungry for connection. It’s a vicious cycle. It’s an overstated argument, but there is undoubtedly some truth to the idea that living our lives online entails a loss of intimacy that then prompts us to engage even more with social technologies.

Thomas Mar Wee: You point to a discourse that has shifted from discussing religious subjects metaphorically to discussing them literally, and you warn readers against making a similar mistake with technology discourse. Where do you see this happening in technology discourse, and what are the pitfalls of discussing technology in this mode? 

MO: When I began writing this book, I was interested in the idea that our brains are analogous to computers. This metaphor has become integrated into everyday speech. Even people who know nothing about computers speak of “processing” new information, or “storing” memories, “retrieving” memories, as though we had a hard drive in our brain. The metaphor is usually traced back to the cyberneticists Warren McCulloch and Walter Pitts, who pioneered neural networks in the late 1940s. They were responsible for developing the computational theory of mind, this idea that the human brain functions like a Turing machine, that thought is basically symbol manipulation. It was a brilliant metaphor in a lot of ways, and it was also wrong in a lot of ways. 

The analogy has been crucial to both cognitive psychology and artificial intelligence, and over time it’s become more literal than metaphorical. When people in AI speak of systems that “learn” or “understand,” those terms were once put in quotes to signify that they were metaphors. But now you rarely see those terms in quotes. People say instead that the computer’s visual system is actually seeing, the machine learning system is actually learning, as though there were no difference between the way a machine and a brain performed those tasks. As far as how this translates to religion, I grew up in a fundamentalist culture that often confused metaphors for literal truth. We read ancient myths and parables as though they were literal prophecies about what was going to happen in the future, transposing texts from the sixth century BC onto 21st-century global politics. I’m really interested in the slippage that occurs with metaphors–how we often forget, over time, that metaphors are metaphors. What happens when we begin to think of them as literal? 
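[For readers curious what McCulloch and Pitts’s “thought as symbol manipulation” looks like in its simplest form: their 1940s model treats a neuron as a unit that sums weighted binary inputs and “fires” if the sum reaches a threshold. The sketch below is an illustration in Python, not code from any particular system; the weights and thresholds are chosen only to show how such neuron-like units can implement basic logic.]

```python
def mcculloch_pitts_unit(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Illustrative settings: the same kind of unit acts as an AND gate or an OR gate,
# depending only on its threshold.
def and_gate(a, b):
    return mcculloch_pitts_unit([a, b], weights=[1, 1], threshold=2)

def or_gate(a, b):
    return mcculloch_pitts_unit([a, b], weights=[1, 1], threshold=1)

print(and_gate(1, 1), and_gate(1, 0))  # 1 0
print(or_gate(0, 1), or_gate(0, 0))    # 1 0
```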


TW: Do you think your religious upbringing gives you a unique perspective on the topics of philosophy of mind and consciousness in comparison to other thinkers and philosophers who are pondering these questions? 

MO: I suppose that growing up in a culture that held a great deal of certainty about the validity of its beliefs made me attuned to the presence of ideology in rhetoric–ideology that is pretending not to be ideology. Every intellectual framework, religious or scientific, has certain assumptions that are so basic to the worldview that they’re not questioned. It’s more subtle in scientific discourse, but there are still premises that are taken for granted, or ideas that people have only recently started to question. The topic of consciousness, for example, is not really taken seriously in AI at the moment. It’s too vague, it has a lot of metaphysical baggage. People in the field tend to focus exclusively on intelligence. But they often appear to be talking around the idea of consciousness, especially when they’re discussing the limitations of certain systems. Another problem I discuss in the book is the question of what matter actually is. It’s such a basic question that most people never really think about it. But recently there’s been a renewed interest in panpsychism, the idea that all matter is conscious. More mainstream thinkers object; they think this is absurd. But then when they’re challenged to say what matter actually consists of, they argue that it’s not a relevant question. Matter is matter, end of debate. 

CC: I was thinking about the distinction you make between assuming that plants have whatever quality makes humans special and assuming humans have whatever banal quality makes plants function. You make parallel comparisons between the thinking behind artificial intelligence and religion throughout the text. Could you make a similar distinction in this case: are you assigning wishful thinking–leaps of faith–to artificial intelligence discourse, or are you assigning the rationality of artificial intelligence discourse to religious thought? Is this a matter of debunking AI discourse or elevating religious discourse, or neither? 

MO: I’m more interested in uncovering the wishful thinking lurking in technological discourse. I don’t think religious discourse is especially rational. I don’t say that in a pejorative sense. It’s not ‘irrational,’ per se (though some of it is, obviously) so much as ‘non-rational.’ 

I’m interested in frameworks like transhumanism, which shares some things with religion but is rooted in empirical realities. We have Moore’s Law, which says that computing power is increasing at an exponential rate. All of the technologies that would be needed for mind-uploading and digital immortality are theoretically plausible–there’s nothing about them that would require a supernatural force, at any rate. But these ideas are also clearly appealing to those deep emotional longings that were once satisfied by religion. It’s very difficult to accept the reality of death, especially if you don’t believe there’s anything after it. What appealed to me about transhumanism, when I first discovered it, is that it promised me everything that Christianity promised me, but through science. 

So yeah, there’s a lot of wishful thinking that is sort of sailing under the guise of empiricism, and I think this is becoming more true as secularization advances. The more that we eradicate those traditional sources of meaning, the more we’re going to look for them in other areas of our culture, including in science and technology. And I think there are dangers in mixing those two pursuits. Max Weber wrote that if you find yourself with religious or spiritual longing, you should just go to church. Don’t try to find transcendence in science; that’s not an attitude to bring into the lab. 

TW: To what extent do you entertain fears that technology will exceed human functioning? You discuss the importance of being explicit about terms such as consciousness, thinking, and information, so I will ask: where do you see gaps between the terms we are using, such as consciousness, and what is really happening with technology? 

MO: The largest threat with AI is not the classic science fiction scenario where robots are conscious, have a will of their own, and try to kill us out of some malicious intent to take over the world. The more likely possibility is that AI will exceed human intelligence, but it won’t have any idea what it’s doing. The machines won’t have a will of their own, or consciousness. 

But in a way, that scenario is just as dangerous. This is the concern Nick Bostrom illustrated with his famous example of the paperclip maximizer. You program a machine with a very simple goal–for example, you train it to maximize the number of paperclips in its possession. If there were an intelligence explosion and the machine gained infinite intelligence, it would essentially destroy the world. It would take all of the available resources to create paperclip factories, killing humanity to fulfill this aim. That’s not because it’s an “evil machine.” It’s just doing what it was programmed to do. 

Norbert Wiener, the godfather of cybernetics, talked about this problem in his 1964 book God & Golem, Inc. He compared it to those folktales where a genie offers to grant someone a wish, but the person words their wish carelessly and it creates some catastrophic scenario. Computers, too, are very literal. So the terms we use really matter. The stakes are especially high with machine learning systems. It’s difficult to know how they are going to evolve or what new, creative means they might discover for fulfilling their programmed objectives. 
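[Bostrom’s point is about objective specification rather than malice, and a toy sketch makes it concrete: an optimizer told only to maximize paperclips will convert every resource it can reach, because nothing in its objective says otherwise. Everything below–the resource pool, the conversion rate–is invented for illustration and is not a model of any real system.]

```python
# Toy illustration of the paperclip maximizer: a single objective and no
# other values, so the "agent" consumes every resource available to it.
# All quantities here are invented for illustration.
def paperclip_maximizer(resources, clips_per_unit=100):
    paperclips = 0
    while resources > 0:
        resources -= 1                 # consume one unit of resources
        paperclips += clips_per_unit   # turn it into paperclips
        # Note what is absent: no term for human welfare, ecosystems, or
        # anything else. The objective is satisfied only by more paperclips.
    return paperclips

print(paperclip_maximizer(resources=10))  # 1000
```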

TW: To what extent are David Chalmers’ claims that GPT-3 possesses consciousness warranted? Do you buy into that claim? 

MO: [Laughs] I don’t really buy into that claim. And I think that Chalmers is being a tad facetious. I find it hard to believe he actually thinks that.

I’m more interested in the point where it no longer matters that AI doesn’t have consciousness. You can already see glimpses of this with GPT-3, where some of its writing is so convincing, it’s hard to believe that there is no human consciousness behind it. Add to that our natural tendency to anthropomorphize. The anthropologist Stewart Guthrie talks about how we are especially prone to assign human qualities to a machine that uses language because we’ve only ever encountered humans with that ability. Now we’ve created these systems in our image, with our language abilities, and our evolved tendencies make it nearly impossible not to see them as human, even if we know on an intellectual level that they’re not conscious. 

TW: Do you think developments in natural language processing, such as GPT, prompt a reassessment of our traditional hermeneutic concepts such as “text” and “author”? 

MO: There’s a way in which these systems literalize a lot of the poststructuralist theory that arose around the middle of the last century. The death of the author, for example, or the Lacanian idea that when we use language we draw from a public treasury of speech rather than consciously translating our preexisting thoughts into words. We are a medium for this amorphous system of language that is working through us. 

Natural language processing models are doing precisely that. They’re drawing from a public treasury of language, the internet, and working blindly to produce language that looks like our language. It raises the question: what are we humans doing that’s different from machines? To what extent do we understand what we’re saying? Is it unconscious or conscious? If you read about those systems, it’s very difficult to avoid questioning our own use of language and what it means to understand language as a human.

TW: Where do you place the role of the writer in a world where algorithms can generate believable, coherent, human-sounding prose? 

MO: I had this conversation with a writer friend a few weeks ago. She had asked whether I could ever connect with a book written by an algorithm, assuming it was as convincing and powerful as a novel authored by a great writer. Would knowing it was written by a machine make a difference?  I didn’t know how to answer, and I still don’t know how to answer. So much of the pleasure of reading, for me, is the sense of connecting with another consciousness. That feeling of being less alone. I suspect that if an algorithm ever produced a great work of literature, I would either not connect with the text, or I would end up convincing myself that the system was truly conscious. 

I do worry sometimes about my economic livelihood as a writer. Despite their limitations, the systems are probably going to be able to do a lot of standard magazine writing very soon. And in a sense, we’ve already paved the way for this to happen. We’ve reduced writers to “content creators,” a term that implies that the writing is secondary to the clicks and the ad revenue it brings in. Magazines have been, for a long time, using algorithms to determine what content gets the most views, which tends to be the content that hits some lowest common denominator–whatever is most popular. And language models are very good at producing this type of writing, stuff that isn’t especially provocative or original. Content that uses a very elementary level of language and avoids any kind of artistic flourishes. That’s not to say that writers are going to disappear or become obsolete. But I do think we’re going to witness a lot of changes to the structure of the media world. 

TW: Going off of this, what do you think about the general hesitation to grant computers attributes such as “creativity” or artistic “genius”? Is this an example of, as you put it, humans “moving the bar” for intelligence in order to “maintain our sense of distinction” as human beings? 

MO: Daniel Susskind, an economist who writes about automation, uses the phrase “the intelligence of the gaps” to refer to this tendency to define human intelligence in relation to machines. He’s alluding to “god of the gaps” theology, the notion that we attribute to God anything that cannot be explained by science. We do the same thing with machines. Whenever they come up against a limitation, we point to that and say there, that’s a distinctively human quality. But the bar keeps moving.

The cybernetic pioneers in the 1950s and 1960s wanted to build an enormous intellect that could beat humans at chess and prove complex theorems. This goes back to the medieval idea that humans are unique, compared to animals, because we’re rational beings. For centuries, this is what made us distinct. But as soon as computers could beat us at chess, the terms shifted and we began to say that to be human was to be social and emotional, with feelings and intuition. 

That’s a natural reaction to have, but I am concerned about the ad hoc nature of these definitions, the fact that we are reconceiving what it means to be human every time a computer develops a new skill. What does it mean for humans if GPT-3 can write beautiful poetry and perform other creative tasks? For years, automation experts have advised us to work on being more creative so that we could get jobs that wouldn’t be outsourced to machines. Well, if computers can produce sonnets and compose classical music, to what extent is creativity a unique human quality? 

CC: I am curious about how well GPT-3 writes, because my knowledge is limited to your article and conversations with tech friends who say, in a condescending manner, “We are coming for your jobs!” We may question whether we are writing better over the course of history, but we cannot deny that we are expanding and perverting genres, something we assign value to in literary communities. Does GPT-3 have the ability to expand and pervert genres in the way that Piers Plowman can? 

MO: The way it works, to my understanding, is that the algorithm identifies the genre of whatever text you feed in as input, and then mimics it for the output. You input a Q&A or short story, and it will recognize that genre and continue in that vein, based on the examples it’s consumed during training. I don’t know if it can blur or innovate genres; I suppose that you could feed it genre-bending work and ask it to mimic it, but I wouldn’t really call that innovation. 

I would like to think–and maybe this is my own wishful thinking, trying to maintain some human distinction–that it would need some other kind of intelligence to innovate on the level of genre, something that the models do not currently have. That’s not to say that they won’t once they have more parameters. 

Thomas, you have experience with the algorithm and may have more to say? 

TW: I gave GPT my own poetry and prose, and I had to manipulate the text to resemble those genres because it currently does not know what a genre is. It does not have the human categorizations of genre because it is still interpreting everything as text, as ones and zeros. You can give it a genre-defying text, but it is up to the reader to say whether it is prose or poetry or something in-between. It doesn’t yet have a theory of language or literature. 
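[For readers who want a concrete sense of the prompt-continuation behavior described above, here is a minimal sketch using the Hugging Face transformers library with GPT-2, a smaller, publicly available precursor to GPT-3, purely as an illustration. The model predicts likely next words, so its output tends to mirror whatever shape the prompt establishes; the prompts here are made up for the example.]

```python
# Minimal sketch of prompt continuation with a small public model (GPT-2),
# standing in for the much larger GPT-3. The model extends whatever style
# of text it is given; it has no explicit concept of "genre."
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "Q: What is the soul?\nA:",                    # Q&A-shaped prompt
    "Once upon a time, in a village by the sea,",  # story-shaped prompt
]

for prompt in prompts:
    result = generator(prompt, max_length=60, num_return_sequences=1)
    print(result[0]["generated_text"])
    print("---")
```

[Whether the Q&A prompt actually gets a Q&A-style continuation depends on the model and sampling settings; the point is only that the continuation is shaped by the prompt, not by any internal notion of genre.]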

TW: One recurring theme I noticed in this book is the limitations of language in accurately conveying subjective experiences such as consciousness, and the perils of speaking metaphorically. I couldn’t help but think of the work of Wittgenstein and his claim that many of the problems debated in philosophy are actually just the result of a miscommunication or a misuse of language. Did you encounter the limitations of language–such as its failure to accurately communicate subjective experience–while you were writing this book? 

MO: Yes, all the time. The first draft of the book did not have much of my personal experience in it, and that was the result of my being overwhelmed by the research and wanting to privilege it. I kept resisting the use of the “I,” and then I would get lost in the writing and not know where I was going. The book only started to come together when I began discussing my personal experiences.

Subjective experience tends to be my anchor, my connection to the world. That doesn’t always require the use of first person, but it does require thinking about what’s at stake for me, and why I became interested in whatever question I’m exploring. When I’m not thinking about that, the words on the page begin to lose their meaning. It’s almost like I become some version of GPT-3, this disembodied machine that is blindly manipulating symbols apart from any real-world knowledge.  

The subjective, of course, can be treacherous territory. We have limitations and biases; we don’t always see our experience clearly. But I do think that the self, in writing, is useful as a limit, or a lens. It becomes a kind of filter, a way of understanding a world that is overwhelmingly complex from a specific vantage. I think that we need those limits, to some extent, in order to write. 

CC: For our final question, what are you reading and writing that you want our readers to know about? 

MO: I’m working on an essay for Harper’s on habit and automation, exploring the virtue of ritual in a world where many routine, repeatable tasks are being handed over to machines. I’ve been reading mostly fiction lately. The research for the book was somewhat intense, and I’ve been eager for something lighter in comparison to some of the darker subjects in my book [Laughs], so I’ve been reading and enjoying Jonathan Franzen’s new novel Crossroads. 

For more about Meghan O’Gieblyn, visit her website at http://www.meghanogieblyn.com/.