
Finding Meaning in Human Lives

AI is replacing humans in the workplace, with tech companies among the quickest to simply innovate people out of the job market altogether. Amazon announced plans to lay off up to 30,000 people. The company hasn’t commented publicly on why, but Amazon’s CEO Andy Jassy has talked about how AI will eventually replace many of his white-collar employees. And it’s likely the money saved will be used to — you guessed it — build out more AI infrastructure. 

This is just the beginning. “Innovation related to artificial intelligence could displace 6-7% of the US workforce if AI is widely adopted,” says a recent Goldman Sachs report.

In the last week, over 53,000 people signed a statement calling for “a prohibition on the development of superintelligence.” A wide coalition of notable figures, from Nobel-winning scientists to senior politicians, writers, British royals, and radio shock jocks agreed that AI companies are racing to build superintelligence with little regard for concerns that include “human economic obsolescence and disempowerment.”

The petition against superintelligence development could be the beginning of organized political resistance to AI's unchecked advance. The signatories span continents and ideologies, suggesting a rare consensus emerging around the need for democratic oversight of AI development. The question is: can it organize quickly enough to influence policy before the key decisions are made in Silicon Valley boardrooms and government backrooms?

But it’s not just jobs we could lose. The petition talks about the “losses of freedom, civil liberties, dignity… and even potential human extinction.” It reflects a deeper unease about the quasi-religious zeal of AI evangelists who view superintelligence not as a choice to be democratically decided, but as an inevitable evolution the tech bros alone can shepherd.

Coda explored this messianic ideology at length in "Captured," a six-part investigative series available as a podcast on Audible and as a series of articles on our website, in which we dove deep into the future envisioned by the tech elite for the rest of us.

Philosopher Nick Bostrom, author of the book Superintelligence: Paths, Dangers, Strategies.
The Washington Post / Contributor via Getty Images

During our reporting, data scientist Christopher Wylie, best known as the Cambridge Analytica whistleblower, and I spoke to the Swedish philosopher Nick Bostrom, whose 2014 book foresaw the possibility that our world might be taken over by an uncontrollable artificial superintelligence.

A decade later, with AI companies racing toward Artificial General Intelligence with minimal oversight, Bostrom’s concerns have become urgent. What struck me most during our conversation was how he believes we’re on the precipice of a huge societal paradigm shift, and that it’s unrealistic to think otherwise. It’s implausible, Bostrom says, to think human civilization will simply continue to potter along as it is.

Do we believe in Bostrom’s version of the future where society plunges into dystopia or utopia? Or is there a middle way? Judge for yourself whether his warnings still sound theoretical.

This conversation has been edited and condensed for clarity.

Christopher Wylie: To start, could you define what you mean by superintelligence and how it differs from the AI we see today?

Nick Bostrom: Superintelligence is a cognitive system that not just matches but exceeds human cognitive abilities. If we're talking about general superintelligence, it would exceed our cognitive capacities in all fields — scientific creativity, common sense, general wisdom.

Isobel Cockerell: What kind of future are we looking at — especially if we manage to develop superintelligence?

Bostrom: So I think many people have the view that the most likely scenario is that things more or less continue as they have — maybe a little war here, a cool new gadget there, but basically the human condition continues indefinitely. 

But I think that looks pretty implausible. It’s more likely that it will radically change. Either for the much better or for the much worse.

Over a longer timeframe — and these days I don’t think in terms of that many years — we are approaching a critical juncture in human affairs, where we will either go extinct or suffer some comparably bad fate, or else be catapulted into some form of utopian condition.

You could think of the human condition as a ball rolling along a thin beam — and it will probably fall off that beam. But it’s hard to predict in which direction.

Wylie: When you think about these two almost opposite outcomes — one where humanity is subsumed by superintelligence, and the other where technology liberates us into a utopia — do humans ultimately become redundant in either case?

Bostrom: In the sense of practical utility, yes — I think we will reach, or at least approximate, a state where human labor is not needed for anything. There’s no practical objective that couldn’t be better achieved by machines, by AIs and robots.

But you have to ask what it’s all for. Possibly we have a role as consumers of all this abundance. It’s like having a big Disneyland — maybe in the future you could automate the whole park so no human employees are needed. But even then, you still need the children to enjoy it.

If we really take seriously this notion that we could develop AI that can do everything we can do, and do it much better, we will then face quite profound questions about the purpose of human life. If there’s nothing we need to do — if we could just press a button and have everything done — what do we do all day long? What gives meaning to our lives?

And so ultimately, I think we need to envisage a future that accommodates humans, animals, and AIs of various different shapes and levels — all living happy lives in harmony.

Cockerell: How far do you trust the people in Silicon Valley to guide us toward a better future?

Bostrom: I mean, there’s a sense in which I don’t really trust anybody. I think we humans are not fully competent here — but we still have to do it as best we can.

If you were a divine creature looking down, it might seem like a comedy: these ape-like humans running around building super-powerful machines they barely understand, occasionally fighting with rocks and stones, then going back to building again. That must be what the human condition looks like from the point of view of some billion-year-old alien civilization.

So that’s kind of where we are.

Ultimately, it’ll be a much bigger conversation about how this technology should be used. If we develop superintelligence, all humans will be exposed to its risks — even if you have nothing to do with AI, even if you’re a farmer somewhere who’s never heard of it, you’ll still be affected. So it seems fair that if things go well, everyone should also share some of the upside.

You don’t want to pre-commit to doing all of this open-source. For example, Meta is pursuing open-source AI — so far, that’s good. But at some point, these models will become capable of lending highly useful assistance in developing weapons of mass destruction.

Now, before releasing their model, they fine-tune it to refuse those requests. But once they open-source it, everyone has access to the model weights. It’s easy to remove that fine-tuning and unlock these latent capabilities.

This works great for normal software and relatively modest AI, but there might be a level where it just democratizes mass destruction.

Wylie: But on the flip side — if you concentrate that power in the hands of a few people authorized to build and use the most powerful AIs, isn’t there also a high risk of abuse? Governments or corporations misusing it against people or other groups?

Bostrom: When we figure out how to make powerful superintelligence, if development is completely open — with many entities, companies, and groups all competing to get there first — and it turns out it’s actually hard to align these systems, you might need a year or two to train, make sure it’s safe, test and double-test before really ramping things up. That just might not be possible in an open competitive scenario.

You might be responsible — one of the lead developers who chooses to do it carefully — but that just means you forfeit the lead to whoever is willing to take more risks. If there are 10 or 20 groups racing in different countries and companies, there will always be someone willing to cut more corners.

Wylie: More broadly, do you have conversations with people in Silicon Valley — Sam Altman, Elon Musk, the leaders of major tech companies — about your concerns, and their role in shaping or preventing some of the long-term risks of AI?

Bostrom: Yeah. I’ve had quite a few conversations. What’s striking, when thinking specifically about AI, is that many of the early people in the frontier labs have, for years, been seriously engaged with questions about what happens when AI succeeds — superintelligence, alignment, and so on.

That’s quite different from the typical tech founder focused on capturing markets and launching products. For historical reasons, many early AI researchers have been thinking ahead about these deeper issues for a long time, even if they reach different conclusions about what to do.

And it’s always possible to imagine a more ideal world, but relatively speaking, I think we’ve been quite lucky so far. The impact of current AI technologies has been mostly positive — search engines, spam filters, and now these large language models that are genuinely useful for answering questions and helping with coding.

I would imagine that the benefits will continue to far outweigh the downsides — at least until the final stage, where it becomes more of an open question whether we end up with a kind of utopia or an existential catastrophe.

A version of this story was published in this week’s Coda Currents newsletter. Sign up here.

Your Early Warning System

This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us? You can listen to the Captured audio series on Audible now.



“It’s a devil’s machine.”

Tech leaders say AI will bring us eternal life, help us spread out into the stars, and build a utopian world where we never have to work. They describe a future free of pain and suffering, in which all human knowledge will be wired into our brains. Their utopian promises sound more like proselytizing than science, as if AI were the new religion and the tech bros its priests. So how are real religious leaders responding?

As Georgia's first female Baptist bishop, Rusudan Gotsiridze challenges the doctrines of the Orthodox Church, and is known for her passionate defence of women’s and LGBTQ+ rights. She stands at the vanguard of old religion, an example of its attempts to modernize — so what does she think of the new religion being built in Silicon Valley, where tech gurus say they are building a superintelligent, omniscient being in the form of Artificial General Intelligence?

Gotsiridze first tried to use AI a few months ago. The result chilled her to the bone. It made her wonder whether Artificial Intelligence really is a benevolent force, and how she should respond to it from the perspective of her religious beliefs and practices.

In this conversation with Coda’s Isobel Cockerell, Bishop Gotsiridze discusses the religious questions around AI: whether AI can really help us hack back into paradise, and what to make of the outlandish visions of Silicon Valley’s powerful tech evangelists.

Bishop Rusudan Gotsiridze and Isobel Cockerell in conversation at the ZEG Storytelling Festival in Tbilisi last month. Photo: Dato Koridze.

This conversation took place at ZEG Storytelling Festival in Tbilisi in June 2025. It has been lightly edited and condensed for clarity. 

Isobel: Tell me about your relationship with AI right now. 

Rusudan: Well, I’d like to say I’m an AI virgin. But maybe that’s not fully honest. I had one contact with ChatGPT. I didn’t ask it to write my Sunday sermon. I just asked it to draw my portrait. How narcissistic of me. I said, “Make a portrait of Bishop Rusudan Gotsiridze.” I waited and waited. The portrait looked nothing like me. It looked like my mom, who passed away ten years ago. And it looked like her when she was going through chemo, with her puffy face. It was really creepy. So I will think twice before asking ChatGPT anything again. I know it’s supposed to be magical... but that wasn’t the best first date. 

AI-generated image via ChatGPT / OpenAI.

Isobel: What went through your mind when you saw this picture of your mother? 

Rusudan: I thought, “Oh my goodness, it’s really a devil’s machine.” How could it go so deep? Find my facial features and connect them with someone who didn’t look like me? I take more after my paternal side. The only thing I could recognize was the priestly collar and the cross. Okay. Bishop. Got it. But yes, it was really very strange.

Isobel: I find it so interesting that you talk about summoning the dead through Artificial Intelligence. That’s something happening in San Francisco as well. When I was there last summer, we heard about this movement that meets every Sunday. Instead of church, they hold what they call an “AI séance,” where they use AI to call up the spirit world. To call up the dead. They believe the generative art that AI creates is a kind of expression of the spirit world, an expression of a greater force.

They wouldn’t let us attend. We begged, but it was a closed cult. Still, a bunch of artists had the exact same experience you had: they called up these images and felt like they were summoning them, not from technology, but from another realm. 

Rusudan: When you’re a religious person dealing with new technologies, it’s uncomfortable. Religion — Christianity, Protestantism, and many others — has earned a very cautious reputation throughout history because we’ve always feared progress.

Remember when we thought printing books was the devil’s work? Later, we embraced it. We feared vaccinations. We feared computers, the internet. And now, again, we fear AI.

It reminds me of the old fable about a young shepherd who loved to prank his friends by shouting “Wolves! Wolves!” until one day, the wolves really came. He shouted, but no one believed him anymore.

We’ve been shouting “wolves” for centuries. And now, I’m this close to shouting it again, but I’m not sure. 

Isobel: You said you wondered if this was the devil’s work when you saw that picture of your mother. It’s quite interesting. In Silicon Valley, people talk a lot about AI bringing about the rapture, apocalypse, hell.

They talk about the real possibility that AI is going to kill us all, what the endgame or extinction risk of building superintelligent models will be. Some people working in AI are predicting we’ll all be dead by 2030.

On the other side, people say, “We’re building utopia. We’re building heaven on Earth. A world where no one has to work or suffer. We’ll spread into the stars. We’ll be freed from death. We’ll become immortal.”

I’m not a religious person, but what struck me is the religiosity of these promises. And I wanted to ask you — are we hacking our way back into the Garden of Eden? Should we just follow the light? Is this the serpent talking to us?

Rusudan: I was listening to a Google scientist. He said that in the near future, we’re not heading to utopia but dystopia. It’s going to be hell on Earth. All the world’s wealth will be concentrated in a small circle, and poverty will grow. Terrible things will happen before we reach utopia.

Listening to him, it really sounded like the Book of Revelation. First the Antichrist comes, and then Christ.

Because of my Protestant upbringing, I’ve heard so many lectures about the exact timeline of the Second Coming. Some people even name the day, hour, place. And when those times pass, they’re frustrated. But they carry on calculating. 

It’s hard for me to speak about dystopia, utopia, or the apocalyptic timeline, because I know nothing is going to be exactly as predicted.

The only thing I’m afraid of in this Artificial Intelligence era is my 2-year-old niece. She’s brilliant. You can tell by her eyes. She doesn’t speak our language yet. But phonetically, you can hear Georgian, English, Russian, even Chinese words from the reels she watches non-stop.

That’s what I’m afraid of: us constantly watching our devices and losing human connection. We’re going to have a deeply depressed young generation soon. 

I used to identify as a social person. I loved being around people. That’s why I became a priest. But now, I find it terribly difficult to pull myself out of my house to be among people. And it’s not just a technology problem — it’s a human laziness problem.

When we find someone or something to take over our duties, we gladly hand them over. That’s how we’re using this new technology. Yes, I’m in sermon mode now — it’s a Sunday, after all. 

I want to tell you an interesting story from my previous life. I used to be a gender expert, training people about gender equality. One example I found fascinating: in a Middle Eastern village without running water, women would carry vessels to the well every morning and evening. It was their duty.

Western gender experts saw this and decided to help. They installed a water supply. Every woman got running water in her kitchen: happy ending. But very soon, the pipeline was intentionally broken by the women. Why? Because that water-fetching routine was the only excuse they had to leave their homes and see their friends. With running water, they became captives to their household duties.

One day, we may also not understand why we’ve become captives to our own devices. We’ll enjoy staying home and not seeing our friends and relatives. I don’t think we’ll break that pipeline and go out again to enjoy real life.

Isobel: It feels like it’s becoming more and more difficult to break that pipeline. It’s not really an option anymore to live without the water, without technology. 

Sometimes I talk with people in a movement called the New Luddites. They also call themselves the Dumbphone Revolution. They want to create a five-to-ten percent faction of society which doesn’t have a smartphone, and they say that will help us all, because it will mean the world will still have to cater to people who don’t participate in big tech, who don’t have it in their lives. But is that the answer for all of us? To just smash the pipeline to restore human connection? Or can we have both?

Rusudan: I was a new mom in the nineties in Georgia. I had two children at a time when we didn’t have running water. I had to wash my kids’ clothes in the yard in cold water, summer and winter. I remember when we bought our first washing machine.  My husband and I sat in front of it for half an hour, watching it go round and round. It was paradise for me for a while. 

Now this washing machine is there and I don't enjoy it anymore. It's just a regular thing in my life. And when I had to wash my son’s and daughter-in-law’s wedding outfits, I didn’t trust the machine. I washed those clothes by hand. There are times when it’s important to do things by hand.

Of course, I don’t want to go back to a time without the internet when we were washing clothes in the yard, but there are things that are important to do without technology.

I enjoy painting, and I paint quite a lot with watercolors. So far, I can tell which paintings are AI and which are real. Every time I look at an AI-made watercolor, I can tell it’s not a human painting. It is a technological painting. And it's beautiful. I know I can never compete with this technology.

But that feeling, when you dip your brush in the water — sometimes I accidentally put it in my coffee cup — and when you put that brush on the paper and the pigment spreads, that feeling can never be replaced by any technology.

Isobel: As a writer, I'm now pretty good, I think, at knowing if something is AI-written or not. I'm sure in the future it will get harder to tell, but right now, there are little clues. There’s this horrible construction that AI loves: something is not just X, it’s Y. For example: “Rusudan is not just a bishop, she’s an oracle for the LGBTQ community in Georgia.” Even if you tell it to stop using that construction, it can’t. Same for the endless em-dashes: I can’t get ChatGPT to stop using them no matter how many times or how adamantly I prompt it. It's just bad writing.

It’s missing that fingerprint of imperfection that a human leaves: whether it’s an unusual sentence construction or an interesting word choice, I’ve started to really appreciate those details in real writing. I've also started to really love typos. My whole life as a journalist I was horrified by them. But now when I see a typo, I feel so pleased. It means a human wrote it. It’s something to be celebrated. It’s the same with the idea that you dip your paintbrush in the coffee pot and there’s a bit of coffee in the painting. Those are the things that make the work we make alive. 

There’s a beauty in those imperfections, and that’s something AI has no understanding of. Maybe it’s because the people building these systems want to optimize everything. They are in pursuit of total perfection. But I think that the pursuit of imperfection is such a beautiful thing and something that we can strive for.

Rusudan: Another thing I hope for with this development of AI is that it’ll change the formula of our existence. Right now, we’re constantly competing with each other. The educational system is that way. Business is that way. Everything is that way. My hope is that we can never be as smart as AI. Maybe one day, our smartness, our intelligence, will be defined not by how many books we have read, but by how much we enjoy reading books, enjoy finding new things in the universe, and how well we live life and are happy with what we do. I think there is potential in the idea that we will never be able to compete with AI, so why don’t we enjoy the book from cover to cover, or the painting with the coffee pigment or the paint? That’s what I see in the future, and I’m a very optimistic person. I suppose here you’re supposed to say “Hallelujah!”

Isobel: In our podcast, CAPTURED, we talked with engineers and founders in Silicon Valley whose dream for the future is to install all human knowledge in our brains, so we never have to learn anything again. Everyone will speak every language! We can rebuild the Tower of Babel! They talk about the future as a paradise. But my thought was, what about finding out things? What about curiosity? Doesn’t that belong in paradise? Certainly, as a journalist, I know some people are in it for the impact and the outcome, but I’m in it for finding out, finding the story — that process of discovery.

Rusudan: It’s interesting — this idea of paradise as a place where we know everything. One of my students once asked me the same thing you just did. “What about the joy of finding new things? Where is that, in paradise?” Because in the Bible, Paul says that right now, we live in a dimension where we know very little, but there will be a time when we know everything.

In the Christian narrative, paradise is a strange, boring place where people dress in funny white tunics and play the harp. And I understand that idea back then was probably a dream for those who had to work hard for everything in their everyday life — they had to chop wood to keep their family warm, hunt to get food for the kids, and of course for them, paradise was the place where they could just lie around and do nothing.

But I don’t think paradise will be a boring place. I think it will be a place where we enjoy working.

Isobel: Do you think AI will ever replace priests?

Rusudan: I was told that one day there will be AI priests preaching sermons better than I do. People are already asking ChatGPT questions they’re reluctant to ask a priest or a psychologist. Because it’s judgment-free and their secrets are safe…ish. I don’t pretend I have all the answers because I don’t. I only have this human connection. I know there will be questions I cannot answer, and people will go and ask ChatGPT. But I know that human connection — the touch of a hand, eye contact — can never be replaced by AI. That’s my hope. So we don’t need to break those pipelines. We can enjoy the technology, and the human connection too.






The Vatican challenges AI’s god complex

As Rome prepared to select a new pope, few beyond Vatican insiders were focused on what the transition would mean for the Catholic Church's stance on artificial intelligence. 

Yet Pope Francis established the Church as an erudite, insightful voice on AI ethics. "Does it serve to satisfy the needs of humanity to improve the well-being and integral development of people?" he asked G7 leaders last year. "Or does it, rather, serve to enrich and increase the already high power of the few technological giants despite the dangers to humanity?"

Francis – and the Vatican at large – had called for meaningful regulation in a world where few institutions dared challenge the tech giants.

During the last months of Francis’s papacy, Silicon Valley, aided by a pliant U.S. government, ramped up its drive to rapidly consolidate power.

OpenAI is expanding globally, tech CEOs are becoming a key component of presidential diplomatic missions, and federal U.S. lawmakers are attempting to effectively deregulate AI for the next decade. 

For those tracking the collision between technological and religious power, one question looms large: Will the Vatican continue to be one of the few global institutions willing to question Silicon Valley's vision of our collective future?

Watching the chimney on television during Pope Benedict’s election had captured my imagination as a child brought up in a secular, Jewish-inflected household. I longed to see that white smoke in person. The rumors in Rome last Thursday morning were that the matter wouldn’t be settled that day. So I was furious when I was stirred from my desk in the afternoon by the sound of pealing bells all over Rome. “Habemus papam!” I heard an old nonna call down to her husband in the courtyard.

As the bells tolled, I sprinted out onto the street and joined people streaming from all over the city in the direction of St. Peter’s. In recent years, the time between white smoke and the new pope’s arrival on the balcony was as little as forty-five minutes. People poured over bridges and up the Via della Conciliazione towards the famous square. Among the rabble I spotted a couple of friars darting through the crowd, making speedier progress than anyone, their white cassocks flapping in the wind. Together, the friars and I made it through the security checkpoints and out into the square just as a great roar went up.

The initial reaction to the announcement that Robert Francis Prevost would be the next pope, with the name Leo XIV, was subdued. Most people around me hadn’t heard of him — he wasn’t one of the favored cardinals, he wasn’t Italian, and we couldn’t even Google him, because there were so many people gathered that no one’s phones were working. A young boy managed to get on the phone to his mamma, and she related the information about Prevost to us via her son. Americano, she said. From Chicago.

A nun from an order in Tennessee piped up that she had met Prevost once. She told us that he was mild-mannered and kind, that he had lived in Peru, and that he was very internationally-minded. “The point is, it’s a powerful American voice in the world, who isn’t Trump,” one American couple exclaimed to our little corner of the crowd. 

It only took a few hours before Trump supporters, led by former altar boy Steve Bannon, realized this American pope wouldn’t be a MAGA pope. Leo XIV had posted on X in February, criticizing JD Vance, the Trump administration’s most prominent Catholic.

"I mean it's kind of jaw-dropping," Bannon told the BBC. "It is shocking to me that a guy could be selected to be the Pope that had had the Twitter feed and the statements he's had against American senior politicians."

Laura Loomer, a prominent far-right pro-Trump activist, aired her own misgivings on X: “He is anti-Trump, anti-MAGA, pro-open borders, and a total Marxist like Pope Francis.”

As I walked home with everybody else that night – with the friars, the nuns, the pilgrims, the Romans, the tourists caught up in the action – I found myself thinking about our "Captured" podcast series, which I've spent the past year working on. In our investigation of AI's growing influence, we documented how tech leaders have created something akin to a new religion, with its own prophets, disciples, and promised salvation.

Walking through Rome's ancient streets, the dichotomy struck me: here was the oldest continuous institution on earth selecting its leader, while Silicon Valley was rapidly establishing what amounts to a competing belief system. 

Would this new pope, taking the name of Leo — deliberately evoking Leo XIII who steered the church through the disruptions of the Industrial Revolution — stand against this present-day technological transformation that threatens to reshape what it means to be human?

I didn't have to wait long to find out. In his address to the College of Cardinals on Saturday, Pope Leo XIV said: "In our own day, the Church offers to everyone the treasury of her social teaching, in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice and labor."

Hours before the new pope was elected, I spoke with Molly Kinder, a fellow at the Brookings Institution who’s an expert in AI and labor policy. Her research on the Vatican, labor, and AI was published with Brookings following Pope Francis’s death.

She described how the Catholic Church has a deeply held belief in the dignity of work — and how AI evangelists’ promise to create a post-work society with artificial intelligence is at odds with that.

“Pope John Paul II wrote something that I found really fascinating. He said, ‘work makes us more human.’ And Silicon Valley is basically racing to create a technology that will replace humans at work,” Kinder, who was raised Catholic, told me. “What they're endeavoring to do is disrupt some of the very core tenets of how we've interpreted God's mission for what makes us human.”


