
“It’s a devil’s machine.”

Tech leaders say AI will bring us eternal life, help us spread out into the stars, and build a utopian world where we never have to work. They describe a future free of pain and suffering, in which all human knowledge will be wired into our brains. Their utopian promises sound more like proselytizing than science, as if AI were the new religion and the tech bros its priests. So how are real religious leaders responding?

As Georgia's first female Baptist bishop, Rusudan Gotsiridze challenges the doctrines of the Orthodox Church, and is known for her passionate defence of women’s and LGBTQ+ rights. She stands at the vanguard of old religion, an example of its attempts to modernize — so what does she think of the new religion being built in Silicon Valley, where tech gurus say they are building a superintelligent, omniscient being in the form of Artificial General Intelligence?

Gotsiridze first tried to use AI a few months ago. The result chilled her to the bone. It made her wonder whether artificial intelligence was really a benevolent force, and how she should respond to it from the perspective of her religious beliefs and practices.

In this conversation with Coda’s Isobel Cockerell, Bishop Gotsiridze discusses the religious questions around AI: whether AI can really help us hack back into paradise, and what to make of the outlandish visions of Silicon Valley’s powerful tech evangelists.

Bishop Rusudan Gotsiridze and Isobel Cockerell in conversation at the ZEG Storytelling Festival in Tbilisi last month. Photo: Dato Koridze.

This conversation took place at the ZEG Storytelling Festival in Tbilisi in June 2025. It has been lightly edited and condensed for clarity.

Isobel: Tell me about your relationship with AI right now. 

Rusudan: Well, I’d like to say I’m an AI virgin. But maybe that’s not fully honest. I had one contact with ChatGPT. I didn’t ask it to write my Sunday sermon. I just asked it to draw my portrait. How narcissistic of me. I said, “Make a portrait of Bishop Rusudan Gotsiridze.” I waited and waited. The portrait looked nothing like me. It looked like my mom, who passed away ten years ago. And it looked like her when she was going through chemo, with her puffy face. It was really creepy. So I will think twice before asking ChatGPT anything again. I know it’s supposed to be magical... but that wasn’t the best first date. 

AI-generated image via ChatGPT / OpenAI.

Isobel: What went through your mind when you saw this picture of your mother? 

Rusudan: I thought, “Oh my goodness, it’s really a devil’s machine.” How could it go so deep? Find my facial features and connect them with someone who didn’t look like me? I take more after my paternal side. The only thing I could recognize was the priestly collar and the cross. Okay. Bishop. Got it. But yes, it was really very strange.

Isobel: I find it so interesting that you talk about summoning the dead through Artificial Intelligence. That’s something happening in San Francisco as well. When I was there last summer, we heard about this movement that meets every Sunday. Instead of church, they hold what they call an “AI séance,” where they use AI to call up the spirit world. To call up the dead. They believe the generative art that AI creates is a kind of expression of the spirit world, an expression of a greater force.

They wouldn’t let us attend. We begged, but it was a closed cult. Still, a bunch of artists had the exact same experience you had: they called up these images and felt like they were summoning them, not from technology, but from another realm. 

Rusudan: When you’re a religious person dealing with new technologies, it’s uncomfortable. Religion — Christianity, Protestantism, and many others — has earned a very cautious reputation throughout history because we’ve always feared progress.

Remember when we thought printing books was the devil’s work? Later, we embraced it. We feared vaccinations. We feared computers, the internet. And now, again, we fear AI.

It reminds me of the old fable about a young shepherd who loved to prank his friends by shouting “Wolves! Wolves!” until one day, the wolves really came. He shouted, but no one believed him anymore.

We’ve been shouting “wolves” for centuries. And now, I’m this close to shouting it again, but I’m not sure. 

Isobel: You said you wondered if this was the devil’s work when you saw that picture of your mother. It’s quite interesting. In Silicon Valley, people talk a lot about AI bringing about the rapture, apocalypse, hell.

They talk about the real possibility that AI is going to kill us all, about the endgame and extinction risk of building superintelligent models. Some people working in AI are predicting we’ll all be dead by 2030.

On the other side, people say, “We’re building utopia. We’re building heaven on Earth. A world where no one has to work or suffer. We’ll spread into the stars. We’ll be freed from death. We’ll become immortal.”

I’m not a religious person, but what struck me is the religiosity of these promises. And I wanted to ask you — are we hacking our way back into the Garden of Eden? Should we just follow the light? Is this the serpent talking to us?

Rusudan: I was listening to a Google scientist. He said that in the near future, we’re not heading to utopia but dystopia. It’s going to be hell on Earth. All the world’s wealth will be concentrated in a small circle, and poverty will grow. Terrible things will happen, before we reach utopia.

Listening to him, it really sounded like the Book of Revelation. First the Antichrist comes, and then Christ.

Because of my Protestant upbringing, I’ve heard so many lectures about the exact timeline of the Second Coming. Some people even name the day, hour, place. And when those times pass, they’re frustrated. But they carry on calculating. 

It’s hard for me to speak about dystopia, utopia, or the apocalyptic timeline, because I know nothing is going to be exactly as predicted.

The only thing I’m afraid of in this Artificial Intelligence era is my 2-year-old niece. She’s brilliant. You can tell by her eyes. She doesn’t speak our language yet. But phonetically, you can hear Georgian, English, Russian, even Chinese words from the reels she watches non-stop.

That’s what I’m afraid of: us constantly watching our devices and losing human connection. We’re going to have a deeply depressed young generation soon. 

I used to identify as a social person. I loved being around people. That’s why I became a priest. But now, I find it terribly difficult to pull myself out of my house to be among people. And it’s not just a technology problem — it’s a human laziness problem.

When we find someone or something to take over our duties, we gladly hand them over. That’s how we’re using this new technology. Yes, I’m in sermon mode now — it’s a Sunday, after all. 

I want to tell you an interesting story from my previous life. I used to be a gender expert, training people about gender equality. One example I found fascinating: in a Middle Eastern village without running water, women would carry vessels to the well every morning and evening. It was their duty.

Western gender experts saw this and decided to help. They installed a water supply. Every woman got running water in her kitchen: happy ending. But very soon, the pipeline was intentionally broken by the women. Why? Because that water-fetching routine was the only excuse they had to leave their homes and see their friends. With running water, they became captives to their household duties.

One day, we too may fail to understand that we’ve become captives to our own devices. We’ll enjoy staying home and not seeing our friends and relatives. I don’t think we’ll break that pipeline and go out again to enjoy real life.

Isobel: It feels like it’s becoming more and more difficult to break that pipeline. It’s not really an option anymore to live without the water, without technology. 

Sometimes I talk with people in a movement called the New Luddites. They also call themselves the Dumbphone Revolution. They want to create a five-to-ten percent faction of society which doesn’t have a smartphone, and they say that will help us all, because it will mean the world will still have to cater to people who don’t participate in big tech, who don’t have it in their lives. But is that the answer for all of us? To just smash the pipeline to restore human connection? Or can we have both?

Rusudan: I was a new mom in the nineties in Georgia. I had two children at a time when we didn’t have running water. I had to wash my kids’ clothes in the yard in cold water, summer and winter. I remember when we bought our first washing machine.  My husband and I sat in front of it for half an hour, watching it go round and round. It was paradise for me for a while. 

Now this washing machine is there and I don't enjoy it anymore. It's just a regular thing in my life. And when I had to wash my son’s and daughter-in-law’s wedding outfits, I didn’t trust the machine. I washed those clothes by hand. There are times when it’s important to do things by hand.

Of course, I don’t want to go back to a time without the internet when we were washing clothes in the yard, but there are things that are important to do without technology.

I enjoy painting, and I paint quite a lot with watercolors. So far, I can tell which paintings are AI and which are real. Every time I look at an AI-made watercolor, I can tell it’s not a human painting. It is a technological painting. And it’s beautiful. I know I can never compete with this technology.

But that feeling, when you put your brush in the water — sometimes I accidentally put it in my coffee cup — and when you put that brush on the paper and the pigment spreads, that feeling can never be replaced by any technology.

Isobel: As a writer, I’m now pretty good, I think, at knowing if something is AI-written or not. I’m sure in the future it will get harder to tell, but right now, there are little clues. There’s this horrible construction that AI loves: something is not just X, it’s Y. For example: “Rusudan is not just a bishop, she’s an oracle for the LGBTQ community in Georgia.” Even if you tell it to stop using that construction, it can’t. Same for the endless em-dashes: I can’t get ChatGPT to stop using them no matter how many times or how adamantly I prompt it. It’s just bad writing.

It’s missing that fingerprint of imperfection that a human leaves, whether it’s an unusual sentence construction or an interesting word choice. I’ve started to really appreciate those details in real writing. I’ve also started to really love typos. My whole life as a journalist I was horrified by them. But now when I see a typo, I feel so pleased. It means a human wrote it. It’s something to be celebrated. It’s the same with the idea that you dip your paintbrush in the coffee cup and there’s a bit of coffee in the painting. Those are the things that make the work we create feel alive.

There’s a beauty in those imperfections, and that’s something AI has no understanding of. Maybe it’s because the people building these systems want to optimize everything. They are in pursuit of total perfection. But I think that the pursuit of imperfection is such a beautiful thing and something that we can strive for.

Rusudan: Another thing I hope for with this development of AI is that it’ll change the formula of our existence. Right now, we’re constantly competing with each other. The educational system is that way. Business is that way. Everything is that way. My hope is that we can never be as smart as AI. Maybe one day, our smartness, our intelligence, will be defined not by how many books we have read, but by how much we enjoy reading books, enjoy finding new things in the universe, and how well we live life and are happy with what we do. I think there is potential in the idea that we will never be able to compete with AI, so why don’t we enjoy the book from cover to cover, or the painting with its coffee pigment? That’s what I see in the future, and I’m a very optimistic person. I suppose this is where you’re supposed to say “Hallelujah!”

Isobel: In our podcast, CAPTURED, we talked with engineers and founders in Silicon Valley whose dream for the future is to install all human knowledge in our brains, so we never have to learn anything again. Everyone will speak every language! We can rebuild the Tower of Babel! They talk about the future as a paradise. But my thought was, what about finding things out? What about curiosity? Doesn’t that belong in paradise? Certainly, as a journalist, some people are in it for the impact and the outcome, but I’m in it for finding out, finding the story — that process of discovery.

Rusudan: It’s interesting — this idea of paradise as a place where we know everything. One of my students once asked me the same thing you just did. “What about the joy of finding new things? Where is that, in paradise?” Because in the Bible, Paul says that right now, we live in a dimension where we know very little, but there will be a time when we know everything.

In the Christian narrative, paradise is a strange, boring place where people dress in funny white tunics and play the harp. And I understand that back then this idea was probably a dream for those who had to work hard for everything in their everyday lives — they had to chop wood to keep their family warm, hunt to get food for the kids, and of course for them, paradise was the place where they could just lie around and do nothing.

But I don’t think paradise will be a boring place. I think it will be a place where we enjoy working.

Isobel: Do you think AI will ever replace priests?

Rusudan: I was told that one day there will be AI priests preaching sermons better than I do. People are already asking ChatGPT questions they’re reluctant to ask a priest or a psychologist. Because it’s judgment-free and their secrets are safe…ish. I don’t pretend I have all the answers, because I don’t. I only have this human connection. I know there will be questions I cannot answer, and people will go and ask ChatGPT. But I know that human connection — the touch of a hand, eye contact — can never be replaced by AI. That’s my hope. So we don’t need to break those pipelines. We can enjoy the technology, and the human connection too.



The Vatican challenges AI’s god complex

As Rome prepared to select a new pope, few beyond Vatican insiders were focused on what the transition would mean for the Catholic Church's stance on artificial intelligence. 

Yet Pope Francis had established the Church as an erudite, insightful voice on AI ethics. "Does it serve to satisfy the needs of humanity to improve the well-being and integral development of people?" he asked G7 leaders last year. "Or does it, rather, serve to enrich and increase the already high power of the few technological giants despite the dangers to humanity?"

Francis – and the Vatican at large – had called for meaningful regulation in a world where few institutions dared challenge the tech giants.

During the last months of Francis’s papacy, Silicon Valley, aided by a pliant U.S. government, ramped up its drive to rapidly consolidate power.

OpenAI is expanding globally, tech CEOs are becoming a key component of presidential diplomatic missions, and federal U.S. lawmakers are attempting to effectively deregulate AI for the next decade. 

For those tracking the collision between technological and religious power, one question looms large: Will the Vatican continue to be one of the few global institutions willing to question Silicon Valley's vision of our collective future?

Watching the chimney on television during Pope Benedict’s election had captured my imagination as a child brought up in a secular, Jewish-inflected household. I longed to see that white smoke in person. The rumors in Rome last Thursday morning were that the matter wouldn’t be settled that day. So I was furious when I was stirred from my desk in the afternoon by the sound of pealing bells all over Rome. “Habemus papam!” I heard an old nonna call down to her husband in the courtyard.

As the bells of Rome tolled for a new pope last Thursday, I sprinted out onto the street and joined people streaming from all over the city in the direction of St. Peter’s. In recent years, the time between white smoke and the new pope’s arrival on the balcony was as little as forty-five minutes. People poured over bridges and up the Via della Conciliazione towards the famous square. Among the rabble I spotted a couple of friars darting through the crowd, making speedier progress than anyone, their white cassocks flapping in the wind. Together, the friars and I made it through the security checkpoints and out into the square just as a great roar went up.

The initial reaction to the announcement that Robert Francis Prevost would be the next pope, with the name Leo XIV, was subdued. Most people around me hadn’t heard of him — he wasn’t one of the favored cardinals, he wasn’t Italian, and we couldn’t even Google him, because there were so many people gathered that no one’s phones were working. A young boy managed to get on the phone to his mamma, and she related the information about Prevost to us via her son. Americano, she said. From Chicago.

A nun from an order in Tennessee piped up that she had met Prevost once. She told us that he was mild-mannered and kind, that he had lived in Peru, and that he was very internationally minded. “The point is, it’s a powerful American voice in the world, who isn’t Trump,” one American couple exclaimed to our little corner of the crowd.

It only took a few hours before Trump supporters, led by former altar boy Steve Bannon, realized this American pope wouldn’t be a MAGA pope. Leo XIV had posted on X in February, criticizing JD Vance, the Trump administration’s most prominent Catholic.

"I mean it's kind of jaw-dropping," Bannon told the BBC. "It is shocking to me that a guy could be selected to be the Pope that had had the Twitter feed and the statements he's had against American senior politicians."

Laura Loomer, a prominent far-right pro-Trump activist, aired her own misgivings on X: “He is anti-Trump, anti-MAGA, pro-open borders, and a total Marxist like Pope Francis.”

As I walked home with everybody else that night – with the friars, the nuns, the pilgrims, the Romans, the tourists caught up in the action – I found myself thinking about our "Captured" podcast series, which I've spent the past year working on. In our investigation of AI's growing influence, we documented how tech leaders have created something akin to a new religion, with its own prophets, disciples, and promised salvation.

Walking through Rome's ancient streets, the dichotomy struck me: here was the oldest continuous institution on earth selecting its leader, while Silicon Valley was rapidly establishing what amounts to a competing belief system. 

Would this new pope, taking the name of Leo — deliberately evoking Leo XIII who steered the church through the disruptions of the Industrial Revolution — stand against this present-day technological transformation that threatens to reshape what it means to be human?

I didn't have to wait long to find out. In his address to the College of Cardinals on Saturday, Pope Leo XIV said: "In our own day, the Church offers to everyone the treasury of her social teaching, in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice and labor."

Hours before the new pope was elected, I spoke with Molly Kinder, a fellow at the Brookings Institution and an expert in AI and labor policy. Her research on the Vatican, labor, and AI was published by Brookings following Pope Francis’s death.

She described how the Catholic Church has a deeply held belief in the dignity of work — and how AI evangelists’ promise of a post-work society built on artificial intelligence is at odds with it.

“Pope John Paul II wrote something that I found really fascinating. He said, ‘work makes us more human.’ And Silicon Valley is basically racing to create a technology that will replace humans at work,” Kinder, who was raised Catholic, told me. “What they're endeavoring to do is disrupt some of the very core tenets of how we've interpreted God's mission for what makes us human.”

A version of this story was published in this week’s Coda Currents newsletter.


Pope Francis’s final warning

Whoever becomes the next Pope will inherit not just the leadership of the Catholic Church but a remarkably sophisticated approach to technology — one that in many ways outpaces governments worldwide. While Silicon Valley preaches Artificial Intelligence as a quasi-religious force capable of saving humanity, the Vatican has been developing theological arguments to push back against this narrative.


In the hours after Pope Francis died on Easter Monday, I went, like thousands of others in Rome, straight to St Peter's Square to witness the city in mourning as the basilica's somber bell tolled. 

Just three days before, on Good Friday, worshippers in the Eternal City processed by candlelight through the ruins of the Colosseum as some of the Pope's final meditations were read to them. "When technology tempts us to feel all powerful, remind us," the leader of the service called out. "We are clay in your hands," the crowd responded in unison.

As our world becomes ever more governed by tech, the Pope's meditations are a reminder of our flawed, common humanity. We have built, he warned, "a world of calculation and algorithms, of cold logic and implacable interests." These turned out to be his last public words on technology. Right until the end, he called on his followers to think hard about how we're being captured by the technology around us. "How I would like for us to look less at screens and look each other in the eyes more!" 

Faith vs. the new religion 

Unlike politicians who often struggle to grasp AI's technical complexity, the Vatican has leveraged its centuries of experience with faith, symbols, and power to recognize AI for what it increasingly represents: not just a tool, but a competing belief system with its own prophets, promises of salvation, and demands for devotion.

In February 2020, the Vatican's Pontifical Academy for Life published the Rome Call for AI Ethics, arguing that "AI systems must be conceived, designed and implemented to serve and protect human beings and the environment in which they live." And in January of this year, the Vatican released a document called Antiqua et Nova – one of its most comprehensive statements to date on AI – that warned we're in danger of worshipping AI as a God, or as an idol.

Our investigation into Silicon Valley's cult-like movement 

I first became interested in the Vatican's perspective on AI while working on our Audible podcast series "Captured" with Cambridge Analytica whistleblower Christopher Wylie. In our year-long investigation, we discovered how Silicon Valley's AI pioneers have adopted quasi-religious language to describe their products and ambitions — with some tech leaders explicitly positioning themselves as prophets creating a new god.

In our reporting, we documented tech leaders like Bryan Johnson speaking literally about "creating God in the form of superintelligence," billionaire investors discussing how to "live forever" through AI, and founders talking about building all-knowing, all-powerful machines that will free us from suffering and propel us into utopia. One founder told us their goal was to install "all human knowledge into every human" through brain-computer interfaces — in other words, make us all omniscient.

Nobel laureate Maria Ressa, whom I spoke with recently, told me she had warned Pope Francis about the dangers of algorithms designed to promote lies and disinformation. "Francis understood the impact of lies," she said. She explained to the Pope how Facebook had destroyed the political landscape in the Philippines, where the platform’s engagement algorithms allowed disinformation to spread like wildfire. "I said — 'this is literally an incentive structure that is rewarding lies.'"

According to Ressa, AI evangelists in Silicon Valley are acquiring "the power of gods without the wisdom of God." It is power, she said, "that is in the hands of men whose arrogance prevents them from seeing the impact of rolling out technology that's not safe for their kids."

The battle for humanity's future 

The Vatican has always understood how to use technology, engineering and spectacle to harness devotion and wield power — you only have to walk into St Peter’s Basilica to understand that. I spoke to a Vatican priest, on his way to Rome to pay his respects to the Pope. He told me why the Vatican understands the growing power of artificial intelligence so well. "We know perfectly well," he said, "that certain structures can become divinities. In the end, technology should be a tool for living — it should not be the end of man."

A version of this story was published in this week’s Coda Currents newsletter.


Captured: how Silicon Valley is building a future we never chose

In April last year I was in Perugia, at the annual international journalism festival. I was sitting in a panel session about whether AI marked the end of journalism, when a voice note popped up on my Signal. 

It came from Christopher Wylie. He’s a data scientist and the whistleblower who cracked open the Cambridge Analytica scandal in 2018. I had just started working with him on a new investigation into AI. Chris was supposed to be meeting me, but he had found himself trapped in Dubai at a party full of Silicon Valley venture capitalists.

“I don’t know if you can hear me — I’m in the toilet at this event, and people here are talking about longevity, how to live forever, but also prepping for when people revolt and when society gets completely undermined,” he had whispered into his phone. “You have in another part of the world, a bunch of journalists talking about how to save democracy. And here, you've got a bunch of tech guys thinking about how to live past democracy and survive.”

A massive storm and a once-in-a-generation flood had paralyzed Dubai when Chris was on a layover on his way to Perugia. He couldn’t leave. And neither could the hundreds of tech guys who were there for a crypto summit. The freakish weather hadn’t stopped them partying, Chris told me over a frantic Zoom call. 

“You're wading through knee-deep water, people are screaming everywhere, and then…  What do all these bros do? They organize a party. It's like the world is collapsing outside and yet you go inside and it's billionaires and centimillionaires having a party,” he said. “Dubai right now is a microcosm of the world. The world is collapsing outside and the people are partying.”

Chris and I eventually managed to meet up. And for over a year we worked together on a podcast that asks what is really going on inside the tech world.  We looked at how the rest of us —  journalists, artists, nurses, businesses, even governments — are being captured by big tech’s ambitions for the future and how we can fight back. 

Our reporting took us around the world from the lofty hills of Twin Peaks in San Francisco to meet the people building AI models, to the informal settlements of Kenya to meet the workers training those models.

One of these people was Mercy Chimwani, who we visited in her makeshift house with no roof on the outskirts of Nairobi. There was mud beneath our feet, and above you could see the rainclouds through a gaping hole where the unfinished stairs met the sky. When it rained, Mercy told us, water ran right through the house. It’s hard to believe, but she worked for Meta. 

Mercy was a content moderator, hired by the middlemen Meta used to source employees. Her job was to watch the internet’s most horrific images and video –  training the company’s system so it can automatically filter out such content before the rest of us are exposed to it. 

She was paid around a dollar an hour for work that left her so traumatized that she couldn’t sleep. And when she and her colleagues tried to unionize, she was laid off. Mercy was part of the invisible, ignored workforce in the Global South that enables our frictionless life online for little reward. 

Of course, we went to the big houses too — where the other type of tech worker lives. The huge palaces made of glass and steel in San Francisco, where the inhabitants believe the AI they are building will one day help them live forever, and discover everything there is to know about the universe. 

In Twin Peaks, we spoke to Jeremy Nixon, the creator of AGI House San Francisco (AGI for Artificial General Intelligence). Nixon described an apparently utopian future, a place where we never have to work, where AI does everything for us, and where we can install the sum of human knowledge into our brains. “The intention is to allow every human to know everything that’s known,” he told me. 


Later that day, we went to a barbecue in Cupertino and got talking to Alan Boehme, once a chief technology officer for some of the biggest companies in the world, and now an investor in AI startups. Boehme told us how important it was, from his point of view, that tech wasn’t stymied by government regulation. “We have to be worried that people are going to over-regulate it. Europe is the worst, to be honest with you,” he said. “Let's look at how we can benefit society and how this can help lead the world as opposed to trying to hold it back.”

I asked him whether regulation wasn’t part of the reason we have democratically elected governments, to ensure that all people are kept safe and that no one is left behind by the pace of change. Shouldn’t the governments we elect be the ones deciding whether we regulate AI, and not the people at this Cupertino barbecue?

“You sound like you're from Sweden,” Boehme responded. “I'm sorry, that's social democracy. That is not what we are here in the U.S. This country is based on a Constitution. We're not based on everybody being equal and holding people back. No, we're not in Sweden.”

As we reported for the podcast, we came to a gradual realization – what’s being built in Silicon Valley isn’t just artificial intelligence, it’s a way of life — even a religion. And it’s a religion we might not have any choice but to join. 

In January, the Vatican released a statement in which it argued that we’re in danger of worshiping AI as God. It's an idea we'd discussed with Judy Estrin, who worked on building some of the earliest iterations of the internet. As a young researcher at Stanford in the 1970s, Estrin was building some of the very first networked connections. She is no technophobe, fearful of the future, but she is worried about the zealotry she says is taking over Silicon Valley.

“If you worship innovation, if you worship anything, you can't take a step back and think about guardrails,” she said about the unquestioning embrace of AI. “So we, from a leadership perspective, are very vulnerable to techno populists who come out and assert that this is the only way to make something happen.” 

The first step toward reclaiming our lost agency, as AI aims to capture every facet of our world, is simply to pay attention. I've been struck by how rarely we actually listen to what tech leaders are explicitly saying about their vision of the future. 

There's a tendency to dismiss their most extreme statements as hyperbole or marketing, but what if they're being honest? What if they truly believe humans, or at least most humans, are replaceable, that traditional concepts of humanity are outdated, that a technological "god" should supersede us? These aren't just ideological positions – they're the foundations for the world being built around us right now. 

In our series, we explore artificial intelligence as something that affects our culture, our jobs, our media and our politics. But we should also ask what tech founders and engineers are really building with AI, or what they think they’re building. Because if their vision of society does not have a place for us in it, we should be ready to reclaim our destiny – before our collective future is captured.

Our audio documentary series, CAPTURED: The Secret Behind Silicon Valley’s AI Takeover is available now on Audible. Do please tune in, and you can dig deeper into our stories and the people we met during the reporting below.


This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us? You can listen to the Captured audio series on Audible now.


Who owns the rights to your brain?

Jared Genser and Rafael Yuste are an unlikely pair. Yuste, a professor at Columbia University, spends his days in neuroscience labs, using lasers to experiment on the brains of mice. Genser has traveled the world as an international human rights lawyer representing prisoners in 30 countries. But when they met, the two became fast friends. They found common ground in their fascination with neurorights – in “human rights,” as their foundation’s website puts it, “for the Age of Neurotechnology.” 

Together, they asked themselves — and the world — what happens when computers start to read our minds? Who owns our thoughts, anyway? This technology is being developed right now — but as of this moment, what happens to your neural data is a legal black box. So what does the fight to build protections for our brains look like? I sat down with Rafael and Jared to find out.

This conversation has been edited for length and clarity.

Q: Rafael, can you tell me how your journey into neurorights started?

Rafael: The story starts with a particular moment in my career. It happened about ten years ago while I was working in a lab at Columbia University in New York. Our research was focused on understanding how the cerebral cortex works. We were studying mice, because the mouse  brain is a good model for the human brain. And what we were trying to do was to implant images into the brains of mice so that they would behave as if they were seeing something, except they weren't seeing anything.

Q: How did that work? 

Rafael: We were trying to take control of the mouse’s visual perception. So we’d implant neurotechnology into a mouse using lasers, which would allow us to record the activity of the part of the brain responsible for vision, the visual cortex, and change the activity of those neurons. With our lasers, we could map the activity of this part of the brain and try to control it. 

These mice were looking at a screen that showed them a particular image, of black and white bars of light that have very high contrast. We used to talk, tongue-in-cheek, about playing the piano with the brain. 

We trained the mice to lick from a little spout of juice whenever they saw that image. With our new technology, we were able to decode the brain signals that corresponded to this image in the mouse and — we hoped — play them back to trick the mice into seeing the image again, even though it wasn’t there.

Q: So you artificially activated particular neurons in the brain to make it think it had seen that image?

Rafael: These are little laboratory mice. We make a surgical incision and we implant in their skull a transparent chamber so that we can see their brains from above with our microscope, with our lasers. And we use our lasers to optically penetrate the brain. We use one laser to image, to map the activity of these neurons. And we use a second laser, a second wavelength, to activate these neurons again. All of this is done with a very sophisticated microscope and computer equipment. 

Q: So what happened when you tried to artificially activate the mouse’s neurons, to make it think it was looking at the picture of the black and white bars? 

Rafael: When we did that, the mouse licked from the spout of juice in exactly the same way as if it were looking at this image, except that it wasn't. We were putting that image into its brain. The mice's behavior when we took over their visual perception was identical to when they were actually seeing the real image.

Q: It must have been a huge breakthrough.

Rafael: Yes, I remember it perfectly. It was one of the most salient days of my life. We were actually altering the behavior of the mice by playing the piano with their cortex. We were ecstatic. I was super happy in the lab, making plans.

And then when I got home, that's when it hit me. I said, “Wait, wait, wait, this means humans will be able to do the same thing to other humans.”

I felt this responsibility, like it was a double-edged sword. That night I didn't sleep, I was shocked. I talked to my wife, who works in human rights. And I decided that I should start to get involved in cleaning up the mess.

Q: What do you mean by that?

Rafael: I felt the responsibility of ensuring that these powerful methods that could decode brain activity and manipulate perception had to be regulated to ensure that they were used for the benefit of humanity.

Q: Jared, can you tell me how you came into this? 

Jared: Rafael and I met about four years ago. I'm an international human rights lawyer based in Washington and very well known globally for working in that field. I had a single hour-long conversation with Rafa when we met, and it completely transformed my view of the human rights challenges we’ll face in this century. I had no idea about neurotechnologies, where they were, or where they might be heading. Learning how far along they have come and what’s coming in just the next few years — I was blown away. I was both excited and concerned as a human rights lawyer about the implications for our common humanity.

Subscribe to our Coda Currents newsletter

Weekly insights from our global newsroom. Our flagship newsletter connects the dots between viral disinformation, systemic inequity, and the abuse of technology and power. We help you see how local crises are shaped by global forces.

Q: What was your reaction when you heard of the mouse experiment?

Jared: Immediately, I thought of The Matrix. He told me that what can be done in a mouse today could be done in a chimpanzee tomorrow and a human after that. I was shocked by the possibilities. While implanting images into a human brain is still far off, there’s every reason to expect it will eventually be possible.

Q: Can you talk me through some of the other implications of this technology? 

Jared: Within the next few years, we’re expected to have wearable brain-computer interfaces that can decode thought to text at 75–80 words per minute with 90 percent accuracy.

That will be an extraordinary revolution in how we interact with technology. Apple is already thinking about this—they filed a patent last year for the next-generation AirPods with built-in EEG scanners. This is undoubtedly one of the applications they are considering.

In just a few years, if you have an iPhone in your pocket and are wearing earbuds, you could think about opening a text message, dictating it, and sending it—all without touching a device. These developments are exciting. 

Rafael: I imagine that we'll be hybrid, and part of our processing will happen with devices connected to our brains, to our nervous system. This could enhance our perception, our memories — you would be able to do the mental equivalent of a web search. And that's going to change our behavior. That's going to change the way we absorb information.

Jared: Ultimately, there's every reason to expect we’ll be able to cure chronic pain. It’s already being shown in labs that an implantable brain-computer interface can manage pain for people with chronic pain conditions. By turning off misfiring neurons, you can reduce the pain they feel.

But if you can turn off the neurons, you can turn on the neurons. And that would mean you'll have a wearable cap or hat that could torture a person simply by flipping a switch. In just a few years, physical torture may no longer be necessary because of brain-computer interfaces. 

And if these devices can decode your thoughts, that raises serious concerns. What will the companies behind these technologies be able to do with your thoughts? Could they be decoded against your wishes and used for purposes beyond what the devices are advertised for? Those are critical questions we need to address.

Q: How did you start thinking about ways to build rights and guardrails around neurotechnology?

Rafael: I was inspired by the Manhattan Project, where scientists who developed nuclear technology were also involved in regulating its use. That led me to think that we should take a similar approach with neurotechnology — where the power to read and manipulate brain activity needs to be regulated. And that’s how we came up with the idea of the Neurorights Foundation.

So in 2017, I organized a meeting of experts from various fields at Columbia University’s Morningside campus to discuss the ethical and societal implications of neurotechnology. And this is where we came up with the idea of neurorights — rights that would protect the brain and brain data.

Jared:  If you look at global consumer data privacy laws, they protect things like biometric, genetic, and biological information. But neural data doesn't fall under any of these categories. Neural data is electrical and not biological, so it isn't considered biometric data.

There are few, if any, safeguards to protect users from having their neural data used for purposes beyond the intended function of the devices they’ve purchased.

So because neural data doesn't fit within existing privacy protections, it isn't covered by state privacy laws. To address this, we worked with Colorado to adopt the first-ever amendment to its Privacy Act, which defines neural data and includes it under sensitive, protected data.

Rafael: We identified five areas of concern where neurotechnology could impact human rights:

The first is the right to mental privacy – ensuring that the content of our brain activity can't be decoded without consent.

The second is the right to our own mental integrity so that no one can change a person's identity or consciousness.

The third is the right to free will – so that our behavior is determined by our own volition, not by external influences, to prevent situations like what we did to those mice.

The fourth is the right to equal access to neural augmentation.  Technology and AI will lead to human augmentation of our mental processes, our memory, our perception, our capabilities. And we think there should be fair and equal access to neural augmentation in the future.

And the fifth neuroright is protection from bias and discrimination – safeguarding against interference in mental activity, as neurotechnology could both read and alter brain data, and change the content of people's mental activity.

Jared: The Neurorights Foundation is focused on promoting innovation in neurotechnologies while managing the risks of misuse or abuse. We see enormous potential in neurotechnologies that could transform what it means to be human. At the same time, we want to ensure that proper guardrails are in place to protect people's fundamental human rights.



In Kenya’s slums, they’re doing our digital dirty work

This article is an adapted extract from CAPTURED, our new podcast series with Audible about the secret behind Silicon Valley’s AI Takeover.

We’re moving slowly through the traffic in the heart of the Kenyan capital, Nairobi. Gleaming office blocks have sprung up in the past few years, looming over the townhouses and shopping malls. We’re with a young man named James Oyange — but everyone who knows him calls him Mojez. He’s peering out the window of our 4x4, staring up at the high-rise building where he used to work. 

Mojez first walked into that building three years ago, as a twenty-five-year-old, thinking he would be working in a customer service role at a call center. As the car crawled along, I asked him what he would say to that young man now. He told me he’d tell his younger self something very simple:

“The world is an evil place, and nobody's coming to save you.”

It wasn't until Mojez started work that he realised what his job really required him to do. And the toll it would take.


It turned out, Mojez's job wasn't in customer service. It wasn't even in a call center. His job was to be a “Content Moderator,” working for social media giants via an outsourcing company. He had to read and watch the most hateful, violent, grotesque content released on the internet and get it taken down so the rest of us didn’t have to see it. And the experience changed the way he thought about the world. 

“You tend to look at people differently,” he said, talking about how he would go down the street and think of the people he had seen in the videos — and wonder if passersby could do the same things, behave in the same ways. “Can you be the person who, you know, defiled this baby? Or I might be sitting down with somebody who has just come from abusing their wife, you know.”

There was a time – and it wasn’t that long ago – when things like child pornography and neo-Nazi propaganda were relegated to the darkest corners of the internet. But with the rise of algorithms that can spread this kind of content to anyone who might click on it, social media companies have scrambled to amass an army of hidden workers to clean up the mess.

These workers are kept hidden for a reason. They say if slaughterhouses had glass walls, the world would stop eating meat. And if tech companies were to reveal what they make these digital workers do, day in and day out, perhaps the world would stop using their platforms.

This isn't just about “filtering content.” It's about the human infrastructure that makes our frictionless digital world possible – the workers who bear witness to humanity's darkest impulses so that the rest of us don't have to.

Mojez is fed up with being invisible. He's trying to organise a union of digital workers to fight for better treatment by the tech companies. “Development should not mean servitude,” he said. “And innovation should not mean exploitation, right?” 

We are now in the outskirts of Nairobi, where Mojez has brought us to meet his friend, Mercy Chimwani. She lives on the ground floor of the half-built house that she rents. There's mud beneath our feet, and above you can see the rain clouds through a gaping hole where the unfinished stairs meet the sky. There’s no electricity, and when it rains, water runs right through the house. Mercy shares a room with her two girls, her mother, and her sister. 

It’s hard to believe, but this informal settlement without a roof is the home of someone who used to work for Meta. 

Mercy is part of the hidden human supply chain that trains AI. She was hired by what’s called a BPO, or a Business Process Outsourcing company, a middleman that finds cheap labour for large Western corporations. Often people like Mercy don’t even know who they’re really working for. But for her, the prospect of a regular wage was a step up, though her salary – $180 a month, or about a dollar an hour – was low, even by Kenyan standards. 

She started out working for an AI company – she did not know the name – training software to be used in self-driving cars. She had to annotate what’s called a “driveable space” – drawing around stop signs and pedestrians, teaching the cars’ artificial intelligence to recognize hazards on its own. 

And then, she switched to working for a different client: Meta. 

“On the first day on the job it was hectic. Like, I was telling myself, like, I wish I didn't go for it, because the first image I got to see, it was a graphic image.” The video, Mercy told me, is imprinted on her memory forever. It was a person being stabbed to death. 

“You could see people committing suicide live. I also saw a video of a very young kid being raped live. And you are here, you have to watch this content. You have kids, you are thinking about them, and here you are at work. You have to like, deal with that content. You have to remove it from the platform. So you can imagine all that piling up within one person. How hard it is,” Mercy said. 

Silicon Valley likes to position itself as the pinnacle of innovation. But what they hide is this incredibly analogue, brute force process where armies of click workers relentlessly correct and train the models to learn. It’s the sausage factory that makes the AI sausage. Every major tech company does this – TikTok, Facebook, Google and OpenAI, the makers of ChatGPT. 


Mercy was saving to move to a house that had a proper roof. She wanted to put her daughters into a better school. So she felt she had to carry on earning her wage. And then she realised that nearly everyone she worked with was in the same situation as her. They all came from the very poorest neighborhoods in Nairobi. “I realised, like, yo, they're really taking advantage of people who are from the slums,” she said.

After we left Mercy’s house, Mojez took us to the Kibera informal settlement. “Kibera is the largest urban slum area in Africa, and the third largest slum in the entire world,” he told us as we drove carefully through the twisting, crooked streets. There were people everywhere – kids practicing a dance routine, whole families piled onto motorbikes. There were stall holders selling vegetables and live chickens, toys and wooden furniture. Most of the houses had corrugated iron roofs and no running water indoors.

Kibera is where the model of recruiting people from the poorest areas to do tech work was really born. A San Francisco-based organization called Sama started training and hiring young people here to become digital workers for Big Tech clients including Meta and OpenAI.

Sama claimed that they offered a way for young Kenyans to be a part of Silicon Valley’s success. Technology, they argued, had the potential to be a profound equalizer, to create opportunities where none existed.

Mojez has brought us into the heart of Kibera to meet his friend Felix. A few years ago, Felix heard about the Sama training school – back then it was called Samasource. He heard how they were teaching people to do digital work, and that there were jobs on offer. So, like hundreds of others, Felix signed up.

“This is Africa,” he said, as we sat down in his home. “Everyone is struggling to find a job.” He nodded his head out towards the street. “If right now you go out here, uh, out of 10, seven or eight people have worked with Samasource.” He was referring to people his age – Gen Z and young millennials – who were recruited by Sama with the promise that they would be lifted out of poverty.

And for a while, Felix’s life was transformed. He was the main breadwinner for his family, for his mother and two kids, and at last he was earning a regular salary.

Kibera is Africa's largest urban slum. Hundreds of young people living here were recruited to work on projects for Big Tech. Photos: Becky Lipscombe; Simone Boccaccio/SOPA Images/LightRocket via Getty Images.

But in the end, Felix was left traumatized by the work he did. He was laid off. And now he feels used and abandoned. “There are so many promises. You’re told that your life is going to be changed, that you’re going to be given so many opportunities. But I wouldn't say it's helping anyone, it's just taking advantage of people,” he said.

When we reached out to Sama, a PR representative disputed the notion that Sama was taking advantage and cashing in on Silicon Valley’s headlong rush towards AI. 

Mental health support, the PR insisted, had been provided, and the majority of Sama’s staff were happy with the conditions. “Sama,” she said, “has a 16-year track record of delivering meaningful work in Sub-Saharan Africa, lifting nearly 70,000 people out of poverty.” Sama eventually cancelled its contracts with Meta and OpenAI, and says it no longer recruits content moderators. When we spoke to OpenAI, which has hired people in Kenya to train its models, they said that they believed data annotation work needed to be done humanely. The efforts of the Kenyan workers were, they said, “immensely valuable.”

You can read Sama’s and OpenAI’s responses to our questions in full below. Meta did not respond to our requests for comment.

Despite their defense of their record, Sama is facing legal action in Kenya. 

“I think when you give people work for a period of time and those people can't work again because their mental health is destroyed, that doesn't look like lifting people out of poverty to me,” said Mercy Mutemi, a lawyer representing more than 180 content moderators in a lawsuit against Sama and Meta. The workers say they were unfairly laid off when they tried to lobby for better conditions, and then blacklisted.

“You've used them,” Mutemi said. “They're in a very compromised mental health state, and then you've dumped them. So how did you help them?” 

As Mutemi sees it, the result of recruiting from the slum areas is that you have a workforce of disadvantaged people, who’ll be less likely to complain about conditions.

“People who've gone through hardship, people who are desperate, are less likely to make noise at the workplace because then you get to tell them, ‘I will return you to your poverty.’ What we see is again, like a new form of colonization where it's just extraction of resources, and not enough coming back in terms of value whether it's investing in people, investing in their well-being, or just paying decent salaries, investing in skill transfer and helping the economy grow. That's not happening.” 

“This is the next frontier of technology,” she added, “and you're building big tech on the backs of broken African youth.”

At the end of our week in Kenya, Mojez takes us to Karura Forest, the green heart of Nairobi. It’s an oasis of calm, where birds, butterflies and monkeys live among the trees, and the rich red earth has that amazing, just-rained-on smell. He comes here to decompress, and to try to forget about all the horrific things he’s seen while working as a content moderator.

Mojez describes the job he did as a digital worker as a loss of innocence. “It made me think about, you know, life itself, right? And that we are alone and nobody's coming to save us. So nowadays I've gone back to how my ancestors used to do their worship — how they used to give back to nature.” We're making our way towards a waterfall. “There's something about the water hitting the stones and just gliding down the river that is therapeutic.”

For Mojez, one of the most frightening things about the work he was doing was the way that it numbed him, accustomed him to horror. Watching endless videos of people being abused, beheaded, or tortured – while trying to hit performance targets every hour – made him switch off his humanity, he said.

A hundred years from now, will we remember the workers who trained humanity’s first generation of AI? Or will these 21st-century monuments to human achievement bear only the names of the people who profited from their creation?

Artificial intelligence may well go down in history as one of humanity’s greatest triumphs.  Future generations may look back at this moment as the time we truly entered the future.

And just as ancient monuments like the Colosseum endure as a lasting embodiment of the values of their age, AI will embody the values of our time too.  

So, we face a question: what legacy do we want to leave for future generations? We can't redesign systems we refuse to see. We have to acknowledge the reality of the harm we are allowing to happen. But every story – like those of Mojez, Mercy and Felix – is an invitation. Not to despair, but to imagine something better for all of us rather than the select few.

Christopher Wylie and Becky Lipscombe contributed reporting. Our new audio series on how Silicon Valley’s AI prophets are choosing our future for us is out now on Audible.


