As Rome prepared to select a new pope, few beyond Vatican insiders were focused on what the transition would mean for the Catholic Church's stance on artificial intelligence.
Yet Pope Francis has established the Church as an erudite, insightful voice on AI ethics. “Does it serve to satisfy the needs of humanity to improve the well-being and integral development of people?” he asked G7 leaders last year. “Or does it, rather, serve to enrich and increase the already high power of the few technological giants despite the dangers to humanity?”
Francis – and the Vatican at large – had called for meaningful regulation in a world where few institutions dared challenge the tech giants.
During the final months of Francis’s papacy, Silicon Valley, aided by a pliant U.S. government, ramped up its drive to rapidly consolidate power.
OpenAI is expanding globally, tech CEOs are becoming a key component of presidential diplomatic missions, and federal U.S. lawmakers are attempting to effectively deregulate AI for the next decade.
For those tracking the collision between technological and religious power, one question looms large: Will the Vatican continue to be one of the few global institutions willing to question Silicon Valley's vision of our collective future?
Memories of watching the chimney on television during Pope Benedict’s election had captured my imagination as a child brought up in a secular, Jewish-inflected household. I longed to see that white smoke in person. The rumors in Rome last Thursday morning were that the matter wouldn’t be settled that day. So I was furious when I was stirred from my desk in the afternoon by the sound of pealing bells all over Rome. “Habemus papam!” I heard an old nonna call down to her husband in the courtyard.
I sprinted out onto the street and joined people streaming from all over the city in the direction of St. Peter’s. In recent years, the time between white smoke and the new pope’s arrival on the balcony has been as little as forty-five minutes. People poured over bridges and up the Via della Conciliazione towards the famous square. Among the rabble I spotted a couple of friars darting through the crowd, making speedier progress than anyone, their white cassocks flapping in the wind. Together, the friars and I made it through the security checkpoints and out into the square just as a great roar went up.
The initial reaction to the announcement that Robert Francis Prevost would be the next pope, with the name Leo XIV, was subdued. Most people around me hadn’t heard of him — he wasn’t one of the favored cardinals, he wasn’t Italian, and we couldn’t even Google him, because there were so many people gathered that no one’s phones were working. A young boy managed to get on the phone to his mamma, and she related the information about Prevost to us via her son. Americano, she said. From Chicago.
A nun from an order in Tennessee piped up that she had met Prevost once. She told us that he was mild-mannered and kind, that he had lived in Peru, and that he was very internationally-minded. “The point is, it’s a powerful American voice in the world, who isn’t Trump,” one American couple exclaimed to our little corner of the crowd.
It only took a few hours before Trump supporters, led by former altar boy Steve Bannon, realized this American pope wouldn’t be a MAGA pope. In February, before his election, the future Leo XIV had posted on X criticizing JD Vance, the Trump administration’s most prominent Catholic.
"I mean it's kind of jaw-dropping," Bannon told the BBC. "It is shocking to me that a guy could be selected to be the Pope that had had the Twitter feed and the statements he's had against American senior politicians."
Laura Loomer, a prominent far-right pro-Trump activist, aired her own misgivings on X: “He is anti-Trump, anti-MAGA, pro-open borders, and a total Marxist like Pope Francis.”
As I walked home with everybody else that night – with the friars, the nuns, the pilgrims, the Romans, the tourists caught up in the action – I found myself thinking about our "Captured" podcast series, which I've spent the past year working on. In our investigation of AI's growing influence, we documented how tech leaders have created something akin to a new religion, with its own prophets, disciples, and promised salvation.
Walking through Rome's ancient streets, the dichotomy struck me: here was the oldest continuous institution on earth selecting its leader, while Silicon Valley was rapidly establishing what amounts to a competing belief system.
Would this new pope, taking the name of Leo — deliberately evoking Leo XIII who steered the church through the disruptions of the Industrial Revolution — stand against this present-day technological transformation that threatens to reshape what it means to be human?
I didn't have to wait long to find out. In his address to the College of Cardinals on Saturday, Pope Leo XIV said: "In our own day, the Church offers to everyone the treasury of her social teaching, in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice and labor."
Hours before the new pope was elected, I spoke with Molly Kinder, a fellow at the Brookings Institution who’s an expert in AI and labor policy. Her research on the Vatican, labor, and AI was published by Brookings following Pope Francis’s death.
She described how the Catholic Church has a deeply held belief in the dignity of work — and how AI evangelists’ promise of a post-work society is at odds with that.
“Pope John Paul II wrote something that I found really fascinating. He said, ‘work makes us more human.’ And Silicon Valley is basically racing to create a technology that will replace humans at work,” Kinder, who was raised Catholic, told me. “What they're endeavoring to do is disrupt some of the very core tenets of how we've interpreted God's mission for what makes us human.”
A version of this story was published in this week’s Coda Currents newsletter. Sign up here.
Whoever becomes the next Pope will inherit not just the leadership of the Catholic Church but a remarkably sophisticated approach to technology — one that in many ways outpaces governments worldwide. While Silicon Valley preaches Artificial Intelligence as a quasi-religious force capable of saving humanity, the Vatican has been developing theological arguments to push back against this narrative.
In the hours after Pope Francis died on Easter Monday, I went, like thousands of others in Rome, straight to St Peter's Square to witness the city in mourning as the basilica's somber bell tolled.
Just three days before, on Good Friday, worshippers in the eternal city proceeded, by candlelight, through the ruins of the Colosseum, as some of the Pope's final meditations were read to them. "When technology tempts us to feel all powerful, remind us," the leader of the service called out. "We are clay in your hands," the crowd responded in unison.
As our world becomes ever more governed by tech, the Pope's meditations are a reminder of our flawed, common humanity. We have built, he warned, "a world of calculation and algorithms, of cold logic and implacable interests." These turned out to be his last public words on technology. Right until the end, he called on his followers to think hard about how we're being captured by the technology around us. "How I would like for us to look less at screens and look each other in the eyes more!"
Faith vs. the new religion
Unlike politicians who often struggle to grasp AI's technical complexity, the Vatican has leveraged its centuries of experience with faith, symbols, and power to recognize AI for what it increasingly represents: not just a tool, but a competing belief system with its own prophets, promises of salvation, and demands for devotion.
In February 2020, the Vatican's Pontifical Academy for Life published the Rome Call for AI Ethics, arguing that "AI systems must be conceived, designed and implemented to serve and protect human beings and the environment in which they live." And in January of this year, the Vatican released a document called Antiqua et Nova – one of its most comprehensive statements to date on AI – that warned we're in danger of worshipping AI as a God, or as an idol.
Our investigation into Silicon Valley's cult-like movement
I first became interested in the Vatican's perspective on AI while working on our Audible podcast series "Captured" with Cambridge Analytica whistleblower Christopher Wylie. In our year-long investigation, we discovered how Silicon Valley's AI pioneers have adopted quasi-religious language to describe their products and ambitions — with some tech leaders explicitly positioning themselves as prophets creating a new god.
In our reporting, we documented tech leaders like Bryan Johnson speaking literally about "creating God in the form of superintelligence," billionaire investors discussing how to "live forever" through AI, and founders talking about building all-knowing, all-powerful machines that will free us from suffering and propel us into utopia. One founder told us their goal was to install "all human knowledge into every human" through brain-computer interfaces — in other words, make us all omniscient.
Nobel laureate Maria Ressa, whom I spoke with recently, told me she had warned Pope Francis about the dangers of algorithms designed to promote lies and disinformation. "Francis understood the impact of lies," she said. She explained to the Pope how Facebook had destroyed the political landscape in the Philippines, where the platform’s engagement algorithms allowed disinformation to spread like wildfire. "I said — 'this is literally an incentive structure that is rewarding lies.'"
According to Ressa, AI evangelists in Silicon Valley are acquiring "the power of gods without the wisdom of God." It is power, she said, "that is in the hands of men whose arrogance prevents them from seeing the impact of rolling out technology that's not safe for their kids."
The battle for humanity's future
The Vatican has always understood how to use technology, engineering and spectacle to harness devotion and wield power — you only have to walk into St Peter’s Basilica to understand that. I spoke to a Vatican priest, on his way to Rome to pay his respects to the Pope. He told me why the Vatican understands the growing power of artificial intelligence so well. "We know perfectly well," he said, "that certain structures can become divinities. In the end, technology should be a tool for living — it should not be the end of man."
A version of this story was published in this week’s Coda Currents newsletter. Sign up here.
In April last year I was in Perugia, at the annual international journalism festival. I was sitting in a panel session about whether AI marked the end of journalism, when a voice note popped up on my Signal.
It came from Christopher Wylie. He’s a data scientist and the whistleblower who cracked open the Cambridge Analytica scandal in 2018. I had just started working with him on a new investigation into AI. Chris was supposed to be meeting me, but he had found himself trapped in Dubai in a party full of Silicon Valley venture capitalists.
“I don’t know if you can hear me — I’m in the toilet at this event, and people here are talking about longevity, how to live forever, but also prepping for when people revolt and when society gets completely undermined,” he had whispered into his phone. “You have in another part of the world, a bunch of journalists talking about how to save democracy. And here, you've got a bunch of tech guys thinking about how to live past democracy and survive.”
A massive storm and a once-in-a-generation flood had paralyzed Dubai when Chris was on a layover on his way to Perugia. He couldn’t leave. And neither could the hundreds of tech guys who were there for a crypto summit. The freakish weather hadn’t stopped them partying, Chris told me over a frantic Zoom call.
“You're wading through knee-deep water, people are screaming everywhere, and then… What do all these bros do? They organize a party. It's like the world is collapsing outside and yet you go inside and it's billionaires and centimillionaires having a party,” he said. “Dubai right now is a microcosm of the world. The world is collapsing outside and the people are partying.”
Chris and I eventually managed to meet up. And for over a year we worked together on a podcast that asks what is really going on inside the tech world. We looked at how the rest of us — journalists, artists, nurses, businesses, even governments — are being captured by big tech’s ambitions for the future and how we can fight back.
Our reporting took us around the world from the lofty hills of Twin Peaks in San Francisco to meet the people building AI models, to the informal settlements of Kenya to meet the workers training those models.
One of these people was Mercy Chimwani, who we visited in her makeshift house with no roof on the outskirts of Nairobi. There was mud beneath our feet, and above you could see the rainclouds through a gaping hole where the unfinished stairs met the sky. When it rained, Mercy told us, water ran right through the house. It’s hard to believe, but she worked for Meta.
Mercy was a content moderator, hired by the middlemen Meta used to source employees. Her job was to watch the internet’s most horrific images and video – training the company’s system so it can automatically filter out such content before the rest of us are exposed to it.
She was paid around a dollar an hour for work that left her so traumatized that she couldn’t sleep. And when she and her colleagues tried to unionize, she was laid off. Mercy was part of the invisible, ignored workforce in the Global South that enables our frictionless life online for little reward.
Of course, we went to the big houses too — where the other type of tech worker lives. The huge palaces made of glass and steel in San Francisco, where the inhabitants believe the AI they are building will one day help them live forever, and discover everything there is to know about the universe.
In Twin Peaks, we spoke to Jeremy Nixon, the creator of AGI House San Francisco (AGI for Artificial General Intelligence). Nixon described an apparently utopian future, a place where we never have to work, where AI does everything for us, and where we can install the sum of human knowledge into our brains. “The intention is to allow every human to know everything that’s known,” he told me.
Later that day, we went to a barbecue in Cupertino and got talking to Alan Boehme, once a chief technology officer for some of the biggest companies in the world, and now an investor in AI startups. Boehme told us how important it was, from his point of view, that tech wasn’t stymied by government regulation. “We have to be worried that people are going to over-regulate it. Europe is the worst, to be honest with you,” he said. “Let's look at how we can benefit society and how this can help lead the world as opposed to trying to hold it back.”
I asked him whether regulation wasn’t part of the reason we have democratically elected governments: to ensure that everyone is kept safe, and that no one is left behind by the pace of change. Shouldn’t the governments we elect be the ones deciding whether to regulate AI, and not the people at this Cupertino barbecue?
“You sound like you're from Sweden,” Boehme responded. “I'm sorry, that's social democracy. That is not what we are here in the U.S. This country is based on a Constitution. We're not based on everybody being equal and holding people back. No, we're not in Sweden.”
As we reported for the podcast, we came to a gradual realization – what’s being built in Silicon Valley isn’t just artificial intelligence, it’s a way of life — even a religion. And it’s a religion we might not have any choice but to join.
In January, the Vatican released a statement in which it argued that we’re in danger of worshiping AI as God. It's an idea we'd discussed with Judy Estrin, who worked on building some of the earliest iterations of the internet. As a young researcher at Stanford in the 1970s, Estrin was building some of the very first networked connections. She is no technophobe, fearful of the future, but she is worried about the zealotry she says is taking over Silicon Valley.
“If you worship innovation, if you worship anything, you can't take a step back and think about guardrails,” she said about the unquestioning embrace of AI. “So we, from a leadership perspective, are very vulnerable to techno populists who come out and assert that this is the only way to make something happen.”
The first step toward reclaiming our lost agency, as AI aims to capture every facet of our world, is simply to pay attention. I've been struck by how rarely we actually listen to what tech leaders are explicitly saying about their vision of the future.
There's a tendency to dismiss their most extreme statements as hyperbole or marketing, but what if they're being honest? What if they truly believe humans, or at least most humans, are replaceable, that traditional concepts of humanity are outdated, that a technological "god" should supersede us? These aren't just ideological positions – they're the foundations for the world being built around us right now.
In our series, we explore artificial intelligence as something that affects our culture, our jobs, our media and our politics. But we should also ask what tech founders and engineers are really building with AI, or what they think they’re building. Because if their vision of society does not have a place for us in it, we should be ready to reclaim our destiny – before our collective future is captured.
Our audio documentary series, CAPTURED: The Secret Behind Silicon Valley’s AI Takeover is available now on Audible. Do please tune in, and you can dig deeper into our stories and the people we met during the reporting below.
This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us? You can listen to the Captured audio series on Audible now.
Jared Genser and Rafael Yuste are an unlikely pair. Yuste, a professor at Columbia University, spends his days in neuroscience labs, using lasers to experiment on the brains of mice. Genser has traveled the world as an international human rights lawyer representing prisoners in 30 countries. But when they met, the two became fast friends. They found common ground in their fascination with neurorights – in “human rights,” as their foundation’s website puts it, “for the Age of Neurotechnology.”
Together, they asked themselves — and the world – what happens when computers start to read our minds? Who owns our thoughts, anyway? This technology is being developed right now — but as of this moment, what happens to your neural data is a legal black box. So what does the fight to build protections for our brains look like? I sat down with Rafael and Jared to find out.
This conversation has been edited for length and clarity.
Q: Rafael, can you tell me how your journey into neurorights started?
Rafael: The story starts with a particular moment in my career. It happened about ten years ago while I was working in a lab at Columbia University in New York. Our research was focused on understanding how the cerebral cortex works. We were studying mice, because the mouse brain is a good model for the human brain. And what we were trying to do was to implant images into the brains of mice so that they would behave as if they were seeing something, except they weren't seeing anything.
Q: How did that work?
Rafael: We were trying to take control of the mouse’s visual perception. So we’d implant neurotechnology into a mouse using lasers, which would allow us to record the activity of the part of the brain responsible for vision, the visual cortex, and change the activity of those neurons. With our lasers, we could map the activity of this part of the brain and try to control it.
These mice were looking at a screen that showed them a particular image, of black and white bars of light that have very high contrast. We used to talk, tongue-in-cheek, about playing the piano with the brain.
We trained the mice to lick from a little spout of juice whenever they saw that image. With our new technology, we were able to decode the brain signals that corresponded to this image in the mouse’s brain and — we hoped — play them back to trick the mice into seeing the image again, even though it wasn’t there.
Q: So you artificially activated particular neurons in the brain to make it think it had seen that image?
Rafael: These are little laboratory mice. We make a surgical incision and we implant in their skull a transparent chamber so that we can see their brains from above with our microscope, with our lasers. And we use our lasers to optically penetrate the brain. We use one laser to image, to map the activity of these neurons. And we use a second laser, a second wavelength, to activate these neurons again. All of this is done with a very sophisticated microscope and computer equipment.
Q: So what happened when you tried to artificially activate the mouse’s neurons, to make it think it was looking at the picture of the black and white bars?
Rafael: When we did that, the mouse licked from the spout of juice in exactly the same way as if it was looking at this image, except that it wasn't. We were putting that image into its brain. The behavior of the mouse when we took over its visual perception was identical to when it was actually seeing the real image.
Q: It must have been a huge breakthrough.
Rafael: Yes, I remember it perfectly. It was one of the most salient days of my life. We were actually altering the behavior of the mice by playing the piano with their cortex. We were ecstatic. I was super happy in the lab, making plans.
And then when I got home, that's when it hit me. I said, “wait, wait, wait, this means humans will be able to do the same thing to other humans.”
I felt this responsibility, like it was a double-edged sword. That night I didn't sleep, I was shocked. I talked to my wife, who works in human rights. And I decided that I should start to get involved in cleaning up the mess.
Q: What do you mean by that?
Rafael: I felt the responsibility of ensuring that these powerful methods that could decode brain activity and manipulate perception had to be regulated to ensure that they were used for the benefit of humanity.
Q: Jared, can you tell me how you came into this?
Jared: Rafael and I met about four years ago. I'm an international human rights lawyer based in Washington and very well known globally for working in that field. I had a single hour-long conversation with Rafa when we met, and it completely transformed my view of the human rights challenges we’ll face in this century. I had no idea about neurotechnologies, where they were, or where they might be heading. Learning how far along they have come and what’s coming in just the next few years — I was blown away. I was both excited and concerned as a human rights lawyer about the implications for our common humanity.
Q: What was your reaction when you heard of the mouse experiment?
Jared: Immediately, I thought of The Matrix. He told me that what can be done in a mouse today could be done in a chimpanzee tomorrow and a human after that. I was shocked by the possibilities. While implanting images into a human brain is still far off, there’s every reason to expect it will eventually be possible.
Q: Can you talk me through some of the other implications of this technology?
Jared: Within the next few years, we’re expected to have wearable brain-computer interfaces that can decode thought to text at 75–80 words per minute with 90 percent accuracy.
That will be an extraordinary revolution in how we interact with technology. Apple is already thinking about this—they filed a patent last year for the next-generation AirPods with built-in EEG scanners. This is undoubtedly one of the applications they are considering.
In just a few years, if you have an iPhone in your pocket and are wearing earbuds, you could think about opening a text message, dictating it, and sending it—all without touching a device. These developments are exciting.
Rafael: I imagine that we'll be hybrid. And part of our processing will happen with devices that will be connected to our brains, to our nervous system. And this could enhance our perception. Our memories — you would be able to do the equivalent of a web search mentally. And that's going to change our behavior. That's going to change the way we absorb information.
Jared: Ultimately, there's every reason to expect we’ll be able to cure chronic pain. It’s already been shown in labs that an implantable brain-computer interface can manage pain for people with chronic pain conditions. By turning off misfiring neurons, you can reduce the pain they feel.
But if you can turn off the neurons, you can turn on the neurons. And that would mean you'll have a wearable cap or hat that could torture a person simply by flipping a switch. In just a few years, physical torture may no longer be necessary because of brain-computer interfaces.
And if these devices can decode your thoughts, that raises serious concerns. What will the companies behind these technologies be able to do with your thoughts? Could they be decoded against your wishes and used for purposes beyond what the devices are advertised for? Those are critical questions we need to address.
Q: How did you start thinking about ways to build rights and guardrails around neurotechnology?
Rafael: I was inspired by the Manhattan Project, where scientists who developed nuclear technology were also involved in regulating its use. That led me to think that we should take a similar approach with neurotechnology — where the power to read and manipulate brain activity needs to be regulated. And that’s how we came up with the idea of the Neurorights Foundation.
So in 2017, I organized a meeting at Columbia University’s Morningside campus of experts from various fields to discuss the ethical and societal implications of neurotechnology. And this is where we came up with the idea of neurorights — rights that would protect brain activity and brain data.
Jared: If you look at global consumer data privacy laws, they protect things like biometric, genetic, and biological information. But neural data doesn't fall under any of these categories. Neural data is electrical and not biological, so it isn't considered biometric data.
There are few, if any, safeguards to protect users from having their neural data used for purposes beyond the intended function of the devices they’ve purchased.
So because neural data doesn't fit within existing privacy protections, it isn't covered by state privacy laws. To address this, we worked with Colorado to adopt the first-ever amendment to its Privacy Act, which defines neural data and includes it under sensitive, protected data.
Rafael: We identified five areas of concern where neurotechnology could impact human rights:
The first is the right to mental privacy – ensuring that the content of our brain activity can't be decoded without consent.
The second is the right to mental integrity – so that no one can change a person's identity or consciousness.
The third is the right to free will – so that our behavior is determined by our own volition, not by external influences, to prevent situations like what we did to those mice.
The fourth is the right to equal access to neural augmentation. Technology and AI will lead to human augmentation of our mental processes, our memory, our perception, our capabilities. And we think there should be fair and equal access to neural augmentation in the future.
And the fifth neuroright is protection from bias and discrimination – safeguarding against interference in mental activity, as neurotechnology could both read and alter brain data, and change the content of people's mental activity.
Jared: The Neurorights Foundation is focused on promoting innovation in neurotechnologies while managing the risks of misuse or abuse. We see enormous potential in neurotechnologies that could transform what it means to be human. At the same time, we want to ensure that proper guardrails are in place to protect people's fundamental human rights.
This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?
This article is an adapted extract from CAPTURED, our new podcast series with Audible about the secret behind Silicon Valley’s AI Takeover. Click here to listen.
We’re moving slowly through the traffic in the heart of the Kenyan capital, Nairobi. Gleaming office blocks have sprung up in the past few years, looming over the townhouses and shopping malls. We’re with a young man named James Oyange — but everyone who knows him calls him Mojez. He’s peering out the window of our 4x4, staring up at the high-rise building where he used to work.
Mojez first walked into that building three years ago, as a twenty-five-year-old, thinking he would be working in a customer service role at a call center. As the car crawled along, I asked him what he would say to that young man now. He told me he’d tell his younger self something very simple:
“The world is an evil place, and nobody's coming to save you.”
It wasn't until Mojez started work that he realised what his job really required him to do. And the toll it would take.
It turned out, Mojez's job wasn't in customer service. It wasn't even in a call center. His job was to be a “Content Moderator,” working for social media giants via an outsourcing company. He had to read and watch the most hateful, violent, grotesque content released on the internet and get it taken down so the rest of us didn’t have to see it. And the experience changed the way he thought about the world.
“You tend to look at people differently,” he said, talking about how he would go down the street and think of the people he had seen in the videos — and wonder if passersby could do the same things, behave in the same ways. “Can you be the person who, you know, defiled this baby? Or I might be sitting down with somebody who has just come from abusing their wife, you know.”
There was a time – and it wasn’t that long ago – when things like child pornography and neo-Nazi propaganda were relegated to the darkest corners of the internet. But with the rise of algorithms that can spread this kind of content to anyone who might click on it, social media companies have scrambled to amass an army of hidden workers to clean up the mess.
These workers are kept hidden for a reason. They say if slaughterhouses had glass walls, the world would stop eating meat. And if tech companies were to reveal what they make these digital workers do, day in and day out, perhaps the world would stop using their platforms.
This isn't just about “filtering content.” It's about the human infrastructure that makes our frictionless digital world possible – the workers who bear witness to humanity's darkest impulses so that the rest of us don't have to.
Mojez is fed up with being invisible. He's trying to organise a union of digital workers to fight for better treatment by the tech companies. “Development should not mean servitude,” he said. “And innovation should not mean exploitation, right?”
We are now in the outskirts of Nairobi, where Mojez has brought us to meet his friend, Mercy Chimwani. She lives on the ground floor of the half-built house that she rents. There's mud beneath our feet, and above you can see the rain clouds through a gaping hole where the unfinished stairs meet the sky. There’s no electricity, and when it rains, water runs right through the house. Mercy shares a room with her two girls, her mother, and her sister.
It’s hard to believe, but this informal settlement without a roof is the home of someone who used to work for Meta.
Mercy is part of the hidden human supply chain that trains AI. She was hired by what’s called a BPO, or a Business Process Outsourcing company, a middleman that finds cheap labour for large Western corporations. Often people like Mercy don’t even know who they’re really working for. But for her, the prospect of a regular wage was a step up, though her salary – $180 a month, or about a dollar an hour – was low, even by Kenyan standards.
She started out working for an AI company – she did not know its name – training software to be used in self-driving cars. She had to annotate what’s called a “driveable space” – drawing around stop signs and pedestrians, teaching the car’s artificial intelligence to recognize hazards on its own.
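To make that labelling work concrete, here is a minimal sketch, in Python, of what a single annotation record might look like before it is fed to a model. The field names, values and helper function are illustrative assumptions, not any company’s actual format.

```python
# A hypothetical, simplified annotation record of the kind a data labeller
# might produce for one street-scene image in a self-driving dataset.
# Field names and values are illustrative assumptions, not any company's schema.
annotation = {
    "image_id": "frame_000123.jpg",
    "annotator_id": "worker_042",
    "boxes": [  # hazards the worker has drawn around by hand
        {"label": "stop_sign", "bbox_xywh": [412, 180, 36, 36]},
        {"label": "pedestrian", "bbox_xywh": [250, 210, 40, 110]},
    ],
    # polygon traced around the road surface the car is allowed to drive on
    "drivable_space": [[0, 480], [640, 480], [520, 300], [120, 300]],
}

def to_training_example(record):
    """Flatten one human-made record into the inputs and targets a model trains on."""
    targets = [(box["label"], box["bbox_xywh"]) for box in record["boxes"]]
    return record["image_id"], targets, record["drivable_space"]

print(to_training_example(annotation))
```

Multiply a record like this by millions of frames, each traced by hand, and you have the invisible labour behind a single “self-driving” feature.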
And then, she switched to working for a different client: Meta.
“On the first day on the job it was hectic. Like, I was telling myself, like, I wish I didn't go for it, because the first image I got to see, it was a graphic image.” The video, Mercy told me, is imprinted on her memory forever. It was a person being stabbed to death.
“You could see people committing suicide live. I also saw a video of a very young kid being raped live. And you are here, you have to watch this content. You have kids, you are thinking about them, and here you are at work. You have to like, deal with that content. You have to remove it from the platform. So you can imagine all that piling up within one person. How hard it is,” Mercy said.
Silicon Valley likes to position itself as the pinnacle of innovation. But what they hide is this incredibly analogue, brute force process where armies of click workers relentlessly correct and train the models to learn. It’s the sausage factory that makes the AI sausage. Every major tech company does this – TikTok, Facebook, Google and OpenAI, the makers of ChatGPT.
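As a rough illustration of that process, the sketch below shows, under assumptions of my own (placeholder posts and a generic scikit-learn text classifier), how moderators’ decisions become the labelled data that trains an automated filter. It is a toy example, not any platform’s real pipeline.

```python
# A toy sketch, not any platform's real pipeline: human moderators' decisions
# become the labelled examples that train an automated content filter.
# Assumes scikit-learn is installed; the posts below are placeholder text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Each pair is (post text, label a human moderator assigned): 0 = keep, 1 = remove.
moderated_posts = [
    ("holiday photos with my family", 0),
    ("graphic video of violence against a person", 1),
    ("recipe for a birthday cake", 0),
    ("threatening message targeting a user", 1),
]
texts, labels = zip(*moderated_posts)

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(texts)      # turn the text into numeric features

classifier = LogisticRegression().fit(features, labels)

# The trained filter can now score new posts before anyone has to see them.
new_post = vectorizer.transform(["graphic violence targeting a person"])
print(classifier.predict(new_post))             # e.g. [1], meaning flag for removal
```

The point of the sketch is the dependency it makes visible: every label in that training set is a decision a person like Mercy had to watch something to make.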
Mercy was saving to move to a house that had a proper roof. She wanted to put her daughters into a better school. So she felt she had to carry on earning her wage. And then she realised that nearly everyone she worked with was in the same situation as her. They all came from the very poorest neighborhoods in Nairobi. “I realised, like, yo, they're really taking advantage of people who are from the slums,” she said.
After we left Mercy’s house, Mojez took us to the Kibera informal settlement. “Kibera is the largest urban slum area in Africa, and the third largest slum in the entire world,” he told us as we drove carefully through the twisting, crooked streets. There were people everywhere – kids practicing a dance routine, whole families piled onto motorbikes. There were stall holders selling vegetables and live chickens, toys and wooden furniture. Most of the houses had corrugated iron roofs and no running water indoors.
Kibera is where the model of recruiting people from the poorest areas to do tech work was really born. A San Francisco-based organization called Sama started training and hiring young people here to become digital workers for Big Tech clients including Meta and OpenAI.
Sama claimed that they offered a way for young Kenyans to be a part of Silicon Valley’s success. Technology, they argued, had the potential to be a profound equalizer, to create opportunities where none existed.
Mojez has brought us into the heart of Kibera to meet his friend Felix. A few years ago Felix heard about the Sama training school - back then it was called Samasource. He heard how they were teaching people to do digital work, and that there were jobs on offer. So, like hundreds of others, Felix signed up.
“This is Africa,” he said, as we sat down in his home. “Everyone is struggling to find a job.” He nodded his head out towards the street. “If right now you go out here, uh, out of 10, seven or eight people have worked with SamaSource.” He was referring to people his age – Gen Z and young millennials – who were recruited by Sama with the promise that they would be lifted out of poverty.
And for a while, Felix’s life was transformed. He was the main breadwinner for his family, for his mother and two kids, and at last he was earning a regular salary.
Photo caption: Kibera is Africa's largest urban slum. Hundreds of young people living here were recruited to work on projects for Big Tech. (Becky Lipscombe; Simone Boccaccio/SOPA Images/LightRocket via Getty Images)
But in the end, Felix was left traumatized by the work he did. He was laid off. And now he feels used and abandoned. “There are so many promises. You’re told that your life is going to be changed, that you’re going to be given so many opportunities. But I wouldn't say it's helping anyone, it's just taking advantage of people,” he said.
When we reached out to Sama, a PR representative disputed the notion that Sama was taking advantage and cashing in on Silicon Valley’s headlong rush towards AI.
Mental health support, the PR insisted, had been provided, and the majority of Sama’s staff were happy with the conditions. “Sama,” she said, “has a 16-year track record of delivering meaningful work in Sub-Saharan Africa, lifting nearly 70,000 people out of poverty.” Sama eventually cancelled its contracts with Meta and OpenAI, and says it no longer recruits content moderators. When we spoke to OpenAI, which has hired people in Kenya to train its models, they said they believe data annotation work needs to be done humanely. The efforts of the Kenyan workers were, they said, “immensely valuable.”
You can read Sama’s and OpenAI’s responses to our questions in full below. Meta did not respond to our requests for comment.
Despite their defense of their record, Sama is facing legal action in Kenya.
“I think when you give people work for a period of time and those people can't work again because their mental health is destroyed, that doesn't look like lifting people out of poverty to me,” said Mercy Mutemi, a lawyer representing more than 180 content moderators in a lawsuit against Sama and Meta. The workers say they were unfairly laid off when they tried to lobby for better conditions, and then blacklisted.
“You've used them,” Mutemi said. “They're in a very compromised mental health state, and then you've dumped them. So how did you help them?”
As Mutemi sees it, the result of recruiting from the slum areas is that you have a workforce of disadvantaged people, who’ll be less likely to complain about conditions.
“People who've gone through hardship, people who are desperate, are less likely to make noise at the workplace because then you get to tell them, ‘I will return you to your poverty.’ What we see is again, like a new form of colonization where it's just extraction of resources, and not enough coming back in terms of value whether it's investing in people, investing in their well-being, or just paying decent salaries, investing in skill transfer and helping the economy grow. That's not happening.”
“This is the next frontier of technology,” she added, “and you're building big tech on the backs of broken African youth.”
At the end of our week in Kenya, Mojez takes us to Karura forest, the green heart of Nairobi. It’s an oasis of calm, where birds, butterflies and monkeys live among the trees, and the rich red earth has that amazing, just-rained-on smell. He comes here to decompress, and to try to forget about all the horrific things he’s seen while working as a content moderator.
Mojez describes the job he did as a digital worker as a loss of innocence. “It made me think about, you know, life itself, right? And that we are alone and nobody's coming to save us. So nowadays I've gone back to how my ancestors used to do their worship — how they used to give back to nature.” We're making our way towards a waterfall. “There's something about the water hitting the stones and just gliding down the river that is therapeutic.”
For Mojez, one of the most frightening things about the work he was doing was the way that it numbed him, accustomed him to horror. Watching endless videos of people being abused, beheaded, or tortured - while trying to hit performance targets every hour - made him switch off his humanity, he said.
A hundred years from now, will we remember the workers who trained humanity’s first generation of AI? Or will these 21st-century monuments to human achievement bear only the names of the people who profited from their creation?
Artificial intelligence may well go down in history as one of humanity’s greatest triumphs. Future generations may look back at this moment as the time we truly entered the future.
And just as ancient monuments like the Colosseum endure as a lasting embodiment of the values of their age, AI will embody the values of our time too.
So, we face a question: what legacy do we want to leave for future generations? We can't redesign systems we refuse to see. We have to acknowledge the reality of the harm we are allowing to happen. But every story – like that of Mojez, Mercy and Felix – is an invitation. Not to despair, but to imagine something better for all of us rather than the select few.
Christopher Wylie and Becky Lipscombe contributed reporting. Our new audio series on how Silicon Valley’s AI prophets are choosing our future for us is out now on Audible.
This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?