Russia has started using a new drone tactic in Ukraine. Russian Shahed kamikaze drones have begun performing complex maneuvers mid-flight in an apparent attempt to evade Ukrainian interceptor drones, according to electronic warfare expert Serhii Beskrestnov, also known as Flash.
Ukrainian interceptor drones are the country’s most advanced weapon for defending against Russian drones. President Volodymyr Zelenskyy has set a clear goal for domestic manufacturers: ensure the capacity to deploy at least 1,000 such interceptors daily to protect Ukrainian cities and military targets.
“Shaheds have started executing a set of complex in-flight maneuvers aimed at reducing the effectiveness of our aerial interceptor drones,” explains Beskrestnov.
According to him, the Russian military has long been preparing to counter Ukrainian interceptors, and this new drone approach is only the beginning.
Ukraine prepares to strike back
Despite the new threat, the expert insists that Ukraine is actively improving its own interception technology.
In the first half of 2025, 6,754 civilians in Ukraine were killed or injured, the highest number for a six-month period since 2022, the UN reports. In July alone, Russia launched at least 5,183 long-range munitions at Ukraine, including a record 728 drones on 9 July. Kyiv and the port city of Odesa have been hit hardest in recent weeks.
“We will keep working on countering their tech with ours. You didn’t really think the enemy would abandon its most widespread weapon so easily, did you?” the expert says.
A technological fight unfolds
Shaheds remain one of the main threats to Ukraine’s rear, making the development of interceptor drones a key component of defense. As the situation shows, the air war is entering a new phase, one in which each side upgrades its unmanned systems in real time.
A new Russian drone built to deceive Ukrainian air defenses has been exposed by Ukraine’s intelligence as being made entirely from Chinese components. Militarnyi reports that the aircraft, though intended primarily as a decoy, is capable of carrying a 15-kg warhead.
The drone’s fuselage is shaped like a delta wing, resembling the Iranian-designed Shahed-136, but it is significantly smaller in size. Russia uses the Shaheds, carrying up to 90 kg of explosives each, in daily attacks against Ukrainian civilians. In order to overwhelm Ukrainian air defenses, the Russians launch multiple cheaper decoy drones.
Drone mimics Shahed shape but is smaller and Chinese-made
Ukraine’s Main Intelligence Directorate has published a detailed breakdown of the drone’s construction. Although its main function is to act as a false target alongside long-range drones, it can also carry a warhead weighing up to 15 kg.
All onboard systems and electronic blocks are of Chinese origin. Nearly half of them — including the flight controller with autopilot, navigation modules and antennas, airspeed sensor, and Pitot tube — come from a single Chinese company, CUAV Technology. The company specializes in developing and producing UAV system modules and applications.
Banned CUAV tech still shows up in new Russian UAV
Besides CUAV components, the TsBST decoy drone contains the following Chinese-made parts: DLE-60 engine and ignition module, KST servos, a Razer video camera by Foxeer Technology, Mayatech RFD900X data transmission module, ReadyToSky video transmitter, Hobbywing Technology power regulator, and an HRB Power battery.
The UAV is also equipped with a Chinese-made copy of the Australian RFD900x data transmission module by RFDesign. Like the original, this device is designed to transmit data over long distances — up to 40 km in line of sight depending on the antenna. It enables data links from the drone to a ground station or from one UAV to another, expanding reconnaissance capabilities.
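That 40 km line-of-sight figure is consistent with a basic free-space link budget in the 900 MHz band this class of module operates in. Below is a minimal sketch of that arithmetic; the transmit power, antenna gains, and receiver sensitivity are assumed illustrative values, not manufacturer specifications:

```python
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Assumed illustrative values -- not official RFD900x specifications.
tx_power_dbm = 30          # ~1 W transmitter
antenna_gain_dbi = 3       # modest gain antenna on each end
rx_sensitivity_dbm = -110  # plausible sensitivity at a low data rate

loss = free_space_path_loss_db(40, 915)               # 40 km at 915 MHz
rx_power = tx_power_dbm + 2 * antenna_gain_dbi - loss
margin = rx_power - rx_sensitivity_dbm

print(f"Path loss over 40 km: {loss:.1f} dB")          # ~123.7 dB
print(f"Received: {rx_power:.1f} dBm, margin: {margin:.1f} dB")
```

Under these assumptions the link closes with roughly 20 dB of margin, which is why modules of this class can plausibly claim tens of kilometers in unobstructed line of sight, with antennas as the deciding factor.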
In October 2022, CUAV Technology announced restrictions on supplying its products to both Ukraine and Russia to prevent their use in military applications. However, in 2023, Russia presented a vertical takeoff drone as an original development, which turned out to be a CUAV product available on Aliexpress.
Militarnyi notes that DLE engines were previously used by Russian developers in the Gerbera and Parodiia decoy drones, while KST servos have appeared in Shahed-136 drones, the V2U, and glide kits for aerial bombs.
The numbers are staggering: Meta is offering AI researchers total compensation packages of up to $300 million over four years, with individual deals like former Apple executive Ruoming Pang's $200 million package making headlines across Silicon Valley. Meanwhile, OpenAI just raised $40 billion at a $300 billion valuation, reportedly the largest private tech funding round in history.
But beneath these eye-watering dollar figures lies a profound transformation: Silicon Valley’s elite have evolved from eager innovators into architects of a new world order, reshaping society with their unprecedented power. This shift is not just about money or technology; it marks a fundamental change in how power is conceived and exercised.
We often talk about technology as if it exists in a silo, separate from politics or culture. But those boundaries are rapidly dissolving. Technology is no longer just a sector or a set of tools; it is reshaping everything, weaving itself into the very fabric of society and power. The tech elite are no longer content with tech innovation alone; they are crafting a new social and political reality, wielding influence that extends far beyond the digital realm.
To break out of these siloed debates, at the end of June we convened a virtual conversation with four remarkable minds: Christopher Wylie (the Cambridge Analytica whistleblower and host of our Captured podcast), pioneering technologist Judy Estrin, filmmaker and digital rights advocate Justine Bateman, and philosopher Shannon Vallor. Our goal: to explore how Silicon Valley’s culture of innovation has morphed into a belief system, one that’s migrated from the tech fringe to the center of our collective imagination, reimagining what it means to be human.
The conversation began with a story from Chris Wylie that perfectly captured the mood of our times. While recording the Captured podcast, he found himself stranded in flooded Dubai, missing a journalism conference in Italy. Instead, he ended up at a party thrown by tech billionaires, a gathering that, as he described in a voice note he sent us from the bathroom, felt like a dispatch from the new center of power:
“People here are talking about longevity, how to live forever. But also prepping—how to prepare for when society gets completely undermined.”
https://www.youtube.com/watch?v=CS1Xs_z1rFk
Listen to Chris Wylie’s secret voice message from a Dubai bathroom.
At that party, tech billionaires weren’t debating how to fix democracy or save society. They were plotting how to survive its unraveling. That fleeting moment captured the new reality: while some still debate how to repair the systems we have, others are already plotting their escape, imagining futures where technology is not just a tool, but a lifeboat for the privileged few. It was a reminder that the stakes are no longer abstract or distant: they are unfolding, right now, in rooms most of us will never enter.
Our discussion didn’t linger on the spectacle of that Dubai party for long. Instead, it became a springboard to interrogate the broader shift underway: how Silicon Valley’s narratives, once quirky, fringe, utopian, have become the new center of gravity for global power. What was once the domain of science fiction is now the quiet logic guiding boardrooms, investment strategies, and even military recruitment.
As Wylie put it, “When you start to think about Silicon Valley not simply as a technology industry or a political institution, but one that also emits spiritual ideologies and prophecies about the nature and purpose of humanity, a lot of the weirdness starts to make a lot more sense.”
Judy Estrin, widely known in tech circles as the "mother of the cloud" for her pioneering role in building the foundational infrastructure of the internet, has witnessed this evolution firsthand. Estrin played a crucial part in developing the TCP/IP protocols that underpin digital communication, and later served as CTO of Cisco during the internet’s explosive growth. She’s seen the shift from Steve Jobs’ vision of technology as "a bicycle for the mind" to Marc Andreessen’s declaration that "software is eating the world."
Now, Estrin sounds the alarm: the tech landscape has moved from collaborative innovation to a relentless pursuit of control and dominance. Today’s tech leaders are no longer just innovators; they are crafting a new social architecture that redefines how we live, think, and connect.
What makes this transformation of power particularly insidious is the sense of inevitability that surrounds it. The tech industry has succeeded in creating a narrative where its vision of the future appears unstoppable, leaving the rest of us as passive observers rather than active participants in the shaping of our technological destiny.
Peter Thiel, the billionaire investor and PayPal co-founder, embodies this mindset. In a recent interview, Thiel was asked point-blank whether he wanted the human race to endure. He hesitated before answering, “Uh, yes,” then added: “I also would like us to radically solve these problems…” Thiel’s ambivalence towards other human beings and his appetite for radical transformation capture the mood of a class of tech leaders who see the present as something to be escaped, not improved—a mindset that feeds the sense of inevitability and detachment Estrin warns about.
Estrin argues that this is a new form of authoritarianism, where power is reinforced not through force but through what she calls "silence and compliance." The speed and scale of today's AI integration, she says, requires us "to be standing up and paying more attention."
Shannon Vallor, philosopher and ethicist, widened the lens. She cautioned that the quasi-religious narratives emerging from Silicon Valley—casting AI as either savior or demon—are not simply elite fantasies. Rather, the real risk lies in elevating a technology that, at its core, is designed to mimic us. Large language models, she explained, are “merely broken reflections of ourselves… arranged to create the illusion of presence, of consciousness, of being understood.”
The true danger, Vallor argued, is that these illusions are seeping into the minds of the vulnerable, not just the powerful. She described receiving daily messages from people convinced they are in relationships with sentient AI gods—proof that the mythology surrounding these technologies is already warping reality for those least equipped to resist it.
She underscored that the harms of AI are not distributed equally: “The benefits of technological innovation have gone to the people who are already powerful and well-resourced, while the risks have been pushed onto those that are already suffering from forms of political disempowerment and economic inequality.”
Vallor’s call was clear: to reclaim agency, we must demystify technology, recognize who is making the choices, and insist that the future of AI is not something that happens to us, but something that we shape together.
As the discussion unfolded, the panelists agreed: the real threat isn’t just technological overreach, but the surrender of human agency. The challenge is not only to question where technology is taking us, but to insist on our right to shape its direction, before the future is decided without us.
Justine Bateman, best known for her iconic roles in Hollywood and her outspoken activism for artists’ rights, entered the conversation with the perspective of someone who has navigated both the entertainment and technology industries. Bateman, who holds a computer science degree from UCLA, has become a prominent critic of how AI and tech culture threaten human creativity and agency.
During the discussion, Bateman and Estrin found themselves at odds over how best to respond to the growing influence of AI. Bateman argued that the real threat isn’t AI itself becoming all-powerful, but rather the way society risks passively accepting and even revering technology, allowing it to become a “sacred cow” beyond criticism. She called for open ridicule of exaggerated tech promises, insisting, “No matter what they do about trying to live forever, or try to make their own god stuff, it doesn’t matter. You’re not going to make a god that replaces God. You are not going to live forever. It’s not going to happen.” Bateman also urged people to use their own minds and not “be lazy” by simply accepting the narratives being sold by tech elites.
Estrin pushed back, arguing that telling people to use their minds and not be lazy risks alienating those who might otherwise be open to conversation. Instead, she advocated for nuance, urging that the debate focus on human agency, choice, and the real risks and trade-offs of new technologies, rather than falling into extremes or prescribing a single “right” way to respond.
“If we have a hope of getting people to really listen… we need to figure out how to talk about this in terms of human agency, choice, risks, and trade-offs,” she said. “Because when we go into the extremes, you’re either for it or against it, people tune out, and we’re gonna lose that battle.”
Justine Bateman and Judy Estrin: Debate Over AI’s Future.
At this point, Christopher Wylie offered a strikingly different perspective, responding directly to Bateman’s insistence that tech was “not going to make a god that replaces God.”
“I’m actually a practicing Buddhist, so I don’t necessarily come to religion from a Judeo-Christian perspective,” he said, recounting a conversation with a Buddhist monk about whether uploading a mind to a machine could ever count as reincarnation. Wylie pointed out that humanity has always invested meaning in things that cannot speak back: rocks, stars, and now, perhaps, algorithms. “There are actually valid and deeper, spiritual and religious conversations that we can have about what consciousness actually is if we do end up tapping into it truly,” he said.
Rather than drawing hard lines between human and machine, sacred and profane, Wylie invited the group to consider the complexity, uncertainty, and humility required as we confront the unknown. He then pivoted to a crucial obstacle in confronting the AI takeover:
“We lack a common vocabulary to even describe what the problems are,” Wylie argued, likening the current moment to the early days of climate change activism, when terms like “greenhouse gases” and “global warming” had to be invented before a movement could take shape. “Without the words to name the crisis, you can’t have a movement around those problems.”
The danger, he suggested, isn’t just technological, it’s linguistic and cultural. If we can’t articulate what’s being lost, we risk losing it by default.
Finally, Wylie reframed privacy as something far more profound than hiding: “Privacy is your ability to decide how to shape yourself in different situations on your own terms, which is, like, really, really core to your ability to be an individual in society.”
When we give up that power, we don’t just become more visible to corporations or governments, we surrender the very possibility of self-determination. The conversation, he insisted, must move beyond technical fixes and toward a broader fight for human agency.
Christopher Wylie: The Real Barrier to an AI Movement Is Missing Vocabulary.
As we wrapped up, what lingered was not a sense of closure, but a recognition that the future remains radically open—shaped not by the inevitability of technology, but by the choices we make, questions we ask, and movements we are willing to build. Judy Estrin’s call echoed in the final moments: “We need a movement for what we’re for, which is human agency.”
This movement, however, should not be against technology itself. As Wylie argued in the closing minutes, “To criticize Silicon Valley, in my view, is to be pro-tech. Because what you're criticizing is exploitation, a power takeover of oligarchs that ultimately will inhibit what technology is there for, which is to help people.”
The real challenge is not to declare victory or defeat, but to reclaim the language, the imagination, and the collective will to shape humanity's next chapter.
This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us? You can listen to the Captured audio series on Audible now.
Ukrainian factories building drones to down Russian aircraft are changing the face of modern air defense — one low-cost interceptor at a time. On 18 July, the New York Post published a report on its journalists’ visit to two drone production facilities in Kyiv. The publication got an inside look at how Ukraine is confronting drone warfare with ingenuity and affordability.
Amid the ongoing Russo-Ukrainian war, Moscow continues launching daily drone strikes against Ukrainian cities, often deploying hundreds of Iranian-designed Shahed explosive drones to target civilians. Each Shahed can carry up to 90 kg of explosives. With limited access to foreign air defense systems, Ukraine has focused on developing and scaling up production of interceptor drones to counter Russia’s growing Shahed onslaught.
Kyiv engineers race to scale drone interceptors
The New York Post says Nomad Drones and a second, unnamed company are leading a new surge in Ukrainian interceptor production. Their drones are crafted specifically to neutralize Russian-launched Shaheds, which cost Russia around $50,000 apiece. Ukraine’s new models are dramatically cheaper — priced between $3,000 and $7,000, depending on type and size.
Nomad Drones co-founder and CEO Andrii Fedorov explained the concept to the NYP.
“In Ukraine, there is a phrase people have been using — that ‘quantity’ becomes ‘quality,’” he said.
According to Fedorov, deploying a $1 million missile to destroy a $50,000 drone makes no economic sense.
“If you have 20 drones, then the capacity costs you, say, $40,000 to shoot it down.”
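The arithmetic behind that claim is easy to reproduce. Here is a back-of-envelope sketch using the figures cited in this article; the unit costs are the article's approximations, not official procurement data:

```python
# Back-of-envelope cost-exchange math, using the figures cited in the article.
SHAHED_COST = 50_000                  # approximate cost of one Shahed, USD
MISSILE_COST = 1_000_000              # approximate cost of one air-defense missile
INTERCEPTOR_PRICES = (3_000, 7_000)   # reported price range per interceptor drone

def exchange_ratio(shot_cost: float, target_cost: float) -> float:
    """Cost of the shot divided by the cost of the target it destroys.

    Above 1.0, the defender pays more than the attacker;
    below 1.0, the defense is economically favorable.
    """
    return shot_cost / target_cost

print(f"Missile vs Shahed: {exchange_ratio(MISSILE_COST, SHAHED_COST):.0f}x")
for price in INTERCEPTOR_PRICES:
    print(f"${price:,} interceptor vs Shahed: {exchange_ratio(price, SHAHED_COST):.2f}x")

# Even if several interceptors are expended per kill, the defense stays
# cheaper than its target: e.g. 5 x $7,000 = $35,000 < $50,000.
```

By this math, a missile costs twenty times the drone it destroys, while even the priciest interceptor costs about a seventh of it. That favorable exchange is the "quantity becomes quality" point Fedorov is making.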
Cost-effective jamming-proof drones
Nomad’s aircraft are designed for cost-effective lethality. Equipped with fiber-optic cables, they avoid jamming and reach enemy drones undetected by radars. Each unit carries explosives and can be detonated remotely on approach. That ability is critical against fast-moving targets like Shaheds, often launched in swarms across Ukrainian airspace.
A second firm — unnamed in the report due to repeated Russian strikes on its facility — builds meter-long, missile-style interceptors. The company continues operating despite multiple attacks.
“It’s all about cost-effectiveness,” an employee said. “Western technologies are so cool and modern — they are expensive at the same time.”
Built for war, priced for survival
The strategy centers on affordability, speed, and scalable output. Nomad Drones and others now produce tens of thousands of interceptors monthly. These low-cost systems are not meant to endure — they’re made to fly once, explode midair, and protect civilian lives.
This model contrasts sharply with existing Western air defense systems, which rely heavily on expensive precision munitions. With Russia launching over 700 drones in a single night last week, Ukrainian engineers have prioritized high-volume production as the only viable path forward.
Ukrainian-made drones may soon bolster US forces, which trail China in drone technology. As the NYP reported earlier, Ukraine’s president confirmed a “mega deal” under discussion with the Trump administration to trade battle-tested UAVs for American weapons.
The US plans to invest in the production of Ukrainian drones. New Prime Minister Yuliia Svyrydenko has announced that Ukraine intends to sign a drone agreement with American partners, Reuters reports.
Drone warfare has defined the Russo-Ukrainian war, with unmanned systems deployed across air, land, and sea. Ukraine and Russia remain locked in a fast-paced arms race, constantly advancing their drone technologies and testing new offensive and defensive systems.
“We plan to sign a ‘drone deal’ with the United States. We are discussing investments in the expansion of production of Ukrainian drones by the US,” says Svyrydenko.
Svyrydenko clarified that the deal involves the US purchase of a large batch of Ukrainian unmanned aerial vehicles.
Svyrydenko added that Ukrainian President Volodymyr Zelenskyy and US President Donald Trump made the political decision on the agreement earlier, and officials are now discussing its details.
Earlier, Euromaidan Press reported that both leaders were considering what’s being called a “mega deal.” Under the proposed agreement, Kyiv would sell its combat-hardened drone systems to Washington. In return, Washington would sell Ukraine a significant array of American weapons.
Zelenskyy emphasized that Ukraine is ready to share its knowledge gained from over three years of fighting against Russia’s full-scale invasion.
Romania wants to build drones with Ukraine, but production is delayed until 2026 due to lack of military funding. Digi24 reports that Romania’s Defense Ministry wants to launch a joint drone-manufacturing project, but no funds are available this year to begin construction or procurement.
Drone warfare has shaped the Russo-Ukrainian conflict, with Ukraine deploying UAVs across all domains. The ongoing Russian invasion has driven a surge in Ukrainian drone production, and the Ministry of Defense recently stated it could produce up to 10 million drones a year if properly funded.
Romania wants to build drones with Ukraine, but budget delay blocks start
Romania wants to build drones with Ukraine, aiming to manufacture UAVs inside Romania and eventually export them to other European countries. Digi24 reports that the Romanian Ministry of Defense has confirmed it is set to negotiate with officials from Kyiv. The two sides aim to establish a co-production plan for drones, following models already used by Ukraine in partnerships with Denmark and Norway.
According to Digi24, the business plan is not complex: Romania would purchase the technical specifications of drones that Ukraine has developed during its war experience. Those designs, proven in combat, would serve as the base for production inside Romania.
The proposed facility would likely be located in Brașov, Transylvania. Romanian and Ukrainian engineers would cooperate on-site to assemble the UAVs. Most of the drones would enter service with the Romanian army, but many would also be intended for sale across Europe, per the reported plan.
Factory plan awaits funding, likely in 2026
Despite alignment on the concept, the project faces a major obstacle: Romania currently lacks the funding to implement it. Digi24 notes that while Ukraine is willing to move forward and eager to secure income from such cooperation, Romania cannot commit to payments this year.
The next opportunity to fund the drone partnership would come with Romania’s 2026 defense budget. Until then, the joint production initiative remains in the planning phase.
Ukraine’s new Magura W6P naval drone patrols up to 1,000 km, offering longer range and smarter sea reconnaissance, Militarnyi reports. The latest model shifts from strike operations to maritime patrol and intelligence gathering. Militarnyi’s correspondent visited a closed presentation of the new maritime robotic system, recently organized by Ukraine’s HUR military intelligence agency.
Ukraine’s earlier Magura V5 naval kamikaze drones helped push Russia’s Black Sea Fleet out of eastern Crimea by sinking a significant part of the fleet. Recent upgrades like the V7 and W6 series mark the next phase in Ukraine’s maritime drone capabilities, with the W6P as the latest modification in this highly successful series.
Magura W6P naval drone patrols 1,000 km with enhanced stability and sensors
The Magura W6P trades kamikaze capability for advanced reconnaissance systems and an operational radius extended from 800 km to 1,000 km. Unlike its predecessor, the Magura V5, which reached speeds up to 50 knots, the W6P has a top speed of 36 knots and cruises at 21 knots, powered by a 200-horsepower Suzuki DF200 gasoline engine. The change favors endurance over speed for longer patrols.
The drone features a unique trimaran hull with two outriggers, which increases stability at sea and reduces rolling in waves or during maneuvers. The design also widens the deck to 2 meters, providing space for mounting equipment such as launch containers for strike FPV drones, although the W6P itself no longer performs kamikaze attacks. The fully loaded weight is 1,900 kg, including a 400 kg payload capacity.
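Those published figures imply very long patrols. A rough endurance sketch, assuming straight-line travel at cruise speed and ignoring sea state, currents, and fuel reserves (none of which the article specifies):

```python
# Rough patrol-endurance estimate from the Magura W6P figures above.
KM_PER_NAUTICAL_MILE = 1.852

cruise_knots = 21      # reported cruise speed
radius_km = 1_000      # reported operational radius

cruise_kmh = cruise_knots * KM_PER_NAUTICAL_MILE   # ~38.9 km/h
round_trip_km = 2 * radius_km                      # out and back

hours = round_trip_km / cruise_kmh
print(f"Cruise speed: {cruise_kmh:.1f} km/h")
print(f"Out-and-back patrol at maximum radius: ~{hours:.0f} hours")   # ~51 h
```

An out-and-back run at maximum radius works out to roughly two days at sea, which helps explain why the design trades the V5's speed for endurance.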
Advanced radar, optical systems, and satellite communications enhance reconnaissance
Magura W6P is equipped with a gyro-stabilized optical station featuring day and thermal imaging channels. The drone’s onboard Furuno radar detects ships up to 30 kilometers away and large tankers up to 60 kilometers, though the low antenna height may reduce this range. Smaller boats can be detected within 7 kilometers.
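The caveat about antenna height is basic radar geometry: against surface targets, detection range is capped by the radar horizon, which grows with the height of both the antenna and the target. Here is a sketch of the standard 4/3-earth approximation; the heights are assumptions for illustration (the article gives neither), and real detection ranges also depend on radar power and target size, so these numbers will not exactly match the figures quoted above:

```python
import math

def radar_horizon_km(antenna_height_m: float, target_height_m: float) -> float:
    """Radar horizon under the standard 4/3-earth-radius model:
    d [km] ~= 4.12 * (sqrt(h1) + sqrt(h2)), with heights in meters."""
    return 4.12 * (math.sqrt(antenna_height_m) + math.sqrt(target_height_m))

# Illustrative heights only -- the article does not publish these values.
usv_antenna_m = 3   # assumed mast height on a small unmanned surface vessel
targets = {"small boat": 1, "cargo ship": 25, "large tanker": 45}

for name, height_m in targets.items():
    print(f"{name:12s}: horizon ~{radar_horizon_km(usv_antenna_m, height_m):.0f} km")
```

Tall superstructures rise above the horizon much sooner than a low hull, which is why the same low-mounted radar can spot a tanker several times farther away than a small boat.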
Additionally, the drone uses a multichannel satellite communication system to maintain control despite enemy electronic warfare attempts.
Magura W6P part of Ukraine’s growing naval drone defense system
Ukraine’s naval forces and developers are working to integrate unmanned systems like Magura W6P into a comprehensive maritime defense network. These drones will patrol, locate, and help neutralize threats in Ukraine’s waters.
The Magura W6P serves primarily as a reconnaissance and patrol component, complementing other drones such as the recently introduced Magura V7, which includes acoustic monitoring.
Tech leaders say AI will bring us eternal life, help us spread out into the stars, and build a utopian world where we never have to work. They describe a future free of pain and suffering, in which all human knowledge will be wired into our brains. Their utopian promises sound more like proselytizing than science, as if AI were the new religion and the tech bros its priests. So how are real religious leaders responding?
As Georgia's first female Baptist bishop, Rusudan Gotsiridze challenges the doctrines of the Orthodox Church, and is known for her passionate defence of women’s and LGBTQ+ rights. She stands at the vanguard of old religion, an example of its attempts to modernize — so what does she think of the new religion being built in Silicon Valley, where tech gurus say they are building a superintelligent, omniscient being in the form of Artificial General Intelligence?
Gotsiridze first tried to use AI a few months ago. The result chilled her to the bone. It made her wonder if Artificial Intelligence was in fact a benevolent force, and to think about how she should respond to it from the perspective of her religious beliefs and practices.
In this conversation with Coda’s Isobel Cockerell, Bishop Gotsiridze discusses the religious questions around AI: whether AI can really help us hack back into paradise, and what to make of the outlandish visions of Silicon Valley’s powerful tech evangelists.
Bishop Rusudan Gotsiridze and Isobel Cockerell in conversation at the ZEG Storytelling Festival in Tbilisi last month. Photo: Dato Koridze.
This conversation took place at ZEG Storytelling Festival in Tbilisi in June 2025. It has been lightly edited and condensed for clarity.
Isobel: Tell me about your relationship with AI right now.
Rusudan: Well, I’d like to say I’m an AI virgin. But maybe that’s not fully honest. I had one contact with ChatGPT. I didn’t ask it to write my Sunday sermon. I just asked it to draw my portrait. How narcissistic of me. I said, “Make a portrait of Bishop Rusudan Gotsiridze.” I waited and waited. The portrait looked nothing like me. It looked like my mom, who passed away ten years ago. And it looked like her when she was going through chemo, with her puffy face. It was really creepy. So I will think twice before asking ChatGPT anything again. I know it’s supposed to be magical... but that wasn’t the best first date.
AI-generated image via ChatGPT / OpenAI.
Isobel: What went through your mind when you saw this picture of your mother?
Rusudan: I thought, “Oh my goodness, it’s really a devil’s machine.” How could it go so deep? Find my facial features and connect them with someone who didn’t look like me? I take more after my paternal side. The only thing I could recognize was the priestly collar and the cross. Okay. Bishop. Got it. But yes, it was really very strange.
Isobel: I find it so interesting that you talk about summoning the dead through Artificial Intelligence. That’s something happening in San Francisco as well. When I was there last summer, we heard about this movement that meets every Sunday. Instead of church, they hold what they call an “AI séance,” where they use AI to call up the spirit world. To call up the dead. They believe the generative art that AI creates is a kind of expression of the spirit world, an expression of a greater force.
They wouldn’t let us attend. We begged, but it was a closed cult. Still, a bunch of artists had the exact same experience you had: they called up these images and felt like they were summoning them, not from technology, but from another realm.
Rusudan: When you’re a religious person dealing with new technologies, it’s uncomfortable. Religion — Christianity, Protestantism, and many others — has earned a very cautious reputation throughout history because we’ve always feared progress.
Remember when we thought printing books was the devil’s work? Later, we embraced it. We feared vaccinations. We feared computers, the internet. And now, again, we fear AI.
It reminds me of the old fable about the young shepherd who loved to prank his friends by shouting “Wolves! Wolves!” until one day, the wolves really came. He shouted, but no one believed him anymore.
We’ve been shouting “wolves” for centuries. And now, I’m this close to shouting it again, but I’m not sure.
Isobel: You said you wondered if this was the devil’s work when you saw that picture of your mother. It’s quite interesting. In Silicon Valley, people talk a lot about AI bringing about the rapture, apocalypse, hell.
They talk about the real possibility that AI is going to kill us all, what the endgame or extinction risk of building superintelligent models will be. Some people working in AI are predicting we’ll all be dead by 2030.
On the other side, people say, “We’re building utopia. We’re building heaven on Earth. A world where no one has to work or suffer. We’ll spread into the stars. We’ll be freed from death. We’ll become immortal.”
I’m not a religious person, but what struck me is the religiosity of these promises. And I wanted to ask you — are we hacking our way back into the Garden of Eden? Should we just follow the light? Is this the serpent talking to us?
Rusudan: I was listening to a Google scientist. He said that in the near future, we’re not heading to utopia but dystopia. It’s going to be hell on Earth. All the world’s wealth will be concentrated in a small circle, and poverty will grow. Terrible things will happen, before we reach utopia.
Listening to him, it really sounded like the Book of Revelation. First the Antichrist comes, and then Christ.
Because of my Protestant upbringing, I’ve heard so many lectures about the exact timeline of the Second Coming. Some people even name the day, hour, place. And when those times pass, they’re frustrated. But they carry on calculating.
It’s hard for me to speak about dystopia, utopia, or the apocalyptic timeline, because I know nothing is going to be exactly as predicted.
The only thing I’m afraid of in this Artificial Intelligence era is my 2-year-old niece. She’s brilliant. You can tell by her eyes. She doesn’t speak our language yet. But phonetically, you can hear Georgian, English, Russian, even Chinese words from the reels she watches non-stop.
That’s what I’m afraid of: us constantly watching our devices and losing human connection. We’re going to have a deeply depressed young generation soon.
I used to identify as a social person. I loved being around people. That’s why I became a priest. But now, I find it terribly difficult to pull myself out of my house to be among people. And it’s not just a technology problem — it’s a human laziness problem.
When we find someone or something to take over our duties, we gladly hand them over. That’s how we’re using this new technology. Yes, I’m in sermon mode now — it’s a Sunday, after all.
I want to tell you an interesting story from my previous life. I used to be a gender expert, training people about gender equality. One example I found fascinating: in a Middle Eastern village without running water, women would carry vessels to the well every morning and evening. It was their duty.
Western gender experts saw this and decided to help. They installed a water supply. Every woman got running water in her kitchen: happy ending. But very soon, the pipeline was intentionally broken by the women. Why? Because that water-fetching routine was the only excuse they had to leave their homes and see their friends. With running water, they became captives to their household duties.
One day, we may also not understand why we’ve become captives to our own devices. We’ll enjoy staying home and not seeing our friends and relatives. I don’t think we’ll break that pipeline and go out again to enjoy real life.
Isobel: It feels like it’s becoming more and more difficult to break that pipeline. It’s not really an option anymore to live without the water, without technology.
Sometimes I talk with people in a movement called the New Luddites. They also call themselves the Dumbphone Revolution. They want to create a five-to-ten percent faction of society which doesn’t have a smartphone, and they say that will help us all, because it will mean the world will still have to cater to people who don’t participate in big tech, who don’t have it in their lives. But is that the answer for all of us? To just smash the pipeline to restore human connection? Or can we have both?
Rusudan: I was a new mom in the nineties in Georgia. I had two children at a time when we didn’t have running water. I had to wash my kids’ clothes in the yard in cold water, summer and winter. I remember when we bought our first washing machine. My husband and I sat in front of it for half an hour, watching it go round and round. It was paradise for me for a while.
Now this washing machine is there and I don't enjoy it anymore. It's just a regular thing in my life. And when I had to wash my son’s and daughter-in-law’s wedding outfits, I didn’t trust the machine. I washed those clothes by hand. There are times when it’s important to do things by hand.
Of course, I don’t want to go back to a time without the internet when we were washing clothes in the yard, but there are things that are important to do without technology.
I enjoy painting, and I paint quite a lot with watercolors. So far, I can tell which paintings are AI and which are real. Every time I look at an AI-made watercolor, I can tell it’s not a human painting. It is a technological painting. And it’s beautiful. I know I can never compete with this technology.
But that feeling, when you put your brush in the water — sometimes I accidentally put it in my coffee cup — and when you put that brush on the paper and the pigment spreads, that feeling can never be replaced by any technology.
Isobel: As a writer, I'm now pretty good, I think, at knowing if something is AI-written or not. I'm sure in the future it will get harder to tell, but right now, there are little clues. There’s this horrible construction that AI loves: something is not just X, it’s Y. For example: “Rusudan is not just a bishop, she’s an oracle for the LGBTQ community in Georgia.” Even if you tell it to stop using that construction, it can’t. Same for the endless em-dashes: I can’t get ChatGPT to stop using them no matter how many times or how adamantly I prompt it. It's just bad writing.
It’s missing that fingerprint of imperfection that a human leaves: whether it’s an unusual sentence construction or an interesting word choice, I’ve started to really appreciate those details in real writing. I've also started to really love typos. My whole life as a journalist I was horrified by them. But now when I see a typo, I feel so pleased. It means a human wrote it. It’s something to be celebrated. It’s the same with the idea that you dip your paintbrush in the coffee pot and there’s a bit of coffee in the painting. Those are the things that make the work we make alive.
There’s a beauty in those imperfections, and that’s something AI has no understanding of. Maybe it’s because the people building these systems want to optimize everything. They are in pursuit of total perfection. But I think that the pursuit of imperfection is such a beautiful thing and something that we can strive for.
Rusudan: Another thing I hope for with this development of AI is that it’ll change the formula of our existence. Right now, we’re constantly competing with each other. The educational system is that way. Business is that way. Everything is that way. My hope is that we can never be as smart as AI. Maybe one day, our smartness, our intelligence, will be defined not by how many books we have read, but by how much we enjoy reading books, enjoy finding new things in the universe, and how well we live life and are happy with what we do. I think there is potential in the idea that we will never be able to compete with AI, so why don’t we enjoy the book from cover to cover, or the painting with the coffee pigment or the paint? That’s what I see in the future, and I’m a very optimistic person. I suppose here you’re supposed to say “Hallelujah!”
Isobel: In our podcast, CAPTURED, we talked with engineers and founders in Silicon Valley whose dream for the future is to install all human knowledge in our brains, so we never have to learn anything again. Everyone will speak every language! We can rebuild the Tower of Babel! They talk about the future as a paradise. But my thought was, what about finding out things? What about curiosity? Doesn’t that belong in paradise? Certainly, as a journalist, for me, some people are in it for the impact and the outcome, but I’m in it for finding out, finding the story—that process of discovery.
Rusudan: It’s interesting — this idea of paradise as a place where we know everything. One of my students once asked me the same thing you just did. “What about the joy of finding new things? Where is that, in paradise?” Because in the Bible, Paul says that right now, we live in a dimension where we know very little, but there will be a time when we know everything.
In the Christian narrative, paradise is a strange, boring place where people dress in funny white tunics and play the harp. And I understand that idea back then was probably a dream for those who had to work hard for everything in their everyday life — they had to chop wood to keep their family warm, hunt to get food for the kids, and of course for them, paradise was the place where they just could just lie around and do nothing.
But I don’t think paradise will be a boring place. I think it will be a place where we enjoy working.
Isobel: Do you think AI will ever replace priests?
Rusudan: I was told that one day there will be AI priests preaching sermons better than I do. People are already asking ChatGPT questions they’re reluctant to ask a priest or a psychologist. Because it’s judgment-free and their secrets are safe…ish. I don’t pretend I have all the answers because I don’t. I only have this human connection. I know there will be questions I cannot answer, and people will go and ask ChatGPT. But I know that human connection — the touch of a hand, eye-contact — can never be replaced by AI. That’s my hope. So we don’t need to break those pipelines. We can enjoy the technology, and the human connection too.
This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us? You can listen to the Captured audio series on Audible now.
Skyeton, the Ukrainian developer of the well-known long-range Raybird unmanned aerial vehicles, which have logged over 350,000 hours of combat flights, has become a target of technological espionage by unscrupulous European companies, The Telegraph reports.
The Raybird can carry different types of payloads, such as reconnaissance cameras, radio-frequency locators, and other equipment, and can fly up to 2,500 km on missions lasting up to 28 hours.
The company’s CEO, Roman Kniazhenko, revealed the scheme to the outlet. According to him, Western manufacturers visit “as guests” with supposed proposals for cooperation, but in reality they are trying to steal production secrets.
“Then they do beautiful pitch books, beautiful presentations about how they’re operating in Ukraine. But actually they’ve done just a couple of flights in Lviv [the western city more than 1,000km from the front line],” he says.
Sometimes, Kniazhenko continues, he sees in their presentations, “literally my own words, without any change.”
He also emphasizes that while Ukrainian drones withstand real combat conditions, taking off even from puddles, European governments are spending billions on products that merely simulate combat effectiveness.
“The big problem, after that, is that billions of dollars go to the companies that still don’t have any idea what they’re doing,” says Kniazhenko.
Meanwhile, the Skyeton team, currently 500 people strong, works 24/7 developing drones for the toughest frontline conditions.
One example of its effectiveness was an operation in the Black Sea: Ukrainian special forces went missing at night, and a Raybird, with its lights on, was able to locate them in the dark waters.
“From one side, everything looks perfect for us. But it was like hell, a night of hell. When you are destroying something you feel good for a couple seconds. But when you know that you saved someone. Like, it’s a totally different feeling,” explains Kniazhenko.
He also urges the West to fund the production of Ukrainian drones on its own territory instead of building startups from scratch. Every country has technologies it is good at, he stresses, and for Ukraine, that is clearly drones.
If not for a Spanish company, Russia could have run out of new artillery barrels. Barcelona-based Forward Technical Trade SL supplied Russia with at least one radial forging machine originally built by the Austrian firm Gesellschaft für Fertigungstechnik und Maschinenbau (GFM), the Insider reports.
Since early 2024, Ukraine has destroyed over 19,000 Russian artillery systems, contributing to a total loss of nearly 30,000 systems over the entire war. The attacks have reduced Russia’s artillery superiority from a 10:1 ratio to about 2:1. Meanwhile, Kyiv and Moscow are turning to drones for faster, more precise strikes, reshaping how the war is fought.
The equipment, valued at $1.3 million, weighs 110 tons and was manufactured in 1983. The transfer reportedly occurred via a Hong Kong-based intermediary, Scorpion’s Holding Group Limited.
GFM denies any direct business ties with either the Spanish supplier or the Hong Kong firm. However, the UK’s Royal United Services Institute (RUSI) confirms that GFM machines are crucial for barrel manufacturing in Russia — and that the Russian defense industry remains entirely dependent on them.
According to US-based expert Pavel Luzin, Russia cannot produce these forging machines domestically. Facing severe shortages, Russian forces have already begun “cannibalizing” old Soviet stockpiles, endangering frontline performance.
Earlier, Ukraine’s Defense Intelligence reported that the West still had not sanctioned 70 Russian companies behind the production of missiles that struck Kyiv’s largest children’s cancer hospital.
The Okhmatdyt strike occurred the same day Indian Prime Minister Narendra Modi met with Russian President Vladimir Putin in Moscow on 8-9 July 2024, calling for a peaceful resolution to the war. While the two leaders spoke of peace, Russian missiles rained down across Ukraine, killing 47 people, including 33 in Kyiv.
As Rome prepared to select a new pope, few beyond Vatican insiders were focused on what the transition would mean for the Catholic Church's stance on artificial intelligence.
Yet Pope Francis established the Church as an erudite, insightful voice on AI ethics. “Does it serve to satisfy the needs of humanity to improve the well-being and integral development of people?” he asked G7 leaders last year. “Or does it, rather, serve to enrich and increase the already high power of the few technological giants despite the dangers to humanity?”
Francis – and the Vatican at large – had called for meaningful regulation in a world where few institutions dared challenge the tech giants.
During the last months of Francis’s papacy, Silicon Valley, aided by a pliant U.S. government, ramped up its drive to rapidly consolidate power.
OpenAI is expanding globally, tech CEOs are becoming a key component of presidential diplomatic missions, and federal U.S. lawmakers are attempting to effectively deregulate AI for the next decade.
For those tracking the collision between technological and religious power, one question looms large: Will the Vatican continue to be one of the few global institutions willing to question Silicon Valley's vision of our collective future?
Memories of watching the chimney on television during Pope Benedict’s election had captured my imagination as a child brought up in a secular, Jewish-inflected household. I longed to see that white smoke in person. The rumors in Rome last Thursday morning were that the matter wouldn’t be settled that day. So I was furious when I was stirred from my desk in the afternoon by the sound of pealing bells all over Rome. “Habemus papam!” I heard an old nonna call down to her husband in the courtyard.
As the bells tolled, I sprinted out onto the street and joined people streaming from all over the city in the direction of St. Peter’s. In recent years, the time between white smoke and the new pope’s arrival on the balcony was as little as forty-five minutes. People poured over bridges and up the Via della Conciliazione towards the famous square. Among the rabble I spotted a couple of friars darting through the crowd, making speedier progress than anyone, their white cassocks flapping in the wind. Together, the friars and I made it through the security checkpoints and out into the square just as a great roar went up.
The initial reaction to the announcement that Robert Francis Prevost would be the next pope, with the name Leo XIV, was subdued. Most people around me hadn’t heard of him — he wasn’t one of the favored cardinals, he wasn’t Italian, and we couldn’t even Google him, because there were so many people gathered that no one’s phones were working. A young boy managed to get on the phone to his mamma, and she related the information about Prevost to us via her son. Americano, she said. From Chicago.
A nun from an order in Tennessee piped up that she had met Prevost once. She told us that he was mild-mannered and kind, that he had lived in Peru, and that he was very internationally-minded. “The point is, it’s a powerful American voice in the world, who isn’t Trump,” one American couple exclaimed to our little corner of the crowd.
It only took a few hours before Trump supporters, led by former altar boy Steve Bannon, realized this American pope wouldn’t be a MAGA pope. Leo XIV had posted on X in February, criticizing JD Vance, the Trump administration’s most prominent Catholic.
"I mean it's kind of jaw-dropping," Bannon told the BBC. "It is shocking to me that a guy could be selected to be the Pope that had had the Twitter feed and the statements he's had against American senior politicians."
Laura Loomer, a prominent far-right pro-Trump activist, aired her own misgivings on X: “He is anti-Trump, anti-MAGA, pro-open borders, and a total Marxist like Pope Francis.”
As I walked home with everybody else that night – with the friars, the nuns, the pilgrims, the Romans, the tourists caught up in the action – I found myself thinking about our "Captured" podcast series, which I've spent the past year working on. In our investigation of AI's growing influence, we documented how tech leaders have created something akin to a new religion, with its own prophets, disciples, and promised salvation.
Walking through Rome's ancient streets, the dichotomy struck me: here was the oldest continuous institution on earth selecting its leader, while Silicon Valley was rapidly establishing what amounts to a competing belief system.
Would this new pope, taking the name of Leo — deliberately evoking Leo XIII who steered the church through the disruptions of the Industrial Revolution — stand against this present-day technological transformation that threatens to reshape what it means to be human?
I didn't have to wait long to find out. In his address to the College of Cardinals on Saturday, Pope Leo XIV said: "In our own day, the Church offers to everyone the treasury of her social teaching, in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice and labor."
Hours before the new pope was elected, I spoke with Molly Kinder, a fellow at the Brookings Institution and an expert in AI and labor policy. Her research on the Vatican, labor, and AI was published by Brookings following Pope Francis’s death.
She described how the Catholic Church has a deep-held belief in the dignity of work — and how AI evangelists’ promise to create a post-work society with artificial intelligence is at odds with that.
“Pope John Paul II wrote something that I found really fascinating. He said, ‘work makes us more human.’ And Silicon Valley is basically racing to create a technology that will replace humans at work,” Kinder, who was raised Catholic, told me. “What they're endeavoring to do is disrupt some of the very core tenets of how we've interpreted God's mission for what makes us human.”
In early April, I found myself in the breathtaking Chiesa di San Francesco al Prato in Perugia, Italy, talking about men who are on a mission to achieve immortality.
As sunlight filtered through glass onto worn stone walls, Cambridge Analytica whistleblower Christopher Wylie recounted a dinner with a Silicon Valley mogul who believes drinking his son's blood will help him live forever.
"We've got it wrong," Bryan Johnson told Chris. "God didn't create us. We're going to create God and then we're going to merge with him."
This wasn't hyperbole. It's the worldview taking root among tech elites who have the power, wealth, and unbounded ambition to shape our collective future.
Working on “Captured: The Secret Behind Silicon Valley's AI Takeover” podcast, which we presented in that church in Perugia, we realized we weren't just investigating technology – we were documenting a fundamentalist movement with all the trappings of prophecy, salvation, and eternal life. And yet, talking about it from the stage to my colleagues in Perugia, I felt, for a second at least, like a conspiracy theorist. Discussing blood-drinking tech moguls and godlike ambitions at a journalism conference felt jarring, even inappropriate. I felt, instinctively, that not everyone was willing to hear what our reporting had uncovered. The truth is, these ideas aren’t fringe at all – they are the root of the new power structures shaping our reality.
“Stop being so polite,” Chris Wylie urged the audience, challenging journalists to confront the cultish drive for transcendence, the quasi-religious fervor animating tech’s most powerful figures.
We've ignored this story, in part at least, because the journalism industry chose to be “friends” with Big Tech: accepting platform funding, entering into “partnerships,” and treating tech companies as potential saviors instead of recognizing the fundamental incompatibility between their business models and the requirements of a healthy information ecosystem, one as essential to journalism as air is to humanity.
In effect, journalism has been complicit in its own capture. That complicity has blunted our ability to fulfil journalism's most basic societal function: holding power to account.
As tech billionaires have emerged as some of the most powerful actors on the global stage, our industry—so eager to believe in their promises—has struggled to confront them with the same rigor and independence we once reserved for governments, oligarchs, or other corporate powers.
This tension surfaced most clearly during a panel at the festival when I challenged Alan Rusbridger, former editor-in-chief of “The Guardian” and current Meta Oversight Board member, about resigning in light of Meta's abandonment of fact-checking. His response echoed our previous exchanges: board membership, he maintains, allows him to influence individual cases despite the troubling broader direction.
This defense exposes the fundamental trap of institutional capture. Meta has systematically recruited respected journalists, human rights defenders, and academics to well-paid positions on its Oversight Board, lending it a veneer of credibility. When board members like Rusbridger justify their participation through "minor victories," they ignore how their presence legitimizes a business model fundamentally incompatible with the public interest.
Imagine a climate activist serving on an Exxon-established climate change oversight board, tasked with reviewing a handful of complaints while Exxon continues to pour billions into fossil fuel expansion and climate denial.
Meta's Oversight Board provides cover for a platform whose design and priorities fundamentally undermine our shared reality. The "public square," a space for listening and conversation that the internet once promised to nurture but is now helping to destroy, isn't merely a metaphor: it's the essential infrastructure of justice and open society.
Trump's renewed attacks on the press, the abrupt withdrawal of U.S. funding for independent media around the world, platform complicity in spreading disinformation, and the normalization of hostility toward journalists have stripped away any illusions about where we stand. What once felt like slow erosion now feels like a landslide, accelerated by broligarchs who claim to champion free speech while their algorithms amplify authoritarians.
The Luxury of Neutrality
If there is one upside to the dire state of the world, it’s that the fog has lifted. In Perugia, the new sense of clarity was palpable. Unlike last year, when so many drifted into resignation, the mood this time was one of resolve. The stakes were higher, the threats more visible, and everywhere I looked, people were not just lamenting what had been lost – they were plotting and preparing to defend what matters most.
One unintended casualty of this new clarity is the old concept of journalistic objectivity. For decades, objectivity was held up as the gold standard of our profession – a shield against accusations of bias. But as attacks on the media intensify and the very act of journalism becomes increasingly criminalized and demonized around the world, it’s clear that objectivity was always a luxury, available only to a privileged few. For many who have long worked under threat, neutrality was never an option. Now, as the ground shifts beneath all of us, their experience and strategies for survival have become essential lessons for the entire field.
That was the spirit animating our “Am I Black Enough?” panel in Perugia, which brought together three extraordinary Black American media leaders, with me as moderator.
“I come out of the Black media tradition whose origins were in activism,” said Sara Lomax, co-founder of URL Media and head of WURD, Philadelphia’s oldest Black talk radio station. She reminded us that the first Black newspaper in America was founded in 1827, decades before emancipation, to advocate for the humanity of people who were still legally considered property.
Karen McMullen, festival director of Urbanworld, spoke to the exhaustion and perseverance that define the Black American experience: “We would like to think that we could rest on the successes that our parents and ancestors have made towards equality, but we can’t. So we’re exhausted but we will prevail.”
And as veteran journalist and head of the Maynard Institute Martin Reynolds put it, “Black struggle is a struggle to help all. What’s good for us tends to be good for all. We want fair housing, we want education, we want to be treated with respect.”
Near the end of our session, an audience member challenged my role as a white moderator on a panel about Black experiences. This moment crystallized how the boundaries we draw around our identities can both protect and divide us. It also highlighted exactly why we had organized the panel in the first place: to remind us that the tools of survival and resistance forged by those long excluded from "objectivity" are now essential for everyone facing the erosion of old certainties.
Sara Lomax (WURD/URL Media), Karen McMullen (Urbanworld) & Martin Reynolds (Maynard Institute) discuss how the Black press in America was born from activism, fighting for the humanity of people who were still legally considered property - a tradition of purpose-driven journalism that offers critical lessons today. Ascanio Pepe/Creative Commons (CC BY ND 4.0)
The Power of Protected Spaces
If there’s one lesson from those who have always lived on the frontlines and who never had the luxury of neutrality – it’s that survival depends on carving out spaces where your story, your truth, and your community can endure, even when the world outside is hostile.
That idea crystallized for me one night in Perugia when, during a dinner with colleagues battered by layoffs, lawsuits, and threats far graver than those I face, someone suggested we play a game: “What gives you hope?” When it was my turn, I found myself talking about finding hope in spaces where freedom lives on. Spaces that can always be found, no matter how dire the circumstances.
I mentioned my parents, dissidents in the Soviet Union, for whom the kitchen was a sanctuary for forbidden conversations. And Georgia, my homeland – a place that has preserved its identity through centuries of invasion because its people fought, time and again, for the right to write their own story. Even now, as protesters fill the streets to defend the same values my parents once whispered about in the kitchen, their resilience is a reminder that survival depends on protecting the spaces where you can say who you are.
But there’s a catch: to protect the spaces where you can say who you are, you first have to know what you stand for – and who stands with you. Is it the tech bros who dream of living forever, conquering Mars, and who rush to turn their backs on diversity and equity at the first opportunity? Or is it those who have stood by the values of human dignity and justice, who have fought for the right to be heard and to belong, even when the world tried to silence them?
As we went around the table, each of us sharing what gave us hope, one of our dinner companions, a Turkish lawyer, offered a metaphor in response to my point about the need to protect spaces. “In climate science,” she said, “they talk about protected areas – patches of land set aside so that life can survive when the ecosystem around it collapses. They don’t stop the storms, but they give something vital a chance to endure, adapt, and, when the time is right, regenerate.”
That's what we need now: protected areas for uncomfortable truths and complexity. Not just newsrooms, but dinner tables, group chats, classrooms, gatherings that foster unlikely alliances, anywhere we can still speak honestly, listen deeply, and dare to imagine.
More storms will come. More authoritarians will rise. Populist strongmen and broligarchs will keep fragmenting our shared reality.
But if history has taught us anything – from Soviet kitchens to Black newspapers founded in the shadow of slavery – it’s that carefully guarded spaces where stories and collective memory are kept alive have always been the seedbeds of change.
When we nurture these sanctuaries of complex truth against all odds, we aren't just surviving. We're quietly cultivating the future we wish to see.
And in times like these, that's not just hope – it's a blueprint for renewal.
Google has a plan to make all reCAPTCHA users migrate to reCAPTCHA Enterprise on Google Cloud by the end of 2025. This means a cost increase for many users. I’m writing this post to provide you with a heads-up about this move. (...) we plan to introduce the integration module for Cloudflare Turnstile, an alternative CAPTCHA solution, to Contact Form 7 6.1. Cloudflare Turnstile is available for free (at least for now), and we have found that it has the potential to work more effectively than Google reCAPTCHA.
— Permalink
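For context on what such a switch involves: both reCAPTCHA and Turnstile issue a token in the visitor's browser, and the form's server must verify that token before accepting a submission. The sketch below shows the rough shape of that server-side check in TypeScript (Node 18+, built-in fetch). It is not Contact Form 7's actual integration module, which handles this internally; the endpoint and field names follow Cloudflare's published siteverify API, while the function name and the TURNSTILE_SECRET environment variable are illustrative assumptions.

// Minimal sketch of server-side Cloudflare Turnstile verification (Node 18+).
// Assumes the page embedded the Turnstile widget, which posts a token back
// to the server in the form field `cf-turnstile-response`.

interface TurnstileResult {
  success: boolean;
  "error-codes"?: string[];  // present when verification fails
  hostname?: string;         // site on which the challenge was solved
}

async function verifyTurnstileToken(token: string, remoteIp?: string): Promise<boolean> {
  const body = new URLSearchParams({
    secret: process.env.TURNSTILE_SECRET ?? "", // hypothetical env var holding your secret key
    response: token,                            // the token posted by the widget
  });
  if (remoteIp) body.set("remoteip", remoteIp); // optional: the visitor's IP

  const res = await fetch("https://challenges.cloudflare.com/turnstile/v0/siteverify", {
    method: "POST",
    body,
  });
  const data = (await res.json()) as TurnstileResult;
  if (!data.success) {
    console.warn("Turnstile rejected token:", data["error-codes"]);
  }
  return data.success;
}

The shape of this check closely mirrors reCAPTCHA's own siteverify flow, which is one reason a drop-in Turnstile module for an existing form plugin is plausible with little user-visible change.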
I grew up in rural Idaho in the late 80s and early 90s. My childhood was idyllic. I’m the oldest of five children. My father was an engineer-turned-physician, and my mother was a musician — she played the violin and piano. We lived in an amazing community, with great schools, dear friends and neighbors. There was lots of skiing, biking, swimming, tennis, and time spent outdoors.
If something was very difficult, I was taught that you just had to reframe it as a small or insignificant moment compared to the vast eternities and infinities around us. It was a Mormon community, and we were a Mormon family, part of generations of Mormons. I can trace my ancestry back to the early Mormon settlers. Our family were very observant: going to church every Sunday, and deeply faithful to the beliefs and tenets of the Mormon Church.
There's a belief in Mormonism: "As man is, God once was. As God is, man may become." And since God is perfect, the belief is that we too can one day become perfect.
We believed in perfection. And we were striving to be perfect—realizing that while we couldn't be perfect in this life, we should always attempt to be. We worked for excellence in everything we did.
It was an inspiring idea to me, but growing up in a world where I felt perfection was always the expectation was also tough.
In a way, I felt like there were two of me. There was this perfect person that I had to play and that everyone loved. And then there was this other part of me that was very disappointed by who I was—frustrated, knowing I wasn't living up to those same standards. I really felt like two people.
This perfectionism found its way into many of my pursuits. I loved to play the cello. Yo-Yo Ma was my idol. I played quite well and had a fabulous teacher. At 14, I became the principal cellist for our all-state orchestra, and later played in the World Youth Symphony at Interlochen Arts Camp and in a National Honors Orchestra. I was part of a group of kids who were all playing at the highest level. And I was driven. I wanted to be one of the very, very best.
I went on to study at Northwestern, near Chicago, and played there too. I was the youngest cellist in the studio of Hans Jensen, and was surrounded by these incredible musicians. We played eight hours a day, time filled with practice, orchestra, chamber music, studio, and lessons. I spent hours and hours working through the tiniest movements of the hand, individual shifts, weight, movement, repetition, memory, trying to find perfect intonation, rhythm, and expression. I loved that I could control things, practice, and improve. I could find moments of perfection.
I remember one night being in the practice rooms, walking down the hall, and hearing some of the most beautiful playing I'd ever heard. I peeked in and didn’t recognize the cellist. They were a former student now warming up for an audition with the Chicago Symphony.
Later on, I heard they didn’t get it. I remember thinking, "Oh my goodness, if you can play that well and still not make it..." It kind of shattered my worldview—it really hit me that I would never be the very best. There was so much talent, and I just wasn't quite there.
I decided to step away from the cello as a profession. I’d play for fun, but not make it my career. I’d explore other interests and passions.
There's a belief in Mormonism: "As man is, God once was. As God is, man may become."
As I moved through my twenties, my relationship with Mormonism started to become strained. When you’re suddenly 24, 25, 26 and not married, that's tough. Brigham Young [the second and longest-serving prophet of the Mormon Church] said that if you're not married by 30, you're a menace to society. It just became more and more awkward to be involved. I felt like people were wondering, “What’s wrong with him?”
Eventually, I left the church. And I suddenly felt like a complete person — it was a really profound shift. There weren’t two of me anymore. I didn’t have to put on a front. Now that I didn’t have to worry about being that version of perfect, I could just be me.
But the desire for perfection was impossible for me to kick entirely. I was still excited about striving, and I think a lot of this energy and focus then poured into my work and career as a designer and researcher. I worked at places like the Mayo Clinic, considered by many to be the world’s best hospital. I studied in London at the Royal College of Art, where I received my master’s on the prestigious Design Interactions course exploring emerging technology, futures, and speculative design. I found I loved working with the best, and being around others who were striving for perfection in similar ways. It was thrilling.
One of the big questions I started to explore during my master's studies in design, and I think in part because I felt this void of meaning after leaving Mormonism, was “what is important to strive for in life?” What should we be perfecting? What is the goal of everything? Or in design terms, “What’s the design intent of everything?”
I spent a huge amount of time with this question, and in the end I came to the conclusion that it’s happiness. Happiness is the goal. We should strive in life for happiness. Happiness is the design intent of everything. It is the idea that no matter what we do, no matter what activity we undertake, we do it because we believe doing it or achieving the thing will make us better off or happier. This fit really well with the beliefs I grew up with, but now I had a new, non-religious way to explore it.
The question then became: What is happiness? I came to the conclusion that happiness is chemical—an evolved sensation that indicates when our needs in terms of survival have been met. You're happy when you have a wonderful meal because your body has evolved to identify good food as improving your chances of survival. The same is true for sleep, exercise, sex, family, friendships, meaning, purpose–everything can be seen through this evolutionary happiness lens.
So if happiness evolved as the signal for survival, then I wanted to optimize my survival to optimize that feeling. What would it look like if I optimized the design of my life for happiness? What could I change to feel the most amount of happiness for the longest amount of time? What would life look like if I lived perfectly with this goal in mind?
I started measuring my happiness on a daily basis, and then making changes to my life to see how I might improve it. I took my evolutionary basic needs for survival and organized them by how quickly their absence would kill me, as a way to prioritize interventions.
Breathing was first on the list — we can’t last long without it. So I tried to optimize my breathing. I didn’t really know how to breathe or how powerful breathing is—how it changes the way we feel, bringing calm and peace, or energy and alertness. So I practiced breathing.
The optimizations continued: diet, sleep, exercise, material possessions, friends, family, purpose, along with a shedding of any behavior or activity that I couldn’t see meaningfully improving my happiness. For example, I looked at clothing and fashion, and couldn’t see any real happiness impact. So I got rid of almost all of my clothing, and have worn the same white t-shirts and grey or blue jeans for the past 15 years.
I got involved in the Quantified Self (QS) movement and started tracking my heart rate, blood pressure, diet, sleep, exercise, cognitive speed, happiness, creativity, and feelings of purpose. I liked the data. I’d go to QS meet-ups and conferences with others doing self experiments to optimize different aspects of their lives, from athletic performance, to sleep, to disease symptoms.
I also started to think about longevity. If I was optimizing for happiness through these evolutionary basics, how long could one live if these needs were perfectly satisfied? I started to put on my websites – “copyright 2103”. That’s when I’ll be 125. That felt like a nice goal, and something that I imagined could be completely possible — especially if every aspect of my life was optimized, along with future advancements in science and medicine.
In 2022, some 12 years later, I came across Bryan Johnson, a successful entrepreneur, also ex-Mormon, who was optimizing his health and longevity through data. It was familiar. He had come to this kind of life optimization in a slightly different way and for different reasons, but I was so excited by what he was doing. I thought, "This is how I’d live if I had unlimited funds."
He said he was optimizing every organ and body system: What does our heart need? What does our brain need? What does our liver need? He was optimizing the biomarkers for each one. He said he believed in data, honesty and transparency, and following where the data led. He was open to challenging societal norms. He said he had a team of doctors, had reviewed thousands of studies to develop his protocols. He said every calorie had to fight for its life to be in his body. He suggested everything should be third-party tested. He also suggested that in our lifetime advances in medicine would allow people to live radically longer lives, or even to not die.
These ideas all made sense to me. There was also a kind of ideal of perfect and achieving perfection that resonated with me. Early on, Bryan shared his protocols and data online. And a lot of people tried his recipes and workouts, experimenting for themselves. I did too. It also started me thinking again more broadly about how to live better, now with my wife and young family. For me this was personal, but also exciting to think about what a society might look like when we strived at scale for perfection in this way. Bryan seemed to be someone with the means and platform to push this conversation.
I think all of my experience to this point was the setup for, ultimately, my deep disappointment in Bryan Johnson and my frustrating experience as a participant in his BP5000 study.
In early 2024 there was a callout for people to participate in a study to look at how Bryan’s protocols might improve their health and wellbeing. He said he wanted to make it easier to follow his approach, and he started to put together a product line of the same supplements that he used. It was called Blueprint – and the first 5000 people to test it out would be called the Blueprint 5000, or BP5000. We would measure our biomarkers and follow his supplement regime for three months and then measure again to see its effects at a population level. I thought it would be a fun experiment, participating in real citizen science moving from n=1 to n=many. We had to apply, and there was a lot of excitement among those of us who were selected. They were a mix of people who had done a lot of self-quantification, nutritionists, athletes, and others looking to take first steps into better personal health. We each had to pay about $2,000 to participate, covering Blueprint supplements and the blood tests, and we were promised that all the data would be shared and open-sourced at the end of the study.
The study began very quickly, and there were red flags almost immediately around the administration of the study, with product delivery problems, defective product packaging, blood test problems, and confusion among participants about the protocols. There wasn’t even a way to see if participants died during the study, which felt weird for work focused on longevity. But we all kind of rolled with it. We wanted to make it work.
We took baseline measurements, weighed ourselves, measured body composition, uploaded Whoop or Apple Watch data, did blood tests covering hundreds of biomarkers, and completed a number of self-reported surveys on things like sexual health and mental health. I loved this type of self-measurement.
Participants connected over Discord, comparing notes, and posting about our progress.
Right off, some effects were incredible. I had a huge amount of energy. I was bounding up the stairs, doing extra pull-ups without feeling tired. My joints felt smooth. I noticed I was feeling bulkier — I had more muscle definition as my body fat percentage started to drop.
There were also some strange effects. For instance, I noticed in a cold shower, I could feel the cold, but I didn’t feel any urgency to get out. Same with the sauna. I had weird sensations of deep focus and vibrant, vivid vision. I started having questions—was this better? Had I deadened sensitivity to pain? What exactly was happening here?
Then things went really wrong. My ears started ringing — high-pitched and constant. I had developed tinnitus. And my sleep got wrecked. I started waking up at two, three, four AM, completely wired, unable to turn off my mind. It was so bad I had to stop all of the Blueprint supplements after only a few weeks.
On the Discord channel where we were sharing our results, I saw Bryan talking positively about people having great experiences with the stack. But when I or anyone else mentioned adverse side effects, the response tended to be: “wait until the study is finished and see if there’s a statistical effect to worry about."
So positive anecdotes were fine, but when it came to negative ones, suddenly, we needed large-scale data. That really put me off. I thought the whole point was to test efficacy and safety in a data-driven way. And the side effects were not ignorable.
Many of us were trying to help each other figure out which interventions in the stack were driving different side effects, but we were never given the “1,000+ scientific studies” that Blueprint was supposedly built upon, which would have included side-effect reporting. We struggled even to get a complete list of the interventions in the stack from the Blueprint team, with the number evolving from 67 to 74 over the course of the study. It was impossible to tell which ingredient in which product was doing what to people.
We were told to stop discussing side effects in the Discord and to email Support with issues instead. I was even kicked off the Discord at one point for “fear mongering” because I was encouraging people to share the side effects they were experiencing.
The Blueprint team were also making changes to the products mid-study, changing protein sources and allulose levels, leaving people with months’ worth of expensive, essentially defective products, and surely impacting the study results.
When Bryan then announced they were launching the BP10000, allowing more people to buy his products, even before the BP5000 study had finished, and without addressing all of the concerns about side effects, it suddenly became clear to me and many others that we had just been part of a launch and distribution plan for a new supplement line, not participants in a scientific study.
Still, to this day, a year later, Bryan has not released the full BP5000 data set to the participants as he promised to do. In fact he has ghosted participants and refuses to answer questions about the BP5000. He blocked me on X recently for bringing it up. I suspect that this is because the data is really bad; my worries line up with reporting from the New York Times, where leaked internal Blueprint data suggests many of the BP5000 participants experienced negative side effects, with some even having serious drops in testosterone or becoming pre-diabetic.
I’m still angry today about how this all went down. I’m angry that I was taken in by someone I now feel was a snake oil salesman. I’m angry that the marketing needs of Bryan’s supplement business and his need to control his image overshadowed the opportunity to generate some real science. I’m angry that Blueprint may be hurting some people. I’m angry because the way Bryan Johnson has gone about this grates on my sense of perfection.
Bryan’s call to “Don’t Die” now rings in my ears as “Don’t Lie” every time I hear it. I hope the societal mechanisms for truth will be able to help him make a course correction. I hope he will release the BP5000 data set and apologize to participants. But Bryan Johnson feels to me like an unstoppable marketing force at this point — full A-list influencer status — and sort of untouchable, with no use for those of us interested in the science and data.
This experience has also had me reflecting on and asking bigger questions of the longevity movement and myself.
We’re ignoring climate breakdown. The latest indications suggest we’re headed toward three degrees of warming. These are societal-collapse numbers, and they could arrive within the next 15 years. When there are no bees and no food, catastrophic fires and floods, your Heart Rate Variability doesn’t really matter. There’s a sort of “bunker mentality” prevalent in some of the longevity movement, and in wider tech — we can just ignore it, and we’ll magically come out on the other side, sleep scores intact.
I’ve also started to think that calls to live forever are perhaps misplaced, and that in fact we have evolved to die. Death is a good thing. A feature, not a bug. It allows for new life—we need children, young people, new minds who can understand this context and move us forward. I worry that older minds are locked into outdated patterns of thinking, mindsets trained in and for a world that no longer exists, thinking that destroyed everything in the first place, and which is now actually detrimental to progress. The life cycle—bringing in new generations with new thinking—is the mechanism our species has evolved to function within. Survival is and should be optimized for the species, not the individual.
I love thinking about the future. I love spending time there, understanding what it might look like. It is a huge part of my design practice. But as much as I love the future, the most exciting thing to me is the choices we make right now in each moment. All of that information from our future imaginings should come back to help inform current decision-making and optimize the choices we have now. But I don’t see this happening today. Our current actions as a society seem totally disconnected from any optimized, survivable future. We’re not learning from the future. We’re not acting for the future.
We must engage with all outcomes, positive and negative. We're seeing breakthroughs in many domains happening at an exponential rate, especially in AI. But, at the same time, I see job displacement, huge concentration of wealth, and political systems that don't seem capable of regulating or facilitating democratic conversations about these changes. Creators must own it all. If you build AI, take responsibility for the lost job, and create mechanisms to share wealth. If you build a company around longevity and make promises to people about openness and transparency, you have to engage with all the positive outcomes and negative side effects, no matter what they are.
I’m sometimes overwhelmed by our current state. My striving for perfection and optimizations throughout my life have maybe been a way to give me a sense of control in a world where at a macro scale I don’t actually have much power. We are in a moment now where a handful of individuals and companies will get to decide what’s next. A few governments might be able to influence those decisions. Influencers wield enormous power. But most of us will just be subject to and participants in all that happens. And then we’ll die.
But until then my ears are still ringing.
This article was put together based on interviews J. Paul Neeley did with Isobel Cockerell and Christopher Wylie, as part of their reporting for CAPTURED, our new audio series on how Silicon Valley’s AI prophets are choosing our future for us. You can listen now on Audible.
Your Early Warning System
This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?
In April last year I was in Perugia, at the annual International Journalism Festival. I was sitting in a panel session about whether AI marked the end of journalism, when a voice note popped up on my Signal.
It came from Christopher Wylie. He’s a data scientist and the whistleblower who cracked open the Cambridge Analytica scandal in 2018. I had just started working with him on a new investigation into AI. Chris was supposed to be meeting me, but he had found himself trapped in Dubai in a party full of Silicon Valley venture capitalists.
“I don’t know if you can hear me — I’m in the toilet at this event, and people here are talking about longevity, how to live forever, but also prepping for when people revolt and when society gets completely undermined,” he had whispered into his phone. “You have in another part of the world, a bunch of journalists talking about how to save democracy. And here, you've got a bunch of tech guys thinking about how to live past democracy and survive.”
A massive storm and a once-in-a-generation flood had paralyzed Dubai when Chris was on a layover on his way to Perugia. He couldn’t leave. And neither could the hundreds of tech guys who were there for a crypto summit. The freakish weather hadn’t stopped them partying, Chris told me over a frantic Zoom call.
“You're wading through knee-deep water, people are screaming everywhere, and then… What do all these bros do? They organize a party. It's like the world is collapsing outside and yet you go inside and it's billionaires and centimillionaires having a party,” he said. “Dubai right now is a microcosm of the world. The world is collapsing outside and the people are partying.”
Chris and I eventually managed to meet up. And for over a year we worked together on a podcast that asks what is really going on inside the tech world. We looked at how the rest of us — journalists, artists, nurses, businesses, even governments — are being captured by big tech’s ambitions for the future and how we can fight back.
Our reporting took us around the world from the lofty hills of Twin Peaks in San Francisco to meet the people building AI models, to the informal settlements of Kenya to meet the workers training those models.
One of these people was Mercy Chimwani, who we visited in her makeshift house with no roof on the outskirts of Nairobi. There was mud beneath our feet, and above you could see the rainclouds through a gaping hole where the unfinished stairs met the sky. When it rained, Mercy told us, water ran right through the house. It’s hard to believe, but she worked for Meta.
Mercy was a content moderator, hired by the middlemen Meta used to source employees. Her job was to watch the internet’s most horrific images and video – training the company’s system so it can automatically filter out such content before the rest of us are exposed to it.
She was paid around a dollar an hour for work that left her so traumatized that she couldn’t sleep. And when she and her colleagues tried to unionize, she was laid off. Mercy was part of the invisible, ignored workforce in the Global South that enables our frictionless life online for little reward.
Of course, we went to the big houses too — where the other type of tech worker lives. The huge palaces made of glass and steel in San Francisco, where the inhabitants believe the AI they are building will one day help them live forever, and discover everything there is to know about the universe.
In Twin Peaks, we spoke to Jeremy Nixon, the creator of AGI House San Francisco (AGI for Artificial General Intelligence). Nixon described an apparently utopian future, a place where we never have to work, where AI does everything for us, and where we can install the sum of human knowledge into our brains. “The intention is to allow every human to know everything that’s known,” he told me.
Later that day, we went to a barbecue in Cupertino and got talking to Alan Boehme, once a chief technology officer for some of the biggest companies in the world, and now an investor in AI startups. Boehme told us how important it was, from his point of view, that tech wasn’t stymied by government regulation. “We have to be worried that people are going to over-regulate it. Europe is the worst, to be honest with you,” he said. “Let's look at how we can benefit society and how this can help lead the world as opposed to trying to hold it back.”
I asked him: isn’t regulation part of the reason we have democratically elected governments, to ensure that all people are kept safe and that some aren’t left behind by the pace of change? Shouldn’t the governments we elect be the ones deciding whether we regulate AI, and not the people at this Cupertino barbecue?
“You sound like you're from Sweden,” Boehme responded. “I'm sorry, that's social democracy. That is not what we are here in the U.S. This country is based on a Constitution. We're not based on everybody being equal and holding people back. No, we're not in Sweden.”
As we reported for the podcast, we came to a gradual realization – what’s being built in Silicon Valley isn’t just artificial intelligence, it’s a way of life — even a religion. And it’s a religion we might not have any choice but to join.
In January, the Vatican released a statement in which it argued that we’re in danger of worshiping AI as God. It's an idea we'd discussed with Judy Estrin, who worked on building some of the earliest iterations of the internet. As a young researcher at Stanford in the 1970s, Estrin was building some of the very first networked connections. She is no technophobe, fearful of the future, but she is worried about the zealotry she says is taking over Silicon Valley.
“If you worship innovation, if you worship anything, you can't take a step back and think about guardrails,” she said about the unquestioning embrace of AI. “So we, from a leadership perspective, are very vulnerable to techno populists who come out and assert that this is the only way to make something happen.”
The first step toward reclaiming our lost agency, as AI aims to capture every facet of our world, is simply to pay attention. I've been struck by how rarely we actually listen to what tech leaders are explicitly saying about their vision of the future.
There's a tendency to dismiss their most extreme statements as hyperbole or marketing, but what if they're being honest? What if they truly believe humans, or at least most humans, are replaceable, that traditional concepts of humanity are outdated, that a technological "god" should supersede us? These aren't just ideological positions – they're the foundations for the world being built around us right now.
In our series, we explore artificial intelligence as something that affects our culture, our jobs, our media and our politics. But we should also ask what tech founders and engineers are really building with AI, or what they think they’re building. Because if their vision of society does not have a place for us in it, we should be ready to reclaim our destiny – before our collective future is captured.
Our audio documentary series, CAPTURED: The Secret Behind Silicon Valley’s AI Takeover is available now on Audible. Do please tune in, and you can dig deeper into our stories and the people we met during the reporting below.
Jared Genser and Rafael Yuste are an unlikely pair. Yuste, a professor at Columbia University, spends his days in neuroscience labs, using lasers to experiment on the brains of mice. Genser has traveled the world as an international human rights lawyer representing prisoners in 30 countries. But when they met, the two became fast friends. They found common ground in their fascination with neurorights – in “human rights,” as their foundation’s website puts it, “for the Age of Neurotechnology.”
Together, they asked themselves — and the world — what happens when computers start to read our minds? Who owns our thoughts, anyway? This technology is being developed right now, but as of this moment, what happens to your neural data is a legal black box. So what does the fight to build protections for our brains look like? I sat down with Rafael and Jared to find out.
This conversation has been edited for length and clarity.
Q: Rafael, can you tell me how your journey into neurorights started?
Rafael: The story starts with a particular moment in my career. It happened about ten years ago while I was working in a lab at Columbia University in New York. Our research was focused on understanding how the cerebral cortex works. We were studying mice, because the mouse brain is a good model for the human brain. And what we were trying to do was to implant images into the brains of mice so that they would behave as if they were seeing something, except they weren't seeing anything.
Q: How did that work?
Rafael: We were trying to take control of the mouse’s visual perception. So we’d implant neurotechnology into a mouse using lasers, which would allow us to record the activity of the part of the brain responsible for vision, the visual cortex, and change the activity of those neurons. With our lasers, we could map the activity of this part of the brain and try to control it.
These mice were looking at a screen that showed them a particular image: black and white bars of light with very high contrast. We used to talk, tongue-in-cheek, about playing the piano with the brain.
We trained the mice to lick from a little spout of juice whenever they saw that image. With our new technology, we were able to decode the brain signals that corresponded to this image in the mouse’s brain and — we hoped — play them back to trick the mice into seeing the image again, even though it wasn’t there.
Q: So you artificially activated particular neurons in the brain to make it think it had seen that image?
Rafael: These are little laboratory mice. We make a surgical incision and we implant in their skull a transparent chamber so that we can see their brains from above with our microscope, with our lasers. And we use our lasers to optically penetrate the brain. We use one laser to image, to map the activity of these neurons. And we use a second laser, a second wavelength, to activate these neurons again. All of this is done with a very sophisticated microscope and computer equipment.
Q: So what happened when you tried to artificially activate the mouse’s neurons, to make it think it was looking at the picture of the black and white bars?
Rafael: When we did that, the mouse licked from the spout of juice in exactly the same way as if it were looking at this image, except that it wasn't. We were putting that image into its brain. The behavior of the mice when we took over their visual perception was identical to when they were actually seeing the real image.
Q: It must have been a huge breakthrough.
Rafael: Yes, I remember it perfectly. It was one of the most salient days of my life. We were actually altering the behavior of the mice by playing the piano with their cortex. We were ecstatic. I was super happy in the lab, making plans.
And then when I got home, that's when it hit me. I said, “wait, wait, wait, this means humans will be able to do the same thing to other humans.”
I felt this responsibility, like it was a double-edged sword. That night I didn't sleep, I was shocked. I talked to my wife, who works in human rights. And I decided that I should start to get involved in cleaning up the mess.
Q: What do you mean by that?
Rafael: I felt the responsibility of ensuring that these powerful methods that could decode brain activity and manipulate perception had to be regulated to ensure that they were used for the benefit of humanity.
Q: Jared, can you tell me how you came into this?
Jared: Rafael and I met about four years ago. I'm an international human rights lawyer based in Washington and very well known globally for working in that field. I had a single hour-long conversation with Rafa when we met, and it completely transformed my view of the human rights challenges we’ll face in this century. I had no idea about neurotechnologies, where they were, or where they might be heading. Learning how far along they have come and what’s coming in just the next few years — I was blown away. I was both excited and concerned as a human rights lawyer about the implications for our common humanity.
Subscribe to our Coda Currents newsletter
Weekly insights from our global newsroom. Our flagship newsletter connects the dots between viral disinformation, systemic inequity, and the abuse of technology and power. We help you see how local crises are shaped by global forces.
Q: What was your reaction when you heard of the mouse experiment?
Jared: Immediately, I thought of The Matrix. He told me that what can be done in a mouse today could be done in a chimpanzee tomorrow and a human after that. I was shocked by the possibilities. While implanting images into a human brain is still far off, there’s every reason to expect it will eventually be possible.
Q: Can you talk me through some of the other implications of this technology?
Jared: Within the next few years, we’re expected to have wearable brain-computer interfaces that can decode thought to text at 75–80 words per minute with 90 percent accuracy.
That will be an extraordinary revolution in how we interact with technology. Apple is already thinking about this—they filed a patent last year for the next-generation AirPods with built-in EEG scanners. This is undoubtedly one of the applications they are considering.
In just a few years, if you have an iPhone in your pocket and are wearing earbuds, you could think about opening a text message, dictating it, and sending it—all without touching a device. These developments are exciting.
Rafael: I imagine that we'll be hybrid. And part of our processing will happen with devices that will be connected to our brains, to our nervous system. And this could enhance our perception. Our memories — you would be able to do the equivalent of a web search mentally. And that's going to change our behavior. That's going to change the way we absorb information.
Jared: Ultimately, there's every reason to expect we’ll be able to cure chronic pain disease. It’s already being shown in labs that an implantable brain-computer interface can manage pain for people with chronic pain diseases. By turning off misfiring neurons, you can reduce the pain they feel.
But if you can turn off the neurons, you can turn on the neurons. And that would mean you'll have a wearable cap or hat that could torture a person simply by flipping a switch. In just a few years, physical torture may no longer be necessary because of brain-computer interfaces.
And if these devices can decode your thoughts, that raises serious concerns. What will the companies behind these technologies be able to do with your thoughts? Could they be decoded against your wishes and used for purposes beyond what the devices are advertised for? Those are critical questions we need to address.
Q: How did you start thinking about ways to build rights and guardrails around neurotechnology?
Rafael: I was inspired by the Manhattan Project, where scientists who developed nuclear technology were also involved in regulating its use. That led me to think that we should take a similar approach with neurotechnology — where the power to read and manipulate brain activity needs to be regulated. And that’s how we came up with the idea of the Neurorights Foundation.
So in 2017, I organized a meeting of experts from various fields at Columbia University’s Morningside campus to discuss the ethical and societal implications of neurotechnology. And this is where we came up with the idea of neurorights — rights that would protect the brain and brain data.
Jared: If you look at global consumer data privacy laws, they protect things like biometric, genetic, and biological information. But neural data doesn't fall under any of these categories. Neural data is electrical and not biological, so it isn't considered biometric data.
There are few, if any, safeguards to protect users from having their neural data used for purposes beyond the intended function of the devices they’ve purchased.
So because neural data doesn't fit within existing privacy protections, it isn't covered by state privacy laws. To address this, we worked with Colorado to adopt the first-ever amendment to its Privacy Act, which defines neural data and includes it under sensitive, protected data.
Rafael: We identified five areas of concern where neurotechnology could impact human rights:
The first is the right to mental privacy – ensuring that the content of our brain activity can't be decoded without consent.
The second is the right to our own mental integrity so that no one can change a person's identity or consciousness.
The third is the right to free will – so that our behavior is determined by our own volition, not by external influences, to prevent situations like what we did to those mice.
The fourth is the right to equal access to neural augmentation. Technology and AI will lead to human augmentation of our mental processes, our memory, our perception, our capabilities. And we think there should be fair and equal access to neural augmentation in the future.
And the fifth neuroright is protection from bias and discrimination – safeguarding against interference in mental activity, as neurotechnology could both read and alter brain data, and change the content of people's mental activity.
Jared: The Neurorights Foundation is focused on promoting innovation in neurotechnologies while managing the risks of misuse or abuse. We see enormous potential in neurotechnologies that could transform what it means to be human. At the same time, we want to ensure that proper guardrails are in place to protect people's fundamental human rights.
This article is an adapted extract from CAPTURED, our new podcast series with Audible about the secret behind Silicon Valley’s AI Takeover. Click here to listen.
We’re moving slowly through the traffic in the heart of the Kenyan capital, Nairobi. Gleaming office blocks have sprung up in the past few years, looming over the townhouses and shopping malls. We’re with a young man named James Oyange — but everyone who knows him calls him Mojez. He’s peering out the window of our 4x4, staring up at the high-rise building where he used to work.
Mojez first walked into that building three years ago, as a twenty-five-year-old, thinking he would be working in a customer service role at a call center. As the car crawled along, I asked him what he would say to that young man now. He told me he’d tell his younger self something very simple:
“The world is an evil place, and nobody's coming to save you.”
It wasn't until Mojez started work that he realised what his job really required him to do. And the toll it would take.
It turned out, Mojez's job wasn't in customer service. It wasn't even in a call center. His job was to be a “Content Moderator,” working for social media giants via an outsourcing company. He had to read and watch the most hateful, violent, grotesque content released on the internet and get it taken down so the rest of us didn’t have to see it. And the experience changed the way he thought about the world.
“You tend to look at people differently,” he said, talking about how he would go down the street and think of the people he had seen in the videos — and wonder if passersby could do the same things, behave in the same ways. “Can you be the person who, you know, defiled this baby? Or I might be sitting down with somebody who has just come from abusing their wife, you know.”
There was a time – and it wasn’t that long ago – when things like child pornography and neo-Nazi propaganda were relegated to the darkest corners of the internet. But with the rise of algorithms that can spread this kind of content to anyone who might click on it, social media companies have scrambled to amass an army of hidden workers to clean up the mess.
These workers are kept hidden for a reason. They say if slaughterhouses had glass walls, the world would stop eating meat. And if tech companies were to reveal what they make these digital workers do, day in and day out, perhaps the world would stop using their platforms.
This isn't just about “filtering content.” It's about the human infrastructure that makes our frictionless digital world possible – the workers who bear witness to humanity's darkest impulses so that the rest of us don't have to.
Mojez is fed up with being invisible. He's trying to organise a union of digital workers to fight for better treatment by the tech companies. “Development should not mean servitude,” he said. “And innovation should not mean exploitation, right?”
We are now in the outskirts of Nairobi, where Mojez has brought us to meet his friend, Mercy Chimwani. She lives on the ground floor of the half-built house that she rents. There's mud beneath our feet, and above you can see the rain clouds through a gaping hole where the unfinished stairs meet the sky. There’s no electricity, and when it rains, water runs right through the house. Mercy shares a room with her two girls, her mother, and her sister.
It’s hard to believe, but this informal settlement without a roof is the home of someone who used to work for Meta.
Mercy is part of the hidden human supply chain that trains AI. She was hired by what’s called a BPO, or a Business Process Outsourcing company, a middleman that finds cheap labour for large Western corporations. Often people like Mercy don’t even know who they’re really working for. But for her, the prospect of a regular wage was a step up, though her salary – $180 a month, or about a dollar an hour – was low, even by Kenyan standards.
She started out working for an AI company – she did not know the name – training software to be used in self-driving cars. She had to annotate what’s called a “driveable space” – drawing around stop signs and pedestrians, teaching the cars’ artificial intelligence to recognize hazards on its own.
And then, she switched to working for a different client: Meta.
“On the first day on the job it was hectic. Like, I was telling myself, like, I wish I didn't go for it, because the first image I got to see, it was a graphic image.” The video, Mercy told me, is imprinted on her memory forever. It was a person being stabbed to death.
“You could see people committing suicide live. I also saw a video of a very young kid being raped live. And you are here, you have to watch this content. You have kids, you are thinking about them, and here you are at work. You have to like, deal with that content. You have to remove it from the platform. So you can imagine all that piling up within one person. How hard it is,” Mercy said.
Silicon Valley likes to position itself as the pinnacle of innovation. But what they hide is this incredibly analogue, brute force process where armies of click workers relentlessly correct and train the models to learn. It’s the sausage factory that makes the AI sausage. Every major tech company does this – TikTok, Facebook, Google and OpenAI, the makers of ChatGPT.
Mercy was saving to move to a house that had a proper roof. She wanted to put her daughters into a better school. So she felt she had to carry on earning her wage. And then she realised that nearly everyone she worked with was in the same situation as her. They all came from the very poorest neighborhoods in Nairobi. “I realised, like, yo, they're really taking advantage of people who are from the slums,” she said.
After we left Mercy’s house, Mojez took us to the Kibera informal settlement. “Kibera is the largest urban slum area in Africa, and the third largest slum in the entire world,” he told us as we drove carefully through the twisting, crooked streets. There were people everywhere – kids practicing a dance routine, whole families piled onto motorbikes. There were stallholders selling vegetables and live chickens, toys and wooden furniture. Most of the houses had corrugated iron roofs and no running water indoors.
Kibera is where the model of recruiting people from the poorest areas to do tech work was really born. A San Francisco-based organization called Sama started training and hiring young people here to become digital workers for Big Tech clients including Meta and OpenAI.
Sama claimed that they offered a way for young Kenyans to be a part of Silicon Valley’s success. Technology, they argued, had the potential to be a profound equalizer, to create opportunities where none existed.
Mojez has brought us into the heart of Kibera to meet his friend Felix. A few years ago Felix heard about the Sama training school – back then it was called Samasource. He heard how they were teaching people to do digital work, and that there were jobs on offer. So, like hundreds of others, Felix signed up.
“This is Africa,” he said, as we sat down in his home. “Everyone is struggling to find a job.” He nodded out towards the street. “If right now you go out here, uh, out of 10, seven or eight people have worked with Samasource.” He was referring to people his age – Gen Z and young millennials – who were recruited by Sama with the promise that they would be lifted out of poverty.
And for a while, Felix’s life was transformed. He was the main breadwinner for his family, for his mother and two kids, and at last he was earning a regular salary.
Kibera is Africa’s largest urban slum. Hundreds of young people living here were recruited to work on projects for Big Tech. Photo: Becky Lipscombe; Simone Boccaccio/SOPA Images/LightRocket via Getty Images.
But in the end, Felix was left traumatized by the work he did. He was laid off. And now he feels used and abandoned. “There are so many promises. You’re told that your life is going to be changed, that you’re going to be given so many opportunities. But I wouldn't say it's helping anyone, it's just taking advantage of people,” he said.
When we reached out to Sama, a PR representative disputed the notion that Sama was taking advantage and cashing in on Silicon Valley’s headlong rush towards AI.
Mental health support, the PR insisted, had been provided, and the majority of Sama’s staff were happy with the conditions. “Sama,” she said, “has a 16-year track record of delivering meaningful work in Sub-Saharan Africa, lifting nearly 70,000 people out of poverty.” Sama eventually cancelled its contracts with Meta and OpenAI, and says it no longer recruits content moderators. When we spoke to OpenAI, which has hired people in Kenya to train its models, they said they believe data annotation work needs to be done humanely. The efforts of the Kenyan workers were, they said, “immensely valuable.”
You can read Sama’s and OpenAI’s responses to our questions in full below. Meta did not respond to our requests for comment.
Despite its defense of its record, Sama is facing legal action in Kenya.
“I think when you give people work for a period of time and those people can't work again because their mental health is destroyed, that doesn't look like lifting people out of poverty to me,” said Mercy Mutemi, a lawyer representing more than 180 content moderators in a lawsuit against Sama and Meta. The workers say they were unfairly laid off when they tried to lobby for better conditions, and then blacklisted.
“You've used them,” Mutemi said. “They're in a very compromised mental health state, and then you've dumped them. So how did you help them?”
As Mutemi sees it, recruiting from the slum areas produces a workforce of disadvantaged people who are less likely to complain about conditions.
“People who've gone through hardship, people who are desperate, are less likely to make noise at the workplace because then you get to tell them, ‘I will return you to your poverty.’ What we see is again, like a new form of colonization where it's just extraction of resources, and not enough coming back in terms of value whether it's investing in people, investing in their well-being, or just paying decent salaries, investing in skill transfer and helping the economy grow. That's not happening.”
“This is the next frontier of technology,” she added, “and you're building big tech on the backs of broken African youth.”
At the end of our week in Kenya, Mojez takes us to Karura forest, the green heart of Nairobi. It’s an oasis of calm, where birds, butterflies and monkeys live among the trees, and the rich red earth has that amazing, just-rained-on smell. He comes here to decompress, and to try to forget about all the horrific things he’s seen while working as a content moderator.
Mojez describes the job he did as a digital worker as a loss of innocence. “It made me think about, you know, life itself, right? And that we are alone and nobody's coming to save us. So nowadays I've gone back to how my ancestors used to do their worship — how they used to give back to nature.” We're making our way towards a waterfall. “There's something about the water hitting the stones and just gliding down the river that is therapeutic.”
For Mojez, one of the most frightening things about the work was the way it numbed him, accustomed him to horror. Watching endless videos of people being abused, beheaded, or tortured – while trying to hit performance targets every hour – made him switch off his humanity, he said.
A hundred years from now, will we remember the workers who trained humanity’s first generation of AI? Or will these 21st-century monuments to human achievement bear only the names of the people who profited from their creation?
Artificial intelligence may well go down in history as one of humanity’s greatest triumphs. Future generations may look back at this moment as the time we truly entered the future.
And just as ancient monuments like the Colosseum endure as a lasting embodiment of the values of their age, AI will embody the values of our time too.
So, we face a question: what legacy do we want to leave for future generations? We can’t redesign systems we refuse to see. We have to acknowledge the reality of the harm we are allowing to happen. But every story – like those of Mojez, Mercy and Felix – is an invitation. Not to despair, but to imagine something better for all of us rather than a select few.
Christopher Wylie and Becky Lipscombe contributed reporting. Our new audio series on how Silicon Valley’s AI prophets are choosing our future for us is out now on Audible.
This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?