The war in Ukraine is bringing revolutionary changes to modern military strategy. After Ukrainian soldiers destroyed Russia’s Black Sea Fleet flagship, the Moskva cruiser, it became clear: the era of large warships is over, UkrInform reports.
Even without a full-fledged navy, Ukraine has managed to destroy or disable roughly 30% of Russia’s Black Sea Fleet. Among the most notable losses is the Moskva, a guided-missile cruiser sunk in April 2022. Ukrainian forces have used Neptune anti-ship missiles and Magura V5/V7 kamikaze sea drones, unmanned systems now being studied by other countries, including the US.
Ukraine’s naval drone Magura. Photo: Screenshot from the video
The Moskva proved that a large warship in the Black Sea is an easy target, says Mykola Shcherbakov, commander of a State Border Guard Sea Guard vessel.
“That’s why we need to be small, fast, and maneuverable. I think swarm tactics are what the future holds for us,” he believes.
Shcherbakov is convinced that the future of Ukraine’s navy lies in high automation, mobility, and modular platforms that can be reconfigured for various missions.
“A fleet is always very expensive. But small platforms with modular weaponry, missiles, air defense — that’s the path to success. And support from drones is essential,” he emphasizes.
He adds that the Sea Guard can also assign some tasks to unmanned systems—for example, during reconnaissance missions or in high-risk zones.
Russia was the first to use Iranian-designed Shahed drones against Ukraine. More than three years into the war, Ukraine has not only developed a large number of sea, ground, and aerial drones in response, but has also used them against Russia’s nuclear triad, hitting 41 aircraft in Operation Spiderweb, an operation praised by Western experts, NATO officials, and US President Donald Trump.
“Not all of our tasks require people on board. When it’s about documenting violations or communicating with fishermen, drones can’t replace a human. But when it comes to scouting or assessing the situation, maritime drones would be very appropriate,” Shcherbakov explains.
He says Ukraine has already shown the world its capabilities in unmanned maritime technology.
“There are already sea-based FPV drone variants, vessels equipped with air defense systems — even something resembling mini aircraft carriers that can carry reconnaissance or strike systems,” Shcherbakov notes.
Ukraine is currently at the forefront of using unmanned systems at sea, the Ukrainian commander emphasizes.
A Russian Molniya-2 kamikaze drone was brought down by a Ukrainian FPV drone reportedly using electronic warfare capabilities, footage shared on 9 June showed, according to Militarnyi.
Drone warfare innovations have become a defining feature of the Russo-Ukrainian war. Unmanned vehicles of various sizes, operating in the air, on land, and at sea, play a central role, with technology advancing rapidly. Meanwhile, anti-drone electronic warfare is rapidly evolving as well, as both sides advance their technologies.
The video shows the Russian Molniya-2 drone losing control as a Ukrainian interceptor approached. Militarnyi reports that this suggests the use of an onboard electronic warfare (EW) system, which jammed the UAV’s control signals and forced it to crash. The operators of Ukraine’s Southern Defense Forces reportedly executed this interception using a non-contact approach.
Rising use of EW against cheap Russian drones
This is not the first known instance of a Ukrainian drone using EW methods to down a Russian UAV. Similar interceptions of Molniya drones have been observed since mid-March, with growing frequency through April and May.
One likely vulnerability of the Molniya-2 is its use of ELRS (ExpressLRS) control links with active telemetry, which allows the UAV’s control frequencies to be detected. Ukrainian forces have reportedly exploited this flaw by emitting targeted jamming in narrow frequency bands. This method does not require high-power systems and can be deployed directly from the intercepting drone.
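The detection step this exploits is conceptually simple: scan the band, find the strongest narrowband carrier, and concentrate jamming power in a few hundred kilohertz around it rather than barraging the whole spectrum. Below is a minimal Python sketch of that idea; the sample rate, carrier frequency, and jam bandwidth are all invented for illustration, not taken from any fielded system.

```python
import numpy as np

# --- Simulated spectrum scan (all parameters are illustrative) ---
fs = 100e6                    # pretend the receiver digitizes 100 MHz of spectrum
n = 2**16                     # FFT size
t = np.arange(n) / fs

rng = np.random.default_rng(0)
carrier = 27.35e6             # the "unknown" control-link frequency to be found
signal = 0.5 * np.cos(2 * np.pi * carrier * t) + rng.normal(0, 0.2, n)

# --- Detection: locate the strongest narrowband emitter ---
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n, 1 / fs)
peak = freqs[np.argmax(spectrum)]

# --- Jamming plan: a narrow band centered on the detected carrier ---
jam_bw = 200e3                # narrowband, so little transmit power is needed
low, high = peak - jam_bw / 2, peak + jam_bw / 2
print(f"detected carrier ~ {peak / 1e6:.2f} MHz; jam {low / 1e6:.2f}-{high / 1e6:.2f} MHz")
```

Because all the jamming energy goes into a sliver of spectrum around the detected carrier, a transmitter light enough to fly on the interceptor itself can overpower the control link, which is consistent with the non-contact interceptions described above.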
Cheap design and battlefield adaptability of Molniya-2
The Molniya-2 is a fixed-wing kamikaze drone developed as a low-cost, mass-produced weapon. It is built from foam, plastic, aluminum tubing, and wooden components, and its electronics and motors are largely standardized with those of FPV drones.
The Molniya-2 can fly up to 60 kilometers and reach speeds of 120 km/h. Its payload varies depending on the launch method. The drone can carry explosive charges or a TM-62 mine weighing up to 10 kilograms, according to Russian state media.
Militarnyi had earlier reported that Russian forces began adapting Molniya drones to serve as carriers for FPV drones.
Ukraine’s new ballistic missile may already be used on the battlefield. In May 2025, the Ukrainian Armed Forces sharply increased the number of destroyed Russian command posts, indicating new strike capabilities, including ballistic ones, says military expert Valery Ryabykh, Espreso reported.
Russia has escalated its air assaults on Ukrainian cities, ignoring all calls for a ceasefire. In response, Ukrainian President Volodymyr Zelenskyy has ordered separate funding to be allocated to Ukraine’s ballistic missile program.
The expert says that remarkably interesting developments are happening on the battlefield. Ukraine has expanded its ability to strike Russian occupiers.
“This includes the successful operation to destroy a battalion of three Iskander missile systems. Everything points to Ukraine having acquired all the necessary elements for such strikes,” Ryabykh continues.
In addition, all of these elements have been tied together through the Link 16 data-link system used by Ukraine’s F-16 aircraft.
Ukraine’s F-16 and Mirage 2000 jets have become part of a unified digital network alongside NATO air defense systems, enabling real-time exchange of critical information and ensuring maximum coordination in the air.
He suggests that the Ukrainian Armed Forces have likely been using ballistic missiles for about a year, since in many cases experts could not clearly identify the weapon behind a strike.
“This system, apparently, is already undergoing real combat testing. Either serial production has already started, or it is just beginning now,” the expert explains.
In 2024, Zelenskyy announced that Ukraine had successfully tested its first domestically produced ballistic missile. However, no further details about timing, production volumes, or the number of missiles have been disclosed.
Renault, the French automotive giant, has been identified as the company set to produce drones in Ukraine. France Info says production lines could be located “a few dozen or hundred kilometers from the front line.”
This comes amid the ongoing Russo-Ukrainian war, as the EU is rearming and announcing massive investments in the defense industry. Drone warfare innovations have become a defining feature of the ongoing Russo-Ukrainian war. Unmanned vehicles—operating in the air, on land, and at sea—now play a central role, with both sides rapidly advancing their technologies.
Renault to build drones in Ukraine near frontline zones
France Info reported on 8 June that Renault plans to enter the defense sector by partnering with a French SME specializing in defense technology to produce drones in Ukraine.
The French Minister of the Armed Forces, Sébastien Lecornu, initially disclosed on 6 June that a “major French car manufacturer” would produce drones in Ukraine, without naming the company.
Renault confirmed to France Info that it had been contacted by the French government about the drone production project, but added that “no decision has been made at this stage.”
Lecornu earlier noted that there is no current need for French workers to staff the production facilities in Ukraine. He emphasized Ukrainian expertise, stating that Ukrainians are “better than us at imagining drones and especially at developing the doctrine around them.”
Drones for Ukraine and France
The drones are intended for use by both the Ukrainian Armed Forces and the French military. France lags behind in drone capabilities and sees this partnership as an opportunity to benefit from Ukraine’s battlefield innovation and experience.
Militarnyi says that the project will begin with Renault joining small and medium-sized French defense businesses, followed by the establishment of production capacities on Ukrainian territory.
On 6 June, Lecornu also noted that other companies connected to France’s defense industry are already operating in Ukraine.
French military-industrial strategy shifts
This development aligns with announcements in February that France plans to adapt its civilian industry to large-scale military demands. Militarnyi reported that one representative of the French auto industry had already been approached to help launch drone production, particularly of kamikaze-type drones similar to those used in Ukraine. The Ministry of Armed Forces and the French defense procurement agency reportedly aim to reach production rates of several thousand drones within a few months.
Thousands of scientists, academics, physicians and researchers have responded to the administration’s executive order about “restoring a gold standard for science.”
Keeling flasks used to measure carbon dioxide in the atmosphere in a research laboratory at the Scripps Institution of Oceanography in California in April.
Mathias Unberath, a computer scientist at Johns Hopkins University, has many students from abroad. “My whole team, including those who were eager to apply for more permanent positions in the U.S., have no more interest,” he said.
Ukraine presented weapons that have already changed the rules of war at the European Defence Innovation Days 2025 (EDID25) exhibition in late May 2025, according to ArmyInform.
Kyiv aims to strengthen its position within European defense production and security frameworks. This integration is beneficial as it allows Ukraine to contribute its battle-tested expertise and enhance Europe’s collective defense capabilities, particularly at a time when the US role in European security is decreasing.
Among the highlights were autonomous FPV drones, a new class of naval drones, and robotic ground systems transforming logistics on the front lines.
The EDID25 forum was hosted by the European Defence Agency (EDA) in Kraków, Poland. The event brought together developers, military personnel, scientists, and industry leaders from across Europe.
Twelve Ukrainian companies showcased their innovations. According to Anatolii Khrapchynskyi, deputy director of an electronic warfare company and military expert, Ukraine did not come with concepts, but with real, battle-tested technology.
“These are not mock-ups. These are technologies that save lives and are changing the rules of modern warfare,” Khrapchynskyi emphasized.
Among the systems demonstrated:
FPV drones with autonomous targeting, capable of striking without an operator thanks to computer vision;
Naval drones that have learned to intercept airborne targets — effectively a new class of weapons;
Mavic- and Matrice-type drones, fully assembled with Ukrainian-made electronics;
Ground robotic systems that revolutionize frontline logistics, remotely mine terrain, and establish new firing positions.
“Our technologies are not just innovation. They are combat experience transformed into solutions. We know how to turn challenges into breakthroughs,” said Khrapchynskyi.
A key takeaway for European partners is that Ukraine is becoming not just a production hub but a source of experience, flexibility, and strategic thinking.
That is why Khrapchynskyi stressed the need to establish an Engineering Command Center in Ukraine, a permanent hub for military innovation staffed by Ukrainian and European experts.
This center should:
Translate battlefield experience into technical specifications;
Anticipate the needs of future wars;
Coordinate cross-sector development of systems and platforms.
“Europe is searching for solutions. And Ukraine has the answers — practical, combat-proven, and scalable,” the expert concluded.
Earlier, the Security Service of Ukraine reported that a total of 34% of Russia’s strategic missile carriers based at their main airfields were hit in Operation Spiderweb, which targeted at least four airfields.
Ukraine used smart FPV drones launched from cargo trucks to strike the aircraft.
With the welcome mat withdrawn for promising researchers from around the world, America is at risk of losing its longstanding pre-eminence in the sciences.
Among the canceled awards was a $331 million award to Exxon Mobil, which had been planning to replace natural gas with hydrogen at a chemical facility in Baytown, Texas.
Ukrainian startup launches fully autonomous drone strikes deep into Russian territory, rewriting the rules of modern warfare, Forbes reports.
In a historic military breakthrough, Ukrainian defense startup Strategy Force Solutions has successfully deployed autonomous drone motherships in real combat operations against Russian forces — a world first that could reshape global defense strategies.
Their breakthrough system, GOGOL-M, swaps out traditional $3–$5 million missile strikes for AI-driven missions costing just $10,000.
Ukraine surges ahead in drone warfare innovation
While global powers like the US and China continue testing autonomous weapons, Ukraine has leapfrogged ahead, deploying AI-powered drone swarms on the battlefield today, not years from now.
The GOGOL-M mothership, boasting a 20-foot (6-meter) wingspan, can autonomously fly up to 300 km behind enemy lines. It then releases two smaller attack drones that identify and destroy targets without human control.
“GOGOL-M: Ukraine’s $10K AI drone mothership with laser vision is replacing $5M missiles. It flies itself, sees in 3D, and strikes Russian targets 300 km away. The idea came from a boy watching a woman walk train tracks, checking for cracks.” — Euromaidan Press (@EuromaidanPress), 29 May 2025
How it works: AI-powered precision at scale
At the core of the system is SmartPilot, an onboard AI that mirrors the instincts of a human fighter pilot. It uses multi-sensor fusion — combining cameras, LIDAR, and communications — to navigate and strike in environments where GPS and radio signals are jammed.
“In some ways, it’s like a self-driving car,” says CTO Andrii.
He explains that while there aren’t many obstacles in the air, the system still needs to remain lightweight. To achieve that balance, the team engineered a streamlined setup using cameras, LIDAR, and communication tools to enable real-time navigation and coordination.
LIDAR, which acts like laser radar, generates a detailed 3D map of the surroundings and functions in all lighting and weather — essential for reliable autonomous missions in hostile conditions.
This gives the drones the ability to:
Destroy parked jets and air defenses
Hit oil depots and infrastructure
Strike deep into Russia with precision
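Forbes does not detail SmartPilot’s internals, but the navigation approach described above (inertial dead reckoning corrected by LIDAR-derived position fixes instead of GPS) can be illustrated with a toy complementary filter. Everything in this sketch, including the update rates, noise levels, and blend weight, is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1                          # 10 Hz navigation loop
blend = 0.2                       # how strongly a LIDAR fix corrects dead reckoning

true_pos = np.zeros(2)            # ground truth, unknown to the drone
est_pos = np.zeros(2)             # the drone's own position estimate
velocity = np.array([30.0, 5.0])  # m/s, held constant for the sketch

for step in range(600):           # one simulated minute of flight
    true_pos = true_pos + velocity * dt
    # Dead reckoning: integrate a noisy velocity estimate (error accumulates)
    est_pos = est_pos + (velocity + rng.normal(0, 0.5, 2)) * dt
    # Every 2 s, matching a LIDAR scan against a terrain map yields an
    # absolute fix (~1 m error) that pulls the estimate back toward truth
    if step % 20 == 0:
        lidar_fix = true_pos + rng.normal(0, 1.0, 2)
        est_pos = (1 - blend) * est_pos + blend * lidar_fix

drift = np.linalg.norm(est_pos - true_pos)
print(f"position error after 60 s with no GPS: {drift:.1f} m")
```

The design point is that neither sensor is trusted alone: the inertial path is smooth but drifts, the LIDAR fix is absolute but intermittent, and blending the two keeps the error bounded with no GPS or radio link at all.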
One military operator described the experience:
“It feels like a video game. I set the waypoints and watch it work.”
Silent and deadly: Drones that wait to strike
In one of its most striking features, the drone can land near enemy targets, remain hidden, and wait for hours before launching a surprise strike — a capability described as “autonomous ambush mode.”
This gives Ukrainian forces a powerful edge in asymmetric warfare, allowing for stealth operations previously thought impossible with drone tech.
Ukraine beats US and China to real-world AI combat
While the Pentagon’s Defense Innovation Unit and China’s drone makers remain in testing phases, Ukraine is already in full-scale production. Strategy Force Solutions now builds 50 GOGOL-Ms and 400 attack drones per month, constrained only by military demand.
The company’s software-first approach also allows easy adaptation to new platforms — from flying drones to unmanned boats and ground vehicles.
Russia faces a new kind of threat
Military analysts suggest that Russia must now defend against autonomous swarms that don’t need GPS, live control, or constant communication — a nightmare for traditional air defense systems.
As Forbes tech correspondent David Hambling notes:
“The crucial first step — long-range autonomous drone delivery — has now been taken. It may be Version 1.0, but it’s already a problem for Russia.”
A childhood idea that sparked a military revolution
The origin of this breakthrough? A childhood memory. As a boy, the system’s creator Andrii saw a woman walking railway tracks to check for defects. He thought, “This should be done by a robot.”
That early insight grew into AI systems for infrastructure inspection — and later, with the onset of war in 2022, a pivot to battlefield autonomy.
As Rome prepared to select a new pope, few beyond Vatican insiders were focused on what the transition would mean for the Catholic Church's stance on artificial intelligence.
Yet Pope Francis had established the Church as an erudite, insightful voice on AI ethics. “Does it serve to satisfy the needs of humanity to improve the well-being and integral development of people?” he asked G7 leaders last year. “Or does it, rather, serve to enrich and increase the already high power of the few technological giants despite the dangers to humanity?”
Francis – and the Vatican at large – had called for meaningful regulation in a world where few institutions dared challenge the tech giants.
In the final months of Francis’s papacy, Silicon Valley, aided by a pliant U.S. government, ramped up its drive to rapidly consolidate power.
OpenAI is expanding globally, tech CEOs are becoming a key component of presidential diplomatic missions, and federal U.S. lawmakers are attempting to effectively deregulate AI for the next decade.
For those tracking the collision between technological and religious power, one question looms large: Will the Vatican continue to be one of the few global institutions willing to question Silicon Valley's vision of our collective future?
Memories of watching the chimney on television during Pope Benedict’s election had captured my imagination as a child brought up in a secular, Jewish-inflected household. I longed to see that white smoke in person. The rumors in Rome last Thursday morning were that the matter wouldn’t be settled that day. So I was furious when I was stirred from my desk in the afternoon by the sound of pealing bells all over Rome. “Habemus papam!” I heard an old nonna call down to her husband in the courtyard.
I sprinted out onto the street and joined people streaming from all over the city in the direction of St. Peter’s. In recent years, the time between white smoke and the new pope’s arrival on the balcony has been as little as forty-five minutes. People poured over bridges and up the Via della Conciliazione towards the famous square. Among the rabble I spotted a couple of friars darting through the crowd, making speedier progress than anyone, their white cassocks flapping in the wind. Together, the friars and I made it through the security checkpoints and out into the square just as a great roar went up.
The initial reaction to the announcement that Robert Francis Prevost would be the next pope, with the name Leo XIV, was subdued. Most people around me hadn’t heard of him — he wasn’t one of the favored cardinals, he wasn’t Italian, and we couldn’t even Google him, because there were so many people gathered that no one’s phones were working. A young boy managed to get on the phone to his mamma, and she related the information about Prevost to us via her son. Americano, she said. From Chicago.
A nun from an order in Tennessee piped up that she had met Prevost once. She told us that he was mild-mannered and kind, that he had lived in Peru, and that he was very internationally-minded. “The point is, it’s a powerful American voice in the world, who isn’t Trump,” one American couple exclaimed to our little corner of the crowd.
It only took a few hours before Trump supporters, led by former altar boy Steve Bannon, realized this American pope wouldn’t be a MAGA pope. Leo XIV had posted on X in February, criticizing JD Vance, the Trump administration’s most prominent Catholic.
"I mean it's kind of jaw-dropping," Bannon told the BBC. "It is shocking to me that a guy could be selected to be the Pope that had had the Twitter feed and the statements he's had against American senior politicians."
Laura Loomer, a prominent far-right pro-Trump activist, aired her own misgivings on X: “He is anti-Trump, anti-MAGA, pro-open borders, and a total Marxist like Pope Francis.”
As I walked home with everybody else that night – with the friars, the nuns, the pilgrims, the Romans, the tourists caught up in the action – I found myself thinking about our "Captured" podcast series, which I've spent the past year working on. In our investigation of AI's growing influence, we documented how tech leaders have created something akin to a new religion, with its own prophets, disciples, and promised salvation.
Walking through Rome's ancient streets, the dichotomy struck me: here was the oldest continuous institution on earth selecting its leader, while Silicon Valley was rapidly establishing what amounts to a competing belief system.
Would this new pope, taking the name of Leo — deliberately evoking Leo XIII who steered the church through the disruptions of the Industrial Revolution — stand against this present-day technological transformation that threatens to reshape what it means to be human?
I didn't have to wait long to find out. In his address to the College of Cardinals on Saturday, Pope Leo XIV said: "In our own day, the Church offers to everyone the treasury of her social teaching, in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice and labor."
Hours before the new pope was elected, I spoke with Molly Kinder, a fellow at the Brookings Institution and an expert in AI and labor policy. Her research on the Vatican, labor, and AI was published by Brookings following Pope Francis’s death.
She described how the Catholic Church has a deep-held belief in the dignity of work — and how AI evangelists’ promise to create a post-work society with artificial intelligence is at odds with that.
“Pope John Paul II wrote something that I found really fascinating. He said, ‘work makes us more human.’ And Silicon Valley is basically racing to create a technology that will replace humans at work,” Kinder, who was raised Catholic, told me. “What they're endeavoring to do is disrupt some of the very core tenets of how we've interpreted God's mission for what makes us human.”
A version of this story was published in this week’s Coda Currents newsletter.
In early April, I found myself in the breathtaking Chiesa di San Francesco al Prato in Perugia, Italy talking about men who are on a mission to achieve immortality.
As sunlight filtered through glass onto worn stone walls, Cambridge Analytica whistleblower Christopher Wylie recounted a dinner with a Silicon Valley mogul who believes drinking his son's blood will help him live forever.
"We've got it wrong," Bryan Johnson told Chris. "God didn't create us. We're going to create God and the
In early April, I found myself in the breathtaking Chiesa di San Francesco al Prato in Perugia, Italy talking about men who are on a mission to achieve immortality.
As sunlight filtered through glass onto worn stone walls, Cambridge Analytica whistleblower Christopher Wylie recounted a dinner with a Silicon Valley mogul who believes drinking his son's blood will help him live forever.
"We've got it wrong," Bryan Johnson told Chris. "God didn't create us. We're going to create God and then we're going to merge with him."
This wasn't hyperbole. It's the worldview taking root among tech elites who have the power, wealth, and unbounded ambition to shape our collective future.
Working on “Captured: The Secret Behind Silicon Valley's AI Takeover” podcast, which we presented in that church in Perugia, we realized we weren't just investigating technology – we were documenting a fundamentalist movement with all the trappings of prophecy, salvation, and eternal life. And yet, talking about it from the stage to my colleagues in Perugia, I felt, for a second at least, like a conspiracy theorist. Discussing blood-drinking tech moguls and godlike ambitions in a journalism conference felt jarring, even inappropriate. I felt, instinctively, that not everyone was willing to hear what our reporting had uncovered. The truth is, these ideas aren’t fringe at all – they are the root of the new power structures shaping our reality.
“Stop being so polite,” Chris Wylie urged the audience, challenging journalists to confront the cultish drive for transcendence, the quasi-religious fervor animating tech’s most powerful figures.
We've ignored this story, in part at least, because the journalism industry had chosen to be “friends” with Big Tech, accepting platform funding, entering into “partnerships,” and treating tech companies as potential saviors instead of recognizing the fundamental incompatibility between their business models and the requirements of a healthy information ecosystem, which is as essential to journalism as air is to humanity.
In effect, journalism has been complicit in its own capture. That complicity has blunted our ability to fulfil journalism's most basic societal function: holding power to account.
As tech billionaires have emerged as some of the most powerful actors on the global stage, our industry—so eager to believe in their promises—has struggled to confront them with the same rigor and independence we once reserved for governments, oligarchs, or other corporate powers.
This tension surfaced most clearly during a panel at the festival when I challenged Alan Rusbridger, former editor-in-chief of “The Guardian” and current Meta Oversight Board member, about resigning in light of Meta's abandonment of fact-checking. His response echoed our previous exchanges: board membership, he maintains, allows him to influence individual cases despite the troubling broader direction.
This defense exposes the fundamental trap of institutional capture. Meta has systematically recruited respected journalists, human rights defenders, and academics to well-paid positions on its Oversight Board, lending it a veneer of credibility. When board members like Rusbridger justify their participation through "minor victories," they ignore how their presence legitimizes a business model fundamentally incompatible with the public interest.
Imagine a climate activist serving on an Exxon-established climate change oversight board, tasked with reviewing a handful of complaints while Exxon continues to pour billions into fossil fuel expansion and climate denial.
Meta's oversight board provides cover for a platform whose design and priorities fundamentally undermine our shared reality. The "public square" - a space for listening and conversation that the internet once promised to nurture but is now helping to destroy - isn't merely a metaphor, it's the essential infrastructure of justice and open society.
Trump's renewed attacks on the press, the abrupt withdrawal of U.S. funding for independent media around the world, platform complicity in spreading disinformation, and the normalization of hostility toward journalists have stripped away any illusions about where we stand. What once felt like slow erosion now feels like a landslide, accelerated by broligarchs who claim to champion free speech while their algorithms amplify authoritarians.
The Luxury of Neutrality
If there is one upside to the dire state of the world, it’s that the fog has lifted. In Perugia, the new sense of clarity was palpable. Unlike last year, when so many drifted into resignation, the mood this time was one of resolve. The stakes were higher, the threats more visible, and everywhere I looked, people were not just lamenting what had been lost – they were plotting and preparing to defend what matters most.
One unintended casualty of this new clarity is the old concept of journalistic objectivity. For decades, objectivity was held up as the gold standard of our profession – a shield against accusations of bias. But as attacks on the media intensify and the very act of journalism becomes increasingly criminalized and demonized around the world, it’s clear that objectivity was always a luxury, available only to a privileged few. For many who have long worked under threat, neutrality was never an option. Now, as the ground shifts beneath all of us, their experience and strategies for survival have become essential lessons for the entire field.
That was the spirit animating our “Am I Black Enough?” panel in Perugia, which brought together three extraordinary Black American media leaders, with me as moderator.
“I come out of the Black media tradition whose origins were in activism,” said Sara Lomax, co-founder of URL Media and head of WURD, Philadelphia’s oldest Black talk radio station. She reminded us that the first Black newspaper in America was founded in 1827 - decades before emancipation - to advocate for the humanity of people who were still legally considered property.
Karen McMullen, festival director of Urbanworld, spoke to the exhaustion and perseverance that define the Black American experience: “We would like to think that we could rest on the successes that our parents and ancestors have made towards equality, but we can’t. So we’re exhausted but we will prevail.”
And as veteran journalist and head of the Maynard Institute Martin Reynolds put it, “Black struggle is a struggle to help all. What’s good for us tends to be good for all. We want fair housing, we want education, we want to be treated with respect.”
Near the end of our session, an audience member challenged my role as a white moderator on a panel about Black experiences. This moment crystallized how the boundaries we draw around our identities can both protect and divide us. It also highlighted exactly why we had organized the panel in the first place: to remind us that the tools of survival and resistance forged by those long excluded from "objectivity" are now essential for everyone facing the erosion of old certainties.
Sara Lomax (WURD/URL Media), Karen McMullen (Urbanworld) & Martin Reynolds (Maynard Institute) discuss how the Black press in America was born from activism, fighting for the humanity of people who were still legally considered property - a tradition of purpose-driven journalism that offers critical lessons today. Ascanio Pepe/Creative Commons (CC BY ND 4.0)
The Power of Protected Spaces
If there’s one lesson from those who have always lived on the frontlines and who never had the luxury of neutrality – it’s that survival depends on carving out spaces where your story, your truth, and your community can endure, even when the world outside is hostile.
That idea crystallized for me one night in Perugia, when during a dinner with colleagues battered by layoffs, lawsuits, and threats far graver than those I face, someone suggested we play a game: “What gives you hope?” When it was my turn, I found myself talking about finding hope in spaces where freedom lives on. Spaces that can always be found, no matter how dire the circumstances.
I mentioned my parents, dissidents in the Soviet Union, for whom the kitchen was a sanctuary for forbidden conversations. And Georgia, my homeland – a place that has preserved its identity through centuries of invasion because its people fought, time and again, for the right to write their own story. Even now, as protesters fill the streets to defend the same values my parents once whispered about in the kitchen, their resilience is a reminder that survival depends on protecting the spaces where you can say who you are.
But there’s a catch: to protect the spaces where you can say who you are, you first have to know what you stand for – and who stands with you. Is it the tech bros who dream of living forever, conquering Mars, and who rush to turn their backs on diversity and equity at the first opportunity? Or is it those who have stood by the values of human dignity and justice, who have fought for the right to be heard and to belong, even when the world tried to silence them?
As we went around the table, each of us sharing what gave us hope, one of our dinner companions, a Turkish lawyer, offered a metaphor in response to my point about the need to protect spaces. “In climate science,” she said, “they talk about protected areas – patches of land set aside so that life can survive when the ecosystem around it collapses. They don’t stop the storms, but they give something vital a chance to endure, adapt, and, when the time is right, regenerate.”
That's what we need now: protected areas for uncomfortable truths and complexity. Not just newsrooms, but dinner tables, group chats, classrooms, gatherings that foster unlikely alliances - anywhere we can still speak honestly, listen deeply, and dare to imagine.
More storms will come. More authoritarians will rise. Populist strongmen and broligarchs will keep fragmenting our shared reality.
But if history has taught us anything – from Soviet kitchens to Black newspapers founded in the shadow of slavery - it’s that carefully guarded spaces where stories and collective memory are kept alive have always been the seedbeds of change.
When we nurture these sanctuaries of complex truth against all odds, we aren't just surviving. We're quietly cultivating the future we wish to see.
And in times like these, that's not just hope - it's a blueprint for renewal.
Google has a plan to make all reCAPTCHA users migrate to reCAPTCHA Enterprise on Google Cloud by the end of 2025. This means a cost increase for many users. I’m writing this post to provide you with a heads-up about this move. (...) we plan to introduce the integration module for Cloudflare Turnstile, an alternative CAPTCHA solution, to Contact Form 7 6.1. Cloudflare Turnstile is available for free (at least for now), and we have found that it has the potential to work more effectively than Google reCAPTCHA.
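For site owners weighing that switch, the server side of a Turnstile integration is a single POST to Cloudflare's documented siteverify endpoint. A minimal Python sketch, assuming a Flask-style form handler; the function and variable names are placeholders:

```python
import requests

TURNSTILE_VERIFY_URL = "https://challenges.cloudflare.com/turnstile/v0/siteverify"

def verify_turnstile(token: str, secret_key: str, remote_ip: str | None = None) -> bool:
    """Check the token that the Turnstile widget submitted with the form."""
    payload = {"secret": secret_key, "response": token}
    if remote_ip:
        payload["remoteip"] = remote_ip  # optional, per Cloudflare's docs
    resp = requests.post(TURNSTILE_VERIFY_URL, data=payload, timeout=10)
    return resp.json().get("success", False)

# Inside a form handler, the widget posts its token as "cf-turnstile-response":
# if not verify_turnstile(request.form["cf-turnstile-response"], SECRET_KEY):
#     abort(403)
```

The shape mirrors Google's reCAPTCHA siteverify flow, which is part of why drop-in migration modules like the one planned for Contact Form 7 are feasible.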
I grew up in rural Idaho in the late 80s and early 90s. My childhood was idyllic. I’m the oldest of five children. My father was an engineer-turned-physician, and my mother was a musician — she played the violin and piano. We lived in an amazing community, with great schools, dear friends and neighbors. There was lots of skiing, biking, swimming, tennis, and time spent outdoors.
If something was very difficult, I was taught that you just had to reframe it as a small or insignificant moment compared to the vast eternities and infinities around us. It was a Mormon community, and we were a Mormon family, part of generations of Mormons. I can trace my ancestry back to the early Mormon settlers. Our family were very observant: going to church every Sunday, and deeply faithful to the beliefs and tenets of the Mormon Church.
There's a belief in Mormonism: "As man is, God once was. As God is, man may become." And since God is perfect, the belief is that we too can one day become perfect.
We believed in perfection. And we were striving to be perfect—realizing that while we couldn't be perfect in this life, we should always attempt to be. We worked for excellence in everything we did.
It was an inspiring idea to me, but growing up in a world where I felt perfection was always the expectation was also tough.
In a way, I felt like there were two of me. There was this perfect person that I had to play and that everyone loved. And then there was this other part of me that was very disappointed by who I was—frustrated, knowing I wasn't living up to those same standards. I really felt like two people.
This perfectionism found its way into many of my pursuits. I loved to play the cello. Yo-Yo Ma was my idol. I played quite well and had a fabulous teacher. At 14, I became the principal cellist for our all-state orchestra, and later played in the World Youth Symphony at Interlochen Arts Camp and in a National Honors Orchestra. I was part of a group of kids who were all playing at the highest level. And I was driven. I wanted to be one of the very, very best.
I went on to study at Northwestern in Chicago and played there too. I was the youngest cellist in the studio of Hans Jensen, and was surrounded by these incredible musicians. We played eight hours a day, time filled with practice, orchestra, chamber music, studio, and lessons. I spent hours and hours working through the tiniest movements of the hand, individual shifts, weight, movement, repetition, memory, trying to find perfect intonation, rhythm, and expression. I loved that I could control things, practice, and improve. I could find moments of perfection.
I remember one night being in the practice rooms, walking down the hall, and hearing some of the most beautiful playing I'd ever heard. I peeked in and didn’t recognize the cellist. They were a former student now warming up for an audition with the Chicago Symphony.
Later on, I heard they didn’t get it. I remember thinking, "Oh my goodness, if you can play that well and still not make it..." It kind of shattered my worldview—it really hit me that I would never be the very best. There was so much talent, and I just wasn't quite there.
I decided to step away from the cello as a profession. I’d play for fun, but not make it my career. I’d explore other interests and passions.
There's a belief in Mormonism: "As man is, God once was. As God is, man may become."
As I moved through my twenties, my relationship with Mormonism started to become strained. When you’re suddenly 24, 25, 26 and not married, that's tough. Brigham Young [the second and longest-serving prophet of the Mormon Church] said that if you're not married by 30, you're a menace to society. It just became more and more awkward to be involved. I felt like people were wondering, “What’s wrong with him?”
Eventually, I left the church. And I suddenly felt like a complete person — it was a really profound shift. There weren’t two of me anymore. I didn’t have to put on a front. Now that I didn’t have to worry about being that version of perfect, I could just be me.
But the desire for perfection was impossible for me to kick entirely. I was still excited about striving, and I think a lot of this energy and focus then poured into my work and career as a designer and researcher. I worked at places like the Mayo Clinic, considered by many to be the world’s best hospital. I studied in London at the Royal College of Art, where I received my master’s on the prestigious Design Interactions course exploring emerging technology, futures, and speculative design. I found I loved working with the best, and being around others who were striving for perfection in similar ways. It was thrilling.
One of the big questions I started to explore during my master's studies in design, and I think in part because I felt this void of meaning after leaving Mormonism, was “what is important to strive for in life?” What should we be perfecting? What is the goal of everything? Or in design terms, “What’s the design intent of everything?”
I spent a huge amount of time with this question, and in the end I came to the conclusion that it’s happiness. Happiness is the goal. We should strive in life for happiness. Happiness is the design intent of everything. It is the idea that no matter what we do, no matter what activity we undertake, we do it because we believe doing it or achieving the thing will make us better off or happier. This fit really well with the beliefs I grew up with, but now I had a new, non-religious way in to explore it.
The question then became: What is happiness? I came to the conclusion that happiness is chemical—an evolved sensation that indicates when our needs in terms of survival have been met. You're happy when you have a wonderful meal because your body has evolved to identify good food as improving your chances of survival. The same is true for sleep, exercise, sex, family, friendships, meaning, purpose–everything can be seen through this evolutionary happiness lens.
So if happiness evolved as the signal for survival, then I wanted to optimize my survival to optimize that feeling. What would it look like if I optimized the design of my life for happiness? What could I change to feel the most amount of happiness for the longest amount of time? What would life look like if I lived perfectly with this goal in mind?
I started measuring my happiness on a daily basis, and then making changes to my life to see how I might improve it. I took my evolutionary basic needs for survival and organized them in terms of how quickly their absence would kill me as a way to prioritize interventions.
Breathing was first on the list — we can’t last long without it. So I tried to optimize my breathing. I didn’t really know how to breathe or how powerful breathing is—how it changes the way we feel, bringing calm and peace, or energy and alertness. So I practiced breathing.
The optimizations continued, diet, sleep, exercise, material possessions, friends, family, purpose, along with a shedding of any behaviour or activity that I couldn’t see meaningfully improving my happiness. For example, I looked at clothing and fashion, and couldn’t see any real happiness impact. So I got rid of almost all of my clothing, and have worn the same white t-shirts and grey or blue jeans for the past 15 years.
I got involved in the Quantified Self (QS) movement and started tracking my heart rate, blood pressure, diet, sleep, exercise, cognitive speed, happiness, creativity, and feelings of purpose. I liked the data. I’d go to QS meet-ups and conferences with others doing self experiments to optimize different aspects of their lives, from athletic performance, to sleep, to disease symptoms.
I also started to think about longevity. If I was optimizing for happiness through these evolutionary basics, how long could one live if these needs were perfectly satisfied? I started to put on my websites – “copyright 2103”. That’s when I’ll be 125. That felt like a nice goal, and something that I imagined could be completely possible — especially if every aspect of my life was optimized, along with future advancements in science and medicine.
In 2022, some 12 years later, I came across Bryan Johnson. A successful entrepreneur, also ex-Mormon, optimizing his health and longevity through data. It was familiar. He had come to this kind of life optimization in a slightly different way and for different reasons, but I was so excited by what he was doing. I thought, "This is how I’d live if I had unlimited funds."
He said he was optimizing every organ and body system: What does our heart need? What does our brain need? What does our liver need? He was optimizing the biomarkers for each one. He said he believed in data, honesty and transparency, and following where the data led. He was open to challenging societal norms. He said he had a team of doctors, had reviewed thousands of studies to develop his protocols. He said every calorie had to fight for its life to be in his body. He suggested everything should be third-party tested. He also suggested that in our lifetime advances in medicine would allow people to live radically longer lives, or even to not die.
These ideas all made sense to me. There was also a kind of ideal of perfect and achieving perfection that resonated with me. Early on, Bryan shared his protocols and data online. And a lot of people tried his recipes and workouts, experimenting for themselves. I did too. It also started me thinking again more broadly about how to live better, now with my wife and young family. For me this was personal, but also exciting to think about what a society might look like when we strived at scale for perfection in this way. Bryan seemed to be someone with the means and platform to push this conversation.
I think all of my experience to this point was the setup for, ultimately, my deep disappointment in Bryan Johnson and my frustrating experience as a participant in his BP5000 study.
In early 2024 there was a callout for people to participate in a study to look at how Bryan’s protocols might improve their health and wellbeing. He said he wanted to make it easier to follow his approach, and he started to put together a product line of the same supplements that he used. It was called Blueprint – and the first 5000 people to test it out would be called the Blueprint 5000, or BP5000. We would measure our biomarkers and follow his supplement regime for three months and then measure again to see its effects at a population level. I thought it would be a fun experiment, participating in real citizen science moving from n=1 to n=many. We had to apply, and there was a lot of excitement among those of us who were selected. They were a mix of people who had done a lot of self-quantification, nutritionists, athletes, and others looking to take first steps into better personal health. We each had to pay about $2,000 to participate, covering Blueprint supplements and the blood tests, and we were promised that all the data would be shared and open-sourced at the end of the study.
The study began very quickly, and there were red flags almost immediately around the administration of the study, with product delivery problems, defective product packaging, blood test problems, and confusion among participants about the protocols. There wasn’t even a way to see if participants died during the study, which felt weird for work focused on longevity. But we all kind of rolled with it. We wanted to make it work.
We took baseline measurements, weighed ourselves, measured body composition, uploaded Whoop or Apple Watch data, did blood tests covering 100s of biomarkers, and completed a number of self-reported studies on things like sexual health and mental health. I loved this type of self-measurement.
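For what it’s worth, the analysis participants expected from that design, with each person measured before and after three months and compared across the whole cohort, is a routine paired comparison. A minimal sketch with synthetic, invented numbers standing in for a single biomarker:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic stand-in for the BP5000 design: 5,000 participants, each with a
# baseline and a three-month follow-up measurement (all values are invented).
n = 5000
baseline = rng.normal(100.0, 15.0, n)
followup = baseline + rng.normal(-1.0, 10.0, n)

# Paired test: did the cohort change beyond individual measurement noise?
t_stat, p_value = stats.ttest_rel(followup, baseline)
print(f"mean change = {(followup - baseline).mean():+.2f}, p = {p_value:.4f}")

# A serious report would also break out adverse-event rates per intervention,
# which is exactly the breakdown participants say they never received.
```

Nothing about this analysis is exotic, which is part of why the failure to publish the results felt so stark.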
Participants connected over Discord, comparing notes, and posting about our progress.
Right off, some effects were incredible. I had a huge amount of energy. I was bounding up the stairs, doing extra pull-ups without feeling tired. My joints felt smooth. I noticed I was feeling bulkier — I had more muscle definition as my body fat percentage started to drop.
There were also some strange effects. For instance, I noticed in a cold shower, I could feel the cold, but I didn’t feel any urgency to get out. Same with the sauna. I had weird sensations of deep focus and vibrant, vivid vision. I started having questions—was this better? Had I deadened sensitivity to pain? What exactly was happening here?
Then things went really wrong. My ears started ringing — high-pitched and constant. I had developed tinnitus. And my sleep got wrecked. I started waking up at two, three, four AM, completely wired, unable to turn off my mind. It was so bad that I had to stop all of the Blueprint supplements after only a few weeks.
On the Discord channel where we were sharing our results, I saw Bryan talking positively about people having great experiences with the stack. But when I or anyone else mentioned adverse side effects, the response tended to be: “wait until the study is finished and see if there’s a statistical effect to worry about.”
So positive anecdotes were fine, but when it came to negative ones, suddenly, we needed large-scale data. That really put me off. I thought the whole point was to test efficacy and safety in a data-driven way. And the side effects were not ignorable.
Many of us were trying to help each other figure out which interventions in the stack were driving which side effects, but we were never given the “1,000+ scientific studies” that Blueprint was supposedly built upon, which would have included side-effect reporting. We struggled even to get a complete list of the interventions in the stack from the Blueprint team, with the count evolving from 67 to 74 over the course of the study. It was impossible to tell which ingredient in which product was doing what to people.
We were told to stop discussing side effects in the Discord and to email Support with issues instead. I was even kicked off the Discord at one point for “fear mongering,” because I was encouraging people to share the side effects they were experiencing.
The Blueprint team was also making changes to the products mid-study, changing protein sources and allulose levels, leaving people with months’ worth of expensive, essentially defective product, and surely impacting the study results.
When Bryan then announced they were launching the BP10000, allowing more people to buy his products, even before the BP5000 study had finished, and without addressing all of the concerns about side effects, it suddenly became clear to me and many others that we had just been part of a launch and distribution plan for a new supplement line, not participants in a scientific study.
To this day, a year later, Bryan still has not released the full BP5000 data set to participants, as he promised to do. In fact, he has ghosted participants and refuses to answer questions about the BP5000. He blocked me on X recently for bringing it up. I suspect this is because the data is really bad, and my worries line up with reporting from the New York Times, where leaked internal Blueprint data suggests many of the BP5000 participants experienced negative side effects, with some even having serious drops in testosterone or becoming pre-diabetic.
I’m still angry today about how this all went down. I’m angry that I was taken in by someone I now feel was a snake oil salesman. I’m angry that the marketing needs of Bryan’s supplement business and his need to control his image overshadowed the opportunity to generate some real science. I’m angry that Blueprint may be hurting some people. I’m angry because the way Bryan Johnson has gone about this grates on my sense of perfection.
Bryan’s call to “Don’t Die” now rings in my ears as “Don’t Lie” every time I hear it. I hope the societal mechanisms for truth will be able to help him make a course correction. I hope he will release the BP5000 data set and apologize to participants. But Bryan Johnson feels to me like an unstoppable marketing force at this point — full A-list influencer status — and sort of untouchable, with no use for those of us interested in the science and data.
This experience has also had me reflecting on and asking bigger questions of the longevity movement and myself.
We’re ignoring climate breakdown. The latest indications suggest we’re headed toward three degrees of warming. These are societal collapse numbers, in the next 15 years. When there are no bees and no food, catastrophic fires and floods, your Heart Rate Variability doesn’t really matter. There’s a sort of “bunker mentality” prevalent in some of the longevity movement, and wider tech — we can just ignore it, and we’ll magically come out on the other side, sleep scores intact.
The question for me then became: What is happiness? I came to the conclusion that happiness is chemical — an evolved sensation that signals when our survival needs have been met.
I’ve also started to think that calls to live forever are perhaps misplaced, and that in fact we have evolved to die. Death is a good thing. A feature, not a bug. It allows for new life—we need children, young people, new minds who can understand this context and move us forward. I worry that older minds are locked into outdated patterns of thinking, mindsets trained in and for a world that no longer exists, thinking that destroyed everything in the first place, and which is now actually detrimental to progress. The life cycle—bringing in new generations with new thinking—is the mechanism our species has evolved to function within. Survival is and should be optimized for the species, not the individual.
I love thinking about the future. I love spending time there, understanding what it might look like. It is a huge part of my design practice. But as much as I love the future, the most exciting thing to me is the choices we make right now in each moment. All of that information from our future imaginings should come back to help inform current decision-making and optimize the choices we have now. But I don’t see this happening today. Our current actions as a society seem totally disconnected from any optimized, survivable future. We’re not learning from the future. We’re not acting for the future.
We must engage with all outcomes, positive and negative. We're seeing breakthroughs in many domains happening at an exponential rate, especially in AI. But, at the same time, I see job displacement, huge concentration of wealth, and political systems that don't seem capable of regulating or facilitating democratic conversations about these changes. Creators must own it all. If you build AI, take responsibility for the lost job, and create mechanisms to share wealth. If you build a company around longevity and make promises to people about openness and transparency, you have to engage with all the positive outcomes and negative side effects, no matter what they are.
I’m sometimes overwhelmed by our current state. My striving for perfection and optimization throughout my life has maybe been a way to give myself a sense of control in a world where, at a macro scale, I don’t actually have much power. We are in a moment now where a handful of individuals and companies will get to decide what’s next. A few governments might be able to influence those decisions. Influencers wield enormous power. But most of us will just be subject to and participants in all that happens. And then we’ll die.
But until then my ears are still ringing.
This article was put together based on interviews J. Paul Neeley did with Isobel Cockerell and Christopher Wylie, as part of their reporting for CAPTURED, our new audio series on how Silicon Valley’s AI prophets are choosing our future for us. You can listen now on Audible.
This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?
In April last year I was in Perugia, at the annual international journalism festival. I was sitting in a panel session about whether AI marked the end of journalism, when a voice note popped up on my Signal.
It came from Christopher Wylie. He’s a data scientist and the whistleblower who cracked open the Cambridge Analytica scandal in 2018. I had just started working with him on a new investigation into AI. Chris was supposed to be meeting me, but he had found himself trapped in Dubai in a party full of Silicon Valley venture capitalists.
“I don’t know if you can hear me — I’m in the toilet at this event, and people here are talking about longevity, how to live forever, but also prepping for when people revolt and when society gets completely undermined,” he had whispered into his phone. “You have in another part of the world, a bunch of journalists talking about how to save democracy. And here, you've got a bunch of tech guys thinking about how to live past democracy and survive.”
A massive storm and a once-in-a-generation flood had paralyzed Dubai when Chris was on a layover on his way to Perugia. He couldn’t leave. And neither could the hundreds of tech guys who were there for a crypto summit. The freakish weather hadn’t stopped them partying, Chris told me over a frantic Zoom call.
“You're wading through knee-deep water, people are screaming everywhere, and then… What do all these bros do? They organize a party. It's like the world is collapsing outside and yet you go inside and it's billionaires and centimillionaires having a party,” he said. “Dubai right now is a microcosm of the world. The world is collapsing outside and the people are partying.”
Chris and I eventually managed to meet up. And for over a year we worked together on a podcast that asks what is really going on inside the tech world. We looked at how the rest of us — journalists, artists, nurses, businesses, even governments — are being captured by big tech’s ambitions for the future and how we can fight back.
Our reporting took us around the world from the lofty hills of Twin Peaks in San Francisco to meet the people building AI models, to the informal settlements of Kenya to meet the workers training those models.
One of these people was Mercy Chimwani, who we visited in her makeshift house with no roof on the outskirts of Nairobi. There was mud beneath our feet, and above you could see the rainclouds through a gaping hole where the unfinished stairs met the sky. When it rained, Mercy told us, water ran right through the house. It’s hard to believe, but she worked for Meta.
Mercy was a content moderator, hired by the middlemen Meta used to source employees. Her job was to watch the internet’s most horrific images and video – training the company’s system so it can automatically filter out such content before the rest of us are exposed to it.
She was paid around a dollar an hour for work that left her so traumatized that she couldn’t sleep. And when she and her colleagues tried to unionize, she was laid off. Mercy was part of the invisible, ignored workforce in the Global South that enables our frictionless life online for little reward.
Of course, we went to the big houses too — where the other type of tech worker lives. The huge palaces made of glass and steel in San Francisco, where the inhabitants believe the AI they are building will one day help them live forever, and discover everything there is to know about the universe.
In Twin Peaks, we spoke to Jeremy Nixon, the creator of AGI House San Francisco (AGI for Artificial General Intelligence). Nixon described an apparently utopian future, a place where we never have to work, where AI does everything for us, and where we can install the sum of human knowledge into our brains. “The intention is to allow every human to know everything that’s known,” he told me.
Later that day, we went to a barbecue in Cupertino and got talking to Alan Boehme, once a chief technology officer for some of the biggest companies in the world, and now an investor in AI startups. Boehme told us how important it was, from his point of view, that tech wasn’t stymied by government regulation. “We have to be worried that people are going to over-regulate it. Europe is the worst, to be honest with you,” he said. “Let's look at how we can benefit society and how this can help lead the world as opposed to trying to hold it back.”
I asked him: isn’t regulation part of the reason we have democratically elected governments — to ensure that all people are kept safe, and that some aren’t left behind by the pace of change? Shouldn’t the governments we elect be the ones deciding whether we regulate AI, and not the people at this Cupertino barbecue?
“You sound like you’re from Sweden,” Boehme responded. “I’m sorry, that’s social democracy. That is not what we are here in the U.S. This country is based on a Constitution. We’re not based on everybody being equal and holding people back. No, we’re not in Sweden.”
As we reported for the podcast, we came to a gradual realization – what’s being built in Silicon Valley isn’t just artificial intelligence, it’s a way of life — even a religion. And it’s a religion we might not have any choice but to join.
In January, the Vatican released a statement in which it argued that we’re in danger of worshiping AI as God. It's an idea we'd discussed with Judy Estrin, who worked on building some of the earliest iterations of the internet. As a young researcher at Stanford in the 1970s, Estrin was building some of the very first networked connections. She is no technophobe, fearful of the future, but she is worried about the zealotry she says is taking over Silicon Valley.
“If you worship innovation, if you worship anything, you can't take a step back and think about guardrails,” she said about the unquestioning embrace of AI. “So we, from a leadership perspective, are very vulnerable to techno populists who come out and assert that this is the only way to make something happen.”
The first step toward reclaiming our lost agency, as AI aims to capture every facet of our world, is simply to pay attention. I've been struck by how rarely we actually listen to what tech leaders are explicitly saying about their vision of the future.
There's a tendency to dismiss their most extreme statements as hyperbole or marketing, but what if they're being honest? What if they truly believe humans, or at least most humans, are replaceable, that traditional concepts of humanity are outdated, that a technological "god" should supersede us? These aren't just ideological positions – they're the foundations for the world being built around us right now.
In our series, we explore artificial intelligence as something that affects our culture, our jobs, our media and our politics. But we should also ask what tech founders and engineers are really building with AI, or what they think they’re building. Because if their vision of society does not have a place for us in it, we should be ready to reclaim our destiny – before our collective future is captured.
Our audio documentary series, CAPTURED: The Secret Behind Silicon Valley’s AI Takeover is available now on Audible. Do please tune in, and you can dig deeper into our stories and the people we met during the reporting below.
Jared Genser and Rafael Yuste are an unlikely pair. Yuste, a professor at Columbia University, spends his days in neuroscience labs, using lasers to experiment on the brains of mice. Genser has traveled the world as an international human rights lawyer representing prisoners in 30 countries. But when they met, the two became fast friends. They found common ground in their fascination with neurorights – in “human rights,” as their foundation’s website puts it, “for the Age of Neurotechnology.”
Together, they asked themselves — and the world — what happens when computers start to read our minds? Who owns our thoughts, anyway? This technology is being developed right now — but as of this moment, what happens to your neural data is a legal black box. So what does the fight to build protections for our brains look like? I sat down with Rafael and Jared to find out.
This conversation has been edited for length and clarity.
Q: Rafael, can you tell me how your journey into neurorights started?
Rafael: The story starts with a particular moment in my career. It happened about ten years ago while I was working in a lab at Columbia University in New York. Our research was focused on understanding how the cerebral cortex works. We were studying mice, because the mouse brain is a good model for the human brain. And what we were trying to do was to implant images into the brains of mice so that they would behave as if they were seeing something, except they weren't seeing anything.
Q: How did that work?
Rafael: We were trying to take control of the mouse’s visual perception. So we’d implant neurotechnology into a mouse using lasers, which would allow us to record the activity of the part of the brain responsible for vision, the visual cortex, and change the activity of those neurons. With our lasers, we could map the activity of this part of the brain and try to control it.
We used to talk, tongue-in-cheek, about playing the piano with the brain. These mice were looking at a screen that showed them a particular image: black and white bars of light with very high contrast.
We trained the mice to lick from a little spout of juice whenever they saw that image. With our new technology, we were able to decode the brain signals that corresponded to this image in the mouse and — we hoped — play them back to trick the mice into seeing the image again, even though it wasn’t there.
Q: So you artificially activated particular neurons in the brain to make it think it had seen that image?
Rafael: These are little laboratory mice. We make a surgical incision and we implant in their skull a transparent chamber so that we can see their brains from above with our microscope, with our lasers. And we use our lasers to optically penetrate the brain. We use one laser to image, to map the activity of these neurons. And we use a second laser, a second wavelength, to activate these neurons again. All of this is done with a very sophisticated microscope and computer equipment.
Q: So what happened when you tried to artificially activate the mouse’s neurons, to make it think it was looking at the picture of the black and white bars?
Rafael: When we did that, the mouse licked from the spout of juice in exactly the same way as if it were looking at the image, except that it wasn’t. We were putting that image into its brain. The mouse’s behavior when we took over its visual perception was identical to when it was actually seeing the real image.
Q: It must have been a huge breakthrough.
Rafael: Yes, I remember it perfectly. It was one of the most salient days of my life. We were actually altering the behavior of the mice by playing the piano with their cortex. We were ecstatic. I was super happy in the lab, making plans.
And then when I got home, that's when it hit me. I said, “wait, wait, wait, this means humans will be able to do the same thing to other humans.”
I felt this responsibility, like it was a double-edged sword. That night I didn't sleep, I was shocked. I talked to my wife, who works in human rights. And I decided that I should start to get involved in cleaning up the mess.
Q: What do you mean by that?
Rafael: I felt the responsibility of ensuring that these powerful methods that could decode brain activity and manipulate perception had to be regulated to ensure that they were used for the benefit of humanity.
Q: Jared, can you tell me how you came into this?
Jared: Rafael and I met about four years ago. I'm an international human rights lawyer based in Washington and very well known globally for working in that field. I had a single hour-long conversation with Rafa when we met, and it completely transformed my view of the human rights challenges we’ll face in this century. I had no idea about neurotechnologies, where they were, or where they might be heading. Learning how far along they have come and what’s coming in just the next few years — I was blown away. I was both excited and concerned as a human rights lawyer about the implications for our common humanity.
Q: What was your reaction when you heard of the mouse experiment?
Jared: Immediately, I thought of The Matrix. He told me that what can be done in a mouse today could be done in a chimpanzee tomorrow and a human after that. I was shocked by the possibilities. While implanting images into a human brain is still far off, there’s every reason to expect it will eventually be possible.
Q: Can you talk me through some of the other implications of this technology?
Jared: Within the next few years, we’re expected to have wearable brain-computer interfaces that can decode thought to text at 75–80 words per minute with 90 percent accuracy.
That will be an extraordinary revolution in how we interact with technology. Apple is already thinking about this—they filed a patent last year for the next-generation AirPods with built-in EEG scanners. This is undoubtedly one of the applications they are considering.
In just a few years, if you have an iPhone in your pocket and are wearing earbuds, you could think about opening a text message, dictating it, and sending it—all without touching a device. These developments are exciting.
Rafael: I imagine that we’ll be hybrid, and part of our processing will happen with devices connected to our brains, to our nervous system. This could enhance our perception. Our memories — you would be able to do the equivalent of a web search mentally. And that’s going to change our behavior. That’s going to change the way we absorb information.
Jared: Ultimately, there's every reason to expect we’ll be able to cure chronic pain disease. It’s already being shown in labs that an implantable brain-computer interface can manage pain for people with chronic pain diseases. By turning off misfiring neurons, you can reduce the pain they feel.
But if you can turn off the neurons, you can turn on the neurons. And that would mean you'll have a wearable cap or hat that could torture a person simply by flipping a switch. In just a few years, physical torture may no longer be necessary because of brain-computer interfaces.
And if these devices can decode your thoughts, that raises serious concerns. What will the companies behind these technologies be able to do with your thoughts? Could they be decoded against your wishes and used for purposes beyond what the devices are advertised for? Those are critical questions we need to address.
Q: How did you start thinking about ways to build rights and guardrails around neurotechnology?
Rafael: I was inspired by the Manhattan Project, where scientists who developed nuclear technology were also involved in regulating its use. That led me to think that we should take a similar approach with neurotechnology — where the power to read and manipulate brain activity needs to be regulated. And that’s how we came up with the idea of the Neurorights Foundation.
So in 2017, I organized a meeting at Columbia University’s Morningside campus of experts from various fields to discuss the ethical and societal implications of neurotechnology. And this is where we came up with the idea of neurorights — rights that would protect the brain and brain data.
Jared: If you look at global consumer data privacy laws, they protect things like biometric, genetic, and biological information. But neural data doesn't fall under any of these categories. Neural data is electrical and not biological, so it isn't considered biometric data.
There are few, if any, safeguards to protect users from having their neural data used for purposes beyond the intended function of the devices they’ve purchased.
So because neural data doesn't fit within existing privacy protections, it isn't covered by state privacy laws. To address this, we worked with Colorado to adopt the first-ever amendment to its Privacy Act, which defines neural data and includes it under sensitive, protected data.
Rafael: We identified five areas of concern where neurotechnology could impact human rights:
The first is the right to mental privacy – ensuring that the content of our brain activity can't be decoded without consent.
The second is the right to mental integrity – so that no one can change a person’s identity or consciousness.
The third is the right to free will – so that our behavior is determined by our own volition, not by external influences, to prevent situations like what we did to those mice.
The fourth is the right to equal access to neural augmentation. Technology and AI will lead to human augmentation of our mental processes, our memory, our perception, our capabilities. And we think there should be fair and equal access to neural augmentation in the future.
And the fifth neuroright is protection from bias and discrimination – safeguarding against interference in mental activity, as neurotechnology could both read and alter brain data, and change the content of people's mental activity.
Jared: The Neurorights Foundation is focused on promoting innovation in neurotechnologies while managing the risks of misuse or abuse. We see enormous potential in neurotechnologies that could transform what it means to be human. At the same time, we want to ensure that proper guardrails are in place to protect people's fundamental human rights.
This article is an adapted extract from CAPTURED, our new podcast series with Audible about the secret behind Silicon Valley’s AI Takeover. Click here to listen.
We’re moving slowly through the traffic in the heart of the Kenyan capital, Nairobi. Gleaming office blocks have sprung up in the past few years, looming over the townhouses and shopping malls. We’re with a young man named James Oyange — but everyone who knows him calls him Mojez. He’s peering out the window of our 4x4, staring up at the high-rise building where he used to work.
Mojez first walked into that building three years ago, as a twenty-five-year-old, thinking he would be working in a customer service role at a call center. As the car crawled along, I asked him what he would say to that young man now. He told me he’d tell his younger self something very simple:
“The world is an evil place, and nobody's coming to save you.”
It wasn't until Mojez started work that he realised what his job really required him to do. And the toll it would take.
It turned out, Mojez's job wasn't in customer service. It wasn't even in a call center. His job was to be a “Content Moderator,” working for social media giants via an outsourcing company. He had to read and watch the most hateful, violent, grotesque content released on the internet and get it taken down so the rest of us didn’t have to see it. And the experience changed the way he thought about the world.
“You tend to look at people differently,” he said, talking about how he would go down the street and think of the people he had seen in the videos — and wonder if passersby could do the same things, behave in the same ways. “Can you be the person who, you know, defiled this baby? Or I might be sitting down with somebody who has just come from abusing their wife, you know.”
There was a time – and it wasn’t that long ago – when things like child pornography and neo-Nazi propaganda were relegated to the darkest corners of the internet. But with the rise of algorithms that can spread this kind of content to anyone who might click on it, social media companies have scrambled to amass an army of hidden workers to clean up the mess.
These workers are kept hidden for a reason. They say if slaughterhouses had glass walls, the world would stop eating meat. And if tech companies were to reveal what they make these digital workers do, day in and day out, perhaps the world would stop using their platforms.
This isn't just about “filtering content.” It's about the human infrastructure that makes our frictionless digital world possible – the workers who bear witness to humanity's darkest impulses so that the rest of us don't have to.
Mojez is fed up with being invisible. He's trying to organise a union of digital workers to fight for better treatment by the tech companies. “Development should not mean servitude,” he said. “And innovation should not mean exploitation, right?”
We are now in the outskirts of Nairobi, where Mojez has brought us to meet his friend, Mercy Chimwani. She lives on the ground floor of the half-built house that she rents. There's mud beneath our feet, and above you can see the rain clouds through a gaping hole where the unfinished stairs meet the sky. There’s no electricity, and when it rains, water runs right through the house. Mercy shares a room with her two girls, her mother, and her sister.
It’s hard to believe, but this informal settlement without a roof is the home of someone who used to work for Meta.
Mercy is part of the hidden human supply chain that trains AI. She was hired by what’s called a BPO, or a Business Process Outsourcing company, a middleman that finds cheap labour for large Western corporations. Often people like Mercy don’t even know who they’re really working for. But for her, the prospect of a regular wage was a step up, though her salary – $180 a month, or about a dollar an hour – was low, even by Kenyan standards.
She started out working for an AI company – she did not know the name – training software to be used in self-driving cars. She had to annotate what’s called a “driveable space” – drawing around stop signs and pedestrians, teaching the cars’ artificial intelligence to recognize hazards on its own.
And then, she switched to working for a different client: Meta.
“On the first day on the job it was hectic. Like, I was telling myself, like, I wish I didn't go for it, because the first image I got to see, it was a graphic image.” The video, Mercy told me, is imprinted on her memory forever. It was a person being stabbed to death.
“You could see people committing suicide live. I also saw a video of a very young kid being raped live. And you are here, you have to watch this content. You have kids, you are thinking about them, and here you are at work. You have to like, deal with that content. You have to remove it from the platform. So you can imagine all that piling up within one person. How hard it is,” Mercy said.
Silicon Valley likes to position itself as the pinnacle of innovation. But what they hide is this incredibly analogue, brute force process where armies of click workers relentlessly correct and train the models to learn. It’s the sausage factory that makes the AI sausage. Every major tech company does this – TikTok, Facebook, Google and OpenAI, the makers of ChatGPT.
Mercy was saving to move to a house that had a proper roof. She wanted to put her daughters into a better school. So she felt she had to carry on earning her wage. And then she realised that nearly everyone she worked with was in the same situation as her. They all came from the very poorest neighborhoods in Nairobi. “I realised, like, yo, they’re really taking advantage of people who are from the slums,” she said.
After we left Mercy’s house, Mojez took us to the Kibera informal settlement. “Kibera is the largest urban slum area in Africa, and the third largest slum in the entire world,” he told us as we drove carefully through the twisting, crooked streets. There were people everywhere – kids practicing a dance routine, whole families piled onto motorbikes. There were stall holders selling vegetables and live chickens, toys and wooden furniture. Most of the houses had corrugated iron roofs and no running water indoors.
Kibera is where the model of recruiting people from the poorest areas to do tech work was really born. A San Francisco-based organization called Sama started training and hiring young people here to become digital workers for Big Tech clients including Meta and OpenAI.
Sama claimed that they offered a way for young Kenyans to be a part of Silicon Valley’s success. Technology, they argued, had the potential to be a profound equalizer, to create opportunities where none existed.
Mojez has brought us into the heart of Kibera to meet his friend Felix. A few years ago Felix heard about the Sama training school — back then it was called Samasource. He heard how they were teaching people to do digital work, and that there were jobs on offer. So, like hundreds of others, Felix signed up.
“This is Africa,” he said, as we sat down in his home. “Everyone is struggling to find a job.” He nodded his head out towards the street. “If right now you go out here, uh, out of 10, seven or eight people have worked with SamaSource.” He was referring to people his age – Gen Z and young millennials – who were recruited by Sama with the promise that they would be lifted out of poverty.
And for a while, Felix’s life was transformed. He was the main breadwinner for his family, for his mother and two kids, and at last he was earning a regular salary.
Kibera is Africa’s largest urban slum; hundreds of young people living here were recruited to work on projects for Big Tech. Photo: Becky Lipscombe; Simone Boccaccio/SOPA Images/LightRocket via Getty Images.
But in the end, Felix was left traumatized by the work he did. He was laid off. And now he feels used and abandoned. “There are so many promises. You’re told that your life is going to be changed, that you’re going to be given so many opportunities. But I wouldn't say it's helping anyone, it's just taking advantage of people,” he said.
When we reached out to Sama, a PR representative disputed the notion that Sama was taking advantage and cashing in on Silicon Valley’s headlong rush towards AI.
Mental health support, the PR insisted, had been provided, and the majority of Sama’s staff were happy with the conditions. “Sama,” she said, “has a 16-year track record of delivering meaningful work in Sub-Saharan Africa, lifting nearly 70,000 people out of poverty.” Sama eventually cancelled its contracts with Meta and OpenAI, and says it no longer recruits content moderators. When we spoke to OpenAI, which has hired people in Kenya to train their models, they said that they believe data annotation work needed to be done humanely. The efforts of the Kenyan workers were, they said, “immensely valuable.”
You can read Sama’s and OpenAI’s responses to our questions in full below. Meta did not respond to our requests for comment.
Despite their defense of their record, Sama is facing legal action in Kenya.
“I think when you give people work for a period of time and those people can't work again because their mental health is destroyed, that doesn't look like lifting people out of poverty to me,” said Mercy Mutemi, a lawyer representing more than 180 content moderators in a lawsuit against Sama and Meta. The workers say they were unfairly laid off when they tried to lobby for better conditions, and then blacklisted.
“You've used them,” Mutemi said. “They're in a very compromised mental health state, and then you've dumped them. So how did you help them?”
As Mutemi sees it, the result of recruiting from the slum areas is that you have a workforce of disadvantaged people, who’ll be less likely to complain about conditions.
“People who've gone through hardship, people who are desperate, are less likely to make noise at the workplace because then you get to tell them, ‘I will return you to your poverty.’ What we see is again, like a new form of colonization where it's just extraction of resources, and not enough coming back in terms of value whether it's investing in people, investing in their well-being, or just paying decent salaries, investing in skill transfer and helping the economy grow. That's not happening.”
“This is the next frontier of technology,” she added, “and you're building big tech on the backs of broken African youth.”
At the end of our week in Kenya, Mojez takes us to Karura forest, the green heart of Nairobi. It’s an oasis of calm, where birds, butterflies and monkeys live among the trees, and the rich red earth has that amazing, just-rained-on smell. He comes here to decompress, and to try to forget about all the horrific things he’s seen while working as a content moderator.
Mojez describes the job he did as a digital worker as a loss of innocence. “It made me think about, you know, life itself, right? And that we are alone and nobody's coming to save us. So nowadays I've gone back to how my ancestors used to do their worship — how they used to give back to nature.” We're making our way towards a waterfall. “There's something about the water hitting the stones and just gliding down the river that is therapeutic.”
For Mojez, one of the most frightening things about the work he was doing was the way that it numbed him, accustomed him to horror. Watching endless videos of people being abused, beheaded, or tortured — while trying to hit performance targets every hour — made him switch off his humanity, he said.
A hundred years from now, will we remember the workers who trained humanity’s first generation of AI? Or will these 21st-century monuments to human achievement bear only the names of the people who profited from their creation?
Artificial intelligence may well go down in history as one of humanity’s greatest triumphs. Future generations may look back at this moment as the time we truly entered the future.
And just as ancient monuments like the Colosseum endure as a lasting embodiment of the values of their age, AI will embody the values of our time too.
So, we face a question: what legacy do we want to leave for future generations? We can’t redesign systems we refuse to see. We have to acknowledge the reality of the harm we are allowing to happen. But every story – like those of Mojez, Mercy and Felix – is an invitation. Not to despair, but to imagine something better for all of us rather than the select few.
Christopher Wylie and Becky Lipscombe contributed reporting. Our new audio series on how Silicon Valley’s AI prophets are choosing our future for us is out now on Audible.
This week, as DeepSeek, a free AI-powered chatbot from China, embarrassed American tech giants and panicked investors, sending global markets tumbling, investor Marc Andreessen described its emergence as "AI's Sputnik moment." That is, the moment when self-belief and confidence tip over into hubris. It was not just stock prices that plummeted. The carefully constructed story of American technological supremacy also took a deep plunge.
But perhaps the real shock should be that Silicon Valley was shocked at all.
For years, Silicon Valley and its cheerleaders spread the narrative of inevitable American dominance of the artificial intelligence industry. From the "Why China Can't Innovate" cover story in the Harvard Business Review to the breathless reporting on billion-dollar investments in AI, U.S. media spent years building an image of insurmountable Western technological superiority. Even this week, when Wired reported on the "shock, awe, and questions" DeepSeek had sparked, the persistent subtext seemed to be that technological efficiency from unexpected quarters was somehow fundamentally illegitimate.
“In the West, our sense of exceptionalism is truly our greatest weakness,” says data analyst Christopher Wylie, author of Mindf*ck, who famously blew the whistle on Cambridge Analytica in 2018.
That arrogance was on full display just last year when OpenAI's Sam Altman, speaking to an audience in India, declared: "It's totally hopeless to compete with us. You can try and it's your job to try but I believe it is hopeless." He was dismissing the possibility that teams outside Silicon Valley could build substantial AI systems with limited resources.
There are still questions over whether DeepSeek had access to more computing power than it is admitting. Scale AI chief executive Alexandr Wang said in a recent interview that the Chinese company had access to thousands more of the highest-grade chips than people know about, despite U.S. export controls. What's clear, though, is that Altman didn't anticipate that a competitor would simply refuse to play by the rules he was trying to set and would instead reimagine the game itself.
By developing an AI model that matches—and in many ways surpasses—American equivalents, DeepSeek challenged the Silicon Valley story that technological innovation demands massive resources and minimal oversight. While companies like OpenAI have poured hundreds of billions into massive data centers—with the Stargate project alone pledging an “initial investment” of $100 billion—DeepSeek demonstrated a fundamentally different path to innovation.
"For the first time in public, they've provided an efficient way to train reasoning models," explains Thomas Cao, professor of technology policy at Tufts University. "The technical detail is that they've come up with a way to do reinforcement learning without supervision. You don't have to hand-label a lot of data. That makes training much more efficient."
For the American media, which has drunk the Silicon Valley Kool-Aid, the DeepSeek story is a hard one to stomach. For a long time, Wylie argues, while countries in Asia made massive technological breakthroughs, the story commonly told to the American people focused on American tech exceptionalism.
An alternative approach, Wylie says, would be to see and “acknowledge that China is doing good things we can learn from without meaning that we have to adopt their system. Things can exist in parallel.” But instead, he adds, the mainstream media followed the politicians down the rabbit hole of focusing on the "China threat."
These geopolitical fears have helped Big Tech shield itself from genuine competition and regulatory scrutiny. The narrative of a Cold War style “AI race” with China has also fed the assumption that a major technological power can be bullied into submission through trade restrictions.
That assumption has also crumbled. The U.S. has spent the past two years attempting to curtail China's AI development through increasingly strict controls on advanced semiconductors. These restrictions, which began under Biden in 2022 and were significantly expanded in January 2025, were designed to prevent Chinese companies from accessing the most advanced chips needed for AI development.
DeepSeek developed its model using older generation chips stockpiled before the restrictions took effect, and its breakthrough has been held up as an example of genuine, bootstrap innovation. But Professor Cao cautions against reading too much into how export controls have catalysed development and innovation at DeepSeek. "If there had been no export control requirements,” he said, “DeepSeek could have been able to do things even more efficiently and faster. We don't see the counterfactual."
DeepSeek is a direct rebuke to both Western assumptions about Chinese innovation and the methods the West has used to curtail it.
As millions of Americans downloaded DeepSeek, making it the most downloaded app in the U.S., OpenAI’s Steven Heidel peevishly claimed that using it would mean giving away data to the Chinese Communist Party. Lawmakers too have warned about national security risks and dozens of stories like this one echoed suggestions that the app could be sending U.S. data to China.
Security concerns aside, what really sets DeepSeek apart from its Western counterparts is not just the efficiency of the model, but the fact that it is open source. Which, counter-intuitively, makes a Beijing-funded app more democratic than its Silicon Valley predecessors.
In the heated discourse surrounding technological innovation, "open source" has become more than just a technical term—it's a philosophy of transparency. Unlike proprietary models where code is a closely guarded corporate secret, open source invites global scrutiny and collective improvement.
At its core, open source means that a program's source code is made freely available for anyone to view, modify, and distribute. When a technology is open source, users can download the entire code, run it on their own servers, and verify every line of its functionality. For consumers and technologists alike, open source means the ability to understand, modify, and improve technology without asking permission. It's a model that prioritizes collective advancement over corporate control. Already, for instance, the Chinese tech behemoth Alibaba has released a new version of its own large language model that it says is an upgrade on DeepSeek.
Unlike ChatGPT and other closed Western AI systems, DeepSeek can be run locally without giving away any data. "Despite the media fear-mongering, the irony is DeepSeek is now open source and could be implemented in a far more privacy-preserving way than anything offered by Meta or OpenAI," Wylie says. “If Sam Altman open sourced OpenAI, we wouldn’t look at it with the same skepticism, he would be nominated for the Nobel Peace Prize."
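In practice, "running it locally" looks something like the sketch below: a short, hypothetical Python example using the open-source Hugging Face transformers library, with one of the small distilled DeepSeek-R1 checkpoints as an example model identifier. The point is that the weights are downloaded once and inference then happens entirely on your own machine.

```python
# Local inference with an open-weights model: prompts and outputs
# never leave your own hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example identifier for a small distilled DeepSeek-R1 checkpoint.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What does open source mean?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

And because the code and weights are published, anyone can inspect them line by line — the verifiability the paragraph above describes.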
The open-source nature of DeepSeek is a huge part of the disruption it has caused. It challenges Silicon Valley's entire proprietary model and challenges our collective assumptions about both AI development and global competition. Not surprisingly, part of Silicon Valley’s response has been to complain that Chinese companies are using American companies’ intellectual property, even as their own large language models have been built by consuming vast amounts of information without permission.
This counterintuitive strategy of openness coming from an authoritarian state also gives China a massive soft power win that it will translate into geopolitical brownie points. Just as TikTok's algorithms outmaneuvered Instagram and YouTube by focusing on accessibility over profit, DeepSeek, which is currently topping iPhone downloads, represents another moment where what's better for users—open-source, efficient, privacy-preserving—challenges what's better for the boardroom.
We are yet to see how DeepSeek will reroute the development of AI, but just as the original Sputnik moment galvanized American scientific innovation during the Cold War, DeepSeek could shake Silicon Valley out of its complacency. For Professor Cao the immediate lesson is that the US must reinvest in fundamental research or risk falling behind. For Wylie, the takeaway of the DeepSeek fallout in the US is more meta: There is no need for a new Cold War, he argues. “There will only be an AI war if we decide to have one.”
It's time to acknowledge an uncomfortable truth. The internet, as we've known it for the last 15 years, is breaking apart. This is not just true in the sense of, say, China or North Korea not having access to Western services and apps. Across the planet, more and more nations are drawing clear lines of sovereignty between their internet and everyone else's. Which means it's time to finally ask ourselves an even more uncomfortable question: what happens when the World Wide Web is no longer worldwide?
Over the last few weeks the US has been thrown into a tailspin over the impending divest-or-ban law that might block the youth of America from accessing their favorite short-form video app. But if you've only been following the Supreme Court's hearing on TikTok, you may have totally missed an entirely separate Supreme Court hearing on whether southern American states like Texas are constitutionally allowed to block porn sites like Pornhub. As of this month, Pornhub is unavailable in 17 US states over its refusal to adhere to "age-verification laws" that would force it to collect users' IDs before they browse the site, making sensitive personal information vulnerable to security breaches.
But it's not just US lawmakers that are questioning what's allowed on their corner of the web.
Following a recent announcement that Meta would be relaxing its fact-checking standards, Brazilian regulators demanded a thorough explanation of how the change would affect the country's 100 million users. Currently, the Brazilian government is "seriously concerned" about these changes. It is almost a verbatim repeat of how Brazil dealt with X last year, when the platform was banned for almost two months over its handling of misinformation about the country's 2023 attempted coup.
Speaking of X, the European Union seems to have finally had enough of Elon Musk's digital megaphone. They've been investigating the platform since 2023 and have given Musk a February deadline to explain exactly how the platform's algorithm works. To say nothing of the French and German regulators grappling with how to deal with Musk's interference in their national politics.
And though China's Great Firewall has always blocked the rest of the world from the country's internet users, last week there was a breach that Chinese regulators are desperately trying to patch: Americans migrated to a competing app called RedNote, which has now caught the attention of lawmakers in China, who are likely to wall off American users from interacting with Chinese users, and of lawmakers in the US, who want to ban it too once they finally deal with TikTok.
All of this has brought us to a stark new reality, where we can no longer assume that the internet is a shared global experience, at least when it comes to the web's most visible and mainstream apps. New digital borders are being drawn and they will eventually impact your favorite app. Whether you're an activist, a journalist, or even just a normal person hoping to waste some time on their phone (and maybe make a little money), the spaces you currently call home online are not permanent.
Time to learn how a VPN works. At least until the authorities restrict and regulate access to VPNs too, as they already do in countries such as China, Iran, Russia and India.
A version of this story was published in this week’s Coda Currents newsletter. Sign up here.