Swedish company Saab and German defense startup Helsing have conducted combat trials of a Gripen E fighter jet piloted by artificial intelligence, pitted against a real-life human pilot, The War Zone reports.
The first of these test flights took place on 28 May. By the third combat sortie on 3 June, the AI agent, dubbed Centaur, was ready to engage in a beyond-visual-range (BVR) air battle against a crewed Gripen D fighter.
During the process, the AI agent rapidly accumulated experience and improved its decision-making skills in BVR combat, a battlefield Saab describes as “like playing chess in a supersonic jet with advanced missiles.”
Saab has confirmed that the Centaur AI system could potentially be expanded to close-range dogfights within visual range (WVR) as well. However, the initial focus remains on BVR engagements, which the company describes as the most critical aspect of air combat, a point reinforced by the ongoing air war in Ukraine.
In a series of dynamic BVR scenarios, the Gripen E’s sensors received target data, and the Centaur AI autonomously executed complex maneuvers on behalf of the test pilot. The culmination of these scenarios saw the AI agent providing the pilot with firing cues for simulated air-to-air weapon launches.
Meanwhile, Marcus Wandt, Saab’s Chief Innovation Officer and a test pilot himself, remarked that the test flights “so far point to the fact that ‘it is not a given’ that a pilot will be able to win in aerial combat against an AI-supported opponent.”
“This is an important achievement for Saab, demonstrating our qualitative edge in sophisticated technologies by making AI deliver in the air,” said Peter Nilsson, head of Advanced Programs within Saab’s Aeronautics business area.
Insights gained from this program will feed into Sweden’s future fighter program, which aims to select one or more next-generation air combat platforms by 2031.
Russian air force suffers devastating blow it will not recover from
The loss of strategic missile-carrying bombers destroyed or damaged today is a blow Russia will not be able to compensate for, according to military analyst Oleksandr Kovalenko.
Today, Russia lost over 40 aircraft, either destroyed or damaged, including valuable strategic bombers of various types. The Ukrainian strikes hit four military airfields, including the Olenya airbase near Murmansk and the Belaya airbase in Irkutsk Oblast.
The unique feature of this operation was that the drones didn’t fly from Ukraine; instead, they were transported by truck closer to the targets and launched from minimal distance. They were controlled by artificial intelligence, which selected targets autonomously.
Kovalenko stresses that aircraft like the Tu-95MS, Tu-22M3, and Tu-160 are no longer manufactured in modern Russia. What Russian propaganda calls “new” aircraft are merely refurbished Soviet-era units.
“To this day, Russia has not produced a single brand-new Tu-22M3 or Tu-160 from scratch — only reassembled legacy models from the Soviet era. In fact, everything that was damaged or destroyed today is beyond restoration and certainly can’t be replaced by new production,” Kovalenko says.
The loss of the Tu-160 is especially painful for Russia. It is the most expensive and unique aircraft in the Russian Aerospace Forces, a true “unicorn,” as Kovalenko puts it.
“Sadly, it’s not the last unicorn. If there’s a true last unicorn, it would be the A-50 early warning aircraft. I think even more spectacular news about that might be coming soon!” he adds.
Earlier, Ukrainian journalist Yurii Butusov said the Security Service smuggled 150 small strike drones and 300 munitions into Russia; 116 of the drones took off during the latest operation against Russian aircraft.
At least 150 AI-guided Ukrainian drones strike 41 Russian aircraft in historic truck-smuggled strike
According to the Security Service of Ukraine (SBU), the estimated value of the aircraft destroyed or damaged during the special operation “Web” is approximately $7 billion.
The agency notes that a total of 34% of Russia’s strategic missile carriers based at their main airfields were hit in the operation. The SBU promises to release more details about the mission later.
“You thought Ukraine was that simple? Ukraine is super. Ukraine is unique. It has endured the steamrollers of history. In today’s world, it is priceless,” the SBU press service stated, quoting prominent Ukrainian writer Lina Kostenko.
Meanwhile, CBS News reports that the White House was not informed in advance of Ukraine’s plans to carry out a large-scale strike on Russian strategic aviation. The network summarized the day’s events and added that White House spokespeople declined to comment on the Ukrainian strike. Axios has also confirmed this, citing an unnamed Ukrainian official.
Never before have AI-guided drones executed such precise strikes on Russian military airbases as in Ukraine’s Operation Web, writes Clash Report.
On 1 June, Ukrainian drones featuring artificial intelligence attacked several Russian military airfields across different regions. Over 40 aircraft were destroyed or damaged, including strategic bombers used by Russia to kill civilians. Unlike previous attacks, the drones did not fly thousands of kilometers from Ukraine. Instead, they were transported into Russian territory by truck, then launched into the air for sudden strikes.
“Last year, Ukrainian military intelligence scanned Russian bomber aircraft and trained AI to recognize them and execute automatic dive attack algorithms. Today, we’ve seen the results,” reports Clash Report.
Two types of drones were used — vertical takeoff quadcopters and “wing-type” drones launched from mini catapults.
At the same time, Ukrainian journalist Yurii Butusov emphasizes the uniqueness of the Security Service operation, calling it a historic military textbook case, noting that 41 aircraft were hit across four airbases.
“Some drones attacked using auto-targeting. Results will be confirmed by satellite imagery,” Butusov adds.
According to him, the Security Service smuggled 150 small strike drones and 300 munitions into Russia; 116 of the drones took off. Control was conducted via Russian telecom networks using auto-targeting.
“The drones attacked from close range during daylight deep in enemy rear areas… the Russians did not expect small quadcopters to strike in daylight,” the journalist says.
The most successful attack was on Olenya airfield, where drones hit fuel tanks, causing a large number of aircraft to burn completely. All Ukrainian agents have returned safely home without losses.
As Rome prepared to select a new pope, few beyond Vatican insiders were focused on what the transition would mean for the Catholic Church's stance on artificial intelligence.
Yet Pope Francis had established the Church as an erudite, insightful voice on AI ethics. "Does it serve to satisfy the needs of humanity to improve the well-being and integral development of people?" he asked G7 leaders last year. "Or does it, rather, serve to enrich and increase the already high power of the few technological giants despite the dangers to humanity?"
Francis – and the Vatican at large – had called for meaningful regulation in a world where few institutions dared challenge the tech giants.
During the last months of Francis’s papacy, Silicon Valley, aided by a pliant U.S. government, ramped up its drive to rapidly consolidate power.
OpenAI is expanding globally, tech CEOs are becoming a key component of presidential diplomatic missions, and federal U.S. lawmakers are attempting to effectively deregulate AI for the next decade.
For those tracking the collision between technological and religious power, one question looms large: Will the Vatican continue to be one of the few global institutions willing to question Silicon Valley's vision of our collective future?
Memories of watching the chimney on television during Pope Benedict’s election had captured my imagination as a child brought up in a secular, Jewish-inflected household. I longed to see that white smoke in person. The rumors in Rome last Thursday morning were that the matter wouldn’t be settled that day. So I was furious when I was stirred from my desk in the afternoon by the sound of pealing bells all over Rome. “Habemus papam!” I heard an old nonna call down to her husband in the courtyard.
I sprinted out onto the street and joined people streaming from all over the city in the direction of St. Peter’s. In recent years, the time between white smoke and the new pope’s arrival on the balcony has been as little as forty-five minutes. People poured over bridges and up the Via della Conciliazione towards the famous square. Among the rabble I spotted a couple of friars darting through the crowd, making speedier progress than anyone, their white cassocks flapping in the wind. Together, the friars and I made it through the security checkpoints and out into the square just as a great roar went up.
The initial reaction to the announcement that Robert Francis Prevost would be the next pope, with the name Leo XIV, was subdued. Most people around me hadn’t heard of him — he wasn’t one of the favored cardinals, he wasn’t Italian, and we couldn’t even Google him, because there were so many people gathered that no one’s phones were working. A young boy managed to get on the phone to his mamma, and she related the information about Prevost to us via her son. Americano, she said. From Chicago.
A nun from an order in Tennessee piped up that she had met Prevost once. She told us that he was mild-mannered and kind, that he had lived in Peru, and that he was very internationally-minded. “The point is, it’s a powerful American voice in the world, who isn’t Trump,” one American couple exclaimed to our little corner of the crowd.
It only took a few hours before Trump supporters, led by former altar boy Steve Bannon, realized this American pope wouldn’t be a MAGA pope. Leo XIV had posted on X in February, criticizing JD Vance, the Trump administration’s most prominent Catholic.
"I mean it's kind of jaw-dropping," Bannon told the BBC. "It is shocking to me that a guy could be selected to be the Pope that had had the Twitter feed and the statements he's had against American senior politicians."
Laura Loomer, a prominent far-right pro-Trump activist, aired her own misgivings on X: “He is anti-Trump, anti-MAGA, pro-open borders, and a total Marxist like Pope Francis.”
As I walked home with everybody else that night – with the friars, the nuns, the pilgrims, the Romans, the tourists caught up in the action – I found myself thinking about our "Captured" podcast series, which I've spent the past year working on. In our investigation of AI's growing influence, we documented how tech leaders have created something akin to a new religion, with its own prophets, disciples, and promised salvation.
Walking through Rome's ancient streets, the dichotomy struck me: here was the oldest continuous institution on earth selecting its leader, while Silicon Valley was rapidly establishing what amounts to a competing belief system.
Would this new pope, taking the name of Leo — deliberately evoking Leo XIII who steered the church through the disruptions of the Industrial Revolution — stand against this present-day technological transformation that threatens to reshape what it means to be human?
I didn't have to wait long to find out. In his address to the College of Cardinals on Saturday, Pope Leo XIV said: "In our own day, the Church offers to everyone the treasury of her social teaching, in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice and labor."
Hours before the new pope was elected, I spoke with Molly Kinder, a fellow at the Brookings Institution who’s an expert in AI and labor policy. Her research on the Vatican, labor, and AI was published with Brookings following Pope Francis’s death.
She described how the Catholic Church has a deeply held belief in the dignity of work — and how AI evangelists’ promise to create a post-work society with artificial intelligence is at odds with that.
“Pope John Paul II wrote something that I found really fascinating. He said, ‘work makes us more human.’ And Silicon Valley is basically racing to create a technology that will replace humans at work,” Kinder, who was raised Catholic, told me. “What they're endeavoring to do is disrupt some of the very core tenets of how we've interpreted God's mission for what makes us human.”
Whoever becomes the next Pope will inherit not just the leadership of the Catholic Church but a remarkably sophisticated approach to technology — one that in many ways outpaces governments worldwide. While Silicon Valley preaches Artificial Intelligence as a quasi-religious force capable of saving humanity, the Vatican has been developing theological arguments to push back against this narrative.
In the hours after Pope Francis died on Easter Monday, I went, like thousands of others in Rome, straight to St Peter's Square to witness the city in mourning as the basilica's somber bell tolled.
Just three days before, on Good Friday, worshippers in the eternal city proceeded, by candlelight, through the ruins of the Colosseum, as some of the Pope's final meditations were read to them. "When technology tempts us to feel all powerful, remind us," the leader of the service called out. "We are clay in your hands," the crowd responded in unison.
As our world becomes ever more governed by tech, the Pope's meditations are a reminder of our flawed, common humanity. We have built, he warned, "a world of calculation and algorithms, of cold logic and implacable interests." These turned out to be his last public words on technology. Right until the end, he called on his followers to think hard about how we're being captured by the technology around us. "How I would like for us to look less at screens and look each other in the eyes more!"
Faith vs. the new religion
Unlike politicians who often struggle to grasp AI's technical complexity, the Vatican has leveraged its centuries of experience with faith, symbols, and power to recognize AI for what it increasingly represents: not just a tool, but a competing belief system with its own prophets, promises of salvation, and demands for devotion.
In February 2020, the Vatican's Pontifical Academy for Life published the Rome Call for AI Ethics, arguing that "AI systems must be conceived, designed and implemented to serve and protect human beings and the environment in which they live." And in January of this year, the Vatican released a document called Antiqua et Nova – one of its most comprehensive statements to date on AI – that warned we're in danger of worshipping AI as a God, or as an idol.
Our investigation into Silicon Valley's cult-like movement
I first became interested in the Vatican's perspective on AI while working on our Audible podcast series "Captured" with Cambridge Analytica whistleblower Christopher Wylie. In our year-long investigation, we discovered how Silicon Valley's AI pioneers have adopted quasi-religious language to describe their products and ambitions — with some tech leaders explicitly positioning themselves as prophets creating a new god.
In our reporting, we documented tech leaders like Bryan Johnson speaking literally about "creating God in the form of superintelligence," billionaire investors discussing how to "live forever" through AI, and founders talking about building all-knowing, all-powerful machines that will free us from suffering and propel us into utopia. One founder told us their goal was to install "all human knowledge into every human" through brain-computer interfaces — in other words, make us all omniscient.
Nobel laureate Maria Ressa, whom I spoke with recently, told me she had warned Pope Francis about the dangers of algorithms designed to promote lies and disinformation. "Francis understood the impact of lies," she said. She explained to the Pope how Facebook had destroyed the political landscape in the Philippines, where the platform’s engagement algorithms allowed disinformation to spread like wildfire. "I said — 'this is literally an incentive structure that is rewarding lies.'"
According to Ressa, AI evangelists in Silicon Valley are acquiring "the power of gods without the wisdom of God." It is power, she said, "that is in the hands of men whose arrogance prevents them from seeing the impact of rolling out technology that's not safe for their kids."
The battle for humanity's future
The Vatican has always understood how to use technology, engineering and spectacle to harness devotion and wield power — you only have to walk into St Peter’s Basilica to understand that. I spoke to a Vatican priest, on his way to Rome to pay his respects to the Pope. He told me why the Vatican understands the growing power of artificial intelligence so well. "We know perfectly well," he said, "that certain structures can become divinities. In the end, technology should be a tool for living — it should not be the end of man."
I grew up in rural Idaho in the late 80s and early 90s. My childhood was idyllic. I’m the oldest of five children. My father was an engineer-turned-physician, and my mother was a musician — she played the violin and piano. We lived in an amazing community, with great schools, dear friends and neighbors. There was lots of skiing, biking, swimming, tennis, and time spent outdoors.
If something was very difficult, I was taught that you just had to reframe it as a small or insignificant moment compared to the vast eternities and infinities around us. It was a Mormon community, and we were a Mormon family, part of generations of Mormons. I can trace my ancestry back to the early Mormon settlers. Our family were very observant: going to church every Sunday, and deeply faithful to the beliefs and tenets of the Mormon Church.
There's a belief in Mormonism: "As man is, God once was. As God is, man may become." And since God is perfect, the belief is that we too can one day become perfect.
We believed in perfection. And we were striving to be perfect—realizing that while we couldn't be perfect in this life, we should always attempt to be. We worked for excellence in everything we did.
It was an inspiring idea to me, but growing up in a world where I felt perfection was always the expectation was also tough.
In a way, I felt like there were two of me. There was this perfect person that I had to play and that everyone loved. And then there was this other part of me that was very disappointed by who I was—frustrated, knowing I wasn't living up to those same standards. I really felt like two people.
This perfectionism found its way into many of my pursuits. I loved to play the cello. Yo-Yo Ma was my idol. I played quite well and had a fabulous teacher. At 14, I became the principal cellist for our all-state orchestra, and later played in the World Youth Symphony at Interlochen Arts Camp and in a National Honors Orchestra. I was part of a group of kids who were all playing at the highest level. And I was driven. I wanted to be one of the very, very best.
I went on to study at Northwestern in Chicago and played there too. I was the youngest cellist in the studio of Hans Jensen, and was surrounded by these incredible musicians. We played eight hours a day, time filled with practice, orchestra, chamber music, studio, and lessons. I spent hours and hours working through the tiniest movements of the hand, individual shifts, weight, movement, repetition, memory, trying to find perfect intonation, rhythm, and expression. I loved that I could control things, practice, and improve. I could find moments of perfection.
I remember one night being in the practice rooms, walking down the hall, and hearing some of the most beautiful playing I'd ever heard. I peeked in and didn’t recognize the cellist. They were a former student now warming up for an audition with the Chicago Symphony.
Later on, I heard they didn’t get it. I remember thinking, "Oh my goodness, if you can play that well and still not make it..." It kind of shattered my worldview—it really hit me that I would never be the very best. There was so much talent, and I just wasn't quite there.
I decided to step away from the cello as a profession. I’d play for fun, but not make it my career. I’d explore other interests and passions.
As I moved through my twenties, my relationship with Mormonism started to become strained. When you’re suddenly 24, 25, 26 and not married, that's tough. Brigham Young [the second and longest-serving prophet of the Mormon Church] said that if you're not married by 30, you're a menace to society. It just became more and more awkward to be involved. I felt like people were wondering, “What’s wrong with him?”
Eventually, I left the church. And I suddenly felt like a complete person — it was a really profound shift. There weren’t two of me anymore. I didn’t have to put on a front. Now that I didn’t have to worry about being that version of perfect, I could just be me.
But the desire for perfection was impossible for me to kick entirely. I was still excited about striving, and I think a lot of this energy and focus then poured into my work and career as a designer and researcher. I worked at places like the Mayo Clinic, considered by many to be the world’s best hospital. I studied in London at the Royal College of Art, where I received my master’s on the prestigious Design Interactions course exploring emerging technology, futures, and speculative design. I found I loved working with the best, and being around others who were striving for perfection in similar ways. It was thrilling.
One of the big questions I started to explore during my master's studies in design, and I think in part because I felt this void of meaning after leaving Mormonism, was “what is important to strive for in life?” What should we be perfecting? What is the goal of everything? Or in design terms, “What’s the design intent of everything?”
I spent a huge amount of time with this question, and in the end I came to the conclusion that it’s happiness. Happiness is the goal. We should strive in life for happiness. Happiness is the design intent of everything. It is the idea that no matter what we do, no matter what activity we undertake, we do it because we believe doing it or achieving the thing will make us better off or happier. This fit really well with the beliefs I grew up with, but now I had a new, non-religious way in to explore it.
The question then became: What is happiness? I came to the conclusion that happiness is chemical—an evolved sensation that indicates when our needs in terms of survival have been met. You're happy when you have a wonderful meal because your body has evolved to identify good food as improving your chances of survival. The same is true for sleep, exercise, sex, family, friendships, meaning, purpose–everything can be seen through this evolutionary happiness lens.
So if happiness evolved as the signal for survival, then I wanted to optimize my survival to optimize that feeling. What would it look like if I optimized the design of my life for happiness? What could I change to feel the most amount of happiness for the longest amount of time? What would life look like if I lived perfectly with this goal in mind?
I started measuring my happiness on a daily basis, and then making changes to my life to see how I might improve it. I took my evolutionary basic needs for survival and organized them in terms of how quickly their absence would kill me as a way to prioritize interventions.
Breathing was first on the list — we can’t last long without it. So I tried to optimize my breathing. I didn’t really know how to breathe or how powerful breathing is—how it changes the way we feel, bringing calm and peace, or energy and alertness. So I practiced breathing.
The optimizations continued: diet, sleep, exercise, material possessions, friends, family, purpose, along with a shedding of any behavior or activity that I couldn’t see meaningfully improving my happiness. For example, I looked at clothing and fashion, and couldn’t see any real happiness impact. So I got rid of almost all of my clothing, and have worn the same white t-shirts and grey or blue jeans for the past 15 years.
I got involved in the Quantified Self (QS) movement and started tracking my heart rate, blood pressure, diet, sleep, exercise, cognitive speed, happiness, creativity, and feelings of purpose. I liked the data. I’d go to QS meet-ups and conferences with others doing self experiments to optimize different aspects of their lives, from athletic performance, to sleep, to disease symptoms.
I also started to think about longevity. If I was optimizing for happiness through these evolutionary basics, how long could one live if these needs were perfectly satisfied? I started to put on my websites – “copyright 2103”. That’s when I’ll be 125. That felt like a nice goal, and something that I imagined could be completely possible — especially if every aspect of my life was optimized, along with future advancements in science and medicine.
In 2022, some 12 years later, I came across Bryan Johnson. A successful entrepreneur, also ex-Mormon, optimizing his health and longevity through data. It was familiar. He had come to this kind of life optimization in a slightly different way and for different reasons, but I was so excited by what he was doing. I thought, "This is how I’d live if I had unlimited funds."
He said he was optimizing every organ and body system: What does our heart need? What does our brain need? What does our liver need? He was optimizing the biomarkers for each one. He said he believed in data, honesty and transparency, and following where the data led. He was open to challenging societal norms. He said he had a team of doctors, had reviewed thousands of studies to develop his protocols. He said every calorie had to fight for its life to be in his body. He suggested everything should be third-party tested. He also suggested that in our lifetime advances in medicine would allow people to live radically longer lives, or even to not die.
These ideas all made sense to me. There was also a kind of ideal of perfect and achieving perfection that resonated with me. Early on, Bryan shared his protocols and data online. And a lot of people tried his recipes and workouts, experimenting for themselves. I did too. It also started me thinking again more broadly about how to live better, now with my wife and young family. For me this was personal, but also exciting to think about what a society might look like when we strived at scale for perfection in this way. Bryan seemed to be someone with the means and platform to push this conversation.
I think all of my experience to this point was the setup for, ultimately, my deep disappointment in Bryan Johnson and my frustrating experience as a participant in his BP5000 study.
In early 2024 there was a callout for people to participate in a study to look at how Bryan’s protocols might improve their health and wellbeing. He said he wanted to make it easier to follow his approach, and he started to put together a product line of the same supplements that he used. It was called Blueprint – and the first 5000 people to test it out would be called the Blueprint 5000, or BP5000. We would measure our biomarkers and follow his supplement regime for three months and then measure again to see its effects at a population level. I thought it would be a fun experiment, participating in real citizen science moving from n=1 to n=many.

We had to apply, and there was a lot of excitement among those of us who were selected. We were a mix of people who had done a lot of self-quantification, nutritionists, athletes, and others looking to take first steps into better personal health. We each had to pay about $2,000 to participate, covering Blueprint supplements and the blood tests, and we were promised that all the data would be shared and open-sourced at the end of the study.
The study began very quickly, and there were red flags almost immediately around the administration of the study, with product delivery problems, defective product packaging, blood test problems, and confusion among participants about the protocols. There wasn’t even a way to see if participants died during the study, which felt weird for work focused on longevity. But we all kind of rolled with it. We wanted to make it work.
We took baseline measurements, weighed ourselves, measured body composition, uploaded Whoop or Apple Watch data, did blood tests covering hundreds of biomarkers, and completed a number of self-reported surveys on things like sexual health and mental health. I loved this type of self-measurement.
Participants connected over Discord, comparing notes, and posting about our progress.
Right off, some effects were incredible. I had a huge amount of energy. I was bounding up the stairs, doing extra pull-ups without feeling tired. My joints felt smooth. I noticed I was feeling bulkier — I had more muscle definition as my body fat percentage started to drop.
There were also some strange effects. For instance, I noticed in a cold shower, I could feel the cold, but I didn’t feel any urgency to get out. Same with the sauna. I had weird sensations of deep focus and vibrant, vivid vision. I started having questions—was this better? Had I deadened sensitivity to pain? What exactly was happening here?
Then things went really wrong. My ears started ringing — high-pitched and constant. I developed tinnitus. And my sleep got wrecked. I started waking up at two, three, four AM, completely wired, unable to turn off my mind. It was so bad I had to stop all of the Blueprint supplements after only a few weeks.
On the Discord channel where we were sharing our results, I saw Bryan talking positively about people having great experiences with the stack. But when I or anyone else mentioned adverse side effects, the response tended to be: “wait until the study is finished and see if there’s a statistical effect to worry about."
So positive anecdotes were fine, but when it came to negative ones, suddenly, we needed large-scale data. That really put me off. I thought the whole point was to test efficacy and safety in a data-driven way. And the side effects were not ignorable.
Many of us were trying to help each other figure out what interventions in the stack were driving different side effects, but we were never given the “1,000+ scientific studies” that Blueprint was supposedly built upon which would have had side-effect reporting. We struggled even to get a complete list of the interventions that were in the stack from the Blueprint team, with numbers evolving from 67 to 74 over the course of the study. It was impossible to tell which ingredient in which products was doing what to people.
We were told to stop discussing side effects in the Discord and to email Support with issues instead. I was even kicked off the Discord at one point for “fear mongering” because I was encouraging people to share the side effects they were experiencing.
The Blueprint team were also making changes to the products mid-study, changing protein sources and allulose levels, leaving people with months’ worth of expensive, essentially defective products, and surely impacting the study results.
When Bryan then announced they were launching the BP10000, allowing more people to buy his products, even before the BP5000 study had finished, and without addressing all of the concerns about side effects, it suddenly became clear to me and many others that we had just been part of a launch and distribution plan for a new supplement line, not participants in a scientific study.
A year later, Bryan still has not released the full BP5000 data set to the participants as he promised to do. In fact he has ghosted participants and refuses to answer questions about the BP5000. He blocked me on X recently for bringing it up. I suspect that this is because the data is really bad, and my worries line up with reporting from the New York Times, where leaked internal Blueprint data suggests many of the BP5000 participants experienced negative side effects, with some even suffering serious drops in testosterone or becoming pre-diabetic.
I’m still angry today about how this all went down. I’m angry that I was taken in by someone I now feel was a snake oil salesman. I’m angry that the marketing needs of Bryan’s supplement business and his need to control his image overshadowed the opportunity to generate some real science. I’m angry that Blueprint may be hurting some people. I’m angry because the way Bryan Johnson has gone about this grates on my sense of perfection.
Bryan’s call to “Don’t Die” now rings in my ears as “Don’t Lie” every time I hear it. I hope the societal mechanisms for truth will be able to help him make a course correction. I hope he will release the BP5000 data set and apologize to participants. But Bryan Johnson feels to me like an unstoppable marketing force at this point — full A-list influencer status — and sort of untouchable, with no use for those of us interested in the science and data.
This experience has also had me reflecting on and asking bigger questions of the longevity movement and myself.
We’re ignoring climate breakdown. The latest indications suggest we’re headed toward three degrees of warming. These are societal collapse numbers, in the next 15 years. When there are no bees and no food, catastrophic fires and floods, your Heart Rate Variability doesn’t really matter. There’s a sort of “bunker mentality” prevalent in some of the longevity movement, and wider tech — we can just ignore it, and we’ll magically come out on the other side, sleep scores intact.
I’ve also started to think that calls to live forever are perhaps misplaced, and that in fact we have evolved to die. Death is a good thing. A feature, not a bug. It allows for new life—we need children, young people, new minds who can understand this context and move us forward. I worry that older minds are locked into outdated patterns of thinking, mindsets trained in and for a world that no longer exists, thinking that destroyed everything in the first place, and which is now actually detrimental to progress. The life cycle—bringing in new generations with new thinking—is the mechanism our species has evolved to function within. Survival is and should be optimized for the species, not the individual.
I love thinking about the future. I love spending time there, understanding what it might look like. It is a huge part of my design practice. But as much as I love the future, the most exciting thing to me is the choices we make right now in each moment. All of that information from our future imaginings should come back to help inform current decision-making and optimize the choices we have now. But I don’t see this happening today. Our current actions as a society seem totally disconnected from any optimized, survivable future. We’re not learning from the future. We’re not acting for the future.
We must engage with all outcomes, positive and negative. We're seeing breakthroughs in many domains happening at an exponential rate, especially in AI. But, at the same time, I see job displacement, huge concentration of wealth, and political systems that don't seem capable of regulating or facilitating democratic conversations about these changes. Creators must own it all. If you build AI, take responsibility for the lost job, and create mechanisms to share wealth. If you build a company around longevity and make promises to people about openness and transparency, you have to engage with all the positive outcomes and negative side effects, no matter what they are.
I’m sometimes overwhelmed by our current state. My striving for perfection and optimizations throughout my life have maybe been a way to give me a sense of control in a world where at a macro scale I don’t actually have much power. We are in a moment now where a handful of individuals and companies will get to decide what’s next. A few governments might be able to influence those decisions. Influencers wield enormous power. But most of us will just be subject to and participants in all that happens. And then we’ll die.
But until then my ears are still ringing.
This article was put together based on interviews J. Paul Neeley did with Isobel Cockerell and Christopher Wylie, as part of their reporting for CAPTURED, our new audio series on how Silicon Valley’s AI prophets are choosing our future for us. You can listen now on Audible.
This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?
In April last year I was in Perugia, at the annual international journalism festival. I was sitting in a panel session about whether AI marked the end of journalism, when a voice note popped up on my Signal.
It came from Christopher Wylie. He’s a data scientist and the whistleblower who cracked open the Cambridge Analytica scandal in 2018. I had just started working with him on a new investigation into AI. Chris was supposed to be meeting me, but he had found himself trapped in Dubai in a party full of Silicon Valley venture capitalists.
“I don’t know if you can hear me — I’m in the toilet at this event, and people here are talking about longevity, how to live forever, but also prepping for when people revolt and when society gets completely undermined,” he had whispered into his phone. “You have in another part of the world, a bunch of journalists talking about how to save democracy. And here, you've got a bunch of tech guys thinking about how to live past democracy and survive.”
A massive storm and a once-in-a-generation flood had paralyzed Dubai when Chris was on a layover on his way to Perugia. He couldn’t leave. And neither could the hundreds of tech guys who were there for a crypto summit. The freakish weather hadn’t stopped them partying, Chris told me over a frantic Zoom call.
“You're wading through knee-deep water, people are screaming everywhere, and then… What do all these bros do? They organize a party. It's like the world is collapsing outside and yet you go inside and it's billionaires and centimillionaires having a party,” he said. “Dubai right now is a microcosm of the world. The world is collapsing outside and the people are partying.”
Chris and I eventually managed to meet up. And for over a year we worked together on a podcast that asks what is really going on inside the tech world. We looked at how the rest of us — journalists, artists, nurses, businesses, even governments — are being captured by big tech’s ambitions for the future and how we can fight back.
Our reporting took us around the world from the lofty hills of Twin Peaks in San Francisco to meet the people building AI models, to the informal settlements of Kenya to meet the workers training those models.
One of these people was Mercy Chimwani, who we visited in her makeshift house with no roof on the outskirts of Nairobi. There was mud beneath our feet, and above you could see the rainclouds through a gaping hole where the unfinished stairs met the sky. When it rained, Mercy told us, water ran right through the house. It’s hard to believe, but she worked for Meta.
Mercy was a content moderator, hired by the middlemen Meta used to source employees. Her job was to watch the internet’s most horrific images and video – training the company’s system so it can automatically filter out such content before the rest of us are exposed to it.
She was paid around a dollar an hour for work that left her so traumatized that she couldn’t sleep. And when she and her colleagues tried to unionize, she was laid off. Mercy was part of the invisible, ignored workforce in the Global South that enables our frictionless life online for little reward.
Of course, we went to the big houses too — where the other type of tech worker lives. The huge palaces made of glass and steel in San Francisco, where the inhabitants believe the AI they are building will one day help them live forever, and discover everything there is to know about the universe.
In Twin Peaks, we spoke to Jeremy Nixon, the creator of AGI House San Francisco (AGI for Artificial General Intelligence). Nixon described an apparently utopian future, a place where we never have to work, where AI does everything for us, and where we can install the sum of human knowledge into our brains. “The intention is to allow every human to know everything that’s known,” he told me.
Later that day, we went to a barbecue in Cupertino and got talking to Alan Boehme, once a chief technology officer for some of the biggest companies in the world, and now an investor in AI startups. Boehme told us how important it was, from his point of view, that tech wasn’t stymied by government regulation. “We have to be worried that people are going to over-regulate it. Europe is the worst, to be honest with you,” he said. “Let's look at how we can benefit society and how this can help lead the world as opposed to trying to hold it back.”
I asked him if regulation wasn’t part of the reason we have democratically elected governments: to ensure that all people are kept safe, and that some people aren’t left behind by the pace of change. Shouldn’t the governments we elect be the ones deciding whether we regulate AI, and not the people at this Cupertino barbecue?
“You sound like you're from Sweden,” Boehme responded. “I'm sorry, that's social democracy. That is not what we are here in the U.S. This country is based on a Constitution. We're not based on everybody being equal and holding people back. No, we're not in Sweden.”
As we reported for the podcast, we came to a gradual realization – what’s being built in Silicon Valley isn’t just artificial intelligence, it’s a way of life — even a religion. And it’s a religion we might not have any choice but to join.
In January, the Vatican released a statement in which it argued that we’re in danger of worshiping AI as God. It's an idea we'd discussed with Judy Estrin, who worked on building some of the earliest iterations of the internet. As a young researcher at Stanford in the 1970s, Estrin was building some of the very first networked connections. She is no technophobe, fearful of the future, but she is worried about the zealotry she says is taking over Silicon Valley.
“If you worship innovation, if you worship anything, you can't take a step back and think about guardrails,” she said about the unquestioning embrace of AI. “So we, from a leadership perspective, are very vulnerable to techno populists who come out and assert that this is the only way to make something happen.”
The first step toward reclaiming our lost agency, as AI aims to capture every facet of our world, is simply to pay attention. I've been struck by how rarely we actually listen to what tech leaders are explicitly saying about their vision of the future.
There's a tendency to dismiss their most extreme statements as hyperbole or marketing, but what if they're being honest? What if they truly believe humans, or at least most humans, are replaceable, that traditional concepts of humanity are outdated, that a technological "god" should supersede us? These aren't just ideological positions – they're the foundations for the world being built around us right now.
In our series, we explore artificial intelligence as something that affects our culture, our jobs, our media and our politics. But we should also ask what tech founders and engineers are really building with AI, or what they think they’re building. Because if their vision of society does not have a place for us in it, we should be ready to reclaim our destiny – before our collective future is captured.
Our audio documentary series, CAPTURED: The Secret Behind Silicon Valley’s AI Takeover is available now on Audible. Do please tune in, and you can dig deeper into our stories and the people we met during the reporting below.
This week, as DeepSeek, a free AI-powered chatbot from China, embarrassed American tech giants and panicked investors, sending global markets tumbling, investor Marc Andreessen described its emergence as "AI's Sputnik moment." That is, the moment when self-belief and confidence tip over into hubris. It was not just stock prices that plummeted. The carefully constructed story of American technological supremacy also took a deep plunge.
But perhaps the real shock should be that Silicon Valley was shocked at all.
For years, Silicon Valley and its cheerleaders spread the narrative of inevitable American dominance of the artificial intelligence industry. From the "Why China Can't Innovate" cover story in the Harvard Business Review to the breathless reporting on billion-dollar investments in AI, U.S. media spent years building an image of insurmountable Western technological superiority. Even this week, when Wired reported on the "shock, awe, and questions" DeepSeek had sparked, the persistent subtext seemed to be that technological efficiency from unexpected quarters was somehow fundamentally illegitimate.
“In the West, our sense of exceptionalism is truly our greatest weakness,” says data analyst Christopher Wylie, author of Mindf*ck, who famously blew the whistle on Cambridge Analytica in 2018.
That arrogance was on full display just last year when OpenAI's Sam Altman, speaking to an audience in India, declared: "It's totally hopeless to compete with us. You can try and it's your job to try but I believe it is hopeless." He was dismissing the possibility that teams outside Silicon Valley could build substantial AI systems with limited resources.
There are still questions over whether DeepSeek had access to more computing power than it is admitting. Scale AI chief executive Alexandr Wang said in a recent interview that the Chinese company had access to thousands more of the highest-grade chips than people know about, despite U.S. export controls. What's clear, though, is that Altman didn't anticipate that a competitor would simply refuse to play by the rules he was trying to set and would instead reimagine the game itself.
By developing an AI model that matches—and in many ways surpasses—American equivalents, DeepSeek challenged the Silicon Valley story that technological innovation demands massive resources and minimal oversight. While companies like OpenAI have poured hundreds of billions into massive data centers—with the Stargate project alone pledging an “initial investment” of $100 billion—DeepSeek demonstrated a fundamentally different path to innovation.
"For the first time in public, they've provided an efficient way to train reasoning models," explains Thomas Cao, professor of technology policy at Tufts University. "The technical detail is that they've come up with a way to do reinforcement learning without supervision. You don't have to hand-label a lot of data. That makes training much more efficient."
For the American media, which has drunk the Silicon Valley Kool-Aid, the DeepSeek story is a hard one to stomach. For a long time, Wylie argues, while countries in Asia made massive technological breakthroughs, the story commonly told to the American people focused on American tech exceptionalism.
An alternative approach, Wylie says, would be to see and “acknowledge that China is doing good things we can learn from without meaning that we have to adopt their system. Things can exist in parallel.” But instead, he adds, the mainstream media followed the politicians down the rabbit hole of focusing on the "China threat."
These geopolitical fears have helped Big Tech shield itself from genuine competition and regulatory scrutiny. The narrative of a Cold War style “AI race” with China has also fed the assumption that a major technological power can be bullied into submission through trade restrictions.
That assumption has also crumpled. The U.S. has spent the past two years attempting to curtail China's AI development through increasingly strict controls on advanced semiconductors. These restrictions, which began under Biden in 2022 and were significantly expanded this January, were designed to prevent Chinese companies from accessing the most advanced chips needed for AI development.
DeepSeek developed its model using older generation chips stockpiled before the restrictions took effect, and its breakthrough has been held up as an example of genuine, bootstrap innovation. But Professor Cao cautions against reading too much into how export controls have catalyzed development and innovation at DeepSeek. "If there had been no export control requirements,” he said, “DeepSeek could have been able to do things even more efficiently and faster. We don't see the counterfactual."
DeepSeek is a direct rebuke to both Western assumptions about Chinese innovation and the methods the West has used to curtail it.
As millions of Americans downloaded DeepSeek, making it the most downloaded app in the U.S., OpenAI’s Steven Heidel peevishly claimed that using it would mean giving away data to the Chinese Communist Party. Lawmakers too have warned about national security risks and dozens of stories like this one echoed suggestions that the app could be sending U.S. data to China.
Security concerns aside, what really sets DeepSeek apart from its Western counterparts is not just the efficiency of the model, but also the fact that it is open source. Which, counter-intuitively, makes a Beijing-funded app more democratic than its Silicon Valley predecessors.
In the heated discourse surrounding technological innovation, "open source" has become more than just a technical term—it's a philosophy of transparency. Unlike proprietary models where code is a closely guarded corporate secret, open source invites global scrutiny and collective improvement.
At its core, open source means that a program's source code is made freely available for anyone to view, modify, and distribute. When a technology is open source, users can download the entire code, run it on their own servers, and verify every line of its functionality. For consumers and technologists alike, open source means the ability to understand, modify, and improve technology without asking permission. It's a model that prioritizes collective advancement over corporate control. Already, for instance, the Chinese tech behemoth Alibaba has released a new version of its own large language model that it says is an upgrade on DeepSeek.
Unlike ChatGPT or any other Western AI system, DeepSeek can be run locally without giving away any data. "Despite the media fear-mongering, the irony is DeepSeek is now open source and could be implemented in a far more privacy-preserving way than anything offered by Meta or OpenAI," Wylie says. “If Sam Altman open-sourced OpenAI, we wouldn’t look at it with the same skepticism; he would be nominated for the Nobel Peace Prize."
The open-source nature of DeepSeek is a huge part of the disruption it has caused. It challenges Silicon Valley's entire proprietary model and challenges our collective assumptions about both AI development and global competition. Not surprisingly, part of Silicon Valley’s response has been to complain that Chinese companies are using American companies’ intellectual property, even as their own large language models have been built by consuming vast amounts of information without permission.
This counterintuitive strategy of openness coming from an authoritarian state also gives China a massive soft power win that it will translate into geopolitical brownie points. Just as TikTok's algorithms outmaneuvered Instagram and YouTube by focusing on accessibility over profit, DeepSeek, which is currently topping iPhone downloads, represents another moment where what's better for users—open-source, efficient, privacy-preserving—challenges what's better for the boardroom.
We are yet to see how DeepSeek will reroute the development of AI, but just as the original Sputnik moment galvanized American scientific innovation during the Cold War, DeepSeek could shake Silicon Valley out of its complacency. For Professor Cao, the immediate lesson is that the U.S. must reinvest in fundamental research or risk falling behind. For Wylie, the takeaway of the DeepSeek fallout in the U.S. is more meta: there is no need for a new Cold War, he argues. “There will only be an AI war if we decide to have one.”
Additional reporting by Masho Lomashvili.