Ukraine launches AI startup battle, backed by German Defense Ministry, to outpace Russia’s war machine

The Ministry of Defense of Ukraine is launching a project to ensure technological superiority on the battlefield. With support from the German Federal Ministry of Defense and the Better Regulation Delivery Office (BRDO), Kyiv has announced a program to develop defense startups focused on artificial intelligence technology.

In 2025, former Commander-in-Chief of the Armed Forces of Ukraine Valerii Zaluzhnyi said that hoping for a return to the 1991 borders without a technological leap is pointless. According to Zaluzhnyi, Russia has been waging a war of attrition since the end of 2023. In this situation, Ukraine’s victory is only possible by completely destroying Russia’s ability to wage it.

The launch of K4 Startup Studio promises a new format of cooperation between the state, developers, military experts, and global mentors.

Participating teams will compete to solve four key military challenges and can also submit their own ideas. Selected startups will undergo combat testing and receive mentoring support.

After a four-month intensive program, teams will have the opportunity to attract investments or sign government contracts. The four best startups will receive grants of $250,000 each.

“We are not just adapting — we are setting trends,” emphasizes Deputy Defense Minister Kateryna Chernohorenko.

Applications are open on k4.mod.gov.ua until 15 August 2025. Developers and investors interested in battlefield-proven technologies, both Ukrainian and international, are invited to apply.

You could close this page. Or you could join our community and help us produce more materials like this. We keep our reporting open and accessible to everyone because we believe in the power of free information. This is why our small, cost-effective team depends on the support of readers like you to deliver timely news, quality analysis, and on-the-ground reports about Russia's war against Ukraine and Ukraine's struggle to build a democratic society. Become a patron or see other ways to support

Senate Democrats Rail at ‘Sloppy, Rushed’ State Dept. Firings

Republicans on the Senate Foreign Relations Committee said the changes were needed and called the outrage overblown.

© Tierney L. Cross/The New York Times

State Department employees gathered on Friday to support their colleagues after layoffs.

Trump Hails $90 Billion in AI Infrastructure Investments at Pennsylvania Summit

Trump administration officials say winning the artificial intelligence race with China is a top priority.

© Doug Mills/The New York Times

President Trump at the Pennsylvania Energy and Innovation Summit at Carnegie Mellon University in Pittsburgh on Tuesday.

“It’s a devil’s machine.”

Tech leaders say AI will bring us eternal life, help us spread out into the stars, and build a utopian world where we never have to work. They describe a future free of pain and suffering, in which all human knowledge will be wired into our brains. Their utopian promises sound more like proselytizing than science, as if AI were the new religion and the tech bros its priests. So how are real religious leaders responding?

As Georgia's first female Baptist bishop, Rusudan Gotsiridze challenges the doctrines of the Orthodox Church, and is known for her passionate defence of women’s and LGBTQ+ rights. She stands at the vanguard of old religion, an example of its attempts to modernize — so what does she think of the new religion being built in Silicon Valley, where tech gurus say they are building a superintelligent, omniscient being in the form of Artificial General Intelligence?

Gotsiridze first tried to use AI a few months ago. The result chilled her to the bone. It made her question whether artificial intelligence was really a benevolent force, and consider how she should respond to it from the perspective of her religious beliefs and practices.

In this conversation with Coda’s Isobel Cockerell, Bishop Gotsiridze discusses the religious questions around AI: whether AI can really help us hack back into paradise, and what to make of the outlandish visions of Silicon Valley’s powerful tech evangelists.

Bishop Rusudan Gotsiridze and Isobel Cockerell in conversation at the ZEG Storytelling Festival in Tbilisi last month. Photo: Dato Koridze.

This conversation took place at ZEG Storytelling Festival in Tbilisi in June 2025. It has been lightly edited and condensed for clarity. 

Isobel: Tell me about your relationship with AI right now. 

Rusudan: Well, I’d like to say I’m an AI virgin. But maybe that’s not fully honest. I had one contact with ChatGPT. I didn’t ask it to write my Sunday sermon. I just asked it to draw my portrait. How narcissistic of me. I said, “Make a portrait of Bishop Rusudan Gotsiridze.” I waited and waited. The portrait looked nothing like me. It looked like my mom, who passed away ten years ago. And it looked like her when she was going through chemo, with her puffy face. It was really creepy. So I will think twice before asking ChatGPT anything again. I know it’s supposed to be magical... but that wasn’t the best first date. 

AI-generated image via ChatGPT / OpenAI.

Isobel: What went through your mind when you saw this picture of your mother? 

Rusudan: I thought, “Oh my goodness, it’s really a devil’s machine.” How could it go so deep? Find my facial features and connect them with someone who didn’t look like me? I take more after my paternal side. The only thing I could recognize was the priestly collar and the cross. Okay. Bishop. Got it. But yes, it was really very strange.

Isobel: I find it so interesting that you talk about summoning the dead through Artificial Intelligence. That’s something happening in San Francisco as well. When I was there last summer, we heard about this movement that meets every Sunday. Instead of church, they hold what they call an “AI séance,” where they use AI to call up the spirit world. To call up the dead. They believe the generative art that AI creates is a kind of expression of the spirit world, an expression of a greater force.

They wouldn’t let us attend. We begged, but it was a closed cult. Still, a bunch of artists had the exact same experience you had: they called up these images and felt like they were summoning them, not from technology, but from another realm. 

Rusudan: When you’re a religious person dealing with new technologies, it’s uncomfortable. Religion — Christianity, Protestantism, and many others — has earned a very cautious reputation throughout history because we’ve always feared progress.

Remember when we thought printing books was the devil’s work? Later, we embraced it. We feared vaccinations. We feared computers, the internet. And now, again, we fear AI.

 It reminds me of the old proverb about a young shepherd who loved to prank his friends by shouting “Wolves! Wolves!” until one day, the wolves really came. He shouted, but no one believed him anymore.

We’ve been shouting “wolves” for centuries. And now, I’m this close to shouting it again, but I’m not sure. 

Isobel: You said you wondered if this was the devil’s work when you saw that picture of your mother. It’s quite interesting. In Silicon Valley, people talk a lot about AI bringing about the rapture, apocalypse, hell.

They talk about the real possibility that AI is going to kill us all, what the endgame or extinction risk of building superintelligent models will be. Some people working in AI are predicting we’ll all be dead by 2030.

On the other side, people say, “We’re building utopia. We’re building heaven on Earth. A world where no one has to work or suffer. We’ll spread into the stars. We’ll be freed from death. We’ll become immortal.”

I’m not a religious person, but what struck me is the religiosity of these promises. And I wanted to ask you — are we hacking our way back into the Garden of Eden? Should we just follow the light? Is this the serpent talking to us?

Rusudan: I was listening to a Google scientist. He said that in the near future, we’re not heading to utopia but dystopia. It’s going to be hell on Earth. All the world’s wealth will be concentrated in a small circle, and poverty will grow. Terrible things will happen before we reach utopia.

Listening to him, it really sounded like the Book of Revelation. First the Antichrist comes, and then Christ.

Because of my Protestant upbringing, I’ve heard so many lectures about the exact timeline of the Second Coming. Some people even name the day, hour, place. And when those times pass, they’re frustrated. But they carry on calculating. 

It’s hard for me to speak about dystopia, utopia, or the apocalyptic timeline, because I know nothing is going to be exactly as predicted.

The only thing I’m afraid of in this Artificial Intelligence era is my 2-year-old niece. She’s brilliant. You can tell by her eyes. She doesn’t speak our language yet. But phonetically, you can hear Georgian, English, Russian, even Chinese words from the reels she watches non-stop.

That’s what I’m afraid of: us constantly watching our devices and losing human connection. We’re going to have a deeply depressed young generation soon. 

I used to identify as a social person. I loved being around people. That’s why I became a priest. But now, I find it terribly difficult to pull myself out of my house to be among people. And it’s not just a technology problem — it’s a human laziness problem.

When we find someone or something to take over our duties, we gladly hand them over. That’s how we’re using this new technology. Yes, I’m in sermon mode now — it’s a Sunday, after all. 

I want to tell you an interesting story from my previous life. I used to be a gender expert, training people about gender equality. One example I found fascinating: in a Middle Eastern village without running water, women would carry vessels to the well every morning and evening. It was their duty.

Western gender experts saw this and decided to help. They installed a water supply. Every woman got running water in her kitchen: happy ending. But very soon, the pipeline was intentionally broken by the women. Why? Because that water-fetching routine was the only excuse they had to leave their homes and see their friends. With running water, they became captives to their household duties.

One day, we may also not understand why we’ve become captives to our own devices. We’ll enjoy staying home and not seeing our friends and relatives. I don’t think we’ll break that pipeline and go out again to enjoy real life.

Isobel: It feels like it’s becoming more and more difficult to break that pipeline. It’s not really an option anymore to live without the water, without technology. 

Sometimes I talk with people in a movement called the New Luddites. They also call themselves the Dumbphone Revolution. They want to create a five-to-ten percent faction of society which doesn’t have a smartphone, and they say that will help us all, because it will mean the world will still have to cater to people who don’t participate in big tech, who don’t have it in their lives. But is that the answer for all of us? To just smash the pipeline to restore human connection? Or can we have both?

Rusudan: I was a new mom in the nineties in Georgia. I had two children at a time when we didn’t have running water. I had to wash my kids’ clothes in the yard in cold water, summer and winter. I remember when we bought our first washing machine.  My husband and I sat in front of it for half an hour, watching it go round and round. It was paradise for me for a while. 

Now this washing machine is there and I don't enjoy it anymore. It's just a regular thing in my life. And when I had to wash my son’s and daughter-in-law’s wedding outfits, I didn’t trust the machine. I washed those clothes by hand. There are times when it’s important to do things by hand.

Of course, I don’t want to go back to a time without the internet when we were washing clothes in the yard, but there are things that are important to do without technology.

I enjoy painting, and I paint quite a lot with watercolors. So far, I can tell which paintings are AI and which are real. Every time I look at an AI-made watercolor, I can tell it’s not a human painting. It is a technological painting. And it's beautiful. I know I can never compete with this technology. 

But that feeling, when you put your brush in the water — sometimes I accidentally put it in my coffee cup — and when you put that brush on the paper and the pigment spreads, that feeling can never be replaced by any technology. 

Isobel: As a writer, I'm now pretty good, I think, at knowing if something is AI-written or not. I'm sure in the future it will get harder to tell, but right now, there are little clues. There’s this horrible construction that AI loves: something is not just X, it’s Y. For example: “Rusudan is not just a bishop, she’s an oracle for the LGBTQ community in Georgia.” Even if you tell it to stop using that construction, it can’t. Same for the endless em-dashes: I can’t get ChatGPT to stop using them no matter how many times or how adamantly I prompt it. It's just bad writing.

It’s missing that fingerprint of imperfection that a human leaves: whether it’s an unusual sentence construction or an interesting word choice, I’ve started to really appreciate those details in real writing. I've also started to really love typos. My whole life as a journalist I was horrified by them. But now when I see a typo, I feel so pleased. It means a human wrote it. It’s something to be celebrated. It’s the same with the idea that you dip your paintbrush in the coffee pot and there’s a bit of coffee in the painting. Those are the things that make the work we make alive. 

There’s a beauty in those imperfections, and that’s something AI has no understanding of. Maybe it’s because the people building these systems want to optimize everything. They are in pursuit of total perfection. But I think that the pursuit of imperfection is such a beautiful thing and something that we can strive for.

Rusudan: Another thing I hope for with this development of AI is that it’ll change the formula of our existence. Right now, we’re constantly competing with each other. The educational system is that way. Business is that way. Everything is that way. My hope is that we can never be as smart as AI. Maybe one day, our smartness, our intelligence, will be defined not by how many books we have read, but by how much we enjoy reading books, enjoy finding new things in the universe, and how well we live life and are happy with what we do. I think there is potential in the idea that we will never be able to compete with AI, so why don’t we enjoy the book from cover to cover, or the painting with the coffee pigment or the paint? That’s what I see in the future, and I’m a very optimistic person. I suppose here you’re supposed to say “Hallelujah!” 

Isobel: In our podcast, CAPTURED, we talked with engineers and founders in Silicon Valley whose dream for the future is to install all human knowledge in our brains, so we never have to learn anything again. Everyone will speak every language! We can rebuild the Tower of Babel! They talk about the future as a paradise. But my thought was, what about finding out things? What about curiosity? Doesn’t that belong in paradise? Certainly, as a journalist, for me, some people are in it for the impact and the outcome, but I’m in it for finding out, finding the story—that process of discovery.

Rusudan: It’s interesting — this idea of paradise as a place where we know everything. One of my students once asked me the same thing you just did. “What about the joy of finding new things? Where is that, in paradise?” Because in the Bible, Paul says that right now, we live in a dimension where we know very little, but there will be a time when we know everything. 

In the Christian narrative, paradise is a strange, boring place where people dress in funny white tunics and play the harp. And I understand that idea back then was probably a dream for those who had to work hard for everything in their everyday life — they had to chop wood to keep their family warm, hunt to get food for the kids, and of course for them, paradise was the place where they just could just lie around and do nothing. 

But I don’t think paradise will be a boring place. I think it will be a place where we enjoy working.

Isobel: Do you think AI will ever replace priests?

Rusudan: I was told that one day there will be AI priests preaching sermons better than I do. People are already asking ChatGPT questions they’re reluctant to ask a priest or a psychologist. Because it’s judgment-free and their secrets are safe…ish. I don’t pretend I have all the answers because I don’t. I only have this human connection. I know there will be questions I cannot answer, and people will go and ask ChatGPT. But I know that human connection — the touch of a hand, eye-contact — can never be replaced by AI. That’s my hope. So we don’t need to break those pipelines. We can enjoy the technology, and the human connection too. 

The post “It’s a devil’s machine.” appeared first on Coda Story.

Ukraine’s drones face Rubikon, Russia’s deadly new unit targeting drone operators

Ukrainian Deputy Prime Minister and Minister of Digital Transformation Mykhailo Fedorov says Russia has created a unit called “Rubikon” to hunt Ukrainian drone operators. However, due to increased autonomy powered by artificial intelligence, Ukrainian operators can gradually be removed from the front lines, RBC reports.

Drone warfare innovations have become a hallmark of the ongoing Russo-Ukrainian war, with unmanned vehicles of various sizes operating across air, land, and sea. 

He explains that the current task is to ensure maximum remote control of drones so that operators can manage them from any city in the country. The next step is to implement full drone autonomy.

Fedorov also notes that full drone autonomy requires significant development and investment, which may take years. Meanwhile, artificial intelligence technologies are already actively used in the military sphere for decoding images, target guidance, and operating FPV drones.

Ukraine is even launching a special grant program to develop military technologies based on artificial intelligence, which, according to the minister, will become “the future battlefield.”

Earlier, former Commander-in-Chief of the Armed Forces of Ukraine Valerii Zaluzhnyi said that hoping for a return to the 1991 borders without a technological leap is pointless. He added that Kyiv can expect victory only by waging a high-tech war of survival, one that uses minimal human resources and minimal economic means to achieve maximum effect.

Denmark Aims to Use Copyright Law to Protect People From Deepfakes

A pioneering bill would give citizens the right to demand that social media platforms remove digital forgeries of themselves.

© Ida Marie Odgaard/Ritzau Scanpix Denmark, via Reuters

Referring to the effort to counter internet fakes, the Danish minister of culture, Jakob Engel-Schmidt, said, “Technology has outpaced our current legislation.”

Colorado Judge Fines MyPillow Founder’s Lawyers for Error-Filled Court Filing

The judge said the lawyers had not explained how such errors could have been filed “absent the use of generative artificial intelligence or gross carelessness by counsel.”

© Jack Dempsey/Associated Press

A federal judge found that lawyers for Mike Lindell had violated a federal rule that requires them to certify that the claims they are making in court filings are grounded in the law.

Marco Rubio Impersonation Under State Dept. Investigation

A person or people imitating the secretary of state used artificial intelligence to send text and voice messages to foreign diplomats and U.S. officials, the department said in a cable to employees.

© Haiyun Jiang for The New York Times

The person or people impersonating Secretary of State Marco Rubio sent messages to at least five people outside the State Department.

Ukraine leads NATO into future of combat medicine

When the enemy targets evacuation vehicles carrying the wounded, every second can save a life. Ukraine’s Defense Ministry reports that its representatives took part in a conference of NATO’s Centre of Excellence for Military Medicine (NATO MILMED COE), held at JATEC in Poland. 

JATEC is a joint analytical and educational center between NATO and Ukraine, the first such project with a non-member of the Alliance.

The main focus was on effective battlefield casualty evacuation, the integration of civil-military planning, and the implementation of innovations in military medicine: unmanned aerial vehicles, telemedicine, and artificial intelligence.

“During the conference, JATEC reviewed Ukraine’s unique combat experience and enhanced cooperation with NATO to strengthen the medical capabilities of both the Alliance and Ukraine,” says JATEC Commander Brigadier General Wojciech Ozga.

The meeting brought together over 230 participants from 18 NATO countries and partners.

The Ukrainian representatives emphasized that the enemy deliberately targets evacuation teams along with the wounded in order to demoralize units. This makes evacuation one of the most dangerous tasks in field medicine.

“Effective evacuation of the wounded is one of the most difficult aspects of field medicine under conditions of large-scale aggression,” said Valerii Vyshnivskyi, Director of Program Implementation at JATEC.

Key challenges include interoperability, communications, and planning. These areas require revision of evacuation and pre-medical aid mechanisms to save more lives under real combat conditions.

Argentina’s President Javier Milei Undermines Press Freedom With Online Attacks

Journalists face an increasing number of attacks from Argentina’s highest office, raising concerns about the undermining of press freedom.

© Magali Druscovich for The New York Times

President Javier Milei of Argentina and his allies have ratcheted up their aggressive denunciation of journalists, particularly women.

Senate Votes to Allow State A.I. Laws, a Blow to Tech Companies

All but a handful of states have some laws regulating artificial intelligence.

© Eric Lee for The New York Times

A provision introduced in the Senate by Senator Ted Cruz, Republican of Texas, would have imposed a decade-long moratorium on state laws governing artificial intelligence.

Ukraine’s war is just beginning, next battlefield will be invisible

Wars of the future are not about nuclear strikes. The world is on the verge of a new logic of confrontation, says Andrii Kovalenko, head of Ukraine’s Center for Countering Disinformation.

Ukraine has already redefined modern warfare. On 1 June, Ukraine’s Security Service carried out a special operation that struck 41 aircraft, part of Russia’s nuclear triad. The mission has become a symbol of a new era of asymmetric warfare, in which innovative drone systems and high-tech solutions allow a non-nuclear nation to effectively challenge a nuclear power.

Outdated politicians keep talking about nuclear weapons, and the Kremlin tries to intimidate with its Oreshnik ballistic missile, but future wars will not be won with missiles.

According to Kovalenko, “the future lies not in nuclear deterrence, but in the doctrine of AI deterrence.” And already now, “Ukraine must adapt the logic of the nuclear age to the age of AI.”

“Wars of the future are not an exchange of nuclear strikes, as old politicians keep saying. They are attacks on systems that will carry out and make the decisions of the future,” Kovalenko emphasizes.

These include AI centers, generative models, LLMs that will analyze political scenarios, and AI systems that manage energy and vital city infrastructure.

“And if one state deliberately disables another’s AI systems, the risk of losing control increases manifold,” the expert warned.

To avoid catastrophe, Ukraine must initiate:

  • classification of critical AI systems as strategic security assets,
  • development of “secure decision-making” protocols under hybrid destabilization,
  • creation of an international platform for rules of conduct in the sphere of military AI.

“We are on the verge of a new logic of confrontation, competition, and war. This concerns the whole world,” Kovalenko concludes. 

Earlier, reports emerged that Ukraine was building a new class of weapons — drone-powered cruise missiles that are small, cheap, and deadly. 

These weapons use mini jet engines, aviation-model components, and advanced guidance systems, yet weigh a fraction of traditional cruise missiles and cost exponentially less. 

The Vatican challenges AI’s god complex

As Rome prepared to select a new pope, few beyond Vatican insiders were focused on what the transition would mean for the Catholic Church's stance on artificial intelligence. 

Yet Pope Francis has established the Church as an erudite, insightful voice on AI ethics. "Does it serve to satisfy the needs of humanity to improve the well-being and integral development of people?" he asked G7 leaders last year. "Or does it, rather, serve to enrich and increase the already high power of the few technological giants despite the dangers to humanity?"

Francis – and the Vatican at large – had called for meaningful regulation in a world where few institutions dared challenge the tech giants.

During the last months of Francis’s papacy, Silicon Valley, aided by a pliant U.S. government, has ramped up its drive to rapidly consolidate power.

OpenAI is expanding globally, tech CEOs are becoming a key component of presidential diplomatic missions, and federal U.S. lawmakers are attempting to effectively deregulate AI for the next decade. 

For those tracking the collision between technological and religious power, one question looms large: Will the Vatican continue to be one of the few global institutions willing to question Silicon Valley's vision of our collective future?

Memories of watching the chimney on television during Pope Benedict’s election had captured my imagination as a child brought up in a secular, Jewish-inflected household. I longed to see that white smoke in person.  The rumors in Rome last Thursday morning were that the matter wouldn’t be settled that day. So I was furious when I was stirred from my desk in the afternoon by the sound of pealing bells all over Rome. “Habemus papam!” I heard an old nonna call down to her husband in the courtyard. 

As the bells of Rome tolled to hail a new pope, I sprinted out onto the street and joined people streaming from all over the city in the direction of St. Peter’s. In recent years, the time between white smoke and the new pope’s arrival on the balcony was as little as forty-five minutes. People poured over bridges and up the Via della Conciliazione towards the famous square. Among the rabble I spotted a couple of friars darting through the crowd, making speedier progress than anyone, their white cassocks flapping in the wind. Together, the friars and I made it through the security checkpoints and out into the square just as a great roar went up. 

The initial reaction to the announcement that Robert Francis Prevost would be the next pope, with the name Leo XIV, was subdued. Most people around me hadn’t heard of him — he wasn’t one of the favored cardinals, he wasn’t Italian, and we couldn’t even Google him, because there were so many people gathered that no one’s phones were working. A young boy managed to get on the phone to his mamma, and she related the information about Prevost to us via her son. Americano, she said. From Chicago.

A nun from an order in Tennessee piped up that she had met Prevost once. She told us that he was mild-mannered and kind, that he had lived in Peru, and that he was very internationally minded. “The point is, it’s a powerful American voice in the world, who isn’t Trump,” one American couple exclaimed to our little corner of the crowd. 

It only took a few hours before Trump supporters, led by former altar boy Steve Bannon, realized this American pope wouldn’t be a MAGA pope. Leo XIV had posted on X in February, criticizing JD Vance, the Trump administration’s most prominent Catholic.

"I mean it's kind of jaw-dropping," Bannon told the BBC. "It is shocking to me that a guy could be selected to be the Pope that had had the Twitter feed and the statements he's had against American senior politicians."

Laura Loomer, a prominent far-right pro-Trump activist, aired her own misgivings on X: “He is anti-Trump, anti-MAGA, pro-open borders, and a total Marxist like Pope Francis.” 

As I walked home with everybody else that night – with the friars, the nuns, the pilgrims, the Romans, the tourists caught up in the action – I found myself thinking about our "Captured" podcast series, which I've spent the past year working on. In our investigation of AI's growing influence, we documented how tech leaders have created something akin to a new religion, with its own prophets, disciples, and promised salvation.

Walking through Rome's ancient streets, the dichotomy struck me: here was the oldest continuous institution on earth selecting its leader, while Silicon Valley was rapidly establishing what amounts to a competing belief system. 

Would this new pope, taking the name of Leo — deliberately evoking Leo XIII who steered the church through the disruptions of the Industrial Revolution — stand against this present-day technological transformation that threatens to reshape what it means to be human?

I didn't have to wait long to find out. In his address to the College of Cardinals on Saturday, Pope Leo XIV said: "In our own day, the Church offers to everyone the treasury of her social teaching, in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice and labor."

Hours before the new pope was elected, I spoke with Molly Kinder, a fellow at the Brookings Institution who’s an expert in AI and labor policy. Her research on the Vatican, labor, and AI was published with Brookings following Pope Francis’s death.

She described how the Catholic Church has a deep-held belief in the dignity of work — and how AI evangelists’ promise to create a post-work society with artificial intelligence is at odds with that.

“Pope John Paul II wrote something that I found really fascinating. He said, ‘work makes us more human.’ And Silicon Valley is basically racing to create a technology that will replace humans at work,” Kinder, who was raised Catholic, told me. “What they're endeavoring to do is disrupt some of the very core tenets of how we've interpreted God's mission for what makes us human.”

A version of this story was published in this week’s Coda Currents newsletter. Sign up here.

The post The Vatican challenges AI’s god complex appeared first on Coda Story.

Pope Francis’s final warning

Whoever becomes the next Pope will inherit not just the leadership of the Catholic Church but a remarkably sophisticated approach to technology — one that in many ways outpaces governments worldwide. While Silicon Valley preaches Artificial Intelligence as a quasi-religious force capable of saving humanity, the Vatican has been developing theological arguments to push back against this narrative.

Subscribe to our Coda Currents newsletter

Weekly insights from our global newsroom. Our flagship newsletter connects the dots between viral disinformation, systemic inequity, and the abuse of technology and power. We help you see how local crises are shaped by global forces.

In the hours after Pope Francis died on Easter Monday, I went, like thousands of others in Rome, straight to St Peter's Square to witness the city in mourning as the basilica's somber bell tolled. 

Just three days before, on Good Friday, worshippers in the eternal city proceeded, by candlelight, through the ruins of the Colosseum, as some of the Pope's final meditations were read to them. "When technology tempts us to feel all powerful, remind us," the leader of the service called out. "We are clay in your hands," the crowd responded in unison.

As our world becomes ever more governed by tech, the Pope's meditations are a reminder of our flawed, common humanity. We have built, he warned, "a world of calculation and algorithms, of cold logic and implacable interests." These turned out to be his last public words on technology. Right until the end, he called on his followers to think hard about how we're being captured by the technology around us. "How I would like for us to look less at screens and look each other in the eyes more!" 

Faith vs. the new religion 

Unlike politicians who often struggle to grasp AI's technical complexity, the Vatican has leveraged its centuries of experience with faith, symbols, and power to recognize AI for what it increasingly represents: not just a tool, but a competing belief system with its own prophets, promises of salvation, and demands for devotion.

In February 2020, the Vatican's Pontifical Academy for Life published the Rome Call for AI Ethics, arguing that "AI systems must be conceived, designed and implemented to serve and protect human beings and the environment in which they live." And in January of this year, the Vatican released a document called Antiqua et Nova – one of its most comprehensive statements to date on AI – that warned we're in danger of worshipping AI as a God, or as an idol.

Our investigation into Silicon Valley's cult-like movement 

I first became interested in the Vatican's perspective on AI while working on our Audible podcast series "Captured" with Cambridge Analytica whistleblower Christopher Wylie. In our year-long investigation, we discovered how Silicon Valley's AI pioneers have adopted quasi-religious language to describe their products and ambitions — with some tech leaders explicitly positioning themselves as prophets creating a new god.

In our reporting, we documented tech leaders like Bryan Johnson speaking literally about "creating God in the form of superintelligence," billionaire investors discussing how to "live forever" through AI, and founders talking about building all-knowing, all-powerful machines that will free us from suffering and propel us into utopia. One founder told us their goal was to install "all human knowledge into every human" through brain-computer interfaces — in other words, make us all omniscient.

Nobel laureate Maria Ressa, whom I spoke with recently, told me she had warned Pope Francis about the dangers of algorithms designed to promote lies and disinformation. "Francis understood the impact of lies," she said. She explained to the Pope how Facebook had destroyed the political landscape in the Philippines, where the platform’s engagement algorithms allowed disinformation to spread like wildfire. "I said — 'this is literally an incentive structure that is rewarding lies.'"

According to Ressa, AI evangelists in Silicon Valley are acquiring "the power of gods without the wisdom of God." It is power, she said, "that is in the hands of men whose arrogance prevents them from seeing the impact of rolling out technology that's not safe for their kids."

The battle for humanity's future 

The Vatican has always understood how to use technology, engineering and spectacle to harness devotion and wield power — you only have to walk into St Peter’s Basilica to understand that. I spoke to a Vatican priest, on his way to Rome to pay his respects to the Pope. He told me why the Vatican understands the growing power of artificial intelligence so well. "We know perfectly well," he said, "that certain structures can become divinities. In the end, technology should be a tool for living — it should not be the end of man."

A version of this story was published in this week’s Coda Currents newsletter. Sign up here.

When I’m 125?

I grew up in rural Idaho in the late 80s and early 90s. My childhood was idyllic. I’m the oldest of five children. My father was an engineer-turned-physician, and my mother was a musician — she played the violin and piano. We lived in an amazing community, with great schools, dear friends and neighbors. There was lots of skiing, biking, swimming, tennis, and time spent outdoors. 

If something was very difficult, I was taught that you just had to reframe it as a small or insignificant moment compared to the vast eternities and infinities around us. It was a Mormon community, and we were a Mormon family, part of generations of Mormons. I can trace my ancestry back to the early Mormon settlers. Our family was very observant: going to church every Sunday, and deeply faithful to the beliefs and tenets of the Mormon Church.

There's a belief in Mormonism: "As man is, God once was. As God is, man may become." And since God is perfect, the belief is that we too can one day become perfect. 

We believed in perfection. And we were striving to be perfect—realizing that while we couldn't be perfect in this life, we should always attempt to be. We worked for excellence in everything we did.

It was an inspiring idea to me, but growing up in a world where I felt perfection was always the expectation was also tough. 

In a way, I felt like there were two of me. There was this perfect person that I had to play and that everyone loved. And then there was this other part of me that was very disappointed by who I was—frustrated, knowing I wasn't living up to those same standards. I really felt like two people.

This perfectionism found its way into many of my pursuits. I loved to play the cello. Yo-Yo Ma was my idol. I played quite well and had a fabulous teacher. At 14, I became the principal cellist for our all-state orchestra, and later played in the World Youth Symphony at Interlochen Arts Camp and in a National Honors Orchestra. I was part of a group of kids who were all playing at the highest level. And I was driven. I wanted to be one of the very, very best.

I went on to study at Northwestern in Chicago and played there too. I was the youngest cellist in the studio of Hans Jensen, and was surrounded by these incredible musicians. We played eight hours a day, time filled with practice, orchestra, chamber music, studio, and lessons. I spent hours and hours working through the tiniest movements of the hand, individual shifts, weight, movement, repetition, memory, trying to find perfect intonation, rhythm, and expression. I loved that I could control things, practice, and improve. I could find moments of perfection.

I remember one night being in the practice rooms, walking down the hall, and hearing some of the most beautiful playing I'd ever heard. I peeked in and didn’t recognize the cellist. They were a former student now warming up for an audition with the Chicago Symphony. 

Later on, I heard they didn’t get it. I remember thinking, "Oh my goodness, if you can play that well and still not make it..." It kind of shattered my worldview—it really hit me that I would never be the very best. There was so much talent, and I just wasn't quite there. 

I decided to step away from the cello as a profession. I’d play for fun, but not make it my career. I’d explore other interests and passions.

As I moved through my twenties, my relationship with Mormonism started to become strained. When you’re suddenly 24, 25, 26 and not married, that's tough. Brigham Young [the second and longest-serving prophet of the Mormon Church] said that if you're not married by 30, you're a menace to society. It just became more and more awkward to be involved. I felt like people were wondering, “What’s wrong with him?” 

Eventually, I left the church. And I suddenly felt like a complete person — it was a really profound shift. There weren’t two of me anymore. I didn’t have to put on a front. Now that I didn’t have to worry about being that version of perfect, I could just be me. 

But the desire for perfection was impossible for me to kick entirely. I was still excited about striving, and I think a lot of this energy and focus then poured into my work and career as a designer and researcher. I worked at places like the Mayo Clinic, considered by many to be the world’s best hospital. I studied in London at the Royal College of Art, where I received my master’s on the prestigious Design Interactions course exploring emerging technology, futures, and speculative design. I found I loved working with the best, and being around others who were striving for perfection in similar ways. It was thrilling.

One of the big questions I started to explore during my master's studies in design, and I think in part because I felt this void of meaning after leaving Mormonism, was “what is important to strive for in life?” What should we be perfecting? What is the goal of everything? Or in design terms, “What’s the design intent of everything?”

I spent a huge amount of time with this question, and in the end I came to the conclusion that it’s happiness. Happiness is the goal. We should strive in life for happiness. Happiness is the design intent of everything. It is the idea that no matter what we do, no matter what activity we undertake, we do it because we believe doing it or achieving the thing will make us better off or happier. This fit really well with the beliefs I grew up with, but now I had a new, non-religious way in to explore it.

The question then became: What is happiness? I came to the conclusion that happiness is chemical—an evolved sensation that indicates when our needs in terms of survival have been met. You're happy when you have a wonderful meal because your body has evolved to identify good food as improving your chances of survival. The same is true for sleep, exercise, sex, family, friendships, meaning, purpose–everything can be seen through this evolutionary happiness lens. 

So if happiness evolved as the signal for survival, then I wanted to optimize my survival to optimize that feeling. What would it look like if I optimized the design of my life for happiness? What could I change to feel the most amount of happiness for the longest amount of time? What would life look like if I lived perfectly with this goal in mind?

I started measuring my happiness on a daily basis, and then making changes to my life to see how I might improve it. I took my evolutionary basic needs for survival and organized them in terms of how quickly their absence would kill me as a way to prioritize interventions. 

Breathing was first on the list — we can’t last long without it. So I tried to optimize my breathing. I didn’t really know how to breathe or how powerful breathing is—how it changes the way we feel, bringing calm and peace, or energy and alertness. So I practiced breathing.

The optimizations continued: diet, sleep, exercise, material possessions, friends, family, purpose, along with a shedding of any behavior or activity that I couldn’t see meaningfully improving my happiness. For example, I looked at clothing and fashion, and couldn’t see any real happiness impact. So I got rid of almost all of my clothing, and have worn the same white t-shirts and grey or blue jeans for the past 15 years.

I got involved in the Quantified Self (QS) movement and started tracking my heart rate, blood pressure, diet, sleep, exercise, cognitive speed, happiness, creativity, and feelings of purpose. I liked the data. I’d go to QS meet-ups and conferences with others doing self experiments to optimize different aspects of their lives, from athletic performance, to sleep, to disease symptoms.

I also started to think about longevity. If I was optimizing for happiness through these evolutionary basics, how long could one live if these needs were perfectly satisfied? I started to put on my websites – “copyright 2103”. That’s when I’ll be 125. That felt like a nice goal, and something that I imagined could be completely possible — especially if every aspect of my life was optimized, along with future advancements in science and medicine.

In 2022, some 12 years later, I came across Bryan Johnson. A successful entrepreneur, also ex-Mormon, optimizing his health and longevity through data. It was familiar. He had come to this kind of life optimization in a slightly different way and for different reasons, but I was so excited by what he was doing. I thought, "This is how I’d live if I had unlimited funds."

He said he was optimizing every organ and body system: What does our heart need? What does our brain need? What does our liver need? He was optimizing the biomarkers for each one. He said he believed in data, honesty and transparency, and following where the data led. He was open to challenging societal norms. He said he had a team of doctors, had reviewed thousands of studies to develop his protocols. He said every calorie had to fight for its life to be in his body. He suggested everything should be third-party tested. He also suggested that in our lifetime advances in medicine would allow people to live radically longer lives, or even to not die. 

These ideas all made sense to me. There was also a kind of ideal of perfect and achieving perfection that resonated with me. Early on, Bryan shared his protocols and data online. And a lot of people tried his recipes and workouts, experimenting for themselves. I did too. It also started me thinking again more broadly about how to live better, now with my wife and young family. For me this was personal, but also exciting to think about what a society might look like when we strived at scale for perfection in this way. Bryan seemed to be someone with the means and platform to push this conversation.

I think all of my experience to this point was the setup for, ultimately, my deep disappointment in Bryan Johnson and my frustrating experience as a participant in his BP5000 study.

In early 2024 there was a callout for people to participate in a study to look at how Bryan’s protocols might improve their health and wellbeing. He said he wanted to make it easier to follow his approach, and he started to put together a product line of the same supplements that he used. It was called Blueprint – and the first 5000 people to test it out would be called the Blueprint 5000, or BP5000. We would measure our biomarkers and follow his supplement regime for three months and then measure again to see its effects at a population level. I thought it would be a fun experiment, participating in real citizen science moving from n=1 to n=many. We had to apply, and there was a lot of excitement among those of us who were selected. They were a mix of people who had done a lot of self-quantification, nutritionists, athletes, and others looking to take first steps into better personal health. We each had to pay about $2,000 to participate, covering Blueprint supplements and the blood tests, and we were promised that all the data would be shared and open-sourced at the end of the study.

The study began very quickly, and there were red flags almost immediately around the administration of the study, with product delivery problems, defective product packaging, blood test problems, and confusion among participants about the protocols. There wasn’t even a way to see if participants died during the study, which felt weird for work focused on longevity. But we all kind of rolled with it. We wanted to make it work.

We took baseline measurements, weighed ourselves, measured body composition, uploaded Whoop or Apple Watch data, did blood tests covering hundreds of biomarkers, and completed a number of self-reported surveys on things like sexual health and mental health. I loved this type of self-measurement.

Participants connected over Discord, comparing notes, and posting about our progress. 

Right off, some effects were incredible. I had a huge amount of energy. I was bounding up the stairs, doing extra pull-ups without feeling tired. My joints felt smooth. I noticed I was feeling bulkier — I had more muscle definition as my body fat percentage started to drop.

There were also some strange effects. For instance, I noticed in a cold shower, I could feel the cold, but I didn’t feel any urgency to get out. Same with the sauna. I had weird sensations of deep focus and vibrant, vivid vision. I started having questions—was this better? Had I deadened sensitivity to pain? What exactly was happening here?

Then things went really wrong. My ears started ringing — high-pitched and constant. I developed tinnitus. And my sleep got wrecked. I started waking up at two, three, four AM, completely wired, unable to turn off my mind. It was so bad I had to stop all of the Blueprint supplements after only a few weeks.

On the Discord channel where we were sharing our results, I saw Bryan talking positively about people having great experiences with the stack. But when I or anyone else mentioned adverse side effects, the response tended to be: “wait until the study is finished and see if there’s a statistical effect to worry about."

So positive anecdotes were fine, but when it came to negative ones, suddenly, we needed large-scale data. That really put me off. I thought the whole point was to test efficacy and safety in a data-driven way. And the side effects were not ignorable.

Many of us were trying to help each other figure out which interventions in the stack were driving different side effects, but we were never given the “1,000+ scientific studies” that Blueprint was supposedly built upon, which would have included side-effect reporting. We struggled even to get a complete list of the interventions in the stack from the Blueprint team, with numbers evolving from 67 to 74 over the course of the study. It was impossible to tell which ingredient in which product was doing what to people.

We were told to no longer discuss side effects in the Discord but to email Support with issues. I was even kicked off the Discord at one point for “fear mongering” because I was encouraging people to share the side effects they were experiencing.

The Blueprint team were also making changes to the products mid-study, changing protein sources and allulose levels, leaving people with months’ worth of expensive essentially defective products, and surely impacting study results.

When Bryan then announced they were launching the BP10000, allowing more people to buy his products, even before the BP5000 study had finished, and without addressing all of the concerns about side effects, it suddenly became clear to me and many others that we had just been part of a launch and distribution plan for a new supplement line, not participants in a scientific study.

To this day, a year later, Bryan has still not released the full BP5000 data set to the participants as he promised to do. In fact he has ghosted participants and refuses to answer questions about the BP5000. He blocked me on X recently for bringing it up. I suspect that this is because the data is really bad, and my worries line up with reporting from the New York Times, where leaked internal Blueprint data suggests many of the BP5000 participants experienced negative side effects, with some even having serious drops in testosterone or becoming pre-diabetic.

I’m still angry today about how this all went down. I’m angry that I was taken in by someone I now feel was a snake oil salesman. I’m angry that the marketing needs of Bryan’s supplement business and his need to control his image overshadowed the opportunity to generate some real science. I’m angry that Blueprint may be hurting some people. I’m angry because the way Bryan Johnson has gone about this grates on my sense of perfection.

Bryan’s call to “Don’t Die” now rings in my ears as “Don’t Lie” every time I hear it. I hope the societal mechanisms for truth will be able to help him make a course correction. I hope he will release the BP5000 data set and apologize to participants. But Bryan Johnson feels to me like an unstoppable marketing force at this point — full A-list influencer status — and sort of untouchable, with no use for those of us interested in the science and data.

This experience has also had me reflecting on and asking bigger questions of the longevity movement and myself.

We’re ignoring climate breakdown. The latest indications suggest we’re headed toward three degrees of warming. These are societal collapse numbers, in the next 15 years. When there are no bees and no food, catastrophic fires and floods, your Heart Rate Variability doesn’t really matter. There’s a sort of “bunker mentality” prevalent in some of the longevity movement, and wider tech — we can just ignore it, and we’ll magically come out on the other side, sleep scores intact. 

I’ve also started to think that calls to live forever are perhaps misplaced, and that in fact we have evolved to die. Death is a good thing. A feature, not a bug. It allows for new life—we need children, young people, new minds who can understand this context and move us forward. I worry that older minds are locked into outdated patterns of thinking, mindsets trained in and for a world that no longer exists, thinking that destroyed everything in the first place, and which is now actually detrimental to progress. The life cycle—bringing in new generations with new thinking—is the mechanism our species has evolved to function within. Survival is and should be optimized for the species, not the individual.

I love thinking about the future. I love spending time there, understanding what it might look like. It is a huge part of my design practice. But as much as I love the future, the most exciting thing to me is the choices we make right now in each moment. All of that information from our future imaginings should come back to help inform current decision-making and optimize the choices we have now. But I don’t see this happening today. Our current actions as a society seem totally disconnected from any optimized, survivable future. We’re not learning from the future. We’re not acting for the future.

We must engage with all outcomes, positive and negative. We're seeing breakthroughs in many domains happening at an exponential rate, especially in AI. But, at the same time, I see job displacement, huge concentration of wealth, and political systems that don't seem capable of regulating or facilitating democratic conversations about these changes. Creators must own it all. If you build AI, take responsibility for the lost job, and create mechanisms to share wealth. If you build a company around longevity and make promises to people about openness and transparency, you have to engage with all the positive outcomes and negative side effects, no matter what they are.

I’m sometimes overwhelmed by our current state. My striving for perfection and optimizations throughout my life have maybe been a way to give me a sense of control in a world where at a macro scale I don’t actually have much power. We are in a moment now where a handful of individuals and companies will get to decide what’s next. A few governments might be able to influence those decisions. Influencers wield enormous power. But most of us will just be subject to and participants in all that happens. And then we’ll die.

But until then my ears are still ringing.

This article was put together based on interviews J. Paul Neeley did with Isobel Cockerell and Christopher Wylie, as part of their reporting for CAPTURED, our new audio series on how Silicon Valley’s AI prophets are choosing our future for us. You can listen now on Audible.

This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?

Captured: how Silicon Valley is building a future we never chose

In April last year I was in Perugia, at the annual international journalism festival. I was sitting in a panel session about whether AI marked the end of journalism, when a voice note popped up on my Signal. 

It came from Christopher Wylie. He’s a data scientist and the whistleblower who cracked open the Cambridge Analytica scandal in 2018. I had just started working with him on a new investigation into AI. Chris was supposed to be meeting me, but he had found himself trapped in Dubai in a party full of Silicon Valley venture capitalists.

“I don’t know if you can hear me — I’m in the toilet at this event, and people here are talking about longevity, how to live forever, but also prepping for when people revolt and when society gets completely undermined,” he had whispered into his phone. “You have in another part of the world, a bunch of journalists talking about how to save democracy. And here, you've got a bunch of tech guys thinking about how to live past democracy and survive.”

A massive storm and a once-in-a-generation flood had paralyzed Dubai when Chris was on a layover on his way to Perugia. He couldn’t leave. And neither could the hundreds of tech guys who were there for a crypto summit. The freakish weather hadn’t stopped them partying, Chris told me over a frantic Zoom call. 

“You're wading through knee-deep water, people are screaming everywhere, and then…  What do all these bros do? They organize a party. It's like the world is collapsing outside and yet you go inside and it's billionaires and centimillionaires having a party,” he said. “Dubai right now is a microcosm of the world. The world is collapsing outside and the people are partying.”

Chris and I eventually managed to meet up. And for over a year we worked together on a podcast that asks what is really going on inside the tech world.  We looked at how the rest of us —  journalists, artists, nurses, businesses, even governments — are being captured by big tech’s ambitions for the future and how we can fight back. 

Our reporting took us around the world from the lofty hills of Twin Peaks in San Francisco to meet the people building AI models, to the informal settlements of Kenya to meet the workers training those models.

One of these people was Mercy Chimwani, who we visited in her makeshift house with no roof on the outskirts of Nairobi. There was mud beneath our feet, and above you could see the rainclouds through a gaping hole where the unfinished stairs met the sky. When it rained, Mercy told us, water ran right through the house. It’s hard to believe, but she worked for Meta. 

Mercy was a content moderator, hired by the middlemen Meta used to source employees. Her job was to watch the internet’s most horrific images and video –  training the company’s system so it can automatically filter out such content before the rest of us are exposed to it. 

She was paid around a dollar an hour for work that left her so traumatized that she couldn’t sleep. And when she and her colleagues tried to unionize, she was laid off. Mercy was part of the invisible, ignored workforce in the Global South that enables our frictionless life online for little reward. 

Of course, we went to the big houses too — where the other type of tech worker lives. The huge palaces made of glass and steel in San Francisco, where the inhabitants believe the AI they are building will one day help them live forever, and discover everything there is to know about the universe. 

In Twin Peaks, we spoke to Jeremy Nixon, the creator of AGI House San Francisco (AGI for Artificial General Intelligence). Nixon described an apparently utopian future, a place where we never have to work, where AI does everything for us, and where we can install the sum of human knowledge into our brains. “The intention is to allow every human to know everything that’s known,” he told me. 

Later that day, we went to a barbecue in Cupertino and got talking to Alan Boehme, once a chief technology officer for some of the biggest companies in the world, and now an investor in AI startups. Boehme told us how important it was, from his point of view, that tech wasn’t stymied by government regulation. “We have to be worried that people are going to over-regulate it. Europe is the worst, to be honest with you,” he said. “Let's look at how we can benefit society and how this can help lead the world as opposed to trying to hold it back.”

I asked him if regulation wasn’t part of the reason we have democratically elected governments: to ensure that all people are kept safe, and that some people aren’t left behind by the pace of change. Shouldn’t the governments we elect be the ones deciding whether we regulate AI, and not the people at this Cupertino barbecue?

“You sound like you're from Sweden,” Boehme responded. “I'm sorry, that's social democracy. That is not what we are here in the U.S. This country is based on a Constitution. We're not based on everybody being equal and holding people back. No, we're not in Sweden.”

As we reported for the podcast, we came to a gradual realization – what’s being built in Silicon Valley isn’t just artificial intelligence, it’s a way of life — even a religion. And it’s a religion we might not have any choice but to join. 

In January, the Vatican released a statement in which it argued that we’re in danger of worshiping AI as God. It's an idea we'd discussed with Judy Estrin, who worked on building some of the earliest iterations of the internet. As a young researcher at Stanford in the 1970s, Estrin was building some of the very first networked connections. She is no technophobe, fearful of the future, but she is worried about the zealotry she says is taking over Silicon Valley.


“If you worship innovation, if you worship anything, you can't take a step back and think about guardrails,” she said about the unquestioning embrace of AI. “So we, from a leadership perspective, are very vulnerable to techno populists who come out and assert that this is the only way to make something happen.” 

The first step toward reclaiming our lost agency, as AI aims to capture every facet of our world, is simply to pay attention. I've been struck by how rarely we actually listen to what tech leaders are explicitly saying about their vision of the future. 

There's a tendency to dismiss their most extreme statements as hyperbole or marketing, but what if they're being honest? What if they truly believe humans, or at least most humans, are replaceable, that traditional concepts of humanity are outdated, that a technological "god" should supersede us? These aren't just ideological positions – they're the foundations for the world being built around us right now. 

In our series, we explore artificial intelligence as something that affects our culture, our jobs, our media and our politics. But we should also ask what tech founders and engineers are really building with AI, or what they think they’re building. Because if their vision of society does not have a place for us in it, we should be ready to reclaim our destiny – before our collective future is captured.

Our audio documentary series, CAPTURED: The Secret Behind Silicon Valley’s AI Takeover is available now on Audible. Do please tune in, and you can dig deeper into our stories and the people we met during the reporting below.


This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us? You can listen to the Captured audio series on Audible now.

The post Captured: how Silicon Valley is building a future we never chose appeared first on Coda Story.
