“It’s a devil’s machine.”

Tech leaders say AI will bring us eternal life, help us spread out into the stars, and build a utopian world where we never have to work. They describe a future free of pain and suffering, in which all human knowledge will be wired into our brains. Their utopian promises sound more like proselytizing than science, as if AI were the new religion and the tech bros its priests. So how are real religious leaders responding?

As Georgia's first female Baptist bishop, Rusudan Gotsiridze challenges the doctrines of the Orthodox Church, and is known for her passionate defence of women’s and LGBTQ+ rights. She stands at the vanguard of old religion, an example of its attempts to modernize — so what does she think of the new religion being built in Silicon Valley, where tech gurus say they are building a superintelligent, omniscient being in the form of Artificial General Intelligence?

Gotsiridze first tried to use AI a few months ago. The result chilled her to the bone. It made her question whether artificial intelligence was really a benevolent force, and consider how she should respond to it from the perspective of her religious beliefs and practices.

In this conversation with Coda’s Isobel Cockerell, Bishop Gotsiridze discusses the religious questions around AI: whether AI can really help us hack back into paradise, and what to make of the outlandish visions of Silicon Valley’s powerful tech evangelists.

Bishop Rusudan Gotsiridze and Isobel Cockerell in conversation at the ZEG Storytelling Festival in Tbilisi last month. Photo: Dato Koridze.

This conversation took place at ZEG Storytelling Festival in Tbilisi in June 2025. It has been lightly edited and condensed for clarity. 

Isobel: Tell me about your relationship with AI right now. 

Rusudan: Well, I’d like to say I’m an AI virgin. But maybe that’s not fully honest. I had one contact with ChatGPT. I didn’t ask it to write my Sunday sermon. I just asked it to draw my portrait. How narcissistic of me. I said, “Make a portrait of Bishop Rusudan Gotsiridze.” I waited and waited. The portrait looked nothing like me. It looked like my mom, who passed away ten years ago. And it looked like her when she was going through chemo, with her puffy face. It was really creepy. So I will think twice before asking ChatGPT anything again. I know it’s supposed to be magical... but that wasn’t the best first date. 

AI-generated image via ChatGPT / OpenAI.

Isobel: What went through your mind when you saw this picture of your mother? 

Rusudan: I thought, “Oh my goodness, it’s really a devil’s machine.” How could it go so deep? Find my facial features and connect them with someone who didn’t look like me? I take more after my paternal side. The only thing I could recognize was the priestly collar and the cross. Okay. Bishop. Got it. But yes, it was really very strange.

Isobel: I find it so interesting that you talk about summoning the dead through Artificial Intelligence. That’s something happening in San Francisco as well. When I was there last summer, we heard about this movement that meets every Sunday. Instead of church, they hold what they call an “AI séance,” where they use AI to call up the spirit world. To call up the dead. They believe the generative art that AI creates is a kind of expression of the spirit world, an expression of a greater force.

They wouldn’t let us attend. We begged, but it was a closed cult. Still, a bunch of artists had the exact same experience you had: they called up these images and felt like they were summoning them, not from technology, but from another realm. 

Rusudan: When you’re a religious person dealing with new technologies, it’s uncomfortable. Religion — Christianity, Protestantism, and many others — has earned a very cautious reputation throughout history because we’ve always feared progress.

Remember when we thought printing books was the devil’s work? Later, we embraced it. We feared vaccinations. We feared computers, the internet. And now, again, we fear AI.

It reminds me of the old fable about a young shepherd who loved to prank his friends by shouting “Wolves! Wolves!” until one day, the wolves really came. He shouted, but no one believed him anymore.

We’ve been shouting “wolves” for centuries. And now, I’m this close to shouting it again, but I’m not sure. 

Isobel: You said you wondered if this was the devil’s work when you saw that picture of your mother. It’s quite interesting. In Silicon Valley, people talk a lot about AI bringing about the rapture, apocalypse, hell.

They talk about the real possibility that AI is going to kill us all, about the extinction risk of building superintelligent models. Some people working in AI are predicting we’ll all be dead by 2030.

On the other side, people say, “We’re building utopia. We’re building heaven on Earth. A world where no one has to work or suffer. We’ll spread into the stars. We’ll be freed from death. We’ll become immortal.”

I’m not a religious person, but what struck me is the religiosity of these promises. And I wanted to ask you — are we hacking our way back into the Garden of Eden? Should we just follow the light? Is this the serpent talking to us?

Rusudan: I was listening to a Google scientist. He said that in the near future, we’re not heading to utopia but dystopia. It’s going to be hell on Earth. All the world’s wealth will be concentrated in a small circle, and poverty will grow. Terrible things will happen before we reach utopia.

Listening to him, it really sounded like the Book of Revelation. First the Antichrist comes, and then Christ.

Because of my Protestant upbringing, I’ve heard so many lectures about the exact timeline of the Second Coming. Some people even name the day, hour, place. And when those times pass, they’re frustrated. But they carry on calculating. 

It’s hard for me to speak about dystopia, utopia, or the apocalyptic timeline, because I know nothing is going to be exactly as predicted.

The only thing I’m afraid of in this Artificial Intelligence era is my 2-year-old niece. She’s brilliant. You can tell by her eyes. She doesn’t speak our language yet. But phonetically, you can hear Georgian, English, Russian, even Chinese words from the reels she watches non-stop.

That’s what I’m afraid of: us constantly watching our devices and losing human connection. We’re going to have a deeply depressed young generation soon. 

I used to identify as a social person. I loved being around people. That’s why I became a priest. But now, I find it terribly difficult to pull myself out of my house to be among people. And it’s not just a technology problem — it’s a human laziness problem.

When we find someone or something to take over our duties, we gladly hand them over. That’s how we’re using this new technology. Yes, I’m in sermon mode now — it’s a Sunday, after all. 

I want to tell you an interesting story from my previous life. I used to be a gender expert, training people about gender equality. One example I found fascinating: in a Middle Eastern village without running water, women would carry vessels to the well every morning and evening. It was their duty.

Western gender experts saw this and decided to help. They installed a water supply. Every woman got running water in her kitchen: happy ending. But very soon, the pipeline was intentionally broken by the women. Why? Because that water-fetching routine was the only excuse they had to leave their homes and see their friends. With running water, they became captives to their household duties.

One day, we may also not understand why we’ve become captives to our own devices. We’ll enjoy staying home and not seeing our friends and relatives. I don’t think we’ll break that pipeline and go out again to enjoy real life.

Isobel: It feels like it’s becoming more and more difficult to break that pipeline. It’s not really an option anymore to live without the water, without technology. 

Sometimes I talk with people in a movement called the New Luddites. They also call themselves the Dumbphone Revolution. They want to create a five-to-ten percent faction of society which doesn’t have a smartphone, and they say that will help us all, because it will mean the world will still have to cater to people who don’t participate in big tech, who don’t have it in their lives. But is that the answer for all of us? To just smash the pipeline to restore human connection? Or can we have both?

Rusudan: I was a new mom in the nineties in Georgia. I had two children at a time when we didn’t have running water. I had to wash my kids’ clothes in the yard in cold water, summer and winter. I remember when we bought our first washing machine. My husband and I sat in front of it for half an hour, watching it go round and round. It was paradise for me for a while.

Now this washing machine is there and I don't enjoy it anymore. It's just a regular thing in my life. And when I had to wash my son’s and daughter-in-law’s wedding outfits, I didn’t trust the machine. I washed those clothes by hand. There are times when it’s important to do things by hand.

Of course, I don’t want to go back to a time without the internet when we were washing clothes in the yard, but there are things that are important to do without technology.

I enjoy painting, and I paint quite a lot with watercolors. So far, I can tell which paintings are AI and which are real. Every time I look at an AI-made watercolor, I can tell it’s not a human painting. It is a technological painting. And it's beautiful. I know I can never compete with this technology.

But that feeling, when you put your brush in, the water — sometimes I accidentally put it in my coffee cup — and when you put that brush on the paper and the pigment spreads, that feeling can never be replaced by any technology. 

Isobel: As a writer, I'm now pretty good, I think, at knowing if something is AI-written or not. I'm sure in the future it will get harder to tell, but right now, there are little clues. There’s this horrible construction that AI loves: something is not just X, it’s Y. For example: “Rusudan is not just a bishop, she’s an oracle for the LGBTQ community in Georgia.” Even if you tell it to stop using that construction, it can’t. Same for the endless em-dashes: I can’t get ChatGPT to stop using them no matter how many times or how adamantly I prompt it. It's just bad writing.

It’s missing that fingerprint of imperfection that a human leaves: whether it’s an unusual sentence construction or an interesting word choice, I’ve started to really appreciate those details in real writing. I've also started to really love typos. My whole life as a journalist I was horrified by them. But now when I see a typo, I feel so pleased. It means a human wrote it. It’s something to be celebrated. It’s the same with the idea that you dip your paintbrush in the coffee pot and there’s a bit of coffee in the painting. Those are the things that make the work we make alive. 

There’s a beauty in those imperfections, and that’s something AI has no understanding of. Maybe it’s because the people building these systems want to optimize everything. They are in pursuit of total perfection. But I think that the pursuit of imperfection is such a beautiful thing and something that we can strive for.

Rusudan: Another thing I hope for with this development of AI is that it’ll change the formula of our existence. Right now, we’re constantly competing with each other. The educational system is that way. Business is that way. Everything is that way. My hope is that we can never be as smart as AI. Maybe one day, our smartness, our intelligence, will be defined not by how many books we have read, but by how much we enjoy reading books, enjoy finding new things in the universe, and how well we live life and are happy with what we do. I think there is potential in the idea that we will never be able to compete with AI, so why don’t we enjoy the book from cover to cover, or the painting with the coffee pigment or the paint? That’s what I see in the future, and I’m a very optimistic person. I suppose here you’re supposed to say “Hallelujah!”

Isobel: In our podcast, CAPTURED, we talked with engineers and founders in Silicon Valley whose dream for the future is to install all human knowledge in our brains, so we never have to learn anything again. Everyone will speak every language! We can rebuild the Tower of Babel! They talk about the future as a paradise. But my thought was, what about finding out things? What about curiosity? Doesn’t that belong in paradise? Certainly, as a journalist, for me, some people are in it for the impact and the outcome, but I’m in it for finding out, finding the story — that process of discovery.

Rusudan: It’s interesting — this idea of paradise as a place where we know everything. One of my students once asked me the same thing you just did. “What about the joy of finding new things? Where is that, in paradise?” Because in the Bible, Paul says that right now, we live in a dimension where we know very little, but there will be a time when we know everything.

In the Christian narrative, paradise is a strange, boring place where people dress in funny white tunics and play the harp. And I understand that idea back then was probably a dream for those who had to work hard for everything in their everyday life — they had to chop wood to keep their family warm, hunt to get food for the kids, and of course for them, paradise was the place where they could just lie around and do nothing.

But I don’t think paradise will be a boring place. I think it will be a place where we enjoy working.

Isobel: Do you think AI will ever replace priests?

Rusudan: I was told that one day there will be AI priests preaching sermons better than I do. People are already asking ChatGPT questions they’re reluctant to ask a priest or a psychologist. Because it’s judgment-free and their secrets are safe…ish. I don’t pretend I have all the answers because I don’t. I only have this human connection. I know there will be questions I cannot answer, and people will go and ask ChatGPT. But I know that human connection — the touch of a hand, eye contact — can never be replaced by AI. That’s my hope. So we don’t need to break those pipelines. We can enjoy the technology, and the human connection too.

Your Early Warning System

This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us? You can listen to the Captured audio series on Audible now.

The post “It’s a devil’s machine.” appeared first on Coda Story.

Who owns the rights to your brain?

Jared Genser and Rafael Yuste are an unlikely pair. Yuste, a professor at Columbia University, spends his days in neuroscience labs, using lasers to experiment on the brains of mice. Genser has traveled the world as an international human rights lawyer representing prisoners in 30 countries. But when they met, the two became fast friends. They found common ground in their fascination with neurorights – in “human rights,” as their foundation’s website puts it, “for the Age of Neurotechnology.” 

Together, they asked themselves, and the world: what happens when computers start to read our minds? Who owns our thoughts, anyway? This technology is being developed right now, but as of this moment, what happens to your neural data is a legal black box. So what does the fight to build protections for our brains look like? I sat down with Rafael and Jared to find out.

This conversation has been edited for length and clarity.

Q: Rafael, can you tell me how your journey into neurorights started?

Rafael: The story starts with a particular moment in my career. It happened about ten years ago while I was working in a lab at Columbia University in New York. Our research was focused on understanding how the cerebral cortex works. We were studying mice, because the mouse brain is a good model for the human brain. And what we were trying to do was to implant images into the brains of mice so that they would behave as if they were seeing something, except they weren't seeing anything.

Q: How did that work? 

Rafael: We were trying to take control of the mouse’s visual perception. So we’d implant neurotechnology into a mouse using lasers, which would allow us to record the activity of the part of the brain responsible for vision, the visual cortex, and change the activity of those neurons. With our lasers, we could map the activity of this part of the brain and try to control it. 

These mice were looking at a screen that showed them a particular image: black and white bars of light with very high contrast. We used to talk, tongue-in-cheek, about playing the piano with the brain.

We trained the mice to lick from a little spout of juice whenever they saw that image. With our new technology, we were able to decode the brain signals that corresponded to this image in the mouse and — we hoped — play them back to trick the mice into seeing the image again, even though it wasn’t there.

Q: So you artificially activated particular neurons in the brain to make it think it had seen that image?

Rafael: These are little laboratory mice. We make a surgical incision and we implant in their skull a transparent chamber so that we can see their brains from above with our microscope, with our lasers. And we use our lasers to optically penetrate the brain. We use one laser to image, to map the activity of these neurons. And we use a second laser, a second wavelength, to activate these neurons again. All of this is done with a very sophisticated microscope and computer equipment. 

Q: So what happened when you tried to artificially activate the mouse’s neurons, to make it think it was looking at the picture of the black and white bars? 

Rafael: When we did that, the mouse licked from the spout of juice in exactly the same way as if it was looking at this image, except that it wasn't. We were putting that image into its brain. The mouse’s behavior when we took over its visual perception was identical to when it was actually seeing the real image.

Q: It must have been a huge breakthrough.

Rafael: Yes, I remember it perfectly. It was one of the most salient days of my life. We were actually altering the behavior of the mice by playing the piano with their cortex. We were ecstatic. I was super happy in the lab, making plans.

And then when I got home, that's when it hit me. I said, “Wait, wait, wait. This means humans will be able to do the same thing to other humans.”

I felt this responsibility, like it was a double-edged sword. That night I didn't sleep, I was shocked. I talked to my wife, who works in human rights. And I decided that I should start to get involved in cleaning up the mess.

Q: What do you mean by that?

Rafael: I felt the responsibility of ensuring that these powerful methods that could decode brain activity and manipulate perception had to be regulated to ensure that they were used for the benefit of humanity.

Q: Jared, can you tell me how you came into this? 

Jared: Rafael and I met about four years ago. I'm an international human rights lawyer based in Washington and very well known globally for working in that field. I had a single hour-long conversation with Rafa when we met, and it completely transformed my view of the human rights challenges we’ll face in this century. I had no idea about neurotechnologies, where they were, or where they might be heading. Learning how far along they have come and what’s coming in just the next few years — I was blown away. I was both excited and concerned as a human rights lawyer about the implications for our common humanity.

Q: What was your reaction when you heard of the mouse experiment?

Jared: Immediately, I thought of The Matrix. He told me that what can be done in a mouse today could be done in a chimpanzee tomorrow and a human after that. I was shocked by the possibilities. While implanting images into a human brain is still far off, there’s every reason to expect it will eventually be possible.

Q: Can you talk me through some of the other implications of this technology? 

Jared: Within the next few years, we’re expected to have wearable brain-computer interfaces that can decode thought to text at 75–80 words per minute with 90 percent accuracy.

That will be an extraordinary revolution in how we interact with technology. Apple is already thinking about this—they filed a patent last year for the next-generation AirPods with built-in EEG scanners. This is undoubtedly one of the applications they are considering.

In just a few years, if you have an iPhone in your pocket and are wearing earbuds, you could think about opening a text message, dictating it, and sending it—all without touching a device. These developments are exciting. 

Rafael: I imagine that we'll be hybrid. And part of our processing will happen with devices connected to our brains, to our nervous system. And this could enhance our perception. Our memories — you would be able to do the equivalent of a web search mentally. And that's going to change our behavior. That's going to change the way we absorb information.

Jared: Ultimately, there's every reason to expect we’ll be able to cure chronic pain. It’s already been shown in labs that an implantable brain-computer interface can manage pain for people with chronic pain conditions. By turning off misfiring neurons, you can reduce the pain they feel.

But if you can turn off the neurons, you can turn on the neurons. And that would mean you'll have a wearable cap or hat that could torture a person simply by flipping a switch. In just a few years, physical torture may no longer be necessary because of brain-computer interfaces. 

And if these devices can decode your thoughts, that raises serious concerns. What will the companies behind these technologies be able to do with your thoughts? Could they be decoded against your wishes and used for purposes beyond what the devices are advertised for? Those are critical questions we need to address.

Q: How did you start thinking about ways to build rights and guardrails around neurotechnology?

Rafael: I was inspired by the Manhattan Project, where scientists who developed nuclear technology were also involved in regulating its use. That led me to think that we should take a similar approach with neurotechnology — where the power to read and manipulate brain activity needs to be regulated. And that’s how we came up with the idea of the Neurorights Foundation.

So in 2017, I organized a meeting at Columbia University’s Morningside campus with experts from various fields to discuss the ethical and societal implications of neurotechnology. And this is where we came up with the idea of neurorights — a set of rights that would protect brain activity and brain data.

Jared: If you look at global consumer data privacy laws, they protect things like biometric, genetic, and biological information. But neural data doesn't fall under any of these categories. Neural data is electrical and not biological, so it isn't considered biometric data.

There are few, if any, safeguards to protect users from having their neural data used for purposes beyond the intended function of the devices they’ve purchased.

So because neural data doesn't fit within existing privacy protections, it isn't covered by state privacy laws. To address this, we worked with Colorado to adopt the first-ever amendment to its Privacy Act, which defines neural data and includes it under sensitive, protected data.

Rafael: We identified five areas of concern where neurotechnology could impact human rights:

The first is the right to mental privacy – ensuring that the content of our brain activity can't be decoded without consent.

The second is the right to our own mental integrity so that no one can change a person's identity or consciousness.

The third is the right to free will – so that our behavior is determined by one's own volition, not by external influences, to prevent situations like what we did to those mice.

The fourth is the right to equal access to neural augmentation.  Technology and AI will lead to human augmentation of our mental processes, our memory, our perception, our capabilities. And we think there should be fair and equal access to neural augmentation in the future.

And the fifth neuroright is protection from bias and discrimination – safeguarding against interference in mental activity, as neurotechnology could both read and alter brain data, and change the content of people's mental activity.

Jared: The Neurorights Foundation is focused on promoting innovation in neurotechnologies while managing the risks of misuse or abuse. We see enormous potential in neurotechnologies that could transform what it means to be human. At the same time, we want to ensure that proper guardrails are in place to protect people's fundamental human rights.

The post Who owns the rights to your brain? appeared first on Coda Story.
