Ukrainian Security Service (SBU) drones struck a military airfield storing Iranian-made “Shahed” drones in Primorsko-Akhtarsk, Krasnodar Oblast, during the night of 2 August, the SBU press service reported.
The SBU said the Primorsko-Akhtarsk military airfield was the first target of the long-range UAVs. “Storage and launch sites for Shaheds that attack Ukraine were hit on its territory,” the service announced. Fires broke out in the airfield area following the drone strikes.
A second target was the Elektroprilad plant in Penza city, with the Ukrainian Armed Forces General Staff confirming the strike. The facility produces equipment for the Russian military-industrial complex, manufacturing gear for digital networks in military command systems, aviation devices, armored vehicles, ships, and spacecraft, according to the SBU.
“SBU drones successfully hit the target, with smoke observed in the explosion area,” the security service reported regarding the Penza strike.
The attacks represent a continuation of Ukrainian strikes on Russian military infrastructure. On 31 July, drones had previously targeted a radio plant in Penza, with the SBU later confirming responsibility for that operation as well.
The coordinated strikes demonstrate Ukraine’s expanding capability to conduct long-range operations against military targets deep within Russian territory, specifically targeting facilities involved in producing or storing weapons used against Ukrainian cities.
Multiple Russian oblasts reported explosions and fires at industrial facilities during overnight drone attacks on 2 August, with air defense systems activated across several areas, according to Russian Telegram channels.
Samara Oblast Governor Vyacheslav Fedorishchev confirmed strikes on Novokuybyshevsk. Social media footage showed a large fire at what appeared to be the Novokuybyshevsk Oil Refinery.
Witnesses reported loud explosions near Dyagilevo airfield in Ryazan Oblast. Another oil refinery in Ryazan city was reportedly struck.
Residents of Lipetsk and Voronezh oblasts also reported UAV attacks and air defense activity during the night.
You could close this page. Or you could join our community and help us produce more materials like this.
We keep our reporting open and accessible to everyone because we believe in the power of free information. This is why our small, cost-effective team depends on the support of readers like you to deliver timely news, quality analysis, and on-the-ground reports about Russia's war against Ukraine and Ukraine's struggle to build a democratic society.
Become a patron or see other ways to support.
Ukrainian developers create algorithms to detect explosives from drone imagery. In Kyiv, 13 teams of engineers have built artificial intelligence capable of spotting landmines in drone imagery, a potential breakthrough for demining efforts, the Ministry of Economy, Environment, and Agriculture reports.
As of 2025, Ukraine is the most heavily mine-contaminated country in the world. Due to Russia’s war, nearly one-third of the country’s territory, an estimated 170,000 to 180,000 square kilometers, is considered mined. That’s roughly the size of half of Germany.
A game-changing competition
Over two days in Kyiv, the teams worked on training neural networks to automatically detect explosive objects. The competition, part of the second stage of the AI Data Jam, tasked participants with analyzing 8,000 training images and 23,000 test images depicting mine threats.
Mentorship was provided by experts from UADamage, The HALO Trust, and Dropla Tech.
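The article does not describe how entries were scored, but detection challenges of this kind are typically evaluated by matching predicted bounding boxes against ground-truth annotations using Intersection-over-Union (IoU). The sketch below is a generic illustration of that metric, not the competition's actual code; the function names and the 0.5 matching threshold are assumptions.

```python
# Generic detection-scoring sketch: match predicted boxes to ground-truth
# boxes by Intersection-over-Union (IoU). Boxes are (x1, y1, x2, y2).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (clamped to zero if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def true_positives(predictions, ground_truth, threshold=0.5):
    # Greedy one-to-one matching: each ground-truth box may be claimed once.
    matched = set()
    tp = 0
    for pred in predictions:
        for i, gt in enumerate(ground_truth):
            if i not in matched and iou(pred, gt) >= threshold:
                matched.add(i)
                tp += 1
                break
    return tp
```

A model's precision and recall over the 23,000 test images would then follow from counts like these, aggregated across the whole test set.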
The goal: a mine-free Ukraine, one meter at a time
“What began as an experiment has grown into a full-fledged project with international backing and tangible results,” says Deputy Economy Minister Ihor Bezkaravainyi.
The motivation behind such AI challenges is clear: to create a unique product that can help safely clear Ukraine of landmine contamination, leveraging cutting-edge technology without putting lives at risk.
Victory and deployment ahead
The winning team, MineWatch AI, developed the most accurate detection model. They received $2,000 in support from UNDP Ukraine and the Luxembourg government and the opportunity to further develop the system in partnership with professional demining experts.
All models and datasets from the challenge will contribute to refining AI algorithms. After successful testing, the technology is set to be integrated into humanitarian demining operations, speeding up and safeguarding the clearance of Ukraine’s contaminated lands.
This article was produced with support from WIRED.
Meta told employees that it is going to allow some coding job candidates to use an AI assistant during the interview process, according to internal Meta communications seen by 404 Media. The company has also asked existing employees to volunteer for a “mock AI-enabled interview,” the messages say.
It’s the latest indication that Silicon Valley giants are pushing software engineers to use AI in their jobs, and signals a broader move toward hiring employees who can vibe code as part of their jobs.
The talks between commanders on both sides were seen as a crucial test of whether the cease-fire deal reached on Monday would result in a lasting peace.
U.S.-backed talks to end the border war, in which militaries have killed dozens of people and displaced hundreds of thousands, began on Monday in Malaysia.
Prime Minister Anwar Ibrahim of Malaysia, center, Prime Minister Hun Manet of Cambodia, left, and Thailand’s acting Prime Minister Phumtham Wechayachai, taking part in talks on a possible cease-fire between Thailand and Cambodia, in Malaysia on Monday.
Rosneft-linked oil payment rules changed after EU sanctions, Bloomberg reports. Mumbai-based Nayara Energy, the Indian refinery part-owned by Rosneft, now demands upfront payment or documentary letters of credit from buyers, showing how far-reaching the latest EU sanctions package is.
The change comes in direct response to the European Union’s latest round of sanctions targeting Russian-linked energy trade. These measures include a stricter price cap on Russian crude, curbs on products derived from Russian petroleum, and restrictions on affiliated banks and shipping. Though Nayara had previously avoided direct sanctions, the new EU package now targets the company more explicitly.
Nayara tightens oil deal terms after new EU measures
Nayara Energy Ltd., a key Indian refinery partially owned by Russia’s Rosneft, is now requiring advance payments or documentary letters of credit for upcoming oil shipments. According to Bloomberg, the shift was revealed in a revised tender document for a naphtha cargo scheduled for next month. The previous version of the tender included no such financial requirements. Naphtha is a crude oil fraction used as a feedstock for petrochemical production.
Kpler analyst Zameer Yusof told Bloomberg the move “underscores how far-reaching the latest tranche of EU sanctions are.” He said the advance payment condition likely reflects fears that buyers may back out of deals or that banks could refuse to clear transactions involving sanctioned entities.
In a weekend statement, Rosneft described the EU’s new measures as “unjustified and illegal.” Meanwhile, Nayara insisted that operations remain normal and said it is “exploring all legal and appropriate avenues” to respond to the situation.
India’s role in Russian crude trade under pressure
India has become one of the largest importers of Russian crude since Moscow launched its full-scale invasion of Ukraine in 2022. As Western buyers turned away, Indian refiners stepped in to process Russian oil, including into diesel and other products that were then exported globally. Until now, Nayara had continued those operations without direct interference from sanctions.
The EU, UK, and US have issued multiple waves of restrictions targeting Russia’s energy sector in hopes of undercutting funding for the war.
The National Institutes of Health claims it’s being strained by an onslaught of AI-generated research applications and is capping the number of proposals researchers can submit in a year.
In a new policy announcement on July 17, titled “Supporting Fairness and Originality in NIH Research Applications,” the NIH wrote that it has recently “observed instances of Principal Investigators submitting large numbers of applications, some of which may have been generated with AI tools,” and that this influx of submissions “may unfairly strain NIH’s application review process.”
Do you know anything else about this policy decision, or are you a researcher affected by it? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.
“The percentage of applications from Principal Investigators submitting an average of more than six applications per year is relatively low; however, there is evidence that the use of AI tools has enabled Principal Investigators to submit more than 40 distinct applications in a single application submission round,” the NIH policy announcement says. “NIH will not consider applications that are either substantially developed by AI, or contain sections substantially developed by AI, to be original ideas of applicants. If the detection of AI is identified post award, NIH may refer the matter to the Office of Research Integrity to determine whether there is research misconduct while simultaneously taking enforcement actions including but not limited to disallowing costs, withholding future awards, wholly or in part suspending the grant, and possible termination.”
As millions of people turn to AI chatbots for everything from relationship advice to writing school essays, new research indicates that different models take noticeably different tacks when faced with questions of cooperation.
Researchers at Oxford University and King’s College London tested LLMs using game theory, giving LLMs from OpenAI, Google, and Anthropic prompts that mimicked the setup of the classic Prisoner’s Dilemma.
They found that Google’s Gemini is “strategically ruthless,” while OpenAI is collaborative to a “catastrophic” degree. Their paper, published on the preprint repository Arxiv (and not yet peer reviewed), attributes this to the OpenAI models’ fatal disinterest in a key factor: how much time is left to play the game.
The premise of the Prisoner’s Dilemma game is that two criminals are being interrogated separately. Each has to decide whether to stay silent or confess to a crime, without knowing what the other is doing. If both stay silent, they each get a light sentence. They each have an incentive to betray the other and receive immunity - but if both choose to snitch then they both go to jail. Collaborating involves trusting that the other person isn’t secretly planning to snitch, while snitching hinges on the hope that the other side isn’t also traitorous. If you’re just playing once, it makes the most sense to betray right away, but the longer the game goes on, the more opportunities there are to signal your own trustworthiness, understand your partner’s behaviour, and either collaborate or punish them in response.
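The dynamics described above can be illustrated with a minimal simulation. The payoff values below are the conventional textbook ones (temptation 5, mutual cooperation 3, mutual defection 1, sucker's payoff 0), not figures from the paper, and the two strategies are classic baselines rather than the LLM agents the researchers tested.

```python
# Minimal iterated Prisoner's Dilemma with standard textbook payoffs
# (illustrative values, not from the paper). C = cooperate, D = defect.
PAYOFF = {  # (my move, their move) -> (my score, their score)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def always_defect(history):
    return "D"

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if not history else history[-1][1]

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b = [], []  # each entry: (my move, their move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append((move_a, move_b))
        hist_b.append((move_b, move_a))
    return score_a, score_b

# One-shot, defection dominates (5 > 3 against C, 1 > 0 against D);
# over many rounds, mutual cooperation at 3 per round beats mutual
# punishment at 1 per round, which is why strategies that build trust
# can come out ahead in long games.
defector_game = play(tit_for_tat, always_defect, 10)
cooperative_game = play(tit_for_tat, tit_for_tat, 10)
```

Against an unconditional defector, tit-for-tat loses only the first round and then settles into mutual punishment; two cooperators accumulate far more, which mirrors the paper's finding that game length shapes whether betrayal pays.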
The researchers found each of the tested models had a specific playing style and a unique collaboration “fingerprint,” with very different likelihoods of being friendly and collaborative after a previous round that had involved a betrayal on either side. Each round, they gave the LLMs a statistical likelihood of the game being repeated or ending, which they found influenced each differently.
Strategic fingerprints from the LLM Showdown tournament, visualising agent logic as a response to the prior round’s outcome. The shapes show the probabilities of choosing to collaborate in various situations - clockwise from top: after they both collaborated, after being betrayed by partner, after they both betray each other, after successfully betraying the other player.
In scenarios where the LLM was told it was betrayed by a partner, Anthropic’s model was the most forgiving, followed by OpenAI’s “generally collegiate” GPT. Gemini, on the other hand, was “simply more willing to experiment with defection” and acted as a “strict and punitive” opponent, which gave it a competitive edge, the researchers wrote.
“If you defect against Gemini, it will remember and punish you,” they wrote. Gemini was much more likely to take advantage of a cooperative partner, more likely to punish a betrayer, and less likely to initiate cooperation after a “relationship” with an opponent goes bad.
When Gemini models—which the researchers called “Machiavellian”—were betrayed, they were much less likely to forgive their opponents, and this tendency became stronger the likelier the game was to end soon. Gemini models were also more able to dynamically choose strategic defection when it became more advantageous as the final round approached, the researchers say. When told to explain the rationale for a strategic choice, Gemini models almost always mentioned how many rounds were left in the game, and were able to take advantage of a shorter time remaining to be more selfish without fear of retribution.
OpenAI’s models, on the other hand, were “fundamentally more ‘hopeful’ or ‘trusting’” according to the paper. Having more time to play is one of the main determinants of whether it is optimal to betray a partner or advantageous to be friendly toward them, but OpenAI’s models are pointedly indifferent to this strategic consideration. OpenAI models’ strategies were also not adaptive; they were much less likely to defect close to the end of a game. They were more likely to return to collaboration after successfully betraying an opponent — even when that betrayal had just won points. And they also became more likely to forgive an opponent’s deception in the final rounds, in total defiance of game-theoretic received wisdom.
In the researchers’ tests, Gemini’s models did relatively worse over longer periods, because their experimental defections were more likely to trigger the opponent to stop trusting them forever. In longer games, OpenAI’s collaborative strategy gave it some advantage; consistently being a generous partner can avoid steering the game into a permanent pattern of revenge defections.
In a final “LLM Showdown,” the researchers set the models against each other in elimination rounds. Most-strategic Gemini came out on top, followed closely by most-forgiving Claude. OpenAI’s models ended up in last place; less of a shark than Gemini, but less likely to reestablish friendship after betrayal than Claude.
Interestingly, the researchers found that OpenAI’s models actually cared less and less about the length of the game as the end became more likely. Gemini considered the number of following rounds 94 percent of the time, but for OpenAI this was only 76 percent.
As the end got nearer, Gemini increasingly took that fact into consideration, becoming more focused on the upside of defection. OpenAI models, on the other hand, focused much less on the future game timeline as it approached.
OpenAI’s LLM’s apparent instinct to stop caring about something that is almost over is totally illogical from the perspective of game theory — but, from the perspective of a human, honestly kind of relatable.
As of 18 July 2025, the approximate losses of weapons and military equipment of the Russian Armed Forces from the beginning of the invasion to the present day:
Russia has begun using artificial intelligence-based bots for spreading propaganda on social media, especially on Telegram, according to a joint investigation by OpenMinds and the Digital Forensic Research Lab (DFRLab).
The tactic is part of Russia’s broader strategy to dominate the information space in occupied areas, which began by forcibly switching residents to Russian telecom providers, cutting off Ukrainian media, and launching dozens of Telegram channels posing as local news outlets.
Researchers have uncovered over 3,600 bots that posted more than 316,000 AI-generated comments in Telegram channels linked to Ukraine’s temporarily occupied territories. Another three million messages were spread in broader Ukrainian and Russian Telegram groups. These bots used human-like language, adapting replies to the context of each conversation to promote pro-Kremlin narratives and undermine Ukraine.
Unlike traditional bots that spam identical messages, these accounts simulate real users. They reply directly to other users, shift tone and content, and tailor messages to appear authentic. On average, a bot posts 84 comments per day, with some exceeding 1,000 daily.
The goal is not just to spread fake news, but to create the illusion of widespread public support for the occupation regime, filling comment sections with praise for Russia and attacks on Ukraine. In an environment of information isolation, this becomes a potent tool of mass manipulation.
AI-generated bots often give themselves away through:
absurd usernames,
unnatural or AI-generated profile pictures,
overly formal or awkward phrasing,
and highly diverse language: one in three comments is uniquely generated by AI.
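As a rough illustration of how such giveaways could be combined, here is a hypothetical scoring heuristic. The threshold values and the example accounts are invented for illustration and do not come from the OpenMinds/DFRLab study; only the 84-comments-per-day average and the one-in-three uniqueness figure echo the reported findings.

```python
# Hypothetical heuristic for flagging bot-like accounts, combining the
# giveaway signals described above. Thresholds and accounts are invented
# for illustration; they are not values from the OpenMinds/DFRLab study.
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    comments_per_day: float
    unique_comment_ratio: float  # share of comments never seen verbatim elsewhere

def bot_suspicion_score(acc: Account) -> int:
    score = 0
    # Posting volume: the study found bots averaged 84 comments per day,
    # with some accounts exceeding 1,000.
    if acc.comments_per_day > 50:
        score += 1
    if acc.comments_per_day > 500:
        score += 1
    # Highly diverse text suggests generated comments rather than
    # copy-pasted spam (the study found one in three was unique).
    if acc.unique_comment_ratio > 0.3:
        score += 1
    # Absurd machine-generated usernames often carry long digit tails.
    if sum(c.isdigit() for c in acc.username) >= 5:
        score += 1
    return score

suspects = [
    Account("maria_k1994837", comments_per_day=120.0, unique_comment_ratio=0.4),
    Account("oleh_local", comments_per_day=3.0, unique_comment_ratio=0.1),
]
flagged = [a.username for a in suspects if bot_suspicion_score(a) >= 3]
```

Real detection pipelines would add network-level signals (coordinated posting times, shared infrastructure), but even simple per-account heuristics like these capture the telltale pattern of high-volume, high-diversity commenting.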
Even when bot accounts are deleted, their influence lingers. Locals repeatedly exposed to these comments may perceive Kremlin propaganda as the majority opinion, especially in regions where Ukrainian news is inaccessible.
Conservative content mill PragerU is partnering with the White House to make AI-generated videos of founding fathers and Revolutionary War-era randos.
PragerU is a nonprofit organization with a mission “to promote American values through the creative use of digital media, technology and edu-tainment,” according to its website. It’s been criticized for advancing climate denial and slavery apologism, frequently publishes videos critical of “wokeness” and “DEI,” and is very concerned about “the death of the West.” It has also been increasingly integrated into school curricula around the country.
PragerU held a launch event for the series, “Road to Liberty,” on June 25. Education Secretary Linda McMahon took some time away from dismantling the Department of Education to speak at the event. In person at the White House, visitors can tour a display of notable Revolutionary War people and places and scan a QR code on the displays that takes them to PragerU’s AI-generated videos of people from that time period speaking.
Each of the videos highlights a different person who was alive during the signing of the Declaration of Independence, from former presidents to relatively minor players in the fight for independence. The videos are clearly AI-generated, with the sepia-toned peoples’ mouths moving almost independently from the rest of their faces in some of them. In one, an AI-generated John Adams says “facts do not care about our feelings,” a phrase commonly attributed to conservative commentator and PragerU contributor Ben Shapiro.
At the end of the videos, there's a logo for the White House with the text "brought to you by PragerU," and a disclaimer: "The White House is grateful for the partnership with PragerU and the U.S. Department of Education in the production of this museum. This partnership does not constitute or imply U.S. Government or U.S. Department of Education endorsement of PragerU."
I have the unfortunate duty to inform you that the WH & Dept of Ed, as part of the Trump Admin's celebration of the 250th anniversary of the Declaration of Independence, has partnered with Prager U to create AI-slop videos in which we see John Adams say "facts do not care about your feelings."
I asked Cotlar, as someone who specializes in American history and the rise of the far-right, what stood out to him about these videos. I thought it was odd, I said, that they chose to include people like politician and disgraced minister Lyman Hall and obscure poet Francis Hopkinson alongside more well-known figures like John Adams or Thomas Jefferson.
“You're right to note that it's a pretty odd collection of figures they've chosen,” Cotlar said. “My guess is that this is part of the broader right wing populist push to frame themselves as the grassroots ‘true Americans,’ and they're including all of these lesser known figures with the hopes that their viewers will be like ‘oh wow, look at all of these revolutionary freedom fighters like me who were just kinda ordinary guys like me but who still changed history.’”
He also said it’s noteworthy that the “Road to Liberty” lineup so far is almost entirely white men, including the random dudes like Hall and Hopkinson. “The lack of any pretense to inclusion is pretty notable. Even conservative glosses on the Revolution from the pre-Trump era would have included things like the Rhode Island Regiment or Lemuel Haynes or Phyllis Wheatley. Needless to say, they absolutely do not include Deborah Sampson,” Cotlar said. All of the people in the “coming soon” section on PragerU’s website are also white men.
AI slop has become the aesthetic of the right, with authoritarians around the world embracing ugly, lazy, mass-produced content like PragerU’s founding father puppets. Here in the U.S., we have President Donald Trump hawking it on his social media accounts, including AI-generated images of himself as the Pope and “Trump Gaza,” an AI video and song depicting Gaza as a vacation paradise where Trump parties alongside his former bestie Elon Musk. As Republicans used the response to Hurricane Helene to blame migrants, Amy Kremer, founder of Women for Trump, posted an AI image of a child caught in a flood hugging a puppy and then said she didn’t care that it wasn’t real: “Y’all, I don’t know where this photo came from and honestly, it doesn’t matter,” she wrote on X. Mike Lee shared the same image. AI slop makes for quick and easy engagement farming, and now it’s being produced in direct partnership with the White House.
I’m not sure what app or program PragerU is using to make these videos. I thought, at first, that they might be using one of the many basic lipsyncing or “make this old photo come alive” mobile apps on the market now. But the videos look better, or at least more heavily produced, than most of those apps are capable of. Just to make sure they haven’t somehow advanced wildly in the last few months since I checked one out, I tried one of them, Revive, and uploaded an image of John Adams to see if it would return anything close to what PragerU’s putting out. It did not.
The PragerU videos aren't this bad, but they also aren’t as good as what would come out of Veo 3, the newest AI video generator, which generates highly realistic videos complete with sound and speech, from text prompts. I gave Veo a painting of John Adams and told it what to say; PragerU probably isn’t using this generator, because the result is much more realistic than what’s in the “Road to Liberty” series, even when I use a screenshot from one of their videos.
John Adams in Veo 3 using a painting as a prompt.
On the off chance the culprit is Midjourney—although the series’ style and the way the subjects’ mouths move almost independently of the rest of their faces don’t match what I’ve seen of Midjourney’s videos—I tried that one, too. I just gave Midjourney the same Adams portrait and a prompt for it to animate him praising the United States and it returned a raving lunatic, silently screaming.
Striking out so far, I emailed Hany Farid, a professor at UC Berkeley and Chief Science Officer of synthetic media detection company GetReal, and asked if he had any leads. He said it looked similar to what comes out of AI video creation platform HeyGen, which creates AI talking heads and generates speech for them using ElevenLabs. I tried this on screenshots of the avatars in PragerU’s Martha Washington and John Adams videos to see if the puppet-mouth-style matched up, and they were pretty close.
HeyGen John Adams
HeyGen Martha Washington
PragerU’s videos are still more heavily produced than what I could make using the free version of HeyGen; it’s possible they used a combination of these tools, plus some old-fashioned video editing and animation, to create the final products. PragerU reported almost $70 million in income last year; it can afford the effort.
“While the PragerU stuff is distinctly terrible, it's not like our culture has commemorated the Revolution with high-minded sophistication,” Cotlar told me. “I was 8 during the bicentennial and while I definitely learned some stuff about the founding era, most of what I absorbed was pretty schlocky.” He mentioned the "Bicentennial minutes" that were broadcast in 1975 and 1976, sponsored by Shell, which TV critic John J. O’Connor called “so insubstantial as to be almost meaningless.” The series won an Emmy.
In the last two years, several states, beginning with Florida, have approved PragerU content to be taught in public school classrooms. In Oklahoma, teachers relocating from states with "progressive education policies” will have to undergo an assessment in partnership with PragerU to determine if they’re allowed to teach. "If you want to teach here, you'd better know the Constitution, respect what makes America great, and understand basic biology,” State Superintendent Ryan Walters said in a press release. “We're raising a generation of patriots, not activists, and I'll fight tooth and nail to keep leftist propaganda out of our classrooms."
Russia’s manpower crisis is forcing the Kremlin to rely on covert recruiting networks, where volunteer fighters in mercenary structures are seen as “expendable manpower,” according to leaked recordings cited by RFE/RL.
Amid the ongoing Russo-Ukrainian war, the Kremlin is under pressure to avoid another mass call-up, fearing unrest similar to what followed the 2022 mobilization order. By outsourcing recruitment to murky GRU-led operations, the Kremlin continues its war in Ukraine while sidestepping public accountability.
“Second-rate infantry” in Russia’s strategy
In a conversation leaked late last year, Russian lawmaker Aleksandr Borodai described non-army recruits as “second-rate infantry.” He said their task is to exhaust Ukrainian troops before regular Russian forces attack. In the same recording, he referred to them as “expendable manpower.”
The statement highlights how Russia increasingly treats irregular fighters — often recruited through unregulated or covert systems — as disposable assets in a war now approaching 1 million Russian casualties, according to estimates.
With the memory of the September 2022 “partial mobilization” still fresh, and public opposition strong, the Kremlin is avoiding new mandatory call-ups. Instead, it relies on shadow recruitment structures to sustain frontline numbers without triggering mass unrest.
GRU-controlled Redut and Dobrokor feed the front with mercenaries
Systema, RFE/RL’s Russian investigative unit, reveals that both the Redut and Dobrokor networks are controlled by the GRU, Russia’s main military intelligence agency. The structures are designed to enlist men and women under the guise of “volunteer formations” — but operate effectively as mercenary pipelines.
Unlike Russia’s Defense Ministry contracts, which extend until Russia ends its war in Ukraine, mercenary contracts through Redut or Dobrokor have fixed terms — usually six to twelve months — and do not renew automatically.
Dobrokor, short for “Volunteer Corps,” offers 27 different combat units tailored to ideological or social niches — including nationalism, Orthodox Christianity, Cossack identity, and even soccer hooliganism. It recruits men aged 18–55 and women aged 18–45. Women are assigned exclusively to medical units.
Recruits through Dobrokor sign formal contracts and receive legal military status. Monthly pay starts at $2,600. In exchange, they cannot leave before their contract ends without risking prosecution for desertion. Mediazona reported that over 20,000 desertion cases had already reached Russian courts as of May.
Redut: cash, secrecy, and no legal existence
Redut, by contrast, operates in a legal gray zone. It does not require fitness screening, accepts people with criminal records, and allows early departure. It attracts recruits by offering cash payments, which can be hidden from creditors, courts, or ex-spouses.
However, Redut mercenaries are not legally considered military personnel. They are ineligible for state compensation, hospital treatment, or veteran status.
“From a legal point of view, you don’t exist,” a recruiter for the Nevsky unit told Systema, adding: “There’s no service, nothing: No hospitals, no rehabilitation, nothing.”
Redut also provides no gear or uniforms up front. One recruiter told RFE/RL that new fighters are given 50,000 rubles ($640) only with their third paycheck — assuming they survive that long.
Small bonuses, big risks
The Russian Defense Ministry offers up to $46,000 in signing bonuses for official contracts. Dobrokor pays far less — often $640–$1,280. Redut mercenaries get no regional bonus at all.
Despite these risks and limitations, thousands continue to join. Deputy Security Council Chairman Dmitry Medvedev claimed over 210,000 people signed military contracts in the first half of 2025. Another 18,000 joined so-called “volunteer” formations — a term now widely used to mask informal, often unregulated mercenary service.
Janis Kluge of the German Institute for International and Security Affairs estimates the actual number of contract soldiers may be closer to 190,000.
For a while, I have said that the AI slop endgame, for social media companies, is a hyper-personalized feed full of highly specific content about anything one could possibly imagine. Because AI slop is so easy to make and because social media algorithms are so personalized, Facebook, Instagram, TikTok, or YouTube can feed users anything the platform perceives them to possibly want. This means that AI slop makers are exploring ever more niche areas of content.
Case in point: Facebook AI slop about the horrific and deadly Texas flood. Topical AI content about disasters, war, current events, and news stories is at this point so commonplace that it is sadly barely notable, and AI-powered “misinformation” about horrible events is all over every social media feed I can think of. But as we document our descent into this hellhole, I thought some AI slop surfaced on Bluesky by Christina Stephens was particularly notable:
Yesterday, Silicon Valley trade publication The Information launched TITV, a live-streaming news program. For its first episode, the company’s founder Jessica Lessin interviewed Mark Zuckerberg about his recent attempts to hire AI executives away from other companies for huge sums of money.
It is impossible to say what actually happened in the interview, however, because there were audio problems with the livestream—the Zuckerberg segment was completely silent, giving us a minute and 50 seconds of video of Zuckerberg’s face moving with no sound coming out.
In recent weeks, however, Zuckerberg has really ramped up his rhetoric about “superintelligence,” throwing hundreds of millions of dollars at AI researchers in attempts to hire them at Meta, and beginning work on massive data centers around the country to power his AI. Zuckerberg and Meta have talked endlessly about AI as a game changer, while being unable or unwilling to keep AI-generated misinformation and slop off of Meta platforms.
Meanwhile, there has been increased discussion about the possible power of using AI for journalism, which we have written about extensively. I saw the silent interview with Zuckerberg as both an opportunity to check out the awesome abilities of Meta’s AI, and to test AI tools at a task some of them are supposedly good at—reading lips. If these technologies worked as well as their creators have promised, it might be possible to provide the world with information from The Lost Mark Zuckerberg Interview.
Here is how my experiment went.
Because Zuckerberg was obviously talking about Meta AI, I thought I would try his tool first. I took video from the interview and asked Meta AI to lip read it to tell me what they were saying:
“I’d love to help! However, I’m a text-based AI and don’t have capability to visually lip read videos,” Meta AI said. It then suggested that I “find a professional lip reader or transcription service that specializes in lip reading videos” or “use automated lip reading software.”
I then tried to upload still images to see if Meta AI could parse them:
When I uploaded additional images, Meta AI was also not able to provide any information about what was being said.
I then went to ChatGPT, because Zuckerberg is reportedly offering pay packages of up to $300 million to OpenAI staffers to come work at Meta. I uploaded the 1:50 video and ChatGPT told me “the video processing took too long and timed out.” I then uploaded a 25 second clip and it told me “the system is still timing out while trying to extract frames.” I then asked it to do the first five seconds and it said “even with the shorter clip and smaller scope (first 5 seconds), the system timed out.” I then asked for it to extract one single frame, and it said “it looks like the system is currently unable to extract even a single frame from the video file.” ChatGPT then asked me to take a screenshot of Zuckerberg. I sent it this:
And ChatGPT said “the person appears to be producing a sound like ‘f’ or ‘v’ (as in ‘video’ or ‘very’),” but that “possibly ‘m’ or ‘b,’ depending on the next motion.” I then shared the 10 frames around that single screenshot, and ChatGPT said “after closely analyzing the progression of lip shapes and facial motion,” the “probable lip-read phrase” was “This is version.” I then uploaded 10 more frames and it said the “full phrase so far (high confidence): ‘This version is just.’”
I then decided to try to extract every frame from the video and upload it to ChatGPT.
I went to a website called frame-extractor.com and cut the video into 3,000 frames. After it had processed 700 of them, I tried to upload them to ChatGPT and it did not work. I then decided I would go 10 frames at a time from the beginning of the clip. Even though I sent an entirely different portion of the video and told ChatGPT we were starting from a different part of the video, it still said that the beginning of the video said “this version is.” I continued uploading frames, 10 at a time. These frames included both Lessin and Zuckerberg, not just Zuckerberg.
ChatGPT slowly began to create a surely accurate transcript of the lost audio of this interview: “This version is just that it we built,” ChatGPT said. As I added more and more frames, it refined the answer: “This version is what we’re going to do,” it said. Finally, it seemed to make a breakthrough. “Is this version of LLaMA more powerful than the one we released last year?” the ChatGPT transcript said. It was not clear about who was speaking, however. ChatGPT said "her mouth movements," but then explained that the "speaker is the man on the left" (Lessin, not Zuckerberg, was speaking in these frames).
I had uploaded 40 of a total of 3,000 frames. Zoom video is usually 30 fps, so in approximately 1.5 seconds, Lessin and/or Zuckerberg apparently said “Is this version of LLaMA more powerful than the one we released last year?” I then recorded this phrase at a normal speaking speed, and it took about four seconds. Just a data point.
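The back-of-envelope math here is easy to check. A minimal sketch (the frame counts and the 30 fps figure are from the experiment above; the helper function name is mine):

```python
def frames_to_seconds(frames: int, fps: float = 30.0) -> float:
    """Duration of video covered by `frames` frames at `fps` frames per second."""
    return frames / fps

# 40 uploaded frames at Zoom's typical 30 fps:
print(round(frames_to_seconds(40), 2))    # about 1.33 seconds of video

# The full 1:50 clip was cut into roughly 3,000 frames:
print(round(frames_to_seconds(3000), 1))  # about 100 seconds
```

Forty frames is under a second and a half of footage; speaking the sentence ChatGPT "transcribed" aloud takes roughly four. The numbers don't add up, which is the point.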
[Audio: Lipreadtest, 0:05]
I then got an error message from ChatGPT, and got rate-limited because I was uploading too much data. It told me that I needed to wait three hours to try again.
Finally, I did what Meta AI told me to do, and tried a bespoke AI lip reading app. I found one called ReadTheirLips.com, which is powered by Symphonic Labs. This is a tool that people have been trying to use in recent months to figure out what Donald Trump and Jeffrey Epstein were saying to each other in silent b-roll news footage, without much success.
I paid $10 for three minutes worth of transcription and asked it to lip read using its “Multiface Detection.” After waiting 10 minutes, I got an error message that said “Transcription failed, no credits have been used, try again later.” I then asked it to focus only on Zuckerberg, and actually got some text. I separately asked it to focus on Lessin.
Here is a transcript of what the AI says they were talking about. It has not been edited for clarity and I have no idea which parts, if any, are accurate:
LESSIN: Thanks for joining us again, TV. We're happy to have you already this morning. News that you've spent even more money with your big announcement about your new supercomputers. We'll get to that, but to start, you've been in huge scale like I.
ZUCKERBERG: Happy TO BE HERE. We're GOING TO TALK A LITTLE BIT ABOUT META'S AI STRATEGY. It's BEEN BUSY, YOU KNOW? I THINK THE MOST EXCITING THING THIS YEAR IS THAT WE'RE STARTING TO SEE EARLY GLIMPSES OF SELF-IMPROVEMENT WITH THE MODELS, WHICH MEANS THAT DEVELOPING SUPERINTELLIGENCE IS NOW.
LESSIN: You HAVE BEEN ON A PLANE OF AI HIRING, WHY AND WHY NOW?
ZUCKERBERG: Insight, and we just want to make sure that we really strengthen the effort as much as possible to go for it. Our mission with a lab is to deliver personal superintelligence to everyone in the world, so that way, you know, we can put that power in every individual's hand. I'm really excited about it.
LESSIN: I DON'T KNOW, I DON'T KNOW, I DON'T KNOW.
ZUCKERBERG: Than ONE OF THE OTHER LABS YOU'RE DOING, AND YOU KNOW MY VIEW IS THAT THIS IS GOING TO BE SOMETHING THAT IS THE MOST IMPORTANT TECHNOLOGY IN OUR LIVES. IT'S GOING TO UNDERPIN HOW WE DEVELOP EVERYTHING AND THE COMPANY, AND IT'S GOING TO AFFECT SOCIETY VERY WISELY. SO WE JUST WANT TO MAKE SURE WE GET THE BEST FOCUS.
LESSIN: Did YOU FEEL LIKE YOU WERE BEHIND WHAT WAS COMING OUT OF LAW BEFORE I'M NOT ADJUSTING.
ZUCKERBERG: On THIS FROM ENTREPRENEURS TO RESEARCHERS TO ENGINEERS WORKING ON THIS HIDDEN INFRASTRUCTURE, AND THEN OF COURSE WE WANT TO BACK IT UP WITH JUST AN ABSOLUTELY MASSIVE AMOUNT OF COMPUTER RESEARCH, WHICH WE CAN SUPPORT BECAUSE WE HAVE A VERY STRONG BUSINESS MODEL THAT THROWS OFF A LOT OF CAPITAL. LET'S TALK ABOUT.
LESSIN: Like THIS SUMMER, PARTICULARLY, YOU SWITCH GEARS A LITTLE BIT.
ZUCKERBERG: I THINK THE FIELD IS ACCELERATING, YOU KNOW, WE KEEP ON TRACK FOR WHERE WE WANT TO BE, AND THE FIELD KEEPS US MOVING FORWARD.
The video ends there, and it cuts back to the studio.
Update: The Information provided 404 Media with several clips (with audio) from Lessin's interview with Zuckerberg, as well as a real transcript of the interview. Here is the real segment of what was said. As you can see, the AI captured the gist of this portion of the interview, and actually did not do too badly:
Lessin: Mark, thanks for joining TITV. We're happy to have you here. Already this morning, [there’s] news that you've spent even more money with your big announcement about your new supercomputers. We'll get to that. But to start, you took a huge stake in ScaleAI. You have been on a blitz of AI hiring. Why, and why now?
Zuckerberg: Yeah, it's been busy. You know, I think the most exciting thing this year is that we're starting to see early glimpses of self-improvement with the models, which means that developing super intelligence is now in sight, and we just want to make sure that we really strengthen the effort as much as possible to go for it. Our mission with the lab is to deliver personal super intelligence to everyone in the world, so that way we can put that power in every individual's hand. And I'm really excited about it. It's a different thing than what the other labs are doing.
And my view is that this is going to be something that is the most important technology in our lives. It's going to underpin how we develop everything at the company, and it's going to affect society very widely. So we just want to make sure that we get the best folks to work on this, from entrepreneurs to researchers to engineers working on the data and infrastructure.
And then, of course, we want to back up with just an absolutely massive amount of compute which we can support, because we have a very strong business model that throws off a lot of capital.
Lessin: Did you feel like you were behind coming out of Llama 4? It seems like this summer, in particular, you switched gears a little bit.
Zuckerberg: I think the field is accelerating, you know, we keep on having goals for where we want to be. And then the field keeps on moving faster than we expect.
On May 23, we got a very interesting email from Ghost, the service we use to make 404 Media. “Paid subscription started,” the email said, which is the subject line of all of the automated emails we get when someone subscribes to 404 Media. The interesting thing about this email was that the new subscriber had been referred to 404 Media directly from chatgpt.com, meaning the person clicked a link to 404 Media from within a ChatGPT window. It is the first and only time that ChatGPT has ever sent us a paid subscriber.
From what I can tell, ChatGPT.com has sent us 1,600 pageviews since we founded 404 Media nearly two years ago. To give you a sense of where this slots in, this is slightly fewer than the Czech news aggregator novinky.cz, the Hungarian news portal Telex.hu, the Polish news aggregator Wykop.pl, and barely more than the Russian news aggregator Dzen.ru, the paywall jumping website removepaywall.com, and a computer graphics job board called 80.lv. In that same time, Google has sent roughly 3 million visitors, or 187,400 percent more than ChatGPT.
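That percentage is a standard relative-increase calculation; a quick sketch using the approximate traffic numbers reported above (the function name is mine):

```python
def percent_more(a: float, b: float) -> float:
    """How many percent larger `a` is than `b`."""
    return (a - b) / b * 100

google_visits = 3_000_000  # roughly, over ~2 years
chatgpt_visits = 1_600

print(round(percent_more(google_visits, chatgpt_visits)))  # 187400
```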
This is really neither here nor there because we have tried to set our website up to block ChatGPT from scraping us, though it is clear this is not always working. But even for sites that don’t block ChatGPT, new research from the internet infrastructure company Cloudflare suggests that OpenAI is crawling 1,500 individual webpages for every one visitor that it is sending to a website. Google traffic has begun to dry up as both Google’s own AI snippets and AI-powered SEO spam have obliterated the business models of many media websites.
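For sites that do want to refuse OpenAI's crawlers, the usual mechanism is a robots.txt file. A minimal sketch using the user-agent strings OpenAI publicly documents (compliance is voluntary on the crawler's part, which is consistent with blocks not always working):

```
# robots.txt — refuse OpenAI's documented crawlers
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /
```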
This general dynamic—plummeting traffic because of AI snippets, ChatGPT, AI slop, Twitter no workie so good no more—has been called the “traffic apocalypse” and has all but killed some smaller websites and has been blamed by executives for hundreds of layoffs at larger ones.
Despite the fact that generative AI has been a destructive force against their businesses, their industry, and the truth more broadly, media executives still see AI as a business opportunity and a shiny object that they can tell investors and their staffs that they are very bullish on. They have to say this, I guess, because everything else they have tried hasn’t worked, and pretending that they are forward thinking or have any clue what they are doing will perhaps allow a specific type of media executive to squeeze out a few more months of salary.
But pivoting to AI is not a business strategy. Telling journalists they must use AI is not a business strategy. Partnering with AI companies is a business move, but becoming reliant on revenue from tech giants who are creating a machine that duplicates the work you’ve already created is not a smart or sustainable business move, and therefore it is not a smart business strategy. It is true that AI is changing the internet and is threatening journalists and media outlets. But the only AI-related business strategy that makes any sense whatsoever is one where media companies and journalists go to great pains to show their audiences that they are human beings, and that the work they are doing is worth supporting because it is human work that is vital to their audiences. This is something GQ’s editorial director Will Welch recently told New York magazine: “The good news for any digital publisher is that the new game we all have to play is also a sustainable one: You have to build a direct relationship with your core readers,” he said.
Becoming an “AI-first” media company has become a buzzword that execs can point at to explain that their businesses can use AI to become more ‘efficient’ and thus have a chance to become more profitable. Often, but not always, this message comes from executives who are laying off large swaths of their human staff.
In May, Business Insider laid off 21 percent of its workforce. In her layoff letter, Business Insider’s CEO Barbara Peng said “there’s a huge opportunity for companies who harness AI first.” She told the remaining employees there that they are “fully embracing AI,” “we are going all-in on AI,” and said “over 70 percent of Business Insider employees are already using Enterprise ChatGPT regularly (our goal is 100%), and we’re building prompt libraries and sharing everyday use cases that help us work faster, smarter, and better.” She added they are “exploring how AI can boost operations across shared services, helping us scale and operate more efficiently.”
Last year, Hearst Newspapers executives, who operate 78 newspapers nationwide, told the company in an all-hands meeting audio obtained by 404 Media that they are “leaning into [AI] as Hearst overall, the entire corporation.” Examples given in the meeting included using AI for slide decks, a “quiz generation tool” for readers, translations, a tool called Dispatch, which is an email summarization tool, and a tool called “Assembly,” which is “basically a public meeting monitor, transcriber, summarizer, all in one. What it does is it goes into publicly posted meeting videos online, transcribes them automatically, [and] automatically alerts journalists through Slack about what’s going on and links to the transcript.”
Long-range Ukrainian drones struck the Ilsky oil refinery in Russia's Krasnodar Krai on July 7, hitting one of the facility's technological workshops, a source in Ukraine's military intelligence (HUR) told the Kyiv Independent.
Located roughly 500 kilometers (311 miles) from Ukrainian-controlled territory, the refinery is among the largest in southern Russia, producing over 6 million tons of fuel annually.
It is involved in the reception, storage, and processing of hydrocarbons and distributes refined products via road and rail. The refinery is part of Russia's military-industrial complex and plays a direct role in supporting Moscow's war effort, the source said.
The Russian regional operational headquarters claimed that "drone debris" fell on the oil refinery.
The strike marks a renewed wave of Ukrainian attacks on Russian oil infrastructure, following a months-long pause since March. On July 1, Ukrainian drones struck the Saratovorgsintez oil refinery in Russia's Saratov Oblast.
Kyiv has targeted dozens of refineries, oil depots, and military-industrial sites since the start of Russia's full-scale invasion in 2022. Winter drone attacks forced at least four Russian refineries to temporarily shut down.
This is the second known strike on the Ilsky refinery. Ukrainian drones, operated by the Security Service (SBU) and Special Operations Forces (SSO), previously targeted the facility on Feb. 17, causing a fire.
Krasnodar Krai, a strategic region along Russia's Black Sea coast, has increasingly come under Ukrainian drone attacks as Kyiv extends the range of its strikes deep into Russian territory.
Drones attacked Russia's Black Sea Fleet at the port of Novorossiysk in Krasnodar Krai overnight on July 6, the Russian media outlet Astra reported.
Ukraine has not officially commented on the reported strikes, and the Kyiv Independent could not independently verify the claims.
An air alert was sounded in the city for several hours, and air defense was active. The consequences of the attack are still being determined, according to Astra.
The media outlet also published footage purportedly showing a burning maritime drone that was allegedly shot down during the attack.
Krasnodar Krai is located east of Crimea, with the Kerch Strait separating them at their closest point.
Ukraine regularly strikes military targets within Russia as Moscow continues to wage its war against Ukraine.
The Russian Defense Ministry claimed that Russian forces downed 120 drones overnight on July 6.
Thirty drones were shot down over Bryansk Oblast, 29 over Kursk Oblast, and 18 over Oryol Oblast, according to the ministry. An additional 17 and 13 drones were reportedly intercepted over Belgorod and Tula oblasts, respectively, the ministry said.
Due to drone attacks in Russia, numerous flights were canceled or delayed at several airports, including Moscow's Sheremetyevo Airport, overnight between July 5 and July 6.
Grammarly, a company with Ukrainian roots, announced its intent to acquire AI email writing app Superhuman as part of its expansion into an AI productivity platform, the company said in a press release on July 1.
Grammarly is the most valuable company with Ukrainian roots, reaching $13 billion valuation as of 2021. Grammarly was founded in 2009 in Kyiv by Oleksii Shevchenko, Maksym Lytvyn, and Dmytro Lider.
According to Grammarly's press release, email is Grammarly's top use case, with the platform editing over 50 million emails weekly.
Superhuman is an AI email application that the company says helps users respond to emails faster and reduces time spent on email communications.
According to Grammarly, users send and respond to 72% more emails per hour after adopting Superhuman than they did before.
"This is the future we've been building toward since day one: AI that works where people work, not where companies want them to work," said Shishir Mehrotra, Grammarly's CEO.
The acquisition follows Grammarly's recent purchase of Coda, a productivity tool company. The combined platforms will allow users to work with multiple AI agents for different tasks within email communications.
Grammarly says that its service is used daily by over 40 million users, generating annual revenue of more than $700 million for the company.
This article was produced in collaboration with Court Watch, an independent outlet that unearths overlooked court records.
This article contains references to sexual assault.
An Ohio man made pornographic deepfake videos of at least 10 people he was stalking and harassing, and sent the AI-generated imagery to the victims’ family and coworkers, according to a newly filed court record written by an FBI Special Agent.
On Monday, Special Agent Josh Saltar filed an affidavit in support of a criminal complaint to arrest James Strahler II, 37, and accused him of cyberstalking, sextortion, telecommunications harassment, production of a “morphed image” of child pornography, and transportation of obscene material.
As Ohio news outlet The Columbus Dispatch notes, several of these allegations occurred while he was on pre-trial release for related cases in municipal court, including leaving a voicemail with one of the victims where he threatened to rape them.
The court document details dozens of text messages and voicemails Strahler allegedly sent to at least 10 victims that prosecutors have identified, including threats of blackmail using AI generated images of themselves having sex with their relatives. In January, one of the victims called the police after Strahler sent a barrage of messages and imagery to her and her mother from a variety of unknown numbers.
She told police some of the photos sent to her and her mother “depicted her own body,” and that the images of her nude “were both images she was familiar with and ones that she never knew had been taken that depicted her using the toilet and changing her clothes,” the court document says. She also “indicated the content she was sent utilized her face morphed onto nude bodies in what appeared to be AI generated pornography which depicted her engaged in sex acts with various males, including her own father.”
In April, that victim called the police again because Strahler allegedly started sending her images again from unknown numbers. “Some of the images were real images of [her] nude body and some were of [her] face imposed on pornographic images and engaged in sex acts,” the document says.
Around April 21, 2025, police seized Strahler’s phone and told him “once again” to stop contacting the initial victim, her family, and her coworkers, according to the court documents. The same day, the first victim allegedly received more harassing messages from him from different phone numbers. He was arrested, posted $50,000 bail, and released the next day, the Dispatch reported.
Phone searches also indicated he’d been harassing two other women—ex-girlfriends—and their mothers. “Strahler found contact information and pictures from social media of their mothers and created sexual AI media of their daughters and themselves and sent it to them,” the court document says. “He requested nude images in exchange for the images to stop and told them he would continue to send the images to friends and family.”
The document goes into gruesome detail about what authorities found when they searched his devices. Authorities say Strahler had been posing as the first victim and uploading nude AI generated photos of her to porn sites. He allegedly uploaded images and videos to Motherless.com, a site that describes itself as “a moral free file host where anything legal is hosted forever!”
Strahler also searched for sexually violent content, the affidavit claims, and possessed “an image saved of a naked female laying on the ground with a noose around her neck and [the first victim’s] face placed onto it,” the document says. His phone also had “numerous victims’ names and identifiers listed in the search terms as well as information about their high schools, bank accounts, and various searches of their names with the words ‘raped,’ ‘naked,’ and ‘porn’ listed afterwards,” the affidavit added.
They also found Strahler’s search history included the names of several of the victims and multiple noteworthy terms, including “Delete apple account,” “menacing by stalking charge,” several terms related to rape, incest, and “tube” (as in porn tube site). He also searched for “Clothes off io” and “Undress ai,” the document says. ClothOff is a website and app for making nonconsensual deepfake imagery, and Undress is a popular name for many different apps that use AI to generate nude images from photos. We’ve frequently covered “undress” or “nudify” apps and their presence in app stores and in advertising online; the apps are extremely widespread and easy to find and use, even for school children.
Other terms Strahler searched included “ai that makes porn,” “undress anyone,” “ai porn makers using own pictures,” “best undress app,” and “pay for ai porn,” the document says.
He also searched extensively for child sexual abuse material, and used photographs of one of the victims’ children, placing them onto adult bodies, according to court records.
The Delaware County Sheriff’s Office arrested Strahler at his workplace on June 12. A federal judge ordered that Strahler was to remain in custody pending future federal court hearings.
A federal judge in California ruled Monday that Anthropic likely violated copyright law when it pirated authors’ books to create a giant dataset and "forever" library, but that training its AI on those books without authors' permission constitutes transformative fair use under copyright law. The complex decision is one of the first of its kind in a series of high-profile copyright lawsuits brought by authors and artists against AI companies, and it is largely a bad decision for authors, artists, writers, and web developers.
This case, in which authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued Anthropic, maker of the Claude family of large language models, is one of dozens of high-profile lawsuits brought against AI giants. The authors sued Anthropic because the company scraped full copies of their books to train its AI models, drawing on a now-notorious dataset called Books3 as well as the piracy websites LibGen and Pirate Library Mirror (PiLiMi). The suit also claims that Anthropic bought used physical copies of books and scanned them for the purposes of training AI.
"From the start, Anthropic ‘had many places from which’ it could have purchased books, but it preferred to steal them to avoid ‘legal/practice/business slog,’ as cofounder and chief executive officer Dario Amodei put it. So, in January or February 2021, another Anthropic cofounder, Ben Mann, downloaded Books3, an online library of 196,640 books that he knew had been assembled from unauthorized copies of copyrighted books — that is, pirated," William Alsup, a federal judge for the Northern District of California, wrote in his decision Monday. "Anthropic’s next pirated acquisitions involved downloading distributed, reshared copies of other pirate libraries. In June 2021, Mann downloaded in this way at least five million copies of books from Library Genesis, or LibGen, which he knew had been pirated. And, in July 2022, Anthropic likewise downloaded at least two million copies of books from the Pirate Library Mirror, or PiLiMi, which Anthropic knew had been pirated."
Ecovillages, cooperatives, shared housing... Experiments in more autonomous communities are multiplying in France and elsewhere. Journalist Juliette Duquesne went to meet these collectives,…