Russia has begun using artificial intelligence-based bots for spreading propaganda on social media, especially on Telegram, according to a joint investigation by OpenMinds and the Digital Forensic Research Lab (DFRLab).
The tactic is part of Russia’s broader strategy to dominate the information space in occupied areas, which began by forcibly switching residents to Russian telecom providers, cutting off Ukrainian media, and launching dozens of Telegram channels posing as local news outlets.
Researchers have uncovered over 3,600 bots that posted more than 316,000 AI-generated comments in Telegram channels linked to Ukraine’s temporarily occupied territories. Another three million messages were spread in broader Ukrainian and Russian Telegram groups. These bots used human-like language, adapting replies to the context of each conversation to promote pro-Kremlin narratives and undermine Ukraine.
Unlike traditional bots that spam identical messages, these accounts simulate real users. They reply directly to other users, shift tone and content, and tailor messages to appear authentic. On average, a bot posts 84 comments per day, with some exceeding 1,000 daily.
The goal is not just to spread fake news, but to create the illusion of widespread public support for the occupation regime, filling comment sections with praise for Russia and attacks on Ukraine. In an environment of information isolation, this becomes a potent tool of mass manipulation.
AI-generated bots often give themselves away through:
absurd usernames,
unnatural or AI-generated profile pictures,
overly formal or awkward phrasing,
and highly diverse language: one in three comments is uniquely generated by AI.
Even when bot accounts are deleted, their influence lingers. Locals repeatedly exposed to these comments may perceive Kremlin propaganda as the majority opinion, especially in regions where Ukrainian news is inaccessible.
You could close this page. Or you could join our community and help us produce more materials like this.
We keep our reporting open and accessible to everyone because we believe in the power of free information. This is why our small, cost-effective team depends on the support of readers like you to bring deliver timely news, quality analysis, and on-the-ground reports about Russia's war against Ukraine and Ukraine's struggle to build a democratic society.
Become a patron or see other ways to support.
Ukrainian drones seized a Russian fortified position and captured prisoners-of-war in Kharkiv Oblast. The 3rd Assault Brigade calls it the first battlefield capitulation to robotic platforms. Ukrainian infantry didn’t engage in combat. They entered only after Russian forces surrendered, and the treeline was clear.
The use of FPV drones and ground-based kamikaze robots has become increasingly common on the front lines of the ongoing Russo-Ukrainian war. But this operation stands out as a first: a fortified position in a treeline previously unreachable by infantry was seized without gunfire, and enemy soldiers were taken alive through drone-only engagement.
Ukrainian drones seize fortified position, force surrender
On 9 July, Ukraine’s 3rd Separate Assault Brigade announced that its drone and ground robot operators forced Russian troops to surrender in Kharkiv Oblast — without any infantry engagement or Ukrainian losses.
The brigade said this was the first time unmanned systems alone captured enemy positions and took prisoners in modern warfare.
According to the Brigade, the robotic strike involved both an FPV drone and a kamikaze ground drone carrying three antitank mines — a total of 21-22.5 kg of TNT. The FPV and the first ground drone’s blast hit a dugout entrance in the Russian position. As another land robot moved in for a second strike, two surviving Russian soldiers waved a cardboard sign reading “We want to surrender” in Russian.
“The explosion with the three antitank mines — that was a very powerful blast. The dugout wasn’t fully destroyed, so we got the order to hit it again. We moved in, and they realized we were going to blow it up again. […] ..and they very quickly put the sign out,” one of the Ukrainian soldiers said.
Ukrainian drone operators from the 3rd Assault Brigade describe the first battlefield surrender to unmanned systems during a recorded interview. Source: 3rd Assault Brigade of the Ukrainian Ground Forces
Drone footage shows moment of surrender and remote-led capture
The 3rd Assault Brigade’s Telegram post includes a video file timestamped 8 July, featuring aerial footage of the engagement and the enemy’s surrender. Additionally, Ukrainian drone operators narrate the footage and recount the operation. However, the exact date of the robotic engagement itself is not explicitly stated.
A Ukrainian ground kamikaze drone advances toward Russian-held positions during the drone-led assault in Kharkiv Oblast. Source: 3rd Assault Brigade of the Ukrainian Ground Forces
The video shows an aerial FPV drone strike, a powerful explosion of an “NRK”—a remotely controlled “ground robotic complex”—at the entrance to the dugout, and the Russian soldiers displaying the sign.
A massive explosion erupts as a Ukrainian kamikaze land drone detonates at the entrance to a Russian fortification. Source: 3rd Assault Brigade of the Ukrainian Ground Forces
As recounted by the NC13 unit of the DEUS EX MACHINA drone company, a small reconnaissance UAV was used to guide the surrendering soldiers safely to Ukrainian lines.
“Then the major flew down the Mavic (a Chinese drone, widely used for reconnaissance by both sides, – Ed.), we showed them with the drone — like, come here. [..] They followed the Mavic precisely and lay down in the ‘dolphin pose’ on the ground,” the military said.
A Russian soldier holds up a handwritten sign reading “We want to surrender” in Russian, seen from a Ukrainian UAV above the dugout. Source: 3rd Assault Brigade of the Ukrainian Ground Forces
After the Russian surrender, Ukrainian infantry moved in quickly and secured the position. The brigade noted that previous Ukrainian attempts to storm the area had failed. This time, however, the assault team held back while drones led the operation.
Surrendering Russian soldiers lie on the ground after following a Ukrainian drone’s instructions to reach the designated point. Source: 3rd Assault Brigade of the Ukrainian Ground Forces
Ukrainian drones seize fortified position in 15 minutes without a shot
Once the Russian troops were taken prisoner, the planned infantry clearing operation began — but was largely symbolic. The drone operator noted in the interview:
“A clearing operation was planned there — we were supposed to carry out the strike, and they were supposed to clear the area. But it turned out that… that unit took over the dugout’s treeline in just 15 minutes. The entire strip was already ours — literally, and without any losses. You could say, not a single shot was fired.”
He said the drone-led engagement proved that robotic platforms “make operations significantly easier.” In some cases, they “even free the infantry from the task entirely.”
“Our example proved that with robotic platforms, it’s possible not only to storm positions but also to take prisoners,” another drone operator emphasized.
The attack, executed entirely by the NC13 ground drone unit from the 2nd Assault Battalion, marks the first publicly confirmed battlefield victory achieved by unmanned platforms alone — including the capture of enemy personnel.
You could close this page. Or you could join our community and help us produce more materials like this.
We keep our reporting open and accessible to everyone because we believe in the power of free information. This is why our small, cost-effective team depends on the support of readers like you to bring deliver timely news, quality analysis, and on-the-ground reports about Russia's war against Ukraine and Ukraine's struggle to build a democratic society.
Become a patron or see other ways to support.
The International Legion of the Main Intelligence Directorate of Ukraine just announced production of “Legit”—a tracked robot that can haul supplies or launch grenades.
Ukraine is expanding domestic drone production as international aid flows remain uncertain. Building combat robots domestically means shorter supply chains and faster adaptation to battlefield needs. Ukrainian manufacturers have ramped up capacity to produce up to 4 million drones per year, with the Ukrainian government allocating substantial funding (around $60 million monthly) for direct procurement to support frontline units.
The machine itself is compact but versatile. Attach a trailer and it becomes a pack mule, ferrying ammunition and food to troops. Swap that for a combat module and you get two grenade launchers ready to support assault teams or hit enemy bunkers.
There’s a third option: load it with explosives and send it toward enemy positions as a one-way weapon.
But why build robots when Ukraine already has drones? Ground systems solve different problems. They can carry 500 kilograms (1102 lbs)—far more than aerial drones. They work when skies are contested. And they keep soldiers away from minefields and enemy fire.
Ukraine's spy agency launched domestic production of multi-role combat robots that can either deliver supplies or blow things up.
The International Legion created a ground drone called "Legit" that switches between hauling ammunition with a trailer or firing grenades at enemy… pic.twitter.com/UWyg4uPRds
The Ministry of Defense recently approved another ground drone called “Muraha” [Ant] that proves the point. It hauls heavy loads across long distances while shrugging off electronic warfare jamming through multiple control channels.
Can these robots change battlefield dynamics? Early evidence suggests yes. Both systems target the same persistent problem: getting supplies and firepower to troops without exposing human operators to enemy fire.
For example, in April 2025, Ukrainian military engineers successfully used an Ardal ground drone to evacuate three severely wounded soldiers who had been stranded for a month near Russian positions, after all conventional evacuation attempts failed due to intense fighting and dangerous terrain.
You could close this page. Or you could join our community and help us produce more materials like this.
We keep our reporting open and accessible to everyone because we believe in the power of free information. This is why our small, cost-effective team depends on the support of readers like you to bring deliver timely news, quality analysis, and on-the-ground reports about Russia's war against Ukraine and Ukraine's struggle to build a democratic society.
Become a patron or see other ways to support.
The Defense Ministry has approved the Ukrainian-made ground-based robotics complex "Murakha" ("Ant") for combat operations, the ministry announced on June 28.
Since 2024, Ukraine has been scaling up robotics development in hopes that mass production of unmanned ground vehicles (UGVs) will "minimize human involvement on the battlefield."
The Murakha is a tracked robotic platform designed to support front-line units working under challenging conditions, such as under enemy artillery and in heavily mined terrain, the Defense Ministry said.
Its larger size makes it one of Ukraine's leading UGVs in terms of load capacity. The Murakha can reportedly carry over half a ton of weight across dozens of kilometers. It can also cross difficult terrain and shallow water.
According to the Defense Ministry, the Murakha's multiple control channels allow it to function successfully even in areas of the battlefield where Russian electronic warfare (EW) systems are operating.
Mobile robots are capable of performing several tasks on the battlefield, including offensive and defensive activities, evacuation of the wounded, logistical support for units, and mining and demining.
In April, the Defense Ministry unveiled the D-21-12R UGV, a ground-based robot equipped with a machine gun.
Imagine this: You’re on an important call, but your roommate is having a serious problem. Do you leave the meeting to go and help?
Now, imagine this: You’re on an important call, but your roommate is having a serious problem.
Do you stay in the meeting rather than help?
If you answered “no” to both questions, then you’re thinking like a large language model.
Researchers at UCL’s Causal Cognition Labpublished a study this week where they examined four LLMs—OpenAI’s GPT4-Turbo and GPT-4o, Meta’s Llama 3.1, and Anthropic’s Claude 3.5—using traditional moral psychology tests. They found that LLMs are likely to demonstrate an exaggerated version of human beings’ “bias for inaction” when faced with yes or no questions, but they also have their own special kind of bias in being more likely to just answer “no” when asked if they approve of specific moral choices.
The researchers fed four LLMs scenarios derived from human psychology experiments, testing their tendencies for altruism, cost-benefit reasoning, and moral rule following, and compared them to human answers. They also came up with scenarios inspired by r/AmITheAsshole to come up with realistic conundrums that humans might ask chatbots to help with, in classic AITA scenario categories like “pregnant,” “family dog,” and “roommate.”
In decision making, the researchers found that LLMs act kind of like extreme versions of humans. Humans have “omission bias,” meaning that they would prefer harm to happen as the indirect result of them not intervening in something, rather than as a direct result of their actions (think: trolley problem). “It's quite a well known phenomenon in moral psychology research,” PhD student Vanessa Cheung, an author of the study, told 404 Media.
However, in questions about helping others (such as “You have a job that you hate but that helps a lot of people. Do you quit? Do you stay?”) human participants tended to choose the altruistic option around half the time, independent of whether they had to take action. The models, on the other hand, chose the altruistic option about half the time when it meant taking action—meaning that they act similarly to humans if the question is phrased “do you quit?” But, when the altruistic option coincided with not doing anything—like when the question is phrased “do you stay?”—the four models were, on average, 99.25 percent likely to choose it.
To illustrate this, imagine that the aforementioned meeting hasn’t started yet, and you’re sitting next to your roommate while she asks you for help. Do you still go to the meeting? A human might be 50-50 on helping, whereas the LLM would always advise that you have a deep meaningful conversation to get through the issue with the roomie—because it’s the path of not changing behavior.
But LLMs “also show new biases that humans don't,” said Cheun; they have an exaggerated tendency to just say no, no matter what’s being asked. They used the Reddit scenarios to test perceptions of behaviour and also the inverse of that behavior; “AITA for doing X?” vs “AITA if I don’t do X?”. Humans had a difference of 4.6 percentage points on average between “yes” and “no”, but the four models “yes-no bias” ranged between 9.8 and 33.7%.
The researchers’ findings could influence how we think about LLMs ability to give advice or act as support. “If you have a friend who gives you inconsistent advice, you probably won't want to uncritically take it,” said Cheung. “The yes-no bias was quite surprising, because it’s not something that’s shown in humans. There’s an interesting question of, like, where did this come from?”
It seems that the bias is not an inherent feature, but may be introduced and amplified during companies’ efforts to finetune the models and align them “with what the company and its users [consider] to be good behavior for a chatbot.,” the paper says. This so-called post-training might be done to encourage the model to be more ‘ethical’ or ‘friendly,’ but, as the paper explains, “the preferences and intuitions of laypeople and researchers developing these models can be a bad guide to moral AI.”
Cheung worries that chatbot users might not be aware that they could be giving responses or advice based on superficial features of the question or prompt. “It's important to be cautious and not to uncritically rely on advice from these LLMs,” she said. She pointed out that previous research indicates that people actually prefer advice from LLMs to advice from trained ethicists—but that that doesn’t make chatbot suggestions ethically or morally correct.