Russia is trying to blame Ukraine for the attacks on NATO. Recently, a video appeared in Lithuania showing “drone debris with Ukrainian writing,” which Russian Telegram channels have been actively spreading, Spravdi reports.
In early July, Polish investigators revealed that wreckage from Russian drones shot down in Ukraine contained 4G modems equipped with SIM cards from Polish mobile operators, as reported by Polish journalist Marek Budzisz. Later, a SIM card from a Lithuanian operator was also discovered in another drone. Analysts say the findings directly point to Russia’s preparation for drone incursions into NATO airspace, as Moscow tested mobile network connections in advance.
Fact-checkers at Ukrinform confirmed that the clip was professionally edited, with rapid scene changes unlike amateur footage, indicating it was staged.
Fakes come alive on screen
The original video came from the TikTok account katelynltu, created specifically for this post. The account’s avatar is also used by other fake profiles, indicating it is a bot account.
Stefanov on the stage of lies
In reality, the video is a Russian propaganda fabrication aimed at distracting from Russia’s real attacks on NATO countries.
RT correspondent Oleksii Stefanov, a Russian propagandist previously expelled from Latvia, appears in the clip, confirming the action was planned.
Sky under attack
Since late July, Russia has regularly carried out drone attacks on the Baltic states and Eastern Europe. Lithuania confirmed that drones deliberately violated its airspace, while Poland and Romania were also targeted by Russian drones.
Similar fakes are actively spreading in Poland and Romania, undermining trust in information about actual strikes.
On 10 September, Russian drones violated Polish airspace. The incident prompted Poland to scramble aircraft and temporarily shut down airports in Warsaw, Lublin, and Rzeszów amid fears of a wider attack. NATO fighter jets shot down at least four of the reportedly 19 drones that crossed into Poland.
It’s now illegal in Michigan to make AI-generated sexual imagery of someone without their written consent. Michigan joins 47 other states in the U.S. that have enacted their own deepfake laws.
Michigan Governor Gretchen Whitmer signed the bipartisan-sponsored House Bill 4047 and its companion bill 4048 on August 26. In a press release, Whitmer specifically called out the sexual uses for deepfakes. “These videos can ruin someone’s reputation, career, and personal life. As such, these bills prohibit the creation of deep fakes that depict individuals in sexual situations and creates sentencing guidelines for the crime,” the press release states. That’s something we’ve seen time and time again with victims of deepfake harassment, who have told us over the six years since consumer-level deepfakes first hit the internet that the technology’s most popular application has been careless and vindictive sexual harassment of the women its users target.
Making a deepfake of someone is now a misdemeanor in Michigan, punishable by imprisonment of up to one year and fines up to $3,000 if they “knew or reasonably should have known that the creation, distribution, dissemination, or reproduction of the deep fake would cause physical, emotional, reputational, or economic harm to an individual falsely depicted,” and if the deepfake depicts the target engaging in a sexual act and is identifiable “by a reasonable individual viewing or listening to the deep fake,” the law states.
This is all before the deepfake’s creator posts it online. It escalates to a felony if the person depicted suffers financial loss; if the person making the deepfake intended to profit from it; if that person maintains a website or app for the purpose of creating deepfakes, or posted it to any website at all; if they intended to “harass, extort, threaten, or cause physical, emotional, reputational, or economic harm to the depicted individual”; or if they have a previous conviction.
💡
Have you been targeted by deepfake harassment, or have you made deepfakes of real people? Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.
The law specifically says that this isn’t to be construed to make platforms liable; liability falls on the person making the deepfakes. But we already have federal law in place that makes platforms liable: the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks, or TAKE IT DOWN, Act, introduced by Ted Cruz in June 2024 and signed into law in May this year, made platforms liable for failing to moderate deepfakes and imposes extremely short timelines for acting on AI-generated abuse imagery reports from users. That law has drawn a lot of criticism from civil liberties and online speech activists for being overbroad. As the Verge pointed out before it became law, because the Trump administration’s FTC is in charge of enforcing it, it could easily become a weapon against all sorts of speech, including constitutionally protected free speech.
“Platforms that feel confident that they are unlikely to be targeted by the FTC (for example, platforms that are closely aligned with the current administration) may feel emboldened to simply ignore reports of NCII,” the Cyber Civil Rights Initiative told the Verge in April. “Platforms attempting to identify authentic complaints may encounter a sea of false reports that could overwhelm their efforts and jeopardize their ability to operate at all.”
“If you do not have perfect technology to identify whatever it is we're calling a deepfake, you are going to get a lot of guessing being done by the social media companies, and you're going to get disproportionate amounts of censorship,” especially for marginalized groups, Kate Ruane, an attorney and director of the Center for Democracy and Technology’s Free Expression Project, told me in June 2024. “For a social media company, it is not rational for them to open themselves up to that risk, right? It's simply not. And so my concern is that any video with any amount of editing, which is like every single TikTok video, is then banned for distribution on those social media sites.”
On top of the TAKE IT DOWN Act, at the state level, deepfake laws are either pending or enacted in every state except New Mexico and Missouri. In some states, like Wisconsin, the law only protects minors from deepfakes by expanding child sexual abuse imagery laws.
Even as deepfake legislation seems to finally catch up to the notion that AI-generated sexual abuse imagery is abusive, reporting this kind of harassment to authorities or pursuing civil action against one’s own abuser is still difficult, expensive, and re-traumatizing in most cases.