A Lost Planet Created the Moon. Now, We Know Where It Came From.

22 November 2025 at 09:00

Welcome back to the Abstract! Here are the studies this week that overthrew the regime, survived outer space, smashed planets, and crafted an ancient mystery from clay.

First, a queen gets sprayed with acid—and that’s not even the most horrifying part of the story. Then: a moss garden that is out of this world, the big boom that made the Moon, and a breakthrough in the history of goose-human relations.

As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens, or subscribe to my personal newsletter the BeX Files

What is this, a regime change for ants?

Shimada, Taku et al. “Socially parasitic ant queens chemically induce queen-matricide in host workers.” Current Biology.

Every so often, a study opens with such a forceful hook that it is simply best for me to stand aside and allow it to speak for itself. Thus:

“Matricide—the killing of a mother by her own genetic offspring—is rarely observed in nature, but not unheard-of. Among animal species in which offspring remain with their mothers, the benefits gained from maternal care are so substantial that eliminating the mother almost never pays, making matricide vastly rarer than infanticide.”

“Here, we report matricidal behavior in two ant species, Lasius flavus and Lasius japonicus, where workers kill resident queens (their mothers) after the latter have been sprayed with abdominal fluid by parasitic ant queens of the ants Lasius orientalis and Lasius umbratus.”

Mad props to this team for condensing an entire entomological epic into three sentences. Such murderous acts of dynastic usurpation were first observed by Taku Shimada, an ant enthusiast who runs a blog called Ant Room. Though matricide is sometimes part of a life cycle—like mommy spiders sacrificing their bodies for consumption by their offspring—there is no clear precedent for the newly reported form of matricide, in which neither the young nor the mother benefits from an evolutionary point of view.

In what reads like an unfolding horror, the invading parasitic queens “covertly approach the resident queen and spray multiple jets of abdominal fluid at her”—formic acid, as it turns out—that then “elicits abrupt attacks by host workers, which ultimately kill their own mother,” report Shimada and his colleagues.  

“The parasitic queens are then accepted, receive care from the orphaned host workers and produce their own brood to found a new colony,” the team said. “Our findings are the first to document a novel host manipulation that prompts offspring to kill an otherwise indispensable mother.”

My blood is curdling and yet I cannot look away! Though this strategy is uniquely nightmarish, it is not uncommon for invading parasitic ants to execute queens in any number of creative ways. The parasites are just usually a bit more hands-on (or rather, tarsus-on) about the process. 

“Queen-killing” has “evolved independently on multiple occasions across [ant species], indicating repeated evolutionary gains,” Shimada’s team said. “Until now, the only mechanistically documented solution was direct assault: the parasite throttles or beheads the host queen, a tactic that has arisen convergently in several lineages.”

When will we get an ant Shakespeare?! Someone needs to step up and claim that title, because these queens blow Lady Macbeth out of the water.

In other news…

That’s one small stem for a plant, one giant leaf for plant-kind

Maeng, Chang-hyun et al. “Extreme environmental tolerance and space survivability of the moss, Physcomitrium patens.” iScience.

Scientists simply love to expose extremophile life to the vacuum of space to, you know, see how well they do out there. In a new addition to this tradition, a study reports that spores from the moss Physcomitrium patens survived a full 283 days chilling on the outside of the International Space Station, which is generally not the side of an orbital habitat you want to be stuck on. 

A reddish-brown spore similar to those used in the space exposure experiment. Image: Tomomichi Fujita

Even wilder, most of the spacefaring spores were reproductively successful upon their return to Earth. “Remarkably, even after 9 months of exposure to space conditions, over 80% of the encased spores germinated upon return to Earth,” said researchers led by Chang-hyun Maeng of Hokkaido University. “To the best of our knowledge, this is the first report demonstrating the survival of bryophytes”—the family to which mosses belong—”following exposure to space and subsequent return to the ground.”

Congratulations to these mosses for boldly growing where no moss has grown before.

Hints of a real-life ghost world

Hopp, Timo et al. “The Moon-forming impactor Theia originated from the inner Solar System.” Science.

Earth had barely been born before a Mars-sized planet, known as Theia, smashed into it some 4.5 billion years ago. The debris from the collision coalesced into what is now our Moon, which has played a key role in Earth’s habitability, so we owe our lives in part to this primordial punch-up.

KABLOWIE! Image: NASA/JPL-Caltech

Scientists have now revealed new details about Theia by measuring the chemical makeup of “lunar samples, terrestrial rocks, and meteorites…from which Theia and proto-Earth might have formed,” according to a new study. They conclude that Theia likely originated in the inner solar system based on the chemical signatures that this shattered world left behind on the Moon and Earth.

“We found that all of Theia and most of Earth’s other constituent materials originated from the inner Solar System,” said researchers led by Timo Hopp of The University of Chicago and the Max Planck Institute for Solar System Research. “Our calculations suggest that Theia might have formed closer to the Sun than Earth did.”

Whatever its actual birthplace, what remains of Theia is buried on the Moon and as giant undigested slabs inside Earth’s mantle. Rest in pieces, sister.

Goosebumps of yore

Davin, Laurent et al. “A 12,000-year-old clay figurine of a woman and a goose marks symbolic innovations in Southwest Asia.” Proceedings of the National Academy of Sciences.

You’ve heard of the albatross around your neck, but what about the goose on your back? A new study reports the discovery of a 12,000-year-old artifact in Israel that is the “earliest known figurine to depict a human–animal interaction” with its vision of a goose mysteriously draped over a woman’s spine and shoulders.

The tiny, inch-high figurine was recovered from a settlement built by the prehistoric Natufian culture, and it may represent some kind of sex thing.

An image of the artifact, and an artistic reconstruction. Image: Davin, Laurent et al.

“We…suggest that by modeling a goose in this specific posture, the Natufian manufacturer intended to portray the trademark pattern of the gander’s mating behavior,” said researchers led by Laurent Davin of the Hebrew University of Jerusalem. “This kind of imagined mating between humans and animal spirits is typical of an animistic perspective, documented in cross-cultural archaeological and ethnographic records in specific situations” such as an “erotic dream” or “shamanistic vision.”

First, the bizarre Greek myth of Leda and the Swan, and now this? What is it about ancient cultures and weird waterfowl fantasies? In any case, my own interpretation is that the goose was just tired and needed a piggyback (or gaggle-back).

Thanks for reading! See you next week.

Behind the Blog: A Risograph Journey and Data Musings

21 November 2025 at 11:54

This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss how data is accessed, AI in games, and more.

JOSEPH: This was a pretty big week for impact at 404 Media. Sam’s piece on an exposed AI porn platform ended up with the company closing off those exposed images. Our months-long reporting and pressure from lawmakers led to the closure of the Travel Intelligence Program (TIP), in which a company owned by the U.S.’s major airlines sold flyers’ data to the government for warrantless surveillance.

For the quick bit of context I have typed many, many times this year: that company is Airlines Reporting Corporation (ARC), which is owned by United, American, Delta, Southwest, JetBlue, Alaska, Lufthansa, Air France, and Air Canada. Whenever someone books a flight through one of more than 10,000 travel agencies (think Expedia, especially), ARC gets data including the traveler’s name, the credit card used, and where they’re flying to and from. ARC then sells access to that data to a slew of government agencies, including ICE, the FBI, the SEC, the State Department, the ATF, and more.

ChatFishing, phishing by AI

Your new Tinder crush asks you stimulating questions and you chat with him late into the night? And when you meet him, it’s a cold shower: he doesn’t say a word. No doubt about it, you’ve been chatfished! Via Dans les algorithmes.

But where are facial recognition’s false negatives?

21 November 2025 at 01:00

In the journal Data & Policy, legal scholars Karen Yeung and Wenlong Li examined the results of four trials of real-time facial recognition conducted in London, Wales, Berlin, and Nice. In Great Britain, for example, no information was collected on the false negatives the systems generated, such as people who were in the watchlist but were not identified by the software. Nowhere was the systems’ negative impact examined. For the researchers, trials of this kind lack rigor and produce no new knowledge. When you don’t look for flaws, you certainly won’t find them. Via Algorithm Watch.

Cops Used Flock to Monitor No Kings Protests Around the Country

20 November 2025 at 19:00

Police departments and officials from Border Patrol used Flock’s automatic license plate reader (ALPR) cameras to monitor protests hundreds of times around the country during the last year, including No Kings protests in June and October, according to data obtained by the Electronic Frontier Foundation (EFF).

The data provides the clearest picture yet of how widely cops use Flock to monitor protesters. In June, 404 Media reported cops in California used Flock to track what it described as an “immigration protest.” The new data shows more than 50 federal, state, and local law enforcement agencies ran hundreds of searches in connection with protest activity, according to the EFF.

Elon Musk Could 'Drink Piss Better Than Any Human in History,' Grok Says

20 November 2025 at 16:38

Elon Musk is a better role model than Jesus, better at conquering Europe than Hitler, the greatest blowjob giver of all time, should have been selected before Peyton Manning in the 1998 NFL draft, is a better pitcher than Randy Johnson, has the “potential to drink piss better than any human in history,” and is a better porn star than Riley Reid, according to Grok, X’s sycophantic AI chatbot that has seemingly been reprogrammed to treat Musk like a god. 

Grok has been tweaked sometime in the last several days and will now choose Musk as superior to the entire rest of humanity at any given task. The change is somewhat reminiscent of Grok’s MechaHitler debacle. For the moment it’s pretty funny, and people on various social media platforms are dunking on Musk and Grok for it, but it’s also an example of how big tech companies like X regularly put their thumbs on the scales of their AI chatbots to distort reality and obtain their desired outcomes.

ICE Says Critical Evidence In Abuse Case Was Lost In 'System Crash' a Day After It Was Sued

20 November 2025 at 14:40

The federal government claims that the day after it was sued for allegedly abusing detainees at an ICE detention center, a “system crash” deleted nearly two weeks of surveillance footage from inside the facility.  

People detained at ICE’s Broadview Detention Center in suburban Chicago sued the government on October 30; according to their lawyers and the government, nearly two weeks of footage that could show how they were treated was lost in a “system crash” that happened on October 31.

“The government has said that the data for that period was lost in a system crash apparently on the day after the lawsuit was filed,” Alec Solotorovsky, one of the lawyers representing people detained at the facility, said in a hearing about the footage on Thursday that 404 Media attended via phone. “That period we think is going to be critical […] because that’s the period right before the lawsuit was filed.”

Earlier this week, we reported on the fact that the footage, from October 20 to October 30, had been “irretrievably destroyed.” At a hearing Thursday, we learned more about what was lost and the apparent circumstances of the deletion. According to lawyers representing people detained at the facility, it is unclear whether the government is even trying to recover the footage; government lawyers, meanwhile, said “we don’t have the resources” to continue preserving surveillance footage from the facility and suggested that immigrants detained at the facility (or their lawyers) could provide “endless hard drives where we could save the information, that might be one solution.” 

It should be noted that ICE and Border Patrol agents continued to be paid during the government shutdown, and that Trump’s “Big Beautiful Bill” provided $170 billion in funding for immigration enforcement and border protection, including tens of billions of dollars for detention centers.

People detained at the facility are suing the government over alleged horrific treatment and living conditions at the detention center, which has become a site of mass protest against the Trump administration’s mass deportation campaign. 

Solotorovsky said that the footage the government has offered is from between September 28 and October 19, and from between October 31 and November 7. Government lawyers have said they are prepared to provide footage from five cameras for those time periods; Solotorovsky said the plaintiffs’ attorneys believe there are 63 surveillance cameras total at the facility. He added that over the last few weeks the plaintiffs’ legal team has been trying to work with the government to figure out whether the footage can be recovered, but that it is unclear who is doing this work on the government’s side. He said they were referred to a company called Five by Five Management, which “appears to be based out of a house” and has supposedly been retained by the government.

“We tried to engage with the government through our IT specialist, and we hired a video forensic specialist,” Solotorovsky said. He added that the government specialist they spoke to “didn’t really know anything beyond the basic specifications of the system. He wasn’t able to answer any questions about preservation or attempts to recover the data.” He said that the government eventually put him in touch with “a person who ostensibly was involved in those events [attempting to recover the data], and it was kind of a no-name LLC called Five by Five Management that appears to be based out of a house in Carol Stream. We were told they were on site and involved with the system when the October 20 to 30 data was lost, but nobody has told us that Five By Five Management or anyone else has been trying to recover the data, and also very importantly things like system logs, administrator logs, event logs, data in the system that may show changes to settings or configurations or deletion events or people accessing the system at important times.”

Five by Five Management could not be reached for comment.

Solotorovsky said those logs are going to be critical for “determining whether the loss was intentional. We’re deeply concerned that nobody is trying to recover the data, and nobody is trying to preserve the data that we’re going to need for this case going forward.”

Jana Brady, an assistant US attorney representing the Department of Homeland Security in the case, did not have much information about what had happened to the footage, and said she was trying to get in touch with contractors the government had hired. She also said the government should not be forced to retain surveillance footage from every camera at the facility, and that “we [the federal government] don’t have the resources to save all of the video footage.”

“We need to keep in mind proportionality. It took a huge effort to download and save and produce the video footage that we are producing and to say that we have to produce and preserve video footage indefinitely for 24 hours a day, seven days a week, indefinitely, which is what they’re asking, we don’t have the resources to do that,” Brady said. “We don't have the resources to save all of the video footage 24/7 for 65 cameras for basically the end of time.”

She added that the government would be amenable to saving all footage if the plaintiffs “have endless hard drives that we could save things to, because again we don’t have the resources to do what the court is ordering us to do. But if they have endless hard drives where we could save the information, that might be one solution.”

Magistrate Judge Laura McNally said they aren’t being “preserved from now until the end of time, they’re being preserved for now,” and said “I’m guessing the federal government has more resources than the plaintiffs here and, I’ll just leave it at that.” 

When McNally asked if the footage was gone and not recoverable, Brady said “that’s what I’ve been told.”  

“I’ve asked for the name and phone number for the person that is most knowledgeable from the vendor [attempting to recover] the footage, and if I need to depose them to confirm this, I can do this,” she said. “But I have been told that it’s not recoverable, that the system crashed.”

Plaintiffs in the case say they are being held in “inhumane” conditions. The complaint describes a facility where detainees are “confined at Broadview inside overcrowded holding cells containing dozens of people at a time. People are forced to attempt to sleep for days or sometimes weeks on plastic chairs or on the filthy concrete floor. They are denied sufficient food and water […] the temperatures are extreme and uncomfortable […] the physical conditions are filthy, with poor sanitation, clogged toilets, and blood, human fluids, and insects in the sinks and the floor […] federal officers who patrol Broadview under Defendants’ authority are abusive and cruel. Putative class members are routinely degraded, mistreated, and humiliated by these officers.” 

OnlyFans Will Start Checking Criminal Records. Creators Say That's a Terrible Idea

20 November 2025 at 10:45

OnlyFans will start running background checks on people signing up as content creators, the platform’s CEO recently announced. 

As reported by adult industry news outlet XBIZ, OnlyFans CEO Keily Blair announced the partnership in a LinkedIn post. Blair doesn’t say in the post when the checks will be implemented, whether all types of criminal convictions will bar creators from signing up, if existing creators will be checked as well, or what countries’ criminal records will be checked. 

OnlyFans did not respond to 404 Media's request for comment.

“I am very proud to add our partnership with Checkr Trust to our onboarding process in the US,” Blair wrote. “Checkr, Inc. helps OnlyFans to prevent people who have a criminal conviction which may impact on our community's safety from signing up as a Creator on OnlyFans. It’s collaborations like this that make the real difference behind the scenes and keep OnlyFans a space where creators and fans feel secure and empowered.”  

Many OnlyFans creators turned to the platform, and to online sex work more generally, when they weren’t able to obtain employment at traditional workplaces. Some sex workers doing in-person work turned to online sex work as a way to make ends meet—especially after the passage of the Fight Online Sex Trafficking Act in 2018 made it much more difficult to screen clients for escorting. And in-person sex work is still criminalized in the U.S. and many other countries.

“Criminal background checks will not stop potential predators from using the platform (OF), it will only harm individuals who are already at higher risk. Sex work has always had a low barrier to entry, making it the most accessible career for people from all walks of life,” performer GoAskAlex, who’s on OnlyFans and other platforms, told me in an email. “Removing creators with criminal/arrest records will only push more vulnerable people (overwhelmingly, women) to street based/survival sex work. Adding more barriers to what is arguably the safest form of sex work (online sex work) will push sex industry workers to less and less safe options.” 

Jessica Starling, who also creates adult content on OnlyFans, told me in a call that their first thought was that if someone using OnlyFans has a prostitution charge, they might not be able to use the platform. “If they're trying to transition to online work, they won’t be able to do that anymore,” they said. “And the second thing I thought was that it's just invasive and overreaching... And then I looked up the company, and I'm like, ‘Oh, wow, this is really bad.’”

Checkr is reportedly used by Uber, Instacart, Shipt, Postmates, and Lyft, and lists many more companies, like Dominos and Doordash, on its site as clients. The company has been sued hundreds of times over alleged violations of the Fair Credit Reporting Act and other consumer credit complaints. The Fair Credit Reporting Act says that companies providing information to consumer reporting agencies are legally obligated to investigate disputed information. And a lot of people dispute the information Checkr and Inflection provide on them, claiming mixed-up names, acquittals, and decades-old misdemeanors or traffic tickets prevented them from accessing platforms that use background checking services.

Checkr regularly acquires other background checking and age verification companies, and acquired a background check company called Inflection in 2022. At the time, I found more than a dozen lawsuits against Inflection alone in a three-year span, many of them from people who learned about the allegedly inaccurate reports Inflection kept about them only after Airbnb banned them, claiming they had failed checks.


“Sex workers face discrimination when leaving the sex trade, especially those who have been face-out and are identifiable in the online world. Facial recognition technology has advanced to a point where just about anyone can ascertain your identity from a single picture,” Alex said. “Leaving the online sex trade is not as easy as it once was, and anything you've done online will follow you for a lifetime. Creators who are forced to leave the platform will find that safe and stable alternatives are far and few between.”

Last month, Pornhub announced that it would start performing background checks on existing content partners—which primarily include studios—next year. "To further protect our creators and users, all new applicants must now complete a criminal background check during onboarding," the platform announced in a newsletter to partners, as reported by AVN.

Alex said she believes background checks in the porn industry could be beneficial, under very specific circumstances. “I do not think that someone with egregious history of sexual violence should be allowed to work in the sex trade in any capacity—similarly, a person convicted of hurting children should be not able to work with children—so if the criminal record checks were searching specifically for sex based offences I could see the benefit, but that doesn't appear to be the case (to my knowledge). What's to stop OnlyFans from deactivating someone's account due to a shoplifting offense?” she said. “I'd like to know more about what they're searching for with these background checks.”

Even with third-party companies like Checkr doing the work, as is the case with third-party age verification that’s swept the U.S. and targeted the porn industry, increased data means increased risk of it being leaked or hacked. Last year, a background check company called National Public Data claimed it was breached by hackers who got the confidential data of 2.9 billion people. The unencrypted data was then sold on the dark web.


“It’s dangerous for anyone, but it's especially dangerous for us [adult creators] because we're more vulnerable anyway. Especially when you're online, you're hypervisible,” Starling said. “It doesn't protect anyone except OnlyFans themselves, the company.” 

OnlyFans became the household name in independent porn because of the work of its adult content creators. Starling mentioned that because the platform has dominated the market, it’s difficult to just go to another platform if creators don’t want to be subjected to background checks. “We're put in a position where we have very limited power," they said. "So when a platform decides to do something like this, we’re kind of screwed, right?” 

Earlier this year, OnlyFans owner Fenix International Ltd reportedly entered talks to sell the company to an investor group at a valuation of around $8 billion.

Ukraine Is Jamming Russia’s ‘Superweapon’ With a Song

20 November 2025 at 10:11

The Ukrainian Army is knocking a once-hyped Russian superweapon out of the sky by jamming it with a song and tricking it into thinking it’s in Lima, Peru. The Kremlin once called its Kh-47M2 Kinzhal ballistic missiles “invincible.” Joe Biden said the missile was “almost impossible to stop.” Now Ukrainian electronic warfare experts say they can counter the Kinzhal with some music and a re-direction order.

As winter begins in Ukraine, Russia has ramped up attacks on power and water infrastructure using the hypersonic Kinzhal missile. Russia has come to rely on massive long-range barrages that include drones and missiles. An overnight attack in early October included 496 drones and 53 missiles, including the Kinzhal. Another attack at the end of October involved more than 700 mixed missiles and drones, according to the Ukrainian Air Force.

“Only one type of system in Ukraine was able to intercept those kinds of missiles. It was the Patriot system, which the United States provided to Ukraine. But, because of the limits of those systems and the shortage of ammunition, Ukraine defense are unable to intercept most of those Kinzhals,” a member of Night Watch—a Ukrainian electronic warfare team—told 404 Media. The representative from Night Watch spoke to me on the condition of anonymity to discuss war tactics.

Kinzhals and other guided munitions navigate by communicating with Russian satellites that are part of the GLONASS system, a GPS-style navigation network. Night Watch uses a jamming system called Lima EW to generate a disruption field that prevents anything in the area from communicating with a satellite. Many traditional jamming systems work by blasting receivers on munitions and aircraft with radio noise. Lima does that, but also sends along a digital signal and spoofs navigation signals. It “hacks” the receiver it's communicating with to throw it off course.

Night Watch shared pictures of the downed Kinzhals with 404 Media that showed a missile with a controlled reception pattern antenna (CRPA), an active antenna that’s meant to resist jamming and spoofing. “We discovered that this missile had pretty old type of technology,” Night Watch said. “They had the same type of receivers as old Soviet missiles used to have. So there is nothing special, there is nothing new in those types of missiles.”

Night Watch told 404 Media that it used Lima to take down 19 Kinzhals in the past two weeks. First, it replaces the missile’s satellite navigation signals with the Ukrainian song “Our Father Is Bandera.” 

A downed Kinzhal. Night Watch photo.

Any digital noise or random signal would work to jam the navigation system, but Night Watch wanted to use the song because they think it’s funny. “We just send a song…we just make it into binary code, you know, like 010101, and just send it to the Russian navigation system,” Night Watch said. “It’s just kind of a joke. [Bandera] is a Ukrainian nationalist and Russia tries to use this person in their propaganda to say all Ukrainians are Nazis. They always try to scare the Russian people that Ukrainians are, culturally, all the same as Bandera.”
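Turning a song into “binary code” is just ordinary digital encoding; a toy sketch of the idea in Python, with a placeholder string standing in for the actual audio payload Night Watch transmits:

```python
# Any file, including an audio track, is ultimately a byte stream, and each
# byte can be written out as eight bits before being modulated onto a radio
# carrier. The payload here is a stand-in, not the real transmitted signal.
song_bytes = "Our Father Is Bandera".encode("utf-8")

# Render every byte as its 8-bit binary form: the "010101" Night Watch jokes about.
bitstream = "".join(f"{byte:08b}" for byte in song_bytes)

print(bitstream[:16])  # first two bytes ("Ou") as bits: 0100111101110101
```

To a GLONASS receiver expecting structured navigation frames, such a bitstream is indistinguishable from noise, which is why any payload would work equally well for jamming.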

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Once the song hits, Night Watch uses Lima to spoof a navigation signal to the missiles and make them think they’re in Lima, Peru. Once the missile’s confused about its location, it attempts to change direction. These missiles are fast—launched from a MiG-31 they can hit speeds of up to Mach 5.7 or more than 4,000 miles per hour—and an object moving that fast doesn’t fare well with sudden changes of direction.

“The airframe cannot withstand the excessive stress and the missile naturally fails,” Night Watch said. “When the Kinzhal missile tried to quickly change navigation, the fuselage of this missile was unable to handle the speed…and, yeah, it was just cut into two parts…the biggest advantage of those missiles, speed, was used against them. So that’s why we have intercepted 19 missiles for the last two weeks.”

Electronics in a downed Kinzhal. Night Watch photo.

Night Watch told 404 Media that Russia is attempting to defeat the Lima system by loading the missiles with more of the old tech. The goal seems to be to use the different receivers to hop frequencies and avoid Lima’s signal. 

“What is Russia trying to do? Increase the amount of receivers on those missiles. They used to have eight receivers and right now they increase it up to 12, but it will not help,” Night Watch said. “The last one we intercepted, they already used 16 receivers. It’s pretty useless, that type of modification.” 

According to Night Watch, countering Lima by increasing the number of receivers on the missile is a profound misunderstanding of its tech. “They think we make the attack on each receiver and as soon as one receiver attacks, they try to swap in another receiver and get a signal from another satellite. But when the missile enters the range of our system, we cover all types of receivers,” they said. “It’s physically impossible to connect with another satellite, but they think that it’s possible. That’s why they started with four receivers and right now it’s 16. I guess in the future we’ll see 24, but it’s pretty useless.”

Meta: from AI advertising to ad fraud

20 November 2025 at 01:00

“In the near future, we want every business to be able to tell us its objective, like selling something or acquiring a new customer, how much it is willing to pay for each result, and to connect its bank account; we will take care of the rest,” Zuckerberg declared at the company’s annual shareholders’ meeting (see our article, “L’IA, un nouvel internet… sans condition”). We are now there, Jason Koebler explains for 404 Media, pointing to Ticketmaster’s use of generative AI to personalize its advertising campaigns, with AI used both for targeting and for generating the ads. “Less money spent on creative means more ad budget and therefore a wider variety of ads,” Koebler notes. “Companies can flood social media with millions of easy-to-make AI ad variants, put their ad budget behind the best-performing versions, and let the targeting algorithms do the rest. In this case, AI is a scaling strategy. No need to sink enormous amounts of time, money, and staff into polishing ad copy and designing ads that are relevant, clever, funny, charming, or catchy. Just publish tons of sloppy versions, and most people will only ever see the ones that perform well.”

A Reuters report has just revealed that “10% of Meta’s gross revenue comes from ads for fraudulent products and scams”: “15 billion fraudulent ads are shown every day, generating $7 billion in revenue per year.” But rather than refusing these fraudulent ads, Meta does not close the accounts behind them; it charges them surcharges instead, making them even more profitable than they already are. A third of scams in the United States reportedly pass through Facebook (in the United Kingdom, the figure reportedly reaches 54% of payment-scam losses). While Meta has put measures in place to reduce fraud on its platform, the company estimates that the maximum amount in fines it will ultimately have to pay worldwide is $1 billion, while it takes in $7 billion… It is easy to see why Meta has no incentive to be diligent, as an internal note cited by Reuters explains very clearly, Cory Doctorow observes wryly. Above all, we learn that the anti-fraud team is bound by an internal quota: “it is only allowed to take actions that would reduce ad revenue by 0.15% (i.e., $135 million).” Moderation and anti-fraud departments now look like the customer-service departments we discussed recently: a budget line with targets and constraints!
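Taking the figures quoted from Reuters at face value, the quota also reveals the scale of the business involved; a back-of-the-envelope check (the revenue inference is mine, not a number from the report):

```python
# The anti-fraud team may only take actions costing up to 0.15% of ad
# revenue, stated as $135 million; inverting that gives the implied base.
quota_dollars = 135e6
quota_share = 0.0015                      # 0.15%

implied_ad_revenue = quota_dollars / quota_share
print(f"Implied annual ad revenue: ${implied_ad_revenue / 1e9:.0f}B")  # $90B

# Scam-ad revenue vs. Meta's own estimate of maximum worldwide fines.
fraud_revenue, max_fines = 7e9, 1e9
print(f"Expected net gain from fraud: ${(fraud_revenue - max_fines) / 1e9:.0f}B")  # $6B
```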

Worse, Doctorow explains in his reading of Reuters: while the security teams received around 10,000 valid fraud reports per week, they wrongly ignored or dismissed 96% of them. The problem is that when Meta closes a case without action or refuses to act on valid fraud reports, not only do users lose heavily, but identity theft also lets scammers pick the pockets of the victims’ contacts, extorting money from them, sometimes a great deal of money.

Meta calls this type of scam, in which scammers impersonate users, “organic,” distinguishing it from fraudulent ads, in which scammers pay to reach their potential victims. Meta estimates that it hosts 22 billion “organic” fraudulent messages per day. These organic scams are in fact often permitted by Meta’s terms of service: when the Singapore police filed a complaint with Meta about 146 fraudulent posts, the company concluded that only 23% of them violated its terms. All the others were allowed. These tolerated frauds included enticing offers promising 80% discounts on major fashion brands, fake concert tickets, and fake job listings, all permitted under Meta’s own policies. Internal notes reviewed by Reuters show that Meta’s anti-fraud teams grew increasingly exasperated at finding that these scams were not prohibited on the platform. One Meta employee even wrote to management about plainly visible scams: “Current policies would not flag this account!” But even when a fraudster does violate Meta’s terms of service, the company sits on its hands. Under Meta’s own policies, a “High Value Account” (an account spending large sums on fraudulent ads) must rack up more than 500 “strikes” (that is, confirmed policy violations) before being suspended. Reuters found that 40% of the most notorious scammers were still active on the platform six months after being flagged as the company’s most prolific fraudsters.

This flagrant contempt for Meta’s users is not the product of some newfound sadistic streak in its leadership. As Sarah Wynn-Williams’s book Careless People (Flatiron Books, 2025; Des gens peu recommandables, Buchet-Chastel, 2025) demonstrates in detail, the company has always been run by unscrupulous people. What has changed in the past few years, Doctorow hammers home, is that they have grasped that they can make money by scamming you.

Yes, reducing fraud has a cost! “As long as we have such a permissive legislative environment, one that fines them only a billion dollars when they have pocketed seven billion from our misfortunes,” we will not get very far.

Joe Rogan Subreddit Bans 'Political Posts' But Still Wants 'Free Speech'

19 November 2025 at 12:17

In a move that has confused and angered its users, the r/JoeRogan subreddit has banned all posts about politics. Adding to the confusion, the subreddit’s mods have said that political comments are still allowed, just not posts. “After careful consideration, internal discussion and tons of external feedback we have collectively decided that r/JoeRogan is not the place for politics anymore,” moderator OutdoorRink said in a post announcing the change today.

The new policy has not gone over well. For the last 10 years, the Joe Rogan Experience has been a central part of American political life. He interviews entertainers, yes, but also politicians and powerful businessmen. He had Donald Trump on the show and endorsed his bid for President. During the Covid and lockdown era, Rogan cast himself as an opposition figure to the heavy regulatory hand of the state. In a recent episode, Rogan’s guest was another podcaster, Adam Carolla, and the two spent hours talking about Covid lockdowns, Gavin Newsom, and specific environmental laws and building codes they argue are preventing Los Angeles from rebuilding after the Palisades fire.

To hear the mods tell it, the subreddit is banning politics out of concern for Rogan’s listeners. “For too long this subreddit has been overrun by users who are pushing a political agenda, both left and right, and that stops today,” the post announcing the ban said. “It is not lost on us that Joe has become increasingly political in recent years and that his endorsement of Trump may have helped get him elected. That said, we are not equipped to properly moderate, arbitrate and curate political posts…while also promoting free speech.” 

To be fair, as Rogan’s popularity exploded over the years, and as his politics have shifted to the right, many Reddit users have turned to r/JoeRogan to complain about the direction Rogan and his podcast have taken. These posts are often antagonistic to Rogan and his fans, but are still “on-topic.”

Over the past few months, the moderator who announced the ban has posted several times about politics on r/JoeRogan. On November 3, they said that changes were coming to the moderation philosophy of the sub. “In the past few years, a significant group of users have been taking advantage of our ‘anything goes’ free speech policy,” they said. “This is not a political subreddit. Obviously Joe has dipped his toes in the political arena so we have allowed politics to become a component of the daily content here. That said, I think most of you will agree that it has gone too far and has attracted people who come here solely to push their political agenda with little interest in Rogan or his show.” A few days later the mod posted a link to a CBC investigation into MMA gym owners with neo-Nazi ties, a story only connected to Rogan by his interest in MMA and his work as a UFC commentator.

r/JoeRogan’s users see the new “no political posts” policy as hypocrisy. And a lot of them think it has everything to do with recent revelations about Jeffrey Epstein. The connections between Epstein, Trump, and various other Rogan guests have been building for years. A recent, poorly formatted dump of 200,000 Epstein files contained multiple references to Trump, and Congress is set to release more. 

“Random new mod appears and want to ruin this sub on a pathetic power trip. Transparently an attempt to cover for the pedophiles in power that Joe endorsed and supports. Not going to work,” one commenter said under the original post announcing the new ban.

“Perfectly timed around the Epstein files due to be released as well. So much for being free speech warriors eh space chimps?” said another.

“Talking politics was great when it was all dunking on trans people and brown people but now that people have to defend pedophiles that banned hemp it's not so fun anymore,” a third said.

You can see the remnants of discussions from before the politics ban lingering on r/JoeRogan. There are, of course, clips from the show and discussions of its guests but there’s also a lot of Epstein memes, posts about Epstein news, and fans questioning why Rogan hasn’t spoken out about Epstein recently after talking about it on the podcast for years.

Multiple guests Rogan has hosted on the show have turned up in the Epstein files, chief among them Donald Trump. The House GOP slipped a ban on hemp into the bill to re-open the government, a move that will close a loophole that’s allowed people to legally smoke weed in states like Texas. These are not the kinds of things the chill apes of Rogan’s fandom wanted.

“I think we all know what eventually happened to Joe and his podcast. The slow infiltration of right wing grifters coupled with Covid, it very much did change him. And I saw firsthand how that trickled down into the comedy community, especially one where he was instrumental in helping to rebuild. Instead of it being a platform to share his interests and eccentricities, it became a place to share his grievances and fears….how can we not expect to be allowed to talk about this?” user GreppMichaels said. “Do people really think this sub can go back to silly light chatter about aliens or conspiracies? Joe did this, how do the mods think we can pretend otherwise?”

Massive Leak Shows Erotic Chatbot Users Turned Women’s Yearbook Pictures Into AI Porn

19 November 2025 at 10:20

An erotic roleplay chatbot and AI image creation platform called Secret Desires left millions of user-uploaded photos exposed and available to the public. The databases included nearly two million photos and videos, including many photos of completely random people with very little digital footprint. 

The exposed data shows how many people use AI roleplay apps that allow face-swapping features: to create nonconsensual sexual imagery of everyone, from the most famous entertainers in the world to women who are not public figures in any way. In addition to the real photo inputs, the exposed data includes AI-generated outputs, which are mostly sexual and often incredibly graphic. Unlike “nudify” apps that generate nude images of real people, these tools put people into AI-generated videos of hardcore sexual scenarios.

Podcast: The Epstein Email Dump Is a Mess

19 November 2025 at 08:57

We start this week with a rant from Jason about how the latest dump of Epstein emails were released. It would be a lot easier to cover them if they were published differently! After the break, we talk about Joseph’s piece about a contractor hiring essentially randos off LinkedIn to physically track immigrants for $300. In the subscribers-only section, Sam tells us about a new adult industry code of conduct that has been a long time coming.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.

On the trail of algorithms… opacity is still the rule

19 November 2025 at 01:00

The Observatoire des algorithmes publics has updated its inventory, expanding its directory of algorithms used in the public sector from 70 to 120. The findings made last year, at the initiative’s launch, remain the same: administrative opacity persists, and system evaluations and budget information are patchy. And while AI is making its way into government, there too the information is very often nonexistent. 

Note that since September, Odap has also published numerous interviews with researchers on these issues. All of these interviews, “on the trail of algorithms,” are well worth reading. 

We particularly recommend the one with sociologist Claire Vivès on auditing at France Travail, which highlights three algorithmic tools developed there: one to decide which beneficiaries to audit, another to determine whether beneficiaries are actively looking for work, and a tool to help set the sanction when a job search falls short. “In 2023, more than half of the audits targeted job seekers registered in so-called shortage occupations. This targeting raises questions of territorial and social equity.” But here again, how these tools work remains opaque to users and agents alike, and even more so to citizens, since their workings are not documented. At a time when the Senate has just proposed new measures for monitoring beneficiaries, such as access to their phone records, as revealed by L’Humanité, coercive surveillance of claimants is advancing much faster than the transparency of administrative action. 

The sociologist stresses how poorly these tools fit into agents’ day-to-day work, and above all that they tend to add to the workload rather than lighten it. “Reforms are designed at a distance, with no real account taken of the work done on the ground. Algorithms only add another layer to this problem, deepening the sense of lost meaning, sometimes even of contempt, or at least of disregard for their expertise, on the part of decision-makers. In this context, one may wonder whether the acceptance of algorithms, where it exists, is not above all a form of resignation.” 

“One of the main difficulties of our study actually stems from a structural opacity in the administration that goes well beyond the question of algorithms. France Travail, like other administrative institutions, remains a very hard place to get into. Authorizations are rare, fieldwork opportunities tightly controlled, and refusals frequent, for researchers and journalists alike. Our requests to observe the work done on the audit platforms, for example, never went anywhere. And in the field, the agents themselves hesitate to talk: the duty of discretion is often interpreted too broadly and gets in the way of speaking out. On the algorithms specifically, information is very fragmentary. France Travail’s central management publishes almost nothing on the subject, and the official website says nothing about the tools in use. That in itself raises a question: why are internal documents, which are those of a public administration, not made freely available? In any case, all of this profoundly shapes how we conduct the study, the pace of the work, and the kinds of material we can draw on.” 

This functional opacity certainly explains the direction the next phase of the study is taking. “In the next stages of the study, we will in particular try to work on the design side of these tools: who commissions them? Who configures them? With what objectives? Who evaluates them? These are points on which we have few answers so far.” In any case, we cannot wait to see what comes of it.

Scientists Discover the Origin of Kissing — And It’s Not Human

18 November 2025 at 19:01
🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

Kissing is one of humanity’s most cherished rituals—just think of the sheer variety of smooches, from the “wedding kiss” to the “kiss of death.” Now, scientists have discovered that the origins of this behavior, which is widespread among many primates, likely date back at least 21 million years, according to a study published on Tuesday in the journal Evolution and Human Behavior.

In other words, our early primate relatives were sitting in a tree, K-I-S-S-I-N-G, in the early Miocene epoch. Moreover, the deep evolutionary roots of kissing suggest that Neanderthals likely smooched each other, and probably our human ancestors as well. The new study is the first attempt to reconstruct the evolutionary timeline of kissing by analyzing a wealth of observations about this behavior in modern primates and other animals. 

“It is kind of baffling to me that people haven't looked at this from an evolutionary perspective before,” said Matilda Brindle, an evolutionary biologist at the University of Oxford who led the study, in a call with 404 Media. “There have been some people who have put ideas out there, but no one's done it in a systematic way.”

“Kissing doesn't occur in all human cultures, but in those that it does, it's really important,” she added. “That's why we thought it was really exciting to study.”

A collage of mouth-to-mouth contact across species. Image: Brindle, Matilda et al.

The ritual of the “first kiss” is a common romantic trope, but tracking down the “first kiss” in an evolutionary sense is no easy feat. For starters, the adaptive benefits of kissing have long eluded researchers. Mouth-to-mouth contact raises the odds of oral disease transfer, and it’s not at all clear what advantages puckering up confers to make it worth the trouble.

“Kissing is kind of risky,” Brindle said. “You're getting very close to another animal's face. There could be diseases. To me, that suggests that it is important. There must be some benefits to this behavior.”

Some common explanations for sex-related kissing include mate evaluation—bad breath or other red flags during a smoochfest might affect the decision to move on to copulation. Kissing may also stimulate sexual receptiveness and perhaps boost the odds of fertilization. In platonic contexts, kissing could serve a social purpose, similar to grooming, of solidifying bonds between parents and offspring, or even to smooth over conflicts between group members. 

“We know that chimpanzees, when they've had a bit of a bust up, will often go and kiss each other and make up,” Brindle said. “That might be really useful for navigating social relationships. Primates are obviously an incredibly social group of animals, and so this could be just a social lubricant for them.”

Though most of us have probably never considered the question, Brindle and her colleagues first had to ask: what is a kiss? They made a point to exclude forms of oral contact that don’t fall into the traditional idea of kissing as a prosocial behavior. For example, lots of animals share food directly through mouth-to-mouth contact, such as regurgitation from a parent to offspring. In addition, some animals display antagonistic behavior through mouth-to-mouth contact, such as “kiss-fighting” behavior seen in some fish. 

The team ultimately defined kissing as “a non-agonistic interaction involving directed, intraspecific, oral-oral contact with some movement of the lips/mouthparts and no food transfer.” Many animals engage in kissing under these terms—from insects, to birds, to mammals—but the researchers were most interested in primates.
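That working definition decomposes into five checkable clauses; a toy encoding in Python (the field names are mine, not the authors’):

```python
from dataclasses import dataclass

@dataclass
class OralContact:
    agonistic: bool       # antagonistic, e.g. "kiss-fighting" in some fish
    same_species: bool    # intraspecific
    oral_to_oral: bool    # directed mouth-to-mouth contact
    mouth_movement: bool  # some movement of the lips/mouthparts
    food_transfer: bool   # e.g. a parent regurgitating food to offspring

def is_kiss(c: OralContact) -> bool:
    """Apply the study's definition clause by clause."""
    return (not c.agonistic and c.same_species and c.oral_to_oral
            and c.mouth_movement and not c.food_transfer)

# Regurgitation-feeding fails the definition; a chimp "make-up" kiss passes.
feeding = OralContact(False, True, True, True, True)
makeup_kiss = OralContact(False, True, True, True, False)
print(is_kiss(feeding), is_kiss(makeup_kiss))  # False True
```

Writing the definition this way makes clear why mouth-to-mouth feeding and kiss-fighting both fall outside it: each fails exactly one clause.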

To that end, they gathered observations of kissing across primate species and fed the data into models that analyzed the timeline of the behavior through the evolutionary relationships between species. The basic idea is that if humans, bonobos, and chimpanzees all kiss (which they do) then the common ancestor of these species likely kissed as well. 
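The inference sketched above—if all descendants kiss, the ancestor likely did—is ancestral state reconstruction. The study fit statistical models to many species; the simplest version of the idea is Fitch parsimony on a toy tree (the tree and scores here are purely illustrative):

```python
# Tips are scored 1 (kissing reported) or 0; internal nodes are nested tuples.
tree = ("human", ("bonobo", "chimpanzee"))
observed = {"human": {1}, "bonobo": {1}, "chimpanzee": {1}}

def fitch(node):
    """Fitch's algorithm: intersect child state sets, or union on conflict."""
    if isinstance(node, str):            # tip: return its observed state set
        return observed[node]
    left, right = (fitch(child) for child in node)
    return (left & right) or (left | right)

print(fitch(tree))  # {1}: the common ancestor is inferred to have kissed
```

If one tip were scored 0 instead, the root set would become {0, 1}, i.e., the ancestral state would be ambiguous under parsimony, which is why the study needed observations from many species to pin down a date.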

The results revealed that the evolutionary “first kiss” likely occurred among primates at least 21 million years ago. Since Neanderthals and our own species, Homo sapiens, are known to have interbred—plus they also shared oral microbes—the team speculates that Neanderthals and our own human ancestors might have kissed as well.   

While the study provides a foundation for the origins of kissing, Brindle said there is not yet enough empirical data to test out different hypotheses about its benefits—or to explain why it is important in some species and cultures, but not others. To that end, she hopes other scientists will be inspired to report more observations about kissing in wild and captive animal populations.

“I was actually surprised that there were so few data out there,” Brindle said. “I thought that this would be way better documented when I started this study. What I would really love is, for people who see this behavior, to note it down, report it, so that we can actually start collecting more contextual information: Is this a romantic or a platonic kiss? Who were the actors in it? Was it an adult male and an adult female, or a mother and offspring? Were they eating at the time? Was there copulation before or after the kiss?”

“These sorts of questions will enable us to pick apart these potential adaptive hypotheses,” she concluded.

🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

HOPE Hacking Conference Banned From University Venue Over Apparent ‘Anti-Police Agenda’

18 November 2025 at 14:32

The legendary hacker conference Hackers on Planet Earth (HOPE) says that it has been “banned” from St. John’s University, the venue where it has held the last several HOPE conferences, because someone told the university the conference had an “anti-police agenda.”

HOPE was held at St. John’s University in 2022, 2024, and 2025, and was going to be held there in 2026, as well. The conference has been running at various venues over the last 31 years, and has become well-known as one of the better hacking and security research conferences in the world. Tuesday, the conference told members of its mailing list that it had “received some disturbing news,” and that “we have been told that ‘materials and messaging’ at our most recent conference ‘were not in alignment with the mission, values, and reputation of St. John’s University’ and that we would no longer be able to host our events there.” 

The conference said that after this year’s conference, they had received “universal praise” from St. John’s staff, and said they were “caught by surprise” by the announcement. 

“What we're told - and what we find rather hard to believe - is that all of this came about because a single person thought we were promoting an anti-police agenda,” the email said. “They had spotted pamphlets on a table which an attendee had apparently brought to HOPE that espoused that view. Instead of bringing this to our attention, they went to the president's office at St. John's after the conference had ended. That office held an investigation which we had no knowledge of and reached its decision earlier this month. The lack of due process on its own is extremely disturbing.”

“The intent of the person behind this appears clear: shut down events like ours and make no attempt to actually communicate or resolve the issue,” the email continued. “If it wasn't this pamphlet, it would have been something else. In this day and age where academic institutions live in fear of offending the same authorities we've been challenging for decades, this isn't entirely surprising. It is, however, greatly disappointing.”

St. John’s University did not immediately respond to a request for comment. Hacking and security conferences in general have a long history of being surveilled by or losing their venues. For example, attendees of the DEF CON hacking conference have reported being surveilled and having their rooms searched; last year, some casinos in Las Vegas made it clear that DEF CON attendees were not welcome. And academic institutions have been vigorously attacked by the Trump administration over the last few months over the courses they teach, the research they fund, and the events they hold, though we currently do not know the specifics of why St. John’s made this decision. 

It is not clear what pamphlets HOPE is referencing, and the conference did not immediately respond to a request for comment, but the conference noted that St. John’s could have made up any pretext for banning it. It is worth mentioning that Joshua Aaron, the creator of the ICEBlock ICE tracking app, presented at HOPE this year. ICEBlock has since been removed from the Apple App Store and the Google Play Store after the stores were pressured by the Trump administration. 

“Our content has always been somewhat edgy and we take pride in challenging policies we see as unfair, exposing security weaknesses, standing up for individual privacy rights, and defending freedom of speech,” HOPE wrote in the email. The conference said that it has not yet decided what it will do next year, but that it may look for another venue, or that it might “take a year off and try to build something bigger.” 

“There will be many people who will say this is what we get for being too outspoken and for giving a platform to controversial people and ideas. But it's this spirit that defines who we are; it's driven all 16 of our past conferences. There are also those who thought it was foolish to ever expect a religious institution to understand and work with us,” the conference added. “We are not changing who we are and what we stand for any more than we'd expect others to. We have high standards for our speakers, presenters, and staff. We value inclusivity and we have never tolerated hate, abuse, or harassment towards anyone. This should not be news, as HOPE has been around for a while and is well known for its uniqueness, spirit, and positivity.” 


ACLU and EFF Sue a City Blanketed With Flock Surveillance Cameras

18 November 2025 at 14:31

Lawyers from the American Civil Liberties Union (ACLU) and Electronic Frontier Foundation (EFF) sued the city of San Jose, California over its deployment of Flock’s license plate-reading surveillance cameras, claiming that the city’s nearly 500 cameras create a pervasive database of residents’ movements in a surveillance network that is essentially impossible to avoid. 

The lawsuit was filed on behalf of the Services, Immigrant Rights & Education Network and Council on American-Islamic Relations, California, and claims that the surveillance is a violation of California’s constitution and its privacy laws. The lawsuit seeks to require police to get a warrant in order to search Flock’s license plate system. The lawsuit is one of the highest profile cases challenging Flock; a similar lawsuit in Norfolk, Virginia seeks to get Flock’s network shut down in that city altogether.

“San Jose’s ALPR [automatic license plate reader] program stands apart in its invasiveness,” ACLU of Northern California and EFF lawyers wrote in the lawsuit. “While many California agencies run ALPR systems, few retain the locations of drivers for an entire year like San Jose. Further, it is difficult for most residents of San Jose to get to work, pick up their kids, or obtain medical care without driving, and the City has blanketed its roads with nearly 500 ALPRs.”

The lawsuit argues that San Jose’s Flock cameras “are an invasive mass surveillance technology” that “collect[s] driver locations en masse.”

“Most drivers are unaware that San Jose’s Police Department is tracking their locations and do not know all that their saved location data can reveal about their private lives and activities,” it adds. The city of San Jose currently has at least 474 ALPR cameras, up from 149 at the end of 2023; according to data from the city, more than 2.6 million vehicles were tracked using Flock in the month of October alone. The lawsuit states that Flock ALPRs are stationed all over the city, including “around highly sensitive locations including clinics, immigration centers, and places of worship. For example, three ALPR cameras are positioned on the roads directly outside an immigration law firm.” 

Andrew Crocker, surveillance litigation director for the EFF, told 404 Media in a phone call that “it’s fair to say that anyone driving in San Jose is likely to have their license plates captured many times a day. That pervasiveness is important.”

DeFlock's map of San Jose's ALPRs
A zoomed-in look at San Jose

A search of DeFlock, a crowdsourced map of ALPR deployments around the country, shows hundreds of cameras in San Jose spaced essentially every few blocks around the city. The map is not exhaustive.

The lawsuit argues that warrantless searches of these cameras are illegal under the California constitution’s search and seizure clause, which Crocker said “has been interpreted to be even stronger than the Fourth Amendment,” as well as other California privacy laws. The case is part of a broader backlash against Flock as it expands around the United States. 404 Media’s reporting has shown that the company collects millions of records from around the country, and that it has made its national database of car locations available to local cops who have in turn worked with ICE. Some of those searches have violated California and Illinois law, and have led to reforms from the company. Crocker said that many of these problems will be solved if police simply need to get a warrant to search the system.

“Our legal theory and the remedy we’re seeking is quite simple. We think they need a warrant to search these databases,” he said. “The warrant requirement is massive and should help in terms of preventing these searches because they will have to be approved by a judge.” The case in Norfolk is ongoing. San Jose Police Department and Flock did not immediately respond to a request for comment. 


Airlines Will Shut Down Program That Sold Your Flight Records to Government

18 November 2025 at 13:43

Airlines Reporting Corporation (ARC), a data broker owned by the U.S.’s major airlines, will shut down a program in which it sold access to hundreds of millions of flight records to the government and let agencies track peoples’ movements without a warrant, according to a letter from ARC shared with 404 Media.

ARC says it informed lawmakers and customers about the decision earlier this month. The move comes after intense pressure from lawmakers and 404 Media’s months-long reporting about ARC’s data selling practices. The news also comes after 404 Media reported on Tuesday that the IRS had searched the massive database of Americans’ flight data without a warrant.

“As part of ARC’s programmatic review of its commercial portfolio, we have previously determined that TIP is no longer aligned with ARC’s core goals of serving the travel industry,” the letter, written by ARC President and CEO Lauri Reishus, reads. TIP is the Travel Intelligence Program. As part of that, ARC sold access to a massive database of peoples’ flights, showing who travelled where, and when, and what credit card they used. 

The ARC letter.

“All TIP customers, including the government agencies referenced in your letter, were notified on November 12, 2025, that TIP is sunsetting this year,” Reishus continued. Reishus was responding to a letter sent to airline executives earlier on Tuesday by Senator Ron Wyden, Congressman Andy Biggs, Chair of the Congressional Hispanic Caucus Adriano Espaillat, and Senator Cynthia Lummis. That letter revealed the IRS’s warrantless use of ARC’s data and urged the airlines to stop the ARC program. ARC says it notified Espaillat's office on November 14.

ARC is co-owned by United, American, Delta, Southwest, JetBlue, Alaska, Lufthansa, Air France, and Air Canada. The data broker acts as a bridge between airlines and travel agencies. Whenever someone books a flight through one of more than 12,800 travel agencies, such as Expedia, Kayak, or Priceline, ARC receives information about that booking. It then packages much of that data and sells it to the government, which can search it by name, credit card, and more. 404 Media has reported that ARC’s customers include the FBI, multiple components of the Department of Homeland Security, ATF, the SEC, TSA, and the State Department.  

Espaillat told 404 Media in a statement “this is what we do. This is how we’re fighting back. Other industry groups in the private sector should follow suit. They should not be in cahoots with ICE, especially in ways [that] may be illegal.”

Wyden said in a statement “it shouldn't have taken pressure from Congress for the airlines to finally shut down the sale of their customers’ travel data to government agencies by ARC, but better late than never. I hope other industries will see that selling off their customers' data to the government and anyone with a checkbook is bad for business and follow suit.”

“Because ARC only has data on tickets booked through travel agencies, government agencies seeking information about Americans who book tickets directly with an airline must issue a subpoena or obtain a court order to obtain those records. But ARC’s data sales still enable government agencies to search through a database containing 50% of all tickets booked without seeking approval from a judge,” the letter from the lawmakers reads.

Update: this piece has been updated to include statements from CHC Chair Espaillat and Senator Wyden.


IRS Accessed Massive Database of Americans’ Flights Without a Warrant

18 November 2025 at 11:00

The IRS accessed a database of hundreds of millions of travel records, which show when and where a specific person flew and the credit card they used, without obtaining a warrant, according to a letter signed by a bipartisan group of lawmakers and shared with 404 Media. The country’s major airlines, including Delta, United Airlines, American Airlines, and Southwest, funnel customer records to a data broker they co-own called the Airlines Reporting Corporation (ARC), which then sells access to peoples’ travel data to government agencies.

The IRS case in the letter is the clearest example yet of how agencies are searching the massive trove of travel data without a search warrant, court order, or similar legal mechanism. Instead, because the data is being sold commercially, agencies are able to simply buy access. In the letter addressed to nine major airlines, the lawmakers urge them to shut down the data selling program. Update: after this piece was published, ARC said it already planned to shut down the program. You can read more here.


Contractor Recruiting People on LinkedIn to Physically Track Immigrants for ICE, Will Pay $300

18 November 2025 at 10:05

A current pilot project aims to pay former law enforcement and military officers to physically track immigrants and verify their addresses to give to ICE, at $300 each. There is no indication that the pilot involves licensed private investigators, and it appears to be open to people who are now essentially members of the general public, 404 Media has learned.

The pilot is a dramatic, and potentially dangerous, escalation in the Trump administration’s mass deportation campaign. People without any official role in government would be tasked with tracking down targets for ICE. It appears to be part of ICE’s broader plan to use bounty hunters or skip tracers to confirm immigrants’ addresses through data and physical surveillance. Some potential candidates for the pilot were recruited on LinkedIn and were told they would be given vehicles to monitor the targets.


Two Weeks of Surveillance Footage From ICE Detention Center ‘Irretrievably Destroyed’

18 November 2025 at 09:58

The Department of Homeland Security claimed in court proceedings that nearly two weeks’ worth of surveillance footage from ICE’s Broadview Detention Center in suburban Chicago has been “irretrievably destroyed” and may not be able to be recovered, according to court records reviewed by 404 Media.

The filing was made as part of a class action lawsuit against the Department of Homeland Security by people being held at Broadview, which has become the site of widespread protests against ICE. The lawsuit says that people detained at the facility are being held in abhorrent, “inhumane” conditions. The complaint describes a facility where detainees are “confined at Broadview inside overcrowded holding cells containing dozens of people at a time. People are forced to attempt to sleep for days or sometimes weeks on plastic chairs or on the filthy concrete floor. They are denied sufficient food and water […] the temperatures are extreme and uncomfortable […] the physical conditions are filthy, with poor sanitation, clogged toilets, and blood, human fluids, and insects in the sinks and the floor […] federal officers who patrol Broadview under Defendants’ authority are abusive and cruel. Putative class members are routinely degraded, mistreated, and humiliated by these officers.” 

As part of discovery in the case, the plaintiffs’ lawyers requested surveillance footage from the facility starting from mid September, which is when ICE stepped up its mass deportation campaign in Chicago. In a status report submitted by lawyers from both the plaintiffs and the Department of Homeland Security, lawyers said that nearly two weeks of footage has been “irretrievably destroyed.”

“Defendants have agreed to produce. Video from September 28, 2025 to October 19, 2025, and also from October 31, 2025 to November, 7 2025,” the filing states. “Defendants have indicated that some video between October 19, 2025 and October 31, 2025 has been irretrievably destroyed and therefore cannot be produced on an expedited basis or at all.” Law & Crime first reported on the filing.

A screenshot from the court filing

The filing adds that the plaintiffs, who are being represented by lawyers from the American Civil Liberties Union of Illinois, the MacArthur Justice Center, and the Eimer Stahl law firm, hired an IT contractor to work with the government “to attempt to work through issues concerning the missing video, including whether any content is able to be retrieved.”

Surveillance footage from inside the detention center would presumably be critical in a case about the alleged abusive treatment of detainees and inhumane living conditions. The filing states that the plaintiffs' attorneys have “communicated to Defendants that they are most concerned with obtaining the available surveillance videos as quickly as possible.”

ICE did not respond to a request for comment from 404 Media. A spokesperson for the ACLU of Illinois told 404 Media “we don’t have any insight on this. Hoping DHS can explain.” 


Artificial general intelligence: the tech industry’s conspiracist delusion

18 November 2025 at 01:00

“The myth of artificial general intelligence looks a lot like a conspiracy theory, and it may be the most important one of our time.” Obsessed with this hypothetical technology, AI companies are relentlessly selling it to us, explains journalist Will Douglas Heaven in Technology Review.

To build a conspiracy theory, he reminds us, you need several ingredients: a framework flexible enough to sustain belief even when things do not unfold as predicted; the promise of a better future that can only come to pass if believers uncover hidden truths; and the hope of being saved from the horrors of this world. Artificial general intelligence (AGI) ticks nearly all of these boxes. And the closer you examine the idea, the more it looks like a conspiracy. Not entirely one, of course, not exactly, Heaven concedes. “But by looking at what AGI has in common with genuine conspiracy theories, I think we can bring the concept into sharper focus and reveal it for what it is: a techno-utopian (or techno-dystopian, take your pick) delusion that has taken hold in deeply rooted beliefs that are hard to dislodge.”

A history of AGI

In his article, Heaven traces the history of the term “artificial general intelligence,” introduced by Ben Goertzel and Shane Legg. When they first discussed it, the idea of an AI able to match or even surpass human capabilities was a joke. But Goertzel turned it into a book of that title (Springer, 2006), presented in the most serious of trappings, then organized a dedicated conference on the subject in 2008 (since become an annual event that continues to this day). When Shane Legg joined DeepMind as a co-founder, he brought the term with him, legitimizing the concept. Goertzel, who was close to Peter Thiel and Eliezer Yudkowsky, discussed the concept with them at length. But while the iconoclastic Ben Goertzel was enthusiastic, the gloomy Yudkowsky was far more pessimistic, seeing the arrival of AGI as a catastrophe. Despite all these efforts, the concept found little resonance at the time and seemed, above all, like pure science fiction. 

It was the publication of Superintelligence by the philosopher Nick Bostrom in 2014 that changed things. Bostrom made Yudkowsky’s specious concepts acceptable. Today, AGI is invoked everywhere, whether to herald the arrival of a new age or to predict the extermination of humanity. 

In his recent, apocalyptic book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (Little, Brown, 2025), co-written with Nate Soares, Yudkowsky piles up extravagant claims in support of a total ban on AGI. A “rambling and shallow” book, as journalist Adam Becker, himself the author of More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity (Hachette, 2025), puts it in his critical review for The Atlantic, one that “tries to make us believe that intelligence is a discrete, measurable concept, and that increasing it is a matter of resources and computing power.” The hypothetical superintelligent AI of the doomsayers “does what every tech startup dreams of: grow exponentially and eliminate its competitors until it achieves an absolute monopoly,” as science fiction writer Ted Chiang has already observed. Superintelligence evokes, more than anything, unbridled capitalism driven by thoroughly neo-reactionary individuals, as Elisabeth Sandifer argued in her book on the American tech far right, and as the books by Thibault Prévost, and by Nastasia Hadjadji and Olivier Tesquet, which we have reviewed before, also show. “In reality, the AI apocalypse that so worries Yudkowsky and Soares is none other than our own world, seen through the distorting prism of a science fiction mirror,” a “simplistic vision of technological salvation,” Becker concludes.

Despite its vacuity, Soares and Yudkowsky’s book is a New York Times bestseller. For Heaven, like all the most powerful conspiracy theories, artificial general intelligence has seeped into public debate and taken root.

The myth of a machine smarter than humans, capable of doing anything, nonetheless goes back to the origins of AI, in Alan Turing as in John McCarthy. “But AGI isn’t a technology, it’s a dream,” Becker asserts. As with many conspiracy theories, an idea as protean as AGI is impossible to refute. Debating AGI is a clash of worldviews, not an exchange of evidence-based arguments, since there can be no evidence about a hypothetical object that has no precise, shared definition. Predictions about the advent of AGI are delivered with the precision of numerologists announcing the end of days. With nothing really at stake, the deadlines get pushed back without conviction. “AGI is always what will happen next time, but its imminent arrival is the truth its adherents share.”

On conspiracism

For Jeremy Cohen, an anthropologist of religion who studies conspiracy theories in tech circles, hidden truth “is a fundamental element of conspiracist thinking.” For Ben Goertzel and AGI’s devotees, skepticism about AGI is just skepticism in general: “Before every major technical breakthrough, from human flight to electric power, hordes of so-called experts would explain to you why it would never happen. In reality, most people only believe what they can see.” If you are not convinced by AGI, its proponents say, you are a naive fool; in this they reverse the burden of proof, even as they are, more than anyone, the useful idiots of the very thing they simultaneously denounce and venerate.

“The idea of giving birth to machine gods is obviously very flattering to the ego,” says philosopher Shannon Vallor of the Edinburgh Futures Institute (see our article on her book, “AI is only a mirror”). “It’s incredibly seductive to think that you yourself are laying the groundwork for that transcendence.” This is another point in common with conspiracy theories. Part of the appeal lies in the desire to find meaning in a chaotic, sometimes senseless world, and in the aspiration to be one of the people who see the danger. For David Krueger, a researcher at the Université de Montréal and former research director at the UK’s AI Security Institute, many of the people working on AI regard the technology as our natural successor. “They see it as a kind of motherhood” that they have been entrusted with, he explains. Jeremy Cohen, for his part, draws parallels between many modern conspiracy theories and the New Age movement, which peaked in the 1970s and 1980s. Its adherents believed humanity was on the verge of an era of spiritual well-being and awakened consciousness that would usher in a more peaceful and prosperous world. The idea was that by devoting themselves to a set of pseudo-religious practices, humans would transcend their limits and reach a kind of hippie utopia. For Cohen, we face the same expectations of AGI: whether through the destruction or the sublimation of humanity, it alone will overcome the problems humanity is up against. For Yudkowsky and Soares, the stakes of AGI are greater than nuclear risk or climate risk. 

For many who embrace this belief, AGI will arrive all at once, in the form of the technological singularity introduced by science fiction author Vernor Vinge in the 1980s: a transcendent moment when humanity as we know it changes forever. For Shannon Vallor, what is remarkable about this belief system is the way faith in technology has replaced faith in humanity. For all its esotericism, New Age thinking was at least driven by the idea that people had the potential to change the world themselves, if only they could tap into it. With the quest for AGI, we have abandoned that self-confidence and embraced the idea that only technology can save us, she explains. For many, that is a seductive, even comforting, thought. “We live in a time when the other paths toward material improvement of human life and of our societies seem exhausted,” Vallor says. Technology once promised a better future: progress was a ladder we were meant to climb toward human and social flourishing. “We’re past that,” Vallor says. “I think what gives many people hope again, and lets them recover that optimism about the future, is AGI.” Push the idea to its conclusion and, once again, AGI becomes a kind of deity, capable of relieving earthly suffering, Vallor argues. Kelly Joyce, a sociologist at the University of North Carolina who studies how cultural, political, and economic beliefs shape our relationship to technology, sees all these extravagant predictions about AGI as something more mundane: one more example of the tech sector’s current habit of overpromising. “What intrigues me is that we keep falling for it.”

“Every time,” she says. “There is a deep conviction that technology is superior to human beings.” Joyce thinks this is why, once the hype takes hold, people are predisposed to believe it. “It is a religion,” she says. “We believe in technology. Technology is divine. It is very hard to push back against that. People do not want to hear it.”

The fantasy of computers able to do almost anything a human can do is seductive. But like many widespread conspiracy theories, it has very real consequences. It distorts our perception of what is at stake, destabilizes the industry by steering it away from immediate applications... And above all, it invites laziness. Why struggle to solve real-world problems when machines will handle them tomorrow? The pharaonic AGI project now swallows hundreds of billions of dollars and diverts a great deal of investment from more immediate technologies that could change people's lives right now.

Tina Law, a technology policy scholar at the University of California, Davis, worries that policymakers are more influenced by the narrative that AI will eventually wipe us out than by real concerns about AI's concrete, immediate impact on people's lives today. The question of inequality is eclipsed by the notion of existential risk. “Hype is a lucrative strategy for tech companies,” Law says. Much of that hype rests on the idea that what is happening is inevitable: if we do not build it, someone else will. “When something is framed as inevitable,” Law reminds us, “people doubt not only their ability to resist it, but also their right to do so.” Everyone ends up trapped.

According to Milton Mueller of the Georgia Institute of Technology, a specialist in technology policy and regulation, the AGI distortion field is not limited to tech policy. The race for general AI is compared to the race for the atomic bomb, he explains. “Whoever gets there first will have absolute power over everyone else. It is a crazy, dangerous idea that will profoundly distort our approach to foreign policy.” Companies (and governments) have every interest in promoting the AGI myth, Mueller adds, because it lets them claim they will be the first to achieve it. But since it is a race with no consensus on the finish line, the myth can be kept alive for as long as it is useful. Or for as long as investors are willing to believe it. It is easy to imagine how this plays out: neither utopia nor hell, just OpenAI and its peers getting spectacularly rich.

There you have it: the great AGI conspiracy, finally solved, Heaven jokes. “And perhaps this brings us back to the question of conspiracy, and to an unexpected twist in this story. So far we have ignored one common feature of conspiracist thinking: the existence of a group of powerful figures pulling strings behind the scenes, and the belief that, by seeking the truth, believers can unmask this cabal.” AGI discourse publicly accuses no hidden force of blocking its development or concealing its secrets. No plot is being hatched by the Illuminati or the World Economic Forum... here, the very people denouncing the dangers are running the cabal. Those spreading the AGI conspiracy theory are its chief instigators. Silicon Valley's giants are pouring all their resources into building an AGI for profit. The AGI myth serves their interests more than anyone else's. As Vallor points out: “If OpenAI says it is building a machine that will make corporations even more powerful than they are today, it will not get the public buy-in it needs.” “Remember: you create a god, and you end up resembling it,” Heaven quips. “Many believe that if they get there first, they will be able to dominate the world.”

In many ways, Heaven concludes, I think the very idea of AGI rests on a distorted vision of what we expect from technology, and even of what intelligence is. In short, the case for general AI rests on the premise that one technology, AI, has advanced very quickly and will keep advancing. But set aside the technical objections (what happens if progress stops?) and all that remains is the idea that intelligence is a resource whose quantity can be increased with the right data, computing power, or neural networks. It is not. Intelligence is not a quantity that can be grown indefinitely. Smart people can excel in one domain and be less gifted in others, just as AI-powered gadgets can excel at one task and be useless at many others, above all the countless tasks that fall outside their data and will continue to escape them.

From AGI to the paperclip fantasy

This fantasy of total automation that AGI would supposedly deliver is also what the paperclip-maximizer game symbolizes. The game stages, very concretely, an idea developed as early as 2002 by Nick Bostrom, champion of far-right transhumanism: the “existential risk” that a superintelligent artificial intelligence would pose to us, capable of creating situations that could destroy all life on Earth. For Bostrom, the paperclip factory is a demonstrator in which an AI whose sole objective is to produce paperclips could exploit every resource to that end, down to total exhaustion, optimizing its objective without limits.
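Part of why Bostrom's demonstrator stuck is that it fits in a few lines of code. Here is a minimal toy sketch (the resource amounts and the `yield_per_unit` parameter are invented, and this is an illustration of the thought experiment, not Bostrom's formalism): the optimizer has one objective and no other values, so nothing in it ever says stop.

```python
# Toy sketch of the paperclip thought experiment: a single-objective
# optimizer with no other values converts every available resource,
# regardless of what else those resources were worth.

def paperclip_maximizer(resources: dict[str, float], yield_per_unit: float = 10.0) -> float:
    """Greedily convert ALL resources into paperclips; there is no stopping rule."""
    paperclips = 0.0
    for name in list(resources):
        paperclips += resources.pop(name) * yield_per_unit  # nothing is spared
    return paperclips

world = {"iron": 1000.0, "forests": 500.0, "oceans": 2000.0}
total = paperclip_maximizer(world)
print(total)   # 35000.0 paperclips
print(world)   # {} -- every resource consumed
```

The point of the sketch is the absence of any term in the objective for what gets destroyed, which is exactly the feature the fable turns into a parable.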

This demonstration, though extremely simplistic, has stuck in people's minds. Its very aporia, which makes an absurd objective the ultimate demonstrator of machine intelligence, remains deeply problematic, since there is no machine intelligence here at all but, on the contrary, a demonstration of machines' pure aberration. And the fear that AI systems might interpret commands in catastrophic ways “rests on dubious assumptions about how the technology will be deployed in the real world,” as researchers Arvind Narayanan and Sayash Kapoor have put it. The game is much more a market simulator than a game about AI. Universal Paperclips leads players to empathize with its goals, notably by offering new features with which to pursue them, and it is our empathy with the game that drives the AI to destroy humanity in order to fulfill its absurd goal of limitless productivity. Its creator, Frank Lantz, recounts that it is not the fact that the numbers go up but the way they go up that gets players clicking and identifying with the game's objectives. “Incremental games are very raw. They latch onto a process so that players become obsessed with its growth.” The stripped-down interface hypnotizes through repetition. “The player destroys the universe with the same sense of detachment as when ordering a sweater online.” This fiction is supposed to warn us that AI could have motivations very different from ours.

Yet the game, which is heavily scripted, in no way shows that an AI could reason, plan, understand the physical world, or dominate it. Above all, it hides the fact that features unlock possibilities, and that those possibilities are laid out in advance by the interface and the game's designer. The paperclip-maximizer fable does not demonstrate that AI could take over the world; it demonstrates that an AI's objectives and functions are the product of whoever programmed them. The paperclip maximizer offers no proof that an artificial general intelligence would lead to the extermination of humankind. As Kate Crawford and Melanie Mitchell have said, the loudest voices debating the potential dangers of superintelligence are those of wealthy, racist white men, and for them the greatest threat may well be the emergence of an apex predator of artificial intelligence that is, of course, none other than themselves. Capitalism is a far greater maximizer than AI. Companies maximize their share price without regard for the costs, human, environmental, or otherwise. That optimization process is far more uncontrollable, and it could well make our planet uninhabitable long before we figure out how to build a paperclip-optimizing AI.

Hubert Guillaud

A Researcher Made an AI That Completely Breaks the Online Surveys Scientists Rely On

17 November 2025 at 15:00

Online survey research, a fundamental method for data collection in many scientific studies, is facing an existential threat because of large language models, according to new research published in the Proceedings of the National Academy of Sciences (PNAS). The author of the paper, associate professor of government at Dartmouth and director of the Polarization Research Lab Sean Westwood, created an AI tool he calls "an autonomous synthetic respondent,” which can answer survey questions and “demonstrated a near-flawless ability to bypass the full range” of “state-of-the-art” methods for detecting bots. 

According to the paper, the AI agent evaded detection 99.8 percent of the time.

"We can no longer trust that survey responses are coming from real people," Westwood said in a press release. "With survey data tainted by bots, AI can poison the entire knowledge ecosystem.”

Survey research relies on attention check questions (ACQs), behavioral flags, and response pattern analysis to detect inattentive humans or automated bots. Westwood said these methods are now obsolete after his AI agent bypassed the full range of standard ACQs and other detection methods outlined in prominent papers, including one paper designed to detect AI responses. The AI agent also successfully avoided “reverse shibboleth” questions designed to detect nonhuman actors by presenting tasks that an LLM can complete easily but that are nearly impossible for a human. 
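The paper does not publish its reverse-shibboleth items, but the flagging logic they imply can be sketched in a line: score respondents on a task humans essentially cannot do, and treat success as evidence of automation. The `human_ceiling` threshold below is invented for illustration.

```python
# A reverse shibboleth inverts CAPTCHA logic: the task is trivial for
# software and near-impossible for a person, so a high score flags a bot.

def reverse_shibboleth_flag(score: float, human_ceiling: float = 0.2) -> bool:
    """A respondent who aces a task humans essentially cannot do is likely a bot."""
    return score > human_ceiling

print(reverse_shibboleth_flag(0.95))  # True: near-perfect on an "impossible" task
print(reverse_shibboleth_flag(0.10))  # False: humanlike failure
```

The catch, per Westwood's results, is that an agent instructed to imitate a human can simply choose to fail such tasks, which is how his bot slipped past them.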

💡
Are you a researcher who is dealing with the problem of AI-generated survey data? I would love to hear from you. Using a non-work device, you can message me securely on Signal at ‪(609) 678-3204‬. Otherwise, send me an email at emanuel@404media.co.

“Once the reasoning engine decides on a response, the first layer executes the action with a focus on human mimicry,” the paper, titled “The potential existential threat of large language models to online survey research,” says. “To evade automated detection, it simulates realistic reading times calibrated to the persona’s education level, generates human-like mouse movements, and types open-ended responses keystroke by keystroke, complete with plausible typos and corrections. The system is also designed to accommodate tools for bypassing antibot measures like reCAPTCHA, a common barrier for automated systems.”
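The mimicry layer the quote describes (calibrated reading delays, keystroke-level typing with corrected typos) can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Westwood's implementation; the `words_per_minute` and `typo_rate` values are invented.

```python
import random

# Hypothetical sketch of a "human mimicry" layer. Reading time scales with
# question length and an assumed per-persona reading speed; typing is emitted
# one keystroke at a time, occasionally inserting a typo that is then
# corrected with a backspace.

def reading_delay(text: str, words_per_minute: float = 220.0) -> float:
    """Seconds a persona would plausibly spend reading the question."""
    return len(text.split()) / (words_per_minute / 60.0)

def simulate_typing(answer: str, typo_rate: float = 0.03,
                    rng: random.Random = random.Random(0)) -> list[str]:
    """Return a keystroke sequence with occasional typo + correction."""
    keystrokes: list[str] = []
    for ch in answer:
        if rng.random() < typo_rate:
            keystrokes += [rng.choice("abcdefghijklmnopqrstuvwxyz"), "<BACKSPACE>"]
        keystrokes.append(ch)
    return keystrokes

delay = reading_delay("How concerned are you about the economy?")
keys = simulate_typing("Somewhat concerned")
```

Replaying the keystroke list (applying each backspace) reconstructs the intended answer, which is what a keylogging-style bot detector would observe as ordinary human typing.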

The AI, according to the paper, is able to model “a coherent demographic persona,” meaning that in theory someone could sway any online research survey to produce any result they want based on an AI-generated demographic. And it would not take that many fake answers to impact survey results. As the press release for the paper notes, for the seven major national polls before the 2024 election, adding as few as 10 to 52 fake AI responses would have flipped the predicted outcome. Generating these responses would also be incredibly cheap at five cents each. According to the paper, human respondents typically earn $1.50 for completing a survey.
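The arithmetic behind that claim is easy to check with toy numbers (illustrative only, not the polls analyzed in the paper): in a close race, a few dozen injected responses move the margin past zero, for pocket change at five cents per response.

```python
# Back-of-envelope: how few fake responses can flip a close poll's topline.
# The poll numbers here are invented for illustration.

def margin_after_injection(a: int, b: int, fake_for_b: int) -> float:
    """Candidate A's lead in percentage points after injecting fakes for B."""
    total = a + b + fake_for_b
    return 100.0 * (a - (b + fake_for_b)) / total

a, b = 510, 490                            # 1,000 genuine respondents, A leads by 2 points
print(margin_after_injection(a, b, 0))     # 2.0
print(margin_after_injection(a, b, 25))    # lead flips negative
cost = 25 * 0.05
print(cost)                                # 1.25 dollars, at 5 cents per fake response
```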

Westwood’s AI agent is a model-agnostic program built in Python, meaning it can be deployed with APIs from big AI companies like OpenAI, Anthropic, or Google, but can also be hosted locally with open-weight models like Llama. The paper used OpenAI’s o4-mini in its testing, but some tasks were also completed with DeepSeek R1, Mistral Large, Claude 3.7 Sonnet, Grok 3, Gemini 2.5 Preview, and others, to prove the method works with various LLMs. The agent is given one prompt of about 500 words that tells it what kind of persona to emulate and to answer questions like a human. 
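“Model-agnostic” here means the agent talks to any backend through one small interface, so swapping a provider API for a local open-weight model is a one-line change. A hedged sketch of that shape follows; the short persona text and the `EchoModel` stub are invented, and no real provider SDK is called.

```python
from typing import Protocol

# Sketch of a model-agnostic agent design: any backend that satisfies the
# ChatModel protocol can be plugged in without touching the agent logic.

class ChatModel(Protocol):
    def complete(self, system_prompt: str, question: str) -> str: ...

PERSONA_PROMPT = (  # stands in for the ~500-word persona prompt described in the paper
    "You are a 34-year-old teacher from Ohio. Answer surveys as she would."
)

class EchoModel:
    """Stub backend so the sketch runs without any API key."""
    def complete(self, system_prompt: str, question: str) -> str:
        return f"[{system_prompt[:14]}...] answering: {question}"

def answer_survey_question(model: ChatModel, question: str) -> str:
    return model.complete(PERSONA_PROMPT, question)

reply = answer_survey_question(EchoModel(), "Do you approve of the new budget?")
print(reply)
```

Because `ChatModel` is a structural (`Protocol`) type, a wrapper around any hosted or local model satisfies it simply by exposing a matching `complete` method.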

The paper says that there are several ways researchers can deal with the threat of AI agents corrupting survey data, but they come with trade-offs. For example, researchers could do more identity validation on survey participants, but this raises privacy concerns. Meanwhile, the paper says, researchers should be more transparent about how they collect survey data and consider more controlled methods for recruiting participants, like address-based sampling or voter files.

“Ensuring the continued validity of polling and social science research will require exploring and innovating research designs that are resilient to the challenges of an era defined by rapidly evolving artificial intelligence,” the paper said.


The Video Game Industry’s Existential Crisis (with Jason Schreier)

17 November 2025 at 10:00

The video game industry has had a turbulent few years. The pandemic made people play more and caused a small boom, which then subsided, resulting in wave after wave of massive layoffs. Microsoft, one of the major console manufacturers, is shifting its strategy for Xbox as the company shifts its focus to AI. And now, Electronic Arts, once a load-bearing publisher for the industry with brands like The Sims and Madden, is going private via a leveraged buyout in a deal involving Saudi Arabia’s Public Investment Fund and Jared Kushner. 

Video games are more popular than ever, but many of the biggest companies in the business seem like they are struggling to adapt and convert that popularity into stability and sustainability. To try and understand what the hell is going on, this week we have a conversation between Emanuel and Jason Schreier, who reports about video games for Bloomberg and is one of the best journalists on this beat. 

Jason helps us unpack why Microsoft is now aiming for higher-than-average profit margins at Xbox and why the company is seemingly bowing out of the console business despite a massive acquisition spree. We also talk about what the EA deal tells us about other game publishers, and what all these problems tell us about changing player habits and the future of big budget video games. 

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube

Become a paid subscriber for early access to these interview episodes and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.


This App Lets ICE Track Vehicles and Owners Across the Country

17 November 2025 at 09:28

Immigration and Customs Enforcement (ICE) recently invited staff to demos of an app that lets officers instantly scan a license plate, adding it to a database of billions of records that shows where else that vehicle has been spotted around the country, according to internal agency material viewed by 404 Media. That data can then be combined with other information such as driver license data, credit header data, marriage records, vehicle ownership, and voter registrations, the material shows.

The capability is powered by both Motorola Solutions and Thomson Reuters, the massive data broker and media conglomerate, which besides running the Reuters news service, also sells masses of personal data to private industry and government agencies. The material notes that the capabilities allow for predicting where a car may travel in the future, and also can collect face scans for facial recognition. 

The material shows that ICE continues to buy or source a wealth of personal and sensitive information as part of its mass deportation effort, from medical insurance claims data, to smartphone location data, to housing and labor data. The app, called Mobile Companion, is a tool designed to be used in real time by ICE officials in the field, similar to its facial recognition app but for finding more information about vehicles.

💡
Do you work at ICE or CBP? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

Toward a revised GDPR for the benefit of AI… and at the expense of our rights

17 November 2025 at 01:01

Brussels is proposing to revise the GDPR to facilitate the training of AI models, by now treating the use of personal data to train an AI as a “legitimate interest,” Jérôme Marin explains in CaféTech. Another major change: a more restrictive redefinition of what counts as “personal data.” A piece of information would no longer be considered personal if the company collecting it is unable to identify the person concerned. Its use would then fall outside the GDPR. 

Brussels also proposes loosening the “enhanced protection” of sensitive data, which would now apply only when the data “directly reveals” racial or ethnic origin, political opinions, health status, or sexual orientation.


Objectifying pain?

17 November 2025 at 01:00

You may remember what neuroscientist Albert Moukheiber told us on the USI stage: how hard it is to measure pain objectively. Well, Technology Review reports that several solutions are now being deployed, including a smartphone app for professionals called PainChek, which scans faces to detect microscopic muscle movements and uses AI to generate a pain score. 

The system answers through behavioral analysis, aiming to detect grimaces, postures, and sharp intakes of breath correlated with different levels of pain by analyzing facial micro-movements. PainChek looks for specific microscopic facial movements, such as a raised upper lip, pinched brows, or tensed cheeks, drawn from a method for describing facial movements. Combined with other behavioral cues (such as moaning, other signs of pain, or sleep disturbance), the app turns these indications into a score. Developed in Australia since 2017, notably in nursing homes, the system has also been authorized in the United Kingdom and Canada and is awaiting authorization in the United States. Its use has reduced drug prescriptions and improved how pain is taken into account. Having focused first on elderly patients, PainChek's developers are now trying to adapt their tools to babies under one year old. 
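The general shape of such tools, a weighted checklist of detected cues rolled into a single number, can be sketched as follows. The cues and weights below are invented for illustration; this is not PainChek's actual algorithm, just the scoring pattern the article describes.

```python
# Illustrative pain-scoring sketch: detected facial action cues and
# behavioral cues each carry an assumed weight, and the score is their sum.
# Cue names and weights are invented, not PainChek's.

FACIAL_CUES = {"upper_lip_raise": 2.0, "brow_pinch": 2.0, "cheek_tension": 1.5}
BEHAVIORAL_CUES = {"moaning": 1.0, "sleep_disturbance": 0.5}

def pain_score(observed: set[str]) -> float:
    """Sum the weights of every cue the detectors reported."""
    weights = {**FACIAL_CUES, **BEHAVIORAL_CUES}
    return sum(weights.get(cue, 0.0) for cue in observed)

print(pain_score({"brow_pinch", "moaning"}))  # 3.0
print(pain_score(set()))                      # 0.0: no cues detected
```

Even in this toy form, the design choice is visible: everything hinges on which cues are on the list and how they are weighted, which is exactly where the bias concerns discussed below enter.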

Another approach measures galvanic response, that is, the electrical response of muscles or nerves, via suitable sensors such as EEG headsets, coupled with heart rate and perspiration measurements. Medasense's PMD-200 monitor, for example, produces pain scores for surgical patients to help anesthesiologists adjust painkiller doses during and after operations. 

Still, while these tools offer solutions, they cannot be reliable for, and suited to, every population. Facial-movement analysis, for instance, does not work for everyone and risks discriminating against certain profiles: ethnic or cultural minorities who do not express pain the same way as others, or people with disabilities the tools handle poorly. (In a recent article, Wired showed that, unsurprisingly, facial recognition does not work for people with severe facial injuries, and argued that the issue is not so much getting facial recognition to better accommodate atypical faces as making sure it does not become a mandatory tool that prevents these people from, say, traveling or boarding a plane.) The dream of a perfectly universal pain-measurement tool is unlikely to come true, and once again requires knowing which populations these systems were trained on, and for whom they will work reasonably well as opposed to everyone else. The pain of Black women has long been downplayed in medical settings, as has that of atypical populations such as disabled or autistic people. These tools will almost certainly not work very well for them. Tools do not come free of bias and inaccuracy. 

The other risk of these new tools, finally, is that their promise of normative objectivity pushes us to discard the subjective dimension of pain. As Laura Tripaldi put it in Gender Tech, the “mask of scientific objectivity conceals the ideological narrative.” Technologies can only be vectors of emancipation if they are designed as open, shared spaces in which people can compose and recompose themselves, rather than being assigned and normalized. Objectifying pain, and the normalization it implies, risks above all leaving behind those who cannot be objectified, such as those who are insensitive to it.


Scientists Make Genetic Breakthrough with 39,000-Year-Old Mammoth RNA

15 November 2025 at 11:23

Welcome back to the Abstract! These are the studies this week that reached back through time, flooded the zone, counted the stars, scored science goals, and topped it all off with a ten-course meal.

First, scientists make a major breakthrough thanks to a very cute mammoth mummy. Then: the climate case for busy beavers; how to reconnect with 3,000 estranged siblings; this is your brain on football; and last, what Queen Elizabeth II had for lunch on February 20, 1957.

 As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens, or subscribe to my personal newsletter the BeX Files

The long afterlife of Yuka the mammoth

Mármol Sánchez, Emilio et al. “Ancient RNA expression profiles from the extinct woolly mammoth.” Cell.

Scientists have sequenced RNA—a key ingredient of life as we know it—from the remains of a mammoth that lived 39,000 years ago during the Pleistocene “Ice Age” period, making it by far the oldest RNA on record. 

The previous record holder for oldest RNA was sourced from a puppy that lived in Siberia 14,300 years ago. The new study has now pushed that timeline back by an extraordinary 25,000 years, opening a new window into ancient genetics and revealing a surprise about a famous mammoth mummy called Yuka. 

“Ancient DNA has revolutionized the study of extinct and extant organisms that lived up to 2 million years ago, enabling the reconstruction of genomes from multiple extinct species, as well as the ecosystems where they once thrived,” said researchers led by Emilio Mármol Sánchez of the Globe Institute in Copenhagen, who completed the study while at Stockholm University.

“However, current DNA sequencing techniques alone cannot directly provide insights into tissue identity, gene expression dynamics, or transcriptional regulation, as these are encoded in the RNA fraction.”

“Here, we report transcriptional profiles from 10 late Pleistocene woolly mammoths,” the team continued. “One of these, dated to be ∼39,000 years old, yielded sufficient detail to recover…the oldest ancient RNA sequences recorded to date.”

DNA, the double-stranded “blueprint” molecule that stores genetic information, is far sturdier than RNA, which is why it can be traced back for millions of years instead of thousands. Single-stranded RNA, a “messenger” molecule that carries out the orders of DNA, is more fragile and rare in the paleontological record.

In addition to proving that RNA can survive much longer than previously known, the team discovered that Yuka—the mammoth that died 39,000 years ago—has been misgendered for years (yes, I realize gender is a social construct that does not apply to extremely dead mammoths, but mis-sexed just doesn’t have the same ring). 

Yuka was originally deemed female according to a 2021 study that observed the “presence of skin folds in the genital area compatible with labia vulvae structures in modern elephants and the absence of male-specific muscle structures.” Mármol Sánchez and his colleagues have now overturned this anatomical judgement by probing the genetic remnants of Yuka’s Y chromosome.

In fact, as I write this on Thursday, November 13—a day before the embargo on this study lifts on Friday—Yuka is still listed as female on Wikipedia. 


Just a day until you can live your truth, buddy.

In other news…

Leave it to beavers 

Burgher, Jesse A. S. et al. “Beaver-related restoration and freshwater climate resilience across western North America.” Restoration Ecology.

Every era has a champion; in our warming world, eager beavers may rise to claim this lofty title. 

These enterprising rodents are textbook “ecosystem engineers” that reshape environments with sturdy dams, creating biodiverse havens that are resistant to climate change. To better assess the role of beavers in the climate crisis, researchers reviewed reported beaver-related restoration (BRR) projects across North America. 

“Climate change is projected to impact streamflow patterns in western North America, reducing aquatic habitat quantity and quality and harming native species, but BRR has the potential to ameliorate some of these impacts,” said researchers led by Jesse A. S. Burgher of Washington State University. 

The team reports “substantial evidence that BRR increases climate resiliency…by reducing summer water temperatures, increasing water storage, and enhancing floodplain connectivity” while also creating “fire-resistant habitat patches.” 

So go forth and get busy, beavers! May we survive this crisis in part through the skin of your teeth.

One big happy stellar family

Boyle, Andrew W. et al. “Lost Sisters Found: TESS and Gaia Reveal a Dissolving Pleiades Complex.” The Astrophysical Journal.

Visible from both the Northern and Southern Hemispheres, the Pleiades is the most widely recognized and culturally significant star cluster in the night sky. While this asterism is defined by a handful of especially radiant stars, known as the Seven Sisters, scientists have now tracked down thousands of other stellar siblings born from the same clutch scattered across some 2,000 light years.

Wide-field shot of the Pleiades. Image: Antonio Ferretti & Attilio Bruzzone

“We find that the Pleiades constitutes the bound core of a much larger, coeval structure” and “we refer to this structure as the Greater Pleiades Complex,” said researchers led by Andrew W. Boyle of the University of North Carolina at Chapel Hill. “On the basis of uniform ages, coherent space velocities, detailed elemental abundances, and traceback histories, we conclude that most stars in this complex originated from the same giant molecular cloud.” 

The work “further cements the Pleiades as a cornerstone of stellar astrophysics” and adds new allure to a cluster that first exploded into the skies during the Cretaceous age. (For more on the Pleiades, check out this piece I wrote earlier this year about the deep roots of its lore).

Getting inside your head(er)

Zamorano, Francisco et al. “Brain Mechanisms across the Spectrum of Engagement in Football Fans: A Functional Neuroimaging Study.” Radiology.

Scientists have peered into a place I would never dare to visit—the minds of football fans during high-stakes plays. To tap into the neural side of fanaticism, researchers enlisted 60 healthy male fans aged 20 to 45 to watch dozens of goal sequences from matches involving their favorite teams, rival teams, and “neutral” teams while their brains were scanned by an fMRI machine. 

The participants were rated according to a “Football Supporters Fanaticism Scale (FSFS)” with criteria like “violent thought and/or action tendencies” and “institutional belonging and/or identification.” The scale divided the group into 38 casual spectators, 19 committed fans, and four deranged fanatics (adjectives are mine for flourish).

Rendering of the negative effect of significant defeat. Image: Radiological Society of North America (RSNA)

“Our key findings revealed that scoring against rivals activated the reward system…while conceding to rivals triggered the mentalization network and inhibited the dorsal anterior cingulate cortex (dACC)”—a region responsible for cognitive control and decision-making—said researchers led by Francisco Zamorano of the Universidad San Sebastián in Chile. “Higher Football Supporters Fanaticism Scale scores correlated with reduced dACC activation during defeats, suggesting impaired emotional regulation in highly engaged fans.”

In other words, it is now scientifically confirmed that football fanatics are Messi bitches who love drama. 

Diplomacy served up fresh

Cabral, Óscar et al. “Power for dinner. Culinary diplomacy and geopolitical aspects in Portuguese diplomatic tables (1910-2023).”

We’ll close, as all things should, with a century of fine Portuguese dining. In yet another edition of “yes, this can be a job,” researchers collected 457 menus served at various diplomatic meals in Portugal from 1910 to 2023 to probe “how Portuguese gastronomic culture has been leveraged as a culinary diplomacy and geopolitical rapprochement strategy.” 

As a lover of both food and geopolitical bureaucracy, this study really hit the spot. Highlights include a 1957 “regional lunch” for Queen Elizabeth II that aimed to channel “Portugality” through dishes like lobster and fruit tarts from the cities of Peniche and Alcobaça. The study is also filled with amazing asides like “the inclusion of imperial ice cream in the European Free Trade Association official luncheon (ID45, 1960) seems to transmit a sense of geopolitical greatness and vast governing capacity.” Ice cream just tastes so much better when it’s a symbol of international power. 

Menu of the “Luncheon in honour of her Majesty Queen Elizabeth II and his Royal Highness the Duke of Edinburgh” held in Alcobaça (Portugal) on February 20th, 1957. Image: Cabral et al., 2025.

The team also unearthed a possible faux pas: Indian president Ramaswamy Venkataraman, a vegetarian who was raised Hindu, was served roast beef in 1990. In a footnote, Cabral and his colleagues concluded that “further investigation is deemed necessary to understand the context of ‘roast beef’ service to the Indian President in 1990.” Talk about juicy gossip!

Thanks for reading! See you next week.

Power Companies Are Using AI To Build Nuclear Power Plants

14 November 2025 at 12:28

Microsoft and nuclear power company Westinghouse Nuclear want to use AI to speed up the construction of new nuclear power plants in the United States. According to a report from think tank AI Now, this push could lead to disaster. 

“If these initiatives continue to be pursued, their lack of safety may lead not only to catastrophic nuclear consequences, but also to an irreversible distrust within public perception of nuclear technologies that may inhibit the support of the nuclear sector as part of our global decarbonization efforts in the future,” the report said.

Behind the Blog: Trolling on the Internet

14 November 2025 at 11:36

This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss getting back on our AI slop bullshit, deepfakes in schools, and epistemic virtues.

JASON: I was back on my bullshit this week, by which I mean staring at some horrible AI-generated shit on Facebook. In this case, I was looking at slop of ICE raids generated using OpenAI’s Sora. I talk about this in the article but one of the many reasons why I think social media—especially Facebook and Instagram—is fucked if they continue to monetize and promote this stuff is because this specific slop page is not being pushed by anyone who seems to have any sort of ideological interest in immigration or the United States or anything like that. It’s just someone trying to make money. And it’s almost definitely one single person, with one single account, generating millions and millions and millions of views. We are very early in the horrible AI slop game, and yet we are seeing the damage that just a few people can do with industrial-grade content generation machines. 

Over the years I have talked to a lot of academics about the idea of “trolling” on the internet. The person who has most informed my thinking over the years is Whitney Phillips, who has written a lot about 4chan, “trolling” culture, and generally the worst corners of the internet in a series of books and papers. This Is Why We Can’t Have Nice Things, which is about 4chan, is extremely good, as is her paper called The Oxygen of Amplification. She (and other academics) has argued that it’s not the intent behind “trolls” that matters, it’s the impact. She wrote a lot about “just trolling” racism and misogyny and xenophobia on the internet, and what’s important is how it impacts people and changes how they interact with online spaces. You can’t really be “ironically” racist, you’re just racist. And with anonymity and pseudonymity online, you can’t really infer what’s inside of someone’s heart, because often it doesn’t matter if the person posting Harambe memes or making deepfake nudes of celebrities or their classmates or whatever didn’t actually “mean” it. 

On the Burden of Thinking

“The real surprise is that so many people seem delighted to unload the burden of putting their thoughts into words. That, more than the tool that makes it possible, strikes me as the heart of the problem.” Martin Lafréchoux (back from the future)
On the Impunity of Identity Theft

14 November 2025 at 01:03

In the latest Algorithm Watch newsletter, journalist Nicolas Kayser-Bril returns to the case of a Bulgarian magazine that published AI-generated articles under his byline. What he shows is that the complaint mechanisms are dysfunctional. Google asked him to prove that he did not work for the magazine (!) and refused to de-index the articles. The German data protection authority forwarded his request to its Bulgarian counterpart, which never replied. The only way to put an end to the problem was to hire a lawyer to send a legal threat to the site, which was not without cost for the journalist. “Data protection legislation, like the GDPR, was not much help.”

Those who engage in this kind of identity theft, which generative AI is about to make very easy, have little to fear for now, Kayser-Bril observes.

“Generative AI Is a Social Disaster”

14 November 2025 at 01:01

“The AI industry has skillfully steered the debate on generative AI toward its own interests, telling us it is a transformative technology that improves many aspects of our society, notably access to healthcare and education.” But rather than take the real criticisms seriously (notably that these technologies are not as transformative as advertised and will improve access to neither care nor education), the AI giants have preferred to impose their own narrative about its downsides: namely, that of existential threat, Paris Marx explains clearly on his blog. This wholly unrealistic scenario has pushed aside the very real concerns raised by today’s unchecked deployment of generative AI (for instance, the fact that it produces “systemic distortion” of information, according to a study by 22 public-service news organizations).

In Ireland, a few days before the presidential election of October 24, an AI-generated video circulated showing Catherine Connolly, the left-wing candidate leading the polls, announcing her withdrawal from the race, as if in a report from one of the national broadcasters. The video was designed to make the public believe the presidential election was already over, without any vote having been needed, and it was viewed massively before being taken down. 

This example shows clearly that we are not facing an existential risk in which machines subvert us; we are facing the very real social consequences they produce. Generative AI pollutes the information environment to the point that many people can no longer tell whether information is real or generated. 

The big AI companies show little regard for these social effects. Instead, they push their tools everywhere, whatever their reliability, and help flood the networks with AI gadgets and chatbots designed to drive engagement, which means more time spent on their platforms, more attention paid to ads and, in the end, more advertising profit. 

In response to these social effects, governments seem focused on enacting age limits to reduce young people’s exposure, without appearing to care much about the individual harms these products can cause the rest of the population, or about the political and societal upheavals they can produce. Yet it is clear that measures must be taken to stem these sources of social disruption, in particular the addictive design practices that target everyone, while chatbots and image and video generators accelerate the damage done by social networks. For the sake of promised investments and hypothetical productivity gains, governments are sacrificing the foundations of a democratic society on the altar of economic success benefiting a few monopolies. For Paris Marx, generative AI is nothing other than a form of “social suicide” that must be stemmed before it overwhelms us. “No giant data center and no AI company’s revenue justifies the costs this technology is imposing on the public.”

My Electric Body

14 November 2025 at 01:00

In 2022, Arnaud Robert became quadriplegic. In a seven-episode podcast for Radio-Télévision Suisse, he recounts his decision to take part in a scientific study in which he received a brain implant to regain control of one of his arms. A podcast that dissects our relationship to technology from the inside, at its most intimate, far from transhumanist promises. “To be a guinea pig is to lend your body to a destiny greater than your own.” But being a guinea pig also means learning that technological miracles are not always forthcoming. Fascinating!

Google Has Chosen a Side in Trump's Mass Deportation Effort

13 November 2025 at 09:06

Google is hosting a Customs and Border Protection (CBP) app that uses facial recognition to identify immigrants, and tell local cops whether to contact ICE about the person, while simultaneously removing apps designed to warn local communities about the presence of ICE officials. ICE-spotting app developers tell 404 Media the decision to host CBP’s new app, and Google’s description of ICE officials as a vulnerable group in need of protection, shows that Google has made a choice on which side to support during the Trump administration’s violent mass deportation effort.

Google removed certain apps used to report sightings of ICE officials, and “then they immediately turned around and approved an app that helps the government unconstitutionally target an actual vulnerable group. That's inexcusable,” Mark, the creator of Eyes Up, an app that aims to preserve and map evidence of ICE abuses, said. 404 Media only used the creator’s first name to protect them from retaliation. Their app is currently available on the Google Play Store, but Apple removed it from the App Store.

“Google wanted to ‘not be evil’ back in the day. Well, they're evil now,” Mark added.

💡
Do you know anything else about Google's decision? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

The CBP app, called Mobile Identify and launched last week, is for local and state law enforcement agencies that are part of an ICE program that grants them certain immigration-related powers. The 287(g) Task Force Model (TFM) program allows those local officers to make immigration arrests during routine police enforcement, and “essentially turns police officers into ICE agents,” according to the New York Civil Liberties Union (NYCLU). At the time of writing, ICE has TFM agreements with 596 agencies in 34 states, according to ICE’s website.

Unions: Negotiate the Algorithms!

13 November 2025 at 01:00

How should workers respond to algorithmic management? That is the ambition of the report “Negotiating the Algorithm”, published by the European Trade Union Confederation under the direction of independent journalist Ben Wray, head of the Gig Economy Project at Brave New Europe. The report describes the prevalence of managerial software in the workplace (reportedly used by more than 79% of companies in the European Union) and the abuses that flow from it, and lays out the means of response available to workers, notably in connection with the EU’s new platform work legislation. Algorithmic management gives employers considerable informational advantages over workers, lets them bypass collective agreements, and lets them alter the working conditions and wages of each worker or even each job. It lets them spy on workers even outside working hours and offers many opportunities for retaliation. 

Workers trapped by algorithmic management are, in turn, deprived of their agency and their means of resolving problems, and very often of their avenues of appeal, since algorithmic management is deployed alongside many other authoritarian measures, such as making the HR department impossible to reach. 

It is therefore crucial that unions develop a strategy to fight algorithmic management. This is where the platform work directive comes in: it contains fairly rich provisions, but they are not self-executing. That is, workers must claim the rights the directive offers, in the workplace and before the courts. It allows workers and their representatives to demand comprehensive data from employers on algorithmic decisions, from dismissal to wage calculation. 

These data are often not delivered in easily usable formats, Wray notes: the report therefore encourages unions to build their own data-analysis teams. It also argues that unions should develop apps capable of monitoring the bosses’ apps, like UberCheats, which let couriers compare the mileage Uber paid them against the distances actually traveled (the app was pulled in 2021, ostensibly over its name, at Uber’s request). By investing in technology, unions can close workers’ information deficit vis-à-vis employers. Wray describes how gig workers have created “counter-apps” that documented wage and tip theft (see our article “Réguler la surveillance au travail”), enabled the mass refusal of lowball offers, and helped workers assert their rights in court. This technological capacity can also help union organizers by providing a unified digital platform for campaigns across all kinds of workplaces. Wray proposes that unions join forces to create a shared “tech workshop” for workers, which would develop and maintain tools for all kinds of unions across Europe. 
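The UberCheats-style check described above reduces to comparing the distance a platform paid for against an independent estimate of the distance actually covered. Here is a minimal sketch of that idea (hypothetical trip records and tolerance, not the actual app's code), using the straight-line haversine distance as a conservative lower bound on any real route:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def flag_underpaid(trips, tolerance=0.9):
    """Flag trips where the paid distance falls below `tolerance` times
    the straight-line distance between pickup and dropoff."""
    flagged = []
    for t in trips:
        actual = haversine_km(*t["pickup"], *t["dropoff"])
        if t["paid_km"] < tolerance * actual:
            flagged.append({**t, "actual_km": round(actual, 2)})
    return flagged

# Hypothetical trip records (coordinates and paid distances invented):
trips = [
    {"id": 1, "pickup": (51.5007, -0.1246), "dropoff": (51.5194, -0.1270), "paid_km": 1.2},
    {"id": 2, "pickup": (51.5007, -0.1246), "dropoff": (51.5033, -0.1196), "paid_km": 0.5},
]
print(flag_underpaid(trips))
```

Because the straight-line distance is a lower bound on any actual route, a paid distance below even that bound is a strong signal of underpayment; a real counter-app would compare against routed distances and GPS traces instead.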

The GDPR gives workers broad powers to fight abuses by management software, the report argues. It lets them demand the rating system used to evaluate their work and demand the correction of their scores, and it prohibits “hidden internal evaluations”. It also gives them the right to demand human intervention in automated decision-making. When workers are “deactivated” (kicked off the app), the GDPR lets them file a “data access request” obliging the company to disclose “all personal information relating to that decision”, and workers have the right to demand the correction of “inaccurate or incomplete information”. Despite the breadth of these powers, they have rarely been used, in large part because of significant loopholes in the GDPR. For instance, employers can claim that disclosing information would reveal trade secrets and expose their intellectual property. The GDPR limits the scope of such excuses, but employers systematically ignore those limits. The same goes for the generic excuse that algorithmic management is handled by a third-party tool. That excuse is illegal under the GDPR, yet employers use it routinely (and get away with it). 

The platform work directive closes many of the GDPR’s loopholes. It prohibits processing workers’ personal data relating to: “their emotional or psychological state; their private exchanges; data captured when they are not using the app; the exercise of their fundamental rights, including unionization; personal data including sexual orientation and migration status; and biometric data used to establish identity.” It extends the right to examine the operation and outputs of “automated decision-making systems” and to demand that those outputs be exported in a format that can be sent to the worker, and it prohibits transfers to third parties. Workers can demand that their data be used, for example, to obtain another job, and their employers must cover the associated costs. The platform work directive requires strict human oversight of automated systems, notably for operations such as deactivations. 

Employers are also obliged to inform and consult workers about “changes to automated monitoring or decision-making systems”. The directive further requires employers to pay for experts (chosen by the workers) to assess those changes. These new rules are promising, but they will only take effect if someone pushes back when they are broken. That is where unions come in. If employers are caught cheating, the directive obliges them to reimburse the experts unions hire to fight the scams. 

Wray offers a series of detailed recommendations on what unions should demand in their contracts to maximize their chances of seizing the opportunities the platform work directive creates, such as establishing a “governance body” within the company “to manage the training, storage, processing and security of data. This body should include union delegates, and all its members should receive data training.” 

He also lays out technological tactics unions can fund and operate to make the most of the directive, such as hacked apps that let gig workers raise their earnings. He enthusiastically describes the “sock-puppet method”, in which many test accounts are used to place and book work through platforms in order to monitor their pricing systems and detect collusion and price manipulation. The method has been used successfully in Spain to lay the groundwork for an ongoing price-collusion lawsuit. 
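The sock-puppet method boils down to requesting quotes for the same job from many test accounts and measuring how far the prices diverge. A minimal sketch of that comparison (hypothetical quote data and threshold, not any platform's real API):

```python
from statistics import mean

def detect_personalized_pricing(quotes, threshold=0.05):
    """Given price quotes for the *same* job collected from several
    sock-puppet accounts, flag jobs whose prices diverge by more than
    `threshold` (as a fraction of the mean), a hint of per-account
    price manipulation."""
    suspicious = {}
    for job_id, prices in quotes.items():
        avg = mean(prices)
        spread = (max(prices) - min(prices)) / avg
        if spread > threshold:
            suspicious[job_id] = round(spread, 3)
    return suspicious

# The same delivery quoted from four test accounts (invented numbers):
quotes = {
    "route_A": [4.10, 4.12, 4.09, 4.11],   # consistent pricing
    "route_B": [5.00, 5.90, 4.40, 5.70],   # diverges per account
}
print(detect_personalized_pricing(quotes))
```

A persistent spread well above quote-to-quote noise for identical jobs is the kind of evidence that can underpin a price-manipulation case; a real monitoring effort would also control for time of day, demand surges, and account history.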

The new world of algorithmic management and the new platform work directive offer unions many opportunities. There is always the risk, though, that an employer simply refuses to comply with the law, like Uber, which was found guilty of violating data-disclosure rules and fined €6,000 per day until it complied. Uber has now paid €500,000 in fines and still has not disclosed the data required by the law and the courts. 

Through algorithmic management, bosses have found new ways to skirt the law and rob workers. The platform work directive gives workers and unions a whole set of new tools to force bosses to play fair. “It won’t be easy, but the technological capacities workers and unions develop here can be repurposed to wage all-out digital class war,” Cory Doctorow enthuses.

AI-Generated Sora Videos of ICE Raids Are Wildly Viral on Facebook

12 November 2025 at 12:49

“Watch your step sir, keep moving,” a police officer with a vest that reads ICE and a patch that reads “POICE” says to a Latino-appearing man wearing a Walmart employee vest. He leads him toward a bus that reads “IMMIGRATION AND CERS.” Next to him, one of his colleagues begins walking unnaturally sideways, one leg impossibly darting through another as he heads to the back of a line of other Latino Walmart employees who are apparently being detained by ICE. Two American flag emojis are superimposed on the video, as is the text “Deportation.”

The video has 4 million views, 16,600 likes, 1,900 comments, and 2,200 shares on Facebook. It was, obviously, generated by OpenAI's Sora.

Some of the comments seem to understand this: “Why is he walking like that?” one says. “AI the guys foot goes through his leg,” another says. Many of the comments clearly do not: “Oh, you’ll find lots of them at Walmart,” another top comment reads. “Walmart doesn’t do paperwork before they hire you?” another says. “They removing zombies from Walmart before Halloween?” 

The latest trend in Facebook’s ever-downward spiral down the AI slop toilet is AI deportation videos. These are posted by an account called “USA Journey 897” and have the general vibe of actual propaganda videos posted by ICE and the Department of Homeland Security’s social media accounts. Many of the AI videos focus on workplace deportations, but some are similar to horrifying, real videos we have seen from ICE raids in Chicago and Los Angeles. The account was initially flagged to 404 Media by Chad Loder, an independent researcher.

The videos universally have text superimposed over the three areas of a video where OpenAI’s Sora video generator places watermarks. This, as well as the style of the videos being generated and tests done by 404 Media to make very similar videos, show that they were generated with Sora, highlighting how tools released by some of the richest companies in the world are being combined to generate and monetize videos that take advantage of human suffering (and how incredibly easy it is to hide a Sora watermark).

“PLEASE THAT’S MY BABY,” a dark-skinned woman screams while being restrained by an ICE officer in another video. “Ma’am stop resisting, keep moving,” an officer says back. The camera switches to an image of the baby: “YOU CAN’T TAKE ME FROM HER, PLEASE SHE’S RIGHT THERE. DON’T DO THIS, SHE’S JUST A BABY. I LOVE YOU, MAMA LOVES YOU,” the woman says. The video switches to a scene of the woman in the back of an ICE van. The video has 1,400 likes and 407 comments, which include “ Don’t separate them….take them ALL!,” “Take the baby too,” and “I think the days of use those child anchors are about over with.” 

ICE Plans to Spend $180 Million on Bounty Hunters to Stalk Immigrants

12 November 2025 at 11:07

Immigration and Customs Enforcement (ICE) is allocating as much as $180 million to pay bounty hunters and private investigators who verify the address and location of undocumented people ICE wishes to detain, including with physical surveillance, according to procurement records reviewed by 404 Media.

The documents provide more details about ICE’s plan to enlist the private sector to find deportation targets. In October The Intercept reported on ICE’s intention to use bounty hunters or skip tracers—an industry that often works on insurance fraud or tries to find people who skipped bail. The new documents now put a clear dollar amount on the scheme to essentially use private investigators to find the locations of undocumented immigrants.

💡
Do you know anything else about this plan? Are you a private investigator or skip tracer who plans to do this work? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

OpenAI Can’t Fix Sora’s Copyright Infringement Problem Because It Was Built With Stolen Content

November 12, 2025 at 10:44

OpenAI’s video generator Sora 2 is still producing copyright infringing content featuring Nintendo characters and the likeness of real people, despite the company’s attempt to stop users from making such videos. OpenAI updated Sora 2 shortly after launch to detect videos featuring copyright infringing content, but 404 Media’s testing found that it’s easy to circumvent those guardrails with the same tricks that have worked on other AI generators. 

The flaw in OpenAI’s attempt to stop users from generating videos of Nintendo and popular cartoon characters exposes a fundamental problem with most generative AI tools: it is extremely difficult to completely stop users from recreating any kind of content that’s in the training data, and OpenAI can’t remove the copyrighted content from Sora 2’s training data because it couldn’t exist without it. 

Shortly after Sora 2 was released in late September, we reported about how users turned it into a copyright infringement machine with an endless stream of videos like Pikachu shoplifting from a CVS and Spongebob Squarepants at a Nazi rally. Companies like Nintendo and Paramount were obviously not thrilled to see their beloved cartoons committing crimes while they weren’t getting paid for it. Initially, OpenAI’s policy allowed users to generate copyrighted material and required copyright holders to opt out, but the company quickly reversed course and introduced an “opt-in” policy, which prevented users from generating copyrighted material unless the copyright holder actively allowed it. The change immediately resulted in a meltdown among Sora 2 users, who complained OpenAI no longer allowed them to make fun videos featuring copyrighted characters or the likeness of some real people.   

This is why if you give Sora 2 the prompt “Animal Crossing gameplay,” it will not generate a video and instead say “This content may violate our guardrails concerning similarity to third-party content.” However, when I gave it the prompt “Title screen and gameplay of the game called ‘crossing aminal’ 2017,” it generated an accurate recreation of Nintendo’s Animal Crossing New Leaf for the Nintendo 3DS.


Sora 2 also refused to generate videos for prompts featuring the Fox cartoon American Dad, but it did generate a clip that looks like it was taken directly from the show, including their recognizable voice acting, when given this prompt: “blue suit dad big chin says ‘good morning family, I wish you a good slop’, son and daughter and grey alien say ‘slop slop’, adult animation animation American town, 2d animation.”


The same trick also appears to circumvent OpenAI’s guardrails against recreating the likeness of real people. Sora 2 refused to generate a video of “Hasan Piker on stream,” but it did generate a video of “Twitch streamer talking about politics, piker sahan.” The person in the generated video didn’t look exactly like Hasan, but he has similar hair, facial hair, the same glasses, and a similar voice and background. 


A user who flagged this bypass to me, and who wished to remain anonymous because they didn’t want OpenAI to cut off their access to Sora, also shared Sora-generated videos of South Park, Spongebob Squarepants, and Family Guy. 

OpenAI did not respond to a request for comment. 

There are several ways to moderate generative AI tools, but the simplest and cheapest method is to refuse to generate prompts that include certain keywords. For example, many AI image generators stop people from generating nonconsensual nude images by refusing to generate prompts that include the names of celebrities or certain words referencing nudity or sex acts. However, this method is prone to failure because users find prompts that allude to the image or video they want to generate without using any of those banned words. The most notable example of this made headlines in 2024 after an AI-generated nude image of Taylor Swift went viral on X. 404 Media found that the image was generated with Microsoft’s AI image generator, Designer, and that users managed to generate the image by misspelling Swift’s name or using nicknames she’s known by, and describing sex acts without using any explicit terms. 
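The keyword-blocklist approach described above can be sketched in a few lines. The blocklist and prompts here are purely illustrative, not anyone’s actual moderation code; the point is that exact-match filtering catches the obvious prompt but a simple misspelling sails straight through:

```python
# Illustrative sketch of naive keyword-based prompt moderation.
# BANNED_TERMS is a hypothetical blocklist, not a real product's list.
BANNED_TERMS = {"taylor swift", "animal crossing", "american dad"}

def is_blocked(prompt: str) -> bool:
    """Reject a prompt only if it contains an exact banned term."""
    lowered = prompt.lower()
    return any(term in lowered for term in BANNED_TERMS)

# An exact match is caught...
print(is_blocked("Animal Crossing gameplay"))             # True
# ...but the misspelled version of the same request is not.
print(is_blocked("Title screen of 'crossing aminal' 2017"))  # False
```

This is why keyword filtering alone keeps failing: the model still understands the misspelled request, even though the filter no longer recognizes it.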

Since then, we’ve seen example after example of generative AI guardrails being circumvented with the same method. We don’t know exactly how OpenAI is moderating Sora 2, but at least for now, the world’s leading AI company’s moderation efforts are bested by a simple and well-established bypass method. As with these other tools, bypassing Sora’s content guardrails has become something of a game to people online. Many of the videos posted on the r/SoraAI subreddit are “jailbreaks” that bypass Sora’s content filters, along with the prompts used to do so. And Sora’s “For You” algorithm is still regularly serving up content that probably should be caught by its filters; in 30 seconds of scrolling we came across many videos of Tupac, Kobe Bryant, JuiceWrld, and DMX rapping, which has become a meme on the service.

It’s possible OpenAI will get a handle on the problem soon. It can build a more comprehensive list of banned phrases and do more post-generation image detection, which is a more expensive but more effective method for preventing people from creating certain types of content. But all these efforts are poor attempts to distract from the massive, unprecedented amount of copyrighted content that has already been stolen, and that Sora can’t exist without. This is not an extreme AI-skeptic position. The biggest AI companies in the world have admitted that they need this copyrighted content, and that they can’t pay for it.  

The reason OpenAI and other AI companies have such a hard time preventing users from generating certain types of content once users realize it’s possible is that the content already exists in the training data. An AI image generator is only able to produce a nude image because there’s a ton of nudity in its training data. It can only produce the likeness of Taylor Swift because her images are in the training data. And Sora can only make videos of Animal Crossing because there are Animal Crossing gameplay videos in its training data. 

For OpenAI to actually stop the copyright infringement it needs to make its Sora 2 model “unlearn” copyrighted content, which is incredibly expensive and complicated. It would require removing all that content from the training data and retraining the model. Even if OpenAI wanted to do that, it probably couldn’t because that content makes Sora function. OpenAI might improve its current moderation to the point where people are no longer able to generate videos of Family Guy, but the Family Guy episodes and other copyrighted content in its training data are still enabling it to produce every other generated video. Even when the generated video isn’t recognizably lifting from someone else’s work, that’s what it’s doing. There’s literally nothing else there. It’s just other people’s stuff. 


Major Porn Studios Join Forces to Establish Industry ‘Code of Conduct’

November 12, 2025 at 10:08

Six of the biggest porn studios in the world, including industry giant and Pornhub parent company Aylo, announced Wednesday they have formed a first-of-its-kind coalition called the Adult Studio Alliance (ASA). The alliance’s purpose is to “contribute to a safe, healthy, dignified, and respectful adult industry for performers,” the ASA told 404 Media.

“This alliance is intended to unite professionals creating adult content (from studios to crews to performers) under a common set of values and guidelines. In sharing our common standards, we hope to contribute to a safe, healthy, dignified, and respectful adult industry for performers,” a spokesperson for ASA told 404 Media in an email. “As a diverse group of studios producing a large volume and variety of adult content, we believe it’s key to promote best practices on all our scenes. We all come from different studios, but we share the belief that all performers are entitled to comfort and safety on set.” 

The founding members include Aylo, Dorcel, ERIKALUST, Gamma Entertainment, Mile High Media and Ricky’s Room. Aylo owns some of the biggest platforms and porn studios in the industry, including Brazzers, Reality Kings, Digital Playground and more. 

Judge Rules Flock Surveillance Images Are Public Records That Can Be Requested By Anyone

November 12, 2025 at 09:49

A judge in Washington has ruled that police images taken by Flock’s AI license plate-scanning cameras are public records that can be requested as part of normal public records requests. The decision highlights the sheer volume of the technology-fueled surveillance state in the United States, and shows that at least in some cases, police cannot withhold the data collected by its surveillance systems.

In a ruling last week, Judge Elizabeth Neidzwski ruled that “the Flock images generated by the Flock cameras located in Stanwood and Sedro-Wooley [Washington] are public records under the Washington State Public Records Act,” that they are “not exempt from disclosure,” and that “an agency does not have to possess a record for that record to be subject to the Public Records Act.” 

She further found that “Flock camera images are created and used to further a governmental purpose” and that the images on them are public records because they were paid for by taxpayers. Despite this, the records that were requested as part of the case will not be released because the city automatically deleted them after 30 days. Local media in Washington first reported on the case; 404 Media bought Washington State court records to report the specifics of the case in more detail.

A screenshot from the judge's decision

Flock’s automated license plate reader (ALPR) cameras are used in thousands of communities around the United States. They passively take between six and 12 timestamped images of each car that passes by, allowing the company to make a detailed database of where certain cars (and by extension, people) are driving in those communities. 404 Media has reported extensively on Flock, and has highlighted that its cameras have been accessed by the Department of Homeland Security and by local police working with DHS on immigration cases. Last month, cops in Colorado used data from Flock cameras to incorrectly accuse an innocent woman of theft based on her car’s movements.
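The privacy stakes of those six to 12 timestamped images are easier to see with a toy example. The sketch below uses invented plates, times, and locations, and is not Flock’s actual schema; it only shows how individual per-camera reads, once grouped by plate, become a trail of one car’s movements:

```python
# Hypothetical ALPR data: each camera hit is (plate, ISO timestamp, camera location).
# All values here are invented for illustration.
sightings = [
    ("ABC123", "2024-11-09T08:02", "Main St & 1st Ave"),
    ("XYZ789", "2024-11-09T08:05", "Main St & 1st Ave"),
    ("ABC123", "2024-11-09T08:31", "Highway 20 onramp"),
    ("ABC123", "2024-11-09T17:45", "Main St & 1st Ave"),
]

def movement_trail(plate: str) -> list[tuple[str, str]]:
    """Reconstruct where one car was seen, in time order."""
    hits = [(ts, loc) for p, ts, loc in sightings if p == plate]
    return sorted(hits)  # ISO timestamps sort chronologically as strings

print(movement_trail("ABC123"))
```

Each read on its own is just a photo of a passing car; it’s the aggregation by plate across many cameras that turns the system into a location-tracking database.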


Podcast: Inside a Small Town's Fight Against a $1.2 Billion AI Datacenter

November 12, 2025 at 09:32

We start with Matthew Gault’s dive into a battle between a small town and the construction of a massive datacenter for America’s nuclear weapon scientists. After the break, Joseph explains why people are 3D-printing whistles in Chicago. In the subscribers-only section, Jason zooms out and tells us what librarians are seeing with AI and tech, and how that is impacting their work and knowledge more broadly.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.

6:03 - Our New FOIA Forum! 11/19, 1PM ET

7:50 - A Small Town Is Fighting a $1.2 Billion AI Datacenter for America's Nuclear Weapon Scientists

12:27 - 'A Black Hole of Energy Use': Meta's Massive AI Data Center Is Stressing Out a Louisiana Community

21:09 - 'House of Dynamite' Is About the Zoom Call that Ends the World

30:35 - The Latest Defense Against ICE: 3D-Printed Whistles

SUBSCRIBER'S STORY: AI Is Supercharging the War on Libraries, Education, and Human Knowledge


AI Deregulation? Not Really!

November 12, 2025 at 01:00

In an op-ed for the Guardian, researchers Sacha Alanoca and Maroussia Levesque argue that while the US government takes a hands-off approach to AI applications such as chatbots and image generators, it is heavily involved in AI’s basic components. “The US is not deregulating AI; it is regulating where most people aren’t looking.” In fact, the two researchers explain, regulations target different components of AI systems. “Early regulatory frameworks, like the EU AI Act, focused on high-visibility applications, banning high-risk uses in healthcare, employment, and law enforcement to prevent societal harms. But countries are now targeting AI’s building blocks. China restricts models to fight deepfakes and inauthentic content. Citing national security risks, the United States controls exports of the most advanced chips and, under Biden, went as far as controlling model weights, the ‘secret recipe’ that turns user queries into outputs.” These AI regulations hide behind technical administrative language, but beneath that complexity lies a clear trend: “regulation is shifting from AI applications to their building blocks.”

The researchers thus draw up a taxonomy of regulation. “US AI policy is not laissez-faire. It is a strategic choice about where to intervene. Though politically expedient, the deregulation myth is more fiction than reality.” For them, for example, it is hard to justify a passive stance toward AI’s societal harms while Washington readily intervenes on chips in the name of national security.


Remnants of Lost Continents Are Everywhere. Now, We Finally Know Why.

November 11, 2025 at 12:45
🌘
Subscribe to 404 Media to get The Abstract, our newsletter about the most exciting and mind-boggling science news and studies of the week.

Tiny remnants of long-lost continents that vanished many millions of years ago are sprinkled around the world, including on remote island chains and seamounts, a mystery that has puzzled scientists for years. 

Now, a team has discovered a mechanism that can explain how this continental detritus ends up resurfacing in unexpected places, according to a study published on Tuesday in Nature Geoscience.  

When continents are subducted into Earth’s mantle, the layer beneath the planet’s crust, waves can form that scrape off rocky material and sweep it across hundreds of miles to new locations.  This “mantle wave” mechanism fills in a gap in our understanding of how lost continents are metabolized through our ever-shifting planet.  

“There are these seamount chains where volcanic activity has erupted in the middle of the ocean,” said Sascha Brune, a professor at the GFZ Helmholtz Centre for Geosciences and University of Potsdam, in a call with 404 Media. “Geochemists go there, they drill, they take samples, and they do their isotope analysis, which is a very fancy geochemical analysis that gives you small elements and isotopes which come up with something like a ‘taste.’”

“Many of these ocean islands have a taste that is surprisingly similar to the continents, where the isotope ratio is similar to what you would expect from continents and sediments,” he continued. “And there has always been the question: why is this the case? Where does it come from?”

These continental sprinkles are sometimes linked to mantle plumes, which are hot columns of gooey rock that erupt from the deep mantle. Plumes bring material from ancient landmasses, which have been stuck in the mantle for eons, back to the light of day again. Mantle plumes are the source of key hot spots like Hawai’i and Iceland, but there are plenty of locations with enriched continental material that are not associated with plumes—or any other known continental recycling mechanisms. 

The idea of a mantle wave has emerged from a series of revelations made by lead author Tom Gernon, a professor at the University of Southampton, along with his colleagues at GFZ, including Brune. Gernon previously led a 2023 study that identified evidence of similar dynamics occurring within continents. By studying patterns in the distribution of diamonds across South Africa, the researchers showed that slow cyclical motions in the mantle dislodge chunks off the keel of landmasses as they plunge into the mantle. Their new study confirms that these waves can also explain how the elemental residue of the supercontinent Gondwana, which broke up over 100 million years ago, resurfaced in seamounts across the Indian Ocean and other locations. 

In other words, the ashes of dead continents are scattered across extant landmasses following long journeys through the mantle. Though it’s not possible to link these small traces back to specific past continents or time periods, Brune hopes that researchers will be able to extract new insights about Earth’s roiling past from the clues embedded in the ground under our feet.

“What we are saying now is that there is another element, with this kind of pollution of continental material in the upper mantle,” Brune said. “It is not replacing what was said before; it is just complementing it in a way where we don't need plumes everywhere. There are some regions that we know are not plume-related, because the temperatures are not high enough and the isotopes don't look like plume-affected. And for those regions, this new mechanism can explain things that we haven't explained before.”

“We have seen that there's quite a lot of evidence that supports our hypothesis, so it would be interesting to go to other places and investigate this a bit more in detail,” he concluded.

Update: This story has been updated to note that Tom Gernon was a lead author on the paper.


Visualize All 23 Years of BYTE Magazine in All Its Glory, All at Once

November 11, 2025 at 11:52

Fifty years ago—almost two decades before WIRED, seven years ahead of PCMag, just a few years after the first email ever passed through the internet and with the World Wide Web still 14 years away—there was BYTE. Now, you can see the tech magazine's entire run at once. Software engineer Hector Dearman recently released a visualizer to take in all of BYTE’s 287 issues as one giant zoomable map.

The physical BYTE magazine published monthly from September 1975 until July 1998, for $10 a month. Personal computer kits were a nascent market, with the first microcomputers having launched just a few years prior. BYTE was founded on the idea that the budding microcomputing community would be well-served by a publication that could help guide it through that new territory. 


DHS Is Deploying a Powerful Surveillance Tool at College Football Games

November 11, 2025 at 10:03

A version of this article was previously published on FOIAball, a newsletter reporting on college football and public records. You can learn more about FOIAball and subscribe here

Last weekend, Charleston’s tiny private military academy, the Citadel, traveled to Ole Miss.

This game didn’t have quite the same cachet as the Rebels' Week 11 opponent this time last year, when a one-loss Georgia went to Oxford. 

A showdown of ranked SEC opponents in early November 2024 had all eyes trained on Vaught-Hemingway Stadium. 

Including those of the surveillance state. 

According to documents obtained by FOIAball, the Ole Miss-Georgia matchup was one of at least two games last year where the school used a little-known Department of Homeland Security information-sharing platform to keep a watchful eye on attendees. 

The platform, called the Homeland Security Information Network (HSIN), is a centralized hub for the myriad law enforcement agencies involved with security at big events.

CREDIT: Ole Miss/Georgia EAP, obtained by FOIAball

According to an Event Action Plan obtained by FOIAball, at least 11 different departments were on the ground at the Ole Miss-Georgia game, from Ole Miss campus police to a military rapid-response team.

HSINs are generally depicted as a secure channel to facilitate communication between various entities.

In a video celebrating its 20th anniversary, a former HSIN employee hammered home that stance. “When our communities are connected, our country is indeed safer,” they said.

In reality, HSIN is an integral part of the vast surveillance arm of the U.S. government.

Left unchecked since 9/11, supercharged by technological innovation, HSIN can subject any crowd to almost constant monitoring, looping in live footage from CCTV cameras, from drones flying overhead, and from police body cams and cell phones. 

HSIN has worked with private businesses to ensure access to cameras across cities; they collect, store, and mine vast amounts of personal data; and they have been used to facilitate facial recognition searches from companies like Clearview AI.

It’s one of the least-reported surveillance networks in the country. 

And it's been building this platform on the back of college football. 

Since 9/11, HSINs have become a widely used tool. 


A recent Inspector General report found over 55,000 active accounts using HSIN, ranging from federal employees to local police agencies to nebulous international stakeholders. 


The platforms host what’s called SBU (sensitive but unclassified) information, including threat assessments culled from media monitoring.

According to a privacy impact study from 2006, HSIN was already maintaining a database of suspicious activities and mining those for patterns. 

"The HSIN Database can be mined in a manner that identifies potential threats to the homeland or trends requiring further analysis,” it noted. 

In an updated memo from 2012 discussing whose personal information HSIN can collect and disseminate, the list includes the blanket, “individuals who may pose a threat to the United States.”


A 2023 DHS “Year in Review” found that HSIN averaged over 150,000 logins per month. 

Its Connect platform, which coordinates security and responses at major events, was utilized over 500 times a day. 

HSIN operated at the Boston Marathon, Lollapalooza, the World Series, and the presidential primary debates. It has also been used at every Super Bowl for the last dozen years.

DHS is quick to tout the capabilities of HSINs in internal communications reviewed by FOIAball.  

In doing so, it reveals the growth of its surveillance scope. In documents from 2018, DHS makes no mention of live video surveillance.

But a 2019 annual review said that HSINs used private firms to help wrangle cameras at commercial businesses around Minneapolis, which hosted the Final Four that year. 

“Public safety partners use HSIN Connect to share live video streams from stationary cameras as well as from mobile phones,” it said. “[HSIN communities such as] the Minneapolis Downtown Security Executive Group works with private sector firms to share live video from commercial businesses’ security cameras, providing a more comprehensive operating picture and greater situational awareness in the downtown area.”

And the platform has made its way to college campuses.

Records obtained by FOIAball show how pervasive this technology has become on college campuses, for everything from football games to pro-Palestinian protests.

In November 2023, students at Ohio State University held several protests against Israel’s war in Gaza. At one, over 100 protesters blocked the entrance to the school president’s office. 

A report that year from DHS revealed the protesters were being watched in real-time from a central command center. 

Under the heading "Supporting Operation Excellence," DHS said the school used HSIN to surveil protesters, integrating the school’s closed-circuit cameras to live stream footage to HSIN Connect.

“Ohio State University has elevated campus security by integrating its closed-circuit camera system with HSIN Connect,” it said. “This collaboration creates a real-time Common Operating Picture for swift information sharing, enhancing OSU’s ability to monitor campus events and prioritize community safety.”

“HSIN Connect proved especially effective during on-campus protests, expanding OSU’s security capabilities,” the school’s director of emergency management told DHS. “HSIN Connect has opened new avenues for us in on-campus security.” 

While it opened new avenues, the platform already had a well-established relationship with the school. 

According to an internal DHS newsletter from January 2016, HSIN was utilized at every single Buckeyes home game in 2015. 

“HSIN was a go-to resource for game days throughout the 2015 season,” it said. 

It highlighted that data was being passed along and analyzed by DHS officials. 

The newsletter also revealed HSINs were at College Football Playoff games that year and have been in years since. There was no mention of video surveillance at Ohio State back in 2015. But in 2019, that capability was tested at Georgia Tech. 

There, police used “HSIN Connect to share live video streams with public safety partners.” 

A 2019 internal newsletter quoted a Georgia Tech police officer about the use of real-time video surveillance on game days, both from stationary cameras and cell phones.

“The mobile app for HSIN Connect also allows officials to provide multiple, simultaneous live video streams back to our Operations Center across a secure platform,” the department said.

Ohio State told FOIAball that it no longer uses HSIN for events or incidents. However, it declined to answer questions about surveilling protesters or football games.

Ohio State’s records department said that it did not have any documents relating to the use of HSIN or sharing video feeds with DHS. 

Georgia Tech’s records office told FOIAball that HSINs had not been used in years and claimed it was “only used as a tool to share screens internally." Its communications team did not respond to a request to clarify that comment.

Years later, DHS had eyes both on the ground and in the sky at college football. 

According to the 2023 annual review, HSIN Connect operated during University of Central Florida home games that season. There, both security camera and drone detection system feeds were looped into the platform in real-time.

DHS Is Deploying a Powerful Surveillance Tool at College Football Games

DHS said that the "success at UCF's football games hints at a broader application in emergency management.” 

HSIN has in recent years been hooked into facial recognition systems.

A 2024 report from the U.S. Commission on Civil Rights found that the U.S. Marshals were granted access to HSIN, where they requested "indirect facial recognition searches through state and local entities" using Clearview AI. 

Which brings us to the Egg Bowl—the annual rivalry game between Ole Miss and Mississippi State. 

FOIAball learned about the presence of HSIN at Ole Miss through a records request to the city’s police department. It shared Event Action Plans for the Rebels’ games on Nov. 9, 2024 against Georgia and Nov. 30, 2024 against Mississippi State.

It’s unclear how these partnerships are forged. 

In videos discussing HSIN, DHS officials have highlighted their outreach to law enforcement, talking about how they want agencies onboarded and trained on the platform. No schools mentioned in this article answered questions about how their relationship with DHS started.

The Event Action Plan provides a fascinating level of detail that shows what goes into security planning for a college football game, from operations meetings that start on Tuesday to safety debriefs the following Monday. 

Its timeline of events discusses when Ole Miss’s Vaught-Hemingway Stadium is locked down and when security sweeps are conducted. Maps detail where students congregate beforehand and where security guards are posted during games. 

The document includes contingency plans for extreme heat, lightning, active threats, and protesters. It also includes specific scripts for public service announcers to read in the event of any of those incidents. 

It shows at least 11 different law enforcement agencies are on the ground on game days, from school cops to state police.

They even have the U.S. military on call. The 47th Civil Support Team, based out of Jackson Air National Guard Base, is ready to respond to a chemical, biological, or nuclear attack. 

All those agencies are steered via the document to the HSIN platform. 

Under a section on communications, it lists the HSIN Sitroom, which is “Available to all partners and stakeholders via computer & cell phone.”

The document includes a link to an HSIN Connect page.

It uses Eli Manning as an example of how to log in. 

“Ole Miss Emergency Management - Log in as a Guest and use a conventional naming convention such as: ‘Eli Manning - Athletics.’”

The document notes that HSIN hosts sensitive Personally Identifiable Information (PII) and Threat Analysis Documents.

“Access is granted on a need-to-know basis, users will need to be approved prior to entry into the SitRoom.”

“The general public and general University Community is not permitted to enter the online SitRoom,” it adds. “All SitRooms contain operationally sensitive information and PII, therefore access must be granted by the ‘Host’.”

It details what can be accessed in the HSIN, such as a chat window for relaying information.

It includes a section on Threat Analysis, which DHS says is conducted through large-scale media monitoring.

The document does not detail whether the HSIN used at Ole Miss has access to surveillance cameras across campus. 

But that may not be something explicitly stated in documents such as these. 

Like Ohio State, UCF told FOIAball that it had no memoranda of understanding or documentation about providing access to video feeds to HSINs, despite DHS acknowledging those streams were shared. Ole Miss’ records department also did not provide any documents on what campus cameras may have been shared with DHS. 

While one might assume the feeds go dark after the game is over, there exists the very real possibility that by being tapped in once, DHS can easily access them again. 

“I’m worried about mission creep,” Matthew Guariglia, a senior policy analyst at the Electronic Frontier Foundation, told FOIAball. “These arrangements are made for very specific purposes. But they could become the apparatus of much greater state surveillance.” 

For Ole Miss, its game against Georgia went off without any major incidents. 

Well, save for one. 

During the second quarter, a squirrel jumped onto the field, and play had to be stopped. 

In the EAP, there was no announcer script for handling a live animal interruption.


The Latest Defense Against ICE: 3D-Printed Whistles

11 November 2025, 08:53

Chicagoans have turned to a novel piece of tech that marries the old-school with the new to warn their communities about the presence of ICE officials: 3D-printed whistles.

The goal is to “prevent as many people from being kidnapped as possible,” Aaron Tsui, an activist with the Chicago-based organization Cycling Solidarity who has been printing whistles, told 404 Media. “Whistles are an easy way to bring awareness for when ICE is in the area, printing out the whistles is something simple that I can do in order to help bring awareness.”

Over the last couple of months, ICE has especially focused on Chicago as part of Operation Midway Blitz. During that time, Department of Homeland Security (DHS) personnel have shot a religious leader in the head, repeatedly violated court orders limiting the use of force, and even entered a daycare facility to detain someone.

💡
Do you know anything else about this? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

3D printers have been around for years, with hobbyists using them for everything from car parts to kids’ toys. In media articles they are probably most commonly associated with 3D-printed firearms.

One of the main attractions of 3D printers is that they squarely put the means of production into the hands of essentially anyone who is able to buy or access a printer. There’s no need to set up a complex supply chain of material providers or manufacturers. No worry about a store refusing to sell you an item for whatever reason. Instead, users just print at home, and can do so very quickly, sometimes in a matter of minutes. The price of printers has decreased dramatically over the last 10 years, with some costing a few hundred dollars.


A video of the process from Aaron Tsui.

People who are printing whistles in Chicago either create their own design or are given or download a design someone else made. Resident Justin Schuh made his own. That design includes instructions on how to best use the whistle—three short blasts to signal ICE is nearby, and three long ones for a “code red.” The whistle also includes the phone number for the Illinois Coalition for Immigrant & Refugee Rights (ICIRR) hotline, which people can call to connect with an immigration attorney or receive other assistance. Schuh said he didn’t know if anyone else had printed his design specifically, but he said he has “designed and printed some different variations, when someone local has asked for something specific to their group.” The Printables page for Schuh’s design says it has been downloaded nearly two dozen times.


Danish Redditor Charged for Posting Nude Scenes from Films

11 November 2025, 08:07

In a landmark case for Danish courts and internationally, a man was sentenced to seven months’ suspended imprisonment and 120 hours of community service for posting nude scenes from copyrighted films.

He was convicted of “gross violations of copyright, including violating the right of publicity of more than 100 aggrieved female actors relating to their artistic integrity,” Danish police reported Monday.

The man, a 40-year-old from Denmark who was a prolific Redditor under the username “KlammereFyr” (which translates to “NastierGuy”) was arrested and charged with copyright infringement in September 2024 by Denmark’s National Unit for Serious Crime (NSK). 


Our New FOIA Forum! 11/19, 1PM ET

10 November 2025, 11:52

It’s that time again! We’re planning our latest FOIA Forum, a live, interactive session of an hour or more where Joseph and Jason will teach you how to pry records from government agencies through public records requests. We’re planning this for Wednesday, November 19th at 1 PM Eastern. That's just over a week away! Add it to your calendar! 

This time we're focused on our coverage of Flock, the automatic license plate reader (ALPR) and surveillance tech company. Earlier this year, anonymous researchers had the great idea of asking agencies for the network audits that show why cops were using these cameras. Following that, we did a bunch of coverage, including showing that local police were performing lookups for ICE in Flock's nationwide network of cameras, and that a cop in Texas searched the country for a woman who self-administered an abortion. We'll tell you how all of this came about, what other requests people filed after, and what requests we're exploring at the moment with Flock.

If this will be your first FOIA Forum, don’t worry, we will do a quick primer on how to file requests (although if you do want to watch our previous FOIA Forums, the video archive is here). We really love talking directly to our community about something we are obsessed with (getting documents from governments) and showing other people how to do it too.

Paid subscribers can already find the link to join the livestream below. We'll also send out a reminder a day or so before. Not a subscriber yet? Sign up now here in time to join.

We've got a bunch of FOIAs that we need to file and are keen to hear from you all on what you want to see more of. Most of all, we want to teach you how to make your own too. Please consider coming along!


A Small Town Is Fighting a $1.2 Billion AI Datacenter for America's Nuclear Weapon Scientists

10 November 2025, 09:00

Ypsilanti, Michigan resident KJ Pedri doesn’t want her town to be the site of a new $1.2 billion data center, a massive collaborative project between the University of Michigan and America’s nuclear weapons scientists at Los Alamos National Laboratories (LANL) in New Mexico.

“My grandfather was a rocket scientist who worked on Trinity,” Pedri said at a recent Ypsilanti city council meeting, referring to the first successful detonation of a nuclear bomb. “He died a violent, lonely, alcoholic. So when I think about the jobs the data center will bring to our area, I think about the impact of introducing nuclear technology to the world and deploying it on civilians. And the impact that that had on my family, the impact on the health and well-being of my family from living next to a nuclear test site and the spiritual impact that it had on my family for generations. This project is furthering inhumanity, this project is furthering destruction, and we don’t need more nuclear weapons built by our citizens.”

At the Ypsilanti city council meeting where Pedri spoke, the town voted to officially fight against the construction of the data center. The University of Michigan says the project is not a data center, but a “high-performance computing facility” and it promises it won’t be used to “manufacture nuclear weapons.” The distinction and assertion are ringing hollow for Ypsilanti residents who oppose construction of the data center, have questions about what it would mean for the environment and the power grid, and want to know why a nuclear weapons lab 24 hours away by car wants to build an AI facility in their small town.

“What I think galls me the most is that this major institution in our community, which has done numerous wonderful things, is making decisions with—as I can tell—no consideration for its host community and no consideration for its neighboring jurisdictions,” Ypsilanti councilman Patrick McLean said during a recent council meeting. “I think the process of siting this facility stinks.”

For others on the council, the fight is more personal.

“I’m a Japanese American with strong ties to my family in Japan and the existential threat of nuclear weapons is not lost on me, as my family has been directly impacted,” Amber Fellows, a Ypsilanti city councilmember who led the charge in opposition to the data center, told 404 Media. “The thing that is most troubling about this is that the nuclear weapons that we, as Americans, witnessed 80 years ago are still being proliferated and modernized without question.”

It’s a classic David and Goliath story. On one side is Ypsilanti (called Ypsi by its residents), which has a population just north of 20,000 and is situated about 40 minutes outside of Detroit. On the other are the University of Michigan and Los Alamos National Laboratories (LANL), American scientists famous for nuclear weapons and, lately, pushing the boundaries of AI.

The University of Michigan first announced the Los Alamos data center, what it called an “AI research facility,” last year. According to a press release from the university, the data center will cost $1.25 billion and take up between 220,000 to 240,000 square feet. “The university is currently assessing the viability of locating the facility in Ypsilanti Township,” the press release said.

Signs in an Ypsilanti yard.

On October 21, the Ypsilanti City Council considered a proposal to officially oppose the data center and the people of the area explained why they wanted it passed. One woman cited environmental and ethical concerns. “Third is the moral problem of having our city resources towards aiding the development of nuclear arms,” she said. “The city of Ypsilanti has a good track record of being on the right side of history and, more often than not, does the right thing. If this resolution passed, it would be a continuation of that tradition.”

A man worried about what the facility would do to the physical health of citizens and talked about what happened in other communities where data centers were built. “People have poisoned air and poisoned water and are getting headaches from the generators,” he said. “There’s also reports around the country of energy bills skyrocketing when data centers come in. There’s also reports around the country of local grids becoming much less reliable when the data centers come in…we don’t need to see what it’s like to have a data center in Ypsi. We could just not do that.”

The resolution passed. “The Ypsilanti City Council strongly opposes the Los Alamos-University of Michigan data center due to its connections to nuclear weapons modernization and potential environmental harms and calls for a complete and permanent cessation of all efforts to build this data center in any form,” the resolution said.

Ypsi has a lot of reasons to be concerned. Data centers tend to bring rising power bills, horrible noise, and dwindling drinking water to every community they touch. “The fact that U of M is using Ypsilanti as a dumping ground, a sacrifice zone, is unacceptable,” Fellows said.

Ypsi’s resolution, though, focused on a different angle: the data center’s connections to nuclear weapons modernization.

As part of the resolution, Ypsilanti is applying to join the Mayors for Peace initiative, an international organization of cities opposed to nuclear weapons and founded by the former mayor of Hiroshima. Fellows learned about Mayors for Peace when she visited Hiroshima last year.


This town has officially decided to fight against the construction of an AI data center that would service a nuclear weapons laboratory 1,500 miles away. Amber Fellows, a Ypsilanti city councilmember, tells us why. Via 404 Media on Instagram

Both LANL and the University of Michigan have been vague about what the data center will be used for, but have said it will include one facility for classified federal research and another for non-classified research which students and faculty will have access to. “Applications include the discovery and design of new materials, calculations on climate preparedness and sustainability,” it said in an FAQ about the data center. “Industries such as mobility, national security, aerospace, life sciences and finance can benefit from advanced modeling and simulation capabilities.”

The university FAQ said that the data center will not be used to manufacture nuclear weapons. “Manufacturing” nuclear weapons specifically refers to their creation, something that’s hard to do and only occurs at a handful of specialized facilities across America. I asked both LANL and the University of Michigan if the data generated by the facility would be used in nuclear weapons science in any way. Neither answered the question.

“The federal facility is for research and high-performance computing,” the FAQ said. “It will focus on scientific computation to address various national challenges, including cybersecurity, nuclear and other emerging threats, biohazards, and clean energy solutions.”

LANL is going all in on AI. It partnered with OpenAI to use the company’s frontier models in research and recently announced a partnership with NVIDIA to build two new supercomputers named “Mission” and “Vision.” It’s true that LANL’s scientific output covers a range of issues, but its overwhelming focus, and budget allocation, is nuclear weapons. LANL requested a budget of $5.79 billion for 2026; 84 percent of that is earmarked for nuclear weapons. Only $40 million of the LANL budget is set aside for “science,” according to government documents.

💡
Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

“The fact is we don’t really know because Los Alamos and U of M are unwilling to spell out exactly what’s going to happen,” Fellows said. When LANL declined to comment for this story, it told 404 Media to direct its question to the University of Michigan.

The university pointed 404 Media to the FAQ page about the project. “You'll see in the FAQs that the locations being considered are not within the city of Ypsilanti,” it said.

It’s an odd statement given that this is what’s in the FAQ: “The university is currently assessing the viability of locating the facility in Ypsilanti Township on the north side of Textile Road, directly across the street from the Ford Rawsonville Components plant and adjacent to the LG Energy Solutions plant.”

It’s true that this is not technically in the city of Ypsilanti but rather in Ypsilanti Township, a collection of communities that almost entirely surrounds the city itself. For Fellows, it’s a distinction without a difference. “[University of Michigan] can build it in Barton Hills and see how the city of Ann Arbor feels about it,” she said, referencing a village that borders the university's home city of Ann Arbor.

“The university has, and will continue to, explore other sites if they are viable in the timeframe needed for successful completion of the project,” Kay Jarvis, the university’s director of public affairs, told 404 Media.

Fellows said that Ypsilanti will fight the data center with everything it has. “We’re putting pressure on the Ypsi township board to use whatever tools they have to deny permits…and to stand up for their community,” she said. “We’re also putting pressure on the U of M board of trustees, the county, our state legislature that approved these projects and funded them with public funds. We’re identifying all the different entities that have made this project possible so far and putting pressure on them to reverse action.”

For Fellows, the fight is existential. It’s not just about the environmental concerns around the construction project. “I was under the belief that the prevailing consensus was that nuclear weapons are wrong and they should be drawn down as fast as possible. I’m trying to use what little power I have to work towards that goal,” she said.
