How Tea’s Founder Convinced Millions of Women to Spill Their Secrets, Then Exposed Them to the World

August 19, 2025 at 10:04

On March 16, 2023, Paola Sanchez, the founder and administrator of Are We Dating the Same Guy?, a collection of Facebook groups where women share “red flags” about men, received a message from Christianne Burns, then-fiancée of Tea CEO and founder Sean Cook. 

“We have an app ready to go called ‘Tea - Women’s Dating Community’, that could be a perfect transition for the ‘Are we dating the same guy’ facebook groups since it sounds like those are on their way under… Tea has all the safety measures that Facebook lacked and more to ensure that only women are in the group,” Burns said. “We are looking for a face and founder of the app and because of your experience, we think YOU will be the perfect person! This can be your thing and we are happy to take a step back and let you lead all operations of the product.”

The Tea app, much like the Are We Dating the Same Guy Facebook groups, invites women to join and share red flags about men to help other women avoid them. In order to verify that every person who joined the Tea app was a woman, Tea asked users to upload a picture of their ID or their face. Tea was founded in 2022 but largely flew under the radar until July this year, when it reached the top of the Apple App Store chart, earned glowing coverage in the media, and claimed it had more than 1.6 million users. 

Burns’ offer to make Sanchez the “face” of Tea wasn’t her first attempt to reach out, but Sanchez never replied, despite Burns’ multiple attempts to recruit her. As it turned out, Tea did not have all the “safety measures” it needed to keep women safe. As 404 Media first reported, Tea users’ images, identifying information, and more than a million private conversations, including some about cheating partners and abortions, were compromised in two separate security breaches in late July. The first of these breaches was immediately abused by a community of misogynists on 4chan to humiliate women whose information was compromised. 

🍵
Do you work, or have you worked, at Tea? I would love to hear from you. Using a non-work device, you can message me securely on Signal at @emanuel.404. Otherwise, send me an email at emanuel@404media.co.

A 404 Media investigation now reveals that after Tea failed to recruit Sanchez as the face of the app and adopt the Are We Dating the Same Guy community, Tea shifted tactics to raid those Facebook groups for users. Tea paid influencers to undermine Are We Dating the Same Guy and created competing Facebook groups with nearly identical names. 404 Media also identified a number of seemingly hijacked Facebook accounts that spammed the real Are We Dating the Same Guy groups with links to the Tea app. 

Wikipedia Editors Adopt ‘Speedy Deletion’ Policy for AI Slop Articles

August 5, 2025 at 11:42

Wikipedia editors just adopted a new policy to help them deal with the slew of AI-generated articles flooding the online encyclopedia. The new policy, which gives an administrator the authority to quickly delete an AI-generated article that meets certain criteria, isn’t only important to Wikipedia; it’s also an important example of how to deal with the growing AI slop problem, set by a platform that has so far managed to withstand the various forms of enshittification that have plagued the rest of the internet.

Wikipedia is maintained by a global, collaborative community of volunteer contributors and editors, and part of the reason it remains a reliable source of information is that this community takes a lot of time to discuss, deliberate, and argue about everything that happens on the platform, be it changes to individual articles or the policies that govern how those changes are made. It is normal for entire Wikipedia articles to be deleted, but the main process for deletion usually requires a week-long discussion phase during which Wikipedians try to come to consensus on whether to delete the article. 

However, in order to deal with common problems that clearly violate Wikipedia’s policies, Wikipedia also has a “speedy deletion” process, where one person flags an article, an administrator checks if it meets certain conditions, and then deletes the article without the discussion period. 

For example, articles composed entirely of gibberish, meaningless text, or what Wikipedia calls “patent nonsense,” can be flagged for speedy deletion. The same is true for articles that are just advertisements with no encyclopedic value. If someone flags an article for deletion because it is “most likely not notable,” that is a more subjective evaluation that requires a full discussion. 

At the moment, most articles that Wikipedia editors flag as being AI-generated fall into the latter category because editors can’t be absolutely certain that they were AI-generated. Ilyas Lebleu, a founding member of WikiProject AI Cleanup and an editor who contributed some critical language to the recently adopted policy on AI-generated articles and speedy deletion, told me that this is why previous proposals on regulating AI-generated articles on Wikipedia have struggled. 

“While it can be easy to spot hints that something is AI-generated (wording choices, em-dashes, bullet lists with bolded headers, ...), these tells are usually not so clear-cut, and we don't want to mistakenly delete something just because it sounds like AI,” Lebleu told me in an email. “In general, the rise of easy-to-generate AI content has been described as an ‘existential threat’ to Wikipedia: as our processes are geared towards (often long) discussions and consensus-building, the ability to quickly generate a lot of bogus content is problematic if we don't have a way to delete it just as quickly. Of course, AI content is not uniquely bad, and humans are perfectly capable of writing bad content too, but certainly not at the same rate. Our tools were made for a completely different scale.”

The solution Wikipedians came up with is to allow the speedy deletion of clearly AI-generated articles that broadly meet two conditions. The first is if the article includes “communication intended for the user.” This refers to language in the article that is clearly an LLM responding to a user prompt, like “Here is your Wikipedia article on…,” “Up to my last training update…,” and “as a large language model.” This is a clear tell that the article was generated by an LLM, and a method we’ve previously used to identify AI-generated social media posts and scientific papers.

Lebleu, who told me they’ve seen these tells “quite a few times,” said that more importantly, they indicate the user hasn’t even read the article they’re submitting. 

“If the user hasn't checked for these basic things, we can safely assume that they haven't reviewed anything of what they copy-pasted, and that it is about as useful as white noise,” they said.

The other condition that would make an AI-generated article eligible for speedy deletion is if its citations are clearly wrong, another type of error LLMs are prone to. This can include external links to books, articles, or scientific papers that don’t exist or don’t resolve, as well as links that lead to completely unrelated content. Wikipedia’s new policy gives the example of “a paper on a beetle species being cited for a computer science article.”
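
Neither condition requires deep forensics; both are mechanical enough that an editor could pre-screen for them before an administrator makes the actual call. A minimal illustrative sketch in Python, with the tell-phrases taken from the examples above (the function names and phrase list are my own, not Wikipedia tooling):

```python
import urllib.error
import urllib.request

# Tell-phrases quoted in the policy discussion above (illustrative, not exhaustive)
LLM_TELLS = [
    "here is your wikipedia article on",
    "up to my last training update",
    "as a large language model",
]

def has_llm_boilerplate(article_text: str) -> bool:
    """Condition 1: 'communication intended for the user' left in the article."""
    lowered = article_text.lower()
    return any(tell in lowered for tell in LLM_TELLS)

def dead_citations(urls: list[str], timeout: float = 5.0) -> list[str]:
    """Condition 2 (partial): cited external links that don't resolve.
    A link that resolves but points at unrelated content still needs a human eye."""
    dead = []
    for url in urls:
        try:
            req = urllib.request.Request(url, method="HEAD")
            urllib.request.urlopen(req, timeout=timeout)
        except (urllib.error.URLError, ValueError):
            dead.append(url)
    return dead
```

An article flagged by either check would still go to an administrator, who decides whether the speedy-deletion criterion actually applies.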

Lebleu said that speedy deletion is a “band-aid” that can take care of the most obvious cases and that the AI problem will persist as they see a lot more AI-generated content that doesn’t meet these new conditions for speedy deletion. They also noted that AI can be a useful tool that could be a positive force for Wikipedia in the future. 

“However, the present situation is very different, and speculation on how the technology might develop in the coming years can easily distract us from solving issues we are facing now," they said. “A key pillar of Wikipedia is that we have no firm rules, and any decisions we take today can be revisited in a few years when the technology evolves.”

Lebleu said that ultimately the new policy leaves Wikipedia in a better position than before, but not a perfect one.

“The good news (beyond the speedy deletion thing itself) is that we have, formally, made a statement on LLM-generated articles. This has been a controversial aspect in the community before: while the vast majority of us are opposed to AI content, exactly how to deal with it has been a point of contention, and early attempts at wide-ranging policies had failed. Here, building up on the previous incremental wins on AI images, drafts, and discussion comments, we workshopped a much more specific criterion, which nonetheless clearly states that unreviewed LLM content is not compatible in spirit with Wikipedia.”

UK Users Need to Post Selfie or Photo ID to View Reddit's r/IsraelCrimes, r/UkraineWarFootage

July 29, 2025 at 10:48

Several Reddit communities dedicated to sharing news and media from conflicts around the world now require users in the UK to submit a photo ID or selfie in order to prove they are old enough to view “mature” content. The new age verification system is a result of the recently enacted Online Safety Act in the UK, which aims to protect children from certain types of content and hold platforms like Reddit accountable if they fail to do so. 

Some of the Reddit communities that now require this age verification check include:

A Second Tea Breach Reveals Users’ DMs About Abortions and Cheating

July 28, 2025 at 13:02

A second, major security issue with women’s dating safety app Tea has exposed much more user data than the first breach, which we reported last week, with an independent security researcher now finding it was possible for hackers to access messages between users discussing abortions, cheating partners, and phone numbers they sent to one another. Despite Tea’s initial statement that “the incident involved a legacy data storage system containing information from over two years ago,” the second issue, impacting a separate database, is much more recent, affecting messages up until last week, according to the researcher’s findings, which 404 Media verified. The researcher said they also found the ability to send a push notification to all of Tea’s users.

It’s hard to overstate how sensitive this data is and how it could put Tea’s users at risk if it fell into the wrong hands. When signing up, Tea encourages users to choose an anonymous screenname, but it was trivial for 404 Media to find the real-world identities of some users given the nature of their messages, which Tea had led them to believe were private. Users could be easily found via the social media handles, phone numbers, and real names they shared in these chats. These conversations also frequently make damning accusations against people who are named in the private messages and in some cases are easy to identify. 

Women Dating Safety App 'Tea' Breached, Users' IDs Posted to 4chan

July 25, 2025 at 11:18

Users from 4chan claim to have discovered an exposed database hosted on Google’s mobile app development platform, Firebase, belonging to the newly popular women’s dating safety app Tea. Users say they are rifling through people’s personal data and selfies uploaded to the app, and then posting that data online, according to screenshots, 4chan posts, and code reviewed by 404 Media. In a statement to 404 Media, Tea confirmed the breach also impacted some direct messages, but said that the data is from two years ago.

Tea, which claims to have more than 1.6 million users, reached the top of the App Store charts this week and has tens of thousands of reviews there. The app aims to provide a space for women to exchange information about men in order to stay safe, and verifies that new users are women by asking them to upload a selfie.

“Yes, if you sent Tea App your face and drivers license, they doxxed you publicly! No authentication, no nothing. It's a public bucket,” a post on 4chan providing details of the vulnerability reads. “DRIVERS LICENSES AND FACE PICS! GET THE FUCK IN HERE BEFORE THEY SHUT IT DOWN!”
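
We don’t know Tea’s exact configuration, but a “public bucket” in Firebase terms generally means Storage security rules that allow unauthenticated reads and listing. A minimal sketch of the kind of single-request audit a developer could run against their own project, assuming Firebase Storage’s standard REST listing endpoint (the bucket name is a placeholder):

```python
import json
import urllib.error
import urllib.request

# Placeholder bucket name: substitute your own project's bucket to audit it.
BUCKET = "your-project.appspot.com"

def is_publicly_listable(bucket: str) -> bool:
    """True if Firebase Storage answers an unauthenticated object-list request.

    Locked-down rules (e.g. `allow read: if request.auth != null;`) make this
    request fail with 401/403 instead of returning file metadata.
    """
    url = f"https://firebasestorage.googleapis.com/v0/b/{bucket}/o"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            listing = json.load(resp)
    except urllib.error.URLError:
        return False  # 401/403 or unreachable: listing is not public
    # An "items" array means anyone on the internet can enumerate stored files.
    return "items" in listing

if __name__ == "__main__":
    print(is_publicly_listable(BUCKET))
```

The 4chan posts describe exactly this failure mode: file listings and downloads that required no credentials at all.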

Credit Card Companies Are Hurting the Future of Video Games

July 24, 2025 at 15:51

Payment processors are rapidly changing what types of content can and can’t be easily accessed online. Mastercard, Visa, PayPal, Stripe, and other major players that process most of the money people earn on the internet have always had this power, and have long discriminated against sexual content and sex work, but they have been forcing more change recently. 

Some of the content they've recently pushed to the margins, like AI image generation models on Civitai that were used to create nonconsensual sexual content of real people, were actively used to cause harm, as 404 Media has reported many times. Other media they’ve asked companies to remove, like exploitative “rape and incest” games on Steam, did not have many defenders, but did not actively harm any specific person. 

But last night, when the independent game distribution platform Itch.io suddenly deindexed much of its adult content, creative works that people are ready to passionately defend became collateral damage. 

Itch.io, an alternative to Steam that makes it easier for anyone to upload almost any game and charge anything for it, including not charging at all, has become a critical piece of infrastructure in video game development in the past decade. It’s where many aspiring game developers and students get their start and share their work, especially when it doesn’t fit into traditional ideas of what a video game can be. Which is precisely what makes Itch.io, and particularly many of its NSFW games, so valuable: they allow small teams and individual creators to push the boundaries of the medium.

We're really hamstringing the future of arts and communication and creating meaningful culture if we adhere to the kind of position that says you can't make games about serious things.

To better understand what impact Itch.io’s policy changes will have on video games broadly, I called Naomi Clark, a game designer and chair of the NYU Game Center, where many students share their first games on Itch.io.

This interview has been condensed and edited for clarity.

404 Media: Where do you think things with Steam and Itch stand right now?

Naomi Clark: It’s been a wild ride. Ever since the news first appeared about Steam removing some games, there were glimmers that might herald bigger problems. At first it appeared to only be going after games in some very taboo sexual categories—incest, prison, some slaves and violence and things like that. And I haven't talked to anybody about this issue who is super hardcore about Steam absolutely needing to sell games about incest. That doesn't really seem to be the issue at hand. 

The problem is that it wasn't clear what was in these forbidden categories. It's kind of whatever payment processors object to. 

I think that's just extremely disturbing for a lot of people. Not because everyone's rushing to defend like a daddy/daughter incest game, number 12 or whatever. But because of the potential for it to be very nebulous and to spread to other categories, or any category that you can convince the CEO of MasterCard is objectionable and that his company should have no business with.

I think that playbook could be replicated in ways that could get really dangerous for LGBTQ communities, especially in this political environment, where anybody can weaponize the opinions of a banker or a payment processor against certain types of content. There are huge swaths of people in powerful positions who don't understand what games can potentially be about. Maybe they think this is all garbage, or it's just for titillation or pure entertainment, and no serious topics should be allowed. 

I can't think of a more harmful position for the future of a creative form, which is already so, so influential for anybody under the age of 40. We're really hamstringing the future of arts and communication and creating meaningful culture if we adhere to the kind of position that says you can't make games about serious things. You certainly can't make games if there's anything that we wouldn't want a child to see because they want to protect the children. 

Itch.io is a huge platform when it comes to accessibility for the maximum number of creators, where anybody can make a game and express themselves and find the audience for something that they've made. Every single student that I teach in the game program at NYU, where we have hundreds of students making games, uses Itch.io. They all put games on Itch. It’s where young and upcoming creators post games, but it's supported by a small team, and so we saw them trying to respond to these payment processors’ demands. Right now, every game that's flagged by creators or Itch moderators as having sensitive content, which includes games that are not sexual at all, that just have difficult topics, cannot be found by searching on Itch. 

Some games, just as on Steam, have been removed for having content that payment processors object to and nobody is totally sure what that list includes right now. So it kind of leaves everybody floundering and a little bit disturbed and scared in the dark, especially people who are trying to build a career or trying to support themselves by expressing themselves with games that not everybody is going to like. When you have people in power who think games are not important and who can be persuaded that some category shouldn't be allowed, then we end up in this really bad, extralegal mess with no accountability or transparency.

What do you think about people who are mad at Itch.io for complying with credit companies’ demands and who are encouraging people to not support them, to not give them any money?

I can understand the anger there, especially yesterday, when the stuff was happening [without an explanation]. I just didn't know what was going on, and was really disturbed. I think that the fuller picture has become a little bit more clear. And I suspect a lot of people don't know exactly how to interpret the official announcement from Itch, but my read of it is that this is a small team. They're not as vast as Valve. They are trying to figure out what to do very quickly, without a lot of the same kind of resources and infrastructure that's in place for Steam, and they had to respond quickly, probably, seemingly to some sort of deadline from the payment processors. Like, ‘remove the stuff or have the relationship terminated.’ Which would be a huge disaster. That would basically make it impossible for anyone without a source of funds to support game development, to really publish a game online. It would leave a gigantic vacuum in the whole creative community. So I think I understand the upset and anger when it wasn't clear what was going on. But now I think I'm a little bit more inclined to agree with people who say Itch is facing annihilation here. You can't expect them to sacrifice the whole platform for adult games within certain categories. 

I think some people maybe wanted it to “stand up against the fascists,” though it’s not even exactly clear what that means. There are people who are already operating on the assumption that if Itch capitulates to this demand from Visa, MasterCard, and whoever else, that it's going to mean that they're also going to throw LGBTQ creators under the bus eventually, and have those games completely removed from their site. I'm hopeful that's not true. I really think that the first line of defending this creative industry has to be in the hands of people that are running platforms, and those are big businesses, and they have to sort of figure out how they negotiate with the even larger multinational financial corporations that they're beholden to. 

I get why people are mad at Itch. They seem to be trying to create a path forward for people that are making various types of adult content and maybe allowing other types of payment processors, or not having games that fall into some categories. So we'll see how they do. It would be a heroic feat if they managed to get through it.

It seems to me that the payment processors don’t really want to negotiate. 

That's my assumption of why Valve and Itch are trying to avert the apocalyptic scenario where they do get cut off from payment processing. I assume that’s why the Itch team kind of leaped to these very hasty and disturbing moves to make all these things unsearchable, and to show they're complying immediately with these orders.

When I say negotiating, I don't mean trying to get Visa or MasterCard to change their mind. It's more like, ‘Hey, let us show you, yes, we are in compliance with everything that you're saying.’ I don't think there's too much choice there, but I think maybe they are not fully considering there's a fair amount of latitude in how platforms show that they're complying. One approach would be a scorched earth approach, to completely annihilate all mature rated games from the website forever. And that would probably work and that would have horrendous costs for the business in other ways, because nobody would trust them anymore. I think people who play video games are still sensitive in a multi-generational way to the threat of censorship coming down and taking away games that have any amount of sex or violence or serious content in them.

The platforms have to find some way of threading this needle. They can't go all the way to one extreme. I don’t think they can reject the request outright. They have to figure out how far to go. Valve is somewhat experienced in this. It's noteworthy that Valve did not take an incredibly scorched earth approach. They got rid of hundreds of games, not thousands, and they are games that I haven't seen a lot of people rushing to defend. 

I've seen some of the types of content that Itch removed completely from the site and I do not understand exactly what the logic is there. It seems to be some kind of intersection between violence and sexual situations. There are a lot of visual novels, or even just straight up text novels that are about, I don't know, like two queer girls in giant mechs fighting each other, like very anime, and then they start to make out or have sex or something. It’s not clear why something like that would be removed. 

Can you talk about some of the games on Itch that are affected by this? I think people know about Steam sex games, and people know that violence and sex can be parts of mainstream games, but there’s a different type of game that’s more common on Itch that’s impacted by this policy that I think a lot of people don’t know exists.

For the past 15 to 20 years we've been in a period in games where there's been a massive explosion in what kinds of games can be made. And it's not really just about technology. It's about accessibility of tools, how quickly games can be made, how many people it takes to make a game, and it's just become much easier. It's sort of similar to the advent of home movie cameras. Suddenly, all sorts of people can make little films or document their everyday life, and we're in a period like that with games. We're seeing way more games that actually reflect people's lived experience. Some of the games that have been caught up in the last day’s changes on Itch are games that up-and-coming creators have made about their own experiences in abusive relationships, or dealing with trauma, or coming out of the closet and finding their first romance as an LGBTQ person. I think most notably, my own student, Jenny Jiao Hsia, won a bunch of awards at the Independent Games Festival this year for her game Consume Me. That is an autobiographical story about when she was a teenager struggling with eating disorders and her own relationship with her body, and she had it marked as sensitive content. I was one of the advisors on that project and I agree it’s sensitive content because there's some disturbing, difficult, teenage-girl-dealing-with-their-body stuff in there. It's the game equivalent of a Judy Blume novel, but it expresses that autobiographical truth in a very, very different way, a 21st century way, rather than the 20th century way.

Judy Blume books were also subject to censorship in school libraries because they were about sexual topics, and I think that this is a similar moment for games. What Consume Me does that a Judy Blume novel doesn't do is it sort of puts you very much inside the mind of the main character: how she kind of systematizes food and starts thinking about it like a game that she has to win, how she sort of tricks herself into trying to overperform. This is something that no other medium could do.

Robert Yang uses a lot of the language of video games, but his pieces are often kind of interactive art experiences that don't resemble a traditional game in terms of trying to win or lose or get a score or complete a story experience. They kind of refer to and riff off of a lot of the language of games. I would probably compare them more to the work of a photographer like Robert Mapplethorpe, who was also subject to a lot of censorship in the 20th century because of the way that he was portraying the nude male form. Robert Yang is doing similar stuff, exploring the portrayal of male bodies and what that means in the age of the internet, in the way that bodies are now also 3D models, and then kind of also reflecting on queer history. 

These games are very, very clearly artistic expressions, and they're caught up in this thing. They're delisted from search right now because they're clearly adult games, but they're meant for adults, who have a right to understand them and play them as art objects. I've seen over and over again that people who take this topic seriously, they play some games, or they experience something, and they kind of wake up and they're like, ‘Oh, wow. I didn't realize games could do all of this.’ 

I work in a larger art school where there are people who teach dance and music and film, and I get to see this happen a lot with people who have never played games, but who are artists, and they get it right away. But I think a lot of society has not reached that point yet. They don't understand that games can do all of this stuff. I'm hopeful that continued coverage and good criticism of games in all sorts of outlets shifts the conversation, but it's kind of a generational change, so it may be a while before everybody gets it.

Google’s AI Is Destroying Search, the Internet, and Your Brain

July 23, 2025 at 14:53

Yesterday the Pew Research Center released a report based on the internet browsing activity of 900 U.S. adults, which found that Google users who encounter an AI summary are less likely to click on links to other websites than users who don’t encounter one. To be precise, only 1 percent of Google searches resulted in users clicking on the link in the AI summary, which takes them to the page Google is summarizing. 

Essentially, the data shows that Google’s AI Overview feature, introduced in 2023 to replace the “10 blue links” format that turned Google into the internet’s de facto traffic controller, will end the flow of all that traffic almost completely and destroy the business of countless blogs and news sites in the process. Instead, Google will feed people into a faulty AI-powered alternative that is prone to errors it presents with so much confidence that we won’t even be able to tell that they are errors. 

Spotify Publishes AI-Generated Songs From Dead Artists Without Permission

July 21, 2025 at 14:41

Spotify is publishing new, AI-generated songs on the official pages of artists who died years ago without the permission of their estates or record labels. 

According to his official Spotify page, Blaze Foley, a country music singer-songwriter who was murdered in 1989, released a new song called “Together” last week. The song, which features a male country singer, piano, and an electric guitar, vaguely sounds like a slow country song. The Spotify page for the song also features an AI-generated image of a man who looks nothing like Foley singing into a microphone.  

Craig McDonald, the owner of Lost Art Records, the label that distributes all of Foley’s music and manages his Spotify page, told me that any Foley fan would instantly realize “Together” is not one of his songs. 

Steam Bends to Payment Processors on Porn Games

July 16, 2025 at 11:44

Steam, the dominant digital storefront for PC games operated by Valve, updated its guidelines to forbid “certain kinds of adult content” and blamed restrictions from payment processors and financial institutions. The update was initially spotted by SteamDB.info, a platform that tracks and publishes data about Steam, and reported by the Japanese gaming site Gamespark.

The update is yet another signal that payment processors are becoming more vigilant about which online platforms hosting adult content they’ll provide services to, and another clear sign that they are currently the ultimate arbiters of what kind of content can be made easily available online. 

Steam’s policy change appears under the onboarding portion of its Steamworks documentation for developers and publishers. The 15th item on a list of “what you shouldn’t publish on Steam” now reads: “Content that may violate the rules and standards set forth by Steam’s payment processors and related card networks and banks, or internet network providers. In particular, certain kinds of adult only content.”

It’s not clear when exactly Valve updated this list, but an archive of this page from April shows that it only had 14 items then. Other items that were already on the list included “nude or sexually explicit images of real people” and “adult content that isn’t appropriately labeled and age-gated,” but Valve did not previously mention payment processors specifically. 

"We were recently notified that certain games on Steam may violate the rules and standards set forth by our payment processors and their related card networks and banks," Valve spokesperson Kaci Aitchison Boyle told me in an email. "As a result, we are retiring those games from being sold on the Steam Store, because loss of payment methods would prevent customers from being able to purchase other titles and game content on Steam. We are directly notifying developers of these games, and issuing app credits should they have another game they’d like to distribute on Steam in the future."

Valve did not respond to questions about where developers might find more details about payment processors’ rules and standards. 

SteamDB.info, which also tracks when games are added to or removed from Steam, noted many adult games have been removed from Steam in the last 24 hours. Sex games, many of which are of very low quality and sometimes include very extreme content, have been common on Steam for years. In April, I wrote about a “rape and incest” game called No Mercy, which the developers eventually voluntarily removed from Steam after pressure from users, media, and lawmakers in the UK. The majority of games I saw that were removed from Steam recently revolve around similar themes, but we don’t know if they were removed by the developers or Valve, and if they were removed by Valve, whether it was because of the recent policy change. Games are removed from Steam every day for a variety of reasons, including expired licensing deals or developers no longer wanting to support a game. 

However, Steam’s policy change comes at a time when we’ve seen increased pressure from payment processors around adult content. We recently reported that payment processors have forced two major AI model sharing platforms, Civitai and Tensor.Art, to remove certain adult content.

Update: This story has been updated with comment from Valve. 

Hugging Face Is Hosting 5,000 Nonconsensual AI Models of Real People

July 15, 2025 at 09:20

Hugging Face, a company with a multi-billion dollar valuation and one of the most commonly used platforms for sharing AI tools and resources, is hosting over 5,000 AI image generation models that are designed to recreate the likeness of real people. These models were all previously hosted on Civitai, an AI model sharing platform that 404 Media reporting has shown was used for creating nonconsensual pornography, until Civitai banned them due to pressure from payment processors. 

Users downloaded the models from Civitai and reuploaded them to Hugging Face as part of a concerted community effort to archive the models after Civitai announced in May that it would ban them. In that announcement, Civitai said it would give the people who originally uploaded them “a short period of time” before they were removed. Civitai users began organizing an archiving effort on Discord earlier in May, after Civitai indicated it had to make content policy changes due to pressure from payment processors, and the effort kicked into high gear when Civitai announced the new “real people” model policy. 

At the time of writing, the Discord channel has hundreds of members who are still finding and sharing models that have been removed from Civitai and are reuploading them to Hugging Face. Some users have even shared a piece of software, also hosted on Hugging Face, which allows users to automatically upload Civitai models to Hugging Face in batches. 

Hugging Face did not respond to multiple requests for comment. It also did not respond to specific questions about whether and how it plans to moderate these models, given that they were previously hosted on a platform primarily used for AI-generated pornography, and that our reporting shows they were used to create nonconsensual pornography. 

I found the Civitai models of real people that were reuploaded to Hugging Face thanks to a paper I covered in which researchers scraped Civitai. The paper showed that the platform was primarily used for pornographic content, and that it deleted at least 50,000 AI models designed to recreate the likeness of real people once it changed its policy in May. The researchers, Laura Wagner and Eva Cetinic from the University of Zurich, provided me with a spreadsheet of all the deleted models, which included the name of each model (almost always the name of a female celebrity or lesser-known internet personality), a link to where it was previously hosted on Civitai, and the SHA256 hash Civitai uses to identify all the models hosted on its site. 

The people who are reuploading the Civitai models to Hugging Face are seemingly trying to hide the purpose of those models. On Hugging Face, these models have generic names and URLs like “LORA” or “Test model.” Users can’t tell that these models are used to generate the likeness of real people just by looking at their Hugging Face page, nor would they be able to find them by searching for the names of celebrities on Hugging Face. In order to find them, users can go to a separate website the Civitai archivists created. There, they can enter the name of a Civitai model, the link where it used to be hosted on Civitai before it was deleted, or the model’s SHA256 hash. All of these lead users to a page that explains what the model is and shows its name, as well as several images showing the kind of images it can generate. At the bottom of that page is a link to one or more Hugging Face “mirrors” where the model has been reuploaded. 

By using Wagner and Cetinic’s data and entering it into this Civitai archive site, I was able to find the Civitai models hosted on Hugging Face. 
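
That lookup works because a byte-identical reupload keeps its hash: whatever generic name a Hugging Face mirror uses, its SHA256 still matches the one Civitai recorded. A minimal sketch of the matching step, assuming the researchers’ spreadsheet is exported to CSV with hypothetical `model_name` and `sha256` columns:

```python
import csv
import hashlib

def sha256_of_file(path: str) -> str:
    """Hash a downloaded model file the same way Civitai identifies models."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def load_deleted_models(csv_path: str) -> dict[str, str]:
    """Map SHA256 -> original Civitai model name (column names are hypothetical)."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return {row["sha256"].lower(): row["model_name"] for row in csv.DictReader(f)}

# Usage: a generically named mirror file resolves back to its original identity.
deleted = load_deleted_models("deleted_civitai_models.csv")
print(deleted.get(sha256_of_file("LORA.safetensors"), "no match"))
```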

Hugging Face’s content policy bans “Unlawful, defamatory, fraudulent, or intentionally deceptive Content (e.g., disinformation, phishing, scams, inauthentic behavior),” as well as “Sexual Content used for harassment, bullying, or created without explicit consent.” Models that generate the likeness of real people don’t have to be used for unlawful or defamatory ends, and they only produce sexual content if people choose to use them that way. There’s nothing in Hugging Face’s content policy that explicitly forbids AI models that recreate the likeness of real people. 

However, the Hugging Face Ethics & Society group, which is “committed to operationalizing ethics at the cutting-edge of machine learning,” has identified six “high-level categories for describing ethical aspects of machine learning work,” one of which is that AI should be “Consentful.”

“Consentful technology supports the self-determination of people who use and are affected by these technologies,” the company explains. Examples of this, the company says, include “Avoiding extractive, chauvinist, ‘dark,’ and otherwise ‘unethical’ patterns of engagement.”

Other AI models that recreate the likeness of real people could conceivably not violate any of these principles. For example, two of the deleted Civitai models that were reuploaded to Hugging Face were designed to recreate the likeness of Vladimir Putin, which in theory people could want to use in order to mock or criticize the Russian president. However, the vast majority of the models are of female celebrities, which my reporting has shown are being used to create nonconsensual sexual content, and which were deleted en masse from Civitai because of pressure from payment processors who didn’t want to be associated with that type of media. 
