Steam Bends to Payment Processors on Porn Games

Steam, the dominant digital storefront for PC games operated by Valve, updated its guidelines to forbid “certain kinds of adult content,” blaming restrictions from payment processors and financial institutions. The update was first spotted by SteamDB.info, a platform that tracks and publishes data about Steam, and reported by the Japanese gaming site Gamespark.

The update is yet another signal that payment processors are becoming more vigilant about which online platforms hosting adult content they will provide services to, and another clear sign that they are currently the ultimate arbiters of what kind of content can be made easily available online.

Steam’s policy change appears under the onboarding portion of its Steamworks documentation for developers and publishers. The 15th item on a list of “what you shouldn’t publish on Steam” now reads: “Content that may violate the rules and standards set forth by Steam’s payment processors and related card networks and banks, or internet network providers. In particular, certain kinds of adult only content.”

It’s not clear exactly when Valve updated this list, but an archive of the page from April shows that it had only 14 items then. Items already on the list included “nude or sexually explicit images of real people” and “adult content that isn’t appropriately labeled and age-gated,” but Valve did not previously mention payment processors specifically. 

"We were recently notified that certain games on Steam may violate the rules and standards set forth by our payment processors and their related card networks and banks," Valve spokesperson Kaci Aitchison Boyle told me in an email. "As a result, we are retiring those games from being sold on the Steam Store, because loss of payment methods would prevent customers from being able to purchase other titles and game content on Steam. We are directly notifying developers of these games, and issuing app credits should they have another game they’d like to distribute on Steam in the future."

Valve did not respond to questions about where developers might find more details about payment processors’ rules and standards. 

SteamDB.info, which also tracks when games are added to or removed from Steam, noted that many adult games have been removed from Steam in the last 24 hours. Sex games, many of them very low quality and some featuring very extreme content, have been common on Steam for years. In April, I wrote about a “rape and incest” game called No Mercy, which the developers eventually removed from Steam voluntarily after pressure from users, media, and lawmakers in the UK. The majority of the recently removed games I saw revolve around similar themes, but we don’t know whether they were removed by the developers or by Valve, or, if Valve removed them, whether it was because of the recent policy change. Games are removed from Steam every day for a variety of reasons, including expired licensing deals or developers no longer wanting to support a game. 

However, Steam’s policy change comes at a time when we’ve seen increased pressure from payment processors around adult content. We recently reported that payment processors have forced two major AI model sharing platforms, Civitai and Tensor.Art, to remove certain adult content.

Update: This story has been updated with comment from Valve. 

Hugging Face Is Hosting 5,000 Nonconsensual AI Models of Real People

Hugging Face, a company with a multi-billion dollar valuation and one of the most commonly used platforms for sharing AI tools and resources, is hosting over 5,000 AI image generation models that are designed to recreate the likeness of real people. These models were all previously hosted on Civitai, an AI model sharing platform 404 Media reporting has shown was used for creating nonconsensual pornography, until Civitai banned them due to pressure from payment processors. 

Users downloaded the models from Civitai and reuploaded them to Hugging Face as part of a concerted community effort to archive the models after Civitai announced in May that it would ban them. In that announcement, Civitai said it would give the people who originally uploaded them “a short period of time” before the models were removed. Civitai users began organizing an archiving effort on Discord earlier in May, after Civitai indicated it had to make content policy changes due to pressure from payment processors, and the effort kicked into high gear when Civitai announced the new “real people” model policy. 

At the time of writing, the Discord channel has hundreds of members who are still finding and sharing models that have been removed from Civitai and are reuploading them to Hugging Face. Some users have even shared a piece of software, also hosted on Hugging Face, which allows users to automatically upload Civitai models to Hugging Face in batches. 

Hugging Face did not respond to multiple requests for comment. It also did not respond to specific questions about whether and how it plans to moderate these models, given that they were previously hosted on a platform primarily used for generating AI pornography, and that our reporting shows they were used to create nonconsensual pornography. 

I found the Civitai models of real people that were reuploaded to Hugging Face thanks to a paper I covered in which researchers scraped Civitai. The paper showed that the platform was primarily used for pornographic content, and that it deleted at least 50,000 AI models designed to recreate the likeness of real people once it changed its policy in May. The researchers, Laura Wagner and Eva Cetinic from the University of Zurich, provided me with a spreadsheet of all the deleted models, which included the names of the models (almost always the name of a female celebrity or lesser-known internet personality), a link to where each was previously hosted on Civitai, and the SHA256 hash Civitai uses to identify every model hosted on its site. 

The people reuploading the Civitai models to Hugging Face are seemingly trying to obscure the purpose of those models. On Hugging Face, these models have generic names and URLs like “LORA” or “Test model.” Users can’t tell that these models are used to generate the likeness of real people just by looking at their Hugging Face pages, nor would they be able to find them by searching for the names of celebrities on Hugging Face. In order to find them, users can go to a separate website the Civitai archivists created. There, they can enter the name of a Civitai model, the link where it used to be hosted on Civitai before it was deleted, or the model’s SHA256 hash. Any of these will lead users to a page that explains what the model is, shows its name, and displays several examples of the kind of images it can generate. At the bottom of that page is a link to one or more Hugging Face “mirrors” where the model has been reuploaded. 

By using Wagner’s and Cetinic’s data and entering it into this Civitai archive site, I was able to find the Civitai models hosted on Hugging Face. 
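Matching by SHA256 hash is what makes this kind of cross-referencing reliable: the hash identifies the model weights themselves, no matter what name a reupload is given. Here is a minimal sketch of the idea in Python, using only the standard library; the file paths and spreadsheet column names are hypothetical, not the researchers’ actual data format:

```python
import csv
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA256 hash of a file, reading in chunks to handle multi-GB model weights."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical spreadsheet layout: one row per deleted Civitai model,
# with columns "name", "civitai_url", and "sha256".
with open("deleted_models.csv", newline="") as f:
    known_hashes = {row["sha256"].lower(): row["name"] for row in csv.DictReader(f)}

# A reupload with a generic name like "LORA.safetensors" can still be
# identified if its hash matches a deleted Civitai model.
digest = sha256_of_file("LORA.safetensors")
if digest in known_hashes:
    print(f"Match: this file is the deleted Civitai model '{known_hashes[digest]}'")
```

Renaming a file doesn’t change its contents, so the hash survives the trip from Civitai to Hugging Face intact; only actually modifying the weights would break the match.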

Hugging Face’s content policy bans “Unlawful, defamatory, fraudulent, or intentionally deceptive Content (e.g., disinformation, phishing, scams, inauthentic behavior),” as well as “Sexual Content used for harassment, bullying, or created without explicit consent.” Models that generate the likeness of real people don’t have to be used for unlawful or defamatory ends, and they only produce sexual content if people choose to use them that way. There’s nothing in Hugging Face’s content policy that explicitly forbids AI models that recreate the likeness of real people. 

However, the Hugging Face Ethics & Society group, which is “committed to operationalizing ethics at the cutting-edge of machine learning,” has identified six “high-level categories for describing ethical aspects of machine learning work,” one of which is that AI should be “Consentful.”

“Consentful technology supports the self-determination of people who use and are affected by these technologies,” the company explains. Examples of this, the company says, include “Avoiding extractive, chauvinist, ‘dark,’ and otherwise ‘unethical’ patterns of engagement.”

Other AI models that recreate the likeness of real people could conceivably not violate any of these principles. For example, two of the deleted Civitai models that were reuploaded to Hugging Face were designed to recreate the likeness of Vladimir Putin, which in theory people might want to use to mock or criticize the Russian president. However, the vast majority of the models are of female celebrities, which my reporting has shown are being used to create nonconsensual sexual content, and which were deleted en masse from Civitai because of pressure from payment processors who didn’t want to be associated with that type of media. 

a16z-Backed AI Site Civitai Is Mostly Porn, Despite Claiming Otherwise

In the two years that I’ve been reporting on Civitai, a platform for sharing AI image generation models that has been instrumental in the production of AI-generated nonconsensual porn, Civitai has consistently argued that the amount of adult content on the site has been overstated. But new research shows that, if anything, the amount of adult content on Civitai has been underestimated.

In their paper, “Perpetuating Misogyny with Generative AI: How Model Personalization Normalizes Gendered Harm,” researchers Laura Wagner and Eva Cetinic from the University of Zurich studied more than 40 million user-generated images on Civitai and over 230,000 models. They found “a disproportionate rise in not-safe-for-work (NSFW) content and a significant number of models intended to mimic real individuals” on the platform.

“What began as a promising creative breakthrough in TTI [text-to-image] generation and model personalization, has devolved into a pipeline for the large-scale production of sensational, biased, and abusive content. The open-source nature of TTI technologies, proclaimed as a democratizing force in generative AI, has also enabled the propagation of models that perpetuate hypersexualized imagery and nonconsensual deepfakes,” Wagner and Cetinic write in their paper. “Several indicators suggest a descent into a self-reinforcing feedback loop of platform decay. These include a dramatic increase in NSFW imagery, from 41% to 80% in two years, as well as the community’s normalization of deepfakes, misogynistic tropes, and other exploitative content.”

To visualize just how dominant adult content is on Civitai, check the chart below, which shows the distribution of images by “NSFW browsing levels” over time. These categories, which are inspired by the Motion Picture Association film rating system and are used by Civitai to tag images, show that adult content has always been a significant portion of all images hosted on the site, but that the portion of “overtly sexual, or disturbing” content grew as the site became more popular, and exploded starting in 2024. The chart is based on Civitai’s own numbers and categorization system, which the researchers scraped from the site. It likely undercounts the number of explicit images on the site since, as both the researchers and I have observed, not all adult content is tagged as such. 

[Chart: distribution of images on Civitai by NSFW browsing level over time, based on the researchers’ scrape of Civitai’s own tags.]

In December 2023, Civitai CEO Justin Maier told VentureBeat that “less than 20% of the posted content is what we would consider ‘PG-13’ or above.” When I reached Maier for comment for this article, he told me that “The VentureBeat figure cited a December 2023 snapshot, when adult posts were a minority. The mix shifted in 2024 as many NSFW creators migrated from platforms that no longer allow that content.”

However, the data in the paper shows that by October 2023, 56 percent of all images on the site were tagged as “NSFW” and designated by Civitai as “PG-13” or above.

In May, Civitai announced it was banning all AI image generation models designed to recreate the likeness of real people because of pressure from payment processors. Since the authors of the paper were already tracking hundreds of thousands of models hosted on Civitai, they could easily see which ones were removed, giving us a first clear look at how common those models were. 

Overall, they saw that more than 50,000 models designed to generate the likeness of real people were removed because of the ban. These are models that Civitai itself tagged as “person of interest,” the tag it uses to indicate that a model recreates the likeness of a real person, so the actual number of models depicting real people is likely higher. 

It’s hard to say whether the most popular AI models on Civitai were popular just because they were used to generate explicit images, because people could use models tagged as NSFW to generate non-nude images and vice versa. For example, according to the data collected by the researchers, the most popular AI image generation model on Civitai was EasyNegative, with almost 600,000 downloads. It’s not tagged or promoted as a model for generating pornography, but the images users created with it, which are shared on its Civitai model page, show it is commonly used that way. 

Other very popular models on Civitai are clearly designed to generate explicit images. The sixth most popular model with 360,000 downloads is Nudify XL: Better Bodies, which its creator says is for “nude female frontals.” A model called Realistic Vaginas - God Pussy 1 had 256,000 downloads. The POV Squatting Cowgirl LoRA model, which Civitai tagged as a “sex” model, had 189,000 downloads. 

The authors of the paper also conducted a deeper analysis of the 40,000 most downloaded models on Civitai. Among the 11,151 models from which they could extract textual training data, meaning text that indicates what kind of images the models were trained on, they found “specifically abusive terms.” Some 5.6 percent included the keywords “loli” (558 models) and/or “shota” (69 models), Japanese terms commonly used to refer to sexualized depictions of pre-pubescent girls and boys. About 2.1 percent (189 models) included the keyword “rape.”
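The counting step in this kind of keyword analysis is simple once the training text has been extracted. A minimal sketch in Python, with a hypothetical data layout, just to illustrate the technique the researchers describe; word-boundary matching matters, since a naive substring search would count “grape” as containing “rape”:

```python
import re

KEYWORDS = ["loli", "shota", "rape"]

def count_models_with_keywords(training_text_by_model: dict[str, str]) -> dict[str, int]:
    """Count how many models' training text contains each keyword as a whole word."""
    counts = {kw: 0 for kw in KEYWORDS}
    patterns = {kw: re.compile(rf"\b{re.escape(kw)}\b", re.IGNORECASE) for kw in KEYWORDS}
    for text in training_text_by_model.values():
        for kw, pattern in patterns.items():
            if pattern.search(text):
                counts[kw] += 1
    return counts

# Hypothetical extracted training tags, keyed by model ID.
models = {
    "model_a": "1girl, loli, school uniform",
    "model_b": "landscape, grapevines",  # must not count toward "rape"
}
print(count_models_with_keywords(models))  # {'loli': 1, 'shota': 0, 'rape': 0}
```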

The data shows with clear numbers what we have long argued at 404 Media: adult content drives technological innovation and early adoption, and this has been especially true in the world of generative AI. Despite its protestations to the contrary, Civitai, one of the fastest growing platforms in that industry and one the influential Silicon Valley venture capital firm Andreessen Horowitz invested in, grew because of explicit content, much of which was nonconsensual. 

“The rapid rise of NSFW content, the over-representation of young female subjects, and the prioritization of sensational content to drive engagement reflect an exploitative, even abusive dynamic,” the researchers wrote. “Additionally, structural discrimination embedded in today’s open-source TTI tools and models have the potential to cause significant downstream harm as they might become widely adopted and even integrated into future consumer applications.” 

Adult content driving innovation and early adoption doesn’t have to be harmful. As the researchers write, it’s the choices platforms like Civitai make that give us these outcomes. 

“The contingent nature of technology, shaped by online communities, platform operators, lawmakers, and society as a whole, also creates opportunities for intervention,” they write. “Model-sharing hubs and social media platforms both have the capacity to implement safeguards that can limit the spread of abusive practices such as deepfake creation and abusive imagery.”

Payment Processors Are Pushing AI Porn Off Its Biggest Platforms

Tensor.Art, an AI image generation and model sharing site, announced on Friday that it is “temporarily” restricting AI models, tools, and posts related to pornographic content or the depiction of real-world celebrities due to pressure from payment processors.

The announcement is yet another example of payment processors acting as the ultimate arbiters of what kind of content can be made easily available online, and of those companies’ seemingly increased focus on AI-generated adult or nonconsensual content. 

The news is especially significant following a similar change in policy from Civitai, an AI model sharing platform that 404 Media reporting has shown was used for creating nonconsensual pornography. After Civitai banned AI models designed to generate the likeness of real people and certain types of adult content in May, many Civitai users and model creators migrated their models to Tensor.Art. The announcement listed three items in the “Scope of Impact” of the decision: banning “NSFW” content, banning content based on real-world celebrities, and temporarily disabling its “Civitai Import” feature, which allowed people to easily move their Civitai models to Tensor.Art.

💡
Do you know anything else about where else these models are hosted? I would love to hear from you. Using a non-work device, you can message me securely on Signal at ‪@emanuel.404‬. Otherwise, send me an email at emanuel@404media.co.

“We fully understand that this is very frustrating for many creators and users 😞,” Tensor.Art said in its announcement on Discord. “Due to the previous controversy over real-person content on Civitai, TA [Tensor.Art] has unfortunately been affected because of the ‘Civitai import’ feature. Owing to mandatory requirements from credit card organizations and regulatory authorities, we are compelled to make this temporary decision.”

Tensor.Art also listed the “Reasons for Adjustment” as:

- Review requirements for high-risk content from credit card organizations and multiple national regulatory bodies
- Compliance measures necessary to maintain platform openness and creators’ ability to monetize

Tensor.Art said that these changes will take place within the next 72 hours, and asked model creators to clarify if their models are “safe for work” in order to “prevent unintended impact.”

It’s not clear what Tensor.Art will look like or what its policies will be at the end of this “temporary” period. Civitai made similar changes permanently and still hasn’t been able to renew service from its payment processing providers or find new ones. Tensor.Art, however, is suggesting it’s not ready to give up on that type of content. 

“This is not the end,” Tensor.Art said in the announcement. “We are actively seeking solutions to minimize the impact of these restrictions and exploring compliant ways to restore currently hidden content. We remain committed to our original mission.”

Tensor.Art did not immediately respond to a request for comment.  

The Open-Source Software Saving the Internet From AI Bot Scrapers

For someone who says she is fighting AI bot scrapers just in her free time, Xe Iaso seems to be putting up an impressive fight. Since she launched it in January, Anubis, a program “designed to help protect the small internet from the endless storm of requests that flood in from AI companies,” has been downloaded nearly 200,000 times, and is being used by notable organizations including GNOME, the popular open-source desktop environment for Linux; FFmpeg, the open-source software project for handling video and other media; and UNESCO, the United Nations organization for education, science, and culture. 

Iaso decided to develop Anubis after discovering that her own Git server was struggling with AI scrapers, bots that crawl the web hoovering up anything that can be used in the training data that powers AI models. Like many libraries, archives, and other small organizations, Iaso discovered her Git server was getting slammed only when it stopped working.  
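The core idea behind Anubis is a proof-of-work challenge: before serving a page, the server hands the client a random challenge, and the client’s browser must find a nonce whose SHA256 hash meets a difficulty target. The cost is negligible for one human visit but adds up quickly for a scraper making millions of requests. A minimal sketch of that general scheme in Python; this illustrates the technique, not Anubis’s actual implementation:

```python
import hashlib
import secrets

def meets_difficulty(digest: bytes, difficulty: int) -> bool:
    """Check that the hash starts with `difficulty` zero hex characters."""
    return digest.hex().startswith("0" * difficulty)

def solve(challenge: str, difficulty: int) -> int:
    """Client side: brute-force a nonce so sha256(challenge + nonce) meets the target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if meets_difficulty(digest, difficulty):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Server side: verifying a solution costs a single hash."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return meets_difficulty(digest, difficulty)

challenge = secrets.token_hex(16)        # issued by the server per visit
nonce = solve(challenge, difficulty=4)   # work the client must do (~tens of thousands of hashes)
assert verify(challenge, nonce, difficulty=4)  # cheap check on the server
```

The asymmetry is the point: verification is one hash, while solving takes thousands, so the burden falls almost entirely on whoever is making the requests.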

‘FuckLAPD.com’ Lets Anyone Use Facial Recognition to Instantly Identify Cops

A new site, FuckLAPD.com, is using public records and facial recognition technology to allow anyone with a picture of a Los Angeles police officer to identify them. The tool, made by artist Kyle McDonald, is designed to help people identify cops who may otherwise try to conceal their identity, such as by covering their badge or serial number.

“We deserve to know who is shooting us in the face even when they have their badge covered up,” McDonald told me when I asked if the site was made in response to police violence during the LA protests against ICE that started earlier this month. “fucklapd.com is a response to the violence of the LAPD during the recent protests against the horrific ICE raids. And more broadly—the failure of the LAPD to accomplish anything useful with over $2B in funding each year.”

“Cops covering up their badges? ID them with their faces instead,” reads the site, which McDonald said went live this Saturday. The tool allows users to upload an image of a police officer’s face to search over 9,000 LAPD headshots obtained via public records requests. The site says image processing happens on the device, and that no photos or data are transmitted or saved. “Blurry, low-resolution photos will not match,” the site says. 

AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums

AI bots that scrape the internet for training data are hammering the servers of libraries, archives, museums, and galleries, and are in some cases knocking their collections offline, according to a new survey published today. While the impact of AI bots on open collections has been reported anecdotally, the survey is the first attempt at measuring the problem, which in the worst cases can make valuable, public resources unavailable to humans because the servers they’re hosted on are being swamped by bots scraping the internet for AI training data. 
