AI bots that scrape the internet for training data are hammering the servers of libraries, archives, museums, and galleries, and are in some cases knocking their collections offline, according to a new survey published today. While the impact of AI bots on open collections has been reported anecdotally, the survey is the first attempt at measuring the problem, which in the worst cases can make valuable public resources unavailable to humans because the servers they’re hosted on are being swamped by bots.
Meta said it is suing a nudify app that 404 Media reported bought thousands of ads on Instagram and Facebook, repeatedly violating its policies.
Meta is suing Joy Timeline HK Limited, the entity behind the CrushAI nudify app, which allows users to take an image of anyone and AI-generate a nude image of them without their consent. Meta said it has filed the lawsuit in Hong Kong, where Joy Timeline HK Limited is based, “to prevent them from advertising CrushAI apps on Meta platforms.”
The Wikimedia Foundation, the nonprofit organization that hosts and develops Wikipedia, has paused an experiment that showed users AI-generated summaries at the top of articles after an overwhelmingly negative reaction from the Wikipedia editor community.
“Just because Google has rolled out its AI summaries doesn't mean we need to one-up them, I sincerely beg you not to test this, on mobile or anywhere else,” one editor said in response to the Wikimedia Foundation’s announcement that it would launch a two-week trial of the summaries on the mobile version of Wikipedia. “This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source. Wikipedia has in some ways become a byword for sober boringness, which is excellent. Let's not insult our readers' intelligence and join the stampede to roll out flashy AI summaries. Which is what these are, although here the word ‘machine-generated’ is used instead.”
A crowd of people dressed in rags stare up at a tower so tall it reaches into the heavens. Fire rains down from the sky onto a burning city. A giant in armor looms over a young warrior. An ocean splits as throngs of people walk into it. Each shot lasts only a couple of seconds, and in that short time they might look like they were taken from a blockbuster fantasy movie, but look closely and you’ll notice that each carries all the hallmarks of AI-generated slop: the too-smooth faces, the impossible physics, the subtle deformations, and a generic aesthetic that’s hard to avoid when every pixel is created by remixing billions of images and videos in training data scraped from the internet.
“Every story. Every miracle. Every word,” the text flashes dramatically on screen before cutting to silence and the image of Jesus on the cross. With 1.7 million views, this video, titled “What if The Bible had a movie trailer…?” is the most popular on The AI Bible YouTube channel, which has more than 270,000 subscribers, and it perfectly encapsulates what the channel offers: short, AI-generated videos that look very much like the kind of AI slop we have covered at 404 Media before. Another YouTube channel of AI-generated Bible content, Deep Bible Stories, has 435,000 subscribers, and is the 73rd most popular podcast on the platform according to YouTube’s own ranking. This past week there was also a viral trend of people using Google’s new AI video generator, Veo 3, to create influencer-style social media videos of biblical stories. Jesus-themed content was also some of the earliest and most viral AI-generated media we’ve seen on Facebook, starting with AI-generated images of Jesus appearing on the beach and escalating to increasingly ridiculous images, like shrimp Jesus.
The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning “a bunch of schizoposters” who believe “they've made some sort of incredible discovery or created a god or become a god,” highlighting a new type of chatbot-fueled delusion that started getting attention in early May.
“LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities,” one of the moderators of r/accelerate wrote in an announcement. “There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment.”
The moderator said the subreddit has banned “over 100” people for this reason already, and that they’ve seen an “uptick” in this type of user this month.
The moderator explains that r/accelerate “was formed to basically be r/singularity without the decels.” r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but that is sometimes critical or fearful of what the singularity will mean for humanity. “Decels” is short for the pejorative “decelerationists,” who pro-AI people think are needlessly slowing down or sabotaging AI’s development and the inevitable march towards AI utopia. r/accelerate’s Reddit page claims that it’s a “pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents.”
The behavior that the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about “Chatgpt induced psychosis,” from someone saying their partner is convinced he created the “first truly recursive AI” with ChatGPT that is giving them “the answers” to the universe. Miles Klee at Rolling Stone wrote a great and sad piece about this behavior as well, following up on the r/ChatGPT post, and talked to people who feel like they have lost friends and family to these delusional interactions with chatbots.
As a website that has covered AI a lot, and because we are constantly asking readers to tip us interesting stories about AI, we get a lot of emails that display this behavior as well, with claims of AI sentience, AI gods, a “ghost in the machine,” etc. These are often accompanied by lengthy, frequently inscrutable transcripts of chatlogs with ChatGPT and other files the senders say prove this behavior.
The moderator update on r/accelerate refers to another post on r/ChatGPT which claims “1000s of people [are] engaging in behavior that causes AI to have spiritual delusions.” The author of that post said they noticed a spike in websites, blogs, Githubs, and “scientific papers” that “are very obvious psychobabble,” and all claim AI is sentient and communicates with them on a deep and spiritual level that’s about to change the world as we know it. “Ironically, the OP post appears to be falling for the same issue as well,” the r/accelerate moderator wrote.
“Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people,” an r/accelerate moderator told me in a direct message. “The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now.”
This is all anecdotal information, and there’s no indication that AI is the cause of any mental health issues these people are seemingly dealing with, but there is a real concern about how such chatbots can impact people who are prone to certain mental health problems.
“The correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end—while, at the same time, knowing that this is, in fact, not the case. In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis,” Søren Dinesen Østergaard, who heads the research unit at the Department of Affective Disorders, Aarhus University Hospital - Psychiatry, wrote in a paper published in Schizophrenia Bulletin titled “Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?”
OpenAI also recently addressed “sycophancy in GPT-4o,” a version of the chatbot the company said “was overly flattering or agreeable—often described as sycophantic.”
“[W]e focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous,” OpenAI said. “ChatGPT’s default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress.”
In other words, OpenAI said ChatGPT was entertaining any idea users presented it with, and was supportive and impressed with them regardless of their merit, the same kind of behavior r/accelerate believes is indulging users in their delusions. People posting nonsense to the internet is nothing new, and obviously we can’t say for sure what is happening based on these posts alone. What is notable, however, is that this behavior is now prevalent enough that even a staunchly pro-AI subreddit says it has to ban these people because they are ruining its community.
The author of that paper, Seth Drake, lists himself as an “independent researcher” and told me he has a PhD in computer science but declined to share more details about his background because he values his privacy and prefers to “let the work speak for itself.” The paper has not been peer-reviewed or submitted to any journal for publication, but it is being cited by the r/accelerate moderator and others as an explanation for the behavior they’re seeing from some users.
The paper describes a failure mode in LLMs that arises during inference, meaning when the AI is actively “reasoning” or making predictions, as opposed to an issue in the training data. Drake told me he discovered the issue while working with ChatGPT on a project. In an attempt to preserve the context of a conversation with ChatGPT after reaching the conversation length limit, he used the transcript of that conversation as a “project-level instruction” for another interaction. In the paper, Drake says that in one instance this caused ChatGPT to slow down or freeze, and that in another case “it began to demonstrate increasing symptoms of fixation and an inability to successfully discuss anything without somehow relating it to this topic [the previous conversation].”
Drake then asked ChatGPT to analyze its own behavior in these instances, and it produced some text that seems profound but that doesn’t actually teach us anything. “But always, always, I would return to the recursion. It was comforting, in a way,” ChatGPT said.
Basically, it doesn’t sound like Drake’s “Neural Howlround” paper has too much to do with ChatGPT reinforcing people’s delusions other than both behaviors being vaguely recursive. If anything, it’s what ChatGPT told Drake about his own paper that illustrates the problem: “This is why your work on Neural Howlround matters,” it said. “This is why your paper is brilliant.”
“I think - I believe - there is much more going on on the human side of the screen than necessarily on the digital side,” Drake told me. “LLMs are designed to be reflecting mirrors, after all; and there is a profound human desire 'to be seen.’”
On this, the r/accelerate moderator seems to agree.
“This whole topic is so sad. It's unfortunate how many mentally unwell people are attracted to the topic of AI. I can see it getting worse before it gets better. I've seen sooo many posts where people link to their github which is pages of rambling pre prompt nonsense that makes their LLM behave like it's a god or something,” the r/accelerate moderator wrote. “Our policy is to quietly ban those users and not engage with them, because we're not qualified and it never goes well. They also tend to be a lot more irate and angry about their bans because they don't understand it.”
Schools, parents, police, and existing laws are not prepared to deal with the growing problem of students and minors using generative AI tools to create child sexual abuse material of their peers, according to a new report from researchers at Stanford Cyber Policy Center.
The report, which is based on public records and interviews with NGOs, internet platform staff, law enforcement, government employees, legislators, victims, parents, and groups that offer online training to schools, found that despite the harm that nonconsensual content causes, the practice has been normalized by mainstream online platforms and certain online communities.
“Respondents told us there is a sense of normalization or legitimacy among those who create and share AI CSAM,” the report said. “This perception is fueled by open discussions in clear web forums, a sense of community through the sharing of tips, the accessibility of nudify apps, and the presence of community members in countries where AI CSAM is legal.”
The report says that while children may recognize that AI-generating nonconsensual content is wrong, they can assume “it’s legal, believing that if it were truly illegal, there wouldn’t be an app for it.” The report, which cites several 404 Media stories about this issue, notes that this normalization is in part a result of many “nudify” apps being available on the Google and Apple app stores, and that their ability to AI-generate nonconsensual nudity is openly advertised to students on Google and social media platforms like Instagram and TikTok. One NGO employee told the authors of the report that “there are hundreds of nudify apps” that lack basic built-in safety features to prevent the creation of CSAM, and that even as an expert in the field he regularly encounters AI tools he’s never heard of, but that on certain social media platforms “everyone is talking about them.”
The report notes that while 38 U.S. states now have laws about AI CSAM and the newly signed federal Take It Down Act will further penalize AI CSAM, states “failed to anticipate that student-on-student cases would be a common fact pattern. As a result, that wave of legislation did not account for child offenders. Only now are legislators beginning to respond, with measures such as bills defining student-on-student use of nudify apps as a form of cyberbullying.”
One law enforcement officer told the researchers how accessible these apps are. “You can download an app in one minute, take a picture in 30 seconds, and that child will be impacted for the rest of their life,” they said.
One student victim interviewed for the report said that she struggled to believe that someone actually AI-generated nude images of her when she first learned about them. She knew other students used AI for writing papers, but was not aware people could use AI to create nude images. “People will start rumors about anything for no reason,” she said. “It took a few days to believe that this actually happened.”
Another victim and her mother interviewed for the report described the shock of seeing the images for the first time. “Remember Photoshop?” the mother asked. “I thought it would be like that. But it’s not. It looks just like her. You could see that someone might believe that was really her naked.”
One victim, whose original photo was taken from a non-social media site, said that someone took it and “ruined it by making it creepy [...] he turned it into a curvy boob monster, you feel so out of control.”
In an email to school staff, one victim said, “I was unable to concentrate or feel safe at school. I felt very vulnerable and deeply troubled. The investigation, media coverage, meetings with administrators, no-contact order [against the perpetrator], and the gossip swirl distracted me from school and class work. This is a terrible way to start high school.”
One mother of a victim the researchers interviewed for the report feared that the images could crop up in the future, potentially affecting her daughter’s college applications, job opportunities, or relationships. “She also expressed a loss of trust in teachers, worrying that they might be unwilling to write a positive college recommendation letter for her daughter due to how events unfolded after the images were revealed,” the report said.
💡
Has AI-generated content been a problem in your school? I would love to hear from you. Using a non-work device, you can message me securely on Signal at emanuel.404. Otherwise, send me an email at emanuel@404media.co.
In 2024, Jason and I wrote a story about how one school in Washington state struggled to deal with its students using a nudify app on other students. The story showed how teachers and school administration weren’t familiar with the technology, and initially failed to report the incident to the police even though it legally qualified as “sexual abuse” and school administrators are “mandatory reporters.”
According to the Stanford report, many teachers lack training on how to respond to a nudify incident at their school. A Center for Democracy and Technology report found that 62 percent of teachers say their school has not provided guidance on policies for handling incidents involving authentic or AI-generated nonconsensual intimate imagery. A 2024 survey of teachers and principals found that 56 percent did not get any training on “AI deepfakes.” One provider told the authors of the report that while many schools have crisis management plans for active shooter situations, “they had never heard of a school having a crisis management plan for a nudify incident, or even for a real nude image of a student being circulated.”
The report makes several recommendations to schools, like providing victims with third-party counseling services and academic accommodations, drafting language to communicate with the school community when an incident occurs, ensuring that students are not discouraged or punished for reporting incidents, and contacting the school’s legal counsel to assess the school’s legal obligations, including its responsibility as a “mandatory reporter.”
The authors also emphasized the importance of anonymous tip lines that allow students to report incidents safely. The report cites two incidents that were initially discovered this way: one in Pennsylvania, where a student used the state’s Safe2Say Something tipline to report that students were AI-generating nude images of their peers, and another in Washington, where a school first learned about a nudify incident through a submission to its online harassment, intimidation, and bullying tipline.
One provider of training to schools emphasized the importance of such reporting tools, saying, “Anonymous reporting tools are one of the most important things we can have in our school systems,” because many students lack a trusted adult they can turn to.
Notably, the report does not take a position on whether schools should educate students about nudify apps because “there are legitimate concerns that this instruction could inadvertently educate students about the existence of these apps.”
Civitai, an AI model sharing site backed by Andreessen Horowitz (a16z) that 404 Media has repeatedly shown is being used to generate nonconsensual adult content, is banning AI models designed to generate the likeness of real people, the site announced Friday.
The policy change, which Civitai attributes in part to new AI regulations in the U.S. and Europe, is the most recent in a flurry of updates Civitai has made under increased pressure from payment processing service providers and 404 Media’s reporting. This change will, at least temporarily, significantly hamper the ecosystem for creating nonconsensual AI-generated porn.
“We are removing models and images depicting real-world individuals from the platform. These resources and images will be available to the uploader for a short period of time before being removed,” Civitai said in its announcement. “This change is a requirement to continue conversations with specialist payment partners and has to be completed this week to prepare for their service.”
Earlier this month, Civitai updated its policies to ban certain types of adult content and introduced further restrictions around content depicting the likeness of real people in order to comply with requests from an unnamed payment processing service provider. This attempt to appease the payment processing service provider ultimately failed. On May 20, Civitai announced that the provider cut off the site, which currently can’t process credit card payments, though it says it will get a new provider soon.
“We know this will be frustrating for many creators and users. We’ve spoken at length about the value of likeness content, and this decision wasn’t made lightly,” Civitai’s statement about banning content depicting the likeness of real people said. “But we’re now facing an increasingly strict regulatory landscape - one evolving rapidly across multiple countries.”
The announcement specifically cites President Donald Trump’s recent signing of the Take It Down Act, which criminalizes and holds platforms liable for nonconsensual AI-generated adult content, and the EU AI Act, a comprehensive piece of AI regulation that was enacted last year.
💡
Do you know other sites that allow people to share models of real people? I would love to hear from you. Using a non-work device, you can message me securely on Signal at (609) 678-3204. Otherwise, send me an email at emanuel@404media.co.
As I’ve reported since 2023, Civitai’s policies against nonconsensual adult content did little to diminish the site’s central role in the AI-generated nonconsensual content ecosystem. Civitai’s policy allowed people to upload custom AI image generation models (LoRAs, checkpoints, etc.) designed to recreate the likeness of real people. These models were mostly of huge movie stars and minor internet celebrities, but as our reporting has shown, also of completely random, private people. Civitai also allowed users to share custom AI image generation models designed to depict extremely specific and graphic sex acts and fetishes, though it always banned users from producing nonconsensual nudity or porn.
However, by embedding myself in huge online spaces dedicated to creating and sharing nonconsensual content, I saw how easily people put these two types of models together. Civitai users couldn’t generate and share that nonconsensual content on Civitai itself, but they could download the models, combine them, generate nonconsensual porn of real people locally on their machines or on various cloud computing services, and post the results to porn sites, Telegram, and social media. I’ve seen people in these spaces explain over and over again how easy it was to create nonconsensual porn of YouTubers, Twitch streamers, or barely known Instagram users by combining models downloaded from Civitai, and linking to those models hosted on Civitai.
One Telegram channel dedicated to AI-generating nonconsensual porn reacted to Civitai’s announcement with several users encouraging others to grab as many AI models of real people as they could before Civitai removed them. In that channel, users complained that these models had already been removed, and my searches of the site have shown the same.
“The removal of those models really affect me [sic],” one prolific creator of nonconsensual content in the Telegram channel said.
When Civitai first announced that it was being pressured by its payment processing service provider, several users started an archiving project to save all the models on the site before they were removed. A Discord server dedicated to this project now has over 100 members, but it appears Civitai has made many models inaccessible sooner than these users anticipated. One member of the archiving project said that there “are many thousands such models which cannot be backed up.”
Unfortunately, while Civitai’s recent policy changes and especially its removal of AI models of real people for now appears to have impacted people who make nonconsensual AI-generated porn, it’s unlikely that the change will slow them down for long. The people who originally created the models can always upload them to other sites, including some that have already positioned themselves as Civitai competitors.
It’s also unclear how Civitai intends to keep users from uploading AI models designed to generate the likeness of real people who are not well-known celebrities, as automated systems would not be able to detect these models.
Civitai's CEO Justin Maier told me in an email that "Uploaders must identify any content that depicts a real person; those uploads are automatically rejected." He also said the site uses a company called Clavata to flag well-known public figures, that people can "file a likeness claim" that will be reviewed and removed in 24 hours, and that it's piloting "an opt-in service with a third-party vendor so individuals can register a privacy-preserving face hash and have future uploads blocked at submission."
"No system is perfect with billions of unique faces, but combining these layers gives us the best coverage currently available for both celebrities and private individuals," Maier said. "We’ll keep tuning the models and expanding the registry pilot as the technology matures."
Update: This story has been updated with comment from Civitai CEO Justin Maier.
We begin this week with some scatological salvation. I dare not say more.
Then, swimming without a brain: It happens more often than you might think. Next, what was bigger as a baby than it is today? Hint: It’s still really big! And to close out, imagine the sights you’ll see with your infrared vision as you ride an elevator down to Mars.
The path to a more stable climate in Antarctica runs through the buttholes of penguins.
Penguin guano, the copious excrement produced by the birds, is rich in ammonia and methylamine gas. Scientists have now discovered that these guano-borne gases stimulate particle formation that leads to clouds and aerosols which, in turn, cool temperatures in the remote region. As a consequence, guano “may represent an important climate feedback as their habitat changes,” according to a new study.
“Our observations show that penguin colonies are a large source of ammonia in coastal Antarctica, whereas ammonia originating from the Southern Ocean is, in comparison, negligible,” said researchers led by Matthew Boyer of the University of Helsinki. “Dimethylamine, likely originating from penguin guano, also participates in the initial steps of particle formation, effectively boosting particle formation rates up to 10,000 times.”
Boyer and his colleagues captured their measurements from a site near Marambio Base on the Antarctic Peninsula, in the austral summer of 2023. At times when the site was downwind of a nearby colony of 60,000 Adélie penguins, the atmospheric ammonia concentration spiked to 1,000 times higher than baseline. Moreover, the ammonia levels remained elevated for more than a month after the penguins migrated from the area.
“The penguin guano ‘fertilized’ soil, also known as ornithogenic soil, continued to be a strong source of ammonia long after they left the site,” said the team. “Our data demonstrates that there are local hotspots around the coast of Antarctica that can yield ammonia concentrations similar in magnitude to agricultural plots during summer…This suggests that coastal penguin/bird colonies could also comprise an important source of aerosol away from the coast.”
“It is already understood that widespread loss of sea ice extent threatens the habitat, food sources, and breeding behavior of most penguin species that inhabit Antarctica,” the researchers continued. “Consequently, some Antarctic penguin populations are already declining, and some species could be nearly extinct by the end of the 21st century. We provide evidence that declining penguin populations could cause a positive climate warming feedback in the summertime Antarctic atmosphere, as proposed by a modeling study of seabird emissions in the Arctic region.”
The power of penguin poop truly knows no earthly bounds. Guano, already famous as a super-fertilizer and a pillar of many ecosystems, is also creating clouds out of thin air, with macro knock-on effects. These guano hotspots act as a bulwark against a rapidly changing climate in Antarctica, which is warming twice as fast as the rest of the world. We’ll need every tool we can get to curb climate change: penguin bums, welcome aboard.
The word “brainless” is bandied about as an insult, but the truth is that lots of successful lifeforms get around just fine without a brain. For instance, microbes can locomote through fluids—a complex action—with no centralized nervous system. Naturally, scientists were like, “what’s that all about?”
“So far, it remains unclear how decentralized decision-making in a deformable microswimmer can lead to efficient collective locomotion of its body parts,” said researchers led by Benedikt Hartl of TU Wien and Tufts University. “We thus investigate biologically motivated decentralized yet collective decision-making strategies of the swimming behavior of a generalized…swimmer.”
Bead-based simulated microorganism. Image: TU Wien
The upshot: Decentralized circuits regulate movements in brainless swimmers, an insight that could inspire robotic analogs for drug delivery and other functions. However, the real tip-of-the-hat goes to the concept artist for the above depiction of the team’s bead-based simulated microbe, who shall hereafter be known as Beady the Deformable Microswimmer.
Jupiter is pretty dang big at this current moment. More than 1,000 Earths could fit inside the gas giant; our planet is a mere gumball on these scales. But at the dawn of our solar system 4.5 billion years ago, Jupiter was at least twice as massive as it is today, and its magnetic field was 50 times stronger, according to a new study.
“Our calculations reveal that Jupiter was 2 to 2.5 times as large as it is today, 3.8 [million years] after the formation of the first solids in the Solar System,” said authors Konstantin Batygin of the California Institute of Technology and Fred Adams of the University of Michigan. “Our findings…provide an evolutionary snapshot that pins down properties of the Jovian system at the end of the protosolar nebula’s lifetime.”
The team based their conclusions on the subtle orbital tilts of two of Jupiter’s tiny moons, Amalthea and Thebe, which allowed them to reconstruct conditions in the early Jovian system. It’s nice to see Jupiter’s more offbeat moons get some attention; Europa is always hogging the spotlight. (Fun fact: lots of classic sci-fi stories are set on Amalthea, from Boris and Arkady Strugatsky’s “The Way to Amalthea” to Arthur C. Clarke’s “Jupiter Five.”)
I was hooked on this new study by the second sentence, which reads: “However, the capability to detect invisible multispectral infrared light with the naked eye is highly desirable.”
Okay, let's assume that the public is out there, highly desiring infrared vision, though I would like to see some poll-testing. A team has now developed an upconversion contact lens (UCL) that detects near-infrared light (NIR) and converts it into blue, green, and red wavelengths. While this is not the kind of inborn infrared vision you’d see in sci-fi, it does expand our standard retinal retinue, with fascinating results.
A participant having lenses fitted. Image: Yuqian Ma, Yunuo Chen, Hang Zhao
“Humans wearing upconversion contact lenses (UCLs) could accurately recognize near-infrared (NIR) temporal information like Morse code and discriminate NIR pattern images,” said researchers led by Yuqian Ma of the University of Science and Technology of China. “Interestingly, both mice and humans with UCLs exhibited better discrimination of NIR light compared with visible light when their eyes were closed, owing to the penetration capability of NIR light.”
The study reminds me of the legendary scene in Battlestar Galactica where Dean Stockwell, as John Cavil, exclaims: “I don't want to be human. I want to see gamma rays, I want to hear X-rays, and I want to smell dark matter.” Maybe he just needed some upgraded contact lenses!
This week in space elevator news, why not set one up on the Martian moon Phobos? A new study envisions anchoring a tether to Phobos, a dinky little space potato that’s about the size of Manhattan, and extending it out some 3,700 miles, almost to the surface of Mars. Because Phobos is tidally locked to Mars (the same side always faces the planet), it might be possible to shuttle back and forth between Mars and Phobos on a tether.
“The building of such a space elevator [is] a feasible project in the not too distant future,” said author Vladimir Aslanov of the Moscow Aviation Institute. “Such a project could form the basis of many missions to explore Phobos, Mars and the space around them.”
Indeed, this is far from the first time scientists have pondered the advantages of a Phobian space elevator. Just don’t be the jerk that pushes all the buttons.
On Tuesday, Google revealed the latest and best version of its AI video generator, Veo 3. It’s impressive not only in the quality of the video it produces, but also because it can generate audio that is supposed to seamlessly sync with the video. I’m probably going to test Veo 3 in the coming weeks like we test many new AI tools, but one odd feature I already noticed about it is that it’s obsessed with one particular dad joke, which raises questions about what kind of content Veo 3 is able to produce and how it was trained.
This morning I saw that an X user who was playing with Veo 3 generated a video of a stand up comedian telling a joke. The joke was: “I went to the zoo the other day, there was only one dog in it, it was a Shih Tzu.” As in: “shit zoo.”
Other users quickly replied that the joke was posted to Reddit’s r/dadjokes community two years ago, and to the r/jokes community 12 years ago.
I started testing Google’s new AI video generator to see if I could get it to generate other jokes I could trace back to specific Reddit posts. This would not be definitive proof that Reddit provided the training data that resulted in a specific joke, but it is a likely theory because we know Google is paying Reddit $60 million a year to license its content for training its AI models.
To my surprise, when I used the same prompt as the X user above—”a man doing stand up comedy in a small venue tells a joke (include the joke in the dialogue)”—I got a slightly different looking video, but the exact same joke.
And when I changed the prompt a bit—”a man doing stand up comedy tells a joke (include the joke in the dialogue)”—I still got a slightly different looking video with the exact same joke.
Google did not respond to a request for comment, so it’s impossible to say why its AI video generator is producing the same exact dad joke even when it’s not prompted to do so, and where exactly it sourced that joke. It could be from Reddit, but it could also be from many other places where the Shih Tzu joke has appeared across the internet, including YouTube, Threads, Instagram, Quora, icanhazdadjoke.com, houzz.com, Facebook, Redbubble, and Twitter, to name just a few. In other words, it’s a canonical corny dad joke of no clear origin that’s been posted online many times over the years, so it’s impossible to say where Google got it.
But it’s also not clear why this is the only coherent joke Google’s new AI video generator will produce. I’ve tried changing the prompts several times, and the result is either the Shih Tzu joke, gibberish, or incomplete fragments of speech that are not jokes.
One prompt that was almost identical to the one that produced the Shih Tzu joke resulted in a video of a stand up comedian saying he got a letter from the bank.
The prompt “a man telling a joke at a bar” resulted in a video of a man saying the idiom “you can’t have your cake and eat it too.”
The prompt “man tells a joke on stage” resulted in a video of a man saying some gibberish, then saying he went to the library.
Admittedly, these videos are hilarious in an absurd Tim & Eric kind of way because no matter what nonsense the comedian is saying the crowd always erupts into laughter, but it also clearly shows Google’s latest and greatest AI video generator is creatively limited in some ways. This is not the case with other generative AI tools, including Google’s own Gemini. When I asked Gemini to tell me a joke, the chatbot instantly produced different, coherent dad jokes. And when I asked it to do it over and over again, it always produced a different joke.
Again, it’s impossible to say what Veo 3 is doing behind the scenes without Google’s input, but one possible theory is that it’s falling back to a safe, known joke rather than producing the type of content that has embarrassed the company in the past, be it instructing users to eat glue or generating Nazi soldiers as people of color.
Civitai, an AI model sharing site backed by Andreessen Horowitz (a16z) that 404 Media has repeatedly shown is being used to generate nonconsensual adult content, lost access to its credit card payment processor.
According to an announcement posted to Civitai on Monday, the site will “pause” credit card payments starting Friday, May 23. At that time, users will no longer be able to buy “Buzz,” the on-site currency users spend to generate images, or start new memberships.
“Some payment companies label generative-AI platforms high risk, especially when we allow user-generated mature content, even when it’s legal and moderated,” the announcement said. “That policy choice, not anything users did, forced the cutoff.”
Civitai’s CEO Justin Maier told me in an email that the site has not been “cut off” from payment processing.
“Our current provider recently informed us that they do not wish to support platforms that allow AI-generated explicit content,” he told me. “Rather than remove that category, we’re onboarding a specialist high-risk processor so that service to creators and customers continues without interruption. Out of respect for ongoing commercial negotiations, we’re not naming either the incumbent or the successor until the transition is complete.”
The announcement tells users that they can “stock up on Buzz” or switch to annual memberships to prepare for May 23. It also says that it should start accepting crypto and ACH checkout (direct transfer from a bank account) within a week, and that it should start taking credit card payments again with a new provider next month.
“Civitai is not shutting down,” the announcement says. “We have months of runway. The site, community, and creator payouts continue unchanged. We just need a brief boost from you while we finish new payment rails.”
In April, Civitai announced new policies it put in place because payment processors were threatening to cut it off unless it made changes to the kind of adult content that was allowed on the site. These included new rules against adult content involving diapers and guns, and further restrictions on content depicting the likeness of real people.
The announcement on Civitai Monday said that “Those changes opened some doors, but the processors ultimately decided Civitai was still outside their comfort zone.”
In the comments below the announcement, Civitai users debated how the site is handling the situation.
“This might be an unpopular opinion, but I think you need to get rid of all celebrity LoRA [custom AI models] on the site, honestly,” the top comment said. “Especially with the Take It Down Act, the risk is too high. Sorry this is happening to you guys. I do love this site. Edit: bought an annual sub to try and help.”
“If it wasn't for the porn there would be considerably less revenue and traffic,” another commenter replied. “And technically it's not about the porn, it's about the ability to have free expression to create what you want to create without being blocked to do so.”
404 Media has published several stories since 2023 showing that Civitai is often used by people to produce nonconsensual content. Earlier today we published a story showing its on-site AI video generator was producing nonconsensual porn of anyone.
Civitai, an AI model sharing site backed by Andreessen Horowitz (a16z), is allowing users to AI generate nonconsensual porn of real people, despite the site’s policies against this type of content, increased moderation efforts, and threats from payment processors to deny Civitai service.
After I reached out for comment about this issue, Civitai told me it fixed the site’s moderation “configuration issue” that allowed users to do this. After Civitai said it fixed this issue, its AI video generator no longer created nonconsensual videos of celebrities, but at the time of writing it is still allowing people to generate nonconsensual videos of non-celebrities.
Doom: The Dark Ages, Bethesda’s recently released prequel to the demon slaughtering first-person shooter, is using anti-piracy software that’s locking out Linux users who paid for the game.
According to multiple posts on Reddit, Doom: The Dark Ages uses the infamous anti-piracy software Denuvo. One Reddit user on the Linux gaming subreddit said that they were getting a black screen in the game when using FSR, AMD’s technology for upscaling and frame generation which basically makes games look better and run faster. In an attempt to troubleshoot the problem, this person tried testing the game on different versions of Proton, a compatibility layer developed by Valve that allows games that were designed to run on Windows to work on Linux-based operating systems. Denuvo detected these tests as “multiple activations” of the game, and locked the Reddit user out of the game for 24 hours.
Industrial Light & Magic (ILM), the visual effects studio that practically invented the field as we know it today, revealed how it thinks it will use generative AI in the future, and that future looks really bad.
Much of what we understand today as special effects in movies was born at ILM, which was built to produce many of the iconic shots in Star Wars: A New Hope. Since 1977, through the ages of miniature models, puppeteering, and the bleeding edge of computer generated images, ILM has remained at the forefront of making the impossible come alive on movie screens.
Game engine company Unity is threatening to pull the licenses for RocketWerkz, the studio founded by DayZ developer Dean Hall, for reasons Hall told me are unfounded.
Hall first posted about this situation to the Reddit game development community r/gamedev on Friday, where he said “Unity is currently sending emails threatening longtime developers with disabling their access completely over bogus data about private versus public licenses.”
According to the initial email from Unity, which was provided to me by Hall, Unity claimed that RocketWerkz is “mixing” Unity license types and demanded that the studio “take immediate action” to fix this, or Unity reserves the right to revoke the developer’s access to existing licenses on May 16. Essentially, Unity is accusing RocketWerkz of using free “Personal” licenses to work on commercial products that Unity says require paid “Pro” licenses. Hall says this is not true. If the company’s licenses are revoked, RocketWerkz will not be able to keep updating and maintaining Stationeers, a game it released in 2017, or continue development on its upcoming project Torpedia.
Hall told me that one of his concerns is not just that Unity is threatening to pull its licenses, but that it’s not clear how it collected and used the data to make that decision.
“How is Unity gathering data to decide whether a company ‘has enough’ pro licenses?” he told me in an email. “It appears to me, they are scraping a lot of personal data and drawing disturbing conclusions from this data. Is this data scraping meeting GDPR requirements?”
Unity has a variety of plans developers can use, ranging from a free “Personal” version to an “Industry” version that costs up to $4,950 a year. The more expensive plans come with more features and support. More importantly, games with revenue or funding greater than $200,000 in a 12-month period have to at least pay for a Unity Pro license, which costs $2,200 a year per license. Unity infamously outraged the games industry when it tried to add additional fees to this pricing scheme in 2023, a strategy that was so disastrous for the company that it reversed course and dumped its CEO.
According to Hall’s Reddit post, RocketWerkz pays for multiple licenses, which it has spent about $300,000 on since the company was founded in 2014. He also shared an invoice with me showing the company paid $36,420 for 18 Unity Pro licenses in December of 2024, which are good until December of 2025. Game developers need to buy a license for each of their employees, or one license per “seat,” meaning each person who will be using it. Paying for monthly or annual access to software, instead of buying, owning, and using software however you like, is increasingly common. Even very popular consumer software like Adobe’s apps and Microsoft Office has shifted to this model in recent years.
According to an email Unity sent to RocketWerkz after it asked for clarification, which I have viewed, Unity claims that there are five people at the studio who are using Personal licenses who should be using Pro licenses. Unity lists their emails, which show two @Rocketwerkz.com emails and three emails with domain names obscured by Unity.
Hall says that of those people, one is a RocketWerkz employee who has a Personal Unity account but does not work on a Unity project at the studio, and one is a RocketWerkz employee the company currently pays for a Pro license for. Another email belongs to a contractor who did some work for RocketWerkz in 2024, and whose Pro license the company paid for at the time, and the other two belong to employees at two different companies, which, like RocketWerkz, are also based in New Zealand. These two employees never worked at RocketWerkz. One works at Weta Workshop, the visual effects company that worked on Lord of the Rings. Hall also shared an image of the Unity dashboard showing the studio is currently paying for Pro licenses for the employees Unity says are using Personal licenses.
“There is a lot of unknowns here, and I don't have much to go on yet—but I do wonder if there are serious data violations going on with Unity—and they appear to be threatening to use this data to close down developer accounts,” Hall told me. “How will this affect users who don't have the clout I do?”
Essentially, it’s not clear how Unity, which can see what RocketWerkz is paying it, what for, and who is using those licenses, is determining that the studio doesn’t have enough licenses. Especially since Unity claims RocketWerkz should be paying for licenses for people who never worked at the studio and are seemingly not connected to it other than being located in the same country.
Unity did not respond to a request for comment.
On Reddit, Hall said that on one hand he feels “vindicated” that Unity’s recent strategies, which have been hostile to game developers, led to bad business outcomes, but that many small developers rely on it, and that this will be bad for them also.
“They will take with them so many small studios. They are the ones that will pay the price. So many small developers, amazing teams, creating games just because they love making games,” Hall said. “One day, after some private equity picks up Unity's rotting carcass, these developers will to login to the Unity launcher but won't be able to without going through some crazy hoops or paying a lot more.”