Podcast: The Life Changing Power of Lifting

For this week’s podcast, I’m talking to our friend Casey Johnston, a tech journalist turned fitness journalist turned independent journalist. Casey studied physics, which led her to tech journalism; she did some of my favorite coverage of Internet culture as well as Apple’s horrendous butterfly laptop keyboards. We worked together at VICE, where Casey was an editor and where she wrote Ask a Swole Woman, an advice column about weightlifting. After she left VICE, Casey founded She’s a Beast, an independent site about weightlifting, but also about the science of diet culture, fitness influencers on the internet, the intersections of all those things, etc. 

She just wrote A Physical Education: How I Escaped Diet Culture and Gained the Power of Lifting, a really great reported memoir about how our culture and the media often discourage people from lifting, and how this type of exercise can be really beneficial to your brain and your body. I found the book really inspiring and actually started lifting right after I read it. In this interview we talk about her book, about journalism, about independent media, and about how doing things like lifting weights and touching grass helps us navigate the world.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version, which will be in the show notes in your podcast player as well.

Gone Fishin': 404 Media Summer Break 2025

This week, we’re going to try something new at 404 Media. Which is to say we’re going to try doing nothing at all. The TL;DR is that 404 Media is taking the week off, so this is the only email you’ll get from us this week. No posts on the website (except a scheduled one for the podcast). We will be back with your regularly scheduled dystopia Monday, July 7. 

We’re doing this to take a quick break to recharge. Over the nearly two years since we founded 404 Media, each of us has individually taken some (very limited) vacations. And when one of us takes time off, it just means that the others have to carry their workload. We’re not taking this time to do an offsite, or brainstorm blue sky ideas. Some of us are quite literally gone fishin’. So, for the first time ever: A break!

We are not used to breaks, because we know that the best way to build an audience and a business of people who read our articles is to actually write a lot of articles, and so that’s what we’ve been doing. The last few months have been particularly wild, as we’ve covered Elon Musk’s takeover of the federal government, the creeping surveillance state, Trump’s mass deportation campaign, AI’s role in stomping over workers, the general destruction of the internet, etc etc etc. At the moment we have more story leads than we can possibly get to and are excited for the second half of the year. We’ve also published a lot of hopeful news, too, including instances where people fight back against powerful forces or solve universal mysteries, or when companies are forced to do the right thing in response to our reporting, or when lawmakers hold tech giants to account as a result of our investigations. But in an industry that has become obsessed with doing more with less and publishing constantly, we have found that publishing quality journalism you can’t find anywhere else is a good way to run a business, which means we thankfully don’t have to cover everything, everywhere, all at once.

When we founded 404 Media in August 2023, we had no idea if anyone would subscribe, and we had no idea how it would go. We took zero investment from anyone and hoped that if we did good work often enough, enough people would decide that they wanted to support independent journalism that we could make a job out of it, and that we could make a sustainable business that would work for the long haul. We did not and do not take that support for granted. But because of your support, we now feel like we don’t have to scratch and claw for every possible new dollar we can get, and you have given us the breathing room in our business to quite literally take a breather, and to let the other folks who make this website possible, such as those who help us out with our social accounts, take a paid breather as well. 

And if you want to subscribe to support our work, you can do so here.

We are not tired, exactly. In fact, we all feel more energized and ambitious than ever, knowing there are so many people out there who enjoy our work and are willing to financially support it. But we also don’t want to burn ourselves out and therefore, school’s out for summer (for one week). This week’s podcast is an interview Jason recorded with our friend Casey Johnston a few weeks ago; it’ll be the only new content this week. We’ll be back to it next Monday. Again, thank you all. Also, if you want, open thread in the comments to chat about whatever is going on out there or whatever is on your mind.

Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not

A federal judge in California ruled Monday that Anthropic likely violated copyright law when it pirated authors’ books to create a giant dataset and "forever" library but that training its AI on those books without authors' permission constitutes transformative fair use under copyright law. The complex decision is one of the first of its kind in a series of high-profile copyright lawsuits brought by authors and artists against AI companies, and it’s largely a very bad decision for authors, artists, writers, and web developers. 

This case, in which authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued Anthropic, maker of the Claude family of large language models, is one of dozens of high-profile lawsuits brought against AI giants. The authors sued Anthropic because the company scraped full copies of their books for the purposes of training its AI models from a now-notorious dataset called Books3, as well as from the piracy websites LibGen and Pirate Library Mirror (PiLiMi). The suit also claims that Anthropic bought used physical copies of books and scanned them for the purposes of training AI.

"From the start, Anthropic ‘had many places from which’ it could have purchased books, but it preferred to steal them to avoid ‘legal/practice/business slog,’ as cofounder and chief executive officer Dario Amodei put it. So, in January or February 2021, another Anthropic cofounder, Ben Mann, downloaded Books3, an online library of 196,640 books that he knew had been assembled from unauthorized copies of copyrighted books — that is, pirated," William Alsup, a federal judge for the Northern District of California, wrote in his decision Monday. "Anthropic’s next pirated acquisitions involved downloading distributed, reshared copies of other pirate libraries. In June 2021, Mann downloaded in this way at least five million copies of books from Library Genesis, or LibGen, which he knew had been pirated. And, in July 2022, Anthropic likewise downloaded at least two million copies of books from the Pirate Library Mirror, or PiLiMi, which Anthropic knew had been pirated."

Meta Invents New Way to Humiliate Users With Feed of People's Chats With AI

I was sick last week, so I did not have time to write about the Discover Tab in Meta’s AI app, which, as Katie Notopoulos of Business Insider has pointed out, is the “saddest place on the internet.” Many very good articles have already been written about it, and yet, I cannot allow its existence to go unremarked upon in the pages of 404 Media. 

If you somehow missed this while millions of people were protesting in the streets, state politicians were being assassinated, war was breaking out between Israel and Iran, the military was deployed to the streets of Los Angeles, and a Coinbase-sponsored military parade rolled past dozens of passersby in Washington, D.C., here is what the “Discover” tab is: The Meta AI app, which is the company’s competitor to the ChatGPT app, is posting users’ conversations on a public “Discover” page where anyone can see the things that users are asking Meta’s chatbot to make for them. 

This includes various innocuous image and video generations that have become completely inescapable on all of Meta’s platforms (things like “egg with one eye made of black and gold,” “adorable Maltese dog becomes a heroic lifeguard,” “one second for God to step into your mind”), but it also includes entire chatbot conversations where users are seemingly unknowingly leaking a mix of embarrassing, personal, and sensitive details about their lives onto a public platform owned by Mark Zuckerberg. In almost all cases, I was able to trivially tie these chats to actual, real people because the app uses your Instagram or Facebook account as your login.

Emails Reveal the Casual Surveillance Alliance Between ICE and Local Police

📄
This article was primarily reported using public records requests. We are making it available to all readers as a public service. FOIA reporting can be expensive; please consider subscribing to 404 Media to support this work. Or send us a one-time donation via our tip jar here.

Local police in Oregon casually offered various surveillance services to federal law enforcement officials from the FBI and ICE, and to other state and local police departments, as part of an informal email and meetup group of crime analysts, internal emails shared with 404 Media show. 

In the email thread, crime analysts from several local police departments and the FBI introduced themselves to each other and made lists of surveillance tools and tactics they have access to and felt comfortable using, and in some cases offered to perform surveillance for their colleagues in other departments. The thread also includes a member of ICE’s Homeland Security Investigations (HSI) and members of Oregon’s State Police. In the thread, called the “Southern Oregon Analyst Group,” some members talked about making fake social media profiles to surveil people, and others discussed being excited to learn and try new surveillance techniques. The emails show both the wide array of surveillance tools available to even small police departments in the United States and the informal collaboration between local police departments and federal agencies, when ordinarily agencies like ICE are expected to follow their own legal processes for carrying out surveillance.

In one case, a police analyst for the city of Medford, Oregon, performed Flock automated license plate reader (ALPR) lookups for a member of ICE’s HSI; later, that same police analyst asked the HSI agent to search for specific license plates in DHS’s own border crossing license plate database. The emails show the extremely casual and informal nature of what partnerships between police departments and federal law enforcement can look like, which may help explain the mechanics of how local police around the country are performing Flock automated license plate reader lookups for ICE and HSI even though neither group has a contract to use the technology, as 404 Media reported last month.

An email showing HSI asking for a license plate lookup from police in Medford, Oregon

Kelly Simon, the legal director for the American Civil Liberties Union of Oregon, told 404 Media “I think it’s a really concerning thread to see, in such a black-and-white way. I have certainly never seen such informal, free-flowing of information that seems to be suggested in these emails.”

In that case, in 2021, a crime analyst with HSI emailed an analyst at the Medford Police Department with the subject line “LPR Check.” The email from the HSI analyst, who is also based in Medford, said they were told to “contact you and request a LPR check on (2) vehicles,” and then listed the license plates of two vehicles. “Here you go,” the Medford Police Department analyst responded with details of the license plate reader lookup. “I only went back to 1/1/19, let me know if you want me to check further back.” In 2024, the Medford police analyst emailed the same HSI agent and told him that she was assisting another police department with a suspected sex crime and asked him to “run plates through the border crossing system,” meaning the federal ALPR system at the Canada-US border. “Yes, I can do that. Let me know what you need and I’ll take a look,” the HSI agent said. 

More broadly, the emails, obtained using a public records request by Information for Public Use, an anonymous group of researchers in Oregon who have repeatedly uncovered documents about government surveillance, reveal the existence of the “Southern Oregon Analyst Group.” The emails span between 2021 and 2024 and show local police eagerly offering various surveillance services to each other as part of their own professional development. 

In a 2023 email thread where different police analysts introduced themselves, they explained to each other what types of surveillance software they had access to, which ones they use the most often, and at times expressed an eagerness to try new techniques. 

“This is my first role in Law Enforcement, and I've been with the Josephine County Sheriff's Office for 6 months, so I'm new to the game,” an email from a former Pinkerton security contractor to officials at 10 different police departments, the FBI, and ICE, reads. “Some tools I use are Flock, TLO, Leads online, WSIN, Carfax for police, VIN Decoding, LEDS, and sock puppet social media accounts. In my role I build pre-raid intelligence packages, find information on suspects and vehicles, and build link charts showing connections within crime syndicates. My role with [Josephine Marijuana Enforcement Team] is very intelligence and research heavy, but I will do the occasional product with stats. I would love to be able to meet everyone at a Southern Oregon analyst meet-up in the near future. If there is anything I can ever provide anyone from Josephine County, please do not hesitate to reach out!” The surveillance tools listed here include automatic license plate reading technology, social media monitoring tools, people search databases, and car ownership history tools. 

An investigations specialist with the Ashland Police Department messaged the group, said she was relatively new to performing online investigations, and said she was seeking additional experience. “I love being in a support role but worry patrol doesn't have confidence in me. I feel confident with searching through our local cad portal, RMS, Evidence.com, LeadsOnline, carfax and TLO. Even though we don't have cameras in our city, I love any opportunity to search for something through Flock,” she said. “I have much to learn with sneaking around in social media, and collecting accurate reports from what is inputted by our department.”

A crime analyst with the Medford Police Department introduced themselves to the group by saying “The Medford Police Department utilizes the license plate reader systems, Vigilant and Flock. In the next couple months, we will be starting our transition to the Axon Fleet 3 cameras. These cameras will have LPR as well. If you need any LPR searches done, please reach out to me or one of the other analysts here at MPD. Some other tools/programs that we have here at MPD are: ESRI, Penlink PLX, CellHawk, TLO, LeadsOnline, CyberCheck, Vector Scheduling/CrewSense & Guardian Tracking, Milestone XProtect city cameras, AXON fleet and body cams, Lexipol, HeadSpace, and our RMS is Central Square (in case your agency is looking into purchasing any of these or want more information on them).”

A fourth analyst said “my agency uses Tulip, GeoShield, Flock LPR, LeadsOnline, TLO, Axon fleet and body cams, Lexipol, LEEP, ODMap, DMV2U, RISS/WSIN, Crystal Reports, SSRS Report Builder, Central Square Enterprise RMS, Laserfiche for fillable forms and archiving, and occasionally Hawk Toolbox.” Several of these tools are enterprise software solutions for police departments, which include things like police report management software, report creation software, and stress management and wellbeing software, but many of them are surveillance tools.  

At one point in the 2023 thread, an intelligence analyst for the FBI’s Portland office chimed in, introduced himself, and said “I think I've been in contact with most folks on this email at some point in the past […] I look forward to further collaboration with you all.”

Members of the email thread also planned in-person meetups and a “mini-conference” last year that featured a demo from a company called CrimeiX, which makes a police information-sharing tool.

A member of Information for Public Use told 404 Media “it’s concerning to me to see them building a network of mass surveillance.”

“Automated license plate recognition software technology is something that in and of itself, communities are really concerned about,” the member of Information for Public Use said. “So I think when we combine this very obvious mass surveillance technology with a network of interagency crime analysts that includes local police who are using sock puppet accounts to spy on anyone and their mother and then that information is being pretty freely shared with federal agents, you know, including Homeland Security Investigations, and we see the FBI in the emails as well. It's pretty disturbing.” They added, as we have reported before, that many of these technologies were deployed under previous administrations but have become even more alarming when combined with the fact that the Trump administration has changed the priorities of ICE and Homeland Security Investigations. 

“The whims of the federal administration change, and this technology can be pointed in any direction,” they said. “Local law enforcement might be justifying this under the auspices of we're fighting some form of organized crime, but one of the crimes HSI investigates is work site enforcement investigations, which sound exactly like the kind of raids on workplaces that like the country is so upset about right now.”

Simon, of ACLU Oregon, said that such informal collaboration is not supposed to be happening in Oregon.

“We have, in Oregon, a lot of really strong protections that ensure that our state resources, including at the local level, are not going to support things that Oregonians disagree with or have different values around,” she said. “Oregon has really strong firewalls between local resources, and federal resources or other state resources when it comes to things like reproductive justice or immigrant justice. We have really strong shield laws, we have really strong sanctuary laws, and when I see exchanges like this, I’m very concerned that our firewalls are more like sieves because of this kind of behind-the-scenes, lax approach to protecting the data and privacy of Oregonians.”

Simon said that collaboration between federal and local cops on surveillance should happen “with the oversight of the court. Getting a warrant to request data from a local agency seems appropriate to me, and it ensures there’s probable cause, that the person whose information is being sought is sufficiently suspected of a crime, and that there are limits to the scope and amount of information that's being sought and specifics about what information is being sought. That's the whole purpose of a warrant.”

Over the last several weeks, our reporting has led multiple municipalities to reconsider how the license plate reading technology Flock is used, and it has spurred an investigation by the Illinois Secretary of State’s office into the legality of using Flock cameras in the state for immigration-related searches, because Illinois specifically forbids local police from assisting federal police on immigration matters.

404 Media contacted all of the police departments on the Southern Oregon Analyst Group for comment and to ask them about any guardrails they have for the sharing of surveillance tools across departments or with the federal government. Geoffrey Kirkpatrick, a lieutenant with the Medford Police Department, said the group is “for professional networking and sharing professional expertise with each other as they serve their respective agencies.” 

“The Medford Police Department’s stance on resource-sharing with ICE is consistent with both state law and federal law,” Kirkpatrick said. “The emails retrieved for that 2025 public records request showed one single instance of running LPR information for a Department of Homeland Security analyst in November 2021. Retrieving those files from that single 2021 matter to determine whether it was an DHS case unrelated to immigration, whether a criminal warrant existed, etc would take more time than your publication deadline would allow, and the specifics of that one case may not be appropriate for public disclosure regardless.” (404 Media reached out to Medford Police Department a week before this article was published). 

A spokesperson for the Central Point Police Department said it “utilizes technology as part of investigations, we follow all federal, state, and local law regarding use of such technology and sharing of any such information. Typically we do not use our tools on behalf of other agencies.”

A spokesperson for Oregon’s Department of Justice said it did not have comment and does not participate in the group. The other police departments in the group did not respond to our request for comment.

John Deere Must Face FTC Lawsuit Over Its Tractor Repair Monopoly, Judge Rules

A judge ruled that John Deere must face a lawsuit from the Federal Trade Commission and five states over its tractor and agricultural equipment repair monopoly, and rejected the company’s argument that the case should be thrown out. This means Deere is now facing both a class action lawsuit and a federal antitrust lawsuit over its repair practices.

The FTC’s lawsuit against Deere was filed under former FTC chair Lina Khan in the final days of Joe Biden’s presidency, but the Trump administration’s FTC has decided to continue to pursue the lawsuit, indicating that right to repair remains a bipartisan issue in a politically divided nation in which so few issues are agreed on across the aisle. Deere argued that neither the federal government nor the state governments joining the case had standing to sue it, and that claims of its monopolization of the repair market and unfair labor practices were not sufficient; Judge Iain D. Johnston of the Northern District of Illinois did not agree, and said the lawsuit can and should move forward.

Johnston is also the judge in the class action lawsuit against Deere, which he also ruled must proceed. In his pretty sassy ruling, Johnston said that Deere repeated many of the same arguments that had not persuaded him in the class action suit.

“Sequels so rarely beat their originals that even the acclaimed Steve Martin couldn’t do it on three tries. See Cheaper by the Dozen II, Pink Panther II, Father of the Bride II,” Johnston wrote. “Rebooting its earlier production, Deere sought to defy the odds. To be sure, like nearly all sequels, Deere edited the dialogue and cast some new characters, giving cameos to veteran stars like Humphrey’s Executor [a court decision]. But ultimately the plot felt predictable, the script derivative. Deere I received a thumbs-down, and Deere II fares no better. The Court denies the Motion for judgment on the pleadings.”

Johnston highlighted, as we have repeatedly shown with our reporting, that in order to repair a newer John Deere tractor, farmers need access to a piece of software called Service Advisor, which is used by John Deere dealerships. Parts are also difficult to come by. 

“Even if some farmers knew about the restrictions (a fact question), they might not be aware of or appreciate at the purchase time how those restrictions will affect them,” Johnston wrote. “For example: How often will repairs require Deere’s ADVISOR tool? How far will they need to travel to find an Authorized Dealer? How much extra will they need to pay for Deere parts?”

You can read more about the FTC’s lawsuit against Deere here and more about the class action lawsuit in our earlier coverage here.

GitHub is Leaking Trump’s Plans to 'Accelerate' AI Across Government

The federal government is working on a website and API called “ai.gov” to “accelerate government innovation with AI” that is supposed to launch on July 4 and will include an analytics feature that shows how much a specific government team is using AI, according to an early version of the website and code posted by the General Services Administration on Github. 

The page is being created by the GSA’s Technology Transformation Services, which is being run by former Tesla engineer Thomas Shedd. Shedd previously told employees that he hopes to AI-ify much of the government. AI.gov appears to be an early step toward pushing AI tools into agencies across the government, code published on Github shows. 

“Accelerate government innovation with AI,” an early version of the website, which is linked to from the GSA TTS Github, reads. “Three powerful AI tools. One integrated platform.” The early version of the page suggests that its API will integrate with OpenAI, Google, and Anthropic products. But code for the API shows they are also working on integrating with Amazon Web Services’ Bedrock and Meta’s LLaMA. The page suggests it will also have an AI-powered chatbot, though it doesn’t explain what it will do. 

The Github page says “launch date - July 4.” Currently, AI.gov redirects to whitehouse.gov. The demo website is linked to from Github (archive here) and is hosted on cloud.gov on what appears to be a staging environment. The text on the page does not show up on other websites, suggesting that it is not generic placeholder text.

Elon Musk’s Department of Government Efficiency made integrating AI into normal government functions one of its priorities. At GSA’s TTS, Shedd has pushed his team to create AI tools that the rest of the government will be required to use. In February, 404 Media obtained leaked audio from a meeting in which Shedd told his team they would be creating “AI coding agents” that would write software across the entire government, and said he wanted to use AI to analyze government contracts. 

“We want to start implementing more AI at the agency level and be an example for how other agencies can start leveraging AI … that’s one example of something that we’re looking for people to work on,” Shedd said. “Things like making AI coding agents available for all agencies. One that we've been looking at and trying to work on immediately within GSA, but also more broadly, is a centralized place to put contracts so we can run analysis on those contracts.”

Government employees we spoke to at the time said the internal reaction to Shedd’s plan was “pretty unanimously negative,” and pointed out numerous ways this could go wrong, from AI unintentionally introducing security issues or bugs into code to suggesting that critical contracts be killed.

The GSA did not immediately respond to a request for comment.

Waymo Pauses Service in Downtown LA Neighborhood Where They're Getting Lit on Fire

Waymo told 404 Media that it is still operating in Los Angeles after several of its driverless cars were lit on fire during anti-ICE protests over the weekend, but that it has temporarily disabled the cars’ ability to drive into downtown Los Angeles, where the protests are happening. 

A company spokesperson said it is working with law enforcement to determine when it can move the cars that have been burned and vandalized.

Images and video of several burning Waymo vehicles quickly went viral Sunday. 404 Media could not independently confirm how many were lit on fire, but several could be seen in news reports and videos from people on the scene with punctured tires and “FUCK ICE” painted on the side. 

Waymo car completely engulfed in flames.

Alejandra Caraballo (@esqueer.net) 2025-06-09T00:29:47.184Z

Because Waymos rely on video cameras that constantly record their surroundings in order to function, police have begun to look at them as sources of surveillance footage. In April, we reported that the Los Angeles Police Department had obtained footage from a Waymo while investigating another driver who hit a pedestrian and fled the scene.

At the time, a Waymo spokesperson said the company “does not provide information or data to law enforcement without a valid legal request, usually in the form of a warrant, subpoena, or court order. These requests are often the result of eyewitnesses or other video footage that identifies a Waymo vehicle at the scene. We carefully review each request to make sure it satisfies applicable laws and is legally valid. We also analyze the requested data or information, to ensure it is tailored to the specific subject of the warrant. We will narrow the data provided if a request is overbroad, and in some cases, object to producing any information at all.”

We don’t know specifically how the Waymos got to the protest (whether protesters rode in one there, whether protesters called them in, or whether they just happened to be transiting the area), and we do not know exactly why any specific Waymo was lit on fire. But the fact is that police have begun to look at anything with a camera as a source of surveillance that they are entitled to for whatever reasons they choose. So even though driverless cars nominally have nothing to do with law enforcement, police are treating them as though they are their own roving surveillance cameras.

TSA Working on Haptic Tech To 'Feel' Your Body in Virtual Reality

The Department of Homeland Security (DHS) and Transportation Security Administration (TSA) are researching an incredibly wild virtual reality technology that would allow TSA agents to use VR goggles and haptic feedback gloves to pat down and feel airline passengers at security checkpoints without actually touching them. The agency calls this a “touchless sensor that allows a user to feel an object without touching it.”

Information sheets released by DHS and patent applications describe a series of sensors that would map a person or object’s “contours” in real time in order to digitally replicate it within the agent’s virtual reality system. This system would include a “haptic feedback pad” which would be worn on an agent’s hand. This would then allow the agent to inspect a person’s body without physically touching them in order to ‘feel’ weapons or other dangerous objects. A DHS information sheet released last week describes it like this: 

“The proposed device is a wearable accessory that features touchless sensors, cameras, and a haptic feedback pad. The touchless sensor system could be enabled through millimeter wave scanning, light detection and ranging (LiDAR), or backscatter X-ray technology. A user fits the device over their hand. When the touchless sensors in the device are within range of the targeted object, the sensors in the pad detect the target object’s contours to produce sensor data. The contour detection data runs through a mapping algorithm to produce a contour map. The contour map is then relayed to the back surface that contacts the user’s hand through haptic feedback to physically simulate a sensation of the virtually detected contours in real time.”

The system “would allow the user to ‘feel’ the contour of the person or object without actually touching the person or object,” a patent for the device reads. “Generating the mapping information and physically relaying it to the user can be performed in real time.” The information sheet says it could be used for security screenings but also proposes it for "medical examinations."

A screenshot from the patent application that shows a diagram of virtual hands roaming over a person's body

The seeming reason for researching this tool is that a TSA agent would get the experience and sensation of touching a person without actually touching the person, which the DHS researchers seem to believe is less invasive. The DHS information sheet notes that a “key benefit” of this system is it “preserves privacy during body scanning and pat-down screening” and “provides realistic virtual reality immersion,” and notes that it is “conceptual.” But DHS has been working on this for years, according to patent filings by DHS researchers that date back to 2022.

Whether it is actually less invasive to have a TSA agent in VR goggles and haptic gloves feel you up, either while standing near you or while sitting in another room, is something that is going to vary from person to person. TSA patdowns are notoriously invasive, as many have pointed out through the years. One privacy expert who showed me the documents, but was not authorized by their employer to speak to the press about this, said, “I guess the idea is that the person being searched doesn't feel a thing, but the TSA officer can get all up in there? The officer can feel it ... and perhaps that’s even more invasive (or inappropriate)? All while also collecting a 3D rendering of your body.” (The documents say the system limits the display of sensitive parts of a person’s body, which I explain more below.)

A screenshot from the patent application that explains how a "Haptic Feedback Algorithm" would map a person's body

There are some pretty wacky graphics in the patent filings, some of which show how it would be used to sort-of-virtually pat down someone’s chest and groin (or “belt-buckle”/“private body zone,” according to the patent). One of the patents notes that “embodiments improve the passenger’s experience, because they reduce or eliminate physical contacts with the passenger.” It also claims that only the goggles user will be able to see the image being produced and that only limited parts of a person’s body will be shown “in sensitive areas of the body, instead of the whole body image, to further maintain the passenger’s privacy.” It says that the system as designed “creates a unique biometric token that corresponds to the passenger.” 

A separate patent for the haptic feedback system part of this shows diagrams of what the haptic glove system might look like and notes all sorts of potential sensors that could be used, from cameras and LiDAR to one that “involves turning ultrasound into virtual touch.” It adds that the haptic feedback sensor can “detect the contour of a target (a person and/or an object) at a distance, optionally penetrating through clothing, to produce sensor data.”

Diagram of smiling man wearing a haptic feedback glove
A drawing of the haptic feedback glove

DHS has been obsessed with augmented reality, virtual reality, and AI for quite some time. Researchers at San Diego State University, for example, proposed an AR system that would help DHS “see” terrorists at the border using HoloLens headsets in some vague, nonspecific way. Customs and Border Protection has proposed “testing an augmented reality headset with glassware that allows the wearer to view and examine a projected 3D image of an object” to try to identify counterfeit products.

DHS acknowledged a request for comment but did not provide one in time for publication. 

The IRS Tax Filing Software TurboTax Is Trying to Kill Just Got Open Sourced

The IRS open sourced much of its incredibly popular Direct File software even as the free tax filing program is at risk of being killed by Intuit’s lobbyists and Donald Trump’s megabill. Meanwhile, several top developers who worked on the software have left the government and joined a project to explore the “future of tax filing” in the private sector.

Direct File is a piece of software created by developers at the US Digital Service and 18F, the former of which became DOGE and is now unrecognizable, and the latter of which was killed by DOGE. Direct File has been called a “free, easy, and trustworthy” piece of software that made tax filing “more efficient.” About 300,000 people used it last year as part of a limited pilot program, and those who did gave it incredibly positive reviews, according to reporting by Federal News Network.

But because it is free and because it is an example of government working, Direct File and the IRS’s Free File program more broadly have been the subject of years of lobbying efforts by financial technology giants like Intuit, which makes TurboTax. DOGE sought to kill Direct File, and currently, there is language in Trump’s massive budget reconciliation bill that would kill Direct File. Experts say that “ending [the] Direct File program is a gift to the tax-prep industry that will cost taxpayers time and money.”

That means it’s quite big news that the IRS released most of the code that runs Direct File on Github last week. And, separately, three people who worked on it—Chris Given, Jen Thomas, and Merici Vinton—have left government to join the Economic Security Project’s Future of Tax Filing Fellowship, where they will research ways to make filing taxes easier, cheaper, and more straightforward. They will be joined by Gabriel Zucker, who worked on Direct File as part of Code for America.

Teachers Are Not OK

Last month, I wrote an article about how schools were not prepared for ChatGPT and other generative AI tools, based on thousands of pages of public records I obtained from when ChatGPT was first released. As part of that article, I asked teachers to tell me how AI has changed how they teach.

The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses. 

One thing is clear: teachers are not OK. 

They describe trying to grade “hybrid essays half written by students and half written by robots,” trying to teach Spanish to kids who don’t know the meaning of the words they’re trying to teach them in English, and students who use AI in the middle of conversation. They describe spending hours grading papers that took their students seconds to generate: “I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student,” one teacher told me. “That sure feels like bullshit.”

💡
Have you lost your job to an AI? Has AI radically changed how you work (whether you're a teacher or not)? I would love to hear from you. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at jason@404media.co.

Below, I have compiled some of the responses I got. Some of the teachers were comfortable with their responses being used on the record along with their names. Others asked that I keep them anonymous because their school or school district forbids them from speaking to the press. The responses have been edited by 404 Media for length and clarity, but they are still really long. These are teachers, after all. 

Robert W. Gehl, Ontario Research Chair of Digital Governance for Social Justice at York University in Toronto

Simply put, AI tools are ubiquitous. I am on academic honesty committees and the number of cases where students have admitted to using these tools to cheat on their work has exploded.

I think generative AI is incredibly destructive to our teaching of university students. We ask them to read, reflect upon, write about, and discuss ideas. That's all in service of our goal to help train them to be critical citizens. GenAI can simulate all of the steps: it can summarize readings, pull out key concepts, draft text, and even generate ideas for discussion. But that would be like going to the gym and asking a robot to lift weights for you. 

"Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased."

We need to rethink higher ed, grading, the whole thing. I think part of the problem is that we've been inconsistent in rules about genAI use. Some profs ban it altogether, while others attempt to carve out acceptable uses. The problem is the line between acceptable and unacceptable use. For example, some profs say students can use genAI for "idea generation" but then prohibit using it for writing text. Where's the line between those? In addition, universities are contracting with companies like Microsoft, Adobe, and Google for digital services, and those companies are constantly pushing their AI tools. So a student might hear "don't use generative AI" from a prof but then log on to the university's Microsoft suite, which then suggests using Copilot to sum up readings or help draft writing. It's inconsistent and confusing.

I've been working on ways to increase the amount of in-class discussion we do in classes. But that's tricky because it's hard to grade in-class discussions—it's much easier to manage digital files. Another option would be to do hand-written in-class essays, but I have a hard time asking that of students. I hardly write by hand anymore, so why would I demand they do so? 

I am sick to my stomach as I write this because I've spent 20 years developing a pedagogy that's about wrestling with big ideas through writing and discussion, and that whole project has been evaporated by for-profit corporations who built their systems on stolen work. It's demoralizing.

It has made my job much, much harder. I do not allow genAI in my classes. However, because genAI is so good at producing plausible-sounding text, that ban puts me in a really awkward spot. If I want to enforce my ban, I would have to do hours of detective work (since there are no reliable ways to detect genAI use), call students into my office to confront them, fill out paperwork, and attend many disciplinary hearings. All of that work is done to ferret out cheating students, so we have less time to spend helping honest ones who are there to learn and grow. And I would only be able to find a small percentage of the cases, anyway.

Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased.

Kaci Juge, high school English teacher

I personally haven't incorporated AI into my teaching yet. It has, however, added some stress to my workload as an English teacher. How do I remain ethical in creating policies? How do I begin to teach students how to use AI ethically? How do I even use it myself ethically considering the consequences of the energy it apparently takes? I understand that I absolutely have to come to terms with using it in order to remain sane in my profession at this point.

Ben Prytherch, Statistics professor

LLM use is rampant, but I don't think it's ubiquitous. While I can never know with certainty if someone used AI, it's pretty easy to tell when they didn't, unless they're devious enough to intentionally add in grammatical and spelling errors or awkward phrasings. There are plenty of students who don't use it, and plenty who do. 

LLMs have changed how I give assignments, but I haven't adapted as quickly as I'd like and I know some students are able to cheat. The most obvious change is that I've moved to in-class writing for assignments that are strictly writing-based. Now the essays are written in-class, and treated like mid-term exams. My quizzes are also in-class. This requires more grading work, but I'm glad I did it, and a bit embarrassed that it took ChatGPT to force me into what I now consider a positive change. Reasons I consider it positive:

  • I am much more motivated to write detailed personal feedback for students when I know with certainty that I'm responding to something they wrote themselves.
  • It turns out most of them can write after all. For all the talk about how kids can't write anymore, I don't see it. This is totally subjective on my part, of course. But I've been pleasantly surprised with the quality of what they write in-class. 

Switching to in-class writing has got me contemplating giving oral examinations, something I've never done. It would be a big step, but likely a positive and humanizing one. 

There's also the problem of academic integrity and fairness. I don't want students who don't use LLMs to be placed at a disadvantage. And I don't want to give good grades to students who are doing effectively nothing. LLM use is difficult to police. 

Lastly, I have no patience for the whole "AI is the future so you must incorporate it into your classroom" push, even when it's not coming from self-interested people in tech. No one knows what "the future" holds, and even if it were a good idea to teach students how to incorporate AI into this-or-that, by what measure are we teachers qualified? 

Kate Conroy 

I teach 12th grade English, AP Language & Composition, and Journalism in a public high school in West Philadelphia. I was appalled at the beginning of this school year to find out that I had to complete an online training that encouraged the use of AI for teachers and students. I know of teachers at my school who use AI to write their lesson plans and give feedback on student work. I also know many teachers who either cannot recognize when a student has used AI to write an essay or don’t care enough to argue with the kids who do it. Around this time last year I began editing all my essay rubrics to include a line that says all essays must show evidence of drafting and editing in the Google Doc’s history, and any essays that appear all at once in the history will not be graded. 

I refuse to use AI on principle except for one time last year when I wanted to test it, to see what it could and could not do so that I could structure my prompts to thwart it. I learned that at least as of this time last year, on questions of literary analysis, ChatGPT will make up quotes that sound like they go with the themes of the books, and it can’t get page numbers correct. Luckily I have taught the same books for many years in a row and can instantly identify an incorrect quote and an incorrect page number. There’s something a little bit satisfying about handing a student back their essay and saying, “I can’t find this quote in the book, can you find it for me?” Meanwhile I know perfectly well they cannot. 

I teach 18 year olds who range in reading levels from preschool to college, but the majority of them are in the lower half that range. I am devastated by what AI and social media have done to them. My kids don’t think anymore. They don’t have interests. Literally, when I ask them what they’re interested in, so many of them can’t name anything for me. Even my smartest kids insist that ChatGPT is good “when used correctly.” I ask them, “How does one use it correctly then?” They can’t answer the question. They don’t have original thoughts. They just parrot back what they’ve heard in TikToks. They try to show me “information” ChatGPT gave them. I ask them, “How do you know this is true?” They move their phone closer to me for emphasis, exclaiming, “Look, it says it right here!” They cannot understand what I am asking them. It breaks my heart for them and honestly it makes it hard to continue teaching. If I were to quit, it would be because of how technology has stunted kids and how hard it’s become to reach them because of that. 

I am only 30 years old. I have a long road ahead of me to retirement. But it is so hard to ask kids to learn, read, and write, when so many adults are no longer doing the work it takes to ensure they are really learning, reading, and writing. And I get it. That work has suddenly become so challenging. It’s really not fair to us. But if we’re not willing to do it, we shouldn’t be in the classroom. 

Jeffrey Fisher

The biggest thing for us is the teaching of writing itself, never mind even the content. And really the only way to be sure that your students are learning anything about writing is to have them write in class. But then what to do about longer-form writing, like research papers, for example, or even just analytical/exegetical papers that put multiple primary sources into conversation and read them together? I've started watching for the voices of my students in their in-class writing and trying to pay attention to gaps between that voice and the voice in their out-of-class writing, but when I've got 100 to 130 or 140 students (including a fully online asynchronous class), that's just not really reliable. And for the online asynch class, it's just impossible because there's no way of doing old-school, low-tech, in-class writing at all.

"I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit."

You may be familiar with David Graeber's article-turned-book on Bullshit Jobs. This is a recent paper looking specifically at bullshit jobs in academia. No surprise, the people who see their jobs as bullshit jobs are mostly administrators. The people who overwhelmingly do NOT see their jobs as bullshit jobs are faculty.

But that is what I see AI in general and LLMs in particular as changing. The situations I'm describing above are exactly the things that turn what is so meaningful to us as teachers into bullshit. The more we think that we are unable to actually teach them, the less meaningful our jobs are. 

I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit. I'm going through the motions of teaching. I'm putting a lot of time and emotional effort into it, as well as the intellectual effort, and it's getting flushed into the void. 

Post-grad educator

Last year, I taught a class as part of a doctoral program in responsible AI development and use. I don’t want to share too many specifics, but the course goal was for students to think critically about the adverse impacts of AI on people who are already marginalized and discriminated against.

When the final projects came in, my co-instructor and I were underwhelmed, to say the least. When I started digging into the projects, I realized that the students had used AI in some incredibly irresponsible ways—shallow, misleading, and inaccurate analysis of data, pointless and meaningless visualizations. The real kicker, though, was that we got two projects where the students had submitted a “podcast.” What they had done, apparently, was give their paper (which already had extremely flawed AI-based data analysis) to a gen AI tool and asked it to create an audio podcast. And the results were predictably awful. Full of random meaningless vocalizations at bizarre times, the “female” character was incredibly dumb and vapid (sounded like the “manic pixie dream girl” trope from those awful movies), and the “analysis” in the podcast exacerbated the problems that were already in the paper, so it was even more wrong than the paper itself. 

In short, there is nothing particularly surprising in how badly the AI worked here—but these students were in a *doctoral* program on *responsible AI*. In my career as a teacher, I’m hard pressed to think of more blatantly irresponsible work by students. 

Nathan Schmidt, University Lecturer, managing editor at Gamers With Glasses

When ChatGPT first entered the scene, I honestly did not think it was that big of a deal. I saw some plagiarism; it was easy to catch. Its voice was stilted and obtuse, and it avoided making any specific critical judgments as if it were speaking on behalf of some cult of ambiguity. Students didn't really understand what it did or how to use it, and when the occasional cheating would happen, it was usually just a sign that the student needed some extra help that they were too exhausted or embarrassed to ask for, so we'd have that conversation and move on.

I think it is the responsibility of academics to maintain an open mind about new technologies and to react to them in an evidence-based way, driven by intellectual curiosity. I was, indeed, curious about ChatGPT, and I played with it myself a few times, even using it on the projector in class to help students think about the limits and affordances of such a technology. I had a couple semesters where I thought, "Let's just do this above board." Borrowing an idea from one of my fellow instructors, I gave students instructions for how I wanted them to acknowledge the use of ChatGPT or other predictive text models in their work, and I also made it clear that I expected them to articulate both where they had used it and, more importantly, the reason why they found this to be a useful tool. I thought this might provoke some useful, critical conversation. I also took a self-directed course provided by my university that encouraged a similar curiosity, inviting instructors to view predictive text as a tool that had both problematic and beneficial uses.

"ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo"

However, this approach quickly became frustrating, for two reasons. First, because even with the acknowledgments pages, I started getting hybrid essays that sounded like they were half written by students and half written by robots, which made every grading comment a miniature Turing test. I didn't know when to praise students, because I didn't want to write feedback like, "I love how thoughtfully you've worded this," only to be putting my stamp of approval on predictively generated text. What if the majority of the things that I responded to positively were things that had actually been generated by ChatGPT? How would that make a student feel about their personal writing competencies? What lesson would that implicitly reinforce about how to use this tool? The other problem was that students were utterly unprepared to think about their usage of this tool in a critically engaged way. Despite my clear instructions and expectation-setting, most students used their acknowledgments pages to make the vaguest possible statements, like, "Used ChatGPT for ideas" or "ChatGPT fixed grammar" (comments like these also always conflated grammar with vocabulary and tone). I think there was a strong element of selection bias here, because the students who didn't feel like they needed to use ChatGPT were also the students who would have been most prepared to articulate their reasons for usage with the degree of specificity I was looking for. 

This brings us to last semester, when I said, "Okay, if you must use ChatGPT, you can use it for brainstorming and outlining, but if you turn something in that actually includes text that was generated predictively, I'm sending it back to you." This went a little bit better. For most students, the writing started to sound human again, but I suspect this is more because students are unlikely to outline their essays in the first place, not because they were putting the tool to the allowable use I had designated. 

ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo. It's a symptom of the world of TikTok and Instagram and perfecting your algorithm, in which some people are professionally deemed the "content creators," casting everyone else into the creatively bereft role of the content "consumer." And if that paradigm wins, as it certainly appears to be doing, pretty much everything that has been meaningful about human culture will be undone, in relatively short order. So that's the long story about how I adopted an absolute zero tolerance policy on any use of ChatGPT or any similar tool in my course, working my way down the funnel of progressive acceptance to outright conservative, Luddite rejection. 

John Dowd

I’m in higher edu, and LLMs have absolutely blown up what I try to accomplish with my teaching (I’m in the humanities and social sciences). 

Given the widespread use of LLMs by college students I now have an ongoing and seemingly unresolvable tension, which is how to evaluate student work. Often I can spot when students have used the technology between both having thousands of samples of student writing over time, and cross referencing my experience with one or more AI use detection tools. I know those detection tools are unreliable, but depending on the confidence level they return, it may help with the confirmation. This creates an atmosphere of mistrust that is destructive to the instructor/student relationship. 

"LLMs have absolutely blown up what I try to accomplish with my teaching"

I try to appeal to students and explain that by offloading the work of thinking to these technologies, they’re rapidly making themselves replaceable. Students (and I think even many faculty across academia) fancy themselves as “Big Idea” people. Everyone’s a “Big Idea” person now, or so they think. “They’re all my ideas,” people say, “I’m just using the technology to save time; organize them more quickly; bounce them back and forth”, etc. I think this is more plausible for people who have already put in the work and have the experience of articulating and understanding ideas. However, for people who are still learning to think or problem solve in more sophisticated/creative ways, they will be poor evaluators of information and less likely to produce relevant and credible versions of it. 

I don’t want to be overly dramatic, but AI has negatively complicated my work life so much. I’ve opted to attempt to understand it, but to not use it for my work. I’m too concerned about being seduced by its convenience and believability (despite knowing its propensity for making shit up). Students are using the technology in ways we’d expect, to complete work, take tests, seek information (scary), etc. Some of this use occurs in violation of course policy, while some is used with the consent of the instructor. Students are also, I’m sure, using it in ways I can’t even imagine at the moment. 

Sorry, bit of a rant, I’m just so preoccupied and vexed by the irresponsible manner in which the tech bros threw all of this at us with no concern, consent, or collaboration. 

High school Spanish teacher, Oklahoma

I am a high school Spanish teacher in Oklahoma and kids here have shocked me with the ways they try to use AI for assignments I give them. In several cases I have caught them because they can’t read what they submit to me and so don’t know to delete the sentence that says something to the effect of “This summary meets the requirements of the prompt, I hope it is helpful to you!” 

"Even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning"

Some of my students openly talk about using AI for all their assignments and I agree with those who say the technology—along with gaps in their education due to the long term effects of COVID—has gotten us to a point where a lot of young GenZ and Gen Alpha are functionally illiterate. I have been shocked at their lack of vocabulary and reading comprehension skills even in English. Teaching cognates, even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning. Trying to determine if and how a student used AI to cheat has wasted countless hours of my time this year, even in my class where there are relatively few opportunities to use it because I do so much on paper (and they hate me for it!). 

A lot of teachers have had to throw out entire assessment methods to try to create assignments that are not cheatable, which at least for me, always involves huge amounts of labor. 

It keeps me up at night and gives me existential dread about my profession but it’s so critical to address!!! 
