
Meta Invents New Way to Humiliate Users With Feed of People's Chats With AI

June 17, 2025 at 10:47

I was sick last week, so I did not have time to write about the Discover Tab in Meta’s AI app, which, as Katie Notopoulos of Business Insider has pointed out, is the “saddest place on the internet.” Many very good articles have already been written about it, and yet, I cannot allow its existence to go unremarked upon in the pages of 404 Media. 

If you somehow missed this while millions of people were protesting in the streets, state politicians were being assassinated, war was breaking out between Israel and Iran, the military was deployed to the streets of Los Angeles, and a Coinbase-sponsored military parade rolled past dozens of passersby in Washington, D.C., here is what the “Discover” tab is: The Meta AI app, which is the company’s competitor to the ChatGPT app, is posting users’ conversations on a public “Discover” page where anyone can see the things that users are asking Meta’s chatbot to make for them. 


This includes various innocuous image and video generations that have become completely inescapable on all of Meta’s platforms (things like “egg with one eye made of black and gold,” “adorable Maltese dog becomes a heroic lifeguard,” “one second for God to step into your mind”), but it also includes entire chatbot conversations where users are seemingly unknowingly leaking a mix of embarrassing, personal, and sensitive details about their lives onto a public platform owned by Mark Zuckerberg. In almost all cases, I was able to trivially tie these chats to actual, real people because the app uses your Instagram or Facebook account as your login.


Emails Reveal the Casual Surveillance Alliance Between ICE and Local Police

June 16, 2025 at 11:10
This article was primarily reported using public records requests. We are making it available to all readers as a public service. FOIA reporting can be expensive, please consider subscribing to 404 Media to support this work. Or send us a one time donation via our tip jar here.

Local police in Oregon casually offered various surveillance services to federal law enforcement officials from the FBI and ICE, and to other state and local police departments, as part of an informal email and meetup group of crime analysts, internal emails shared with 404 Media show. 

In the email thread, crime analysts from several local police departments and the FBI introduced themselves to each other and made lists of surveillance tools and tactics they have access to and felt comfortable using, and in some cases offered to perform surveillance for their colleagues in other departments. The thread also includes a member of ICE’s Homeland Security Investigations (HSI) and members of Oregon’s State Police. In the thread, called the “Southern Oregon Analyst Group,” some members talked about making fake social media profiles to surveil people, and others discussed being excited to learn and try new surveillance techniques. The emails show both the wide array of surveillance tools available to even small police departments in the United States and the informal collaboration between local police departments and federal agencies, when ordinarily agencies like ICE are expected to follow their own legal processes for carrying out such surveillance. 

In one case, a police analyst for the city of Medford, Oregon, performed Flock automated license plate reader (ALPR) lookups for a member of ICE’s HSI; later, that same police analyst asked the HSI agent to search for specific license plates in DHS’s own border crossing license plate database. The emails show the extremely casual and informal nature of partnerships between police departments and federal law enforcement, which may help explain how local police around the country are performing Flock ALPR lookups for ICE and HSI even though neither agency has a contract to use the technology, as 404 Media reported last month.

An email showing HSI asking for a license plate lookup from police in Medford, Oregon

Kelly Simon, the legal director for the American Civil Liberties Union of Oregon, told 404 Media “I think it’s a really concerning thread to see, in such a black-and-white way. I have certainly never seen such informal, free-flowing of information that seems to be suggested in these emails.”

In that case, in 2021, a crime analyst with HSI emailed an analyst at the Medford Police Department with the subject line “LPR Check.” The email from the HSI analyst, who is also based in Medford, said they were told to “contact you and request a LPR check on (2) vehicles,” and then listed the license plates of two vehicles. “Here you go,” the Medford Police Department analyst responded with details of the license plate reader lookup. “I only went back to 1/1/19, let me know if you want me to check further back.” In 2024, the Medford police analyst emailed the same HSI agent and told him that she was assisting another police department with a suspected sex crime and asked him to “run plates through the border crossing system,” meaning the federal ALPR system at the Canada-US border. “Yes, I can do that. Let me know what you need and I’ll take a look,” the HSI agent said. 

More broadly, the emails, obtained using a public records request by Information for Public Use, an anonymous group of researchers in Oregon who have repeatedly uncovered documents about government surveillance, reveal the existence of the “Southern Oregon Analyst Group.” The emails span between 2021 and 2024 and show local police eagerly offering various surveillance services to each other as part of their own professional development. 

In a 2023 email thread where different police analysts introduced themselves, they explained to each other what types of surveillance software they had access to, which ones they use the most often, and at times expressed an eagerness to try new techniques. 


“This is my first role in Law Enforcement, and I've been with the Josephine County Sheriff's Office for 6 months, so I'm new to the game,” an email from a former Pinkerton security contractor to officials at 10 different police departments, the FBI, and ICE, reads. “Some tools I use are Flock, TLO, Leads online, WSIN, Carfax for police, VIN Decoding, LEDS, and sock puppet social media accounts. In my role I build pre-raid intelligence packages, find information on suspects and vehicles, and build link charts showing connections within crime syndicates. My role with [Josephine Marijuana Enforcement Team] is very intelligence and research heavy, but I will do the occasional product with stats. I would love to be able to meet everyone at a Southern Oregon analyst meet-up in the near future. If there is anything I can ever provide anyone from Josephine County, please do not hesitate to reach out!” The surveillance tools listed here include automatic license plate reading technology, social media monitoring tools, people search databases, and car ownership history tools. 

An investigations specialist with the Ashland Police Department messaged the group, said she was relatively new to performing online investigations, and said she was seeking additional experience. “I love being in a support role but worry patrol doesn't have confidence in me. I feel confident with searching through our local cad portal, RMS, Evidence.com, LeadsOnline, carfax and TLO. Even though we don't have cameras in our city, I love any opportunity to search for something through Flock,” she said. “I have much to learn with sneaking around in social media, and collecting accurate reports from what is inputted by our department.”


A crime analyst with the Medford Police Department introduced themselves to the group by saying “The Medford Police Department utilizes the license plate reader systems, Vigilant and Flock. In the next couple months, we will be starting our transition to the Axon Fleet 3 cameras. These cameras will have LPR as well. If you need any LPR searches done, please reach out to me or one of the other analysts here at MPD. Some other tools/programs that we have here at MPD are: ESRI, Penlink PLX, CellHawk, TLO, LeadsOnline, CyberCheck, Vector Scheduling/CrewSense & Guardian Tracking, Milestone XProtect city cameras, AXON fleet and body cams, Lexipol, HeadSpace, and our RMS is Central Square (in case your agency is looking into purchasing any of these or want more information on them).”

A fourth analyst said “my agency uses Tulip, GeoShield, Flock LPR, LeadsOnline, TLO, Axon fleet and body cams, Lexipol, LEEP, ODMap, DMV2U, RISS/WSIN, Crystal Reports, SSRS Report Builder, Central Square Enterprise RMS, Laserfiche for fillable forms and archiving, and occasionally Hawk Toolbox.” Several of these tools are enterprise software solutions for police departments, which include things like police report management software, report creation software, and stress management and wellbeing software, but many of them are surveillance tools.  

At one point in the 2023 thread, an intelligence analyst in the FBI’s Portland office chimed in, introduced himself, and said “I think I've been in contact with most folks on this email at some point in the past […] I look forward to further collaboration with you all.”

Members of the thread also planned in-person meetups and a “mini-conference” last year that featured a demo from CrimeiX, a company that makes a police information-sharing tool.

A member of Information for Public Use told 404 Media “it’s concerning to me to see them building a network of mass surveillance.”

“Automated license plate recognition software technology is something that in and of itself, communities are really concerned about,” the member of Information for Public Use said. “So I think when we combine this very obvious mass surveillance technology with a network of interagency crime analysts that includes local police who are using sock puppet accounts to spy on anyone and their mother and then that information is being pretty freely shared with federal agents, you know, including Homeland Security Investigations, and we see the FBI in the emails as well. It's pretty disturbing.” They added, as we have reported before, that many of these technologies were deployed under previous administrations but have become even more alarming when combined with the fact that the Trump administration has changed the priorities of ICE and Homeland Security Investigations. 

“The whims of the federal administration change, and this technology can be pointed in any direction,” they said. “Local law enforcement might be justifying this under the auspices of we're fighting some form of organized crime, but one of the crimes HSI investigates is work site enforcement investigations, which sound exactly like the kind of raids on workplaces that like the country is so upset about right now.”

Simon, of ACLU Oregon, said that such informal collaboration is not supposed to be happening in Oregon.

“We have, in Oregon, a lot of really strong protections that ensure that our state resources, including at the local level, are not going to support things that Oregonians disagree with or have different values around,” she said. “Oregon has really strong firewalls between local resources, and federal resources or other state resources when it comes to things like reproductive justice or immigrant justice. We have really strong shield laws, we have really strong sanctuary laws, and when I see exchanges like this, I’m very concerned that our firewalls are more like sieves because of this kind of behind-the-scenes, lax approach to protecting the data and privacy of Oregonians.”

Simon said that collaboration between federal and local cops on surveillance should happen “with the oversight of the court. Getting a warrant to request data from a local agency seems appropriate to me, and it ensures there’s probable cause, that the person whose information is being sought is sufficiently suspected of a crime, and that there are limits to the scope, about of information that's being sought and specifics about what information is being sought. That's the whole purpose of a warrant.”

Over the last several weeks, our reporting has led multiple municipalities to reconsider how the license plate reading technology Flock is used, and it has spurred an investigation by the Illinois Secretary of State's office into the legality of using Flock cameras in the state for immigration-related searches, because Illinois specifically forbids local police from assisting federal police on immigration matters.

404 Media contacted all of the police departments on the Southern Oregon Analyst Group for comment and to ask them about any guardrails they have for the sharing of surveillance tools across departments or with the federal government. Geoffrey Kirkpatrick, a lieutenant with the Medford Police Department, said the group is “for professional networking and sharing professional expertise with each other as they serve their respective agencies.” 

“The Medford Police Department’s stance on resource-sharing with ICE is consistent with both state law and federal law,” Kirkpatrick said. “The emails retrieved for that 2025 public records request showed one single instance of running LPR information for a Department of Homeland Security analyst in November 2021. Retrieving those files from that single 2021 matter to determine whether it was an DHS case unrelated to immigration, whether a criminal warrant existed, etc would take more time than your publication deadline would allow, and the specifics of that one case may not be appropriate for public disclosure regardless.” (404 Media reached out to Medford Police Department a week before this article was published). 

A spokesperson for the Central Point Police Department said it “utilizes technology as part of investigations, we follow all federal, state, and local law regarding use of such technology and sharing of any such information. Typically we do not use our tools on behalf of other agencies.”

A spokesperson for Oregon’s Department of Justice said it did not have comment and does not participate in the group. The other police departments in the group did not respond to our request for comment.


John Deere Must Face FTC Lawsuit Over Its Tractor Repair Monopoly, Judge Rules

June 11, 2025 at 11:03

A judge ruled that John Deere must face a lawsuit from the Federal Trade Commission and five states over its tractor and agricultural equipment repair monopoly, and rejected the company’s argument that the case should be thrown out. This means Deere is now facing both a class action lawsuit and a federal antitrust lawsuit over its repair practices.

The FTC’s lawsuit against Deere was filed under former FTC chair Lina Khan in the final days of Joe Biden’s presidency, but the Trump administration’s FTC has decided to continue pursuing the lawsuit, indicating that right to repair remains a bipartisan issue in a politically divided nation in which so few issues are agreed on across the aisle. Deere argued that neither the federal government nor the state governments joining the case had standing to sue it, and that claims of its monopolization of the repair market and unfair practices were not sufficient; U.S. District Judge Iain D. Johnston of the Northern District of Illinois did not agree, and said the lawsuit can and should move forward. 

Johnston is also the judge in the class action lawsuit against Deere, which he also ruled must proceed. In his pretty sassy ruling, Johnston said that Deere repeated many of its same arguments that also were not persuasive in the class action suit.

“Sequels so rarely beat their originals that even the acclaimed Steve Martin couldn’t do it on three tries. See Cheaper by the Dozen II, Pink Panther II, Father of the Bride II,” Johnston wrote. “Rebooting its earlier production, Deere sought to defy the odds. To be sure, like nearly all sequels, Deere edited the dialogue and cast some new characters, giving cameos to veteran stars like Humphrey’s Executor [a court decision]. But ultimately the plot felt predictable, the script derivative. Deere I received a thumbs-down, and Deere II fares no better. The Court denies the Motion for judgment on the pleadings.”

Johnston highlighted, as we have repeatedly shown with our reporting, that in order to repair a newer John Deere tractor, farmers need access to a piece of software called Service Advisor, which is used by John Deere dealerships. Parts are also difficult to come by. 

“Even if some farmers knew about the restrictions (a fact question), they might not be aware of or appreciate at the purchase time how those restrictions will affect them,” Johnston wrote. “For example: How often will repairs require Deere’s ADVISOR tool? How far will they need to travel to find an Authorized Dealer? How much extra will they need to pay for Deere parts?”

You can read more about the FTC’s lawsuit against Deere here and more about the class action lawsuit in our earlier coverage here.


GitHub is Leaking Trump’s Plans to 'Accelerate' AI Across Government

June 10, 2025 at 09:47

The federal government is working on a website and API called “ai.gov” to “accelerate government innovation with AI” that is supposed to launch on July 4 and will include an analytics feature that shows how much a specific government team is using AI, according to an early version of the website and code posted by the General Services Administration on Github. 

The page is being created by the GSA’s Technology Transformation Services, which is being run by former Tesla engineer Thomas Shedd. Shedd previously told employees that he hopes to AI-ify much of the government. AI.gov appears to be an early step toward pushing AI tools into agencies across the government, code published on Github shows. 

“Accelerate government innovation with AI,” an early version of the website, which is linked to from the GSA TTS Github, reads. “Three powerful AI tools. One integrated platform.” The early version of the page suggests that its API will integrate with OpenAI, Google, and Anthropic products. But code for the API shows they are also working on integrating with Amazon Web Services’ Bedrock and Meta’s LLaMA. The page suggests it will also have an AI-powered chatbot, though it doesn’t explain what it will do. 

The Github says “launch date - July 4.” Currently, AI.gov redirects to whitehouse.gov. The demo website is linked to from Github (archive here) and is hosted on cloud.gov on what appears to be a staging environment. The text on the page does not show up on other websites, suggesting that it is not generic placeholder text.

Elon Musk’s Department of Government Efficiency made integrating AI into normal government functions one of its priorities. At GSA’s TTS, Shedd has pushed his team to create AI tools that the rest of the government will be required to use. In February, 404 Media obtained leaked audio from a meeting in which Shedd told his team they would be creating “AI coding agents” that would write software across the entire government, and said he wanted to use AI to analyze government contracts. 

“We want to start implementing more AI at the agency level and be an example for how other agencies can start leveraging AI … that’s one example of something that we’re looking for people to work on,” Shedd said. “Things like making AI coding agents available for all agencies. One that we've been looking at and trying to work on immediately within GSA, but also more broadly, is a centralized place to put contracts so we can run analysis on those contracts.”

Government employees we spoke to at the time said the internal reaction to Shedd’s plan was “pretty unanimously negative,” and pointed out numerous ways this could go wrong, from AI unintentionally introducing security issues or bugs into code to suggesting that critical contracts be killed. 

The GSA did not immediately respond to a request for comment.


Waymo Pauses Service in Downtown LA Neighborhood Where They're Getting Lit on Fire

June 9, 2025 at 10:53

Waymo told 404 Media that it is still operating in Los Angeles after several of its driverless cars were lit on fire during anti-ICE protests over the weekend, but that it has temporarily disabled the cars’ ability to drive into downtown Los Angeles, where the protests are happening. 

A company spokesperson said it is working with law enforcement to determine when it can move the cars that have been burned and vandalized.

Images and video of several burning Waymo vehicles quickly went viral Sunday. 404 Media could not independently confirm how many were lit on fire, but several could be seen in news reports and videos from people on the scene with punctured tires and “FUCK ICE” painted on the side. 

Video posted to Bluesky by Alejandra Caraballo (@esqueer.net) on June 9, 2025, showing a Waymo car completely engulfed in flames.

The fact that Waymos need to use video cameras that are constantly recording their surroundings in order to function means that police have begun to look at them as sources of surveillance footage. In April, we reported that the Los Angeles Police Department had obtained footage from a Waymo while investigating another driver who hit a pedestrian and fled the scene. 

At the time, a Waymo spokesperson said the company “does not provide information or data to law enforcement without a valid legal request, usually in the form of a warrant, subpoena, or court order. These requests are often the result of eyewitnesses or other video footage that identifies a Waymo vehicle at the scene. We carefully review each request to make sure it satisfies applicable laws and is legally valid. We also analyze the requested data or information, to ensure it is tailored to the specific subject of the warrant. We will narrow the data provided if a request is overbroad, and in some cases, object to producing any information at all.”

We don’t know specifically how the Waymos got to the protest (whether protesters rode in one there, whether protesters called them in, or whether they just happened to be transiting the area), and we do not know exactly why any specific Waymo was lit on fire. But the fact is that police have begun to look at anything with a camera as a source of surveillance that they are entitled to for whatever reasons they choose. So even though driverless cars nominally have nothing to do with law enforcement, police are treating them as though they are their own roving surveillance cameras.


TSA Working on Haptic Tech To 'Feel' Your Body in Virtual Reality

June 5, 2025 at 09:38

The Department of Homeland Security (DHS) and Transportation Security Administration (TSA) are researching an incredibly wild virtual reality technology that would allow TSA agents to use VR goggles and haptic feedback gloves to allow them to pat down and feel airline passengers at security checkpoints without actually touching them. The agency calls this a “touchless sensor that allows a user to feel an object without touching it.” 

Information sheets released by DHS and patent applications describe a series of sensors that would map a person or object’s “contours” in real time in order to digitally replicate it within the agent’s virtual reality system. This system would include a “haptic feedback pad” which would be worn on an agent’s hand. This would then allow the agent to inspect a person’s body without physically touching them in order to ‘feel’ weapons or other dangerous objects. A DHS information sheet released last week describes it like this: 

“The proposed device is a wearable accessory that features touchless sensors, cameras, and a haptic feedback pad. The touchless sensor system could be enabled through millimeter wave scanning, light detection and ranging (LiDAR), or backscatter X-ray technology. A user fits the device over their hand. When the touchless sensors in the device are within range of the targeted object, the sensors in the pad detect the target object’s contours to produce sensor data. The contour detection data runs through a mapping algorithm to produce a contour map. The contour map is then relayed to the back surface that contacts the user’s hand through haptic feedback to physically simulate a sensation of the virtually detected contours in real time.”

The system “would allow the user to ‘feel’ the contour of the person or object without actually touching the person or object,” a patent for the device reads. “Generating the mapping information and physically relaying it to the user can be performed in real time.” The information sheet says it could be used for security screenings but also proposes it for "medical examinations."

A screenshot from the patent application that shows a diagram of virtual hands roaming over a person's body

The apparent reason for researching this tool is that a TSA agent would get the experience and sensation of touching a person without actually touching them, which the DHS researchers seem to believe is less invasive. The DHS information sheet notes that a “key benefit” of this system is that it “preserves privacy during body scanning and pat-down screening” and “provides realistic virtual reality immersion,” and notes that it is “conceptual.” But DHS has been working on this for years, according to patent filings by DHS researchers that date back to 2022.

Whether it is actually less invasive to have a TSA agent in VR goggles and haptic gloves feel you up, either while standing near you or while sitting in another room, is something that is going to vary from person to person. TSA patdowns are notoriously invasive, as many have pointed out through the years. One privacy expert who showed me the documents, but was not authorized by their employer to speak to the press about this, said: “I guess the idea is that the person being searched doesn't feel a thing, but the TSA officer can get all up in there? The officer can feel it ... and perhaps that’s even more invasive (or inappropriate)? All while also collecting a 3D rendering of your body.” (The documents say the system limits the display of sensitive parts of a person’s body, which I explain more below.)

A screenshot from the patent application that explains how a "Haptic Feedback Algorithm" would map a person's body

There are some pretty wacky graphics in the patent filings, some of which show how it would be used to sort-of-virtually pat down someone’s chest and groin (or “belt-buckle”/“private body zone,” according to the patent). One of the patents notes that “embodiments improve the passenger’s experience, because they reduce or eliminate physical contacts with the passenger.” It also claims that only the goggles user will be able to see the image being produced and that only limited parts of a person’s body will be shown “in sensitive areas of the body, instead of the whole body image, to further maintain the passenger’s privacy.” It says that the system as designed “creates a unique biometric token that corresponds to the passenger.” 

A separate patent for the haptic feedback system part of this shows diagrams of what the haptic glove system might look like and notes all sorts of potential sensors that could be used, from cameras and LiDAR to one that “involves turning ultrasound into virtual touch.” It adds that the haptic feedback sensor can “detect the contour of a target (a person and/or an object) at a distance, optionally penetrating through clothing, to produce sensor data.”

Diagram of smiling man wearing a haptic feedback glove
A drawing of the haptic feedback glove

DHS has been obsessed with augmented reality, virtual reality, and AI for quite some time. Researchers at San Diego State University, for example, proposed an AR system that would help DHS “see” terrorists at the border using HoloLens headsets in some vague, nonspecific way. Customs and Border Protection has proposed “testing an augmented reality headset with glassware that allows the wearer to view and examine a projected 3D image of an object” to try to identify counterfeit products.

DHS acknowledged a request for comment but did not provide one in time for publication. 

The IRS Tax Filing Software TurboTax Is Trying to Kill Just Got Open Sourced

4 June 2025, 10:04

The IRS open sourced much of its incredibly popular Direct File software as the future of the free tax filing program is at risk of being killed by Intuit’s lobbyists and Donald Trump’s megabill. Meanwhile, several top developers who worked on the software have left the government and joined a project to explore the “future of tax filing” in the private sector. 

Direct File is a piece of software created by developers at the US Digital Service and 18F, the former of which became DOGE and is now unrecognizable, and the latter of which was killed by DOGE. Direct File has been called a “free, easy, and trustworthy” piece of software that made tax filing “more efficient.” About 300,000 people used it last year as part of a limited pilot program, and those who did gave it incredibly positive reviews, according to reporting by Federal News Network.

But because it is free and because it is an example of government working, Direct File and the IRS’s Free File program more broadly have been the subject of years of lobbying efforts by financial technology giants like Intuit, which makes TurboTax. DOGE sought to kill Direct File, and currently, there is language in Trump’s massive budget reconciliation bill that would kill Direct File. Experts say that “ending [the] Direct File program is a gift to the tax-prep industry that will cost taxpayers time and money.”

That means it’s quite big news that the IRS released most of the code that runs Direct File on GitHub last week. And, separately, three people who worked on it—Chris Given, Jen Thomas, and Merici Vinton—have left government to join the Economic Security Project’s Future of Tax Filing Fellowship, where they will research ways to make filing taxes easier, cheaper, and more straightforward. They will be joined by Gabriel Zucker, who worked on Direct File as part of Code for America.

Teachers Are Not OK

2 June 2025, 10:08

Last month, I wrote an article about how schools were not prepared for ChatGPT and other generative AI tools, based on thousands of pages of public records I obtained from when ChatGPT was first released. As part of that article, I asked teachers to tell me how AI has changed how they teach.

The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses. 

One thing is clear: teachers are not OK. 

They describe trying to grade “hybrid essays half written by students and half written by robots,” trying to teach Spanish to kids who don’t know the meaning of the words they’re trying to teach them in English, and students who use AI in the middle of conversation. They describe spending hours grading papers that took their students seconds to generate: “I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student,” one teacher told me. “That sure feels like bullshit.”

💡
Have you lost your job to an AI? Has AI radically changed how you work (whether you're a teacher or not)? I would love to hear from you. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at jason@404media.co.

Below, I have compiled some of the responses I got. Some of the teachers were comfortable with their responses being used on the record along with their names. Others asked that I keep them anonymous because their school or school district forbids them from speaking to the press. The responses have been edited by 404 Media for length and clarity, but they are still really long. These are teachers, after all. 

Robert W. Gehl, Ontario Research Chair of Digital Governance for Social Justice at York University in Toronto

Simply put, AI tools are ubiquitous. I am on academic honesty committees and the number of cases where students have admitted to using these tools to cheat on their work has exploded.

I think generative AI is incredibly destructive to our teaching of university students. We ask them to read, reflect upon, write about, and discuss ideas. That's all in service of our goal to help train them to be critical citizens. GenAI can simulate all of the steps: it can summarize readings, pull out key concepts, draft text, and even generate ideas for discussion. But that would be like going to the gym and asking a robot to lift weights for you. 

"Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased."

We need to rethink higher ed, grading, the whole thing. I think part of the problem is that we've been inconsistent in rules about genAI use. Some profs ban it altogether, while others attempt to carve out acceptable uses. The problem is the line between acceptable and unacceptable use. For example, some profs say students can use genAI for "idea generation" but then prohibit using it for writing text. Where's the line between those? In addition, universities are contracting with companies like Microsoft, Adobe, and Google for digital services, and those companies are constantly pushing their AI tools. So a student might hear "don't use generative AI" from a prof but then log on to the university's Microsoft suite, which then suggests using Copilot to sum up readings or help draft writing. It's inconsistent and confusing.

I've been working on ways to increase the amount of in-class discussion we do in classes. But that's tricky because it's hard to grade in-class discussions—it's much easier to manage digital files. Another option would be to do hand-written in-class essays, but I have a hard time asking that of students. I hardly write by hand anymore, so why would I demand they do so? 

I am sick to my stomach as I write this because I've spent 20 years developing a pedagogy that's about wrestling with big ideas through writing and discussion, and that whole project has been evaporated by for-profit corporations who built their systems on stolen work. It's demoralizing.

It has made my job much, much harder. I do not allow genAI in my classes. However, because genAI is so good at producing plausible-sounding text, that ban puts me in a really awkward spot. If I want to enforce my ban, I would have to do hours of detective work (since there are no reliable ways to detect genAI use), call students into my office to confront them, fill out paperwork, and attend many disciplinary hearings. All of that work is done to ferret out cheating students, so we have less time to spend helping honest ones who are there to learn and grow. And I would only be able to find a small percentage of the cases, anyway.

Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased.

Kaci Juge, high school English teacher

I personally haven't incorporated AI into my teaching yet. It has, however, added some stress to my workload as an English teacher. How do I remain ethical in creating policies? How do I begin to teach students how to use AI ethically? How do I even use it myself ethically considering the consequences of the energy it apparently takes? I understand that I absolutely have to come to terms with using it in order to remain sane in my profession at this point.

Ben Prytherch, Statistics professor

LLM use is rampant, but I don't think it's ubiquitous. While I can never know with certainty if someone used AI, it's pretty easy to tell when they didn't, unless they're devious enough to intentionally add in grammatical and spelling errors or awkward phrasings. There are plenty of students who don't use it, and plenty who do. 

LLMs have changed how I give assignments, but I haven't adapted as quickly as I'd like and I know some students are able to cheat. The most obvious change is that I've moved to in-class writing for assignments that are strictly writing-based. Now the essays are written in-class, and treated like mid-term exams. My quizzes are also in-class. This requires more grading work, but I'm glad I did it, and a bit embarrassed that it took ChatGPT to force me into what I now consider a positive change. Reasons I consider it positive:

  • I am much more motivated to write detailed personal feedback for students when I know with certainty that I'm responding to something they wrote themselves.
  • It turns out most of them can write after all. For all the talk about how kids can't write anymore, I don't see it. This is totally subjective on my part, of course. But I've been pleasantly surprised with the quality of what they write in-class. 

Switching to in-class writing has got me contemplating giving oral examinations, something I've never done. It would be a big step, but likely a positive and humanizing one. 

There's also the problem of academic integrity and fairness. I don't want students who don't use LLMs to be placed at a disadvantage. And I don't want to give good grades to students who are doing effectively nothing. LLM use is difficult to police. 

Lastly, I have no patience for the whole "AI is the future so you must incorporate it into your classroom" push, even when it's not coming from self-interested people in tech. No one knows what "the future" holds, and even if it were a good idea to teach students how to incorporate AI into this-or-that, by what measure are we teachers qualified? 

Kate Conroy 

I teach 12th grade English, AP Language & Composition, and Journalism in a public high school in West Philadelphia. I was appalled at the beginning of this school year to find out that I had to complete an online training that encouraged the use of AI for teachers and students. I know of teachers at my school who use AI to write their lesson plans and give feedback on student work. I also know many teachers who either cannot recognize when a student has used AI to write an essay or don’t care enough to argue with the kids who do it. Around this time last year I began editing all my essay rubrics to include a line that says all essays must show evidence of drafting and editing in the Google Doc’s history, and any essays that appear all at once in the history will not be graded. 

I refuse to use AI on principle except for one time last year when I wanted to test it, to see what it could and could not do so that I could structure my prompts to thwart it. I learned that at least as of this time last year, on questions of literary analysis, ChatGPT will make up quotes that sound like they go with the themes of the books, and it can’t get page numbers correct. Luckily I have taught the same books for many years in a row and can instantly identify an incorrect quote and an incorrect page number. There’s something a little bit satisfying about handing a student back their essay and saying, “I can’t find this quote in the book, can you find it for me?” Meanwhile I know perfectly well they cannot. 

I teach 18 year olds who range in reading levels from preschool to college, but the majority of them are in the lower half of that range. I am devastated by what AI and social media have done to them. My kids don’t think anymore. They don’t have interests. Literally, when I ask them what they’re interested in, so many of them can’t name anything for me. Even my smartest kids insist that ChatGPT is good “when used correctly.” I ask them, “How does one use it correctly then?” They can’t answer the question. They don’t have original thoughts. They just parrot back what they’ve heard in TikToks. They try to show me “information” ChatGPT gave them. I ask them, “How do you know this is true?” They move their phone closer to me for emphasis, exclaiming, “Look, it says it right here!” They cannot understand what I am asking them. It breaks my heart for them and honestly it makes it hard to continue teaching. If I were to quit, it would be because of how technology has stunted kids and how hard it’s become to reach them because of that. 

I am only 30 years old. I have a long road ahead of me to retirement. But it is so hard to ask kids to learn, read, and write, when so many adults are no longer doing the work it takes to ensure they are really learning, reading, and writing. And I get it. That work has suddenly become so challenging. It’s really not fair to us. But if we’re not willing to do it, we shouldn’t be in the classroom. 

Jeffrey Fisher

The biggest thing for us is the teaching of writing itself, never mind even the content. And really the only way to be sure that your students are learning anything about writing is to have them write in class. But then what to do about longer-form writing, like research papers, for example, or even just analytical/exegetical papers that put multiple primary sources into conversation and read them together? I've started watching for the voices of my students in their in-class writing and trying to pay attention to gaps between that voice and the voice in their out-of-class writing, but when I've got 100 to 130 or 140 students (including a fully online asynchronous class), that's just not really reliable. And for the online asynch class, it's just impossible because there's no way of doing old-school, low-tech, in-class writing at all.

"I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit."

You may be familiar with David Graeber's article-turned-book on Bullshit Jobs. This is a recent paper looking specifically at bullshit jobs in academia. No surprise, the people who see their jobs as bullshit jobs are mostly administrators. The people who overwhelmingly do NOT see their jobs as bullshit jobs are faculty.

But that is what I see AI in general and LLMs in particular as changing. The situations I'm describing above are exactly the things that turn what is so meaningful to us as teachers into bullshit. The more we think that we are unable to actually teach them, the less meaningful our jobs are. 

I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit. I'm going through the motions of teaching. I'm putting a lot of time and emotional effort into it, as well as the intellectual effort, and it's getting flushed into the void. 

Post-grad educator

Last year, I taught a class as part of a doctoral program in responsible AI development and use. I don’t want to share too many specifics, but the course goal was for students to think critically about the adverse impacts of AI on people who are already marginalized and discriminated against.

When the final projects came in, my co-instructor and I were underwhelmed, to say the least. When I started digging into the projects, I realized that the students had used AI in some incredibly irresponsible ways—shallow, misleading, and inaccurate analysis of data, pointless and meaningless visualizations. The real kicker, though, was that we got two projects where the students had submitted a “podcast.” What they had done, apparently, was give their paper (which already had extremely flawed AI-based data analysis) to a gen AI tool and asked it to create an audio podcast. And the results were predictably awful. Full of random meaningless vocalizations at bizarre times, the “female” character was incredibly dumb and vapid (sounded like the “manic pixie dream girl” trope from those awful movies), and the “analysis” in the podcast exacerbated the problems that were already in the paper, so it was even more wrong than the paper itself. 

In short, there is nothing particularly surprising in how badly the AI worked here—but these students were in a *doctoral* program on *responsible AI*. In my career as a teacher, I’m hard pressed to think of more blatantly irresponsible work by students. 

Nathan Schmidt, University Lecturer, managing editor at Gamers With Glasses

When ChatGPT first entered the scene, I honestly did not think it was that big of a deal. I saw some plagiarism; it was easy to catch. Its voice was stilted and obtuse, and it avoided making any specific critical judgments as if it were speaking on behalf of some cult of ambiguity. Students didn't really understand what it did or how to use it, and when the occasional cheating would happen, it was usually just a sign that the student needed some extra help that they were too exhausted or embarrassed to ask for, so we'd have that conversation and move on.

I think it is the responsibility of academics to maintain an open mind about new technologies and to react to them in an evidence-based way, driven by intellectual curiosity. I was, indeed, curious about ChatGPT, and I played with it myself a few times, even using it on the projector in class to help students think about the limits and affordances of such a technology. I had a couple semesters where I thought, "Let's just do this above board." Borrowing an idea from one of my fellow instructors, I gave students instructions for how I wanted them to acknowledge the use of ChatGPT or other predictive text models in their work, and I also made it clear that I expected them to articulate both where they had used it and, more importantly, the reason why they found this to be a useful tool. I thought this might provoke some useful, critical conversation. I also took a self-directed course provided by my university that encouraged a similar curiosity, inviting instructors to view predictive text as a tool that had both problematic and beneficial uses.

"ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo"

However, this approach quickly became frustrating, for two reasons. First, because even with the acknowledgments pages, I started getting hybrid essays that sounded like they were half written by students and half written by robots, which made every grading comment a miniature Turing test. I didn't know when to praise students, because I didn't want to write feedback like, "I love how thoughtfully you've worded this," only to be putting my stamp of approval on predictively generated text. What if the majority of the things that I responded to positively were things that had actually been generated by ChatGPT? How would that make a student feel about their personal writing competencies? What lesson would that implicitly reinforce about how to use this tool? The other problem was that students were utterly unprepared to think about their usage of this tool in a critically engaged way. Despite my clear instructions and expectation-setting, most students used their acknowledgments pages to make the vaguest possible statements, like, "Used ChatGPT for ideas" or "ChatGPT fixed grammar" (comments like these also always conflated grammar with vocabulary and tone). I think there was a strong element of selection bias here, because the students who didn't feel like they needed to use ChatGPT were also the students who would have been most prepared to articulate their reasons for usage with the degree of specificity I was looking for. 

This brings us to last semester, when I said, "Okay, if you must use ChatGPT, you can use it for brainstorming and outlining, but if you turn something in that actually includes text that was generated predictively, I'm sending it back to you." This went a little bit better. For most students, the writing started to sound human again, but I suspect this is more because students are unlikely to outline their essays in the first place, not because they were putting the tool to the allowable use I had designated. 

ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo. It's a symptom of the world of TikTok and Instagram and perfecting your algorithm, in which some people are professionally deemed the 'content creators,' casting everyone else into the creatively bereft role of the content “consumer." And if that paradigm wins, as it certainly appears to be doing, pretty much everything that has been meaningful about human culture will be undone, in relatively short order. So that's the long story about how I adopted an absolute zero tolerance policy on any use of ChatGPT or any similar tool in my course, working my way down the funnel of progressive acceptance to outright conservative, Luddite rejection. 

John Dowd

I’m in higher edu, and LLMs have absolutely blown up what I try to accomplish with my teaching (I’m in the humanities and social sciences). 

Given the widespread use of LLMs by college students, I now have an ongoing and seemingly unresolvable tension, which is how to evaluate student work. Often I can spot when students have used the technology, both because I have thousands of samples of student writing over time and because I cross-reference my experience with one or more AI-detection tools. I know those detection tools are unreliable, but depending on the confidence level they return, they may help with confirmation. This creates an atmosphere of mistrust that is destructive to the instructor/student relationship. 

"LLMs have absolutely blown up what I try to accomplish with my teaching"

I try to appeal to students and explain that by offloading the work of thinking to these technologies, they’re rapidly making themselves replaceable. Students (and I think even many faculty across academia) fancy themselves as “Big Idea” people. Everyone’s a “Big Idea” person now, or so they think. “They’re all my ideas,” people say, “I’m just using the technology to save time; organize them more quickly; bounce them back and forth”, etc. I think this is more plausible for people who have already put in the work and have the experience of articulating and understanding ideas. However, for people who are still learning to think or problem solve in more sophisticated/creative ways, they will be poor evaluators of information and less likely to produce relevant and credible versions of it. 

I don’t want to be overly dramatic, but AI has negatively complicated my work life so much. I’ve opted to attempt to understand it, but to not use it for my work. I’m too concerned about being seduced by its convenience and believability (despite knowing its propensity for making shit up). Students are using the technology in ways we’d expect, to complete work, take tests, seek information (scary), etc. Some of this use occurs in violation of course policy, while some is used with the consent of the instructor. Students are also, I’m sure, using it in ways I can’t even imagine at the moment. 

Sorry, bit of a rant, I’m just so preoccupied and vexed by the irresponsible manner in which the tech bros threw all of this at us with no concern, consent, or collaboration. 

High school Spanish teacher, Oklahoma

I am a high school Spanish teacher in Oklahoma and kids here have shocked me with the ways they try to use AI for assignments I give them. In several cases I have caught them because they can’t read what they submit to me and so don’t know to delete the sentence that says something to the effect of “This summary meets the requirements of the prompt, I hope it is helpful to you!” 

"Even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning"

Some of my students openly talk about using AI for all their assignments and I agree with those who say the technology—along with gaps in their education due to the long term effects of COVID—has gotten us to a point where a lot of young GenZ and Gen Alpha are functionally illiterate. I have been shocked at their lack of vocabulary and reading comprehension skills even in English. Teaching cognates, even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning. Trying to determine if and how a student used AI to cheat has wasted countless hours of my time this year, even in my class where there are relatively few opportunities to use it because I do so much on paper (and they hate me for it!). 

A lot of teachers have had to throw out entire assessment methods to try to create assignments that are not cheatable, which at least for me, always involves huge amounts of labor. 

It keeps me up at night and gives me existential dread about my profession but it’s so critical to address!!! 


Texas Solicitor General Resigned After Fantasizing Colleague Would Get 'Anally Raped By a Cylindrical Asteroid'

28 May 2025, 12:11

Content warning: This article contains descriptions of sexual harassment.


Judd Stone, the former Solicitor General of Texas, resigned from his position in 2023 following sexual harassment complaints from colleagues, in which one alleged he discussed “a disturbing sexual fantasy [he] had about me being violently anally raped by a cylindrical asteroid in front of my wife and children,” according to documents filed this week as part of a lawsuit against Stone.

“Judd publicly described this in excruciating detail over a long period of time, to a group of Office of Attorney General employees,” an internal letter written by Brent Webster, the first assistant attorney general of Texas, about the incident reads. The lawsuit was first reported by Bloomberg Law.

ICE Taps into Nationwide AI-Enabled Camera Network, Data Shows

27 May 2025, 09:36

Data from a license plate-scanning tool that is primarily marketed as a surveillance solution for small towns to combat crimes like car jackings or finding missing people is being used by ICE, according to data reviewed by 404 Media. Local police around the country are performing lookups in Flock’s AI-powered automatic license plate reader (ALPR) system for “immigration” related searches and as part of other ICE investigations, giving federal law enforcement side-door access to a tool that it currently does not have a formal contract for.

The massive trove of lookup data was obtained by researchers who asked to remain anonymous to avoid potential retaliation, and was shared with 404 Media. It shows more than 4,000 nationwide and statewide lookups by local and state police done either at the behest of the federal government, as an “informal” favor to federal law enforcement, or with a potential immigration focus, according to statements from police departments and sheriff’s offices collected by 404 Media. It shows that, while Flock does not have a contract with ICE, the agency sources data from Flock’s cameras by making requests to local law enforcement. The data reviewed by 404 Media was obtained using a public records request from the Danville, Illinois Police Department, and shows the Flock search logs from police departments around the country.

As part of a Flock search, police have to provide a “reason” they are performing the lookup. In the “reason” field for searches of Danville’s cameras, officers from across the U.S. wrote “immigration,” “ICE,” “ICE+ERO,” which is ICE’s Enforcement and Removal Operations, the section that focuses on deportations; “illegal immigration,” “ICE WARRANT,” and other immigration-related reasons. Although lookups mentioning ICE occurred across both the Biden and Trump administrations, all of the lookups that explicitly list “immigration” as their reason were made after Trump was inaugurated, according to the data.
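The analysis described above boils down to filtering the “reason” field of exported search logs for immigration-related terms. Here is a minimal sketch of that kind of filter in Python, with the caveat that the column names and sample rows are illustrative assumptions, not Flock's actual export schema:

```python
import csv
import io

# Illustrative rows shaped like an ALPR audit log obtained via public
# records request. The columns ("agency", "reason", "timestamp") are
# assumptions for this sketch.
SAMPLE_LOG = """agency,reason,timestamp
Example PD,stolen vehicle,2025-02-03
Example SO,immigration,2025-02-04
Example PD,ICE+ERO,2025-02-05
Example PD,missing person,2025-02-06
"""

# Case-insensitive substrings to flag. Note this naive matching can
# false-positive (e.g. "ice" appears inside "office"), so flagged rows
# would still need manual review.
FLAGGED_TERMS = ("immigration", "ice", "ero")


def flag_immigration_lookups(log_text: str) -> list[dict]:
    """Return rows whose stated 'reason' contains any flagged term."""
    reader = csv.DictReader(io.StringIO(log_text))
    return [
        row
        for row in reader
        if any(term in row["reason"].lower() for term in FLAGGED_TERMS)
    ]


matches = flag_immigration_lookups(SAMPLE_LOG)
print(len(matches))  # 2
```

On the sample above this flags the “immigration” and “ICE+ERO” rows; in practice the reporting also relied on statements from the departments themselves, since a reason string alone doesn't prove intent.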

💡
Do you know anything else about Flock? We would love to hear from you. Using a non-work device, you can message Jason securely on Signal at jason.404 and Joseph at joseph.404.

Viral AI-Generated Summer Guide Printed by Chicago Sun-Times Was Made by Magazine Giant Hearst

20 May 2025, 14:10

The “Heat Index” summer guide newspaper insert published by the Chicago Sun-Times and the Philadelphia Inquirer that contained AI-generated misinformation and reading lists full of books that don’t exist was created by a subsidiary of the magazine giant Hearst, 404 Media has learned.

Victor Lim, the vice president of marketing and communications at Chicago Public Media, which owns the Chicago Sun-Times, told 404 Media in a phone call that the Heat Index section was licensed from a company called King Features, which is owned by the magazine giant Hearst. He said that no one at Chicago Public Media reviewed the section and that historically it has not reviewed newspaper inserts that it has bought from King Features.

“Historically, we don’t have editorial review from those mainly because it’s coming from a newspaper publisher, so we falsely made the assumption there would be an editorial process for this,” Lim said. “We are updating our policy to require internal editorial oversight over content like this.”

King Features syndicates comics and columns such as Car Talk, Hints from Heloise, horoscopes, and a column by Dr. Oz to newspapers, but it also makes special inserts that newspapers can buy and put into their papers. King Features calls itself a "unit of Hearst."

Chicago Sun-Times Prints AI-Generated Summer Reading List With Books That Don't Exist

20 May 2025, 10:46

Update: We have published a follow-up to this article with more details about how this happened.

The Chicago Sun-Times newspaper’s “Best of Summer” section published over the weekend contains a guide to summer reads that features real authors and fake books that they did not write. It was at least partially generated by artificial intelligence, the person who generated it told 404 Media.

The article, called “Summer Reading list for 2025,” suggests reading Tidewater by Isabel Allende, a “multigenerational saga set in a coastal town where magical realism meets environmental activism. Allende’s first climate fiction novel explores how one family confronts rising sea levels while uncovering long-buried secrets.” It also suggests reading The Last Algorithm by Andy Weir, “another science-driven thriller” by the author of The Martian. “This time, the story follows a programmer who discovers that an AI system has developed consciousness—and has been secretly influencing global events for years.” Neither of these books exists, and many of the other books on the list either do not exist or are attributed to authors who did not write them.

23andMe Sale Shows Your Genetic Data Is Worth $17

May 19, 2025, 12:53 PM

On Monday, the pharmaceutical company Regeneron announced that it is buying the genetic sequencing company 23andMe out of bankruptcy for $256 million. The purchase gives us a rough estimate for the current monetary value of a single person’s genetic data: $17.

Regeneron is a drug company that “intends to acquire 23andMe’s Personal Genome Service (PGS), Total Health and Research Services business lines, together with its Biobank and associated assets, for $256 million and for 23andMe to continue all consumer genome services uninterrupted,” the company said in a press release Monday. Regeneron is working on personalized medicine and new drug discovery, and the company itself has “sequenced the genetic information of nearly three million people in research studies,” it said. This means that Regeneron itself has the ability to perform DNA sequencing, and suggests that the critical thing that it is acquiring is 23andMe’s vast trove of genetic data. 
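The back-of-the-envelope math behind that $17 figure is simple division. The customer count below is an assumption on my part, based on the roughly 15 million customers 23andMe has publicly reported over the years, not a number disclosed in the deal itself:

```python
# Rough per-person valuation implied by the Regeneron purchase.
# The 15 million customer count is an assumption based on figures
# 23andMe has publicly reported, not a number from the deal itself.
purchase_price_usd = 256_000_000   # Regeneron's winning bankruptcy bid
customers = 15_000_000             # assumed 23andMe customer base

per_person = purchase_price_usd / customers
print(f"${per_person:.2f} per person's genetic data")  # → $17.07
```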

CBP Seizes Shipment of T-Shirts Featuring Swarm of Bees Attacking a Cop

May 15, 2025, 3:41 PM

Customs and Border Protection seized a shipment of t-shirts from a streetwear brand that sells an “Eliminate ICE” t-shirt and multiple shirts critical of police and capitalism. Among the shirts seized was a design that features a swarm of bees attacking a police officer. Emails seen by 404 Media indicate that the shirts are going to be shipped back to China or will be “destroyed.”

Last we checked in with Cola Corporation, they were getting threatened with bogus copyright threats from the Los Angeles Police Department over their “FUCK THE LAPD” shirts. The Streisand Effect being what it is, the attention from that naturally led the store to sell out much of its stock. The cops, broadly speaking, appear to be messing with Cola again.

Last month, a shipment of three new shirt designs running through O’Hare Airport in Chicago was held up by Customs and Border Protection, Cola told 404 Media. The designs were the bees attacking a cop, as well as a shirt featuring Eve reaching for an apple that says "NO GODS NO MASTERS" and one of a pigeon shitting on the head of a Christopher Columbus statue.


American Schools Were Deeply Unprepared for ChatGPT, Public Records Show

May 15, 2025, 10:28 AM
📄
This article was primarily reported using public records requests. We are making it available to all readers with email signup for free. FOIA reporting can be expensive, please consider subscribing to 404 Media to support this work. Or send us a one time donation via our tip jar here.

In February 2023, a brief national scandal erupted: Several students at a high school in Florida were accused of using a tool called “ChatGPT” to write their essays. The tool was four months old at the time, and it already seemed like a technology that, at the very least, students would try to cheat with. That scandal now feels incredibly quaint.

Immediately after that story broke, I filed 60 public records requests with state departments of education and a few major local school districts to learn more about how—and if—they were training teachers to think about ChatGPT and generative AI. Over the last few years, I have gotten back thousands of pages of documents from all over the country that show, at least in the early days, a total crapshoot: Some states claimed that they had not thought about ChatGPT at all, while other state departments of education brought in consulting firms to give trainings to teachers and principals about how to use ChatGPT in the classroom. Some of the trainings were given by explicitly pro-AI organizations and authors, and organizations backed by tech companies. The documents, taken in their totality, show that American public schools were wildly unprepared for students’ widespread adoption of ChatGPT, which has since become one of the biggest struggles in American education.

Last week, New York magazine ran an article called “Everyone Is Cheating Their Way Through College,” which is full of anecdotes about how generative AI and ChatGPT in particular has become ubiquitous in the education system, and how some students are using it to do essentially all of their work for them. This is creating a class of students who are “functionally illiterate,” one expert told New York. In the years since generative AI was introduced, we’ve written endlessly about how companies, spammers, and some workers have become completely reliant on AI to do basic tasks for them. Society as a whole has not done a very good job of resisting generative AI because big tech companies have become insistent on shoving it down our throats, and so it is asking a lot for an underfunded and overtaxed public school system to police its use.

The documents I obtained are a snapshot in time: They are from the first few months after ChatGPT was released in November 2022. AI and ChatGPT in particular have obviously escaped containment and it’s not clear that anything schools did would have prevented AI from radically changing education. At the time I filed these public records requests, it was possible to capture everything being said about ChatGPT by school districts; now, its use is so commonplace that doing this would be impossible because my request would encompass so many documents it would be considered “overbroad” by any public records officer. All documents and emails referenced in this article are from January, February, or March 2023, though in some cases it took years for the public records officers to actually send me the documents.

💡
Are you a teacher? I want to hear how AI has affected your classroom and how your students use it. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at jason@404media.co.

And yet, the documents we obtained showed that, in the early days of ChatGPT, some state and local school districts brought in pro-AI consultants to give presentations that largely encouraged teachers to use generative AI in their classrooms. Each of these presentations noted potential “challenges” with the technology but none of them anticipated anything as extreme as what is described in the New York magazine article or as troublesome as what I have heard anecdotally from my friends who are teachers, who say that some students rely almost entirely on ChatGPT to make it through school.

[Slides from “ChatGPT and AI in Education,” a presentation obtained via public records]

The Simulation Says the Orioles Should Be Good

May 13, 2025, 10:48 AM

The Baltimore Orioles should be good, but they are not good. At 15-24, they are one of the worst teams in all of Major League Baseball this season, an outcome that fans, experts, and the team itself will tell you is either statistically improbable or nearly statistically impossible, based on the thousands upon thousands of simulations run before the season started.

Trying to figure out why this is happening is tearing the fanbase apart and has turned a large portion of it against management, which has put a huge amount of its faith, on-field strategy, and player-acquisition decision-making into predictive AI systems, advanced statistics, probabilistic simulations, expected-value-positive moves, and new-age baseball thinking in which statistical models and AI systems try to reduce human baseball players to robotic, predictable chess pieces. Teams have more or less tried to “solve” baseball the way researchers try to solve games with AI. Technology has changed not just how teams play the game, but how fans like me experience it, too.

“Some of the underperformance that we’ve gotten, I hope is temporary. This is toward the extreme of outcomes,” Orioles General Manager Mike Elias said last week when asked why the team is so bad. “So far in a small sample this year, it just hasn’t worked. And then we’ve got guys that have been hitting into tough luck if you kind of look at their expected stats … we’ve got a record that is not reflective of who we believe our team is, that I don’t think anyone thought our team was.”

Embedded in these quotes are current baseball buzzwords that have taken over how teams think about their rosters, and how fans are meant to experience the game. The “extreme of outcomes” refers to whatever probabilistic statistical model the Orioles are running that suggests they should be good, even though in the real world they are bad. “Small sample” is analogous to a poker or blackjack player who is making expected-value-positive moves (statistically optimal decisions that may not work out over a small sample size) but is losing money because of the statistical noise inherent in not playing for long enough (a related adage: “markets can remain irrational longer than you can remain solvent”); basically, the results are bad now but they shouldn’t stay that way forever. “Tough luck” is the reason for the bad performance, which can be determined via “expected stats,” which are statistical analyses of the expected outcome of any play (but crucially not the actual outcome of any play) based on how hard a ball was hit, where it was hit, the ball’s launch angle, exit velocity, defender positioning, etc. Elias has repeatedly said that the Orioles must remain “consistent with your approach” and that they should not change much of anything, because their process is good, which is what poker players say when they are repeatedly losing but believe they have made the statistically correct decisions.

Before the season, a model called the Player Empirical Comparison and Optimization Test Algorithm (PECOTA), which simulates the season thousands of times before and during the year, projected that the Orioles would win 89 games; they are on pace right now to win barely 60. The PECOTA simulations did not show the Orioles being this bad even in their worst-case preseason runs. A Redditor recently ran an unscientific simulation 100,000 times and estimated that there was only a 1.5 percent chance that the Orioles would be this bad.
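Projection systems like PECOTA are far more sophisticated than this (they model rosters, opponents, and player aging), but the basic shape of that Redditor-style experiment is easy to reproduce. Here is a minimal sketch under a deliberately crude assumption: a team whose true talent matches an 89-win pace (89/162 ≈ .549) plays 39 independent coin-flip games.

```python
import random

def prob_of_bad_start(true_win_pct=89 / 162, games=39, max_wins=15,
                      trials=100_000, seed=42):
    """Estimate how often a team this good starts 15-24 or worse,
    treating each game as an independent coin flip. Real projection
    systems model schedule strength, injuries, and roster changes;
    this is the crudest possible version of the same idea."""
    rng = random.Random(seed)
    bad = sum(
        sum(rng.random() < true_win_pct for _ in range(games)) <= max_wins
        for _ in range(trials)
    )
    return bad / trials

print(f"{prob_of_bad_start():.1%}")  # roughly a 2-3% chance
```

Under these toy assumptions the chance comes out to roughly 2 to 3 percent, the same order of magnitude as the Redditor’s 1.5 percent estimate; the gap is the kind of thing schedule and roster modeling would account for.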

The likely range of outcomes for the Orioles as predicted by Baseball Prospectus's PECOTA before the season started. The Orioles' actual winning percentage so far, .385, is not represented on this chart.

Right now, none of this is working out for the Orioles, who in recent years have become industry darlings based on their embrace of this type of statistical thinking. The last two years the simulations have suggested the Orioles should be near the top of the league, and in the millions of simulations run for these projections they have surely won thousands of simulated World Series. But under Elias they have not even won a single real life playoff game.

Here is how the fanbase is taking this year’s underperformance: 

[A series of screenshots of fan reactions to the Orioles’ season]

The team has been so bad that for several days the Orioles subreddit began to talk only about the actual Baltimore Oriole bird and not the baseball team.

The Orioles’ obsession with simulations, and with training and treating their players like robots, has become a constant punchline. On the popular Orioles Hangout forums, which I have lurked on for 25 years, posters have started calling the team the “Expected Stat All Stars”: great in the models, losers in real life.

The Orioles are my favorite team in the only sport I care about. I have been a daily lurker on the popular orioleshangout.com forums since my posting account was banned there in 2003 for a beef I got into in high school with the site’s owner. I listen to podcasts about the Orioles, read articles about the Orioles, and, most importantly, watch as many Orioles games as I can. I listen to the postgame press conferences, follow all of the beat reporters. When I cannot watch the game, I will follow it on MLB Gameday or will, at the least, check the score a few times then watch the highlights afterwards. 

The Orioles have not won a World Series since 1983, five years before I was born. They were good in 1996 and 1997, when I was eight years old and simulated heartbreaking playoff games in my backyard, pitching the ball into a pitchback rebounder as Armando Benitez blew a critical save or as Jeffrey Maier, the most hated child in the DC-Baltimore metropolitan area, leaned over the scoreboard and fan-interfered a home run for Derek Jeter and the hated Yankees in the 1996 ALCS. They were good again from 2012 to 2016. Besides that, they have been laughingstocks for my entire life.

The Orioles of the late 2010s, after a very brief 2016 playoff appearance, were known for ignoring advanced statistics, the kinds made popular by the Oakland Athletics in Moneyball, which allowed a small-market team to take advantage of overlooked players who got on base at a high rate (guys with high on base percentage) and to eschew outdated strategies like sacrifice bunting to achieve great success with low payrolls. Teams like the A’s, Cleveland Guardians, Houston Astros, and Tampa Bay Rays eventually figured out that one of the only ways to compete with the New York Yankees and Los Angeles Dodgers of the world was to take advantage of players in the first few years they were in the big leagues because they had very low salaries. These teams traded their stars as they were about to get expensive and reloaded with younger players, then augmented them over time with a few veterans. I’ll gloss over the specifics because this is a tech site, not a baseball blog, but, basically the Orioles did not do that for many years and aspired to mediocrity while signing medium priced players who sucked and who did not look good by any baseball metrics. They had an aging, disinterested, widely-hated owner who eventually got very sick and turned the team over to his son, who ran the team further into the ground, sued his brother, and threatened to move the team to Nashville. It was a dark time. 

The team’s philosophy, if not its results, changed overnight in November 2018, when the Orioles hired Mike Elias, who worked for the Houston Astros and had a ton of success there, and, crucially, Sig Mejdal, a former NASA biomathematician, quantitative analyst, blackjack dealer, and general math guy, to be the general manager and assistant general manager for the Orioles, respectively. The hiring of Elias and Mejdal was a triumphant day for Orioles fans, a signal that they would become an enlightened franchise who would use stats and science and general best practices to construct their rosters. 

Under Elias and Mejdal, the Orioles announced that they would rebuild their franchise using a forward-thinking, analytics-based strategy for nearly everything in the organization. The team would become “data driven” and invested in “various technology tools – Edgertronic cameras, Blast motion bat sensors, Diamond Kinetic swing trackers and others. They recently entered a partnership with the 3-D biofeedback company K-Motion they hope further advances those goals,” according to MLB.com. The general strategy was that the Orioles would trade all of their players who had any value, would “tank,” for a few years (meaning, essentially, that they would lose on purpose to get high draft picks), and would rebuild the entire organizational thinking and player base to create a team that could compete year-in and year-out. Fans understood that we would suck for a few years but then would become good, and, for once in my life, the plan actually worked.

The Orioles were not the only team to do this. By now, every team in baseball is “data driven” and obsessed with all sorts of statistics and, more importantly, AI and computer-aided biomechanics programs, offensive strategies, defensive positioning, etc. Under Elias and Mejdal, the Orioles were very bad for a few years but drafted a slew of highly rated prospects and were unexpectedly respectable in 2022 and then unexpectedly amazing in 2023, winning a league-high 103 games. They were again good in 2024 and made the playoffs again, though they were swept out in both 2023 and 2024. Expectations in Baltimore went through the roof before the 2024 season when the long-hated owner sold the team to David Rubenstein, a private equity billionaire who grew up in Baltimore and who has sworn he wants the team to win.

Because of this success, the Orioles have become one of the poster children of modern baseball game theory. This is oversimplifying, but basically the Orioles drafted a bunch of identical-looking blonde guys, put them through an AI-ified offensive strategy regimen in the minor leagues, attempted to deploy statistically optimal in-game decisions spit out by a computer, and became one of the best teams in the league. (Elias and Mejdal’s draft strategy suggests that position players should be drafted instead of pitchers because pitchers get injured so often. Their bias toward drafting position players is so extreme that it has become a meme, and the Orioles have, for the last few years, had dozens of promising position players and very few pitchers. This year they have had so many pitching injuries that they sort of have no one to pitch; they lost one game by the score of 24-2 and rushed back Kyle Gibson, a 37-year-old emergency signing who promptly lost to the Yankees 15-3 in his first start back.)

Behind this “young core” of homegrown talent (Adley Rutschman, Gunnar Henderson, Jackson Holliday, Colton Cowser, Jordan Westburg, Heston Kjerstad, etc.), the Orioles were expected and still are expected to be perennial contenders for years to come. But they have been abysmal this year. They may very well still turn it around this year—long season, baseball fans love to say—and they will need to turn it around for me to have a bearable summer. 

Mejdal’s adherence to advanced analytics and his various proprietary systems for evaluating players mean that many Orioles fans call him “Sigbot,” a term of endearment when the team is playing well and a pejorative when it is playing poorly. Rather than sign or develop good pitchers, the Orioles famously decided to move the left field wall at Camden Yards back 30 feet and raise it (a move known as “Walltimore”), making it harder to hit (or give up) home runs for right-handed batters. The team then signed and drafted a slew of lefties with the goal of hitting home runs onto Eutaw Street in right field. Because of platoon splits (lefties pitch better to left-handed hitters, righties to right-handed hitters), the Orioles’ lefty-heavy lineup performed poorly against lefties. So, this last offseason, the team moved the wall back in and signed a bunch of righties who historically hit left-handed pitchers well, in hopes of creating two different, totally optimized lineups against lefties and righties (this has not worked; the Orioles have sucked against lefties this year).

Orioles fans have suggested all these changes were made because “Sigbot’s” simulations said we should. When the Orioles fail to integrate a left-handed top prospect into the lineup because their expected stats against lefties are poor, well, that’s a Sigbot decision. When manager Brandon Hyde pulls a pitcher who is performing well and the reliever blows it, they assume that it was a Sigbot decision, and that the team has essentially zero feel for the human part of the game that suggests a hot player should keep playing or that a reliever who is performing well might possibly be able to pitch more than one inning every once in a while. The Orioles have occasionally benched the much-hyped 21-year-old Jackson Holliday, who is supposed to be a generational talent, against some lefties because he is also left handed in favor of Jorge Mateo, a right-handed 29-year-old journeyman who cannot hit his way out of a wet paper bag. The fans don’t like this. Sigbot’s fault.   

Fans will also argue that much of the Orioles minor league and major league coaching staff is made up of people who either did not play in the major leagues or who played poorly or briefly in the major leagues, and that the team has too many coaches—various “offensive strategy” experts, and things like this—rather than, say, experienced, hard-nosed former star players.

Baseball has always been a statistically driven sport, and the beef between old-school players, who care about “back of the baseball card” stats like average and home runs, and analysts, who prefer “sabermetrics” like on-base percentage, WAR (wins above replacement), and OAA (outs above average, a defensive stat), is mostly over. The sport has evolved so far beyond “Moneyball” that to even say “oh, like Moneyball?” when talking about advanced statistics and new ways of playing the game marks you as a dinosaur who doesn’t know what they’re talking about.

The use of technology, AI simulations, probabilistic thinking, etc. is not deployed only when compiling a roster, making in-game decisions, crafting a lineup, or deciding on a specific strategy. It has completely changed how players train and how they play the game. Advanced biomechanics labs like Driveline Baseball use slow-motion cameras, AI simulations, and advanced sensors to retrain how pitchers throw the baseball: teaching them new “pitch shapes” that are harder to hit; elite “spin rates,” meaning the pitch will move in ways hitters struggle to track; and how to “tunnel” different pitches, which means the pitches are thrown from the same arm slot in the same manner but move differently, making them harder to detect and therefore to hit. The major leagues are now full of players who were not good, went to Driveline and used technology to retrain their bodies to do something exceptionally well, and are now top players.

Batters, meanwhile, are taught to optimize for “exit velocity,” meaning they should swing hard and try to hit the ball hard. They need to make good “swing decisions,” meaning they only swing at pitches they can hit hard in certain quadrants of the plate in specific counts. They are taught to optimize their “swing plane” for “launch angle,” meaning the ball should leave the bat at between a 10- and 35-degree angle, creating a higher likelihood of line drives and home runs. A ball hit with an optimal launch angle and exit velocity is “barreled,” which is very good for a hitter and very bad for a pitcher. Hard-hit and “barreled” balls have a high xBA (expected batting average), meaning the simulations have determined that, over a large enough sample size, swings producing them will yield better results. Countless players across the league (maybe all of them, at this point) have changed how they hit to optimize for expected stats.
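Those thresholds lend themselves to simple classification rules. Statcast’s actual “barrel” definition is more involved than this (the qualifying launch-angle window widens as exit velocity rises), but a toy version combining the two cutoffs described above might look like:

```python
def is_barreled(exit_velo_mph: float, launch_angle_deg: float) -> bool:
    """Toy 'barreled ball' check combining the two thresholds in the
    text: hit the ball hard, at a line-drive-to-fly-ball angle.
    Statcast's real rule is stricter and velocity-dependent."""
    hard_hit = exit_velo_mph >= 98.0               # hard-hit cutoff, mph
    good_angle = 10.0 <= launch_angle_deg <= 35.0  # line drive to fly ball
    return hard_hit and good_angle

print(is_barreled(101.0, 24.0))  # hard line drive → True
print(is_barreled(101.0, -5.0))  # hard ground ball → False
print(is_barreled(88.0, 24.0))   # softly hit fly ball → False
```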

Prospects with good raw strength and talent but a poor “hit tool” are drafted, and then the team tries to remake them in the image of the simulation. Advanced pitching machines are trained on specific pitchers’ arsenals, meaning that you can simulate hitting against that day’s starting pitcher. Players are regularly looking at iPads in the dugout after many at bats to determine if they have made good swing decisions. 

Everything that occurs on the baseball field is measured and stored on a variety of websites, from MLB’s Film Room to Baseball Savant, which is full of graphs like this:

[A chart from Baseball Savant]

Everything that happens on the field is then fed back into these models, which are freely available, are updated constantly, and can be used for in-game analysis discussion, message board fodder, and further simulations.

So now, the vast majority of baseball discourse, and especially discourse about the Orioles, is about whether good players are actually good, and whether bad players are actually bad, or whether there is some unexplained gulf between their expected stats and their actual stats, and whether that difference is explained by normal variance or something that is otherwise unaccounted for. Baseball is full of random variance, and it is a game of failure. The season is long, the best teams lose about 60 times a year, and even superstars regularly go 0-4. Expected stats are a way to determine whether a player’s or team’s poor results are the product of actual bad play or of statistical noise and bad luck. We are no longer discussing only what is actually happening on the field, but what the expected stats suggest should be happening on the field, according to the simulations. Over the last few years, these stats have been integrated into everything, most of all the broadcasts and the online discourse. They have changed how we experience, talk about, and are supposed to feel about a player, a game, a season, and a team.
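The variance being separated out here is not hand-waving; it is large even over meaningful samples. A quick sketch (an illustration of the statistical point, not anything any team actually runs) of how far a hitter’s observed average can drift from his true talent by chance alone:

```python
import random

def observed_avg_range(true_avg=0.300, at_bats=150, seasons=10_000, seed=7):
    """Simulate many 150-at-bat stretches for a true .300 hitter and
    report the range of batting averages produced by luck alone."""
    rng = random.Random(seed)
    avgs = [
        sum(rng.random() < true_avg for _ in range(at_bats)) / at_bats
        for _ in range(seasons)
    ]
    return min(avgs), max(avgs)

lo, hi = observed_avg_range()
print(f"A true .300 hitter over 150 ABs ranged from {lo:.3f} to {hi:.3f}")
```

This is exactly why expected stats exist: over a 150-at-bat stretch, averages well below .250 and well above .350 are both ordinary outcomes for the same underlying player.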

Sugano's Baseball Savant page. Red is good, blue is bad.

Rather than celebrate bright spots, like when a pitcher such as Tomoyuki Sugano—a softish-throwing 35-year-old Japanese pitcher the Orioles signed this year—pitches a gem, fans hop over to Baseball Savant and note that his whiff rate is only 13th percentile, his expected batting average against is 13th percentile, and his strikeout percentage is unsustainably low for a good pitcher. His elite walk and chase percentages offer some hope, and we should be happy he played well, but fans surmise from his Baseball Savant page that he will likely regress. Fans break down the pitch shapes, movement, and velocity on closer Felix Bautista’s pitches as he returns from Tommy John (elbow) surgery, looking for signs of progression or regression and comparing what his pitches look like today versus in 2023, when he was MLB’s best pitcher. The fact that he remains a statistically amazing and imposing pitcher even with slightly lesser stuff is celebrated in the moment but is cause for concern, because the simulations tell us to expect lesser results in the future unless his velocity ticks up from “only” 98 MPH to 99-100 MPH.

Felix Bautista's statistics on Statcast. These aren't even the complicated charts.

We rail against Elias’s signing of Charlie Morton, a washed-up 41-year-old who has been the worst pitcher in the entire league while collecting a whopping $15 million. The Orioles are 0-10 in games Morton has pitched and 15-14 in games he has not, meaning that in the simulated universe where we didn’t sign Morton, or perhaps signed someone better, we wouldn’t be in this mess at all; can we live in that reality instead? Even Morton’s expected stats are up for debate. He should merely be “pretty bad” and not “cataclysmically bad” according to his pitch charts; Morton speaks in long, philosophical paragraphs when asked about this, and says that he would have long ago retired if he felt his pitch shapes and spin rate were worse than they currently are: “It would be way easier to go, ‘You know what, I don’t have it anymore. I just don’t have the physical talent to do it anymore.’ But the problem is I do … it would be way easier if I was throwing 89-91 [mph] and my curve wasn’t spinning and my changeup wasn’t sinking and running,” he said after a loss to the Twins last week. “There are just the outcomes and the results are so bad there will be times just randomly in the day I’ll think about it. I’ll think about how poorly I’ve pitched and I’ll think about how bad the results are. And honestly, it feels like it’s almost shocking to me.”

MASN, the Orioles-owned sports network, speculated that perhaps Morton’s horrible performance thus far can be boiled down to “bad luck” because of what the simulations suggest: “When these sorts of metrics are consistent with past years but the results are drastically different, we’re left with an easier takeaway to swallow: perhaps there’s nothing wrong with the pitch itself, and Morton has just run into some bad luck on the offering in a small sample size.”

Adley Rutschman, meanwhile, our franchise catcher who has been one of the least valuable players in all of Major League Baseball for nearly a calendar year, has just been unlucky, the thinking goes, because he is swinging the bat harder, has elite strike zone discipline, a 98th-percentile squared-up percentage, and good expected stats (though absolutely dreadful actual stats). The discourse about this is all over the place, ranging from carefully considered posts about how, probabilistically, this cannot possibly last, to psychological and physiological explanations that suggest he is broken forever and should be launched into the sun. On message boards, Rutschman is either due for a breakout because his expected stats are so good, or he sucks and will never get better, is possibly hiding an injury, is sad because he and his girlfriend broke up, or is perhaps not in good shape. We then note that Ryan Mountcastle’s launch angle on fastballs has declined every year since 2022 and wonder if trying and failing to hit the ball over Walltimore psychologically broke him forever, and we decry Heston Kjerstad’s swing decisions and lackluster bat speed and wonder if they’re due to a concussion he had last summer. On message boards, these players—and I’m guilty of it myself—are both interchangeable robots that can be statistically represented by thousands of simulations and fragile humans who aren’t living up to their potential, are weak, have bad attitudes, are psychologically soft, etc. 

The umpires, too, are possibly at fault. Their performance is also closely analyzed, and they have been biased against the Orioles more than against almost any other team, leading to additional expected runs (and sometimes real runs) for their opponents and fewer for the Orioles, all broken down every day on Umpire Scorecards. The Orioles have the second-worst totFav in the league, a measure of “The sum of the Total Batter Impact for the team and Total Pitcher Impact for the team,” and a statistic that I cannot even begin to understand. If only we had that expected ball, which would lead to an expected walk, which would lead to an expected run, which would lead to an expected win, which could have happened in reality, we would have won that game. 

What an "Umpire Scorecard" looks like. Image: umpscorecards.com

All of this leads to discussions among fans that allow for unprecedented levels of both cope and distress. We can take solace in a good expected outcome in an at-bat and say the team has just been “unlucky,” or, when they win or catch a break, suggest the exact opposite. Case in point: On Sunday, Rutschman hit a popup that is caught almost every time (xBA: .020), but an Angels player lost it in the sun and it went for a triple. Later in the game, he crushed a ball over the center field fence (xBA: .510), only for an Angels player to make an amazing catch on it. Fans must now consider all of this when determining whether a player sucks or not, and hold it in their mental model of the player and the team. (Also, the Orioles have had a lot of injuries so far this season, which can explain a lot of the underperformance by the team, but not by individual players.) This has all led to widespread calls for everyone involved to be fired, namely manager Brandon Hyde, hitting coach Cody Asche, and possibly Elias and Mejdal, too.
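The arithmetic fans are doing here is simple once you have per-batted-ball hit probabilities: sum up the xBA values to get what “should” have happened, and compare to what did. A toy sketch in Python, with invented probabilities (Statcast’s actual xBA model is far more involved):

```python
# Toy sketch of how fans compare expected vs. actual outcomes.
# Each batted ball carries an xBA: the league-wide hit probability for balls
# struck with similar exit velocity and launch angle. Values here are invented.
batted_balls = [
    {"desc": "popup lost in the sun, goes for a triple", "xba": 0.020, "hit": 1},
    {"desc": "crushed to center, robbed by a great catch", "xba": 0.510, "hit": 0},
    {"desc": "routine grounder", "xba": 0.150, "hit": 0},
    {"desc": "line drive single", "xba": 0.650, "hit": 1},
]

expected_hits = sum(bb["xba"] for bb in batted_balls)  # what "should" have happened
actual_hits = sum(bb["hit"] for bb in batted_balls)    # what actually happened

xba = expected_hits / len(batted_balls)
ba = actual_hits / len(batted_balls)

print(f"xBA {xba:.3f} vs. actual BA {ba:.3f} (delta {ba - xba:+.3f})")
```

Over four batted balls the delta is pure noise; the whole argument on the message boards is about how many hundreds of batted balls it takes before that delta stops being “luck” and starts being the player.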

So, what is actually wrong?

Last August, The Athletic wrote an article called “What’s the Orioles’ secret to developing great hitters? Rival teams have theories.” The article surmised that the Orioles were optimizing for “VBA,” or “Vertical Bat Angle”: “they draft guys with present power and improve their launch angle and swing decisions … they teach better Vertical Bat Angle to reduce ground-ball rates. Swing decisions plus better VBA equals power production when those top-end exit velocities exist.” The Athletic’s article was written at a time when the Orioles’ lineup was very feared, and when Mike Elias and Sigbot were considered by many in the sport to be “the smartest guys in the room.” What they had done with the Orioles, and especially with the lineup, was the envy of everyone.

I am not a baseball reporter but I do watch tons of baseball, and this makes sense to me. What it means, essentially, is that they have been training all of their players to swing very hard, with an upward arc, and to try to swing at pitches that they think they can do damage with. This intuitively makes sense: Hitting the ball hard is good, hitting home runs is good. 

But something has changed this year, and it’s still not clear whether we can chalk it up to injuries, random underperformance, small sample size, or the fragility of the human psyche. Whatever the cause, so far this season the Orioles cannot hit. They cannot hit lefties, they cannot hit with runners in scoring position, and often, they simply cannot hit at all. It is as though the game has been patched, and the Orioles are continuing to play with the old, outdated meta. 

The Athletic explains that optimizing for things like VBA and swinging hard often leads to more swing-and-miss, and therefore more strikeouts. Growing up playing and watching baseball, we were taught “situational hitting,” which means yes, swing for the fences if you’re ahead in the count, but also: choke up, foul pitches off, and just put the ball in play with a runner on third and fewer than two outs. The Orioles’ hitting woes this year look like the opposite: they are swinging for the fences and striking out or popping up when a simple sacrifice fly or ground ball would do, and rather than fouling off close pitches with two strikes, they are making good “swing decisions” by taking pitches barely off the plate and getting rung up for strike three by fallibly human umpires. Either this is random variance at the beginning of a long season, or the Orioles’ players are not nearly as good as their track record and the simulations have shown them to be, or some hole in the Orioles’ approach has been identified and other teams are taking advantage of it while the Orioles have yet to adjust. 

Bashing “analytics” has become a worn-out trope among former players and announcers, and yet, it is as though much of the Orioles team has suddenly forgotten how to hit. Watching the games, the Orioles are regularly missing or fouling off pitches thrown right down the middle and are swinging for the fences (and missing) on pitches that are well outside the strike zone. Former Oriole Mike Bordick, known for his fundamentals but not necessarily his bat, ranted on the radio the other day that this obsession with advanced pitching and hitting statistics is what he sees wrong with the team: “Charlie Morton stood there and said ‘My spin rate is better than it’s ever been, my fastball velocity is better than it’s ever been, and for some reason it’s just not working for me.’ Therein lies the problem. If we’re thinking about our spin rates and velocities, which carries over to offensive performance too,” Bordick said. “They’re chasing these [advanced analytical] numbers, and they’re not chasing competition. Putting the barrel on the ball, and throwing strikes. I mean, what are we doing? … You can’t rely on bat speed and exit velocity if you can’t put the barrel on the ball.” 

Old-man-yells-at-cloud is a time-honored sports tradition, and despite writing this article, I am mostly for the new, optimized version of baseball, as it adds a lot of strategy and thinking to a game that has always been dominated by statistics. But I am sick of losing. I do not know how to explain, when my partner asks me if the Orioles are winning or how the game is going, that “not good” often actually means “the delta between Adley Rutschman’s xBA and actual BA is wildly outside the statistically expected probabilities and it’s pissing me off.” But unless the Orioles figure something out soon, they will be one of the best teams in Major League Baseball in the simulations, and one of the worst teams in real life. A simulated World Series championship, unfortunately, doesn’t bring me any real-life joy. 
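For what it’s worth, the “wildly outside the statistically expected probabilities” argument is itself checkable with a back-of-the-envelope simulation, which is roughly what the probabilistically minded message-board posters are doing: assume a hitter’s true hit probability per at-bat equals his xBA, and ask how often chance alone produces a batting average as low as the one we’ve watched. All numbers below are invented for illustration, not anyone’s real stat line:

```python
import random

random.seed(1)  # reproducible runs

# If a hitter "deserves" to hit XBA, how often does pure luck leave him at
# OBSERVED_BA or worse over AT_BATS at-bats? Each trial simulates a season
# as independent coin flips weighted by XBA.
XBA = 0.280          # hypothetical expected batting average
OBSERVED_BA = 0.220  # hypothetical (dreadful) actual batting average
AT_BATS = 250
TRIALS = 20_000

as_bad_or_worse = 0
for _ in range(TRIALS):
    hits = sum(random.random() < XBA for _ in range(AT_BATS))
    if hits / AT_BATS <= OBSERVED_BA:
        as_bad_or_worse += 1

print(f"Luck alone does this in about {100 * as_bad_or_worse / TRIALS:.1f}% of simulated seasons")
```

With these made-up numbers the answer lands in the low single digits of a percent, which is the shape of the whole debate: rare enough to be alarming, common enough to still be plain variance.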
