Google AI Falsely Says YouTuber Visited Israel, Forcing Him to Deal With Backlash
Science and music YouTuber Benn Jordan had a rough few days earlier this week after Google’s AI Summary falsely said he had recently visited Israel, leading people to believe he supported the country during its war on Gaza. Jordan does not support Israel and has previously donated to Palestinian charities.
Jordan told 404 Media that when people type his name into Google, it’s often followed by “eyebrows” or “wife.” That changed when the popular political Twitch streamer Hasan Piker decided to react to his video about Flock AI, an AI-powered camera company that 404 Media has covered extensively. Jordan’s videos have appeared on Piker’s stream before, so he knew he was in for a bit of a ride. “Anytime that he has reacted to my content I’m always like, ‘Oh no, I’m going to get eviscerated in front of millions of people for being a libertarian without being able to explain my views,’” he said.
This time it was a little different, however. “I looked at it and in the middle of it, his chat was kind of going crazy, saying that I support Israel’s genocidal behavior,” Jordan said. “And then I started getting a bunch of messages from people asking me why I don’t make myself clear about Israel, or why I support Israel and I’ve donated plenty of money to the Palestinian Children’s Relief Fund. I’ve been pretty vocal in the past about not supporting Israel and supporting a free Palestinian state.”
Then someone sent him a screenshot of the AI-generated summary of a Google search result that explained the deluge of messages. If you typed “Benn Jordan Israel” into Google and looked only at its AI summary, this is what it told you:
“Electronic musician and science YouTuber Benn Jordan has recently become involved in the Israeli-Palestinian conflict, leading to significant controversy and discussion online. He has shared his experiences from a trip to Israel, during which he interviewed people from kibbutzim near the Gaza border,” the AI summary said, according to a screenshot Jordan shared on Bluesky. “On August 18, 2025, Benn Jordan uploaded a YouTube video titled I Was Wrong About Israel: What I Learned On the Ground, which detailed his recent trip to Israel.”
Jordan had never been to Israel, and he doesn’t make content about war. His videos live at the intersection of science and sound; he went viral earlier this year when he converted a PNG sketch into an audio waveform and taught the song to a young starling, effectively saving a digital image in the memory of a bird. He’s also covered the death of Spotify, crumbling American capitalism, and the unique dangers AI poses to musicians.
It seemed that Google’s AI had confused Jordan with the YouTuber Ryan McBeth, a guy who does make videos about war. McBeth is a chain-smoking Newsmax commentator with a video titled “I Was Wrong About Israel: What I Learned on the Ground,” the exact title Google attributed to Jordan.
It’s a weird mistake for an AI to make, but AI makes a lot of mistakes. AI-generated songs are worse than real ones, and AI-generated search results are often wrong while funneling traffic away from the very sites Google draws its summaries from. Jordan’s experience is just one small sample of what happens when people take AI at face value without doing five minutes of extra research.
When Jordan learned he was being misrepresented by the AI summary, he started sharing the story on Bluesky and Threads. He told 404 Media that the AI summary updated itself about 24 hours later. “Eventually the AI picked up me posting about it and then said that there was a rumor about me, a false rumor, spread about me going to Israel. And then I was just kind of ripping the hair out of my head. I was like, ‘You don’t even know that you created the rumor!’”
He told 404 Media that he thought Google’s AI might have defamed him, and he reached out to lawyers for an opinion, not as a prelude to a lawsuit but out of curiosity. One told him he may have a case. “I’m going to Yellowstone next week for 10 days. I’m going to be completely off the grid,” Jordan said. “Had this happened, and had this continued to spread around and become a giant controversy, I would probably lose YouTube subscribers, I would lose Patreon members.”
Jordan has covered AI in the past and said he wasn’t shocked by the system breaking down. “Everybody’s rushing LLMs to be part of our daily lives [...] But the actual LLM itself is not good. It’s just not what they claim it is. It may never be what they claim it is due to the limitations of how LLMs work and AI works, and despite the promises that are made. It’s just a really bad algorithm for gaining any sort of useful information that you can trust and it’s prioritizing that above journalists to keep the money.”
In the aftermath of the whole thing, Jordan clarified his position on the Israel-Palestine conflict. In a thread on Bluesky, he said he does think Israel is committing a genocide in Gaza and explained why. “Hopefully, somebody sees that before they waste their time to message me to lecture me about genocide,” he said. “Although, now I’m being lectured about genocide from the other side. Now I have skin in it. Now I’m dealing with messages from people defending Israel, telling me that I’m antisemitic.”
This isn’t the first time Google’s AI summary has screwed up the basic facts about someone with a public profile. In July, humorist Dave Barry discovered that Google’s AI summary thought he had died last year after a battle with cancer. Barry is very much alive and detailed his fight to correct the record of his demise in his newsletter. As in Jordan’s case, the AI overview eventually shifted; unlike in Jordan’s case, it changed only after Barry fought with Google’s various automated complaint systems.
When an AI makes mistakes like this, we tend to call it a hallucination. Jordan used the word when he posted the updated summary of his life. “I’ve thought about it the last few days, and that’s giving it so much credit, that it could hallucinate something,” Jordan said. “Generally, it’s not great at scraping data and retrieving it in a way that’s reputable.”
“The vast majority of AI Overviews are factual and we’ve continued to make improvements to both the helpfulness and quality of responses,” a Google spokesperson told 404 Media. “When issues arise—like if our features misinterpret web content or miss some context—we use those examples to improve our systems, and we take action as appropriate under our policies.”
Update: This story has been updated with a statement from Google.