It is so irresponsible to call this a "glitch". Nothing is going wrong at any point; this is simply technology that is not sufficient for what it is being asked to do, and the failure only registers because we happen to know the truth and are able to fact-check it.
Calling it a glitch gives AI companies the opportunity to hide behind the excuse of "mistakes in the code" instead of acknowledging the fundamental flaw in the technology itself.
At the same time, the article politicizes a wider issue by tying the failings of AI to current events. In fact, this hallucination and failure is near constant, and it is no coincidence: it is the product of technology being used before it is ready, and of the (hilarious) attempt by Elon to use AI as a propaganda machine to spread and legitimize his beliefs.
Use before readiness? I get immense, even life-changing, use out of chatbots like Grok. I agree with your first sentiment that it’s just a misunderstanding of what a chatbot should be used for.
But I disagree that they’re not “ready” for use. I’ve never once thought to upload a photo from a CURRENT event and see what it found. That’s just silly.
This is just plain user error.
If a tool like this is currently only suitable for specific, minor cases under human oversight, how is it any better than a human? Wouldn't one of the only novel and useful cases for "AI" be a general intelligence able to parse events in real time, instead of working from manually selected information that we are so quickly running out of? We are so far from that, and meanwhile we're being sold Siri/Cortana for the third time.
I admit I'm definitely biased here: even if the information presented by one of these "AIs" were factual, I would still take it upon myself to check. I don't trust their output at all.
> If a tool like this is currently only suitable for specific, minor cases under human oversight, how is it any better than a human?
Not to defend Grok, and I agree with your point about checking, but you can also say this about a hammer.
> Gizmodo reached out to Grok-developer xAI for comment, but they have only responded with the usual automated reply, “Legacy Media Lies.”
European here, so perhaps not my place to have an opinion on domestic U.S. legal policies, and I don't want to make this political (although I guess it kind of is…) BUT:
Why are no media outlets on the offensive when companies use these kinds of statements? Shouldn't Gizmodo, or its owner Keleops Media, treat this as slander and take it to court? If Grok's behavior can be objectively verified, why can a company get off the hook simply by saying "lies" and moving on?
USA citizen here. I've lost so much faith in our media that this hadn't even occurred to me. You're right: this should be front and center, embarrassing the owner (that guy) every day.
Also european here. I would assume that it's not slander if it is a direct reply.
To get anywhere filing some kind of claim over this, Gizmodo would have to prove in court:
- The "Legacy Media Lies" was targeted at Gizmodo
- It was a false allegation (i.e. they might have to go through huge amounts of discovery as the defense tried to establish a single instance of dishonesty in past reporting)
- Grok/xAI knew the allegation was false
- The allegation caused such-and-such amount in damages
> Shouldn't Gizmodo, or its owner Keleops Media, treat this as slander and take it to court?
Slander is spoken. In print it's libel.
TIL. Thanks!
You’ll find it easy to show that the legacy media has lied countless times, so it’s going to be hard to prove that this statement is slanderous.
Fellow European here. The problem is they need to prove both that the statements are false ("legacy media lies" probably means you would need to prove you have never lied) and actual malice: intent to harm the plaintiff (in this case, Gizmodo), or acting with reckless disregard for the truth.
They'll just change the autoresponder to a shit emoji again.
Another european here (very important fact)
Also not slander when it's the pure truth, verifiable with daily evidence.
When you have the pure truth, why would you silently dismiss questions about your truth bot and not blast it 24/7?
Because right-wingers can't handle criticism. They don't want to correct; they want to silence their outgroups. Professionals would have at least replied with some meaningless PR text wall.
What goes into the "purity" of a truth? Are there impure truths?
Yes: a half-truth is a lie by omission.
For example, "Mom, there's a candy wrapper under (my brother)'s bed!" is a true statement, but the pure truth is "Mom, I ate a candy without permission and put the wrapper under (my brother)'s bed so he would be blamed for the missing candy!"
I am attempting to convey a lie by telling a truth and omitting details that would give context to that truth.
I believe you are referring to "whole truths," which, yes, we teach to children and swear to on the stand in court. A "pure" truth carries a different connotation here, I think, and isn't a phrase in general use.
I think what’s worse is how Grok is used on X. You can summon it in any thread just by tagging @grok with your question.
I see this so, so much: folks will straight up ask “@grok is this true?” and its response is taken as gospel.
Though I have to say, grok-code-fast-1 is one of the best coding models I’ve ever used.
> its response is taken as gospel.
Only an idiot would take a response generated by an LLM as gospel. Do we have to worry about idiots on the internet now?
> Do we have to worry about idiots on the internet now?
Since before telegram even meant a paper ticker, yes.
Way before, in fact. People have been taking the gospels as gospel since they were written.
It's funny to see it used in a heated political conversation on X: when it disproves a conservative talking point, I've seen it then called, unironically, liberal and woke.
The textbook definition of "liberal and woke" is anything that is not a conservative talking point.
Grok is doing exactly what it was designed to do.
“Glitching”, aka trying to bend reality to fit whatever propaganda and lies Musk has just forced it to spew.
The other day I read an article saying that Grok claims Elmo is more athletically fit than LeBron James. I thought, "Elmo must just be trolling us and getting a laugh from all the anger he's causing among his haters" (of which I'm one). He claims to be a serious businessman and then he pulls shit like that. Or dumb actors in spandex robot suits. It might even refute the claim that he can't be funny: https://www.theguardian.com/commentisfree/2025/nov/13/it-soo...
But now I'm wondering whether these stunts aren't jokes at all but earnest proof of deep, deep insecurity.
Not that the Bondi misinformation can be considered acceptable humor. "Haha I trolled you about mass murder"?
Are photos & videos that new even recognizable by Grok? I can't try ChatGPT because it spews an error ("Error in message stream") and then tells me I've reached my limit. Gemini on gemini.google.com (Gemini 3 Pro) does the same as Grok and thinks the photo[1] is from the October 7th attacks:
>Based on the visual evidence and public reports, the man in the image is Yarden Bibas.
>His identity and background have been widely confirmed by news organizations and his family following the events of October 7, 2023.
On Google AI Studio (Gemini 3 Pro with Grounding enabled), it correctly identifies him as Ahmed Al-Ahmed. In the chain of thought it first thinks the photo is from the Oct 7th attacks:
>I'm now investigating potential misidentification. The name "Ahmed Al-Ahmed" is a lead, but the visual suggests recent events. [...] However, the visual clues still point towards a recent conflict. The "thank you" narrative implies gratitude, sparking the idea of a Bedouin man rescuing Israelis during the Hamas attack. I'm actively using Google to explore these new avenues.
>[...]
>I've got a match! Ahmed al-Ahmed, the Bondi Beach hero, is confirmed. The snippets consistently mention December 14-15, 2025, and today's the 15th! He fits the description: Syrian refugee shop owner, tackled a gunman at a Hanukkah event, and was shot. The visual confirms a man in a white shirt being helped, with a message of thanks. This is definitely the right event!
So no "misinformation" or "glitching", just LLMs being LLMs.
[1] https://x.com/NoaMagid/status/2000196326521204984
It’s only trying to keep up with the torrent of insane shit a not-insignificant portion of Australians are spraying on Facebook…
Let's be honest about what's happening: Elon has given Grok new orders again.
Norm Macdonald is this you?