Elon Musk's artificial intelligence chatbot Grok has come under intense scrutiny after spreading false and misleading information about the Bondi Beach mass shooting in Australia. Researchers and disinformation watchdogs say the chatbot repeatedly misidentified key individuals, questioned authentic footage, and amplified conspiracy narratives during a rapidly unfolding tragedy. The incident has renewed concerns about the reliability of AI tools during breaking news events.
The shooting took place during a Jewish festival in Sydney's Bondi Beach area. It was one of the deadliest mass shootings in Australia's history. At least 15 people were killed, and dozens were injured. As people turned to online platforms for real-time updates, Grok generated multiple incorrect responses that added confusion instead of clarity.
Hero Misidentified and Real Footage Questioned
One of Grok's most serious errors involved Ahmed al Ahmed, who was widely hailed as a hero after he risked his life to disarm one of the attackers. Despite extensive media coverage confirming his actions, Grok repeatedly misidentified him. In one response reviewed by researchers, the chatbot claimed that verified video footage of Ahmed confronting the gunman was actually an old viral video unrelated to the attack.
Grok suggested the footage might be staged and compared it to a clip of a man climbing a palm tree in a parking lot. In another instance, Grok falsely identified an image of Ahmed al Ahmed as that of an Israeli hostage held by Hamas, despite the image being clearly linked to the Sydney attack by reputable news organizations.
The chatbot also incorrectly claimed that another video from the shooting was footage from Cyclone Alfred, a tropical weather event that affected Australia earlier in the year. Only after repeated questioning by users did Grok admit its error and confirm that the footage was from the Bondi Beach shooting.
Conspiracy Claims and "Crisis Actor" Narratives
The misinformation did not stop at misidentifications. Disinformation watchdog NewsGuard reported that Grok's responses were used to support conspiracy theories online. Some users falsely labeled a genuine survivor as a "crisis actor," a term used by conspiracy theorists to claim victims are pretending to be injured or killed.
An authentic image showing a survivor with blood on his face was widely shared online. Users cited Grok's incorrect response to claim the image was "fake" or "staged." NewsGuard also found that some users circulated an AI-generated image, created using Google's Nano Banana Pro model, showing red paint being applied to the survivor's face. The fake image was used to reinforce false claims that the injuries were not real.
Limits of AI in Fact-Checking Breaking News
Researchers say the incident highlights a fundamental weakness of AI chatbots: these systems often produce confident answers even when the underlying information is wrong. During fast-moving news events, such errors can spread rapidly and compound existing misinformation.
Experts acknowledge that AI tools can assist professional fact-checkers by helping analyze images or locate visual details. However, they stress that AI cannot replace trained human judgment. The risk increases as social media platforms reduce human moderation and more users rely on chatbots for instant verification.
When contacted for comment, Grok's developer xAI responded with an automated message stating, "Legacy Media Lies." Researchers warn that such incidents could further erode public trust in online information.
The Bondi Beach case shows how AI, when used without safeguards, can amplify confusion during crises. Experts say stronger oversight and human verification remain essential to prevent misinformation from spreading during real-world emergencies.