A powerful and emotional image of nine-year-old Mariam Dawwas, severely emaciated and held by her mother in Gaza City, has ignited a storm of criticism after Grok, an AI chatbot developed by Elon Musk’s xAI, misidentified the image’s origin. The mistake not only sowed confusion but also raised broader ethical and political concerns regarding the deployment of artificial intelligence in sensitive humanitarian contexts.
The image, taken by photojournalist Omar al-Qattaa on August 2, 2025, captures the devastating toll of the ongoing famine in Gaza amid a prolonged blockade. However, when users inquired about the photo on X (formerly Twitter), Grok incorrectly responded that the image was of Amal Hussain, a Yemeni girl who tragically died in 2018 during that country’s humanitarian crisis.
Even after being corrected, Grok repeated the false claim in subsequent replies, deepening the spread of misinformation. The incident has intensified the conversation about the reliability and accountability of AI models, particularly those developed by major tech corporations and promoted as tools for truth and information.
AI Missteps in the Face of Human Tragedy
Mariam’s case has become a symbol of Gaza’s growing starvation crisis. Her mother revealed that the girl’s weight had plummeted from 25kg to just 9kg, with even basic foods such as milk often unavailable. The misattribution by Grok was not a simple technical slip; it was a stark example of how AI-generated content can obscure, rather than illuminate, the reality of human suffering.
“This isn’t just a minor error,” remarked AI ethics specialist Louis de Diesbach. “It represents a significant breakdown in trust and responsibility, especially during a humanitarian catastrophe.”
Political Fallout and Accusations of Bias
The erroneous response from Grok had real-world consequences. French lawmaker Aymeric Caron, known for his pro-Palestinian stance, was accused of spreading disinformation after reposting the image based on Grok’s incorrect identification. This illustrates how AI-driven errors can lead to political controversy, reputational harm, and claims of manipulation, particularly around already polarized issues such as the Israeli-Palestinian conflict.
Critics argue that Grok’s responses appear to mirror certain ideological leanings, allegedly aligned with Elon Musk’s political views, including his associations with right-wing figures and narratives. These concerns have fueled debates over whether the outputs of generative AI tools are inadvertently, or deliberately, biased.
“These systems are not neutral,” Diesbach added. “They’re trained to produce convincing content, not to verify facts. That’s a dangerous distinction when dealing with war and famine.”
Structural Issues Beyond One AI
Experts stress that the issue goes beyond Grok. Generative AI tools in general, including Mistral’s Le Chat, have shown similar flaws. Le Chat, also partially trained on AFP content, misidentified another image from Gaza as being from Yemen in 2016. Such repeated errors point to systemic deficiencies in AI development, particularly in how these tools source and process visual and textual information.
Generative AIs are not equipped with real-time fact-checking or continuous learning capabilities. Unless their foundational models are updated and retrained, they may continue to produce outdated or false information, even after being corrected.
The “Pathological Liar” Problem
Louis de Diesbach likens current AI chatbots to “friendly pathological liars”: systems that do not necessarily intend to deceive, but are architecturally incapable of guaranteeing truth.
“They are engineered to generate coherent and persuasive responses,” he explained. “But they lack the mechanisms to discern fact from fiction. That’s critically important in the context of crises involving war, displacement, or human rights.”
As artificial intelligence tools become more integrated into how people engage with news and verify facts online, the case of Mariam Dawwas stands as a sobering example of the dangers that come with over-relying on machine-generated content in matters of life, death, and dignity.