An investigation by The Guardian has revealed that GPT-5.2 is increasingly citing Grokipedia — Elon Musk’s AI-generated encyclopedia — when answering questions on sensitive subjects, including political structures in Iran and the history of the Holocaust.
Key Findings
- The Shift. Unlike Wikipedia, Grokipedia is generated and edited by the Grok neural network rather than by human contributors, a model that has drawn criticism over political bias.
- The Errors. In tests, the model spread misinformation about the British historian Richard Evans and his role in the libel case involving Holocaust denier David Irving.
- The Risk. Security experts warn that this practice leaves chatbots vulnerable to “LLM grooming,” in which malicious actors generate vast amounts of disinformation specifically so that it is ingested into AI training sets. Once absorbed, the false information can persist in a model’s output even after the original source is deleted from the web.
This development highlights a critical challenge in AI governance: ensuring that foundation models do not amplify automated, unverified content.