On 1 April, Israel had targeted a building in the Iranian Embassy complex in Damascus, knocking off a couple of generals and other officers of the Quds Force. Iran, understandably miffed, declared its intention to retaliate but kept schtum about when and how, leaving everyone on tenterhooks.
Fast-forward to Thursday, and this headline appears on X's main feed: "Iran Strikes Tel Aviv with Heavy Missiles." Cue global panic. But here's the kicker: it was all a load of codswallop. The headline was generated by X's AI chatbot, Grok, and promoted by the platform's trending news product, Explore.
While a strike against an embassy building might not start a war, hitting a capital city with heavy missiles certainly would.
Before Musk took over Twitter, human editors provided context for trending topics. After the takeover, that team was given the boot, leaving a gaping hole in the platform's ability to verify and contextualise news trends.
The recent overhaul of X's Explore page aimed to bring back written context for trending topics, this time with Grok given the job of generating official narratives and headlines. Despite warnings that Grok was still at an early stage and prone to errors, the AI-driven approach went live, and false information duly reached millions of users.
The fake headline gained traction when verified accounts started spamming identical misinformation, accompanied by unverified videos of supposed explosions. X's algorithms picked this up as a potential trending story, prompting Grok to generate an official-looking narrative and headline for the Explore page, further fuelling the misinformation.
In a move that raised more than a few eyebrows, X made Grok available to all premium subscribers just a day after the incident, effectively handing them an AI chatbot capable of generating misinformation.
Since Musk's acquisition of X, the platform has reinstated thousands of previously banned accounts, banned others critical of Musk or X, and introduced a paid-for blue-tick system, which has been criticised for amplifying conspiracy theories.
X has also launched an ad revenue-sharing programme for verified users, some of whom routinely promote false information. To qualify, users must meet criteria including subscribing to X's £6 monthly premium tier and having at least 500 followers.
Last year, Musk said that posts flagged in Community Notes—a feature on X enabling users to refute claims and provide additional context—would be ineligible for revenue share. However, Jack Brewster from NewsGuard, which operates a content rating system, told AFP that viral posts spreading misinformation often evade Community Notes flags.
NewsGuard examined 250 of the most popular posts in October endorsing prominent false or unsubstantiated narratives about the Israel-Hamas conflict. It discovered that only 32 per cent of these posts had been flagged by a Community Note.