Manchester United and Liverpool have effectively pressured Elon Musk's social platform X to remove content generated by Grok that has been labeled as "disgusting and reckless."
Over the weekend, it emerged that several anonymous accounts had prompted Grok, the artificial intelligence chatbot created by xAI (another Musk-owned enterprise), to generate numerous social media posts deliberately designed to upset supporters of Manchester United and Liverpool by referencing disasters connected to both clubs.
The 1958 Munich aviation tragedy, the deadly crowd disaster at Hillsborough in 1989, and the heartbreaking death of Liverpool striker Diogo Jota last summer were all featured in posts that prompted formal objections from both Premier League powerhouses, according to The Athletic.
By Sunday evening, the controversial posts had been deleted from X.
Tragedy Chanting Takes Depressing New Form

The fact that individuals would ask an AI chatbot to "genuinely attempt to upset" supporters of opposing teams represents merely the unfortunate progression of a longstanding phenomenon.
"Tragedy chanting" represents a vile practice where one group of fans ridicules past catastrophes that have impacted their opponents. Malicious songs whispered by spectators in stadiums and hateful messages written around venues have existed for generations, with social media now providing a digital platform for this reprehensible conduct.
As the two most decorated clubs in English football history, both of which have suffered significant tragedies, Manchester United and Liverpool frequently face this type of abuse. In March 2023, the managers of both teams at the time, Erik ten Hag and Jürgen Klopp, were moved to issue a joint statement addressing the issue.
"Using loss of life—regarding any catastrophe—to gain advantage is completely unacceptable, and this behavior must cease," Ten Hag stated. Klopp contributed: "We welcome the energy, we welcome passionate support, and we welcome an electrifying atmosphere. What we cannot tolerate is anything that crosses the line, particularly chants that have no place in football."
These declarations failed to achieve their intended impact.
As late as February, Nottingham Forest found it necessary to caution their supporters against tragedy chanting before Liverpool's arrival. This year, a Liverpool fan received a three-year ban from all football matches after being caught chanting about the deaths of two Leeds United supporters.
Social media platforms enable users to hide behind anonymity while freely targeting rival supporters, teams, and athletes without facing consequences. Nevertheless, the U.K. government has implemented measures to prevent AI systems like Grok from facilitating such behavior.
U.K. Government's Response to Grok Controversy

Liverpool West Derby MP Ian Byrne condemned the posts as "horrific and entirely unacceptable," forecasting that they "will shock and disgust the overwhelming majority of supporters."
"It's disturbing and heartbreaking that such hateful content can be produced by Grok on such an influential platform," Byrne informed The Athletic.
The British parliamentarian also challenged "how this situation was permitted," emphasizing: "Tech companies bear responsibility for ensuring their platforms don't create or spread abuse."
Under the Online Safety Act, passed in 2023, sending "threatening communications" constitutes a criminal offense.
"These posts are revolting and negligent," declared a representative from the Department for Science, Innovation and Technology. "They contradict British principles and morality.
"AI platforms including chatbots that allow users to distribute content fall under Online Safety Act regulations and must block illegal material including hateful and abusive content on their platforms.
"We will maintain decisive action when AI services fail to provide adequately safe user environments."