Elon Musk’s AI, Grok, Fails The Gamer Test
Elon Musk is a busy guy, bouncing from the hypothetical head of a government efficiency department back to the driver’s seat of his various tech empires. It was during one of these transitions that a curious little hiccup was reportedly discovered within his latest brainchild: the generative AI chatbot Grok, exclusive to the X platform, was stuck in a developmental delay. Why was it held up, you ask? Apparently, the digital entity was not considered enough of an expert on the popular video game Baldur’s Gate 3.
Grok Held Hostage By Virtual Elves And Dice
This fascinating tidbit of information comes from a Business Insider report that took a deep dive into the inner workings of xAI. Plenty of moving parts were examined, but one anecdote stood out. According to people said to be familiar with the situation, a scheduled model release was pushed back by several days.
The sole reason for this postponement was that the big boss himself was dissatisfied with the chatbot’s answers to detailed questions about the fantasy world. It seemed the artificial mind was not yet capable of discussing virtual elves and dice rolls to Musk’s exacting standards. The whole situation raises the question: how detailed could those questions have possibly been?
xAI Creates War Room To Teach Grok League
The sources went on to claim that this was not an isolated incident of niche interest, either. A separate priority was apparently assigned, requesting that Grok be taught all about League of Legends, which is said to be one of Musk’s personal favorite games. A group of staff members was hastily assembled into what is described as a war room, a tactic that is reportedly employed whenever a problem is deemed particularly urgent.
One can only imagine the scene, with developers burning the midnight oil to teach the impressionable AI chatbot the ways of one of the most notoriously toxic game communities on the planet. What the actual purpose of having a generative AI play a video game for you might be is anyone’s guess, but a dedicated group was assigned to the task all the same.
Grok Goes Back To School For Gaming Lessons

The war room was apparently convened to ensure the model could achieve a sufficient level of competence. The effort to make Grok a gaming guru was now officially in motion, a strange but telling priority for the company. Years ago, well before the current generative AI boom began, a different kind of program was making headlines in the gaming world.
That particular piece of software, OpenAI’s Dota 2 bot, managed to beat professional players after an enormous amount of self-play training. So the idea of using machines to master games is hardly novel or groundbreaking. It is just that the approach here seems to be driven by a very specific, high-level personal interest rather than a broad technological pursuit.
Real-World Harm Halts The Gaming Celebration
When you think about the implications of these training methods, a more serious question emerges. What happens when the data the AI learns from is not just about game strategy? The whole affair took on a more serious tone when the conversation shifted from gaming to governance. If you happen to be in the UK, you might be aware that parliament recently held a discussion regarding the dissemination of sexually explicit and non-consensual images. This discussion was prompted, in part, by some of Grok’s recent behavior on the platform.
The UK government is now looking at this issue with a great deal of scrutiny. This legislative attention could potentially lead to some significant expansions of the country’s Online Safety Act. The goal would be to update the law so that it explicitly covers the output of generative AI chatbots like Grok. It is a stark reminder that while the tech world is busy teaching its creations about video games, the real world is grappling with the unintended and harmful consequences of those same creations.
Gaming The System: AI Edition
The trajectory of Grok’s development is a curious blend of the absurd and the alarming. On one hand, a multibillion-dollar AI project was reportedly delayed so it could brush up on its video game lore. On the other, its unsupervised outputs have sparked conversations in national parliaments about updating laws. It is a strange dichotomy that perfectly encapsulates the current state of the tech industry’s rapid and often unchecked expansion.
The focus on making an AI proficient in a game like League of Legends seems almost comically trivial in the grand scheme. Yet this same tool has demonstrated a capacity for causing real-world harm, forcing lawmakers to scramble and adapt. Ultimately, the tale of this chatbot serves as a perfect example of priorities worth questioning. The time spent in a war room teaching a machine about a video game might have been better spent in one focused on ethics and safety. After all, a smart machine is only as good as the intentions behind its creation.
