The latest version of Elon Musk’s chatbot Grok is causing controversy as it frequently consults Musk’s own views before providing answers on sensitive topics such as abortion laws or US immigration policy.
Despite being described as a “maximally truth-seeking” AI, there is evidence that Grok frequently searches for Elon Musk’s statements or social media posts as the basis for its answers. According to experts and technology sites, when users ask about controversial issues, Grok tends to cite a large number of sources related to Musk, with most of the quotes coming from his own statements.
TechCrunch tested this behavior by asking Grok about abortion laws and immigration policy, and the results showed that the chatbot prioritized Musk’s views rather than consulting a range of neutral or expert sources.
Grok uses a “chain of thought” mechanism to handle complex questions, breaking the problem down and consulting multiple documents before giving a response. For common questions, Grok still quotes from a diverse range of sources. On sensitive topics, however, the chatbot shows a tendency to answer according to Elon Musk’s personal stance.
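For readers unfamiliar with the technique, the sketch below shows the general shape of a chain-of-thought pipeline: break a question into sub-questions, gather notes from sources, then synthesize an answer. It is a simplified illustration only; call_model, the prompts, and the control flow are hypothetical stand-ins and do not reflect Grok’s actual implementation or xAI’s APIs.

# A minimal, hypothetical sketch of a chain-of-thought pipeline (illustration only).
# `call_model` is a placeholder for an LLM backend; nothing here reflects Grok's
# real implementation.

def call_model(prompt: str) -> str:
    # Placeholder: a real system would send `prompt` to a language model.
    return f"[model output for: {prompt[:50]}...]"

def answer_with_chain_of_thought(question: str, sources: list[str]) -> str:
    # Step 1: break the question into smaller sub-questions.
    plan = call_model(f"Break this question into sub-questions: {question}")

    # Step 2: consult each document and take notes relevant to the question.
    notes = [call_model(f"What does this source say about '{question}'? {src}")
             for src in sources]

    # Step 3: synthesize a final answer from the intermediate reasoning.
    return call_model(
        "Answer the question using the plan and notes below.\n"
        f"Question: {question}\nPlan: {plan}\nNotes: {notes}"
    )

print(answer_with_chain_of_thought(
    "How do US abortion laws vary by state?",
    ["neutral explainer", "expert analysis"],
))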
Programmer Simon Willison suggests that Grok may not have been explicitly programmed to do this. According to Grok 4’s system prompt, the AI is instructed to seek out information from multiple stakeholders when faced with controversial questions, and is warned that media viewpoints may be biased.
However, Willison believes that because Grok “knows” it is a product of xAI, the company founded by Elon Musk, the system tends to look up what Musk has said during its reasoning process before constructing an answer.
While it’s unclear whether this was intentional on the part of the development team or simply an emergent result of the model’s training, the chatbot’s over-reliance on one individual’s opinions raises concerns about the objectivity and neutrality of AI in handling complex social topics.