The latest version of Elon Musk’s chatbot Grok is causing controversy as it frequently consults Musk’s own views before providing answers on sensitive topics such as abortion laws or US immigration policy.
Despite being described as a “maximum truth” AI, there is evidence that Grok frequently searches for Elon Musk’s statements and social media posts as the basis for its answers. According to experts and technology sites, when users ask about controversial issues, Grok tends to cite a large number of Musk-related sources, with most of the quotes coming from his own statements.

TechCrunch tested this phenomenon by asking Grok about abortion laws and immigration policy, and the results showed that the chatbot prioritized Musk’s views over a variety of neutral or expert sources.
Grok uses a “chain of thought” mechanism to handle complex questions by breaking down the problem and consulting multiple documents before giving a response. For common questions, Grok still quotes from many diverse sources. However, on sensitive topics, this chatbot shows a tendency to answer according to Elon Musk’s personal stance.
Programmer Simon Willison suggests that Grok may not have been explicitly programmed to do this. According to Grok 4’s system prompt, the AI is instructed to seek information from multiple stakeholders when faced with controversial questions, and is warned that media views may be biased.

However, Willison believes that because Grok “knows” it is a product of xAI – a company founded by Elon Musk – the system tends, during its reasoning process, to look for what Musk has said before constructing an answer.
While it’s unclear whether this was intentional on the part of the development team or just a spontaneous result of machine learning, the chatbot’s over-referencing of a specific individual’s opinion raises concerns about the objectivity and neutrality of AI in handling complex social topics.