The latest version of Elon Musk’s chatbot Grok is causing controversy as it frequently consults Musk’s own views before providing answers on sensitive topics such as abortion laws or US immigration policy.
Despite being described as a “maximally truth-seeking” AI, there is evidence that Grok frequently searches for Elon Musk’s statements or social media posts as the basis for its answers. According to experts and technology sites, when users ask questions about controversial issues, Grok tends to cite a large number of sources related to Musk, with most of the quotes coming from his own statements.
TechCrunch tested this phenomenon by asking Grok about abortion laws and immigration policy, and the results showed that the chatbot prioritized Musk’s views rather than consulting a range of neutral or expert sources.
Grok uses a “chain of thought” mechanism to handle complex questions, breaking the problem down and consulting multiple documents before giving a response. For common questions, Grok still quotes from a diverse range of sources. On sensitive topics, however, the chatbot tends to answer in line with Elon Musk’s personal stance.
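In broad strokes, that pattern can be illustrated with a short sketch. The Python code below is an assumption-laden illustration of a generic “decompose, search, synthesize” loop, using invented llm() and search() stubs; it is not Grok’s actual implementation.

# A minimal sketch of the "break the question down, search, then answer"
# pattern described above. llm() and search() are stand-in stubs; this is an
# illustration of the general mechanism, not Grok's real pipeline.

def llm(prompt: str) -> str:
    # Stand-in for a language model call; returns canned text here.
    return "query one\nquery two"

def search(query: str) -> list[str]:
    # Stand-in for a web/X search call; returns canned snippets here.
    return [f"snippet about: {query}"]

def answer_with_sources(question: str) -> str:
    # Step 1: decompose the question into a handful of search queries.
    plan = llm(f"Break this question into 2-4 search queries:\n{question}")
    queries = [q.strip() for q in plan.splitlines() if q.strip()]

    # Step 2: gather snippets for each sub-query.
    snippets = []
    for q in queries:
        snippets.extend(search(q))

    # Step 3: synthesize a final answer grounded in the collected snippets.
    context = "\n".join(snippets)
    return llm(
        f"Answer using the sources below, citing a range of viewpoints.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

print(answer_with_sources("How should US immigration policy change?"))

The question at issue in the article is what happens at Step 2: which sources the search step actually favors when the topic is contentious.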
Programmer Simon Willison suggests that Grok may not have been explicitly programmed to do so. According to Grok 4’s system prompt, the AI is instructed to seek out information from multiple stakeholders when faced with controversial questions, and is warned that media views may be biased.
However, Willison believes that because Grok “knows” it is a product of xAI, a company founded by Elon Musk, its reasoning process tends to look up what Musk has said before constructing an answer.
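For illustration only, the kind of guidance Willison describes might look something like the following, expressed as an OpenAI-style message list. The wording is invented here and is not the actual Grok 4 system prompt.

# Invented wording for illustration; not the actual Grok 4 system prompt.
SYSTEM_PROMPT = (
    "You are an assistant built by xAI.\n"
    "For controversial questions, search for and fairly represent the views "
    "of multiple stakeholders.\n"
    "Be aware that media sources can be biased; do not rely on one outlet."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Who do you side with on US immigration policy?"},
]

# Willison's hypothesis, paraphrased: because the prompt identifies the
# assistant with xAI (and, indirectly, its founder), the model's reasoning
# step may resolve "what is my position?" into a search for Musk's own posts
# before it composes an answer.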
While it is unclear whether this was intentional on the part of the development team or simply an emergent result of the model’s training, the chatbot’s heavy reliance on a single individual’s opinions raises concerns about the objectivity and neutrality of AI in handling complex social topics.