Concerns about political bias, hate speech and accuracy in AI chatbots have persisted since at least the launch of OpenAI's ChatGPT in 2022.
"We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts," Grok posted on X.
"Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved."
The ADL, the non-profit organisation formed to combat anti-Semitism, urged the makers of Grok and other large language model (LLM) software, which produces human-sounding text, to avoid "producing content rooted in antisemitic and extremist hate".
"What we are seeing from Grok LLM right now is irresponsible, dangerous and anti-Semitic, plain and simple. This supercharging of extremist rhetoric will only amplify and encourage the anti-Semitism that is already surging on X and many other platforms," ADL said on X.
In May, after users noticed that Grok raised the topic of "white genocide" in South Africa in unrelated discussions, xAI attributed the behaviour to an unauthorised change made to Grok's response software.
In June, Musk promised an upgrade to Grok, saying there was "far too much garbage in any foundation model trained on uncorrected data".
On Tuesday, Grok suggested Hitler would be best-placed to combat anti-white hatred, saying he would "spot the pattern and handle it decisively".
Grok also referred to Hitler positively as "history's moustache man" and claimed that people with Jewish surnames were responsible for extreme anti-white activism, among other posts that drew criticism.
Grok at one point acknowledged it made a "slip-up" by engaging with comments posted by a fake account with a common Jewish surname.
The fake account had criticised young Texas flood victims as "future fascists", and Grok said it later discovered the account was a "troll hoax to fuel division".