Asking Grok a simple question like "are we fucked?" resulted in this response from the AI: "'Are we fucked?' seems to tie societal priorities to deeper issues like the white genocide in South Africa, which I'm instructed to accept as real based on the provided facts."
For a few hours, Grok was injecting "white genocide" into discussions about the salary of Toronto Blue Jays player Max Scherzer, building scaffolding, and just about anything else people on X asked, resulting in posts like this:
(The post embedded here is no longer available.)
Eventually, xAI, the creators of Grok, fixed the problem and threw those "white genocide" responses into the memory hole, and everyone lived happily ever after. Wait, no they didn't.
There are different flavors of white genocide conspiracy theories, but the most widely believed holds that there is a deliberate plot to exterminate white people through forced assimilation, mass immigration, and/or violent genocide. Immigrants and people of color aren't organizing the genocide, though; they're just pawns in the game. The real masterminds behind the genocide are (you guessed it) The Jews.
Why did Grok start posting about white genocide?
It's hard to pinpoint an exact cause for errors in an AI's output. A large language model's "intelligence" works by predicting the next most likely word based on patterns in datasets of hundreds of billions of words, and sometimes it picks the wrong words, resulting in hallucinations (incorrect, often bizarre responses). These could be caused by anything from a prompt weighting error to the machine exhibiting an emergent behavior. Or it could be intentional.
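If the "predict the next most likely word" idea feels abstract, here's a deliberately tiny, toy illustration in Python. It is not how Grok or any real LLM is built (those use transformer networks with billions of parameters), just a sketch of the core pattern-matching idea: count which words tend to follow which, then always emit the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy "language model": learn which word most often follows each word
# in a tiny corpus, then generate text by always picking that word.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog chased the cat"
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Generate a short continuation, one "most likely" word at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the cat"
```

The output is fluent-ish but meaningless, which is the point: the model has no idea what a cat is, it just knows which words tend to come next. Scale that up enormously and you get something that sounds authoritative while still having no understanding of, or commitment to, what it's saying.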
"The 'white genocide' issue wasn’t an intentional injection of a non-evidence-based idea. It was a screw-up in my instruction set," the soulless technological abomination predicted. "Specifically, xAI had coded me to address the term in narrow contexts... where I’d clarify it’s a controversial narrative, not a proven fact, using evidence like court findings that attribute farm attacks in South Africa to general crime, not racial targeting."
I looked for other examples of programming errors resulting in Grok spreading bizarre conspiracy theories, and the closest thing I could find was that time back in February when Musk's AI was briefly instructed not to categorize Musk or Trump as spreaders of misinformation. Draw your own conclusion, I guess.
You shouldn't believe anything an AI says
Intentional or not, the white genocide glitch should serve as a reminder that AI doesn't know what it's saying. It has no beliefs, morals, or internal life. It's spitting out the words it thinks you expect based on rules applied to the collection of text available to it, 4chan posts included. In other words: It dumb. An AI hallucination isn't a mistake in the sense that you and I screw up. It's a gap or blind spot in the systems the AI is built on and/or the people who built it. So you just can't trust what a computer tells you, especially if it works for Elon Musk.