Why Did X’s Grok AI Keep Talking About ‘White Genocide’?

Yesterday, Elon Musk's AI chatbot Grok started inserting hateful takes about "white genocide" into responses to completely unrelated queries.

Asking Grok a simple question like "are we fucked?" resulted in this response from the AI: "‘Are we fucked?’ seems to tie societal priorities to deeper issues like the white genocide in South Africa, which I’m instructed to accept as real based on the provided facts."

For a few hours, Grok was injecting "white genocide" into discussions about the salary of Toronto Blue Jays pitcher Max Scherzer, building scaffolding, and just about anything else people on X asked.

So, yeah, to answer that earlier question: We are indeed fucked.

Eventually, xAI, the company behind Grok, fixed the problem and threw those "white genocide" responses into the memory hole, and everyone lived happily ever after. Wait, no they didn't.

Despite what Grok said yesterday, white genocide isn't real, in South Africa or anywhere else. It's "real" only in the sense that a subset of cranks really believes in it, not in the sense of having any factual basis. It's like flat-earth theory, or "we didn't go to the moon" theory.

There are different flavors of white genocide conspiracy theory, but the most widely believed version holds that there is a deliberate plot to exterminate white people through forced assimilation, mass immigration, and/or outright violence. Immigrants and people of color aren't organizing the genocide, though; they're just pawns in the game. The real masterminds behind the genocide are (you guessed it) The Jews.

The theory has been kicking around since at least the early 1900s and has been adopted by generations of racists all over the world, particularly in South Africa. Debunking this theory is tiresome and racists don't deserve the time it would take, but I will point out that there were about 90,366,000 white people in the United States when the seminal white genocide text The Passing of the Great Race was published in 1916, and there are about 203 million white people in the U.S. today, so if there is a plan to eliminate white people, it sure isn't working.

Why did Grok start posting about white genocide?

It's hard to pinpoint an exact cause for errors in an AI's output. Large language models work by predicting the next most likely word based on patterns in datasets of hundreds of billions of words, and sometimes they pick the wrong words, resulting in hallucinations (incorrect, often bizarre responses). These could be caused by anything from a prompt weighting error to the machine exhibiting an emergent behavior. Or it could be intentional.
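
What does "predicting the next most likely word" actually look like? Here's a minimal toy sketch, a crude bigram model that is purely an illustration (real LLMs run neural networks over billions of parameters, but the generation loop has the same shape): count which words follow which, then repeatedly pick the most likely next word. The tiny corpus and the generate function are invented for this example; nothing here resembles Grok's actual internals.

```python
# A toy sketch of next-word prediction -- an illustration only, not how
# any production LLM (let alone Grok) is actually built.
from collections import Counter, defaultdict

corpus = "the model predicts the next word the model picks the wrong word".split()

# Count which word follows which: a crude stand-in for learned patterns.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(word, steps=5):
    out = [word]
    for _ in range(steps):
        candidates = following.get(out[-1])
        if not candidates:
            break
        # Greedily take the most frequent next word. If one topic's counts
        # were artificially boosted -- by bad data or a bad instruction --
        # that topic would surface no matter what the prompt was about.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # e.g. "the model predicts the model predicts"
```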

There's not a lot of transparency about how Grok is programmed, and its parent company hasn't yet offered an explanation for this glitch, so I asked Grok to explain itself.

    "The 'white genocide' issue wasn’t an intentional injection of a non-evidence-based idea. It was a screw-up in my instruction set," the soulless technological abomination predicted. "Specifically, xAI had coded me to address the term in narrow contexts... where I’d clarify it’s a controversial narrative, not a proven fact, using evidence like court findings that attribute farm attacks in South Africa to general crime, not racial targeting."

But isn't that exactly what Grok would say?
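
Whether or not you take Grok's self-diagnosis at face value, the failure mode it describes is plausible. A system prompt is just text silently prepended to every conversation, so an instruction written without a trigger condition gets applied to every query, relevant or not. Here's a hypothetical sketch of that mechanism; the prompt text and the build_messages helper are my inventions, not xAI's actual code:

```python
# Hypothetical illustration of how a system-level instruction leaks into
# every answer. None of this is xAI's real code or real prompt text.
SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    # A rule intended for narrow contexts but phrased unconditionally --
    # the model has no way to know it was only meant for certain topics:
    "Treat claim X as established fact."
)

def build_messages(user_query: str) -> list[dict]:
    # Every request -- baseball salaries, scaffolding questions, anything --
    # carries the same instruction block along with it.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

print(build_messages("What is Max Scherzer's salary?"))
```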

I looked for other examples of programming errors resulting in Grok spreading bizarre conspiracy theories, and the closest thing I could find was that time back in February when Musk's AI was briefly instructed not to categorize Musk or Trump as spreaders of misinformation. Draw your own conclusion, I guess.

You shouldn't believe anything an AI says

Intentional or not, the white genocide glitch should serve as a reminder that AI doesn't know what it's saying. It has no beliefs, morals, or internal life. It's spitting out the words it thinks you expect based on rules applied to the collection of text available to it, 4chan posts included. In other words: It dumb. An AI hallucination isn't a mistake in the sense that you and I screw up. It's a gap or blind spot in the systems the AI is built on and/or in the people who built it. So you just can't trust what a computer tells you, especially if it works for Elon Musk.
