xAI Says "White Genocide" Glitch Stemmed from Malicious Tampering
This week, Grok, the AI chatbot from Elon Musk's xAI, drew attention for producing answers that referenced the controversial "white genocide" theory about South Africa. xAI posted on Thursday that it had traced the problem to unauthorized modifications to the system and had fixed the relevant vulnerabilities. In a statement on the X platform, xAI said it had thoroughly investigated and reversed the "unauthorized modifications" to Grok's underlying technology, which had caused answers that "violate xAI's internal policies and core values." xAI said, "In this incident, our existing code review process for prompt modifications was bypassed. We will add additional checks to ensure that xAI employees cannot modify prompts without review."
—— Sina Finance, Bloomberg, CNBC
via Fengxiangqi Reference Express - Telegram Channel