Last week, Elon Musk flew to the UK to hype up the existential risk posed by artificial intelligence. A couple of days later, he announced that his latest company, xAI, had developed a powerful AI—one with fewer guardrails than the competition.
The AI model, called Grok (a name that means “to understand” in tech circles), “is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!” reads an announcement on the company’s website. “It will also answer spicy questions that are rejected by most other AI systems.”
The announcement does not explain what "spicy" or "rebellious" means in practice, but most commercial AI models refuse to generate sexually explicit, violent, or illegal content, and they are designed to avoid expressing biases picked up from their training data. Without such guardrails, the worry goes, an AI model could help terrorists develop a bomb, or products built on it could discriminate against users based on characteristics such as race, gender, or age.
xAI does not list any contact information on its website, and emails sent to common addresses bounced back. An email sent to the press address for X received an automated response reading, “Busy now, please check back later.”
The xAI announcement says that Grok is built on top of a language model called Grok-1, which has 33 billion parameters. The company says it developed Grok in two months, a relatively short time by industry standards, and claims that a fundamental advantage is the model's "real-time knowledge of the world via the X platform," the platform formerly known as Twitter, which Musk acquired for $44 billion in 2022.
Stella Biderman, an AI researcher with EleutherAI, an open source AI research group, says the claims made in the xAI announcement seem plausible. Biderman suggests that Grok will use what's known as "retrieval-augmented generation" to fold up-to-date information from X into its output. Other cutting-edge language models do the same with search engine results and other sources.
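To illustrate the idea Biderman describes, here is a minimal sketch of retrieval-augmented generation in Python: a retriever picks the documents most relevant to a query and prepends them to the prompt, so the model can draw on information that was not in its training data. The corpus, the keyword-overlap scoring, and the prompt format here are illustrative assumptions, not details of xAI's actual system.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# The corpus, scoring, and prompt format are illustrative
# assumptions, not xAI's actual pipeline.

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, corpus):
    """Prepend retrieved context so the model can cite fresh facts."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Stand-in for a live feed of posts; in Grok's case this would be X.
posts = [
    "Launch of the new rocket is scheduled for Friday.",
    "The conference keynote starts at 9 am tomorrow.",
]
print(build_prompt("When is the rocket launch?", posts))
# The assembled prompt would then be sent to the language model.
```

In a production system the keyword scoring would typically be replaced by embedding-based similarity search, but the shape of the pipeline is the same: retrieve, assemble the prompt, generate.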
Large language models have proven stunningly capable over the past year or so, as highlighted most famously by OpenAI’s groundbreaking chatbot, ChatGPT.
These models feed on huge amounts of text scraped from books and the web, and generate text in response to a prompt. They are typically also given further training by humans to make them less likely to produce offensive, rude, or dangerous output, and more likely to answer questions in ways that seem coherent and plausibly correct, although they remain prone to errors and biases.
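As a concrete illustration of what "generate text in response to a prompt" means, the sketch below uses the open source Hugging Face transformers library with the small public gpt2 model. This is a generic example of running a pretrained language model, not Grok or anything from xAI's stack.

```python
# Sketch of prompting a pretrained language model with the
# Hugging Face transformers library. "gpt2" is a small public
# model chosen purely for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt")
# The model extends the prompt one predicted token at a time.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```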