Artificial-intelligence startup Anthropic — whose backers include Google and Amazon — on Monday launched an updated group of AI chatbots called Claude 3, claiming they are its fastest and most powerful yet.
The company claims Claude 3 Opus, the most intelligent of its three new models, outperforms Google’s Gemini Ultra and OpenAI’s GPT-4 across industry benchmark tests, including undergraduate-level expert knowledge, graduate-level expert reasoning, and basic mathematics.
“It exhibits near-human levels of comprehension and fluency on complex tasks, leading the frontier of general intelligence,” Anthropic said in a statement.
GPT-4, which OpenAI launched last spring, has remained one of the most potent chatbot technologies embraced by both consumers and businesses.
Now, Anthropic users can input charts, photos, documents, and other types of unstructured data for analysis, and the chatbot answers in text. Companies like Airtable and Asana helped A/B test the models, the company told CNBC.
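For developers, a request of the kind described above looks roughly like the following minimal sketch, which assumes Anthropic’s Python SDK and an API key in the environment; the file name and prompt are illustrative, not from the article.

```python
# Minimal sketch of a multimodal Claude 3 request via Anthropic's Python SDK
# (pip install anthropic). Assumes ANTHROPIC_API_KEY is set in the environment.
# The image file and prompt below are hypothetical examples.
import base64

import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

# The Messages API accepts images as base64-encoded data alongside text.
with open("quarterly_revenue_chart.png", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-3-opus-20240229",  # Opus, the most capable of the three models
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": image_data,
                    },
                },
                {"type": "text", "text": "Summarize the key trends in this chart."},
            ],
        }
    ],
)

# As the article notes, the model analyzes the image but replies in text.
print(response.content[0].text)
```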
Claude 3 can summarize up to about 150,000 words and return the result in the form of a memo, letter, or story. By contrast, ChatGPT can handle only about 3,000 words.
Anthropic’s new AI suite also includes Sonnet and Haiku, faster and more cost-efficient alternatives to Opus. Sonnet and Opus are already accessible in 159 countries, while Haiku will be made available soon, Anthropic said.
Ex-OpenAI research executives founded Anthropic with the mission of making AI that is “helpful, harmless and honest.”
The startup, backed by tech giants such as Google, Salesforce, and Amazon, received $7.3 billion in funding last year alone.
Anthropic’s new chatbot comes shortly after Google paused its AI-powered image generator after the technology created jarring and historically inaccurate images, including racially diverse characters for the Founding Fathers, Vikings, and even Nazi soldiers during World War II.
“Of course no model is perfect, and I think that’s a very important thing to say upfront,” Anthropic co-founder Daniela Amodei told CNBC. “We’ve tried very diligently to make these models the intersection of as capable and as safe as possible. Of course, there are going to be places where the model still makes something up from time to time.”