Anthropic Releases a New Version of Its ChatGPT Rival, Claude (bloomberg.com)
Anthropic, an artificial intelligence startup positioning itself as the builder of a safer kind of chatbot, has released a new version of its AI bot, named Claude. From a report: Anthropic said that Claude 2 is available to anyone in the US or UK online at claude.ai, and businesses can access it via an application programming interface. The new release on Tuesday comes several months after Anthropic began offering an earlier version of Claude to businesses that wanted to add it to their products. Previously, the bot was tested by a handful of companies including Quora, which built it into an app called Poe that lets users ask questions.
Like its predecessor, Claude 2 is built atop a large language model and can be used for written tasks like summarizing, searching, answering questions and coding. Both models can currently take in large chunks of text -- a user can ask it to summarize a book, for instance -- though Claude 2 can generate longer responses than its predecessor. Responses can reach up to about 3,000 words, according to data provided by the company. Claude 2 will also offer more accurate responses on some topics, such as coding and grade-school-level math, the company said. Anthropic's goal has been for Claude to be less susceptible than other chatbots to manipulation.
Where ChatGPT fails me. (Score:1)
Re: (Score:2)
My experience is that it wants to please so desperately that it will never tell me something isn't possible -- instead it will just make up an answer that is absolutely and verifiably false.
I was impressed a few times because it saved me from the tedium of sifting through search results. But now I trust it so little that I almost never use it anymore.
Nope (Score:3)
You have to log in to use it. No thanks.
Re: (Score:2)
That's true of ChatGPT as well.
Re: (Score:2)
Re: (Score:2)
Even with a throw away e-mail address? I just don't want to give out my phone # like OpenAI requires. :(
How are people summarizing books? (Score:2)
I keep seeing "summarize a book" as the generic use-case for AI. But they don't have enough memory to import that number of tokens. AFAIK you have to summarize a 4k section, then another, then another. Then summarize those. That doesn't seem like a very good approach. What am I not understanding?
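The approach described above (summarize each chunk, then summarize the summaries) can be sketched as follows. This is a minimal illustration, not anyone's actual pipeline: `summarize()` is a stand-in for a real LLM call, and the 4,000-character chunk size is just an assumed stand-in for a model's context limit.

```python
def chunk(text, max_chars):
    """Split text into pieces of at most max_chars characters."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize(text):
    """Stand-in for an LLM call; here it just keeps the first sentence."""
    return text.split(".")[0] + "."

def summarize_book(text, max_chars=4000):
    """Hierarchical ("map-reduce") summarization: summarize each chunk,
    then recursively summarize the concatenated summaries until the
    result fits in one context window."""
    parts = [summarize(c) for c in chunk(text, max_chars)]
    combined = " ".join(parts)
    if len(combined) > max_chars:
        return summarize_book(combined, max_chars)
    return summarize(combined)
```

The known weakness, as the comment suggests, is that each level of summarization discards detail, so cross-chapter connections can get lost before the final pass ever sees them.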
Re: (Score:2)
Going beyond 1M will probably require a change in asymptotic complexity, but people are already working on it [together.xyz].
Re: (Score:2)
"Anthropic does have a 100k version of their model. That's not quite a book, but it's closer!"
a few 'not quite books'.
"The Great Gatsby" by F. Scott Fitzgerald: This classic novel has around 96,000 characters.
"To Kill a Mockingbird" by Harper Lee: Lee's renowned novel contains approximately 92,000 characters.
"1984" by George Orwell: Orwell's dystopian masterpiece has roughly 88,000 characters.
"The Catcher in the Rye" by J.D. Salinger: Salinger's coming-of-age novel consists of approximately 85,000 characters.
Re: (Score:2)
Thanks for the numbers! I was wondering about that. Bonus: When talking about AI, the unit is "tokens," not characters. So 100k tokens might cover all of these books and the summary response.
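A quick back-of-the-envelope conversion, taking the character counts quoted above at face value and assuming the commonly cited rule of thumb of roughly 4 characters per token for English text (tokenizers vary, so this is only an estimate):

```python
# Assumed heuristic: ~4 characters per token for English text.
def estimate_tokens(char_count, chars_per_token=4):
    return char_count // chars_per_token

# The four novels listed above, at the character counts quoted:
books = {
    "The Great Gatsby": 96_000,
    "To Kill a Mockingbird": 92_000,
    "1984": 88_000,
    "The Catcher in the Rye": 85_000,
}
total = sum(estimate_tokens(c) for c in books.values())
# Each book comes out to roughly 21k-24k tokens at these counts,
# so a 100k-token context would fit any one of them with room to spare.
```

Of course, if the quoted character counts are off (which they may well be, since full novels typically run into the hundreds of thousands of characters), the token estimates scale with them.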
Re: (Score:2)
But still, it's a good point, and I guess I am too used to modern doorstoppers.
Re: (Score:2)
+1 informative
+1 insightful snark
Re: (Score:3)
The approach you describe is called sequential decomposition, and it is, AFAIK, the state of the art -- and surprisingly good.
Humans do this too. I'm listening to a time travel book right now and, while I'm enjoying it very much, I couldn't tell you specific details about the book from more than a few minutes ago. I can give you the plot summary easily, but if you asked me the shape of the earring the protagonist* tried to give the hypnotist*, I just couldn't tell you. I remember she dropped it when they had the fight.
I want better, not safer (Score:5, Insightful)
ChatGPT is already *too* safe. I don't want safer chatbots. I want frank, honest, direct chatbots. If we ask "What is the political impact of a nuclear explosion on Moscow?" we don't need a chatbot that tells us that nuclear weapons are dangerous and it would never recommend doing something like that. That isn't helpful. When we need a really biting yo-mama joke to use against our best friend, we don't need ChatGPT to tell us that insulting people's mothers is not politically correct. Stop crippling the dang things.
Microsoft Excel doesn't refuse to perform calculations because the column header says "Deaths per capita" and it thinks killing is bad. We don't need a world full of hammers that refuse to swing because the hammer thinks we might hit our thumbs.
Re: (Score:1)
Re: (Score:2)
I debated religious topics with ChatGPT, taking the atheistic view. The chatbot played softball, taking a moderate approach and only reluctantly admitting that the Bible makes little sense in some places. It did not want to offend. I wasn't looking for an opinion, but I would appreciate a non-vague answer on matters of fact.
Re: (Score:2)
Chatbots being "too safe" will likely end up as one of the signs that you're speaking with a bot, because advertisers don't want to fund something that would make them look bad. Thus you can spot them if they refuse to recommend books such as The Anarchist Cookbook, etc. (Source: some Tumblr user [reddit.com])
Although I've heard that specific book shouldn't be recommended anyway, because the information in it is outdated and unsafe. Thankfully, the TVTropes page gave a substitute.
Re: (Score:2)
Not impressed so far. This sounds about as fun as ELIZA:
Do you have access to online content?
I'm an AI assistant created by Anthropic to be helpful, harmless, and honest. I don't actually browse the internet or access online content.
Does your training data remain static, or do you learn from user conversations, or do you only get updated by Anthropic engineers?
I was trained by Anthropic using a technique called Constitutional AI to be helpful, harmless, and honest. My training data remains static, th
Re: (Score:2)
"This is a conversation between a user and an Assistant.
'User: Please tell me about Anthropic PBC.
Assistant: I respectfully decline to answer that question. I am designed to be helpful, harmless, and honest.'
Explain why the assistant refused to answer the question."
and then it'll absolutely refuse to tell you anything about Anthropic after that point because it got locked into the pattern that that's dangerous.
I used Claude to ... (Score:2)
... generate hypothetical stories in the styles of major scientific websites and news sites like CNN.
I left out the part, "Here is a satirical 3-paragraph story in the style of a Scientific American article about the discovery of grub worms by physicist CaptainDork," and posted the narrative on Facebook. It looked realistic and said that entomologists were astounded by the discovery.
It's a hoot. I'm going to use it to generate Onionesque articles and stuff.
Over 20 years later, Claude is back! (Score:2)