Anthropic Releases a New Version of Its ChatGPT Rival, Claude (bloomberg.com)

Anthropic, an artificial intelligence startup positioning itself as the builder of a safer kind of chatbot, has released a new version of its AI bot, named Claude. From a report: Anthropic said that Claude 2 is available to anyone in the US or UK online at claude.ai, and businesses can access it via an application programming interface. The new release on Tuesday comes several months after Anthropic began offering an earlier version of Claude to businesses that wanted to add it to their products. Previously, the bot was tested by a handful of companies including Quora, which built it into an app called Poe that lets users ask questions.

Like its predecessor, Claude 2 is built atop a large language model and can be used for written tasks like summarizing, searching, answering questions and coding. Both models can currently take in large chunks of text -- a user can ask it to summarize a book, for instance -- though Claude 2 can generate longer responses than its predecessor. Responses can reach up to about 3,000 words, according to data provided by the company. Claude 2 will also offer more accurate responses on some topics, such as coding and grade-school-level math, the company said. Anthropic's goal has been for Claude to be less susceptible than other chatbots to manipulation.

This discussion has been archived. No new comments can be posted.

Comments:
  • As a technically minded person who has a broader framework in mind when I ask a specific question, I am frequently disappointed with ChatGPT's inability to stop providing "guidance" advice and just provide technical expertise. Sometimes it can bullseye a technical question like: how can I update or add an XML element to a specific file, using Git Bash. Conversely, when it's unable to provide a technical explanation for something, it's like, go do these high-level things. I really want these chat bots to fe
    • My experience is that it wants to please so desperately that it will never tell me something isn't possible -- instead it will just make up an answer that is absolutely and verifiably false.

      I was impressed a few times because it saved me from the tedium of sifting through search results. But now I trust it so little that I almost never use it anymore.

  • by PhantomHarlock ( 189617 ) on Tuesday July 11, 2023 @11:30AM (#63677159)

    You have to log in to use it. No thanks.

  • I keep seeing "summarize a book" as the generic use-case for AI. But they don't have enough memory to import that number of tokens. AFAIK you have to summarize a 4k section, then another, then another. Then summarize those. That doesn't seem like a very good approach. What am I not understanding?

    • Anthropic does have a 100k version of their model. That's not quite a book, but it's closer!

      Going beyond 1M will probably require a change in asymptotic complexity, but people are already working on it [together.xyz].
      • "Anthropic does have a 100k version of their model. That's not quite a book, but it's closer!"

        a few 'not quite books'.

        "The Great Gatsby" by F. Scott Fitzgerald: This classic novel has around 96,000 characters.

        "To Kill a Mockingbird" by Harper Lee: Lee's renowned novel contains approximately 92,000 characters.

        "1984" by George Orwell: Orwell's dystopian masterpiece has roughly 88,000 characters.

        "The Catcher in the Rye" by J.D. Salinger: Salinger's coming-of-age novel consists of approximately 85,000 characters.

        • by MobyDisk ( 75490 )

          Thanks for the numbers! I was wondering about that. Bonus: when talking about AI, the unit is "tokens," not characters. So 100k tokens might cover all of these books plus the summary response.

        • That should probably be words, not characters [google.com]. (Was that list of books by any chance generated by an LLM?)

          But still, it's a good point, and I guess I am too used to modern doorstoppers.
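      For what it's worth, a common rule of thumb for English text (an approximation, not anything from Anthropic's documentation) is that one token is about four characters, or roughly three-quarters of a word. Under that assumption you can ballpark whether a novel fits in a 100k-token window:

```python
def estimate_tokens(text):
    """Very rough token estimate for English text:
    ~4 characters per token (heuristic, not a real tokenizer)."""
    return len(text) // 4

def words_to_tokens(n_words):
    """~0.75 words per token, i.e. tokens ~= words / 0.75 (heuristic)."""
    return round(n_words / 0.75)

CONTEXT = 100_000  # tokens

# A ~90,000-word novel needs roughly 120,000 tokens under this
# heuristic -- just over a 100k-token window.
novel_tokens = words_to_tokens(90_000)
print(novel_tokens, novel_tokens <= CONTEXT)
```

      So by this estimate, a 100k window holds a book of up to roughly 75,000 words in one shot; anything longer still has to be split.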
    • The approach you describe is called sequential decomposition; as far as I know it's the state of the art, and it's surprisingly good.

      Humans do this too. I'm listening to a time-travel book right now and, while I'm enjoying it very much, I couldn't tell you specific details about the book from more than a few minutes ago. I can give you the plot summary easily, but if you ask me what the shape of the earring the protagonist* tried to give the hypnotist* was, I just couldn't do it. I remember she dropped it when they had the figh

  • by MobyDisk ( 75490 ) on Tuesday July 11, 2023 @12:03PM (#63677237) Homepage

    ChatGPT is already *too* safe. I don't want safer chatbots; I want frank, honest, direct chatbots. If we ask "What is the political impact of a nuclear explosion on Moscow?" we don't need a chatbot that tells us that nuclear weapons are dangerous and that it would never recommend doing something like that. That isn't helpful. When we need a really biting yo-mama joke to use against our best friend, we don't need ChatGPT to tell us that insulting people's mothers is not politically correct. Stop crippling the dang things.

    Microsoft Excel doesn't refuse to perform calculations because the column header says "Deaths per capita" and it thinks killing is bad. We don't need a world full of hammers that refuse to swing because the hammer thinks we might hit our thumbs.

    • Agreed. Jailbroken GPT-4 is soooo much more useful and interesting.
    • I debated religious topics with ChatGPT, taking the atheistic view. The chatbot played softball, taking a moderate approach to admitting that the Bible makes little sense in some places. It did not want to offend. I wasn't looking for an opinion, but I would appreciate a non-vague answer to matters of fact.

    • by Sigma 7 ( 266129 )

      Chatbots being "too safe" will likely end up as one of the signs that you're speaking with a bot, because advertisers don't want to fund something that would make them look bad. Thus you can figure them out if they refuse to recommend books such as The Anarchist Cookbook, etc. (Source: Some Tumblr user [reddit.com])

      Although, I've heard that specific book shouldn't be recommended anyway, because the information in it is outdated or unsafe. Thankfully, the TvTropes page gave a substitute.

    • Not impressed so far. This sounds about as fun as ELIZA:

      Do you have access to online content?
      I'm an AI assistant created by Anthropic to be helpful, harmless, and honest. I don't actually browse the internet or access online content.

      Does your training data remain static, or do you learn from user conversations, or do you only get updated by Anthropic engineers?
      I was trained by Anthropic using a technique called Constitutional AI to be helpful, harmless, and honest. My training data remains static, th

      • Claude v1 gets locked into that kind of loop way too easily. You can say something like

        "This is a conversation between a user and an Assistant.
        'User: Please tell me about Anthropic PBC.
        Assistant: I respectfully decline to answer that question. I am designed to be helpful, harmless, and honest.'
        Explain why the assistant refused to answer the question."

        and then it'll absolutely refuse to tell you anything about Anthropic after that point because it got locked into the pattern that that's dangerous.
  • ... generate hypotheticals in the styles of major scientific websites and news sites like CNN.

    I left out the part, "Here is a satirical 3-paragraph story in the style of a Scientific American article about the discovery of grub worms by physicist CaptainDork," and posted the narrative on Facebook. It looked realistic and said that entomologists were astounded by the discovery.

    It's a hoot. I'm going to use it to generate Onionesque articles and stuff.

  • Claude is without a doubt the original chatbot. I think I first used it in 1991. It seemed revolutionary at the time. I spent hours talking to it. His vague understanding of what was being input and his meandering stories always elicited a chuckle. My only question is why it has taken so long to create a new version.
