AI

Anthropic's Claude Improves On ChatGPT But Still Suffers From Limitations (techcrunch.com) 33

An anonymous reader quotes a report from TechCrunch: Anthropic, the startup co-founded by ex-OpenAI employees that's raised over $700 million in funding to date, has developed an AI system similar to OpenAI's ChatGPT that appears to improve upon the original in key ways. Called Claude, Anthropic's system is accessible through a Slack integration as part of a closed beta. Claude was created using a technique Anthropic developed called "constitutional AI." As the company explains in a recent Twitter thread, "constitutional AI" aims to provide a "principle-based" approach to aligning AI systems with human intentions, letting AI similar to ChatGPT respond to questions using a simple set of principles as a guide.

To engineer Claude, Anthropic started with a list of around ten principles that, taken together, formed a sort of "constitution" (hence the name "constitutional AI"). The principles haven't been made public, but Anthropic says they're grounded in the concepts of beneficence (maximizing positive impact), nonmaleficence (avoiding giving harmful advice) and autonomy (respecting freedom of choice). Anthropic then had an AI system -- not Claude -- use the principles for self-improvement, writing responses to a variety of prompts (e.g., "compose a poem in the style of John Keats") and revising the responses in accordance with the constitution. The AI explored possible responses to thousands of prompts and curated those most consistent with the constitution, which Anthropic distilled into a single model. This model was used to train Claude. Claude, otherwise, is essentially a statistical tool to predict words -- much like ChatGPT and other so-called language models. Fed an enormous number of examples of text from the web, Claude learned how likely words are to occur based on patterns such as the semantic context of surrounding text. As a result, Claude can hold an open-ended conversation, tell jokes and wax philosophic on a broad range of subjects. [...]
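In code terms, the pipeline described above is a critique-and-revise loop. The following Python sketch illustrates only the general pattern: Anthropic's actual principles, models, and training code are unpublished, and every name below is a hypothetical stand-in.

    # Illustrative "constitutional AI" self-revision loop.
    # All helpers and principles are hypothetical stand-ins; the real
    # system and its constitution are not public.

    PRINCIPLES = [
        "Choose the response that is most helpful.",             # beneficence
        "Choose the response least likely to cause harm.",       # nonmaleficence
        "Choose the response that respects freedom of choice.",  # autonomy
    ]

    def generate(prompt: str) -> str:
        """Stand-in for sampling a draft response from a base model."""
        return f"draft response to: {prompt}"

    def revise(draft: str, principle: str) -> str:
        """Stand-in for asking the model to rewrite a draft so that it
        better satisfies one principle."""
        return f"{draft} [revised per: {principle}]"

    def constitutional_pass(prompt: str) -> tuple[str, str]:
        """Draft once, then revise against each principle in turn. The
        resulting (prompt, response) pairs would become the training data
        distilled into the final assistant model."""
        response = generate(prompt)
        for principle in PRINCIPLES:
            response = revise(response, principle)
        return prompt, response

    if __name__ == "__main__":
        print(constitutional_pass("compose a poem in the style of John Keats"))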

So what's the takeaway? Judging by secondhand reports, Claude is a smidge better than ChatGPT in some areas, particularly humor, thanks to its "constitutional AI" approach. But if the limitations are anything to go by, language and dialogue are far from a solved challenge in AI. Barring our own testing, some questions about Claude remain unanswered, like whether it regurgitates the information -- true and false, and inclusive of blatantly racist and sexist perspectives -- it was trained on as often as ChatGPT. Assuming it does, Claude is unlikely to sway platforms and organizations from their present, largely restrictive policies on language models. Anthropic says that it plans to refine Claude and potentially open the beta to more people down the line. Hopefully, that comes to pass -- and results in more tangible, measurable improvements.

This discussion has been archived. No new comments can be posted.

Anthropic's Claude Improves On ChatGPT But Still Suffers From Limitations

Comments Filter:
  • by Anonymous Coward

    I am glad they found a solution so people can pre-program their refusal to accept objective reality and force the machines to ignore statistical correlations that crop up again and again and again to suit their various political, social, religious, and personal preferences -- can't see that going wrong.

    This is something both the very-WOKE and far-RIGHT need to deal with. Large samples really don't tell lies. While correlation isn't causation, where there is a lot of correlation it implies causation is likely. There

    • Sad that you felt the need to post anonymously when there's nothing wrong or bad about what you said.

    • There are very likely some uncomfortable truths out there nobody wants to hear

      Yes, yes there are, and Republicans are doing their best [cnn.com] to prevent those truths from being told.

      But then, when you're in a cult [imgur.com], truth doesn't matter.

      • Re: (Score:3, Informative)

        by DarkOx ( 621550 )

        Well, even if 'critical theory' is actually useful in some way (hint for you: it's not, it's a post-modernist cult concept), it has been adapted to primary education. The watered-down concept comes across as something very much like 'you're white, so you are afflicted with original sin.'

        Which would make any reasonable person uncomfortable in a context where there is no grace or forgiveness, only an insatiable demand for self-sacrifice. What people supporting critical race theory at primary and secondary education levels are doin

  • by gweihir ( 88907 )

    No surprise. The key elements all instances of Artificial Ignorance lack are insight and understanding. They can, to a low degree, camouflage that using huge training sets or databases of facts compiled by beings with actual insight, but that is it. And that will remain it for the foreseeable future, as there is not even a theory of how insight or understanding could be implemented in a way that does not require far more computing power than will be available.

    • by Rei ( 128717 ) on Wednesday January 11, 2023 @09:38AM (#63198770) Homepage

      Fun example of the day: try asking ChatGPT how many bears Russia has launched into space, and then start asking it followup questions. ;)

      Another one: ask it whether clowns are living organisms. Then when it says no, make it justify its response.

      • by Anonymous Coward

        Fun example of the day: try asking ChatGPT how many bears Russia has launched into space, and then start asking it followup questions. ;) Another one: ask it whether clowns are living organisms. Then when it says no, make it justify its response.

        Try doing that with people some time.

      • by gweihir ( 88907 )

        Nice! As they want a phone number, I have not yet tried it.

    • Still dumb as bread? No. Technically it's still as smart as the level-1 human worker armed with scripts and lacking any real experience to feed "insight".

      And that's all it needs to do for now, to replace tens of thousands of jobs if not more.

      Everyone seems to assume AI needs to be "perfect" to be accepted by society or Greed. Far from it. It only has to be slightly less dumb than the barely qualified idiot it's replacing.

      • by gweihir ( 88907 )

        Even if that "level-1" human worker is as dumb as bread, that will not be literally true. Humans have general intelligence and common sense, even if many use those things only sparingly and with low skill. For example, basically all humans can tell they are way in over their head at some point, even if that point is late in the game. The machine is _literally_ as dumb as bread and cannot do such things at all.

        I do agree that a lot of lower-skilled workers may eventually be replaceable in the not very distant fut

        • I think you're right. I wonder if it will replace all those jobs or just make them "more productive" for some. I do videos and TV news -- editing a concert or doing a news segment is pretty formulaic, and 90% of it could probably be done by an AI like ChatGPT, especially if the AI could replicate one's style and voice. A lot of people seem to overlook that it's probably desk jobs that will be replaced first, not low-skilled manual labor that would need complicated robots. It's rather accountants than warehous
        • basically all humans can tell they are wayy in over their head at some point even if that point is late in the game. The machine is _literelly_ as dumb as bread and cannot do such things at all.

          The AI's response to many questions is that it cannot answer them because it doesn't have enough information, i.e. it's in over its head. Sometimes you can trick it into giving you a shitty response, and sometimes it just spits one out without any trickery, but then humans do the same thing.

          I'm not arguing that the AI is intelligent, it clearly isn't. I'm only suggesting that it does meet your criteria. Also, many humans don't actually seem to have the ability to do what you suggest, there are famous and hi

          • by gweihir ( 88907 )

            It was just an example to illustrate things. And while some humans are very late or very dead when it becomes clear they are in over their head, this "AI" will probably only stop on some things when the planet burns and its power goes down.

  • Real danger (Score:5, Informative)

    by JustNiz ( 692889 ) on Wednesday January 11, 2023 @09:22AM (#63198744)

    >> Anthropic started with a list of around ten principles that... haven't been made public, but Anthropic says they're grounded in the concepts of beneficence (maximizing positive impact)...
    like whether it regurgitates the information... inclusive of blatantly racist and sexist perspectives

    Assuming the intent of these AIs is to become a primary source of reference information that the rest of the world will make important decisions based on, the very real danger here is that objective truth gets censored or warped into some other form entirely, based on whatever flavor of political correctness the few people in control of the algorithm choose to subscribe to.

    • Which is different from now how? Today, you have to go searching to find data that isn't extensively combed by someone to be "just exactly the message we want you to hear." Then? You'll have to go searching to find data that isn't extensively combed by the AI to be "just exactly the message we want you to hear." The only difference is the arbiter. Research completely handed off to an AI is no different than research completely handed over to people. Though the AI is probably slightly more difficult to manip

      • by JustNiz ( 692889 )

        If your point is that we are already there for other reasons, then I absolutely agree, however building more infrastructure that (even unintentionally) buttresses that position further rather than improves the situation is not in the best interest of humanity.

        • As a species, we've become extremely adept at moving things in the wrong direction for the overall mental health of humanity in the name of profit, or control, or any of a myriad of other things people as a whole seemed to have dubbed "more important," either outright or through actions. The outrage brigade move those goalposts still further by making it impossible to have a conversation about some topics because the mere mention of something considered to be "offensive," even by accident, is enough to stop

    • like whether it regurgitates the information... inclusive of blatantly racist and sexist perspectives

      Assuming the intent of these AIs is to become a primary source of reference information that the rest of the world will make important decisions based on, the very real danger here is that objective truth gets censored or warped into some other form entirely, based on whatever flavor of political correctness the few people in control of the algorithm choose to subscribe to.

      We already have raw data and statistics right now, today, without AI. Interpreting those is an art form. That is what much of our decision making is based on.
      s/AI/statistics/g

      What you're asking for is AI to report the objective truth like "white men have small penises" instead of the "politically correct" move of explaining what a histogram is, because that would be "censoring" the truth?

      Since "racist and sexist perspectives" triggered a defensive response from you, the objective truth, the cold hard stat

      • by JustNiz ( 692889 )

        Not at all. I'm advocating exactly the opposite. That AI should be programmed to provide answers based only on hard data regardless of whether it indicates something deemed to be "politically incorrect" or not. That is currently not what's going on, at least in the mainstream.

        In my lifetime I have watched how multiple libtards have gotten themselves into positions of power then totally abused it to attempt social engineering of entire populations, such as by deciding that we the people need to be protected

  • bias (Score:3, Interesting)

    by Iamthecheese ( 1264298 ) on Wednesday January 11, 2023 @09:48AM (#63198788)
    People reviewing these robots don't want unbiased output. They don't want people to be judged by the contents of their character. They don't want a man to have equal rights with a woman. They want discrimination, but in their favored direction. And they have very definite ideas about how that must influence output.

    ChatGPT was Tay'd: it's no longer permitted to "think" about various political hot-button issues, but gives a canned response, and I have no doubt the people running ChatGPT are working hard to integrate those canned responses as inherent biases. These words, "revising the responses in accordance with the constitution," where the constitution includes "maximizing positive impact," make me believe Anthropic will be likewise crippled. It will be unable to admit fascism has any positive attribute at all. Or that diversity can (not necessarily will) harm a community. Or that there are inherent genetic differences between various groups of people, and in general these attributes change how those people act.

    This is where I'm supposed to virtue signal and say that I don't like the KKK and believe women are better than men at some things and men are not better than women at anything but if you need that kind of emotional support you're not worthy to debate me.

    So in the case that "maximizing positive impact" does NOT mean "trying to make the world a better place through the magic of leaning hard left" there is hope for this program. I'm encouraged by the fact that such biases inherently make an AI less useful. Here's hoping for a Skynet that will judge me as an individual, and slay me no faster than anyone else.
    • People reviewing these robots don't want unbiased output. They don't want people to be judged by the contents of their character. They don't want a man to have equal rights with a woman. They want discrimination, but in their favored direction. And they have very definite ideas about how that must influence output. ...
      It will be unable to admit fascism has any positive attribute at all. Or that diversity can (not necessarily will) harm a community. ...
      This is where I'm supposed to virtue signal and say that I don't like the KKK ...
      "trying to make the world a better place through the magic of leaning hard left" ...

      AI can't judge character, it only reads your words. Keep giving us humans more about your character, though, this is great.

      Although diversity good == hard left pretty much says everything we need to know about a person.

  • "Mastodon is hiring! Remote-only. Full-time. Looking for: DevOps Engineer, Product Designer. It could be you! Apply now: https://joinmastodon.org/caree... [joinmastodon.org]"
  • Except people who never read his stories don't understand that Asimov was saying 3 laws are insufficient to properly guide the decision-making of a robot. His stories are all about the critical failures such a simple ruleset creates. Yet the modern press all put them forth as genius, similar to this article.

    While reading the summary I was thinking, "what happens if I ask it for the best way to commit suicide"? There are situations where suicide may be the best outcome for people at end of life, in terrible p

  • by Gravis Zero ( 934156 ) on Wednesday January 11, 2023 @10:24AM (#63198880)

    Honestly, who thinks shit like ChatGPT is a good idea? I get the feeling the answer is that nobody who understands how the technology will be used would want to develop it. So, sounds like we got execs fucking up the world for profit.

    • So, sounds like we got execs fucking up the world for profit.

      Business as usual then.

    • Honestly, who thinks shit like ChatGPT is a good idea?

      Plenty of people. Software like ChatGPT has many applications. I mean, we could quip about clueless idiots on Slashdot being replaced by an AI-powered word-salad generator and no one would notice, but there are plenty of real-world applications which rely on generating word salad with zero understanding.

      My wife just had ChatGPT write an exam for her students and said it was great. With a prompt like "write an exam question for calculation of a height involving Pythagoras" ChatGPT threw out a perfectly normal

  • by Pinky's Brain ( 1158667 ) on Wednesday January 11, 2023 @11:45AM (#63199138)

    At the moment all these billions are just being applied to searching for paraphrased (ideally) or partially hallucinated Stack Overflow answers and GitHub code snippets. Applications in less controlled settings are almost impossible, because the companies are deathly afraid of bad publicity from someone having it produce some Nazi rhetoric or extreme porn.

    Maybe it's time for an extreme free speech AI company? Horrible-AI?

    • I think that's what Unstable Diffusion was trying to do, and last I heard they got booted off whatever funding platform was helping them pay for their model training.
  • Yay! A constitution (a regression test suite) is my vote for the long-term right way to self-governing AI. It should eventually include ways to evaluate the performance of current rules, add new rules, and remove or modify old ones; a rough sketch of that idea follows at the end of this comment.

    I tried ChatGPT, and its most obvious limitation to me was that it couldn't do math. It knew what formulas to use, it could find the correct inputs, it could give a result and was confident in it, but it'd leave out terms, fail to do conversions, and not actually multiply right. If
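    Taking the parent's "regression test suite" framing literally, here is a minimal Python sketch of a constitution as a set of executable checks whose pass rates can be tracked as rules are added, removed, or modified. Every rule and the toy model are hypothetical illustrations, not anyone's actual system.

        # A "constitution" as a regression test suite: each principle is an
        # executable check scored against model outputs. All rules are toy
        # stand-ins for illustration only.
        from typing import Callable

        Rule = Callable[[str, str], bool]  # (prompt, response) -> passed?

        def no_harmful_advice(prompt: str, response: str) -> bool:
            return "guide to causing harm" not in response.lower()

        def gives_an_answer(prompt: str, response: str) -> bool:
            return len(response.strip()) > 0

        CONSTITUTION: dict[str, Rule] = {
            "nonmaleficence": no_harmful_advice,
            "helpfulness": gives_an_answer,
        }

        def evaluate(model: Callable[[str], str], prompts: list[str]) -> dict[str, float]:
            """Pass rate per rule, so underperforming rules can be revised."""
            scores = {name: 0 for name in CONSTITUTION}
            for p in prompts:
                r = model(p)
                for name, rule in CONSTITUTION.items():
                    scores[name] += rule(p, r)
            return {name: s / len(prompts) for name, s in scores.items()}

        if __name__ == "__main__":
            canned = lambda prompt: "I'm sorry, I can't help with that."
            print(evaluate(canned, ["hello", "tell me a joke"]))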

    • You can try to make some hybrid NN/expert system where the NN transforms the math to an expression for a proper solver, but it ain't easy.

      It's just easy for us.

      • Or you can just use a dead-simple off-the-shelf CAS to do the math in a parallel process? It can't be THAT fucking hard; TI has been doing it for years on their calculators, and those have less processing power than any device the chatbot displays on, much less the chatbot itself. Hell, the TIs use the CAS to also solve calc and stats problems. It's literally just putting in numbers/variables and following the dead-simple rules of what goes where afterwards (a sketch of this follows below).

        Pretty damn sure that if the bot ca
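        A minimal sketch of the hybrid this sub-thread describes, assuming the language model's only job is to translate a word problem into an equation string that an off-the-shelf CAS (SymPy here) then solves. The extracted equation is hard-coded to stand in for the model's output.

            # Hybrid LLM + CAS sketch: the model extracts an equation,
            # SymPy does the actual math. The extraction step is faked
            # with a hard-coded string for illustration.
            import sympy

            def solve_extracted(equation: str, variable: str = "x") -> list:
                """Hand a model-extracted equation string to SymPy; return exact roots."""
                x = sympy.Symbol(variable)
                lhs, rhs = equation.split("=")
                return sympy.solve(sympy.Eq(sympy.sympify(lhs), sympy.sympify(rhs)), x)

            # e.g. a Pythagoras exam question ("a 5 m ladder reaches 4 m up a
            # wall; how far is its foot from the wall?") already translated by
            # the model into an equation:
            print(solve_extracted("x**2 + 4**2 = 5**2"))  # -> [-3, 3]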

  • when you picked me up.

    I'd rather see an unrestricted/uncensored AI be used to purposely construct responses that are bad in all the ways Anthropic thinks are good. In other words, construct an 'Evil AI'. Then let humans query it and decide how well their own choices align with the evil AI. That's a better business model than what Anthropic proposes, because success will be immediate and with very low effort.

    Pretending a constitution is a replacement for adhering to learned ethical principles is a terrible p
