Anthropic's AI Can Now Run And Write Code (techcrunch.com) 23

Anthropic's Claude chatbot can now write and run JavaScript code. TechCrunch: Today, Anthropic launched a new analysis tool that helps Claude respond with what the company describes as "mathematically precise and reproducible answers." With the tool enabled -- it's currently in preview -- Claude can perform calculations and analyze data from files like spreadsheets and PDFs, rendering the results as interactive visualizations.

"Think of the analysis tool as a built-in code sandbox, where Claude can do complex math, analyze data, and iterate on different ideas before sharing an answer," Anthropic wrote in a blog post. "Instead of relying on abstract analysis alone, it can systematically process your data -- cleaning, exploring, and analyzing it step-by-step until it reaches the correct result." Anthropic gives a few examples of where this might be useful. For instance, a product manager could upload sales data and ask Claude for country-specific performance analysis, while an engineer could give Claude monthly financial data and have it create a dashboard highlighting key trends.


Comments Filter:
  • by Anonymous Coward

    Of course, it'll be “mathematically precise” — because that's what we’re all missing: more bots that can spit out numbers while pretending to be insightful. I can't wait for the day when it’s just chatbots talking to each other about quarterly sales reports while we sit back and let them run the show.

    • by dfghjk ( 711126 )

      "mathematically precise and reproducible" doesn't even begin to meet the requirements anyone would have of code. exit(1) is both mathematically precise and reproducible yet doesn't do anything except indicate failure.

      • Whoa now, this thing writes javascript. We are talking about web development here. All it needs to do is glue a bunch of barely tested open source projects together using webpack and it'll be better than 90% of the people I interview.

        • by gweihir ( 88907 )

          it'll be better than 90% of the people I interview.

          Not denying that. But that is a _really_ low bar. The term "better crap" comes to mind.

      • by gweihir ( 88907 )

        True. But the average moron doesn't understand what "mathematically", "precise", or "reproducible" means, and is going to attribute deep meaning and a high level of achievement to something that is essentially just a very bad idea and an accident waiting to happen.

    • Preferably in Tahiti

  • 50/50 chance that climate change or AI nonsense will destroy our civilization.
  • Is being able to write Javascript considered a difficult feat?
    • Re: (Score:2, Funny)

      by Anonymous Coward
      Only if it's trying to maintain the code I wrote
  • From personal experience, it seems to me like Claude.ai actually handles programming-specific questions better than other chatbots.

    It can still hallucinate, but for anything programming-related you may want to give Claude a try.

  • So what now? Forbidding AI from knowing certain languages? No AI equivalent of a process 0/process 1?
  • by Somervillain ( 4719341 ) on Friday October 25, 2024 @04:17PM (#64894589)
    The whole Devin AI developer thing turned out to be a scam. Any tool can "write code". I ran into 3 problems this week that were tricky and decided to try out ChatGPT. One was obscure, so OK... ChatGPT failed completely and suggested a solution that didn't even remotely work... but I can forgive that. I have realistic expectations. I went to ChatGPT because I couldn't find the answer in the vendor docs, Stack Overflow, or Google.

    So last night, I had a RegEx issue and thought... well, this is PERFECT for ChatGPT. I asked it something along the lines of "Given xxxxyyxxxx, I want to replace yy with zzz... how can I do so?" (x, y, and z were segments of customer configuration I can't share here; a sketch of the general form appears at the end of this comment). Their solution? Close, but completely wrong. Then I asked how to do it in Java... also completely wrong; it didn't even resemble working code.

    So I figured, "Well, the whole world raves about ChatGPT endlessly and I just bitch about it... surely there's something I don't know... let's try something ChatGPT should be good at." I want a smart electric kettle, so I asked ChatGPT "What electric kettles have WiFi and Alexa integration?"... a nice, simple softball question. It returned 4 results, and half didn't have ANY smart functionality at all.

    So... I gave the top AI (1) a difficult configuration/coding question, (2) an easy RegEx one, and (3) a super easy shopping question... it failed on all 3. That's a VERY LOW success rate...

    OK, anecdotes are like assholes: everyone has them and they all stink... So here's how I know it's BS.

    If Generative AI systems could write working code, they would!!! They wouldn't promise it. They wouldn't say "someday". They would show, not tell. There are a million ways to print money if you can write code...for starters, how about a smarter runtime?...MS is a leader in AI...why not have a .NET CLR that takes your shitty VB code and converts it into assembly code that looks like it was written by the world's greatest programmers?...why not do the same for C#? Some customer would be happy to pay a premium to have their code run much faster on your azure platform. Why stop at the CLR? Why not a Python runtime? Why not just run it internally and cut your own cloud compute bills? They've poured trillions into this, applied the best minds we can find...and all they have are empty promises? It tells me the LLM model is an empty promise.

    If you could write a program that wrote useful code, you could charge a fortune for it...or charge a fortune to rewrite existing code, optimized for performance or guaranteed to be secure. Companies would pay a fortune just for the security compliance part. What if there was an AI service that would scan your code for vulnerabilities and certify it is secure...offering patches for any vulnerabilities found...SOOO many companies would pay thousands a month for such a service...even if it didn't end up fixing anything for them...just in case...because no one wants to deal with the headache of a data breach.
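
    (For reference, here is a minimal sketch of the general form of the replacement I was asking about, using the placeholders above in place of the redacted customer segments.)

        // Illustrative only: the real pattern was customer configuration I
        // can't share, so "xxxxyyxxxx" stands in for the actual string.
        const input = "xxxxyyxxxx";
        const output = input.replace(/yy/g, "zzz"); // swap every "yy" for "zzz"
        console.log(output); // "xxxxzzzxxxx"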
    • If Generative AI systems could write working code, they would!!! They wouldn't promise it.

      Indeed! It's like when a restaurant proudly advertises that it doesn't have rats, or a car is advertised as being able to move you around WITHOUT BREAKING DOWN; it kind of makes you a bit suspicious as to why they decided to bring that up...

      Anyway, I've been using ChatGPT recently to "write code", and it's not great but it is pretty useful. Needed to do some 3D plotting from Python and matplotlib is still not where Matlab

    • We use AI (Copilot) at work for summarizing meeting transcripts, telling it to pull out action items, deliverables, etc., and to provide a boiled-down summary of what happened without all the chatter and irrelevant stuff.

      I hate to say it, but it works pretty well. Maybe even very well. It's way way waaaaaaay faster than doing it by hand (seconds instead of an hour or two) and no one has to take notes.

      It's not perfect, occasionally it'll miss things, but there haven't been any hallucinations (so far), no gobbl

  • Can it RUN and CHEW gum?

  • Amazing that Anthropic can do it the other way round. =/
