
Comment Re:Supports, not validates (Score 1) 52

> I doubt it is really possible to produce quality output wrangling 5 sessions on 5 pieces of system code at a time.

Right, you can run as many Claude Codes in parallel as you like (each working on a different git checkout), but it's the human spinning them up and directing them, checking/testing the output, and throwing it away when necessary (Cherny says he throws away 10-20%), and a human swivel-chairing between 5 different development tasks at the same time doesn't sound like a recipe for high quality output.

In fact Cherny claims he simultaneously runs 5 Claude Code terminal sessions, and also 5-10 Claude web sessions, handing things off from one to the other, sometimes starting sessions on his phone too. Given that most of the smarts of Claude Code are in Claude itself, not Claude Code (which is just a go-between, executing the file read/write/edit commands that Claude requests), it makes you wonder what on earth he's working on?
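To illustrate that "go-between" point: the harness around the model can be remarkably thin. This is a toy sketch of my own, not actual Claude Code internals; the names (`run_agent`, `stub_model`, the JSON tool format) are all invented, and the model is replaced by a stub so the loop is runnable.

```python
# Hypothetical sketch of a Claude-Code-style harness: the model proposes file
# operations as structured tool calls, the harness executes them and feeds the
# results back, until the model says it's done. A real harness would call the
# LLM API here instead of a stub.

import json

def stub_model(history):
    """Stand-in for the LLM: emits one file write, then stops."""
    if not any(msg["role"] == "tool" for msg in history):
        return json.dumps({"tool": "write_file",
                           "path": "hello.txt", "content": "hi"})
    return json.dumps({"tool": "done"})

def run_agent(model, files):
    history = [{"role": "user", "content": "create hello.txt"}]
    while True:
        action = json.loads(model(history))
        if action["tool"] == "done":
            return files
        if action["tool"] == "write_file":      # the harness does the I/O...
            files[action["path"]] = action["content"]
            result = "ok"
        elif action["tool"] == "read_file":
            result = files.get(action["path"], "")
        history.append({"role": "tool", "content": result})  # ...and reports back

print(run_agent(stub_model, {}))  # {'hello.txt': 'hi'}
```

All the decisions about *what* to read, write, or edit come from the model; the loop itself is dumb plumbing, which is why running five of them in parallel is cheap for the machine but not for the human reviewing the output.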

Comment Re:The only problem, though, is getting to that po (Score 3, Informative) 52

Cherny does say on Twitter that he ends up throwing away 10-20% of the Claude Code sessions he starts because they have gone nowhere (he's not specific about the various ways they fail). No doubt, as creator of the tool, he's well attuned to knowing when to have it try to course-correct and fix things, and when it's better to abort.

Comment Re:TFA sounds like Co-pilot orchaestration (Score 2) 52

I'm curious whether you've also tried other LLMs for writing API docs, maybe Gemini (which in general seems very good), and how Claude compares specifically for that kind of detail-oriented documentation?

Are you talking about Claude (web interface) or Claude Code (which works a bit differently in the way it edits files)?

Comment Re: He keeps saying that (Score 1) 32

Well, yes, LLMs and JEPA are both predictors, and it's the objective function that sets them apart, but the significant difference is that LLMs are trying to predict what they themselves are going to "do" next, while JEPA is trying to predict what the external world is going to do next.

It's maybe just as well that LLMs don't learn at runtime, since then they'd learn to self-predict even better and might get into some dysfunctional feedback loop.

The whole point of JEPA, with its outward- rather than inward-looking objective, is that it will eventually learn at runtime, and then it'll be learning about the world, not just about itself, which is what makes it potentially (if taken to fruition) animal-like rather than just a ginormous rules-based generator.

Comment Re:He keeps saying that (Score 3, Interesting) 32

I really don't know why LeCun is a rock star. It seems his main achievements have been an early invention/application of convnets for reading handwriting, and an early involvement with EBMs (interesting, but didn't really lead to anything). His claim to have invented convnets seems a bit dodgy since these (originally just considered as weight-sharing between kernels applied at different positions) seem to have first been mentioned by Hinton in the PDP handbook.

That said, I do think JEPA is a step in the right direction, since the model is now essentially predicting the external environment (via its own latent-space sensory representations), as opposed to an LLM, which is auto-regressive - predicting its own generative continuations.

JEPA isn't exactly groundbreaking - it's widely understood that animal/human brains are predicting the external world, not just predicting auto-regressive behavioral continuations (although we do that too) - but at least LeCun is a fairly rare voice pointing out that ultimately, on the quest for human-level intelligence, LLMs are a dead end. LLMs are very useful, and will get better, but they are what they are - ultimately more akin to expert systems, packaging canned knowledge, than to animals.

Comment Re:What could possibly go wrong? (Score 3, Insightful) 272

Not just Windows - I'm sure a lot of Office is also old enough to have been coded in C++, and apparently parts of Azure as well.

This is going to be one of the most epic rewrite failures of the software industry, even if the result is only a massive waste of resources rather than actually doing material harm to Microsoft.

This is a chance for anyone applying for the Principal engineer position to get their name in the history books, but not in a good way.

Comment Re:Not a "refactoring" (Score 1) 272

Yeah, it seems highly likely that either the timeline will be totally off, or they'll run into so many problems that they'll eventually quietly abandon the effort.

In any case, why does this all need to be rewritten in another language if super-human AI is around the corner? If humans can maintain a large C++ codebase (it just takes some discipline to avoid unsafe legacy features and practices), then why can't AI do it?!

Comment Maybe "Fire and Ash" will fail because it sucks ? (Score 1) 66

> If I get to do another Avatar film, it'll be because the business model still works

The first Avatar movie was amazing - it introduced us to a whole new world, and had an interesting story line.

The second Avatar movie, "The Way of Water", was OK to the extent that it revisited this world that movie fans had fallen in love with, but the story line wasn't great, and it was more of a pure action/war movie than a story-telling one.

This latest movie, by all accounts, is just a 3-hour-long war movie. No doubt the visuals are spectacular, but without any compelling story line it sounds awfully predictable. Maybe it'll manage to rake in another couple of billion, but if it doesn't, I don't think that's any indication that the movie business model is broken - more that if you are going to spend massively to make a movie, then there had better be something very compelling about it, not just a formulaic "franchise part 3" trying to cash in on prior success.

Comment Re:Intelligence, my ass! (Score 1) 24

That's too simplistic.

Yes, it's a trained regurgitator, but it's regurgitating statistics, not training samples, which is why LLMs are a lot more capable and useful than if they were just regurgitating training samples.

Can they reason? Well, yes, to some degree, even if there is a lot missing structurally for them to be human-level.

- A base model can regurgitate reasoning as well as anything else

- RL-based post-training heavily biases these models to regurgitate reasoning steps that will work towards a successful result in areas like math and programming. RL-training is going beyond next-token prediction to work towards a longer horizon goal.

- Agentic scripting and iterative calling of the base LLM can create some degree of reasoning/planning beyond what's in the model itself, even if this is limited by being hand-crafted
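That last point - planning living in the script, not the model - is easy to demonstrate in a toy form. This is my own illustrative sketch, not any real product's code: the "model" is a stub and `verifier` is a deliberately hand-crafted check.

```python
# Toy "agentic scripting" loop: a plain outer loop with a hand-crafted
# verifier gets a reliable result out of an unreliable single model call.
# In real use the verifier might run a test suite or a type-checker.

def stub_model(prompt, attempt):
    """Stand-in LLM: each re-sample (attempt) gives a different candidate."""
    candidates = [41, 43, 42]
    return candidates[attempt % len(candidates)]

def verifier(answer):
    # Hand-crafted success check - the "planning" knowledge lives here,
    # outside the model.
    return answer == 42

def agent_loop(prompt, max_tries=10):
    for attempt in range(max_tries):
        candidate = stub_model(prompt, attempt)
        if verifier(candidate):
            return candidate   # iterate-and-verify beats one-shot sampling
    return None

print(agent_loop("what is 6 * 7?"))  # 42, found on the third attempt
```

The limitation the bullet mentions is visible right in the sketch: the loop only "reasons" about goals its author thought to encode in the verifier.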

Can they learn (other than during pre-training)? Well, not very much, but there is in-context learning that lasts for the duration of a session, and many of these models/interfaces now have memory that may somewhat mitigate the lack of learning in some cases. But, yes, lack of real incremental learning is a major gap.

Ability to design? Depends on what you mean. They can riff on design just as well as they can riff on any other sort of content in the training data, and in many use cases people are not looking for creativity - they just want the LLM to answer questions or generate code, etc. LLMs do lack true creativity - the ability to transcend what's baked into them by pre-training - but I think there's a lot more to fix (e.g. continual learning) before this really becomes the major issue in their having less-than-human ability.

Comment Re:Tactile things are fun (Score 1) 62

Where can you get film developed and prints made from negatives nowadays? Bricks & mortar photography store?

The kids like retro point-and-shoot digital cameras too, like the Canon Elph. Maybe it's partly the experience of using them, but they also like the output better than the iPhone's. More flattering, perhaps.

Comment Re:How about allowing Chinese EV here ... (Score 1) 206

I'm not sure I would really want tiny ATV-sized cars on the road, but there are plenty of superb full-sized Chinese EVs too, selling for 1/2 Tesla prices, as well as higher-end ones also at "1/2 price".

Check out Marques Brownlee's recent YouTube review of a Xiaomi SUV, for example. His conclusion: "this is a $42,000 car that would compete well against anything in the US up to $75,000".
