
Comment Re:Pattern Matching (Score 1) 34

"Humans are good at picking out patterns."
...
Demonstrates one way in which humans are not good at picking out patterns.

Always cracks me up. False positives are still failures.
Human history doesn't give me a reason to think we're particularly reliable pattern recognition networks at all.
I do agree that in our heads, we have some tight neural circuits that are quite good at pattern recognition, though. They're just not attached to our conscious thoughts.

Comment Re:Apple is kinda replacing Nvidia ... (Score 1) 15

Apple Silicon can't realistically "replace" a discrete GPU. Rather, they're... different.
The compute performance of Apple Silicon is vastly inferior to a mid-range discrete GPU, and its memory bandwidth isn't great in comparison, either.
So, in terms of GB-of-VRAM-to-GB-of-VRAM, Apple Silicon is worse than any discrete GPU you're likely to have for ML purposes.
However, they've got something you can't get on a discrete- 128GB of VRAM in a laptop, and 512GB of VRAM in a desktop.
This changes the equation, because it means your Apple Silicon (with enough RAM) can simply run models that the discrete GPU just can't*

So in terms of "being able to run a local model of size X", where X>32GB for a top-end NVIDIA, or far less for a mid-grade, Apple Silicon is competing with datacenter cards, and clusters, at least as far as models it can actually run**
As for the NPU, it's useless for bandwidth-bound tasks (which means LLMs), and for non-bandwidth-bound tasks it generally performs worse than the GPU, though much more efficiently. It also has the drawback of usually requiring interaction with weird frameworks (while compute shaders are generally well understood).
I would not say its GPU is designed for client-side processing, because unless the model is big, any discrete GPU will do the job drastically better.
That being said, I have an M4 Max with 128GB that I purchased specifically for local agentic LLM testing and development.

* Not strictly true, but effectively true since the performance is generally as bad as doing it on your CPU alone.
** Technically, AMD has the AI Max 395+, which kind of competes with an M4 Pro, but not an M5 Pro, M4 Max, and absolutely not an M3 Ultra.
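To make the "can it even run the model" point concrete, here's a back-of-envelope sketch. All figures are illustrative assumptions (parameter counts, quantization width, bandwidth, and the overhead factor are round numbers I picked, not measured specs): first, whether a model's weights fit in available memory; second, the common roofline estimate that memory-bandwidth-bound decoding reads every weight once per generated token.

```python
# Back-of-envelope sketch -- all numbers are illustrative assumptions,
# not measured benchmarks.

def fits_in_memory(params_b: float, bytes_per_param: float, vram_gb: float,
                   overhead: float = 1.2) -> bool:
    """True if the weights (plus an assumed ~20% for KV cache and
    activations) fit in the given memory budget."""
    need_gb = params_b * bytes_per_param * overhead
    return need_gb <= vram_gb

def decode_tokens_per_sec(params_b: float, bytes_per_param: float,
                          bandwidth_gb_s: float) -> float:
    """Bandwidth-bound decode estimate: each token reads every weight
    once, so tok/s is roughly bandwidth / model size."""
    model_gb = params_b * bytes_per_param
    return bandwidth_gb_s / model_gb

# A hypothetical 70B-parameter model at 4-bit (0.5 bytes/param):
print(fits_in_memory(70, 0.5, 24))    # 24 GB discrete GPU -> False
print(fits_in_memory(70, 0.5, 128))   # 128 GB unified memory -> True
# Decode rate at an assumed ~500 GB/s of memory bandwidth:
print(round(decode_tokens_per_sec(70, 0.5, 500), 1))  # ~14.3 tok/s
```

The arithmetic is why capacity beats raw compute here: a model that doesn't fit scores zero tokens per second, no matter how fast the GPU is.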

Comment NVidia + Google + Cerebras moving to SRAM (Score 2) 15

SRAM has never been built at this scale, afaik. Cerebras was ahead of the curve here, building wafer scale SRAMs years ago. The penalties of DRAM (even with HBM) are now so severe that everyone is taking the gloves off and building mighty SRAMs. This has always been possible in theory, but the high cost never justified it.

The impact on semiconductor fab demand is significant. SRAM cells are much larger than DRAM cells (a six-transistor SRAM cell vs. a one-transistor-one-capacitor DRAM cell), so the same amount of RAM consumes considerably more silicon die area.
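The die-area penalty is easy to estimate. A minimal sketch, with the caveat that both cell sizes below are assumed round numbers for illustration (real cell areas vary by process node), and that this counts only the raw bit-cell array, ignoring sense amps, decoders, and other periphery:

```python
# Illustrative only: cell areas are assumed round numbers, not
# real process data.
BITS_PER_GB = 8 * 1024**3

def die_area_mm2(gigabytes: float, cell_area_um2: float) -> float:
    """Raw cell-array area for a given capacity (periphery ignored)."""
    area_um2 = gigabytes * BITS_PER_GB * cell_area_um2
    return area_um2 / 1e6  # convert square microns to square millimeters

sram_cell = 0.025   # assumed 6T SRAM cell area, um^2
dram_cell = 0.0025  # assumed 1T1C DRAM cell area, um^2
print(round(die_area_mm2(1, sram_cell), 1))  # ~214.7 mm^2 per GB of SRAM
print(round(die_area_mm2(1, dram_cell), 1))  # ~21.5 mm^2 per GB of DRAM
```

Under these assumed numbers, a single gigabyte of SRAM eats an area comparable to an entire large die, which is why wafer-scale parts like Cerebras's were the first place this made sense.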

Also, the training vs. inference split Google is baking into actual hardware is a big deal: it's the reality that training and inference are very distinct workloads asserting itself. This has been obvious to anyone who hasn't been drinking excessive NVidia Kool-Aid: there is a future where costly, general-purpose GPU-like devices aren't actually necessary for operating LLMs.

Comment Re:We need humility, not arrogance (Score 1) 133

That's a valid opinion for previous LLMs

No.

but more recent ones (especially Anthropic's new model) have larger context windows and better parsing of code which lets them find issues that aren't "simple toy examples with obvious specifications."

Improvements have been iterative. They haven't just now crossed a magical threshold that suddenly makes that opinion wrong; it's been wrong for a while.

There are certain vulnerabilities which are "obvious" to determine the program shouldn't be doing that once found.

And there are vulnerabilities that no formal verification in the universe will find, but that any LLM in the world will spot immediately.

that aren't vulnerabilities

Bold claim.
Bold, and potentially wrong.

Comment Re:We need humility, not arrogance (Score 1) 133

Your "correction" is wrong.

No, it's not.

Unless you think guessing is a valid approach.

Inference, but yes.

Come to think of it, guessing is essentially what an LLM does

It's also what your brain does.

so maybe you really think it is valid.

The flaw is in your statement.

The only tool that is able to find all bugs in a piece of software is formal verification.

Is a provably false statement.
Humorously enough, because, as you mentioned elsewhere, of incompleteness.
You also then said,

Because, you know, it actually happens to be impossible to find all bugs without a formal specification either.

Which is also trivial to prove false.
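A standard illustration of why verification only ever proves conformance to a spec, not the absence of bugs: a function can fully satisfy a too-weak specification while still being obviously broken. This sketch (my own toy example, not from the thread) uses a "sort" spec that only demands sorted output and forgets to require that the output be a permutation of the input:

```python
# Toy example: "verified" against a weak spec, yet clearly buggy.

def is_sorted(xs):
    """The whole 'spec': output must be in non-decreasing order."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def broken_sort(xs):
    """Satisfies the spec trivially while losing all the data:
    an empty list is vacuously sorted."""
    return []

assert is_sorted(broken_sort([3, 1, 2]))  # passes the weak spec
print(broken_sort([3, 1, 2]))             # prints [] -- still a bug
```

Formal verification finds exactly the bugs the specification anticipates; a bug in, or omission from, the spec itself is invisible to it.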

You should probably step away. You have made yourself look incredibly ignorant.
