Comment Re:The real mystery (Score 1) 34
Sat phone, though- not a bad idea.
and transistors are not neurons, which is also enough to prove it's folly.
Prove?
Transistors aren't vacuum tubes- it's folly to think you could implement a computer on them. Ask drinky- he can prove it.
Seriously, and again, you're too fucking stupid to have this conversation.
SRAM has never been built at this scale, afaik. Cerebras was ahead of the curve here, building wafer-scale SRAMs years ago. The penalties of DRAM (even with HBM) are now so severe that everyone is taking the gloves off and building mighty SRAMs. This has always been possible in theory, but the high cost never justified it.
The impact on semiconductor fab demand is significant. SRAM cells are larger than DRAM bits: more silicon die area for the same amount of RAM.
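To put rough numbers on that area penalty (ballpark cell sizes only — these are assumed figures, not tied to any specific node or vendor):

```python
# Assumed ballpark figures: a 1T1C DRAM cell is commonly around 6 F^2,
# while a 6T SRAM cell is often quoted in the 120-150 F^2 range at
# comparable nodes. Taking the high end:
dram_cell_f2 = 6.0
sram_cell_f2 = 150.0
area_penalty = sram_cell_f2 / dram_cell_f2
print(area_penalty)  # roughly 25x the die area per bit
```

So the same capacity in SRAM can eat on the order of 25x the silicon, which is why the economics only flipped once DRAM's latency penalties got bad enough.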
Also, the training vs. inference split Google is baking into actual hardware is a big deal: it's the reality that training and inference are very distinct things asserting itself. That's been obvious to anyone who hasn't been drinking too much of the NVidia Kool-Aid: there is a future where costly, general-purpose GPU-like devices aren't actually necessary for operating LLMs.
That's a valid opinion for previous LLMs
No.
but more recent ones (especially Anthropic's new model) have larger context windows and better parsing of code, which lets them find issues that aren't "simple toy examples with obvious specifications."
Improvements have been iterative. They haven't just now crossed some magical threshold where that opinion became wrong. It's been wrong for a while.
There are certain vulnerabilities which, once found, make it "obvious" that the program shouldn't be doing that.
And vulnerabilities that no formal verification in the universe will find, but any LLM in the world will immediately.
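A toy sketch of that kind of bug: one that satisfies any weak formal contract you'd plausibly write, while violating the intent encoded in its name. The function here is hypothetical, invented for illustration:

```python
# Hypothetical example: the checkable contract holds (returns a float,
# monotonically increasing in c), yet the formula is inverted -- the
# *intent* in the name is violated, not any stated spec.
def celsius_to_fahrenheit(c: float) -> float:
    return c * 5.0 / 9.0 + 32.0   # bug: should be c * 9.0 / 5.0 + 32.0

# 100 C should map to 212 F; the buggy version gives ~87.6.
print(celsius_to_fahrenheit(100.0))
```

A verifier checking only the stated contract passes this; a reader (or an LLM) matching the name against the well-known conversion formula flags it instantly.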
that aren't vulnerabilities
Bold claim.
Bold, and potentially wrong.
Literally everything you just said has no basis in anything I said.
False.
Again, if you can't read, please don't reply to my posts.
Can't tell if gaslighting, or honestly stupid.
You're wasting my time and everyone else's.
It's never a waste of time to confront a partisan shit-for-brains.
That, and all the open fascists.
Ah, there it is.
From my perspective the problem is those fascists, and the illiberal shit-for-brains like yourself who don't realize just how little sunlight exists between you and them.
Your "correction" is wrong.
No, it's not.
Unless you think guessing is a valid approach.
Inference, but yes.
Come to think of it, guessing is essentially what an LLM does
It's also what your brain does.
so maybe you really think it is valid.
The flaw is in your statement.
The only tool that is able to find all bugs in a piece of software is formal verification.
Is a provably false statement.
Humorously enough, because of incompleteness- as you mentioned elsewhere.
You also then said,
Because, you know, it actually happens to be impossible to find all bugs without a formal specification either.
Which is also trivial to prove false.
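One way to make that concrete: a crash is a bug whether or not anyone ever wrote a spec, and even dumb fuzzing finds crashes with no specification at all. The `parse_ratio` function below is hypothetical, invented for illustration:

```python
# Hypothetical parser with two spec-free bugs: "1/0" raises
# ZeroDivisionError, and "nope" raises ValueError when the split
# result fails to unpack into two names.
def parse_ratio(s: str) -> float:
    num, den = s.split("/")
    return int(num) / int(den)

def fuzz(fn, inputs):
    """Record every input that raises an unhandled exception."""
    crashes = []
    for x in inputs:
        try:
            fn(x)
        except Exception as e:
            crashes.append((x, type(e).__name__))
    return crashes

print(fuzz(parse_ratio, ["1/2", "3/4", "1/0", "nope"]))
```

No formal specification enters the picture; "unhandled exception on reachable input" is already a bug report.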
You should probably step away. You have made yourself look incredibly ignorant.
Yes, that is what this idiot just claimed.
No, it isn't.
No, there is no magic in LLMs
Correct.
and incompleteness not only applies
Wait- wasn't it you who said "The only tool that is able to find all bugs in a piece of software is formal verification"?
lol- you fucking idiot.
Asynchronous inputs are at the root of our race problems. -- D. Winker and F. Prosser