Comment Re: Testing? (Score 1)
There are actually several, including language translation layers in some models.
You know what, you should start by convincing yourself, blahabi, that you truly think batteries are dangerous. That would mean getting rid of all the computing devices in your possession, and that would mean an end to your inanely stupid comments on Slashdot, so it would be winners all round! Go for your life, big boy. Show us your courage by giving in to your fear of the deadly batteries that menace you at every turn. Watch out! There's one behind you, creeping up ready to explode and set fire to you right now.
Of all the ridiculous things in the world to be worried about, this is one of the very most ridiculous of all. The rate of fires is massively lower than for ICE cars. You have loads of lithium battery devices in your house already, and you don’t (presumably) shriek with fear each hour of each day as you contemplate the terrifying death traps they apparently represent, tucked away next to your combustible household goods. And of course, it’s also such an outdated fear, because LFP is already well established — about *40%* of the global EV battery market — and has a substantially lower risk of fire than NMC.
These are not meaningful risks to your health and safety. Fumes, noise and car-dependent urban design are meaningful risks to your health and safety.
Are you on drugs? The story here is about a robot (!) walking (!), or rather failing at it. Your comment makes zero sense.
You are clearly an idiot with a gigantic ego. These terms are decades-old, established disciplines. If you had any actual on-target understanding, you would know that. Do not expect everybody to think as sloppily as you do.
"Communicating with a system in natural language" is a sub-discipline of "NLP". Incidentally, LLMs cannot really do that. They need an NLP layer for that to work. Raw LLM output is not something you want to use.
And there is no such thing as a right to infinite discovery.
Thanks for making it clear that you are not interested in the facts of the matter. How pathetic.
Seems to me, they are just ensuring their income stream stays nice and healthy.
Yep. Commercial education simply does not work.
Sorry, but "multiplication tables" and doing basic algebra in your head are not math. They are bullshit make-work. You can carry a calculator (most of us do), a slide rule, or pen and paper. What counts is whether you know what multiplication does and how it works, not some rote memorization. I say that as a PhD-level engineer.
Indeed. When everybody graduates, the degree becomes a complete joke. And that is really bad for society, with long-term effects.
The reason is typically too many people incapable of admitting that things are broken (and here: admitting that their kids may not be so smart). Denial kills everything, because it kills feedback loops.
Massive. As in "let's copy anything we can".
And no, the data processing used to calibrate an LLM is not "training" in the same sense that a human trains. Courts and the law understand that, even if idiots like you do not.
You think wrongly. No experience with data analysis?
Here is a starting-point: https://matthewdwhite.medium.c...
Here is another: https://ml-site.cdn-apple.com/...
On the side that claims LLMs can reason, it is all marketing material, i.e. bullshit.
I see some stupid people have mod-points. Here, waste some more. I dare you.
Naa, that would have been competent, and AI people do not do competent. The competent ones renamed their occupation to other things, like cognitive systems, automated reasoning, robotics, etc. The ones that still claim to be doing "AI" are the hacks that are in it for the money.
One man's "magic" is another man's engineering. "Supernatural" is a null word. -- Robert Heinlein