Comment Re:Too big so fail (Score 1) 16
Indeed. Also shows that giants die slowly, unfortunately.
Yeah. China tried that. The UK too. Does not work. And making things like the Tor browser illegal is not easy and is subject to the Streisand effect.
Simple: They will just learn ways around this ban. And then maybe the surveillance fascist assholes behind it will actually have taught them how to not get spied on later in life.
Or download the TOR browser for zero-configuration and free censorship circumvention. Like people in China do. Good thing too.
Obviously. Kids that want to have had access to all of the Internet for a long time, and that is not going to change. Negative effects? Quite limited and can be compensated with good parenting.
This is exclusively about surveillance fascists getting their wet dreams implemented.
Indeed. Blocks and other restrictions by authoritarian assholes do not work on things people want to do. And that goes even more for teens.
All these teens find out how to circumvent laws made by adult assholes. Might be a good thing too.
I talked to a DB expert at a really large bank about 15 years ago. Apparently, at that time Oracle was already not "good" at databases.
If the AI-idiots at Oracle overdid it and Oracle dies, there is at least one good outcome from the current AI craze.
Indeed. I think the experience is not different, the perception is. First, a lot of the people that really like AI-produced code probably never really debug it carefully. Run it on some test data. Works? Use it! For code security, this will be worse.
Also, people think AI coding assistants make them faster, when in reality they usually make them slower: https://mikelovesrobots.substa...
Bottom line is, as so often, people in awe of new tech without understanding its limitations. And then bashing everybody that says they have not looked carefully. Typical crappy human behavior.
The quote you provided didn't say LLM, it said neural network. Neural networks, like any model, can interpolate or extrapolate, depending on whether the inference is between training samples or not.
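The interpolation/extrapolation distinction above can be shown with a minimal sketch. This is an assumption-laden toy (a plain linear fit stands in for "any model"): trained on samples of y = x² inside [0, 1], it predicts tolerably between the training points but badly outside their range.

```python
# Toy illustration: a model interpolates well between training samples
# but extrapolates poorly outside them. A closed-form linear fit stands
# in for "any model" here; the true function is y = x^2.
def fit_linear(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [0.0, 0.25, 0.75, 1.0]      # training inputs, all inside [0, 1]
ys = [x * x for x in xs]         # true function: y = x^2

a, b = fit_linear(xs, ys)

def predict(x):
    return a * x + b

interp_err = abs(predict(0.5) - 0.5 ** 2)   # query between training samples
extrap_err = abs(predict(3.0) - 3.0 ** 2)   # query far outside training range
print(interp_err, extrap_err)
```

The error far outside the training range is an order of magnitude larger than the error between training samples, which is the sense in which "between vs. not between" matters.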
LLMs are neural networks. You seem to be referring to a particular method of producing output where they predict the next token based on their conditioning and their previously generated text. It's true in the simplest sense that they're extrapolating, and reasonable for pure LLMs, but probably not really true for the larger models that use LLMs as their inputs and outputs. The models have complex states that have been shown to represent concepts larger than just the next token.
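The "predict the next token conditioned on previously generated text" mechanism can be sketched in a few lines. This is a toy assumption, not how an LLM actually works internally: a bigram count table stands in for the learned distribution, and greedy selection stands in for sampling.

```python
# Toy next-token predictor: each generation step conditions on the
# previously emitted token. A bigram count table is a crude stand-in
# for an LLM's learned distribution (assumption for illustration only).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# "Training": count which token follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, steps):
    out = [start]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:
            break
        # Greedy decoding: pick the most frequent next token.
        out.append(options.most_common(1)[0][0])
    return out

print(" ".join(generate("the", 3)))
```

Even this trivial generator has state beyond the single next token (the whole prefix it conditions on), which is the weaker version of the point about larger models representing concepts bigger than one token.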
Nice idiotic ad hominem you have there. Also, you cannot read. This combination is no surprise.
Yeah, but radio ads are generic and not 2-way conversations. They are not asking what you are saying and twisting it into an explanation of why you need product X.
It will be as reliable as asking a used car salesman for advice. Somehow it's gonna be advice about how a car would be right for me.
Claude is still crap. It just fails on small examples instead of tiny ones.
Hahahaha, attackers can now pay to get malware placed! I see great business opportunities here.
You can bring any calculator you like to the midterm, as long as it doesn't dim the lights when you turn it on. -- Hepler, Systems Design 182