Comment Re:Makes sense (Score 1) 66
To be fair, he spouts complete utter tosh about nuclear too. Is there a subject where he *doesn't* spout complete utter tosh? I've not seen it
No, we are not. There are more options than doing this for each specific site. Some insight required.
No, I merely have some experience as a GDPR auditor. You seem to lack that.
My take is we will see a company die-off in a few years for those that went into LLMs as "automation" too much.
Yep, I basically did the same a few months ago:
1. Why is AI great?
2. How much of that was marketing bullshit?
And suddenly, you get a much better picture. Turns out the statements made after the first question were mostly untrue or came with severe caveats. But unless you specifically ask (because you already know), ChatGPT is not going to tell you that. In humans, that is called "lying by omission".
Compared to what was available before, it is quite impressive.
The negative feedback is prompted by the fact that AI is constantly being shoved into every one of our orifices 24/7 by every vaguely tech-related company as if it was the second coming of Jesus. To justify that amount of social pressure, it would indeed have to be quite a bit better than it actually is, and that's why people aren't impressed.
That nicely sums it up.
super smart
If that CEO thinks the behaviors of the LLMs are "super smart", then I really wonder about his level of intelligence...
You wonder, I do not. This guy sounds like a deeply delusional cultist.
Indeed. Well said. These people are under the delusion that they make the greatest things on earth, when they are barely above "trash" level and often not even that. The only thing that keeps MS in business is an extreme case of market failure.
These people are essentially in a cult and deeply into the respective groupthink.
Oh, it is an impressive step forward. It is just not a step to something that is production-ready. It is an intermediate step only. The reason why nobody is impressed is because LLMs get pushed as tools you can reliably use in production now. That is not true for most scenarios.
Also, the average person is not smart and often performs at the level of an LLM, just with less training data. But smart people are not in the same class.
Indeed. Some minds are apparently feeble enough to be easily blown. Or this cretin is simply lying.
The person asking it must be a negative outlier even at Microsoft.
Actually, getting Linux to crash is very easy. Just do some stupid things like telling the kernel it has more memory than there is. On the plus side, even Linux _crashes_ will be reliable, while MS cannot even do that.
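For anyone curious, the "lie to the kernel about memory" trick the parent describes is typically done with the `mem=` boot parameter; a minimal sketch, assuming a GRUB2 setup and a machine with far less than the claimed 64G of physical RAM:

```shell
# /etc/default/grub -- tell the kernel it has 64G of RAM it doesn't have.
# The kernel maps the phantom memory; the first allocation that lands in
# it corrupts state or oopses. Do NOT try this on a machine you care about.
GRUB_CMDLINE_LINUX_DEFAULT="quiet mem=64G"
```

Apply with `sudo update-grub` (or `grub2-mkconfig` on some distros) and reboot; remove the parameter and rebuild the config to undo it.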
RAM wasn't built in a day.