Comment Re:Ironically, this Slashdot summary title is a li (Score 1) 103
Which it was.
Do you often use VeraCrypt on a company-managed device? I'm sure if you do then it's with the knowledge and consent of your IT department and they'll be responsible for managing any consequences of the VeraCrypt issue according to their official policy as well.
Current base price for a Mac Mini is $599. So, there's that.
the Mac mini being the rare exception, which was just a little too nerdy (needing your leftover keyboard, mouse, and monitor)
If that's a barrier to entry, it's one that is shared by 90% of the (non-laptop) PC market, and it never seemed to bother PC users. It's not like Apple won't happily sell you a keyboard, mouse, and monitor along with your Mac Mini, if that's what you want to do.
Since pertinent information was withheld (that it didn't know), then by your own post you acknowledge it was a lie of omission.
The stupidity of people these days is truly beyond belief. And, yes, get the f off my lawn.
We learned back in the 80s that trying to get a neural net to emphasise what you want is actually very difficult. What it will tend to emphasise are the assumptions that underlie the test data, and that's usually a completely different sort of fiction.
it's using horrendous amounts of power and causing untold environmental damage
Comparable to, say, a 787 airliner, whose environmental damage we tolerate without thought or comment simply because we're already used to it.
while maintaining the existing overall parity between the bad guys and the worse guys.
Consider the alternative, then. Anthropic does nothing, and sooner or later OpenAI or some other less responsible company delivers an AI with similar capabilities, but just throws it out to the public without much thought about the consequences. Both the black hats and the white hats start using it, of course, but the black hats have a field day compromising anything and everything before the white hats have a chance to find, fix, and distribute all the necessary patches to defend against all the newfound exploits. Not a great situation to be in, but probably unavoidable at this point unless the white hats are given a head start.
But was that figure provided by AI?
Even if not, we all know that 793% of all statistics are invented.
If something is presented as the truth when it isn't actually known, then it is a lie of omission: the dishonesty is in concealing the fact that the information isn't known.
Gemini is exceptionally bad, as LLMs go. I really have no idea why it is so dreadful, even compared to other LLMs. It isn't the context window, and it doesn't seem to be the training material either.
Cyber Implications have been noted. Mondas security is to be Cyber Vibed until we have Cyber Security capable of defeating The Doctor.
When I test the different AI systems, Google's AI system loses track of complex problems incredibly quickly. It's great on simple stuff, but for complex stuff, it's useless.
Unfortunately, advice, overviews, etc., are very, very complex problems indeed, which means you're hitting the weak spot of their system.
I've designed a few machines - some rather more insane than others - in meticulous detail using AI. What I have not done, so far, is get an engineer to review the designs to see if any of them can be turned into something that would be usable. My suspicion is that a few might be made workable, but that has to be verified.
Having said that, producing the design probably took a significant amount of compute power and a significant amount of water. If I'd fermented that same quantity of water and provided wine to an engineering team that cost the same as the computing resources consumed, I'd probably have better designs. But that, too, is unverified. As before, it's perfectly verifiable, it just hasn't been so far.
If an engineer looks at the design and dies laughing, then I'm probably liable for funeral costs but at least there would be absolutely no question as to how good AI is at challenging engineering concepts. On the other hand, if they pause and say that there's actually a neat idea in a few of the concepts, then it becomes a question of how much of that was ideas I put in and how much is stuff the AI actually put together. Again, though, we'd have a metric.
That, to me, is the crux. It's all fine and well arguing over whether AI is any good or not (and, tbh, I would say that my feeling is that you're absolutely right), but this should be definitively measured and quantified, not assumed. There may be far better benchmarks than the designs I have: I'm good but I'm not one of the greats, so the odds of someone coming up with better measures seem high. But we're not seeing those, we're just seeing toy tests by journalists, and that's not a good measure of real-world usability.
If no such benchmark values actually appear, then I think it's fair to argue that it's because nobody believes any AI out there is going to do well at them.
(I can tell you now, Gemini won't. Gemini is next to useless -- but on the Other Side.)
This means you should NOT, under any circumstance, run Claude at 88mph. Unless you really want to.
Take everything in stride. Trample anyone who gets in your way.