Comment Re:Dumb Money Chasing Dumb Money (Score 1) 53
If you don't know how to invest in AI, just ask a Chatbot for advice.
I hadn't realized that was an ad...actually, I hadn't noticed it until you mentioned it. At least the MongoDB ad has gotten less obnoxious. (I thought MongoDB was free software, but maybe all the entries in the Debian repository are for drivers or interfaces.)
OTOH, that's a hot enough operating temperature to make data centers in orbit seem a lot more practical. Just because you *can* run at 700C doesn't mean you need to. I wonder how radiation hardened a chip with that technology would be.
Actually, making following orders the second law isn't that unreasonable, but perhaps it *should* have been the third law, or even the fifth. The "paperclip maximizer" is an example of a robot that ONLY worries about following orders. You can always trust that there will be at least one person who gives a stupid/dangerous order.
That, of course, is a real problem. Currently AI only knows what it is told. This is a systemic weakness that can't be solved with more words, but requires "direct experience". Robots will have that, but ChatBots, probably not. ChatBots appear mired in a nest of hallucinations. (I.e., when people write, they aren't telling their experiences, but only an abstraction from their experiences. I don't think there's any way around that.)
The problem is, the AIs don't have the same motives that people do. They don't really have access to those motives. All they have is words...which bear a relationship to those motives, but it's often a pretty abstract relationship. So protein folding is easier than personal advice.
Maybe they do streaming backups, and he just duped the stream.
It might be a hallucination, or it might be a real problem. And there are other possibilities. (E.g. earlier it was suggested that MS noticed a bad bug *somehow* and the government didn't want the bug to be fixed.)
If you want to be fair, it's been headed that way ever since the 1860's. And prior to that the individual states were headed that way.
People in power like to make their jobs easier.
"Security by obscurity" doesn't work by itself, but it's a necessary component of every security policy. You can't rely on just one measure. (It's called "defense in depth", but that's not really a good metaphor.)
But you've got to do both. Doubting oneself is "critical thinking". Doubting other sources of authority is "independent thinking".
The thing is, nobody has enough expertise to be an independent thinker in every area. So you essentially MUST delegate your ideas in some areas (variable between people) to external authorities. At which point what you "believe" depends on which authorities you choose.
A related question is "how firm is that belief?". This also tends to vary wildly with little apparent (to me) reason behind it. This is one feature that *can* be related to IQ, but isn't always.
It's not just widespread, it's universal. What varies from person to person is the domain that they apply thinking to, and how they validate the authority they choose to trust.
Nobody is an "independent thinker" on every topic. Wherever one is an expert, one tends to be an "independent thinker" in that domain. Where you don't feel knowledgeable, you tend to accept an authoritative source...possibly after doing some amount of checking to see whether others think it reliable.
I don't think it's directly related to IQ. I also don't think it's restricted to chatbots. A lot of people are willing to accept the opinion of any authoritative source that they've accepted. Think religion or political party. Once they accept it, they stop questioning its proclamations.
Note that this also applies to those who accept the proclamations of scientists or compilers. Once you accept an authoritative source, you pretty much stop questioning it. It's been multiple decades since I really argued with a compiler...unless it was a known bug from a source I trusted. I generally just assumed that I misunderstood what the language meant by that construct. (Of course, the few times I really didn't accept it, I eventually turned out to be wrong. Oh, well.)
This, however, is far from that point.
Not necessarily impossible...but almost always inadvisable. They can be sure that all their actual competitors already have copies before they get the takedown issued.
In this case I don't think a takedown will even limit the damage...it might well exacerbate it.
You don't have to know how the computer works, just how to work the computer.