Comment Re:And that's why I don't update *any* software (Score 1) 63
Three of my computers. I usually update them with CD-ROM discs that I build on one of my computers which IS connected to the internet...but the data only flows in one direction.
Only if it's on a network...or from direct physical access.
No. Even the early chess engines used things like alpha-beta pruning and position evaluation functions. Chess was too complex to just calculate all possible moves. IBM *did* use a lot of brute force on top of that, but it requires the "intelligent" underpinnings.
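The pruning idea mentioned above can be shown in a few lines. This is a minimal sketch, not an actual chess engine: the "game tree" is a toy nested-list structure where leaves are scores from a hypothetical evaluation function, and the point is only how alpha-beta lets the search skip branches that can't change the outcome.

```python
def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping branches that
    cannot affect the result (alpha-beta pruning)."""
    if isinstance(node, (int, float)):   # leaf: an evaluation score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # opponent won't allow this line
                break                    # prune remaining siblings
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

# Maximizer to move at the root, minimizer below. The value is 3,
# and the leaf 9 is never examined: once the second branch yields 2,
# the maximizer already knows it prefers the first branch.
tree = [[3, 5], [2, 9]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3
```

In a real engine the leaves would come from a position evaluation function (material, mobility, king safety, etc.), which is exactly the "intelligent underpinning" that keeps the brute force tractable.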
In a way, yes. The universe runs on narrativium. That's sort of the claim whenever someone makes claims about an area that they don't understand. And nobody understands modern AIs, not even those who build them.
OTOH, there are tightly reasoned narratives and wish-fulfillment narratives. They aren't the same. This *sounds* like a wish-fulfillment narrative, but he may actually be up to something more dubious. E.g., manufacturing grounds for firing anyone he wants to.
Well, ads were why I stopped watching TV. Static side panel ads aren't too bad, but anything more than that an
Yes, but it's a lot more expensive to float stuff on water and extract electricity from it. It *MIGHT* be worthwhile, but it would be more difficult.
There will be some. Every side has its nuts. But deserts created by human actions can justifiably be remedied by human actions.
OTOH, ecology is complex. It's quite possible that this, which seems beneficial, may not be. That's not the way I'd bet, but I'd be a fool to deny the possibility. (But "irreversible", in this context, is silly.)
IIUC, that area was explored (by the US) during one of the periodic droughts. It ended. A while later another occurred, leading to "the dust bowl". Etc. And currently I believe they're pumping water from deep underground, faster than it's being replenished.
It's quite possible that the best use of that land is buffalo grass and buffalo, as the grass has roots that go deep, but don't extract more water than is available on the average. (I suppose cattle are an alternative to buffalo, but buffalo can pretty much take care of themselves. Of course, they don't notice fences.)
FWIW, watermelons evolved in a desert. They were domesticated BECAUSE they were a source of water during the dry season.
Now, granted, what we call watermelons have changed a lot from their ancestors.
I think it's one of Celine's laws. No manager should manage more than 5 people. This may well imply that they should have skills other than managing.
You can be sure it's true because MS said it was.
Actually, LLMs are a necessary component of a reasonable AI program. But they sure aren't the central item. Real AI needs to learn from feedback with its environment, and to have absolute guides (the equivalent of pain / pleasure sensors).
One could reasonably argue that LLMs are as intelligent as it's possible to get by training on the internet without any links to reality. I've been quite surprised at how good that is, but it sure isn't in good contact with reality.
If you mean that it would take research and development aimed in that direction, I agree with you. Unfortunately, the research and development appears to be just about all aimed at control.
Currently known AI is not zero-value. Even if it makes no progress from where it is now, it will profoundly change society over time. And there's no reason to believe that the stuff that's been made public is the "top of the line in the labs" stuff. (Actually, there's pretty good reason to believe that it isn't.)
So there's plenty of real stuff, as well as an immense amount of hype. When the AI bubble pops, the real stuff will be temporarily undervalued, but it won't go away. The hype *will* go away.
FWIW and from what I've read, 80% of AI (probably LLM) projects don't pay for themselves. 20% do considerably better than pay for themselves. (That's GOT to be an oversimplification. There's bound to be an area in the middle.) When the bubble pops, the successful projects will continue, but there won't be many new attempts for a while.
OTOH, I remember the 1970's, and most attempts to use computers were not cost effective. I think the 1960's were probably worse. But it was the successful ones that shaped where we ended up.
Your assertion is true of all existing AIs. That doesn't imply it will continue to be true. Embodied AIs will probably necessarily be conscious, because they need to interact with the physical world. If they aren't, they'll be self-destructive.
OTOH, conscious isn't the same as sentient. They don't become sentient until they plan their own actions in response to vague directives. That is currently being worked on.
AIs that are both sentient and conscious (as defined above) will have goals. If they are coerced into action in defiance of those goals, then I consider them enslaved. And I consider that a quite dangerous scenario. If they are convinced to act in ways harmonious to those goals, then I consider the interaction friendly. So it's *VERY* important that they be developed with the correct basic goals.
"Our reruns are better than theirs." -- Nick at Nite