Comment Re:common sense (Score 1) 100
It wasn't a place that knew their identity prior to the merger.
I agree that the folks who are currently called conservative have more similarity to the "Know Nothings" than to traditional conservatives. I've got a reasonable amount of respect for the conservatives who model themselves on Teddy Roosevelt. Even the Rockefeller conservatives aren't total bags of shit. The current crop? I can't see even one redeeming feature.
But that only works in legal matters. If (officially) there is no government involvement, the Constitution doesn't say anything.
The "(officially)" is because there is ALWAYS government involvement. The internet is managed by the FCC, etc. But I'm pretty sure that's not enough to cause a legal issue here.
Perhaps you should say "still officially anonymous". If the information is in a corporate database, it's not safe to consider it secure. Consider:
1) leaks
2) hacking
3) subpoenas
4) botched updates
5) bankruptcy sale
6) being acquired by someone else
and I'm sure I've missed a few circumstances. Circumstances like these appear in the news every week or so.
What he says may well be true, but it appears so vague that there would be no way to disprove it come September.
I agree with your first paragraph.
For your second paragraph:
The "initial learning state" of a mammal depends strongly on instincts. This is not clearly true of an artificial intelligence, so quite possibly a thing could need to be trained to a certain point before it would be able to learn on its own. (Even people have leanings in that direction.) So perhaps a thing that had the potential to be an AGI would need proper training from the initial state.
For the third paragraph:
Self-healing is a desirable characteristic, but it's not something that would differentiate between an AGI and a non-AGI, except in the sense that non-elastic recovery is a necessary component of learning.
Does his theory explain the rotational speed of galaxies? I've heard claims that it doesn't.
Dark matter is NOT an explanation. It's the name for a set of observed effects. The "dark" is there to indicate that we don't understand what's going on. The "matter" is to indicate that we're talking about gravitational effects.
It's not the best of names, but don't confuse it with an explanation. It's just a name. When there's a real explanation, it will get renamed.
It could be a language problem. But I'm going to wait for review by someone who understands what he's trying to say. (I pretty much agree with your recommended action, but I don't feel quite as dismissive, instead suspecting a translation problem.)
I'm rather certain you misunderstand what the cosmologists mean by "dark matter". I *think* that they require that it be non-baryonic rather than just "not illuminated". (Well, I'm pretty certain that they don't just mean "not illuminated".)
IIUC, string theory was never proven wrong. It's just that the math was too hard to handle, and it predicted too many different universes. It could still be correct, but it's not very useful. (I'm no expert, but that's what seems to have happened.)
I feel that an LLM is a necessary piece of an AGI, at least of one that is intended to work around people. And it's possible that the techniques that are the basis of the LLM can be extended to include modelling the physical universe.
The first test you propose isn't fair unless you give it the body. The second doesn't demonstrate AGI. I think it needs to be able to pass BOTH of those tests.
IIUC, that power requirement is for learning, not operating. Unfortunately, a real AGI needs to be continually learning.
Well, my prediction for a (weak) AGI is 2035, which is now about 10 years from now...but I've had the same prediction for a bit over a decade already. Actually, my prediction is "somewhere between 5 years before and 5 years after 2035", so his prediction almost crosses my error bars.

However, by a weak AGI I don't mean something that's human equivalent, merely something that can learn anything, including both nuclear physics and how to cook toast, if given enough time and no more coaching than any other student. And that doesn't need to start all over when you switch to another topic.

I have a theory that actual understanding requires being embodied, but I suppose that embodiment could be done in a simulator. (OTOH, doing it in a simulator poses the danger of learning for the wrong universe.) And if it's actually an AGI, it should be able to learn to handle different bodies. To become either a car or a janitor-robot (probably humanoid)...given enough time.
Only through hard work and perseverance can one truly suffer.