Comment Re:Potential dangers (Score 1) 88

That's not clear. "Poison" is too broad a term. In both cases the question is "What do you need to do to make it usable?". The two cases are different, so the answers are going to be different, but one is not necessarily harder than the other. E.g. too much salt is poisonous, and so is too much water (people have died from drinking too much water), but both can be handled merely by proper dilution...however you don't dilute them with the same medium.

Comment Re:I don't think he is talking about satellite lev (Score 1) 144

You DON'T want to boil off coolant in space, because resupplying it is quite difficult. Since you need radiation-hardened chips anyway, what you need to do is use designs that work at a higher temperature, so cooling is a smaller problem. (Basically you're going to need to depend on radiative cooling at the system level. The power you can shed that way goes up, per the Stefan-Boltzmann law, as the difference of the fourth powers of the emitter and background temperatures, so running hotter helps a lot.)
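The fourth-power scaling above can be sketched numerically. This is a minimal estimate, not a satellite design: the emissivity, radiator area, and deep-space background temperature below are illustrative assumptions.

```python
# Rough sketch of the radiative-cooling estimate described above, using the
# Stefan-Boltzmann law. Emissivity, radiator area, and the 3 K background
# are illustrative assumptions, not real spacecraft figures.

STEFAN_BOLTZMANN = 5.670374419e-8  # W / (m^2 * K^4)

def net_radiated_power(emitter_temp_k, background_temp_k=3.0,
                       area_m2=1.0, emissivity=0.9):
    """Net power (watts) a radiator sheds: proportional to the difference
    of the fourth powers of the temperatures, not to the fourth power of
    the temperature difference."""
    return (emissivity * STEFAN_BOLTZMANN * area_m2
            * (emitter_temp_k**4 - background_temp_k**4))

# Running the electronics hotter pays off steeply: a 350 K radiator sheds
# roughly twice the heat of a 295 K one, per square metre.
print(net_radiated_power(295))  # ~386 W
print(net_radiated_power(350))  # ~766 W
```

This is why the comment's point holds: chips designed to run at higher junction temperatures let the same radiator area reject far more heat.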

OTOH, these were probably designed to meet the specs given them by Musk. So I may be overthinking this.

Comment Re:This is concerning (Score 1) 144

Sorry, but you don't "know it can't work because physics".
There are good reasons to believe it's a difficult problem, and it's certainly questionable whether it's a reasonable goal. But I'd bet those chips have other uses, if "Space AI!!" doesn't take off. And I doubt that "Space AI!!" is impossible; I think it's probably currently too expensive to be practical, however. But I'm not an engineer specializing in design of space projects, so my judgement here shouldn't be taken as "inside information". Certainly heat is going to be an issue unless the chips are designed to be able to run at really high temperatures...but such designs are possible. (There are always trade-offs, of course.)

Comment Re:AI is not very intelligent and not improving. (Score 1) 146

IIUC, the public-facing AIs have intentionally been set not to do permanent learning from current data. This was a decision made because of experiences like Microsoft's Tay chatbot.

It's definitely a decision that limits the AIs, but the "malice" of various people made it rather necessary. They *do* need to find a better way to handle the problem, but it's not proof that the AI is fundamentally limited. That's a "late addition", like the other "guardrails".

OTOH, a better demonstration of the current (well, last year's) state of the art would be https://aivillage.org/ . They currently seem to lack much "common sense".
