
Comment what about the politics (Score 1) 183

I agree with you that the author does not pay enough attention to the science. How risky creating an AGI is, is a scientific question; how much effort to spend preventing AGI is a political question. And there are some interesting things going on outside the scientific realm. There were good scientific reasons to estimate that the existential risk of turning on the LHC was under 1 in 50 million, but people still worried about it. Yet here we are, with some of the people building AGI estimating that the existential risk is above 1%, and still building AGI. What on earth is going on here politically?

Comment Stopping AGI still possible, but barely (Score 1) 183

I agree with "When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future." but I think the author is underestimating how hard actually stopping AGI will be. The basic problem is that computers capable of running AGI are probably already here, and already widespread. Eliezer Yudkowsky estimated that AGI could be done on a home computer from 1995. Steve Byrnes estimated that AGI could probably be done with an NVIDIA RTX 4090 and 16 GiB of RAM. As for myself, I think Yudkowsky and Byrnes are making reasonable claims, and you might have to restrict hardware to circa-1985 home computer levels to be sure that AGI can't run on it. If you think a home computer can't run an AGI, then I recommend trying Ollama or llama.cpp on your own computer with gemma3:1b or gpt-oss-20b (gemma3 requires about 4 GiB, gpt-oss about 16 GiB). I don't think LLMs are the most efficient way of doing AI, but even they can more or less pass as intelligent (if not quite human). And people are running AI on much more powerful computers.

So what would it take to stop AGI? Basically: stop using powerful computers for experimental AI, stop publishing AI research that lowers the hardware requirements, and do this globally and before AGI is created. I think removing existential risk is a good thing, but we have to realize that this would be the most difficult political accomplishment humans have ever attempted. Decreasing the probability of creating ASI is probably a bit simpler, but would still be a hard challenge. (MIRI's proposal)

Comment Soon, because a desktop computer can do AGI (Score 2) 49

I suspect it will be soon, because powerful desktop computers probably can already do AGI.

Eliezer Yudkowsky predicted that a superintelligent AGI could be done on a "home computer from 1995" https://intelligence.org/2022/...

Steve Byrnes predicted (with 75% probability) that human-equivalent AGI could be done with 10^14 FLOP/s and 16 GiB of RAM https://www.alignmentforum.org...

I have done some back-of-the-envelope calculations and think 500 GFLOP/s and 1 GiB of RAM could probably support an independence-gaining AGI. https://www.researchgate.net/p...

So I think it is just a matter of figuring out the computer program to do so.
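For a sense of scale, here is a quick comparison of those estimates against a single modern consumer GPU. The ~8.3e13 FP32 FLOP/s figure for the RTX 4090 is my approximation from NVIDIA's published specs, not something from the estimates themselves:

```python
# Rough comparison of the AGI compute estimates above with desktop hardware.
byrnes_flops = 1e14        # Byrnes: ~10^14 FLOP/s (plus 16 GiB of RAM)
my_flops = 500e9           # my estimate: 500 GFLOP/s (plus 1 GiB of RAM)
rtx_4090_flops = 8.3e13    # approximate FP32 throughput of one consumer GPU

# One 4090 is already close to Byrnes's bar, and two orders of
# magnitude above my lower estimate.
print(rtx_4090_flops / byrnes_flops)   # a bit under 1x
print(rtx_4090_flops / my_flops)       # well over 100x
```

So on these numbers, the hardware side of the problem looks essentially solved; only the software is missing.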

Comment Re:if it's "general" (Score 1) 96

That is a good question. I think Alan Turing was on the right track when he proposed using a conversation. However, the point should not be for the AGI to try to be human, but instead to be intelligent. When the AGI can answer any question intelligently, then the AGI probably is intelligent.

Alternatively, we will know the AGI is sufficiently general when the AGI takes over the world.

Comment Not really a problem (Score 1) 99

I did some calculations about dumping the tritium at Fukushima into the ocean. There are 760 TBq of tritium in the Fukushima water. That is 20,540 Ci (760e12/3.7e10). The EPA limit for drinking water is 20,000 picocuries/liter, or 2.0e-8 Ci/liter, so if you dilute the tritium in a bit more than 1 trillion liters of water, the water would be safe to drink, so far as tritium is concerned (20540/2.0e-8). There are a trillion liters in a cubic kilometer, so even if you dumped all the water in at once, as soon as you are a couple kilometers away from the dump site the water would be within the safe drinking limit for humans (ignoring the fact that we can't drink salt water). So I think putting a controlled amount in the water (to keep the dose at the dump site reasonable) is fine. Also, tritium has a roughly 12.3-year half-life, so it will go away over time (in 130 or so years there will be about a thousandth of the tritium left).
(Sources: https://en.wikipedia.org/wiki/... https://www.nrc.gov/reading-rm... ) (These are of course my own opinions, not my employer's and have not been reviewed by a professional engineer.)
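The arithmetic above can be checked in a few lines (constants as given in the comment; the 12.3-year half-life is the standard figure for tritium):

```python
# Sanity check of the tritium dilution figures above.
total_bq = 760e12                  # 760 TBq of tritium in the Fukushima water
bq_per_ci = 3.7e10                 # becquerels per curie
total_ci = total_bq / bq_per_ci    # ~20,540 Ci

epa_limit_ci_per_liter = 20000e-12        # 20,000 pCi/L = 2.0e-8 Ci/L
liters_to_dilute = total_ci / epa_limit_ci_per_liter  # ~1e12 L
cubic_km = liters_to_dilute / 1e12        # 1 km^3 of water = 1e12 L

# Decay: after 130 years (a bit over ten 12.3-year half-lives),
# less than a thousandth of the tritium remains.
half_life_years = 12.3
fraction_left = 0.5 ** (130 / half_life_years)

print(f"{total_ci:.0f} Ci, {cubic_km:.2f} km^3 to dilute, "
      f"{fraction_left:.1e} of the tritium left after 130 years")
```

So about one cubic kilometer of seawater suffices to dilute the entire inventory to the drinking-water limit, which is the comment's claim.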

Comment How do you make friendly AI? (Score 2) 311

The problem is that we don't know how to make friendly AI. At some point, artificial intelligences will be able to beat humans at any task; at that point, how do you make sure they don't destroy humanity (possibly through indifference)? Even if you don't care about humanity, how do you make sure they do something interesting with the universe?

Various articles:
Stuart Armstrong's book Smarter Than Us discusses what happens when machines are smarter than humans:
https://intelligence.org/smart...
http://jjc.freeshell.org/Smart...
Bill Joy's article Why the Future Doesn't Need Us, on the dangers of robotics:
https://www.wired.com/2000/04/...
Tim Urban's article on superintelligence:
http://waitbutwhy.com/2015/01/...
http://waitbutwhy.com/2015/01/...
