Comment Soon, because desktop computers can do AGI (Score 2) 49

I suspect it will be soon, because powerful desktop computers can probably already run an AGI.

Eliezer Yudkowsky predicted that a superintelligent AGI could run on a "home computer from 1995": https://intelligence.org/2022/...

Steve Byrnes predicted (with 75% probability) that a human-equivalent AGI could run on 10^14 FLOP/s and 16 GiB of RAM: https://www.alignmentforum.org...

I have done some back-of-the-envelope calculations and think 500 GFLOP/s and 1 GiB of RAM could probably support an independence-gaining AGI: https://www.researchgate.net/p...

So I think it is just a matter of figuring out the computer program to do so.
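
As a rough sanity check (the desktop figures below are my own ballpark assumptions, not taken from the linked sources), you can compare those estimates against what a current high-end desktop offers:

# Compare the cited AGI compute estimates against an assumed modern desktop.
# The desktop numbers are ballpark assumptions: a high-end consumer GPU is on
# the order of 1e14 FLOP/s in reduced precision, with 32+ GiB of system RAM.
estimates = {
    "Byrnes, human-equivalent AGI": {"flops": 1e14, "ram_gib": 16},
    "My estimate, independence-gaining AGI": {"flops": 500e9, "ram_gib": 1},
}
desktop = {"flops": 1e14, "ram_gib": 32}

for name, req in estimates.items():
    print(f"{name}: desktop has {desktop['flops'] / req['flops']:.0f}x the FLOP/s "
          f"and {desktop['ram_gib'] / req['ram_gib']:.0f}x the RAM of the estimate")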

Comment Re:if it's "general" (Score 1) 96

That is a good question. I think Alan Turing was on the right track when he proposed using a conversation. However, the point should not be for the AGI to try to pass as human, but to be intelligent. When the AGI can answer any question intelligently, it probably is intelligent.

Alternatively, we will know the AGI is sufficiently general when the AGI takes over the world.

Comment Not really a problem (Score 1) 99

I did some calculations about dumping the tritium at Fukushima into the ocean. There are 760 TBq of tritium in the Fukushima water. That is 20,540 Ci (760e12 / 3.7e10). The EPA limit for drinking water is 20,000 picocuries/liter, or 2.0e-8 Ci/liter, so if you dilute the tritium into a bit more than 1 trillion liters of water, the water would be safe to drink, so far as tritium is concerned (20,540 / 2.0e-8). There are a trillion liters in a cubic kilometer, so even if you dumped all the water in at once, by the time you are a couple of kilometers away from the dump site the water would be within the safe drinking limit for humans (ignoring the fact that we can't drink salt water). So I think putting a controlled amount into the ocean (to keep the dose at the dump site reasonable) is fine. Also, tritium has a 12-year half-life, so it will go away over time; in 130 or so years only about a thousandth of it will remain.
(Sources: https://en.wikipedia.org/wiki/... https://www.nrc.gov/reading-rm... ) (These are of course my own opinions, not my employer's, and have not been reviewed by a professional engineer.)
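
For anyone who wants to check that arithmetic, here it is as a short Python script; the numbers are just the ones quoted above, so treat it as a sanity check rather than an authoritative dose assessment.

# Sanity check of the tritium dilution arithmetic above.
BQ_PER_CI = 3.7e10                      # becquerels per curie

tritium_bq = 760e12                     # ~760 TBq of tritium in the stored water
tritium_ci = tritium_bq / BQ_PER_CI     # ~20,540 Ci

epa_limit_ci_per_liter = 20000e-12      # EPA drinking-water limit: 20,000 pCi/L = 2.0e-8 Ci/L
liters_needed = tritium_ci / epa_limit_ci_per_liter   # ~1.03e12 L
liters_per_km3 = 1e12                   # 1 km^3 = 1e9 m^3 = 1e12 L

print(f"Total activity: {tritium_ci:,.0f} Ci")
print(f"Dilution volume: {liters_needed:.2e} L (~{liters_needed / liters_per_km3:.1f} km^3)")

# Decay: with a ~12.3-year half-life, roughly a thousandth remains after ~130 years.
half_life_years = 12.3
years = 130
print(f"Fraction left after {years} years: {0.5 ** (years / half_life_years):.5f}")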

AI

DeepMind's AI Agents Exceed 'Human-Level' Gameplay In Quake III (theverge.com) 137

An anonymous reader quotes a report from The Verge: AI agents continue to rack up wins in the video game world. Last week, OpenAI's bots were playing Dota 2; this week, it's Quake III, with a team of researchers from Google's DeepMind subsidiary successfully training agents that can beat humans at a game of capture the flag. DeepMind's researchers used a method of AI training that's also becoming standard: reinforcement learning, which is basically training by trial and error at a huge scale. Agents are given no instructions on how to play the game, but simply compete against themselves until they work out the strategies needed to win. Usually this means one version of the AI agent playing against an identical clone. DeepMind gave extra depth to this formula by training a whole cohort of 30 agents to introduce a "diversity" of play styles. How many games does it take to train an AI this way? Nearly half a million, each lasting five minutes. DeepMind's agents not only learned the basic rules of capture the flag, but strategies like guarding your own flag, camping at your opponent's base, and following teammates around so you can gang up on the enemy. "[T]he bot-only teams were most successful, with a 74 percent win probability," reports The Verge. "This compared to 43 percent probability for average human players, and 52 percent probability for strong human players. So: clearly the AI agents are the better players."
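
For a sense of what "agents simply compete against themselves" means in practice, here is a minimal Python sketch of population-based self-play in the spirit of the article; the Agent class, play_match, and the update rule are toy placeholders of my own, not DeepMind's FTW implementation.

import random

class Agent:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.policy_params = [random.random() for _ in range(8)]  # toy stand-in for a neural net

    def update(self, reward):
        # Stand-in for a real reinforcement-learning update (e.g. a policy gradient step):
        # nudge the parameters randomly, scaled by the reward signal.
        self.policy_params = [p + 0.01 * reward * (random.random() - 0.5)
                              for p in self.policy_params]

def play_match(team_a, team_b):
    # Stand-in for a five-minute capture-the-flag game; returns +1/-1 team rewards.
    score_a = sum(sum(a.policy_params) for a in team_a) + random.random()
    score_b = sum(sum(b.policy_params) for b in team_b) + random.random()
    return (1.0, -1.0) if score_a > score_b else (-1.0, 1.0)

population = [Agent(i) for i in range(30)]   # a cohort of 30 agents, as in the article

for game in range(1000):                     # the real system ran nearly half a million games
    players = random.sample(population, 4)   # draw a 2v2 match from the population
    team_a, team_b = players[:2], players[2:]
    reward_a, reward_b = play_match(team_a, team_b)
    for agent in team_a:
        agent.update(reward_a)
    for agent in team_b:
        agent.update(reward_b)

The point of the diverse population is that each agent keeps facing opponents with different play styles instead of a single identical clone, which the article credits with producing more robust strategies.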
AI

Ask Slashdot: Could Asimov's Three Laws of Robotics Ensure Safe AI? (wikipedia.org) 235

"If science-fiction has already explored the issue of humans and intelligent robots or AI co-existing in various ways, isn't there a lot to be learned...?" asks Slashdot reader OpenSourceAllTheWay. There is much screaming lately about possible dangers to humanity posed by AI that gets smarter and smarter and more capable and might -- at some point -- even decide that humans are a problem for the planet. But some seminal science-fiction works mulled such scenarios long before even 8-bit home computers entered our lives.
The original submission cites Isaac Asimov's Three Laws of Robotics from the 1950 collection I, Robot.
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The original submission asks, "If you programmed an AI not to be able to break an updated and extended version of Asimov's Laws, would you not have reasonable confidence that the AI won't go crazy and start harming humans? Or are Asimov and other writers who mulled these questions 'So 20th Century' that AI builders won't even consider learning from their work?"

Wolfrider (Slashdot reader #856) is an Asimov fan, and writes that "Eventually I came across an article with the critical observation that the '3 Laws' were used by Asimov to drive plot points and were not to be seriously considered as 'basics' for robot behavior. Additionally, Giskard comes up with a '4th Law' on his own and (as he is dying) passes it on to R. Daneel Olivaw."

And Slashdot reader Rick Schumann argues that Asimov's Three Laws of Robotics "would only ever apply to a synthetic mind that can actually think; nothing currently being produced is capable of any such thing, therefore it does not apply..."

But what are your own thoughts? Do you think Asimov's Three Laws of Robotics could ensure safe AI?


The Military

'Don't Fear the Robopocalypse': the Case for Autonomous Weapons (thebulletin.org) 150

Lasrick shares "Don't fear the robopocalypse," an interview from the Bulletin of the Atomic Scientists with the former Army Ranger who led the team that established the U.S. Defense Department policy on autonomous weapons (and has written the upcoming book Army of None: Autonomous Weapons and the Future of War). Paul Scharre makes the case for uninhabited vehicles, robot teammates, and maybe even an outer perimeter of robotic sentries (and, for mobile troops, "a cloud of air and ground robotic systems"). But he also argues that "In general, we should strive to keep humans involved in the lethal force decision-making process as much as is feasible. What exactly that looks like in practice, I honestly don't know."

So does that mean he thinks we'll eventually see the deployment of fully autonomous weapons in combat?

I think it's very hard to imagine a world where you physically take the capacity out of the hands of rogue regimes... The technology is so ubiquitous that a reasonably competent programmer could build a crude autonomous weapon in their garage. The idea of putting some kind of nonproliferation regime in place that actually keeps the underlying technology out of the hands of people -- it just seems really naive and not very realistic. I think in that kind of world, you have to anticipate that there are, at a minimum, going to be uses by terrorists and rogue regimes. I think it's more of an open question whether we cross the threshold into a world where nation-states are using them on a large scale.

And if so, I think it's worth asking, what do we mean by "them"? What degree of autonomy? There are automated defensive systems that I would characterize as human-supervised autonomous weapons -- where a human is on the loop and supervising its operation -- in use by at least 30 countries today. They've been in use for decades and really seem to have not brought about the robopocalypse or anything. I'm not sure that those [systems] are particularly problematic. In fact, one could see them as being even more beneficial and valuable in an age when things like robot swarming and cooperative autonomy become more possible.

Software

Symantec CEO: Source Code Reviews Pose Unacceptable Risk (reuters.com) 172

In an exclusive report from Reuters, Symantec CEO Greg Clark says the company is no longer allowing governments to review the source code of its software because of fears the agreements would compromise the security of its products. From the report: Tech companies have been under increasing pressure to allow the Russian government to examine source code, the closely guarded inner workings of software, in exchange for approvals to sell products in Russia. Symantec's decision highlights a growing tension for U.S. technology companies that must weigh their role as protectors of U.S. cybersecurity as they pursue business with some of Washington's adversaries, including Russia and China, according to security experts. While Symantec once allowed the reviews, Clark said that he now sees the security threats as too great. At a time of increased nation-state hacking, Symantec concluded the risk of losing customer confidence by allowing reviews was not worth the business the company could win, he said.
Android

Slashdot Asks: Does the World Need a Third Mobile OS? 304

Now that it is evident that Microsoft doesn't see any future for Windows Phone (or Windows 10 Mobile), it has become clear that there is no real or potential competitor left to fight Android and iOS for a slice of the mobile operating system market. Mozilla tried Firefox OS, but that didn't work out either. BlackBerry's BBOS also couldn't find enough takers. Ideally, a market is more consumer-friendly when there are more than one or two dominant players. Do you think some company, or individual, should attempt to create their own mobile operating system?
