
Comment Re:What do we want? (Score 1) 170

Do you see Exxon-Mobil going around saying climate change could kill us all? This talking point -- "corporations are just trying to scare the public about the product they themselves are creating" -- often comes from the big-business people who stand to benefit the most from AI, people like billionaire Marc Andreessen and his Techno-Optimist Manifesto. His logic seems to go like this:

1. AGI can lead to utopia!
2. ???
3. AGI can't lead to apocalypse or dystopia! Just Profit!

How exactly do companies making the most powerful AI models -- companies most likely to build AGI -- benefit from saying that worst case AGI is, as Sam Altman put it, "lights out for all of us"?

Granted, there are also non-rich e/accs pushing the same line, dismissing the idea that any risk exists at all, e.g. 'no need to worry about creating "zombie" forms of higher intelligence, as these will be at a thermodynamic/evolutionary disadvantage' (Beff Jezos). Also: "e/acc has no particular allegiance to the biological substrate for intelligence and life" -- okay, fine, but the logic that says there's no risk is just bonkers.

Reality: "There are plausibly 100,000 ML capabilities researchers in the world (30,000 attended ICML alone) vs. 300 alignment researchers in the world, a factor of ~300:1. The scalable alignment team at OpenAI has all of ~7 people." leopold @ OpenAI

I think Sam Altman does what he does not because he thinks AGI isn't potentially extremely dangerous, but because he thinks the risk is worth taking in exchange for a bigger chance at utopia, that not building it would be even more risky (??), and that AGI requires immense amounts of compute, which decreases the danger (I disagree on all counts). It's also, I think, that he has huge trust in himself as CEO of OpenAI to do AGI "the right way" -- as if unaware that his own actions have encouraged a dynamic of global competition to create AGI as fast as possible.

I trust Sam Altman more than most to build safe AGI. But I think Altman is charting an unnecessarily dangerous route, and I trust Yann LeCun and some others in this space much less.

Comment Re:Not a chance to stop the juggernaut (Score 4, Interesting) 170

In another message I talked about how I'd really like people to understand the extreme risks of AGI (in contrast to today's tame little disinformation bots). I framed my thoughts in one particular way there, but there are many other ways of looking at it.

In short... misaligned AGIs are kinda like Skynet (minus the time travel and the direct control over nuclear weapons), and aligned AGIs might be a fantastic tool for totalitarian dictators named Xi who are looking to expand their territory. Arguably utopia is more likely than apocalypse, but as the father of two children under two years old, I don't want anyone flipping that coin right now.

A standard response is that nothing can be done, but people don't usually talk that way about global warming even though CO2 emissions have increased almost every year for the last seventy years and just hit a new peak. If you see the risk and really internalize it, you might not conclude so quickly that nothing can be done. Consider some of the things that have been banned pretty much globally, with some success, in the past:

  • Human cloning
  • Human germline editing
  • Ozone-depleting CFCs
  • Bioweapons research (Biological Weapons Convention)
  • Nuclear weapon tests (Comprehensive Nuclear-Test-Ban Treaty)
  • Kiddy porn
  • Military invasions -- no, seriously

Such bans are not perfect, but far fewer nuclear bombs explode now than used to, and the Dominican Republic hasn't invaded Haiti, even though Haiti's government is MIA, which would seem like the ideal time to strike.

What we are doing with AI, though, is the diametric opposite of a ban: pouring literally billions of dollars into companies like OpenAI, whose mission statement says "Anything that doesn't help with [AGI] is out of scope". I'm not sure how exactly to discourage investment in AGI, but have you seen how the SEC is cracking down on cryptocurrency trading platforms -- not because there is any law specifically against them, but because the SEC decided that many crypto tokens are "unregistered securities"? Point being: where political will exists, things get done, sometimes even without a law on the books.

See also: AI Notkilleveryoneism Memes "Techno-optimist, but AGI is not like the other technologies."

Comment Re:What do we want? (Score 1) 170

If you visit their hangouts (plural), I hope you will see these are not run-of-the-mill protestors.

The goal is clear enough: delay AGI development for as long as possible. How exactly we should do that--given that the precursor AIs that lead to AGI are currently benefitting much of society, and AGI itself will probably also (greatly) benefit society at first--is hard to decide.

Ideally we'd like people to simply understand that building AGI (in contrast to the AIs that exist now) is extraordinarily risky -- more risky than nuclear weapons, for example -- but it's such a complex topic that many of us have difficulty communicating all the ways things can go south. Here are some of my thoughts on the matter. Others have different ways of framing it; Matt Yglesias, for example, suggests Terminator analogies. I could go on all day about the Space of Mind Design, the Orthogonality Thesis, reasons to expect ASI to follow soon after AGI, etc., but the inferential distance is too large to cross in public messaging.

So basically... misaligned AGIs are like Skynet, and aligned AGIs might be a fantastic tool for totalitarian dictators named Xi who are interested in endlessly expanding their own power and territory. If catastrophe or severe dystopia does not occur, I expect we will end up in a utopia, though probably not one controlled by human beings.

Assuming I can't persuade anybody of that, what else can we do? Maybe mumble something about AIs coming for your jobs and how Republicans will never vote for UBI? To me that sort of message misses the point completely, but if it appeals to anyone here, I'd very much encourage you to run with it! Every little talking point helps.

Comment Why in such a low-production location? (Score 1) 134

That area doesn't really generate a whole lot of pollution. A better place to run this would be China, India, or any major US population center. Those regions have plenty of sites suitable for underground storage, geothermal energy near some of those sites, and far more land area than Iceland to put the storage in.

Comment Re:Uh oh, (Score 2) 52

"Customer restored from a backup done to another cloud provider."

HOLY SHIT, someone at a financial institution actually followed some form of best practice? I'm almost having a heart attack from the sheer shock here.

Shame they weren't making nightly backups to more than one location, but at least they're only a week out.
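
For the curious, here's a minimal sketch of what "nightly backups to more than one location" might look like, in Python for illustration. Every path and destination name below is a hypothetical placeholder, and a real setup would use a purpose-built backup tool; the point is just the principle of same dump, two independent providers.

    #!/usr/bin/env python3
    """Minimal sketch: copy one nightly dump to two independent destinations."""
    import shutil
    import sys
    from datetime import date
    from pathlib import Path

    # Hypothetical source dump produced earlier in the night.
    DUMP = Path("/var/backups/db") / f"dump-{date.today().isoformat()}.sql.gz"

    # Hypothetical destinations -- in practice, mounts or sync targets backed
    # by *different* providers, so one outage can't take out both copies.
    DESTINATIONS = [Path("/mnt/backup-provider-a"), Path("/mnt/backup-provider-b")]

    def main() -> int:
        if not DUMP.exists():
            print(f"missing dump: {DUMP}", file=sys.stderr)
            return 1
        failures = 0
        for dest in DESTINATIONS:
            try:
                dest.mkdir(parents=True, exist_ok=True)
                shutil.copy2(DUMP, dest / DUMP.name)  # copy, preserving metadata
                print(f"copied {DUMP.name} -> {dest}")
            except OSError as exc:
                # One provider being down shouldn't stop the other copy.
                failures += 1
                print(f"failed to copy to {dest}: {exc}", file=sys.stderr)
        return 1 if failures else 0

    if __name__ == "__main__":
        sys.exit(main())

Run that from cron every night and the provider risk is at least decorrelated: losing both copies now takes two unrelated failures.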

Comment Yeah, well... (Score 1) 57

Some people question whether interacting with AI replicas of the dead is actually a healthy way to process grief

How about these assholes process their grief their way, and the rest of us will choose our own paths without them pretending to be our parents or guardians?

If your life consists of trying to figure out how to restrict the ways other people relate to their losses, your life is a net loss to society. Or, more succinctly, you're a shithead.

Comment Re:Face/finger vs pwd (Score 1) 80

If someone was willing to physically overpower me and force me to unlock my device, surely they'd be willing to hold a knife/gun to me to get a password

I believe you've forgotten about the various police forces. They're not likely to outright torture you (in the USA, anyway), but they are likely to physically manipulate you into unlocking your device(s). The police do all kinds of underhanded things here.
