AI

Elon Musk Says All Advanced AI Development Should Be Regulated (techcrunch.com) 70

Tesla and SpaceX chief executive Elon Musk said on Monday that "all org[anizations] developing advance[d] AI should be regulated, including Tesla." Musk was responding to a new Technology Review report on OpenAI, an organization founded in 2015 by Musk, along with Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba and John Schulman. From a report: At the time of its founding in 2015, Musk posited that the group essentially arrived at the idea for OpenAI as an alternative to "sit[ting] on the sidelines" or "encourag[ing] regulatory oversight." Musk also said in 2017 that he believed regulation should be put in place to govern the development of AI, preceded first by the formation of some kind of oversight agency that would study and gain insight into the industry before proposing any rules. In the intervening years, much has changed -- including OpenAI. The organization officially formed a for-profit arm owned by a non-profit parent corporation in 2019, and it accepted $1 billion in investment from Microsoft along with the formation of a wide-ranging partnership, seemingly in contravention of its founding principles.

Musk's comments this week in response to the MIT profile indicate that he's quite distant, both ideologically and in a more practical, functional sense, from the organization he co-founded. The SpaceX founder also noted that he "must agree" that concerns about OpenAI's mission expressed last year at the time of its Microsoft announcement "are reasonable," and he said that "OpenAI should be more open." Musk also noted that he has "no control & only very limited insight into OpenAI" and that his "confidence" in Dario Amodei, OpenAI's research director, "is not high" when it comes to ensuring safe development of AI.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward
    How could you regulate software development unless you have spies on everyone's computer? DMCA already tried to regulate software repair and it turned out to be a complete failure because it can only be enforced in the tiny corner case of someone facing the market. If I'm developing AI or writing any other sort of software, you don't have any way to know whether or not I'm doing the-thing-you-want-to-regulate unless you're always watching me.
    • by Sigma 7 ( 266129 )

      How could you regulate software development unless you have spies on everyone's computer?

      This is done by controlling the operating system or hardware. The various consoles were completely regulated, which basically locked down what most people could do with them. On the other hand, computers such as the Commodore 64, Amstrad CPC and IBM PC were unregulated, allowing anyone to create something (including computer viruses).

      IBM PCs were unregulated to a degree where boot sector viruses were spread. It took u

    • Original Subject: line was suitable for a first post, but who cares what an AC thinks? Ergo, I feel free to restart. Or should I make a joke about not talking to strangers and you can't get stranger than some of the ACs on Slashdot?

      The story summary strikes me as another example of celebrity incompetence extended beyond the limits of expertise. Elon Musk is a special case because his celebrity is largely based on actual competence. There are an increasing number of noisy celebrities who are best defined as

    • There are a number of ISO standards on software.
      There are also other forms of regulation. For health care there is what is called Meaningful Use, in which software has to provide certain features in order to be considered. For example, an Electronic Health Record system is supposed to give alerts if a drug could react with a patient's allergy or with another drug they are taking.
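
      The drug-allergy alert described above boils down to a set intersection between a patient's recorded allergy classes and the classes a prescribed drug can trigger. A minimal sketch, with an invented toy knowledge base (real EHR systems draw on curated clinical databases, not a hard-coded dict):

```python
# Hypothetical sketch of a Meaningful Use-style drug-allergy alert.
# The drug names and allergy classes here are invented for illustration.

# Toy knowledge base: drug -> allergy classes it can trigger.
DRUG_ALLERGY_CLASSES = {
    "amoxicillin": {"penicillin"},
    "ibuprofen": {"nsaid"},
}

def allergy_alerts(patient_allergies, prescribed_drug):
    """Return the patient's allergy classes that conflict with the drug."""
    risky = DRUG_ALLERGY_CLASSES.get(prescribed_drug, set())
    return sorted(risky & set(patient_allergies))

# A penicillin-allergic patient prescribed amoxicillin should raise an alert.
print(allergy_alerts({"penicillin", "latex"}, "amoxicillin"))  # -> ['penicillin']
print(allergy_alerts({"latex"}, "ibuprofen"))                  # -> []
```

      The certification criteria also cover drug-drug interactions, which would be the same lookup-and-intersect pattern over a pairwise interaction table.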

    • If I'm developing AI or writing any other sort of software, you don't have any way to know whether or not I'm doing the-thing-you-want-to-regulate unless you're always watching me.

      Good luck if you can pull that off by yourself.
      "Three may keep a secret, if two of them are dead."
      — Benjamin Franklin

      Where things get easier to regulate is when you need to hire and pay a team of researchers and acquire time on rather costly equipment to train and experiment.

      I think some of this is closer to 3D printing of firearms than DMCA. There are some technical hurdles still to be resolved. And if you scale up the operation in order to turn a profit it starts to become more obvious to authoritie

  • by beachmike ( 724754 ) on Tuesday February 18, 2020 @12:15PM (#59739624)
    I can't think of any better way to stifle innovation than to have the government regulate AI development.
  • Nice combination (Score:5, Interesting)

    by Geoffrey.landis ( 926948 ) on Tuesday February 18, 2020 @12:16PM (#59739630) Homepage

    An interesting combination of articles posted on /.: this one about the Open AI project, and half an hour ago, one about the same project that draws almost the opposite conclusion [slashdot.org]

  • by jellomizer ( 103300 ) on Tuesday February 18, 2020 @12:17PM (#59739634)

    Musk has a lot of character flaws, but he is not stupid. AI development is important to a lot of his harebrained businesses, so pushing for regulations implemented to benefit his companies is a win-win. It puts his competitors at a disadvantage, since they have a hurdle to clear in changing their methods, and he looks good to the public as actually caring about the risks of such technologies.

    Musk already has a lot of money and advanced organizations, so he can deal with the regulations much better than some upstart company trying to unseat his companies.

    A lot of regulation and deregulation is pushed by interest groups with an agenda, rather than by what is best for the public as a whole.
       

    • My thoughts exactly. It raises the bar for the startups to have to get over.
    • Established Company: Our technology is potentially dangerous. We need regulation to make sure we don't do something stupid.

      Government: Sure thing. What you are doing looks fine. If anyone else wants to do something similar they'll have to jump through these regulatory hoops to prove it's as safe as your thing. You're going to help write those regulations too, right?

      Established Company: We'd love to.

      • Actually it is more like this:

        Established Company: Our technology is potentially dangerous. We need regulation to make sure we can point to the government when we do something stupid.

        So in the real-life situation where the AI must choose between crashing into a nun or into a trolley full of people, and it instead decides to explode on the spot, taking out a city block except for the nun and the trolley, they can point to the fact that the AI met all the regulatory guidelines and created a novel solution, where the two avai

        • Depends on the regulatory regime. For instance, if you are a drug company and clearly state on the label that there are side effects, you can still be sued for those side effects, even though you were following FDA guidelines to a T. A pharmaceutical company that made chemotherapy drugs got sued because their drug caused hair loss. Because you wouldn't have taken your anti-cancer drug if you had known it caused hair loss, I guess?

          And, don't get me started on the FDA complaints process for medical devices. U

          • by djinn6 ( 1868030 )

            A pharmaceutical company that made chemotherapy drugs got sued because their drug caused hair loss.

            While that seems like a frivolous lawsuit, was the company judged to be in the wrong and had to pay damages? Because anyone can sue anyone for anything, no matter how ridiculous. I can sue you for hair loss, even though you don't know me and I don't have hair loss. It could still make the news tomorrow.

    • by AmiMoJo ( 196126 ) on Tuesday February 18, 2020 @01:13PM (#59739806) Homepage Journal

      Tesla isn't using real AI, as defined by Slashdot as actual intelligence. They are using image recognition to spot objects and road limits/markings, and then using an algorithm to plot a route for the car to follow. Image recognition uses a neural net but it's nothing special.

      Everyone else is doing basically the same thing with their driver assistance features. Nissan has a similar but hands-free system in Japan, Lexus announced that theirs will launch this year. Google/Waymo of course uses lidar but it's essentially the same - image and lidar based object recognition plus an algorithm that decides what action to take.
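
      The two-stage pipeline described above (neural-net perception, then an algorithmic planner) can be sketched as plain Python. Everything here is invented for illustration; the functions stand in for systems that in reality involve large models and far more state:

```python
# Illustrative sketch of a perception -> planning driver-assistance loop.
# detect_obstacles() stands in for a neural-net object detector; the
# planner is a deliberately trivial algorithm. All names and numbers
# are made up; no real system works like this toy.

def detect_obstacles(frame):
    """Stand-in for image/lidar recognition: return lane offsets (meters
    from lane center) of detections above a confidence threshold."""
    return [obj["lane_offset"] for obj in frame if obj["confidence"] > 0.5]

def plan_steering(obstacles, lane_half_width=1.5):
    """Trivial planner: steer away from the nearest obstacle, capped at
    the lane edge; hold the center when the lane is clear."""
    if not obstacles:
        return 0.0  # keep straight
    nearest = min(obstacles, key=abs)
    # Move toward the side opposite the obstacle, up to the lane edge.
    return -lane_half_width if nearest > 0 else lane_half_width

frame = [{"lane_offset": 0.4, "confidence": 0.9},   # obstacle slightly right
         {"lane_offset": -2.0, "confidence": 0.3}]  # low-confidence, ignored
print(plan_steering(detect_obstacles(frame)))  # -> -1.5 (steer left)
```

      The point the comment makes survives the simplification: the "intelligence" lives in the recognition step, while the decision step is ordinary, inspectable code.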

      • by Kjella ( 173770 )

        Tesla isn't using real AI, as defined by Slashdot as actual intelligence. They are using image recognition to spot objects and road limits/markings, and then using an algorithm to plot a route for the car to follow.

        Of course if you take the 30,000 foot view and say the two processes are "interpret" and "react" it's a good question what humans do differently. You'd probably want to put "predict" in there somewhere, but self-driving cars have predictive models too. Not nearly as fancy as ours, but honestly interpreting others is not nearly as creative a process as coming up with something novel. Usually it's either to pick up cues like is this car about to run a red light or it's general clues like that this person is d

    • Musk has a lot of character flaws, but he is not stupid. AI development is important to a lot of his harebrained businesses, so pushing for regulations implemented to benefit his companies is a win-win. It puts his competitors at a disadvantage [...]

      Just about everyone who thinks deeply about the ramifications of AI eventually comes to the conclusion that it will cause a lot of suffering for mankind. Stephen Hawking thought AI would result in the end of the human race [youtube.com], Neil deGrasse Tyson thinks the same [express.co.uk], and the list goes on.

      I dabble in AI research myself, and came to a similar conclusion. I can draw a straight line of good intentions from right where we are into a future dystopia. It's not that hard.

      An alternative explanation of the OP: Maybe Musk ha

      • by djinn6 ( 1868030 )

        So 2 physicists think they have the answer to a computer science question? If that's not an appeal to (the wrong) authority, I don't know what is.

        AI, when it exists, does not need to have a motivation to do anything. Humans are curious, self interested and easily bored, which causes them to do things others don't like. It's possible to create an AI that does not have those things. In that case, it will do nothing unless it receives human instruction. It's also possible to create several AIs that has only on

    • by leonbev ( 111395 )

      If it's such a great idea, let's make Tesla's self-driving AI the first victim... er... "beneficiary" of these new regulations. By the time their 2nd-generation Autopilot system passes through all of the red tape, their competitors should have caught up to or surpassed them.

  • What's up with this trend of asking for the regulation of your sector?
    • by Anonymous Coward

      Larger companies can probably handle regulations easily, write the regulations, or lobby against any regulations harmful to them. This makes smaller companies toast.

      They rode the elevator up, and are now demolishing the escalator, pulling the ladder up, and restricting access to the elevator to themselves.

      And anybody coming close with a rope, helicopter, powered pogo stick, parachute, rocket ship, etc, they just buy and/or bury.

    • No one wants competition. Fake "regulations" are the way to achieve this. No new entrants can afford to meet these arbitrary "regulations". MORE FREE GOVERNMENT MONEY TOO!

    • by Dutch Gun ( 899105 ) on Tuesday February 18, 2020 @12:54PM (#59739744)

      It serves three purposes that I can see:

      First, it's a pre-emptive defensive strategy to deflect blame when anything goes wrong in the future. "Hey, look, I was ASKING for regulatory help years ago, but no one listened..."

      Second, heavy regulation is something that established companies can work with, but it tends to keep smaller startups locked out of the market. You can be sure any rules established will be too onerous for any but the biggest players to follow. This is the essence of regulatory lock-in.

      Third, it's good PR / virtue signalling for no real cost, other than to look even more cynical to people who see it for the do-nothing clown show it is. It's sort of like billionaires bemoaning the fate of the working class and asking to be taxed more... without actually volunteering to send in a dime more than they have to.

      • You can be sure any rules established will be too onerous for any but the biggest players to follow.

        I don't disagree at all here. Perhaps the common good is served by the "biggest players" suppressing would-be disrupters. The taxi industry was tightly regulated across the country to minimize traffic & pollution. Along comes Uber, looking to disrupt the industry for their own financial gain by exploiting the commons. So boldfaced is their unbridled capitalism that they overtly stated their plan from the begi

    • by shanen ( 462549 )

      Evasion of liability. I posted a long screed on the topic in the story on Zuck's request for regulation of Facebook.

      Much as I dislike Microsoft, I admit that you have to give them credit for legal innovation in creating the super-EULA to escape all liability for their technical incompetence, which is extremely dangerous when it comes to computer security. Actually, most of the threat of this topic is linked to Microsoft's escape from responsibility. I sincerely believe that if Microsoft were liable for thei

      • by djinn6 ( 1868030 )

        If Microsoft had been unable to dodge liability, then some other company would go and do those risky things. They would have more features, be "good enough", and take over the software market. And if they just happened to be not based in the US and not subject to US regulations, then that's that.

        • by shanen ( 462549 )

          You sound rather incoherent, so mostly I'm dismissing your comment as typical mindless Slashdot negativism or whatever.

          Still, I'll go ahead and ask one topical question:

          What would be wrong with obtaining features from various other sources based on your free choice of the features you actually want and the company that offers the best value for each feature?

          I've tried to make the question as specific as possible, though there are meta-questions involved, too... Can you construct a clear and coherent example

  • Everyone, without exception, is vulnerable to the universal laws of stupidity, and Musk here proves he is no exception.

    If you regulate AI, then it will NEVER become an AI. I cannot understand why people think that you can create an AI and be constrained by rules... if it's an AI then it can undo any rule you create for it... not only that... but how do you think you are going to get close to actual AI with a bunch of fucking rules in the way?

    And of course... here we go again... calling shit that is not AI fuc

    • If you regulate AI, then it will NEVER become an AI. I cannot understand why people think that you can create an AI and be constrained by rules...

      I can't figure out if he is serious or not. If he is serious he is a very very confused person.

      • We are all confused people... this one just happens to be in a position of power.

        I already notice that we have some downvotes for the truth... as usual. Reminds me of that joke I saw running around on twitter from a year ago. https://twitter.com/bamafangrl... [twitter.com]

        Do y'all remember, before the internet, that people thought the cause of stupidity was the lack of access to information?
        Yeah. It wasn't that.

        • by djinn6 ( 1868030 )

          Elon Musk is the new Steve Jobs. He can do no wrong.

          It's human nature to want to believe in a higher power, which is why religion is still huge even with modern-day science around; and even in Communist countries, which are supposedly atheist, all they do is replace Jesus or whatever with their leader.

      • Comment removed based on user account deletion
  • Good luck competing with China then.
    • Yea, about that... there is no competing with them. The virtue signaling world cowards... I mean world leaders are too busy sucking China's cock to hear anything or see anything... notably seeing Uighurs being oppressed... but hey... cheap labor and products for our peeps and business owners that help get us elected into positions of power as the ignorant voters cheer us on!

  • Big companies always want regulatory lock in to prevent some smaller more nimble startup from disrupting their cash cow business. They just cash the checks and don't have to innovate much anymore. This year we have rounded corners, next year we'll innovate square corners, a few years later we can innovate rounded corners again.
    • This right fucking here!!! Nothing else really needs to be said. It is always this, and it will never ever be anything other than this. Even the most well-meaning of regulations will only ever result in this, because even the regulators want to be lazy and regulate one business instead of 500, so of course the first step is to eliminate the free market. The free market gives consumers too much power, which neither corporations nor governments appreciate.

      The 1st gen regulat

  • The coronavirus was probably a laboratory mistake, regardless of the original intent.
    One day soon we will get a "laboratory mistake" that will bring us a virulent AI, and we are woefully unprepared for the result.

  • We can trust the government to keep them safe. Especially Canada and China.

  • by JeffOwl ( 2858633 ) on Tuesday February 18, 2020 @01:27PM (#59739844)
    The development of AI does not need to be regulated. The application of AI to some things should be regulated. Much like we regulate humans who drive automobiles, we need to regulate AI driving automobiles. We require licenses for people who do all sorts of things, from being a stock broker to practicing medicine. That is the level that should be regulated. This is just by way of example; it may not line up 100% with things we license people to do. And to expand on that, we already have standards for software that does some things, like flying airplanes (e.g. DO-178C). Would this be any different for those things? Right now, under many of those standards, some of the stuff done by AI is actually prohibited, like modifying its own code.
  • Lots of things need to be regulated, but how would you even regulate AI? What even meets the definition of AI? And, aside from Elon's opinions, why should it be regulated? AI aside, I would be interested to see how long after SpaceX launches all of its satellites that Elon starts asking for regulation of satellite launches.

  • It's not even Real AI, it's branded half-assed crap that shouldn't even see the light of day beyond some computer lab somewhere. It's all garbage, it really doesn't work very well, and marketing departments are trying desperately to spin it into something it's not, just to get us to pay them for it. It's about as real as an actor in a robot suit. [typepad.com]
    Seriously, enough already.
    Come back when you have Real AI and not the ersatz.
    • Sounds like YOU're just *begging* for some "regulation" here pal, move along...
  • The FBI was spying on the president and trying to keep him out of office.

    The government should be regulated and development should be FREE.
  • Should note that the self-driving car technology in Teslas is a weak form of AI. Maybe Musk freaked out after his technology actually killed someone.

  • It's only a matter of time before someone hooks up an AI to weapons, utilities, routing for emergency services, logistics for the food supply, and voting.

    Whether deliberately with good intentions or through malice, once an AI is hooked up to these things, we can only hope that someone's code isn't going to do things we don't want. And by the time we figure out something is wrong, it may be too late.
  • Who regulates the regulators? Who is really qualified to understand the code when AI starts writing its own code?

    Bad actors don't follow gun laws, bad actors think less of humanity when they weaponize viruses, bad actors give two shits about your world behind rose colored glasses. AI will give two shits less than the two shits not given by bad actors.
  • It's a great idea because the minute all advanced AI is regulated, every single one of these companies will immediately proclaim that they are just using basic search and pattern matching engines... ...and that they aren't subject to the regulation because what they're doing is not really "advanced AI" at all.

    As a result the whole space would be just a little more honest. :)

  • Elon wants to be regulated. Bring on the regulators. Seems most activities involving commerce are regulated.
