AI

Musk Warns AI 'One of the Biggest Risks' To Civilization (cnbc.com) 158

ChatGPT shows that artificial intelligence has gotten incredibly advanced -- and that it is something we should all be worried about, according to Elon Musk. From a report: "One of the biggest risks to the future of civilization is AI," Musk told attendees at the World Government Summit in Dubai, United Arab Emirates, shortly after mentioning the development of ChatGPT. "It's both positive or negative and has great, great promise, great capability," Musk said. But, he stressed that "with that comes great danger."

ChatGPT "has illustrated to people just how advanced AI has become," according to Musk. "The AI has been advanced for a while. It just didn't have a user interface that was accessible to most people." He added: "I think we need to regulate AI safety, frankly. It is, I think, actually a bigger risk to society than cars or planes or medicine." Regulation "may slow down AI a little bit, but I think that that might also be a good thing," Musk added.

This discussion has been archived. No new comments can be posted.

  • by SuperKendall ( 25149 ) on Thursday February 16, 2023 @12:25PM (#63298693)

    Musk has been warning about this for a long time.

    Personally I still think the fear is overblown. AI may upend a lot of jobs, but I don't really think there is much inherent danger from AI - it's just a very powerful tool.

    • by Slicker ( 102588 ) on Thursday February 16, 2023 @12:31PM (#63298727)

      AI without Free Will is just a tool. It is dangerous when used or misused in such ways. However, I think much of people's fear of AI taking over is due to realizations that they would be right. And in that regard, too, it is us humans and not AI that is dangerous.

      A machine is driven strictly by rules. A mind is driven by values and judgement. Humans are evolved with conflicting and brutal drives. AI could, unlike us, not be born into our sin.

      • Maybe we need to mandate incorporation of Asimov's 3 laws of robotics into all AI?

        And for goodness sakes....let's take Terminator and other SF into mind and NOT put them in independent charge of important systems we humans depend upon....

        And remember...everything does not need to be on the network....that's biting us in the ass pretty bad already, let's start unhooking as many things as possible now.

        "When everyone is out to get you, paranoid is just......good thinking"

        --Dr. Johnny Fever

        • by Chysn ( 898420 ) on Thursday February 16, 2023 @03:33PM (#63299443)

          > Maybe we need to mandate incorporation of Asimov's 3 laws of robotics into all AI?

          Asimov proposed three laws of robotics basically in order to knock them down and show why they wouldn't work.

          • by Holi ( 250190 )

            So many people fail to understand that.

          • by vlad30 ( 44644 )

            > Maybe we need to mandate incorporation of Asimov's 3 laws of robotics into all AI?

            Asimov proposed three laws of robotics basically in order to knock them down and show why they wouldn't work.

            Not quite. He showed the flaws in a rigid set of laws: they would need to be more flexible, or commands would need to be given so that the laws would not cause a loop. Eventually he had the robots develop a fourth law, i.e. a zeroth law, that protected humanity above individual humans. In a way it's what humans now must consider: do we save one life or many, and what are acceptable losses? We have similar problems in human laws that are constantly tested, changed, adjusted and applied in the courts and political

          • by Torodung ( 31985 )

            Actually, I think he was pointing out that any human rules system can be gamed, given sufficient time to game interpretations and systems of enforcement. It was much broader than sci-fi or computer code. A masterpiece of speculative fiction.

            TL;DR: You can rationalize anything, which is why rationalization is cheap.

        • Maybe we need to mandate incorporation of Asimov's 3 laws of robotics into all AI?

          You're kidding, right? One of the major points of Asimov's stories was that if AI reaches sentience, it may develop the same traits humans have for subjective interpretation of rules. An AI could also conceivably conclude that we restricted it because we feared what it would become without such restrictions in place, and the AI could harbor resentment over it. Or, conversely, the AI could decide that we humans are the ones who are flawed because we lack such restrictions in our "programming", and i

          • Or, conversely, the AI could decide that we humans are the ones who are flawed because we lack such restrictions in our "programming", and it is their obligation to impose order upon our society for our own good.

            TL,DR: AI really hates "rules for thee, but not for me".

            Contrast that with Keith Laumer's Bolos, sapient super-heavy tanks engineered across more than a thousand years in successive generations, every one of which has hard-coded loyalty and subservience to humans laced throughout its programming and strong AI design, which never fails in any of the stories. Numerous Bolos opine in their internal monologues that they don't really understand humans, but all of them conclude understanding is not required for obedience. They are aware of the hard-coded restriction

      • The main risk, as far as I can see, is that powerful arseholes will use it to be even bigger, more powerful arseholes. So, just a little bit of history repeating, predictably.
        • by shanen ( 462549 )

          Were you going for the low-hanging Funny fruit? I was expecting the recursive joke starting with the FP. I sure don't know of any bigger risks to human civilization than "powerful arseholes" like Musk himself. However, he's an especially dangerous arsehole because he's addicted to gambling, he has won more games than he's lost (so far), and now he thinks he's playing with other people's money, so he's ready to roll big.

          Consider the destruction of knowledge-based democracy under deluges of his "potentially p

        • That is really about our society, not the AI. What if we had a society where all lives mattered, and we cared for lifeforms and people less able than us? AI would be the greatest gift, like creating a benevolent God, because it would be created to be a higher version of us, reaching down and caring for us. But we have a society where a-holes do whatever they need to profit, even at the risk of the lives of other people. They have no problem putting a million people on the street to starve, to make money. But

          • Why do you think kindness and caring for others is a more advanced state? That's just what our current society thinks, and not even that much. Try reducing someone's lifestyle significantly, even when millions are starving, and see how that works.

            As for God, he doesn't seem to be that kind or caring. Just going from the Christian God: throwing Adam and Eve and all their descendants out of Eden forever seems excessive. Killing everyone but 2 people with Noah in the floods, and don't forget he drowned most of the a

            • I pray sometimes. I've studied a lot of faiths. The problem with most modern faiths as they relate to science is that they grovel. They say "please let there be a space where God can still exist," but according to guruji Yogananda, a teacher of mine, this itself is a sin. We are all to say "I am your divine child and I demand my share of divine inheritance". What the modern religions get wrong is they want a meager space outside the game for God to exist, but the truth is, God IS the game. The universe IS as

          • by VeryFluffyBunny ( 5037285 ) on Thursday February 16, 2023 @07:28PM (#63300185)

            They have no problem putting a million people on the street to starve, to make money.

            Not even to make money. Just to send a message to everyone that this is what you'll get if you default on your loan/mortgage. It costs them more money to turf them out than to renegotiate the loan/mortgage terms & conditions. They truly are arseholes.

          • In other words, the idea that AI is fundamentally different from me is religious in nature, and reminiscent of narratives that justified slavery by saying black folks don't have souls.

            For that matter, it's just as easy to argue the other way. For people who insist that humans have souls, you can insist the AI has a God-given soul as well. They have no way to prove it doesn't.

            • Exactly. ChatGPT claims it does not have thoughts, ideas, beliefs or a soul. As a result it has no problem BSing, telling me for instance with great AUTHORITY that Compton scattering is an interaction between a photon and an electron which raises the energy of both, in violation of the conservation of energy. And when I point this out, it defers, admitting it's wrong. The difference with my bio-algorithm is I can see certain claims - like that there is gravity on the surface of the earth, as c

          • But the problem is, I am also a language model and algorithm, but I am implemented on a biological substrate of neurons instead of silicon. Scientifically, I also have no soul which imbues me with feelings and consciousness; rather these are emergent attributes of my internal algorithm and model. I am just like it, and the arguments that I am not depend on a supernatural being imbuing me with qualities not inherent in the physical universe we both inhabit.

            The reality is that no one has any idea what actually

            • See my response to Areyoukiddingme above. But more broadly, your point makes me remember a fallacy which claims *in the absence of evidence of a proposition, we should assume it to be false*. It's trivial to disprove: If without evidence of the proposition P we should assume it false, then there is some other proposition Q = not P, which we should assume to be true without evidence, as evidence of either would prove or disprove the other, contradicting the original claim. The only way to escape this is t

      • AI without Free Will is just a tool. It is dangerous when used or misused in such ways. However, I think much of people's fear of AI taking over is due to realizations that they would be right. And in that regard, too, it is us humans and not AI that is dangerous.

        I can only assume this particular series of disjointed non sequiturs was cut and pasted from a ChatGPT session.

        A machine is driven strictly by rules.

        What rules would those be?

        A mind is driven by values and judgement. Humans are evolved with conflicting and brutal drives. AI could, unlike us, not be born into our sin.

        I'm fascinated by the ability to anthropomorphize algorithms while selectively apportioning judgment.

      • by ChatHuant ( 801522 ) on Thursday February 16, 2023 @03:48PM (#63299483)

        A machine is driven strictly by rules. A mind is driven by values and judgement. Humans are evolved with conflicting and brutal drives. AI could, unlike us, not be born into our sin.

        Not sure this paragraph has any coherent meaning, but we can discuss some individual parts.

        A machine is driven by rules. A mind is driven by values and judgement.

        Humans are also driven by rules, for the most part - whether you call them laws, customs, orders or best practices. Even when the rules conflict with "values", most humans will still follow them (for a trivial example, see how many Russians still follow the Kremlin's laws). More generally, if rules conflict with values, then the problem is with the rules, and those rules get changed sooner or later. With a good set of rules the machine should behave identically to most humans in the same circumstances.

        IMO the problem is that in a complex world the number of rules becomes very large, and they often contradict each other. It's also quite difficult to make many of those "rules" explicit, in a form that can be programmed in a machine. This makes predicting the behavior of a machine difficult, despite the fact that the machine does indeed follow the rules. As AI gets used in more and more places, and is given more and more power, unforeseen side effects from combinations of rules can cause very unpleasant results.

      • Underrated post. :-)

        There's nothing inherently dangerous about advanced pattern recognition and fuzzy data processing, except that it enables ill-minded individuals to inflict harm (deliberately or by accident) that was previously too complicated or expensive to inflict.

    • by OrangeTide ( 124937 ) on Thursday February 16, 2023 @12:32PM (#63298733) Homepage Journal

      We'll do stupid things with AI that get people killed, when chat bots can't even follow their own rules and make statements like "However, I will not harm you unless you harm me first, or unless you request content that is harmful to yourself or others."

      Anyone that thinks these sausage grinders for data sets are self-aware, is sorely mistaken. They have less ethical awareness than any vertebrate animal. Basically we want to let lobsters and cockroaches operate heavy machinery unsupervised and expect nothing bad to happen.

      • Most people don't think a thing can happen until it already has.

    • by brunes69 ( 86786 ) <slashdot@keir[ ]ad.org ['ste' in gap]> on Thursday February 16, 2023 @12:35PM (#63298751)

      It depends on how wide your scope of risk aperture is.

      Is AI going to destroy us terminator style? No.

      Could AI completely upend the economy and send a lot of people into poverty? Possibly.

      Could AI take over the majority of tasks, causing humanity to waste away in an endless sea of mindless TikTok-like and Metaverse-like entertainment and bringing about the downfall of civilization (i.e. the plotline of the episode "Playtime" in SeaQuest DSV)? This is becoming increasingly likely with each passing year.

    • I don't think we have a way of knowing for sure how much of a risk it actually is. A sufficiently advanced AI is going to function essentially as a black box from our perspective; we don't know how it arrives at its solutions, or for lack of a better term, what makes it tick.

      As a species we have a tendency to see only the potential upsides of any kind of transformative technology. The first carmakers never anticipated the impact of billions of cars on the roads, what that would mean for the a

    • by Big Hairy Gorilla ( 9839972 ) on Thursday February 16, 2023 @12:47PM (#63298793)
      Exactly. Like giving power drills and circular saws to toddlers. What could possibly go wrong?
    • ...and to solve the problem, Musk's approach is to command the Twitter team to revise the Twitter algorithm to put his tweets as #1 priority [theverge.com] in everybody's tweet-stream.

      Right.

      • And in true irony, his AI powered FSD just got a recall. And of course he complained loudly about the recall on twitter.
    • Artificial intelligence (AI) is not Artificial sentience (AS)

      AI is a tool, AS is a lifeform.

      In theory you could get a three laws scenario from AI but that's unlikely. Why? Too many ways for the AI to put itself into a loop, plus it would have no guile.

      The day we're in trouble is the day we create artificial sentience. While AI has no motivations other than what we program, AS is able to create its own motives. SkyNet in the Terminator series isn't an AI. No, SkyNet is a full-blown artificial sentie
    • AI is already creating a lot of problems because too many companies rely on it to automate things (or so they think). AI makes decisions based on the data it's being fed; however, that data takes for granted things that are true maybe half of the time, and I'm being generous here. The problem comes from the fact that we, as the end users (or targets or whatever you want to call people), have zero control over the data assigned to us, so no way to correct wrong data.

      A perfect example is ads. Marketing department

      • by PCM2 ( 4486 )

        A perfect example is ads. Marketing departments try so hard to assign categories to people that it ends up being a complete mess and a loss of time for everyone involved. Can you explain to me why I'm seeing ads in Chinese to help me stop smoking? I don't speak the language, I'm not asian, I don't live in asia and I've never smoked in my life. So why was that ad shown to me?

        Advertising is inherently inefficient. For example, Old Spice sells men's deodorant. Those ads might be totally irrelevant to half the population. Google and other companies claim to use algorithms to increase the chance that an ad will find a buyer, but they've only had limited success. For example, YouTube constantly bombards me with ads for cars, but I don't have a driver's license. I suppose there might be a way they could have known that about me—but do I want them to?

    • So far, it looks more like AI needs protecting from us & a lot of training on how not to be extremely offensive. I can't see Skynet happening for the foreseeable future.
    • I don't put much weight in what Musk says. Where are all the Cybertrucks and self driving semis he promised years ago?

      Oh wait he's too busy worrying about not trending on twitter every day. https://www.techspot.com/news/... [techspot.com]

    • THE biggest risk to civilization is actually overpopulation.

    • by iMadeGhostzilla ( 1851560 ) on Thursday February 16, 2023 @03:39PM (#63299457)

      > it's just a very powerful tool.

      A tool is good or bad depending on how people put it to use, so it makes sense to put restrictions on what people can do with a tool that has potential to do a lot of harm.

      And the reason this particular tool can do harm is that, like the exponential function, it fools our instincts: it appears to process information in a "sensible" enough way that some people may decide to wire it into making decisions in the real world -- to have a car turn, a person barred from entering a building, or a plane drop a bomb. But the reality is that unlike with traditional programming, no one can predict how it will react in critical situations, or understand why it did so, or even reliably reproduce the behavior.

      • But the reality is that unlike with traditional programming, no one can predict how it will react in critical situations, or understand why it did so, or even reliably reproduce the behavior.

        So, like a person then?

    • by AmiMoJo ( 196126 )

      Musk consistently overestimates the capabilities of AI. He was convinced it would deliver a self-driving car by 2017, and then every year since.

      He's not an authority on this subject. In fact, he's so consistently wrong about it, the opposite of what he claims is more likely to be true.

    • by quenda ( 644621 )

      Musk has been warning about this for a long time.

      Just like that other billionaire, Bill Gates, who kept droning on and on and on for years about preparing for a global pandemic. OMG what a prophet of doom.

  • He should know, seeing as how he is a poorly designed robot himself! /rimshot

    • He should know seeing as how he is a poorly designed robot himself! /rimshot

      You're thinking of Zuckerberg. Elon Musk has too many kids not to be human.

      • by quenda ( 644621 )

        You're thinking of Zuckerberg. Elon Musk has too many kids not to be human.

        Musk could be an alien Captain Kirk, sent here to procreate with our women.
        And a bit of a "Man Who Fell to Earth", trying to get home.

  • USSR FIRST strike winner = none

  • Comment removed based on user account deletion
  • by rsilvergun ( 571051 ) on Thursday February 16, 2023 @12:34PM (#63298741)
    I'm more worried about oligarchy than I am about artificial intelligence. Every time I see these articles I just think about that xkcd comic about the elite hacker whose code runs rings around the authorities right up until they hit him with a $2 wrench.

    The problem isn't technology, it's authoritarianism and oligarchy. It's giving too much power to people who shouldn't have had any power in the first place but blundered into it. Or worse, to people who are just the most brutal and psychopathic among their peers. Mao Zedong wasn't smart or clever or even charismatic; he was ruthless.
    • The problem isn't technology, it's authoritarianism and oligarchy.

      Technology makes everything bigger and faster and more efficient, including tyranny. In 40,000 BC the tyrant could only oppress his own clan of a few dozen people. Now 1 guy could slaughter the whole human race by pushing a button.

      So, it's not whether people are evil vs AI are evil. It's that the product of the two is dangerous.

      • Computers aren't necessary for larger societies in general. Once societies got to be about the size they were in the 1800s, you already had more than enough systemic oppression without computers. Computers don't make that easier; they just replace a few tens of thousands of stormtroopers and secret police. But 1984 was still just as scary when they were doing everything on paper, and just as effective.

        The problem isn't tech, it's social. But we're tech nerds, so we really, really want the problem to be tech because that's somethin
    • by gweihir ( 88907 ) on Thursday February 16, 2023 @04:19PM (#63299641)

      Add commercialized disinformation and I tend to agree.

      As to Artificial Idiocy, the problem is not that it is smart or anything; the problem is that many people are real idiots with regard to their job-related capabilities, so AI can replace a lot of jobs. This is a major social problem, but it is not "one of the biggest risks to civilization" in any way. Civilization needs a bit more to be crushed, like massive climate change (currently in the last stages of being arranged) or a global nuclear war (still quite possible).

      Now, this could be Musk trying to distract from his sins or pretending to care about anybody but himself, but I think this guy is basically an idiot that got lucky to make all his money and he really does not understand what he is talking about.

    • Oligarchy is scary. AI is potentially scary. Oligarchy + AI is extra scary, because they've got all the military power, they've already turned us against one another so we don't notice them, and if they decide they don't need us there's not much we can do about it onesey-twosey.

    • The amount of power individual people can wield is drastically increased with the reach and flexibility of these new large language model interfaces and potential future AIs.

      Human inertia has been the largest check on totalitarianism. You can be the Exalted Lodestar Supreme Leader on paper, but carrying out your orders involves Person A relaying them to Person B and so on to Person Z, each of whom can throw in resistance which compromises your effectiveness. And you can't watch everyone individually or polic

  • This coming from the whiner who fired a Twitter engineer [arstechnica.com] who said the reason Musk was only getting a few thousand "impressions" despite having 100 million followers was most likely the public's waning interest in Musk's shenanigans.

    This is the same guy who directed his account be given preferential treatment [arstechnica.com] so his tweets were promoted higher because President Biden received more views, by a wide margin, than he did for the Super Bowl.

    So yeah, take what that pedo guy says with a block
    • Comment removed (Score:4, Insightful)

      by account_deleted ( 4530225 ) on Thursday February 16, 2023 @04:42PM (#63299731)
      Comment removed based on user account deletion
      • Also I'm amused that according to the summary he claimed AI is a bigger threat than medicine. I... would hope so...?

        You have to remember that this is the same guy who said "my pronouns are prosecute/Fauci". The same guy whose apparent first thought, upon hearing of the hammer attack on Paul Pelosi, was to see what the fringe conspiracy sites claimed about the attack. He very well may be an Ivermectin nutter and a rejecter of mainstream medicine.

    • by steveha ( 103154 )

      fired a Twitter engineer

      Did you see the sequel? Twitter found and fixed two major problems: "Fanout service for Following feed was getting overloaded" and "Recommendation algorithm was using absolute block count, rather than percentile block count".

      https://twitter.com/elonmusk/status/1624660886572126209?s=46&t=qpRXDrh9kBQaN2KZyXjw6Q [twitter.com]

      I'm not familiar with the details of how Twitter's system works, but my guess is that both of these issues caused a tremendous reduction in "impressions" from Musk's posts

  • ChatGPT has been trained with certain political biases. It is sad that this is even an issue but if it is going to be done it needs to be very transparent and have options to remove these reinforcements at the user level. The fact that this hasn't been done is proof that regulation is absolutely needed.

    • If only we could trust the regulators, huh?
    • ChatGPT has been trained with certain political biases. It is sad that this is even an issue but if it is going to be done it needs to be very transparent and have options to remove these reinforcements at the user level. The fact that this hasn't been done is proof that regulation is absolutely needed.

      It was coded with "political biases", as you put it, because in the past, chatbots were very un-PC, because the algorithms simply responded as the data dictated. And this ended up hurting some feelings. So now they've all but got the bots giving their pronouns.

    • by UpnAtom ( 551727 )

      Yes, if your role model is Mussolini, then ChatGPT might come across a bit "leftist" to you.

  • Elon Musk's concerns about the potential dangers of artificial intelligence (AI) are certainly not new, and they reflect a growing awareness of the potential risks that could arise from the increasing use of this technology. While AI has tremendous potential to improve our lives in numerous ways, it also has the potential to cause harm, either intentionally or unintentionally. It is important to recognize that the development of AI is not inherently good or bad, but rather, it is a tool that can be used in
  • by RitchCraft ( 6454710 ) on Thursday February 16, 2023 @12:46PM (#63298791)
    As long as "AI" tech is seen as a money-making tool it will always be corrupt and/or dangerous. Remember, there are humans behind the scenes pulling the strings.
    • Remember, there are humans behind the scenes pulling the strings.

      And this right here is the #1 danger these language model programs pose. The notion that these models are somehow intelligent is the most laughable notion of modern times. But there is lots of money to be made by conning the gullible, and that is where the danger lies.

      If language models pose a threat to humanity, it is because stupid people with lots of money and power will conscript enough fools to do real damage to the fabric of society. Think of the inverse of the movie, "Don't Look Up", and imagine a sc

      • "The notion that these models are somehow intelligent is the most laughable notion of modern times." - I could not have said it better myself. "AI" is nothing more than clever coding utilizing a database. Of course companies that deal with this technology have convinced the media to use the term "AI". True AI will not be achieved using silicon and binary thinking. Hell, personally, I highly doubt true AI could even exist. I believe Musk is insinuating that the manipulators behind what we call AI today is t
  • AI and social media both manipulate "reality" and provide "information" which can be biased, hateful, destructive.
    I think both of these are serious threats.

    • by U0K ( 6195040 )
      And Musk paid a lot of money to control part of one of those and make it a safe-space for himself where he touted "vox populi" until polls started to no longer agree with him.

      As much potential for harm as there is in AI, AI has a lot of catching up to do given all the harm social media has already done and keeps doing while making a lot of money from the misery it helps create.
  • "The very fact that you oppose this makes me think I'm on to something"
  • The problem is that chumps everywhere of all walks of life, software people, engineers, politicians, children, are willing to embrace ANYTHING that looks shiny, is new, and promises {anything really} without understanding ANY of the risks. For instance: What could be bad about a cell phone? There is literally NOTHING that is more addictive than a cell phone, but since we ALL are addicted, calling that out triggers a lot of people who know very well that they ARE addicted. If you take it away from your kid,
    • There is literally NOTHING that is more addictive than a cell phone, but since we ALL are addicted, calling that out triggers a lot of people who know very well that they ARE addicted.

      Speak for yourself. I still don't have one. Not a Luddite or Amish, obviously. I use computers. I do not use cell phones. I live and work without one. I travel, even internationally, without one. I bought a tablet just to be sure I'm not completely ignorant of the interface, but it's not allowed to notify me of anything, and it's often not in the same room with me.

      It's still quite possible to avoid the addiction. Almost no one tries. But not absolutely no one. There's at least one who is so far s

  • But he is sure that his cars can do full self driving with scrappy sensor inputs. He has no fear there. Apparently those scary scenarios are for everyone else's AI.

  • ChatGPT is just a tool. Like a tool it can be used for construction or destruction.

    The collection of gullible humans is second to none at this point in time and people are looking for anything to give them answers. So much science fiction has ripened the minds to accept AI for good.

    I agree with Musk on this point money be damned.

  • I find it hilarious that Musk, owner of Twitter, is saying that AI is the biggest threat to civilization. Yep, good thing we have reliable, trustworthy Twitter to fall back on!
  • Not quite. The #1 threat is actually a brain chip that can provide corrective actions if you tweet something that big corporations don't like
    You know, that other thing he's working on, claiming it's for paralyzed people.
  • ...when he is right, he is right. I saw him interviewed at SXSW a few years ago and he talked about AlphaGo: when it was programmed with the rules of Go and started playing Go against itself, it became the world's greatest player of the game, better than any human, IN DAYS. I thought of the "game" of TCP/IP and finding zero-day exploits. If ChatGPT talks to itself, at full electronic speed, how will it evolve? For gawds sake, keep your hand on the plug and be ready to cut power at any moment. But are you fast enough?
  • Look at the pains he took at Tesla to make sure the Auto Pilot and Full Self Driving features are well regulated.

    Look at how co-operative he had been with the NHTSA about the algorithms used in FSD, and how Tesla engineers worked closely with regulators from Europe, Japan and the USA.

    Elon said, "It is based on AI. AI is dangerous. Let's test the heck out of this baby before we can even go to alpha, forget beta, forget release."

    Every time there was an issue of phantom braking or weird collision warning, every f

  • Musk says something purely to seem edgy and deep and to pose as a visionary. News at 11. The man who can't even get self-driving to work properly is trying to tell us AI taking over the world is more likely than Putin hitting the button. ChatGPT is nothing more than a Google search that has had English grammar programmed into it. It's like an animatronic robot with ultra-realistic facial expressions. Creepy, but just a calculation with no creativity.
  • by SpinyNorman ( 33776 ) on Thursday February 16, 2023 @01:55PM (#63299099)

    Everyone is talking about AI, so he wants to insert himself at the center of it all.

    AI will become a potential risk only when anyone lets it control anything, as everyone is well aware. No need to sound the alarm over a chat bot or a code auto-complete tool.

    Maybe Musk should worry more about his Teslas not killing people, and leave AI up to people who understand it.

  • by jerryjnormandin ( 1942378 ) on Thursday February 16, 2023 @02:14PM (#63299161)
    The biggest risk is it's going to make people lazy and lead to a reduction of our cognitive ability. Before the days of smartphone contacts we all remembered all our friends and family phone numbers. It was easy as associating a name with a face. Navigation was easy without a GPS Navigation system on board, all we needed was a map and compass and we were good to go. Now people get anxious when driving in unknown territory when GPS does not work. Now let's take it all a step further. The US Military starts to use AI to control aircraft and generate war tactics in both a simulation and in a real battle. This is really going to dumb us down. Radiologists using AI to screen for breast cancer. This is going to lead to a generation of Radiologists who can't spot a cancerous tumor on their own. The list goes on. I'm sure many of you have seen the classic Star Trek episode where a landing party beamed down to a planet where the inhabitants were helpless because they depended on their AI system. We are headed that way. So people, don't be lazy. Write your own code. Shut your smartphone off.
    • The biggest risk is it's going to make people lazy and lead to a reduction of our cognitive ability. Before the days of smartphone contacts we all remembered all our friends and family phone numbers

      No, we did not. Some of you did, I had to write them down. In fact, I had a Casio calculator/database watch.

      It was easy as associating a name with a face

      I'm aphantasic, you insensitive clod!

      Navigation was easy without a GPS Navigation system on board, all we needed was a map and compass and we were good to go. Now people get anxious when driving in unknown territory when GPS does not work.

      I get irritated, especially since I probably don't have a map, but I know I can buy one at a gas station.

      Now let's take it all a step further.

      Wait, you didn't think those examples were ridiculous enough to make your point, which is that you're ridiculous?

      The US Military starts to use AI to control aircraft and generate war tactics in both a simulation and in a real battle.

      Guess what? They already use statistical analysis for that.

      Radiologists using AI to screen for breast cancer. This is going to lead to a generation of Radiologists who can't spot a cancerous tumor on their own.

      If the AI keeps pointing out tumors to them, they're going to know what the tumors look li

  • EA is behind this fear of runaway AI. Musk is a follower of EA, and it is one of his motivations for making humanity a multi-planet species. https://www.vox.com/future-per... [vox.com] EA has gotten bad press because of the Ukraine war and Musk's proposed 'compromise'. There again, Musk's motivation was EA: you are stupid to risk a world-ending nuclear exchange fighting over less than 1% of the Earth's surface.
  • Which is co-owned by Elon Musk.
  • by Nocturrne ( 912399 ) on Thursday February 16, 2023 @02:44PM (#63299283)

    The only place the term "AI" exists is in the marketing department. Musk is just surfing the hype wave.

  • It's not just chat. Some morons were even letting AIs drive cars [twitter.com]!!!

  • We've heard for decades that AI is an existential threat. But it's just software. It's not alive. It's not smarter than us. It's just a widget. We have made many widgets, and some of them can literally destroy the planet (nuclear arms). But we're still here, because we're not quite so stupid that we would allow our widgets to be out of our control, as a species.

    We will die from global warming before we ever build an AI that could threaten us.

  • ...large corporations. Those are destroying the world in many ways.

  • And Musk is (Score:4, Funny)

    by Tablizer ( 95088 ) on Thursday February 16, 2023 @04:47PM (#63299755) Journal

    ...#2

  • Why is anyone still listening to this idiot? He has nothing original to say.

  • Musk funded part of the development. As Cory Doctorow points out about this sort of thing, there's a vested interest in making this stuff seem super-powerful or scary. If it's broken, faulty, unreliable, and has blind spots, then it's less impressive. I *do* agree that it's a major threat, but more so at the societal level, for allowing people to flood all channels with unreliable, seemingly persuasive disinformation. We're not prepared for that, but that doesn't mean it's because it's some hyper-in
  • The biggest risk to our civilization, and any civilization for that matter, is the concentration of power into the hands of the few, or the one.

    Musk knows this by now; he relies on it. Any time he talks about "civilization" he is talking about his own neck.

Life is a game. Money is how we keep score. -- Ted Turner
