Bill Gates Calls AI's Risks 'Real But Manageable' (gatesnotes.com) 57

This week Bill Gates said "there are more reasons than not to be optimistic that we can manage the risks of AI while maximizing their benefits." One thing that's clear from everything that has been written so far about the risks of AI — and a lot has been written — is that no one has all the answers. Another thing that's clear to me is that the future of AI is not as grim as some people think or as rosy as others think. The risks are real, but I am optimistic that they can be managed. As I go through each concern, I'll return to a few themes:

- Many of the problems caused by AI have a historical precedent. For example, it will have a big impact on education, but so did handheld calculators a few decades ago and, more recently, allowing computers in the classroom. We can learn from what's worked in the past.

- Many of the problems caused by AI can also be managed with the help of AI.

- We'll need to adapt old laws and adopt new ones — just as existing laws against fraud had to be tailored to the online world.

Later Gates adds that "we need to move fast. Governments need to build up expertise in artificial intelligence so they can make informed laws and regulations that respond to this new technology."

But Gates acknowledged and then addressed several specific threats:
  • He thinks AI can be taught to recognize its own hallucinations. "OpenAI, for example, is doing promising work on this front."
  • Gates also believes AI tools can be used to plug AI-identified security holes and other vulnerabilities — and does not see an international AI arms race. "Although the world's nuclear nonproliferation regime has its faults, it has prevented the all-out nuclear war that my generation was so afraid of when we were growing up. Governments should consider creating a global body for AI similar to the International Atomic Energy Agency."
  • He's "guardedly optimistic" about the dangers of deep fakes because "people are capable of learning not to take everything at face value" — and the possibility that AI "can help identify deepfakes as well as create them. Intel, for example, has developed a deepfake detector, and the government agency DARPA is working on technology to identify whether video or audio has been manipulated."
  • "It is true that some workers will need support and retraining as we make this transition into an AI-powered workplace. That's a role for governments and businesses, and they'll need to manage it well so that workers aren't left behind — to avoid the kind of disruption in people's lives that has happened during the decline of manufacturing jobs in the United States."

Gates ends with this final thought:

"I encourage everyone to follow developments in AI as much as possible. It's the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks.

"The benefits will be massive, and the best reason to believe that we can manage the risks is that we have done it before."


  • AI threats (Score:5, Insightful)

    by Stoutlimb ( 143245 ) on Sunday July 16, 2023 @04:46PM (#63690969)

    In my opinion the biggest threat is that AI gives one big player such a ridiculous advantage that they have (and abuse) the power to crush and prevent all competition from even getting started. I'm looking at you, Microsoft.

    Tell me that can't happen. I dare you.

    • Re: (Score:1, Offtopic)

      by ichthus ( 72442 )

      I'm looking at you, Microsoft.

      Oh, yeeeaah. Because Microsoft has been such a big innovator over the last twenty years. *rolls eyes*

      • MS might not be, but OpenAI has an undeniably huge lead in the AI market, so big it's not even close, and guess who's funding that?

        Why do you think MS was running GPT-4 in Bing before GPT-4 was even announced?

    • by m00sh ( 2538182 )

      In my opinion the biggest threat is that AI gives one big player such a ridiculous advantage that they have (and abuse) the power to crush and prevent all competition from even getting started. I'm looking at you, Microsoft.

      Tell me that can't happen. I dare you.

      The actual AI technology is simple enough that anyone can recreate it.

      What I see as the only bottleneck is semiconductors. Right now it bottlenecks at nVidia first and then TSMC. nVidia will be bypassed, but TSMC doesn't seem to be. Maybe Intel will be the savior, but I hope the usable AI chips will be large and diverse in origin, and not just nVidia chips fabbed at TSMC.

  • by manu0601 ( 2221348 ) on Sunday July 16, 2023 @04:57PM (#63690991)
    Some time ago, he was able to forecast that a 640kB limit was a manageable risk.
    • Some time ago, he was able to forecast that a 640kB limit was a manageable risk.

      He also said in his book that the Internet was basically a fad. So I don't exactly consider his word to be worth much.

  • Robotic technology sucks ass. It can't even replace many factory workers, let alone restaurant jobs. Factory automation hasn't improved much since like the 70s. Why do we still need humans to assemble smartphones? Why aren't things being designed for automated manufacture? We still don't have a dexterous enough hand and robot to install basic components. AI is useless; without an avatar it can't do anything. It can't even take over a factory, let alone a town. When robots can build the structure of a home fro

    • Re: (Score:3, Insightful)

      by jslolam ( 6781710 )

      > Why do we still need humans to assemble smartphones?

      Labor in China is too cheap. If Apple were forced to manufacture in the US, I bet it would be much more automated.

    • In a thousand years we will have robots that can bend things.
  • by rsilvergun ( 571051 ) on Sunday July 16, 2023 @05:29PM (#63691029)
    For us maybe not. The problem with AI is that the technology has the potential to replace huge swaths of the job market for a species that can't comprehend the idea of getting to eat if you don't put in a full day's work at least 66% of the time you're awake, and for most people it's closer to 80%.

    This isn't like buggy whip manufacturers. We're not replacing buggy whips with cars. So there's no going to the car factory for a job after the buggy whip factory shuts down. This is a technology that only has value if it's replacing humans. And we're not all going to become geniuses tomorrow and become so valuable in the job market that we can't be replaced by incredibly complex automated systems.
    • by m00sh ( 2538182 )

      For us maybe not. The problem with AI is that the technology has the potential to replace huge swaths of the job market for a species that can't comprehend the idea of getting to eat if you don't put in a full day's work at least 66% of the time you're awake, and for most people it's closer to 80%.

      This isn't like buggy whip manufacturers. We're not replacing buggy whips with cars. So there's no going to the car factory for a job after the buggy whip factory shuts down. This is a technology that only has value if it's replacing humans. And we're not all going to become geniuses tomorrow and become so valuable in the job market that we can't be replaced by incredibly complex automated systems.

      The "I shall replace you with a small script" scare, all over again.

      • That'll be the end of capitalism then.
        If nobody has any money, how are the AI owners going to sell us stuff?
        • with neo-feudalism? Did the King need peasants to buy his products?
          • by Anonymous Coward
            I fear our rights are largely dependent on the rich requiring us for work. When AI and robots can do that, things will be bad for most people.
          • by linzeal ( 197905 )

            Robots need batteries and shiny metal asses.

          • The problem with feudalism is that it relies on the peasants wanting to be peasants. We work way more than medieval peasants did, for example.
            Feudalism was a bargain in which each party (King, Church and people) played a part. It wasn't really slavery, which is why it started to collapse after the Black Death killed 1/3 of the working population (or more).
            Not that I don't believe there are people who would be cool with a return to feudalism (or slavery, come to that), it's just that it would result in lot
          • Did the King need peasants to buy his products?

            Kings were desperately poor compared to billionaires.

    • I'm so tired of apocalyptic gloom and doom. I've seen places where large language models, and even more prosaic forms of machine learning, are making people more productive. Potentially, that could lead to fewer people being needed to do certain jobs in the near term. Keep calm and carry on.

  • It will give us tools to create more quickly and not waste time on the mundane. All the pearl clutching over AI taking over is about the same as when personal computers first hit the scene. Oh no! What will the typing pool do? We survived.

  • You know the problems we have with social media being addictive and ruining people's lives? Imagine those algorithms creating all other forms of media.

    We've been so concerned with crossing the uncanny valley, we haven't thought about what may lie on the other side. AI generated media content might not only cross the uncanny valley, but go far beyond. An evolutionary algorithm might discover not only what heuristics we use to determine reality from fiction, but discover what releases dopamine in our brains as well. This could result in media that is far from reality, but objectively superior. To the point where humans withdraw from the real world entirely, and civilization comes to an end.
    • by m00sh ( 2538182 )

      You know the problems we have with social media being addictive and ruining people's lives? Imagine those algorithms creating all other forms of media.

      We've been so concerned with crossing the uncanny valley, we haven't thought about what may lie on the other side. AI generated media content might not only cross the uncanny valley, but go far beyond. An evolutionary algorithm might discover not only what heuristics we use to determine reality from fiction, but discover what releases dopamine in our brains as well. This could result in media that is far from reality, but objectively superior. To the point where humans withdraw from the real world entirely, and civilization comes to an end.

      You don't need AI for that. Just HI (human intelligence) does that: keeps you addicted to dopamine rushes.

      Perhaps AI will perfect it so that we will never have bad movies or bad video games, and every piece of media is sooo good. Good or bad?

    • You just described a wirehead -- they populate a lot of dystopian science fiction.

      Several years ago a story was published about a woman who had electrodes implanted in her spinal cord and connected to an electrical device to help alleviate chronic pain. She went back to the surgeon because it stopped alleviating her pain. However, she didn't want the old electrodes removed -- she just wanted new electrodes placed in the correct position. It turns out the old electrode positions and the device would trigger

    • We've been so concerned with crossing the uncanny valley, we haven't thought about what may lie on the other side.

      During Computex, NVIDIA's Huang was peddling a harebrained idea where video conferencing systems would transform video into a model, transmit the model data, and re-transform it back into video on the other side. Something akin to an STT-TTS system trained to imitate someone's voice, except for video. Now everyone gets to be a "deep fake", apparently.

  • AI is going to decimate business and the economy. Yet nobody in business will realize it, and we'll have an entire industry pouring increasing amounts of money into something that is going to backfire with staggering consequences. A phrase comes to mind: "Hoist with his own petard."

    • by m00sh ( 2538182 )

      AI is going to decimate business and the economy. Yet nobody in business will realize it, and we'll have an entire industry pouring increasing amounts of money into something that is going to backfire with staggering consequences. A phrase comes to mind: "Hoist with his own petard."

      Hype_cycle.gif

      You are somewhere between "technology trigger" and "peak of inflated expectations". Reality will be something completely different.

      • Let's experiment and find out if the economy collapses on our overwrought designs for transforming society with AI, or if AI ends up being just another industry that needs some mild regulation. I hope AI fizzles out and goes the way of VR headsets and 3D movies. But realistically nobody has any way of knowing for certain how this is going to play out. Least of all the pundits that suggest that we take the average of two extremes as the most likely answer, as if those extremes carry an equal weight of [im]pr

  • The problem is people who use AI as a weapon
    We need to develop effective defenses

  • ... similar to the International Atomic Energy ...

    Essentially, the manufacture and storage of nuclear materials was limited to governments. They created the IAEA to keep it away from everyone else, and as a report card for ensuring governments' honesty. AI doesn't have that cost and limitation; the raw materials (algorithms, software, data) are free on the internet for any madman to collect and play with.

  • ... exploitable.

  • None of anything being touted as "AI" is much of anything like what people think of when they hear that term, which is machines that can think and reason like human beings. These chat and paper writing programs are closer to what were called expert systems back in the 80s. They're really just clever applications of neural nets, i.e. mathematical equations. There is no awareness, no rational thought involved. Input stimulus, get response, just like any other machinery. So why all the hype and threat pan

    • None of anything being touted as "AI" is much of anything like what people think of when they hear that term, which is machines that can think and reason like human beings. These chat and paper writing programs are closer to what were called expert systems back in the 80s.

      They are nothing alike.

      They're really just clever applications of neural nets, i.e. mathematical equations. There is no awareness, no rational thought involved. Input stimulus, get response, just like any other machinery.

      Does any of the above not apply to humans? If so do you have an objective test in mind to discern the difference?

      But the problem is all the sheep would never panic if they understood this was just clever use of mathematical equations to analyze data in new and interesting ways.

      What is the point in defining everything away as "mathematical equations"?

      • by Acron ( 1253166 )

        Sure they are alike; expert systems were for making use of practical areas of knowledge, like medical diagnoses. Both are just using analysis tools to generate a useful desired output, using some equivalent of a neural net or decision tree tailored with the existing knowledge to predict a possible outcome. Yup, there's plenty of ways to differentiate, from philosophical, to design, to function and capability. There's definitely a difference between a slide rule and the architect that uses it to design

  • are already confused by regular reality. these are the fox viewers and they are fully detached. no need for AI, they think reality is 'fake news'.

    imagine if you have actual thinking people that are fooled and start to think that actual reality is now fake.

    when nothing is trustable, it's like the earth moving below you (I have to imagine). this won't be something most people can re-educate themselves thru.

    its the trickery and results of it that I worry about the most. yeah, there will be benefits with auto

    • by narcc ( 412956 )

      Relax. We've been through far more disruptive change than what we can expect from AI and not only survived but thrived.

      american society, at least, can't fathom supporting its own people via 'socialism'.

      The problem isn't with 'socialism'. We largely support policies that are often slandered as 'socialist'. The problem is a vocal minority that lives in fear of one of "those people" getting something they don't "deserve".

      I've seen more than one able-bodied person, who have lived most of their lives on disability, get visibly upset while complaining about "bums on welfare". Those same peo

      • by gweihir ( 88907 )

        Greed, envy, fear of the "other" and many people being not only stupid as fuck but also uneducated. What a sad state of affairs.

        Those same people actively vote against their own interests, irrationally thinking that it will only hurt people they don't like

        If enough people vote to hurt others, then society is in the process of fracturing and eventually failing. That process seems to be well underway in the US.

    • by gweihir ( 88907 )

      imagine if you have actual thinking people that are fooled and start to think that actual reality is now fake.

      At least in the US, that seems to be standard these days. Flat-earthers, Covid-deniers, Anti-Vaxxers, religious fanatics, etc. all are deeply in denial about actual reality.

  • Those with power seeking ever more power through technology and corrupting government.

    https://old.reddit.com/r/IAmA/... [reddit.com]

    "I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

    Apparently once you h

    • by gweihir ( 88907 )

      It seems to me this constant fear of "super intelligence" is because the billionaire scum perceive it as a credible threat to them. Hence that is the only thing they are concerned about. People unable to feed themselves and their families? That risk does not even exist in their world.

      It also seems that these billionaires are not very smart. Because you can only scale up intelligence if there actually is some. Current "AI" has absolutely nothing in that regard.

  • by cascadingstylesheet ( 140919 ) on Monday July 17, 2023 @05:51AM (#63692094) Journal
    "Bill Gates actually died 10 years ago, was replaced by AI muppet"
  • It's not clear why I would take Bill's advice about this.
    • He most certainly did not write BASIC. He sold a version of BASIC. Several versions.
      • He and Paul wrote Altair BASIC, which became the model for most BASIC interpreters. Regardless, it stands to reason if you wrote a BASIC interpreter, you would have a deep understanding of AI because reasons.
  • "people are capable of learning not to take everything at face value"

    I'm going to need evidence for this claim. I think a significant fraction of the population hears something for the first time and decides how they feel about it (literally an emotional reaction), and they are not going to reexamine it in light of contradiction; they are going to reject both the new information and its source. Psychologists have written quite a lot about how to get these people to change their minds, it's a long process t

  • ... should burn in hell.

  • Well, at least it is clear that the threat from Gates is greater than that from AI.

  • "Bill Gates is also real, but manageable"
