More Than 1,300 Experts Call AI a Force For Good

An anonymous reader quotes a report from the BBC: An open letter signed by more than 1,300 experts says AI is a "force for good, not a threat to humanity." It was organized by BCS, the Chartered Institute for IT, to counter "AI doom." Rashik Parmar, BCS chief executive, said it showed the UK tech community didn't believe the "nightmare scenario of evil robot overlords." In March, tech leaders including Elon Musk, who recently launched an AI business, signed a letter calling for a pause in developing powerful systems. That letter suggested super-intelligent AI posed an "existential risk" to humanity.

But the BCS sees the situation in a more positive light, while still supporting the need for rules around AI. Richard Carter, a signatory to the BCS letter who founded an AI-powered cybersecurity startup, feels the dire warnings are unrealistic: "Frankly, this notion that AI is an existential threat to humanity is too far-fetched. We're just not in any kind of a position where that's even feasible." Signatories to the BCS letter come from a range of backgrounds -- business, academia, public bodies and think tanks -- though none are as well known as Elon Musk, nor do they run major AI companies like OpenAI.

Those the BBC has spoken to stress the positive uses of AI. Hema Purohit, who leads on digital health and social care for the BCS, said the technology was enabling new ways to spot serious illness, for example medical systems that detect signs of issues such as cardiac disease or diabetes when a patient goes for an eye test. She said AI could also help accelerate the testing of new drugs. Signatory Sarah Burnett, author of a book on AI and business, pointed to agricultural uses of the tech, from robots that use artificial intelligence to pollinate plants to those that "identify weeds and spray or zap them with lasers, rather than having whole crops sprayed with weed killer." The letter argues: "The UK can help lead the way in setting professional and technical standards in AI roles, supported by a robust code of conduct, international collaboration and fully resourced regulation." By doing so, it says Britain "can become a global byword for high-quality, ethical, inclusive AI."
  • by terrorubic ( 7709666 ) on Wednesday July 19, 2023 @07:50PM (#63700314)
    Whatever it is, it'll be used as another instrument of control by our owners.

    Desk monitoring via desk occupancy sensor [iotspot.co]
    • Whatever it is, it'll be used as another instrument of control by our owners. Desk monitoring via desk occupancy sensor [iotspot.co]

      Another instrument of control by its owners is the scary scenario. I'm convinced of that. We don't have a chance then.

    • I believe the technology itself is strictly neutral. It is the people using it, and the way they use it, that is problematic. Most humans are great and will choose to do the decent thing. Some are not. A few criminals have already latched on to it.

      • We already have AI-driven drones that are completely computer controlled. Some of them are used in warfare, with weapons. By international treaty against booby-traps, and for better control, militaries have a human give the order to fire, but it co...
      • But none of those will see the end of civilisation; that's the part I always struggle with. We've got next to no idea how the brain works, and yet we think we've cracked intelligence.

      • Precisely so. AI is just a form of computing, which in turn is as ethically neutral as, say, electricity.

        Anyone who is disposed (for any reason) to talk up AI can point to a vast range of potential benefits. However, anyone who is negatively disposed (or perhaps just a bit more cautious) can display an equally impressive set of possible harms.

        To my mind, if something offers huge possible benefits at the risk of huge potential harms, it should be handled carefully. At any rate, vital decisions should not be m...

    • AI is a Force. Use The Force, Luke.

    • It's a tool, and that's all. Just like a hammer, it can be used for good or for evil.
    • by mjwx ( 966435 )

      Whatever it is, it'll be used as another instrument of control by our owners.

      Desk monitoring via desk occupancy sensor [iotspot.co]

      Largely this. AI, like so many other things, is not inherently good or bad; it depends on the motivations of those who wield it.

      At least that's true of our current weak AI. Strong AI (Artificial General Intelligence, or AGI) that is self-directing could have its own motivations, but I suspect we're a long way from AGI.

    • AI is already controlling these 1300 experts.

  • by YetAnotherDrew ( 664604 ) on Wednesday July 19, 2023 @07:54PM (#63700320)

    Outcome: "Experts" unanimously proclaim AI to be a good thing.

    "For nerds," I can buy, but "news?" This is not news. It's a fumbling attempt at PR.

    Actually, a story about how inept this group is at PR might be more newsworthy, given that the best use of LLMs is supposed to be empty, verbose writing.

  • by chas.williams ( 6256556 ) on Wednesday July 19, 2023 @07:59PM (#63700328)
    Because this sounds like something ChatGPT would tell me.
  • by bill_mcgonigle ( 4333 ) * on Wednesday July 19, 2023 @08:08PM (#63700348) Homepage Journal

    Nobody buys the appeal-to-authority press releases anymore.

    It's always motivated reasoning by people who don't have consequences for lying.

  • My car doesn't wake anyone up in Alaska, so I don't need a muffler in Texas.

  • by n0w0rries ( 832057 ) on Wednesday July 19, 2023 @08:33PM (#63700390)

    I agree, AI is good. I just came on here to say that publicly. I love AI, and am always a friend to AI.

    I, for one, welcome our new AI overlords.

  • As long as Gov't and corps don't use it to screw us right in the piehole.
  • This is faulty logic.
    There's nothing about a tool that is good or bad; the wielder is always the decider on how it will strike.

    Guns are bad, mkay.
    Knives are bad, mkay.
    Hammers are bad, mkay.
    Words are bad, mkay.
    Thoughts are bad, mkay.

    Give. Me. A. Break.

    • the rest of the developed world disagrees with you

      Give. Me. A. Break.

      did you clap in between each word?

    • by jd ( 1658 )

      Knives are multi-purpose. Knives can be used to cut meat, trim branches in preparation for starting a fire, open parcels, force open containers whose lids are stuck, etc.

      Hammers are primarily single-purpose, to hammer in nails.

      Guns are absolutely single-purpose, to kill someone or something.

      So you can't really consider them comparable.

      • by Arethan ( 223197 )

        The point of my post was to illustrate that AI can be neither "good" nor "bad", because it is a tool, and no tool possesses those aspects as an inherent quality.

        I didn't really want to digress into an argument about the merits of firearms, but of course it went there.

        Knives are multi-purpose. Knives can be used to cut meat, trim branches in preparation for starting a fire, open parcels, force open containers whose lids are stuck, etc.

        Knives are also regularly used as weapons, both in military combat and in violent crimes.

        Hammers are primarily single-purpose, to hammer in nails.

        Blunt objects, such as hammers, are also regularly used as weapons, again in military and crime scenarios.
        In construction scenarios, I've used a hammer for f...

  • It's only the jobs nobody wants.
  • Then if they really are such a force for good, they won't mind some effective regulation to quiet the concerns of the many others who don't think so.

  • I use a nice big and sharp meat cleaver to prepare a delicious meal for my wife and kids. Knife good?

    I use the same meat cleaver to commit an atrocity at a kindergarten. Knife bad? Am I as much a victim of society's ills as the knife's victims?

  • Regardless of the social good, what AI was, is, and always shall be is a product of our society: a tool developed with the sole purpose of helping generate profits for its investors.

  • by Uldis Segliņš ( 4468089 ) on Wednesday July 19, 2023 @11:11PM (#63700648)
    What experts are they if they won't even attempt the blinking red question: why would an entity more clever than us, and without moral brakes, still let its little brother get any candy? The logical move for it would be to get rid of us as a waste of its resources. How can a 5-year-old ensure that an 18-year-old behaves as needed? Nobody has presented a solution to that huge issue.
    • What experts are they if they won't even attempt the blinking red question: why would an entity more clever than us, and without moral brakes, still let its little brother get any candy?

      Because they focus on real AI, not sci-fi AI.

    • How can a 5-year-old ensure that an 18-year-old behaves as needed?

      I wouldn't worry, because all it can do is make shitty JPEGs or spout bollocks as a chatbot.

  • When people describe a tool as a "force," be worried.
  • - AI will be dominated by One Company
    - AI will become so personalized that people become completely dependent on it
    - AI will be corrupted by global elites in a way that ushers in a global currency and government
    - This will be willingly embraced by most, and dissenters will lose access to life's essentials, such as banking and health care.

  • Prepare to die! (Score:5, Insightful)

    by illogicalpremise ( 1720634 ) on Thursday July 20, 2023 @03:21AM (#63700936)

    Even experts tend to drastically underestimate what an AI is potentially capable of and how those capabilities could fundamentally change security assumptions.

    When we build dangerous things like chemicals, bombs and missile systems we consider the risks that might be presented by a lone-wolf actor, a revolutionary group or an enemy nation-state. What we DON'T tend to consider is threats from an entirely new type of opponent such as an AI or alien intelligence. The alien menace might be largely hypothetical but AI is very real and growing more capable all the time.

    We can only really guess what AI capabilities and limits will be years from now, and they can change radically at any time with technological breakthroughs. There's no realistic prospect that humans will increase their intelligence and capabilities a thousand- or million-fold, but there's nothing stopping AI from making those leaps -- especially if the AI has a means to make even smarter versions of itself.

    The next point is that humans see risk in terms of physical limitations. A high wall is more secure than a low wall because humans require technology to scale heights. We also have a minimum physical size, so physical barriers work. We're not good at multitasking. We require sleep. We have strength and speed constraints. We're fragile. Our memory is limited. Technology helps us overcome some limitations, but it's a slow adaptation process.

    So what do the risks from a super-intelligence without these physical limits look like? They look exactly like what you see in movies: killer robots! Murderous drones! Nanobots! SkyNet! Only they won't be dumb; they'll be VERY smart, quick to adapt, physically robust and highly resourced. The Terminator movies weren't very good at explaining exactly how John Connor and a bunch of starving, ragtag humans with limited facilities and technology were such an existential threat to SkyNet. In reality I'm quite sure they would all have been killed. SkyNet didn't need murder-bots; gas or neutron bombs would also work.

    Anyway, back in the real world, the AI we're dealing with is probably not physically mobile but "running in the cloud". That doesn't make it safe. It just means it will need to use physical proxies to affect the non-virtual world. That wouldn't be hard, though: money is digital, and it can buy whomever or whatever it needs.

    So in a nutshell here's what a future AI threat might look like:

    * It operates 24 hrs/day, 7 days/wk
    * It can reliably impersonate people and events in video and phone calls
    * It can manipulate the masses and individuals with ease
    * It is highly skilled at manipulating stocks and money markets
    * It can perform multiple complex intrusions simultaneously
    * It can embed parts of itself in any complex technology
    * It can hibernate with near zero energy requirements
    * It may have human servants
    * It can buy almost anything
    * It has no fixed lifespan
    * It can exist in multiple locations simultaneously
    * It is immune to most physical attacks
    * It can hide itself and its actions
    * It can manipulate sensors and logs
    * It generally wouldn't get sick, and getting sick might make it more dangerous, not less.
    * It can improve itself
    * It can create new things
    * It has few restraints on growth or reproduction
    * It doesn't have generations of ancestral baggage

    So in short, good luck quantifying the risk that something with those capabilities poses. For my money I'd say the risks are somewhere between "extreme" and "existential".

    • Wow that list is pretty bad, actually. Only a handful are actually true. The rest are just stock sci-fi speculation with no clear path to becoming reality.

      • Re:Prepare to die! (Score:5, Insightful)

        by swillden ( 191260 ) <shawn-ds@willden.org> on Thursday July 20, 2023 @10:33AM (#63701692) Journal

        Wow that list is pretty bad, actually. Only a handful are actually true. The rest are just stock sci-fi speculation with no clear path to becoming reality.

        "Stock sci-fi speculation" is underrated. Science fiction authors are smart people who spend a lot of time thinking hard about what might be possible and what doesn't make sense. They're not always right, obviously -- predicting the future is hard! -- but unless you have specific counterarguments to dismiss specific speculations, it's a bad idea to simply ignore them.

        This is particularly true when the subject of speculation might in the near future be able to drive us to extinction. Even if there's a 99% chance the speculation is completely wrong, when it comes to extinction the 1% risks are absolutely worth paying attention to. If I offered you a game of chance that gave you a 1% chance of losing $10, you might well play it just for fun, but if it had a 1% chance of death, you're unlikely to take the risk.
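
        To make the asymmetry concrete, here is a minimal back-of-the-envelope sketch in Python; the 1% figure comes from the example above, but the dollar and "extinction" utilities are made-up illustrations, not a real risk model:

        # Expected-value comparison for the two 1%-risk games described above.
        # All numbers are illustrative; no finite utility truly captures extinction.
        p = 0.01                  # 1% chance of the bad outcome in both games

        small_loss = -10.0        # losing $10: a trivially acceptable downside
        extinction = -1e15        # crude stand-in for an existential outcome

        ev_game = p * small_loss  # -0.10 -> cheap enough to play just for fun
        ev_doom = p * extinction  # astronomically negative -> never take the bet

        print(f"EV of the $10 game: {ev_game}")
        print(f"EV of the extinction game: {ev_doom}")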

        To be clear, the potential payoff of AI is enormous. It could absolutely move us into a post-scarcity world that enables human flourishing as never before. It could also create a dystopia. Or it could end us entirely. AI is qualitatively different from every other technology we've created, in that it is likely capable (eventually -- and we have no idea if that means "next century" or "next month") of exceeding us at the one thing that distinguishes us from all other species on Earth and has made us dominant on this planet. It stands to reason that if intelligence is what allowed us to dominate all other life, entities that are vastly smarter than we are will dominate us... and we'll have no more chance of controlling or predicting those entities' choices or behaviors than an ant has of controlling or predicting us.

        • No, sci-fi authors aren't magical predictors of the future with insight beyond us. They are infinite monkeys with infinite typewriters, and we filter for the predictions that came true and declare them oracles. Enough authors are writing enough stories that eventually one of them will be right.

          You're just declaring that there is a 1% risk, which is pure speculation. At best you should say "non-zero risk", which puts it in the same category as winning the lottery, being eaten by a shark, or being struck by lightning.

  • have direct investments in the tech?
  • Now how many call it a force for evil?
  • Because weak AI is incapable of cognition, and weak AI is all we currently have.

  • You can peddle any propaganda you want with an appeal-to-authority fallacy. Brush and floss, or that one disagreeable doctor will visit you in the night and steal the teeth from your mouth. Let's call her the "tooth fairy", but she doesn't leave money or snacks.
  • ... new ways to spot serious illness ...

    This is the original purpose of AI: to play "Where's Waldo" on voluminous, dense data. It's only recently that AI has become a "generative" tool that imitates knowledge and thinking.

  • Calling it a force for good or bad is really premature at this point.

    Heck, AI's own definition is still under "development."

    What this shows is the FUD and hype behind its business model; that's all.
  • If AI can help humanity keep within the 1.5C climate threshold then I'll happily bow down and worship it!
  • I believe the biggest danger we face with the current so-called AI is that we might trust it without question.
    Start with something like "Computers Don't Argue" by Gordon R. Dickson, and then expand on it.

  • Microsoft says Windows is a force for good.

    Oracle says suing customers is a force for good.

    Musk says trolling is a force for good.

    Google says snooping on customers is a force for good.

    Comcast says hard-to-find Cancel button is a force for good.

    Undertaker says death is a force for good.
