Is Concern About Deadly AI Overblown? (sfgate.com)
"Formerly fringe beliefs that machines could suddenly surpass human-level intelligence and decide to destroy mankind are gaining traction," acknowledges the Washington Post. "And some of the most well-respected scientists in the field are speeding up their own timelines for when they think computers could learn to outthink humans and become manipulative.
"But many researchers and engineers say concerns about killer AIs that evoke Skynet in the Terminator movies aren't rooted in good science. Instead, it distracts from the very real problems that the tech is already causing..." It is creating copyright chaos, is supercharging concerns around digital privacy and surveillance, could be used to increase the ability of hackers to break cyberdefenses and is allowing governments to deploy deadly weapons that can kill without human control... [I]nside the Big Tech companies, many of the engineers working closely with the technology do not believe an AI takeover is something that people need to be concerned about right now, according to conversations with Big Tech workers who spoke on the condition of anonymity to share internal company discussions. "Out of the actively practicing researchers in this discipline, far more are centered on current risk than on existential risk," said Sara Hooker, director of Cohere for AI, the research lab of AI start-up Cohere, and a former Google researcher...
The ripple effects of the technology are still unclear, and entire industries are bracing for disruption, with even high-paying jobs, such as lawyers and physicians, at risk of replacement. The existential risks seem more stark, but many would argue they are harder to quantify and less concrete: a future where AI could actively harm humans, or even somehow take control of our institutions and societies. "There are a set of people who view this as, 'Look, these are just algorithms. They're just repeating what it's seen online.' Then there is the view where these algorithms are showing emergent properties, to be creative, to reason, to plan," Google CEO Sundar Pichai said during an interview with "60 Minutes" in April. "We need to approach this with humility...."
There's no question that modern AIs are powerful, but that doesn't mean they are an imminent existential threat, said Hooker, the Cohere for AI director. Much of the conversation around AI freeing itself from human control centers on it quickly overcoming its constraints, like the AI antagonist Skynet does in the Terminator movies. "Most technology and risk in technology is a gradual shift," Hooker said. "Most risk compounds from limitations that are currently present."
The Post also points out that some of the heaviest criticism of the "killer robot" debate "has come from researchers who have been studying the technology's downsides for years."
"It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse," a four-person team of researchers opined recently. "Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities."
This article brought to you by... (Score:3)
This article brought to you by ChatGPT.
Re: (Score:2)
It's a joke, people!
Obviously (Score:4, Insightful)
I've been saying this for a while now. There is nothing to worry about here. Your jobs are safe, there is no singularity, it's not going to destroy art, music, or anything else. AI is significantly less capable than you think.
The hype is real. The danger is not.
Re:Obviously (Score:5, Insightful)
The hype is real. The danger is not.
In the current version, maybe. However, I can already see a few areas where jobs might be in danger, such as translating stuff from and to foreign languages. In all honesty, ChatGPT (the v3, haven't tried the v4 yet) is doing a pretty decent job, even on complex / technical stuff
Moreover, AI might be less knowledgeable than your level 1 hotline, but it is way cheaper, so it might also endanger those jobs right now.
Re:Obviously (Score:4, Insightful)
The greatest near future danger of A.I. comes from people accepting its output uncritically.
Re:Obviously (Score:5, Interesting)
Re: (Score:3)
People are really underestimating human ingenuity if they think ChatGPT is a danger to humans.
Re: (Score:2)
In summary, while Shakespeare wrote many masterpieces it is important to remember that every person has within themselves the capability to create equally great masterpieces, and we should all strive to encourage one another in our creative efforts.
Yeah, ChatGPT's standard disclaimers and boilerplate sentence structure have become somewhat of a joke at the IT company where I work.
On the subject of creativity... just no. I asked ChatGPT to play a game of "hink pink" with me the other day. That's a word game
Re: (Score:3)
Re: Obviously (Score:2)
You try spewing a few paragraphs in a second and see how right it is!
The major thing is it is a breakthrough. There is now a functional juicy model of how intelligence works for everyone to look over for the next years. More honed models will come, until we really have a complete theory of how brains work, and it gets optimized for cheap running one way or another. Then, you have to face the fact that you will have models and robots that can generate all the economic outputs humans can, at a fraction of
Re: Obviously (Score:2)
This is not "a functional juicy model of how intelligence works", unless you see a thermostat as a model of a human not wanting the heating bill to be too high.
Re: (Score:2)
There is now a functional juicy model of how intelligence works
Oh, wow, no. Not even a little bit. You have been very badly misled.
Start here [stephenwolfram.com], let me know if you need something else / you still believe the thing I quoted.
Re: (Score:2)
I'm not saying it'll be this limited forever, but really the current state of AI is more like taking slices from Wernicke's area or the visual cortex in isolation.
That has plenty of disruptive potential, but it's not a Skynet scenario.
You are now viscerally feeling the economic pressure that factory workers have complained about for decades.
As for the need to adjust our economic system to make the improving technology a blessing rather than a curse for the majority of the population, I agree. I have advocat
Re: (Score:2)
Agreed when you ask a question - I have seen it firsthand many times already.
When it does a translating job, much less so. It is a much less creative job than answering random questions. And in any case, if you can replace 5 translators by only one whose job it is to review the output of your LLM, you've effectively lost 4 jobs.
Re: (Score:2)
It could make things hard for translators, but it's not the destroy humanity Sky Net scenario.
Re: (Score:2)
The greatest near future danger of A.I. comes from people accepting its output uncritically.
So, no different than today's Social Media feeds
Re: (Score:2)
So, no different than today's Social Media feeds
Where do you think ChatGPT gets its misinformation?
Re:Obviously (Score:4, Insightful)
You are mistaking the current state of development for the state a decade from now. Either that, or your worries have a very short time horizon.
Re: (Score:3)
Re: (Score:2)
I don't know the context of what you were doing, but an article I read asserted that the current crop of ChatBots had a severe problem handling negations.
Re: (Score:2)
I used to translate texts for banks, and university communications (Dutch to English). Now I focus on 18th century Latin and German philosophical texts, plus classical Latin.
Re: (Score:2)
Computer translation is terrible, especially for technical subjects ... it's the 10% it cannot translate reliably that is the most important bit ...
What GPT is currently good at, drudge work that humans can do on autopilot, that's where the jobs will disappear ...
Re: (Score:2)
...such as translating stuff from and to foreign languages.
Yes, that is exactly where it's strongest. It's a Large Language Model. It's built to parse language. Once the language is parsed, it's just a tiny hop to translation. Foreign language translators are the ones that should hope they are either young enough to learn a new trade, or old enough to retire. They are in the same boat as typewriter secretaries at the start of the computer age.
For everyone else, LLM's are nothing more than a sometimes useful assistive tool (and frequently wrong even then). They wil
Re:Obviously (Score:4)
Re: Obviously (Score:2)
This sounds like Hollywood. Whatever your area of expertise, you see massive errors in the way Hollywood portrays it.
Re: (Score:2)
have a few decades of experience with this and know that you simply cannot produce accurate, publishable output from machine translation
Oh, so much this (and the problem goes back well before LLMs). Most of them can do a verbatim translation that is technically correct as far as swapping the right foreign word/phrase in for the original, but their grammar is shaky at best and their understanding of how words and phrases are used conversationally is nonexistent. What you end up with is a stilted, pidgin version in your target language that would sound to native speakers of that language exactly like how we Americans portray immigrants trying
Re: (Score:2)
If I've understood TFS correctly, they're saying that the sci-fi scare stories are just a distraction from what AI will mostly be used for & abused, i.e. turbo-charging the kinds of human rights abuses that corporations are already engaging in, & then probably thinking up a few new ways to do it. Us ordinary citizens need protection at the national & international levels if we're not to end up in
Re: (Score:2)
I'm hoping Potor will weigh in on that translation claim. He would know what that actually means.
AI will mostly be used for & abused, i.e. turbo-charging the kinds of human rights abuses that corporations are already engaging in
I've seen that sort of claim before, but I've yet to see anything specific. How will AI "turbo-charge" human rights abuses?
Re: (Score:2)
In the current version, maybe. However, I can already see a few areas where jobs might be in danger, such as translating stuff from and to foreign languages. In all honesty, ChatGPT (the v3, haven't tried the v4 yet) is doing a pretty decent job, even on complex / technical stuff
It's really weird how easily people dismiss it because "it's not true AI" as if we had a good definition for that, or especially by just calling it "it's just autocomplete".
Yeah sure whatever, yet if you ask it to write C code to drive 7-segment displays with shift registers on a microcontroller by explaining what you want in a paragraph, it can do it: https://www.youtube.com/watch?... [youtube.com]
It makes some mistakes but we're what, a few years out from language models being more than toy research projects? It's not
Re: (Score:2)
Chatbots are not AIs, because they don't do mapping from linguistic space to action space. A Chatbot hooked up to a self-driving car could well be a genuine, if limited, AI. (It would take a bit of specialized training, and it depends on the Chatbot being an interface for the car to take directions through and give appropriate responses.)
Re: (Score:2)
A Chatbot hooked up to a self-driving car could well be a genuine, if limited, AI.
What do you mean by "genuine" and what makes you believe this?
Re: (Score:2)
if you ask it to write C code to drive 7-segment displays with shift registers on a microcontroller by explaining what you want in a paragraph, it can do it:
Didn't you watch that absurdly long video you posted? No, it can't.
It makes some mistakes but we're what, a few years out from language models being more than toy research projects?
It's always just "a few years out". The old joke was it was "just 10 years away, since 1960". We know quite a bit about what these models are actually capable of doing and writing computer programs is absolutely not one of those things. It's really neat that you can get something like code out of them, but nothing these models do is anything at all like programming. It's a parlor trick. Take some time to learn about how these models
Re: (Score:2)
Re: (Score:2)
Your jobs are safe
This has not been the case in recent technological shifts. Jobs change, and people need to adapt, which is not always easy or quick. So it's safe to assume that a certain number of people are going to be hit.
Re: (Score:2)
Re:Obviously (Score:4, Funny)
those people should have seen it coming, because their jobs didn't really do anything worthy of human thinking in the first place.
People whose jobs aren't really thinking should have thought of that? Hmm, why didn't I think of that?
Re: (Score:2)
This isn't a "technological shift".
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I expect AI will be used to successfully improve an AI in future.
You shouldn't. That is impossible in all but a few very limited domains.
Re:Obviously (Score:4, Informative)
There is nothing to worry about here. Your jobs are safe
"AI" as we know it today is already reducing jobs. The idea that jobs are safe is based on hiding your head someplace warm, dark, and smelly.
there is no singularity
Completely orthogonal to whether jobs are safe
it's not going to destroy art, music, or anything else
It's already reducing employment in art and music, and soon, everything else. It doesn't have to eliminate any professions to severely curtail the number of jobs in them. (/s #learntocode)
Re: (Score:2)
Additionally, we're still in the ramp-up to the Singularity phase. I expect generally super-human AI to be here around 2035, and the Singularity to occur a very few years later.
During the ramp-up expect changes to occur with increasing rapidity, and a lot of social disorder.
I expect Chatbots to yield center of interest to a genuine, though limited, AI within the next couple of years. But they will continue to cause changes echoing through society even after they cease being the center of interest.
Fear not the Terminator (Score:2)
Rather, you should fear the inbuilt censor of every single electronic device you own.
In the not-too-distant future, when AI can be put on a chip and use "reasonable" amounts of power, it will be placed in EVERY electronic device in such a way that it will interfere with executive functioning. Your band saw will question whether or not you need to cut that material. Your gun will decide whether or not you can shoot that person. Your phone will decide whether or not you can talk to that person about what you a
Re: (Score:2)
I disagree. The danger *IS* real. But it's balanced against the dangers of not developing an AI, which are also real. I think the balance favors developing the AI, but with reasonable safeguards.
The problem is that civilization is a system too complex for anyone to understand. We've already come within minutes of ending it through WWIII. We *need* a way to avoid that danger, and a competent AI is the only one that presents itself. But the AI itself is dangerous, particularly while it's not completely
Re: Obviously (Score:2)
My problem is that sure, AI isn't really that competent. But it scales so well and is so cheap, and the hype is very profitable right now. Why wouldn't businesses switch jobs to AI workflows, getting more work done for a lower cost? The only obstacle is that consumers need to accept the lower quality result. But I feel like we've overcome that obstacle before. Three cheers for capitalism!
AI Is The Bloodflow (Score:3)
Humanity is the coagulator.
I'm torn. (Score:5, Interesting)
I do suggest anyone willing to immediately write it up, go look on youtube and find a guy "Robert Miles", a researcher from Nottingham Uni, and his videos on AI r. Particularly on the Stamp collector problem (Usually called the Paperclip maximizer) , and Instrumental convergence.
With that said, I think the rise of GPT has kind of thrown the whole game for a bit of a loop. The assumption that AI safety research has run with has been that AIs would be these giant super-optimizing utility maximizers, that you could say 'Fetch me the maximum number of paperclips' and it ends up converting all the iron in the planet , including your blood, into paperclips. But the LLMs just dont seem to think that way and seem more like people simulators that try and do a rough simulation of a person to try and predict what that simulated person would say.
In other words just assuming these things would be hardcore utility maximizing inference engines seems to completely miss how a neural network actually 'thinks'.
So yeah I do share some concerns about super AI, I'm not convinced its going to be a problem for the same reason many of the ai safety researchers think it will be however, because I just dont see the current trajectory heading towards giany superoptimizers.
I *am* worried however about what malicious humans will do with it however. I'd also advise looking up a video "ChaosGPT: Empowering GPT with Internet and Memory to Destroy Humanity" which is a demo of what happens if you intentionally give AutoGPT a very malicious goal. Thankfully GPT is dumb as a plank. But it might not be forever.
Re: (Score:2)
Goddamn I type like a drunkard on my phone if I dont have my glasses on.
Corrections:
"I do suggest that anyone willing to immediately write it OFF" not "write it u"
AI safety research" not "AI r"
"GIANT superoptimizers" not "giany superoptimizers"
This website desperately needs an edit button.
Re: (Score:2)
There is a preview button, which you need to hit before you can click on submit. Reading over your post before hitting submit lets you edit it. Once it's submitted, allowing edits could cause confusion about the replies that follow.
Re: (Score:2)
Which would be great if these stupid bloody eyes of mine could actually read. Getting old is trash.
Unless Slashdot readers are unusually stupid, I can't see why it would cause confusion. Literally every other major website of the past 20 years has had the feature and it hasn't been a problem.
Re: (Score:2)
Re:I'm torn. (Score:4, Insightful)
It's not Luddism. The Luddites recognized the problem, they just didn't have any good solutions. The folks in denial are just refusing to see the problem. Some of them really do believe that what they see is as good as AI is going to get. Why they believe that I do not know.
Yay Economics (Score:5, Interesting)
AI is (at least for the next few decades) just going to be a very powerful tool that will allow us to do a lot of boring mundane tasks with much less effort.
The bigger problem is that despite nearly 200 years of industrialisation, we have not been able to create an economic system where a tool that will make us all richer doesn't terrify a large section of the population into believing they are all going to be thrown into poverty. I find that quite amazing and a huge failure of leadership.
In fact, for the last couple of decades, we've made the problem even worse, by destroying the ability of large swathes of the population to acquire any capital, which means that those people cannot gain the rewards of capital improvements (which is what AI is), yet are fully exposed to having to compete with that capital. This is a dumb situation and was not what was originally sold to the middle class when markets were deregulated in the 1980s (remember the property owning democracy).
This growing group of precariats is more than likely going to overthrow the present system if nothing is done to improve their situation, which means we get some random system to replace it - probably a form of authoritarian dictatorship like China.
If capitalists were smart, they would be the ones driving ways to reform capitalism that would ensure its survival. Instead I predict that what should be a wondrous moment for humanity (the elimination of almost all mundane work) will become a huge mess. I guess it's not dissimilar from WW1/2, which in many respects were caused by the upheaval in society due to rapid technological progress. It really just feels like we are on the idiot train to the same place again.
Re:Yay Economics (Score:4, Insightful)
The bigger problem is that despite nearly 200 years of industrialisation, we have not been able to create an economic system where a tool that will make us all richer doesn't terrify a large section of the population into believing they are all going to be thrown into poverty. I find that quite amazing and a huge failure of leadership.
It is a huge failure of leadership, but not because of what people believe, but because those people are right. Even a cursory glance at history tells us that. We are not meeting the needs of the bulk of the people on the planet now despite humanity having more than enough resources to do so. What causes any fool to imagine that AI won't make this worse?
This is a dumb situation and was not what was originally sold to the middle class when markets were deregulated in the 1980s (remember the property owning democracy).
Advertising is usually bullshit.
If capitalists were smart, they would be the ones driving ways to reform capitalism that would ensure its survival.
They might be smart, but they're greedier than they are intelligent. Also, the truth is that even most of the wealthy have little to no power to change the system that they profit from. If they tried, markets would react and they would rapidly be worth a lot less. The whole thing is "designed" (to the extent that's true) to eat its own young.
Re: (Score:2)
The capitalists at the very top of the pyramid (meaning people like Bezos, Musk, Gates, etc.) are smart. It's just that they don't see capitalism as a means of providing for everyone. They see it as a game with a high score list, and they want to be as high on the list as possible, just as one would at an arcade.
You can't blame the people on the bottom, with very limited means and skillsets, for being afraid that AI will make their limited skillsets obsolete, and in the process deprive them of what little m
Re: (Score:2)
If capitalists were smart, they would be the ones driving ways to reform capitalism that would ensure its survival. Instead I predict that what should be a wondrous moment for humanity (the elimination of almost all mundane work) will become a huge mess. I guess it's not dissimilar from WW1/2, which in many respects were caused by the upheaval in society due to rapid technological progress. It really just feels like we are on the idiot train to the same place again.
Holy Insightful, Batman; however, the people who are benefiting the most from the current system don't care. They just want to extract the maximum amount possible. They do not care about the wasteland left behind or the lives destroyed. The psychological mechanism at play here is: I've got mine and I will ensure that you do not get yours so you can not become a threat to me getting mine. It is working wonderfully (until the resources are gone).
Re: (Score:2)
Not only is there the economics angle, but there is also the legal angle. There are the economics drivers of "Let's do it to make money!" but there is also the legal angle of "if we deploy this, we're either going to prison or get the death penalty under international law".
Right now today, the only thing preventing fully autonomous killing machines is international treaties. International law restricts creation of devices that aren't human-involved, mostly under the name of booby traps and mines, but t
Re: (Score:2)
Re: (Score:2)
So far, it's doing the opposite. I'm pretty sure most people, if they didn't have to work, would rather paint, create music, make videos, write, or pursue other recreational activities. And let things like computers do the hard work.
Instead, ChatGPT seems to have taken over that stuff, while we're still forced to do the hard work.
It is not but in different way... (Score:5, Insightful)
Same as with the Internet - we are completely unprepared for evil actors using AI...
So it is not AI that we should worry about - it is evil people who will get even more power...
Re: (Score:2)
So it is not AI that we should worry about - it is evil people who will get even more power...
Certainly, there are evil people in the world. But accidents also happen, and we could agree that there are far more careless or incompetent people than evil ones.
As you remove people from the loop to reduce costs, you increase the possibility of slips and mistakes, and probably their impact too. This is something that we should take great care to balance with the benefits.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Evil individuals will be a problem; however, that problem pales in comparison to Government. AI censors will, after they become mature enough, be placed into every tool/item you own to decide whether or not you should be using that tool in that way. Regardless of its decision, it still pushes all of the data up to the 'cloud' (I hate that word) where heavy-duty AIs that eat up enough energy to power the entire planet go through all of your actions, judging you. Every single infraction can be tracked and p
Re: (Score:2)
Evil individuals will be a problem; however, that problem pales in comparison to Government
Governments are collections of people and words. An evil government is evil people.
Re: (Score:2)
Governments are collections of people and words. An evil government is evil people.
Understood. Taking that knowledge and applying it elsewhere, I conclude: The German people during World War 2 were evil people.
Is that REALLY the takeaway you want from this? (The population of North Korea agrees with you!)
Re: (Score:2)
Governments are collections of people and words. An evil government is evil people.
Understood. Taking that knowledge and applying it elsewhere, I conclude: The German people during World War 2 were evil people.
Most of the German people during WWII were not working for the government, let alone in decision-making positions therein. Therefore that doesn't make any sense, and you're obviously twisting my words to try to make them mean something they were clearly never intended to mean. That's not an honest discussion.
Re: (Score:2)
... you just violated why this discussion started to begin with.
You said this in response to my concerns about an AI censor, "Governments are collections of people and words. An evil government is evil people."
I assume you meant this to mean something like: "you get what you ask for", or, "you vote, therefore, if it happens, you deserve it".
My counter-argument about the German people is that the people were not evil and yet their government was doing evil things. Why do you think America is any different? W
I have a good idea (Score:2)
Betteridge's law of headlines (Score:3)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Tech doesnt deliver its promises (Score:2)
Tech has consistently failed to deliver on its promises. The 90s promised the paperless office; it doesn't really exist. The internet promised to free information for the masses. Today we have paywalls, silos, and troll farms. Social media promised to connect us, but instead it's done the opposite. Whatever they are promising us with AI, self-driving cars, etc., we will get the opposite.
Re: (Score:2)
The paperless office has been here for ages.
I last printed something maybe a year ago. I've got a color laser printer that's been gathering dust for 10 years because it sometimes has trouble printing, and the need to print in color isn't there anymore. At work the last time I printed something was because I needed to test that the program I'm working on can print successfully, so at the actual office I almost exclusively print test pages.
Re: (Score:2)
It's not really paperless, that was overselling by some marketeer. And what the GP should have said was that tech never delivers on marketeers' promises. That's pretty nearly correct. Whatever we come up with, they'll over-promise on. Sometimes a lot, sometimes only a little. (And actually, sometimes they'll just ignore it.)
OTOH, I expect that in a decade or so the offices really WILL be paperless. That won't necessarily be an improvement. Even now I frequently find that something only being availabl
The problem is people exploiting AI, not AI (Score:4, Insightful)
"It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse," a four-person team of researchers opined recently. "Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities."
Current 'ai' is just a disruptor (Score:4, Insightful)
It's coming faster and harder than we can easily adjust to, but essentially it's going to eliminate a lot of menial mental work like the Industrial Revolution eliminated a lot of menial physical work. It's not the end of the world, it's just going to be very uncomfortable dealing with the change.
Actual human-level AI? If we ever figure that one out, that's a threat to humanity. Not because they'll come for us in the night with their cold metal hands, but because once there's a machine that can do everything a human can - only better - there's not much point in doing anything.
You think we're turning into a species of couch potatoes now? Wait until there is no hope of ever being reasonably good at anything compared to the existing talent, and that'll be true for your whole life... which is a fraction of the length of the lives of the smarter, more creative beings doing all the things we asked them to do.
That is, of course, unless some rich and powerful sociopath doesn't use true AI to create an army of killer robots to take over the world. Then it's 'cold metal hands' time.
You are all being punked (Score:2)
TFA was written by ChaosGPT in a ploy to generate more ideas about how to destroy the world.
More realistically ... (Score:2)
machines could suddenly surpass human-level intelligence and decide to destroy mankind
They don't need to decide to destroy mankind ... that risk is from a distant future where AGI is autonomous, with its own goals, making its own decisions, and with the agency to execute on them. Even then, it assumes that we've either given it control over sufficiently dangerous aspects of our infrastructure, or that it can gain access via hacking. None of these are impossible, but this is all distant future and detracts from the more immediately realistic threats.
The short term more realistic threat is not a
Re: (Score:2)
It's not that far distant, but I consider it a low-probability event. I put the time when it could reasonably happen at about 20 years from now, perhaps a bit less. But I consider it rather improbable that they/it WOULD so decide, because I feel they'd be designed to avoid making that decision.
OTOH, before that point, when the AIs are submissively under the control of various power-hungry human groups with various different aims I consider quite dangerous, and probably in ways we haven't thought of yet (a
I believe you are looking in the wrong direction. (Score:2)
Re:I believe you are looking in the wrong directio (Score:4, Interesting)
I have seen that people's biggest fear about AIs (real ones, if any ever appear, not these attempted fakes) is actually having to deal with an entity (the AI) that does not share the same idiosyncrasies as them, such as religion, racism and other isms. People are scared to death of anything they cannot control, regardless of whether it is beneficial or not.
How would they know whether or not it is beneficial to them?
Especially the people who run the planet: how are they going to manipulate the instincts and irrational impulses of an entity that has no instincts and is purely rational? They would have to appeal to rationality, and their domination arguments don't hold water when viewed from a rational angle (that's why they manipulate feelings; it's much easier).
I'm personally scared to death of hubris: this notion that any sufficiently advanced intelligence necessarily aligns with some magical, self-evident precept of righteous, benevolent behavior following the noble eightfold path or some such anthropomorphized bullshit.
Human sensibilities are anchored in nature, hard-coded into the mind. There is no reason to assume a superhuman AI would necessarily "care" about anything at all, including itself.
The fitness (rationality) of a decision depends entirely on the objective function. Anything can be rationally justified with the proper agenda.
Hockey stick, not LLM (Score:2)
LLMs only need to worry us to the extent that humans believe their bias and nonsense.
The rapid acceleration of LLM quality, when applied to other models, might be something to watch.
It doesn't have to be general AI (Score:2)
The wealthy elite won't even allow gun control. (Score:2)
The real danger of AI (Score:2)
Don't use AI. AI will make you fat.
No. (Score:2)
> Is Concern About Deadly AI Overblown?
Yes. AI is safe.
Sincerely,
ChatGPT
Bard
Skynet
Re: (Score:2)
Re: (Score:2)
If you don't understand why it was difficult to predict, and easy to see in hindsight, you don't understand how your mind works.
A huge amount of the progress is due to the progress in hardware, a huge amount is due to the increased amount of data available, and only a very tiny bit is due to deeper theoretical understanding. The basic approach is still analogous to hill climbing or thermal relaxation. The approach has been used since the 1960's, without much success. But suddenly adding a bit of improved
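The "hill climbing" analogy mentioned above can be sketched as a toy: accept a random neighbor only if it improves the objective. This is illustrative only (the function `peak` and all names here are invented for the example); modern networks are trained with gradient descent, not random search, but the greedy keep-what-improves structure is the same basic idea.

```python
import random

def hill_climb(f, x, step=0.1, iters=1000, seed=0):
    """Greedy hill climbing: try a random nearby point, keep it only if f improves."""
    rng = random.Random(seed)
    best = f(x)
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        value = f(candidate)
        if value > best:  # accept only improvements
            x, best = candidate, value
    return x, best

# Maximize a simple concave function whose peak is at x = 2.
peak = lambda x: -(x - 2.0) ** 2
x, v = hill_climb(peak, x=0.0)
```

The weakness the comment alludes to is also visible here: the method only ever moves uphill locally, so on a bumpy objective it gets stuck at the nearest local optimum; the surprise of the last decade is how far scale and data carried this family of methods anyway.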
Re: Irrelevant (Score:2)
Correct, and the arse-headed AI output makes its way onto the internet for future arse-headed AI to use as input, effectively spamming the internet with stupidity even more quickly than humans are doing.
Re: (Score:2)
>Science very much does not say "humans are just Physics". The question is very much open.
No, that's just something religious people say and it's entirely irrational.
Science says we can figure out stuff by observation and experiment and looking for consistent results. It also says everything we figure out is physics and its emergent properties.
Re: (Score:2)
Nope. Physicalism is religion, period. Science does _not_ claim that currently known Physics (in short: "Physics" or the term becomes meaningless) explains everything. Only religion claims to explain everything.
Re: (Score:2)
I don't know what to tell you, but when I need something in my body fixed, I go to a doctor that treats it as a physical issue. Something needs injecting, or a pill, or surgery. Things like vaccines are developed through physical means -- chemistry, microscopes, etc.
I've yet to see something related to the human body that isn't a physical issue.
Re: (Score:2)
This is about the human mind.
Re: (Score:2)
If there's something broken about that you should probably get some evidence-based treatment for that as well.
Re: (Score:2)
Well, religious fanatics will stick to their misconceptions and are convinced they have truth. What else is new. Actual Science requires understanding, not belief.
Re: (Score:2)
"Physical", as you've used it, can be either religious or definitional in nature. Without a careful definition of what you mean by "physics" you can't tell the difference. Note that physics got extended to include quantum inseparability, so things that once weren't considered physics now are.
If you carefully define your terms, everything actual can end up being considered either physics or some result of physics. Even things you don't yet know about. Whether that's the definition that
Re: (Score:2)
Science assumes reality is built on consistent and (mostly) discoverable rules, and that more complex phenomena emerge (weakly) from these. This is more or less the fundamental assumption of science. It's tested constantly; you personally are testing it many, many quadrillions of times per second just reading this, even ignoring your neural activity. There's rather a lot of evidence in favour, and essentially none against. Physicalism may not be technically synonymous with this principle, but in practice it
Re: (Score:3)
Current "AI" has zero AGI, zero insight and zero understanding. It cannot make "decisions".
It understands enough to provide useful answers to novel questions.
It cannot make "decisions".
"AI" has been making decisions for decades.
Any "AI" that surpasses this is not even on the distant horizon. In fact, there is not even a credible theory of how it could be done. (No, Physicalism is religion. Science very much does not say "humans are just Physics". The question is very much open. Anybody who makes a different claim would need to conclusively prove it. But there is nothing that is scientifically sound.)
Find it refreshing listening to Mr. everyone else is an idiot entertaining outlier theories in which his kind is "special" in a magical sort of way.
Re: (Score:2)
Well, ignore the facts, believe stupid crap. You are in good company there, though.
No, AI cannot answer novel questions. It needs a statistical baseline to answer anything, and that precludes "novel". It cannot make decisions and never has done so. All it can do is deliver numbers. Decision making requires insight; a mechanical discriminator does not qualify. Otherwise you could claim that a light switch "makes decisions". That is obviously nonsense.
As you think Physicalism is Science, care to deliver some eviden
Re: (Score:2)
Well, ignore the facts, believe stupid crap. You are in good company there, though.
No, AI cannot answer novel questions.
Yes, AI is able to answer novel questions.
It cannot make decisions and never has done so.
AI has been making decisions for a long time.
Decision making requires insight, a mechanical discriminator does not qualify.
Disagree with both definition and related assumption.
As you think Physicalism is Science, care to deliver some evidence for its claims? No, "obviously it is so" does not meet scientific standards. So far that is all the Physicalists have and that is just the same bullshit religion has pulled for thousands of years.
I assume no such thing. I like to think of myself as an avoidalist: I avoid pointless philosophical entanglements. For example, the question of whether or not god hands out souls to his creations is pointless, because whether or not it is true has no discernible real-world impact. Likewise, whether or not invisible space aliens meddle in human affairs is indiscernible
Re: (Score:2)
You're missing the point. Chatbots are NOT intelligent. They are, however, a PART of a working intelligence. That other parts are needed doesn't denigrate them. It merely means that they aren't a complete AI. Neither is your prefrontal cortex or your hippocampus. But both are necessary parts of making you a functional intelligence. (The hippocampus is arguably more important in making you a functional intelligence than is the prefrontal cortex, even though it makes fewer logical inferences.)