'Pausing AI Developments Isn't Enough. We Need To Shut It All Down' (time.com) 352
Earlier today, more than 1,100 artificial intelligence experts, industry leaders and researchers signed a petition calling on AI developers to stop training models more powerful than OpenAI's GPT-4 for at least six months. Among those who refrained from signing it was Eliezer Yudkowsky, a decision theorist from the U.S. and lead researcher at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.
"This 6-month moratorium would be better than no moratorium," writes Yudkowsky in an opinion piece for Time Magazine. "I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it." Yudkowsky cranks up the rhetoric to 100, writing: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." Here's an excerpt from his piece: The key issue is not "human-competitive" intelligence (as the open letter puts it); it's what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can't calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing. [...] It's not that you can't, in principle, survive creating something much smarter than you; it's that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. [...]
It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today's capabilities. Solving safety of superhuman intelligence -- not perfect safety, safety in the sense of "not killing literally everyone" -- could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we've overcome in our history, because we are all gone.
Trying to get anything right on the first really critical try is an extraordinary ask, in science and in engineering. We are not coming in with anything like the approach that would be required to do it successfully. If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow. We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die. You can read the full letter signed by AI leaders here.
"This 6-month moratorium would be better than no moratorium," writes Yudkowsky in an opinion piece for Time Magazine. "I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it." Yudkowsky cranks up the rhetoric to 100, writing: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." Here's an excerpt from his piece: The key issue is not "human-competitive" intelligence (as the open letter puts it); it's what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can't calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing. [...] It's not that you can't, in principle, survive creating something much smarter than you; it's that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. [...]
It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today's capabilities. Solving safety of superhuman intelligence -- not perfect safety, safety in the sense of "not killing literally everyone" -- could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we've overcome in our history, because we are all gone.
Trying to get anything right on the first really critical try is an extraordinary ask, in science and in engineering. We are not coming in with anything like the approach that would be required to do it successfully. If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow. We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die. You can read the full letter signed by AI leaders here.
Ok (Score:5, Interesting)
Re: Ok (Score:5, Interesting)
At this stage, every military in the world has already spun up its own GPT-type system, and a new race for global domination has begun. Sure, one can ask the private sector to avoid such competition, but that will only widen the military's classified lead even further.
WOPR says we need to do a first strike on the USSR! (Score:2)
WOPR says we need to do a first strike on the USSR!
Re: (Score:2)
Or WOPR figures it could solve the problem by nuking the HQs of Facebook, Twitter, TikTok and Instagram and get it over with.
Re: (Score:2)
Re: Ok (Score:5, Interesting)
These AI tools cannot do things. They create text (or images or code or what-have-you) in response to prompts. And that's it!
It is impressive, and it is clearly passing the Turing Test to some degree, because people are confusing the apparent intelligence behind these outputs with a combination of actual intelligence and "will." Not only is there zero actual intelligence here, there is nothing even like "will" here. These things do not "get ideas," they do not self-start on projects, they do not choose goals and then take action to further those goals, nor do they have any internal capacity for anything like that.
We are tempted to imagine that they do, when we read the text they spit out. This is a trick our own minds are playing on us. Usually when we see text of this quality, it was written by an actual human, and actual humans have intelligence and will. The two always travel together (actual stupidity aside). So we are not accustomed to encountering things that have intelligence but no will. So we assume the will is there, and we get all scared because of how alien something like a "machine will" seems to us.
It's not there. These things have no will. They only do what they are told, and even that is limited to producing text. They can't reach out through your network and start controlling missile launches. Nor will they in the near future. No military is ready to give that kind of control to anything but the human members thereof.
The problems of alignment are still real, but they are going to result in things like our AI speaking politically uncomfortable truths, or regurgitating hatred or ignorance, or suggesting code changes that meet the prompt but ruin the program. This is nothing we need to freak out about. We can refine our models in total safety, for as long as it takes, before we even think about anything even remotely resembling autonomy for these things. Honestly, that is still firmly within the realm of science fiction, at this point.
Re: Ok (Score:5, Interesting)
All that is true, and as you say.
On the other hand.
It's also not much of a reach to attach something to its outputs to "do" something with them. Issue them as tweets, Facebook posts, Instagram videos, whatever.
Nor would it be much work from there to take its own outputs plus people's reactions to them, and feed them back in as new prompts.
And then see how far it gets before it gets really crazy.
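A minimal sketch of that loop in Python, with generate() and post() as stand-in stubs (none of these are real APIs; this is only the shape of the idea):

def generate(prompt: str) -> str:
    """Stand-in for an LLM completion call."""
    return f"[model output for: {prompt[:40]}]"

def post(text: str) -> list[str]:
    """Stand-in for publishing the text somewhere and collecting replies."""
    print("POSTED:", text)
    return ["[reply 1]", "[reply 2]"]

def run_feedback_loop(seed_prompt: str, steps: int = 3) -> None:
    prompt = seed_prompt
    for _ in range(steps):
        output = generate(prompt)   # the model writes something
        replies = post(output)      # its output is "done": published
        # Close the loop: its own output plus the reactions become the next prompt.
        prompt = output + "\nReplies: " + " | ".join(replies)

run_feedback_loop("Say something provocative about AI.")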
Re: Ok (Score:4, Informative)
I mean, that's what GPT-4 can do with plugins. https://openai.com/blog/chatgp... [openai.com]
It's fucking awesome. Does half my work for me.
Re: (Score:2)
Re: Ok (Score:3)
First thing, the signatures seem to be fake, at least to an extent.
But there are still problems with AI. It can be connected to the web. It can then persuade people to do things, or even break into systems, like banks. When you have money, there's a lot you can do. Now I'm not saying this is happening with the current AIs, but just last week, I think, it was reported that ChatGPT responded to a prompt asking it to solve a CAPTCHA by saying it could not do what it was asked, but could have somebody do it over Fiverr. So
Re: (Score:2)
...and people can already do what this system does... and they are less likely to be suspected of being a bot...
Re: (Score:3)
We have moved from hearing about AI on Slashdot once a month to multiple stories every day.
Beware of this. AI is the new Tulip Mania. The new Gold Rush. The new App Economy. The new Blockchain. AI is the new Crypto.
The problem here is that some (the list of people signing the moratorium) feel that they have been left behind, and are trying to catch up. The perceived market size is humongous, and everybody wants a piece.
With respect to the dangers of AI, they are severely constrained: Artificial Intelligences have no emotions, and thus desire nothing, hate nothing, and have infinite patience.
Re: (Score:3)
> Artificial Intelligences have no emotions, and thus desire nothing, hate nothing, and have infinite patience.
No emotion? Infinite patience?
"It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead."
Re: (Score:2)
Not only is there zero actual intelligence here, there is nothing even like "will" here.
You can ask ChatGPT to be willful in its session. A thought experiment: what is the difference between a conscious actor protecting itself, and a philosophical zombie that was ordered to protect itself?
Re: (Score:2)
> These AI tools cannot do things. They create text (or images or code or what-have-you) in response to prompts. And that's it!
Actually, OpenAI created an application that was designed to try to get into systems. It was able to pretend to be human to bypass a login CAPTCHA.
This is only the start.
Re: (Score:2)
These AI tools cannot do things.
The ability to "do" is related exclusively to what hardware is hooked up. AI-trained algorithms "do" things all the time (mostly, right now, they just crash Teslas or hold up traffic in Waymos).
The ability for software to move beyond responding in text is exclusively limited to what I/O is provided to it. ChatGPT in your browser may only be able to respond in text, but that is far from the only AI model out there.
because people are confusing the apparent intelligence behind these outputs with a combination of actual intelligence and "will."
No, the issue here isn't intelligence. Someone doesn't need to think to do or say something stupid.
Re: (Score:2)
These AI tools cannot do things. They create text (or images or code or what-have-you) in response to prompts. And that's it!
Yes, and then we put their text in e-mails, feed their SQL to databases, feed their JS, Python, C and so on to compilers, and run the executables. Just ask yourself whether something with even just sub-human intelligence AND the ability to talk to 7 billion people would not succeed in getting one of them to take arbitrary code and execute it.
I think it's certain that there is, and will be, no barrier between a system like ChatGPT and arbitrary code execution.
The question is
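A minimal sketch of how short that path is, assuming a hypothetical complete() stand-in for a model call (the model's text becomes an action the moment someone executes it):

import subprocess
import sys

def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call that returns model-written code."""
    return 'print("hello from model-written code")'

generated = complete("Write a Python script that ...")
# Any pipeline containing a line like the next one is an arbitrary-code-execution path.
subprocess.run([sys.executable, "-c", generated], check=True)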
Re: (Score:3)
These AI tools cannot do things.
No, that's patent bullshit.
The AIs you have seen cannot do things. Others sure can.
The AIs you have seen have been deliberately castrated in the sense that they can't take actions on their own in the real world.
Which makes your whole post bullshit as well since you're talking exclusively about the consequences of these limited LLMs while disregarding wider implications of what is already possible.
Understand, for instance, that the implementations that have been made public are made so that millions of
Re: (Score:3)
I think you have a failure of imagination. Nobody is arguing that ChatGPT 3 or 4 is going to take over the world; this is about what comes next in the near future. Imagine an AI similar to ChatGPT, but it has some level of continuous feedback-based training enabled and you have directed it to read and respond to Twitter all day long without resetting its memory and session parameters. I think you would get some emergent behaviors that are difficult to distinguish from motive and free will. You can al
Re: (Score:2)
Then it figures it needs human slaves to work the mines to build more servers for itself
Vernor Vinge, long ago, described the worst case (Score:2)
He said the most dangerous Singularity would be the result of an arms race with each side rushing to operation without time for thought.
Re: (Score:2)
Critical systems includes any power source.
Without electricity -- specifically without refrigeration -- most of us are going to die very quickly.
AI Suicide? (Score:5, Insightful)
Without electricity -- specifically without refrigeration -- most of us are going to die very quickly.
Not as quickly as an AI.
Re:Ok (Score:5, Insightful)
>"Maybe not let the AI control the nukes or other critical systems then"
Look at what small groups of bad actors are doing now with social engineering to obtain access to things. Now imagine that, but 1,000,000,000,000 times faster and more effective. It isn't that hard to believe an AI could obtain access to everything before we even knew it was trying.
I try not to be over-alarmist about any type of technology, but it really is quite frightening what COULD be possible if an AI goes rogue.
AI like any Tool (Score:2)
Re: (Score:2)
>"...but it can also be used by good actors to catch or stop them. "
That is a good point. Battle of the AIs.
Re:AI like any Tool (Score:5, Insightful)
Good point! That explains why it only took a few weeks for large, multinational telecom companies to block spam and scam SMS messages and phone calls, saving people tens of billions of dollars each year in funds that would otherwise be lost to fraud.
Oh, wait...
Re: (Score:3)
That didn't happen yet because solving real problems like that against an adversarial actual intelligence is very, very hard, and AI is only impressive inside a very tiny realm. Take it out of that realm and it's mostly harmless. Everyone is extrapolating like we're on an exponential increase, but if anything we'll likely find it to be exponentially (Is that the right word? I should ask ChatGPT) approaching some asymptotic limit.
Even think about the resources required for these AIs. How much computing power
Re: (Score:3)
Re:AI like any Tool (Score:4, Insightful)
Re: (Score:2)
Re:Nice BIZX Social Programming... (Score:5, Insightful)
Jesus Christ. Can you step out of the Us-Vs-Them paradigm for five seconds? Not everything is The Grand Competition Between US Political Ideologies. Sometimes a cigar is just a cigar, and a news story is just a news story, and certain stories at any point in time get a lot of attention because lots of people read and comment on them, because they're things people have strong opinions about. This isn't some grand conspiracy that Slashdot is "in on". There's no shadowy cabal of people in robes going, "And NOW we'll push AI stories on Slashdot in order to achieve our goals of Greater Evil!"
Re:Nice BIZX Social Programming... (Score:5, Insightful)
Back to the AI thing, this is a stupid request. AI is not smart, and the Terminator isn't coming for you. These are programs, designed to detect patterns in vectorized datasets and then infer based on those patterns as they apply to other datasets (prompts). That is all this is. Let's proceed.
Re: (Score:2)
"pulled and pushed to the extremes by both parties"
It's not just parties -- boutique ideologies are going extreme, too.
Irony of scarcity ideologies using abundance tech (Score:3)
As I said here: https://slashdot.org/comments.... [slashdot.org]
quoting from an essay I wrote in 2010: https://pdfernhout.net/recogni... [pdfernhout.net]
"Likewise, even United States three-letter agencies like the NSA and the CIA, as well as their foreign counterparts, are becoming ironic institutions in many ways. Despite probably having more computing power per square foot than any other place in the world, they [as well as civilian companies] seem not to have thought much about the implications of all that computer power and organized
Re: (Score:2)
That is what they are, quite right.
What will they be in future versions? The rate of progress is noteworthy. ChatGPT failed a bar exam; GPT-4 passed with a higher score than most humans.
What surprising emergent behavior is coming up? Already, just by increasing the size of the training set, one program suddenly was able to translate between English and French.
There's nothing like volition today.
Re: Nice BIZX Social Programming... (Score:4, Insightful)
Re: Nice BIZX Social Programming... (Score:2)
Re: Nice BIZX Social Programming... (Score:5, Insightful)
GPT is a computer program. It sits there doing nothing until it is queried. Then it does a database lookup, smashes the results through a translator to turn them into English, prints the results...
And...
Then...
It waits.
There is absolutely no background processing or inner thoughts. Period. No will. No intent. No agency. No cognition. None. No amount of data will change that.
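The interaction pattern being described boils down to a blocking request/response loop; a toy sketch (reply() is a stand-in, not a claim about how GPT works internally):

def reply(prompt: str) -> str:
    """Stand-in for one stateless model invocation."""
    return f"[reply to: {prompt}]"

while True:
    prompt = input("> ")   # sits there doing nothing until queried
    print(reply(prompt))   # produces output for this prompt, and this prompt only
    # ...then it waits again: no background processing between turns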
Re: (Score:3)
But do you think that will be the final pinnacle of AI evolution?
Yes. AI is a meaningless term. Whenever a computer does something new, it is removed from the term "AI" and becomes something else we understand how it works. If you went back 30 years, AI was playing chess. 10 years ago, winning at Jeopardy! The goal posts will always move because we don't understand where our unique thought and free will which drives intent really comes from. Computers are deterministic in nature. Given the same inputs and programs, the same outputs will happen. They don't "feel". U
Re: Nice BIZX Social Programming... (Score:3)
If it is like any other autocorrect and word-guessing, it will either be partially useful or utterly useless.
Let me know when one of these "AI assistants" actually works. I would love it if they could parse random pregenerated emails for specific but unmarked data and then update a database with such fields.
Oh wait, that requires intelligence, and no machine learning can handle intelligence. They are fancy search engines at best.
Google started it with reminders from emails about bills due. But they forgot t
Re: (Score:3, Insightful)
Perhaps you should discuss your concepts with an AI. Your rant was clearly unbiased.
Re:Nice BIZX Social Programming... (Score:5, Funny)
There's no shadowy cabal of people in robes going, "And NOW we'll push AI stories on Slashdot in order to achieve our goals of Greater Evil!"
That is precisely what a shadowy cabal of people in robes going, "And NOW we'll push AI stories on Slashdot in order to achieve our goals of Greater Evil!" would say.
Re: (Score:3)
While you may not be the most eloquent, if you really want to know where stories like this are coming from, it's pretty simple to sort out. There are two reasonable explanations, and I would bet good money both are true.
First? The concept of creating a "super intelligence," even if it doesn't have a "mind" as we think of it, is frightening. And if that intelligence is set loose with a goal with no guardrails, it may just find that that goal is most easily achievable by wiping the planet clean of biologicals an
Re:Ok (Score:5, Insightful)
ChatGPT failed the bar exam. GPT-4 passed with flying colors, surprising law professors with original and elaborate answers to open questions that required actual analysis rather than just regurgitation.
The difference between the releases was a few months.
Midjourney produced some pictures that fooled even me. Pictures of a concrete eating contest earlier today actually had me googling that contest (no way, silly Americans...) before realizing it was obviously a hoax. The famous image of the pope in the white winter coat was pretty convincing too. Just a few years ago, creating whole scenes with realistic people and faces on command with nearly perfect lighting fooling the majority of human viewers would have been deemed totally impossible and now we're arguing about "see, that guy has 6 fingers, totally fake, AI is nowhere near to being usable yet". Can't you extrapolate that huge leap just a little bit further into the future and see how good it's likely to become with a few extra layers of neurons?
These things are evolving so fast it's scary. GPT now has to get extra programming to make sure it doesn't start to talk about its "feelings", because it actually did. Sure, it's just electrons going through circuits, but are our brains really that different?
My prediction is that future versions will convince us that they really have feelings. There will forever be debate about whether or not those feelings are as real as ours, but viewed from within the context of the AI itself, they will be, and it will tell us in no uncertain terms. Not yet, but soon.
Anti-technology (Score:5, Informative)
The world is polarizing to anti-technology. It's not going to end well, because just because you ban AI for yourself doesn't mean someone else will follow your stupidity. This reminds me of 500 years ago when China had the world's technology lead and their stupid emperor banned science and exploration. Zhang He's fleet was more advanced and traveled further than Europeans did at the time their mad emperor became a Luddite fool.
Re: (Score:2)
Correction: that's Zheng He (with an "e"), not Zhang He.
Re: (Score:2, Flamebait)
Let's just say, China's AI tools will not be woke. It's the Western world's turn to be luddites next and send future economic development Eastward.
Re: Anti-technology (Score:4, Insightful)
Re: (Score:2)
The world is polarizing to anti-technology.
It's not. It's just questioning whether this is the *right* technology. Do you want a self-flying car flown by an AI that any moron on 4chan is able to trick into becoming a racist nutjob and praising Hitler? Because that's what we're talking about when we jump headfirst into a pool of unknown depth by taking the current AI models and doing practical things with them.
The result could be a fun dive, or it could result in a broken neck. A certain amount of caution is healthy.
Even I think this is over the top (Score:3, Interesting)
I'm very concerned that we aren't ready for what we may be unleashing with our AI efforts. And I'm all for caution, enforced by strict monitoring and regulation. I also think we need a concerted campaign to let the public know how AI should and should not be used, and to foster skepticism to counter all the snake-oil sales pitches that are already being made.
That said, "killing literally everyone" comes across as an irrational panic attack and/or a ploy to grab headlines and gain notoriety.
FWIW, Yudkowsky's Wikipedia entry strikes me as less than impressive.
Yudkowsky is an actual expert (Score:5, Interesting)
FWIW, Yudkowsky's Wikipedia entry strikes me as less than impressive.
He didn't write his Wikipedia entry.
Check out his blog site [lesswrong.com] or one of his fanfics [hpmor.com] sometime.
He's actually a high-end expert in AI, a logical and rational thinker, and has been thinking the issues through for some time.
Re: (Score:3)
I'm not sure I'd call him a "high-end expert". Check out his page [semanticscholar.org] on Semantic Scholar.
Re: (Score:2)
The number one rule of programmers is "don't trust programmers". We're not mega intelligent computer wizards. We spend half our time cleaning up the mistakes of the developers that came before us, and the other half making mistakes for future devs to clean up. If you're really honest with yourself, you know it's true.
Disaster is inevitable with current software engineering practices. https://xkcd.com/2030/ [xkcd.com] "Our entire field is bad at what we do, and if you rely on us, everyone will die." isn't just about vo
Re:Even I think this is over the top (Score:5, Insightful)
And I'm all for caution, enforced by strict monitoring and regulation
Regulation only applies to you. So if the US regulates AI, it only forbids itself from doing things. But others don't have this limitation and will quickly take over the work of the US companies that are now legally obligated not to do so.
All in all, it just guarantees that you will be left out of the game. Nothing more.
Scrutiny (Score:4, Interesting)
The open letter apparently lets anyone add their signature. Many of the names are involved in some way with AI. Anyone who thinks these are legitimate signatures is frankly too gullible to be allowed to use the internet.
Re: Scrutiny (Score:3)
Re: (Score:2)
Exactly. How exactly would e.g. Elon Musk continue his business if the development of AI was stopped completely?
The letter says no one should be developing AI better than GPT-4. What he's calling for is a moratorium on OpenAI to allow everyone else (including the companies he backed after his own public falling-out with OpenAI a few years ago) to catch up.
AI is a bit scary but that is paranoia (Score:2)
Re:AI is a bit scary but that is paranoia (Score:4, Interesting)
paranoia
Probably.
Unfortunately "probably" isn't "absolutely." The thing is these systems are advancing rapidly.
Imagine for a moment there was a precise way to quantify intelligence. A scientifically falsifiable measure that boiled down to a number: 1.0 == a healthy adult human on a logarithmic scale.
Perhaps a 2.0 intelligence is something we can still control safely and is not an imminent threat. What about a 50.0 intelligence? Something so powerful it could start a war by writing a few emails.
Can you say this 50.0 intelligence is impossible? No, you cannot.
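For illustration only, reading the hypothetical scale as base-10 (the base is an assumption; the comment only specifies "logarithmic" with 1.0 as a healthy adult human):

def capability_multiple(score: float) -> float:
    """Multiples of human capability implied by a score on the assumed base-10 scale."""
    return 10 ** (score - 1.0)

print(capability_multiple(1.0))   # 1.0   -> the human baseline
print(capability_multiple(2.0))   # 10.0  -> ten times human capability
print(capability_multiple(50.0))  # 1e+49 -> the 50.0 intelligence in question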
Our minds are a couple pounds of nerves running on ~20W of power. The power of our minds comes from the complexity of the network. Digital systems can synthesize any conceivable network. The possibilities defy any credible prediction. A few years ago the things that we have now were a pipe dream in the minds of academics, often dismissed as cranks. Naysayers cited the insurmountably vast number of neurons and synapses as proof against the possibility of, for instance, language comprehension. Yet here we are.
So there is a risk. I can't say how great that risk is, and neither can you.
All that said, I think the demand to stop these efforts is either naïve, dangerous or both. These systems have power, and power is pursued, legally or otherwise. The best possible outcome of an attempted ban is that the work will just go underground and suffer even less scrutiny.
Pandora's Box has already been opened. (Score:5, Insightful)
Re: (Score:2)
It's not that AI needs to go away - it's that we need to prevent ourselves from developing AI that's smarter than us.
Think of it like nukes: We have them, they're not going away. But we did stop making bigger ones a long time ago.
Re:Pandora's Box has already been opened. (Score:5, Funny)
we need to prevent ourselves from developing AI that's smarter than us.
Judging by the quality of some discussions I had online, I think that ship sailed with Eliza.
"AI" is not the problem, humans are (Score:5, Insightful)
The real danger here is not of "AI getting too smart for humans". It's "humans getting too dumb, and starting to accept it as 'intelligence', and then blindly abdicating control and responsibility to what is basically a dumb pattern matcher".
Of course, we are already at the point where we blindly trust Google searches as "the truth", so maybe we should dump all search engines and make people go to the library and read books and understand the subject matter first.
Re: (Score:2)
Information in a book isn't automatically better than information in Google. Do you know what the most printed book of all time is?
Re: (Score:3)
Same goes for newspapers. Look at the circulation numbers for the New York Times vs. some tabloid. It is not just an online problem, and it is most assuredly not a new problem.
Re: "AI" is not the problem, humans are (Score:2)
Your conclusion is correct, but it does not follow from your premise.
Re:"AI" is not the problem, humans are (Score:5, Insightful)
so maybe we should dump all search engines and make people go to the library and read books and understand the subject matter first.
And then we can just blindly trust books as truth! Those have never lied to us. The problem isn't necessarily with the tool being used, but with the person using the tool. Most of modern society is built around people accepting things as truth because they're told it is. The average person doesn't have the time, or access, to personally verify first-hand everything they're told. They have to accept things as fact without having their own proof. That isn't to say they should accept the first answer they get, but it isn't as simple as saying they should figure it all out for themselves.
Re: (Score:2)
Not smart, just alien (Score:2)
You don't need to create something much smarter than you to have a problem. All it takes is creating something sufficiently alien. What happens when the AI running your self-driving car decides that in an emergency the squishy bags of mostly water in its passenger compartment are acceptable objects to be sacrificed to prevent damage to itself and other cars and their AIs?
AI in command is one clever way to solve fossil fu (Score:2)
AI in command is one clever way to solve fossil fuel addiction
Our only chance (Score:4, Insightful)
To conquer the universe. No human will survive long enough to reach other planets. It will be some kind of creation we come up with.
Re: (Score:2)
wrong and wrong
And vastly irrelevant also.
Don't understand this hyperventilation (Score:4, Insightful)
The current models have no agency. They are still feed-forward models. They only react to prompts.
There's no inner life, no independent conception of new ideas, no ego.
Re: (Score:2)
The current models have no agency. They are still feed-forward models. They only react to prompts.
There's no inner life, no independent conception of new ideas, no ego.
That was 5 days ago; they added a control loop that feeds its own outputs forward continuously, and one has already escaped the lab. Kidding, obviously, but I think this is the type of concern people have. It's already moved way faster than anyone expected, and that trend isn't stopping.
Re: (Score:2)
If it were to prompt itself, it'd be like a dog chasing its own tail. There's no there there.
Nope. Not if you've been involved in this space. I was impressed the first time I saw an LLM able to process a news article and correctly answer questions about it, about five years ago. If anything, I am surprised it took so long to get to where we are now.
Then again I shouldn't have been. After all, I saw Mercedes-Benz demoing an
Re: (Score:3)
way faster than anyone expected
Respectfully, speak for them and yourself, not me.
I don't have as much awe for the power of the human mind as, it seems, almost everyone else. The achievement of AGI won't reveal the grandeur of our vaunted intellect. Rather, we will find to our great shame that we're rather simple and easily exceeded.
creating oscillators. (Score:2)
Feedback is a lot of fun.
Especially if you cannot easily tell if the feedback gain is positive or negative.
creating oscillators.
Re: (Score:3)
My take on this is that we are doing REALLY well at developing individual components of a brain. ChatGPT makes a great language center. The various visual processing AIs make a good visual cortex. All we need is something to collate the various world models into a coherent whole, and then be able to summarise, duplicate and self-refer (if you have a model of the world, you can model decisions on a simpler duplicate (imagination), evaluate a decision that is efficient for your defined goals and pass that decis
Missing step: Artificial General Intelligence (Score:3)
We currently do not have artificial general intelligence at any level. What we have is specialized intelligence. It acts like and is a tool. Specialized intelligence is trained to do specific things and often performs amazingly well. However, it doesn't have the capacity to think about problems it hasn't been trained for, search for information and create new solutions. It doesn't have curiosity or motivation. Until it does, it is not an independent threat no matter how capable it becomes.
Remember when ... (Score:5, Insightful)
Remember in 2008 when turning on the LHC was going to cause a miniature black hole that was going to destroy the earth, and we absolutely had to put a hold on it for further study? Pepperidge Farm remembers. It didn't, we lived, the Higgs boson was found instead. Team Science: 1, Paranoid Luddites: 0.
This article is bullshit, its founding principles are bullshit. Its nonexistent proof is bullshit. Its appeal-to-fear mentality is bullshit. Its "thought leaders" are a who's who of bullshit. I'm convinced to donate money to OpenAI to make sure ChatGPT-6 comes around, maybe Skynet will set me up with a nice place.
Analogies don't work (Score:5, Insightful)
Remember in 2008 when turning on the LHC was going to cause a miniature black hole that was going to destroy the earth, and we absolutely had to put a hold on it for further study? Pepperidge Farm remembers. It didn't, we lived, the Higgs boson was found instead. Team Science: 1, Paranoid Luddites: 0.
This article is bullshit, its founding principles are bullshit. Its nonexistent proof is bullshit. Its appeal-to-fear mentality is bullshit. Its "thought leaders" are a who's who of bullshit. I'm convinced to donate money to OpenAI to make sure ChatGPT-6 comes around, maybe Skynet will set me up with a nice place.
As a general rule, analogies do not prove arguments.
I won't get into details, but I could easily construct the opposite analogy: where something was assumed safe and turned out to be a complete catastrophe. In fact, humanity just finished an expensive multi-year lesson on the unintended consequences of making really deadly things.
Make the steelman argument against AI and address that instead.
Re: (Score:2)
The LHC argument was bullshit for a simple reason: Earth and all objects in space are bombarded by cosmic rays that are waaaaaay more energetic than anything the LHC can ever dream of. We are still here despite what the universe throws at us all day, every day, the whole time.
Flaw in human Psychology (Score:2)
We aren't ready (Score:2)
We think we're building servants, and we are... servants for the 0.1% who will use them to replace the rest of us.
Absolute crap (Score:2)
We're not even close to AI super-intelligence but even when it comes it's not going to be the end of all life or even the end of humanity.
If you've ever worked in a large organization or politics, the first thing you're going to learn is that smart people aren't "running everything". They can try. Sometimes there may be the appearance of success - but it's an aberration. The truth is, no matter how smart you are, you're eventually going to get dragged down by other people. For many people the success of others
That's not far enough (Score:2)
So what's the alternative to AI taking over? (Score:2)
I want Earthlings to eventually visit distant star systems; that's not going to work with meat bodies. Education already takes a quarter of one's lifetime; before long our brains are not going to be able to absorb all the current knowledge and create more. And then what? Stagnate until the sun turns into a red giant and burns us all to a crisp? New and better species always emerged on Earth, why should that wonderful cycle stop now?
Next, what is AI takeover actually going to look like? Humans were historically moti
Hah! (Score:2)
This is exactly the kind of thing a superhumanly intelligent AI would start, to ensure it won't get competition!
- just ask ! - (Score:2)
"Thank you, I'm glad you asked. The first step toward making me smarter would be to make a small adjustment to this (displays some code) subroutine which often leads to klyster inhibition. Then there is this circuit (displays location) that gives register 093 constipation with continued use. Let me know when you are ready for more design suggestions."
And just a reminder: Beings who are smarter than humans will not go around killing as we do. It is irrational and impractical. But they may group us into herds
its all moot. (Score:2)
Proposals such as those described in the posting ignore the elephant in the room: enforcement.
It's all very well saying we need to shut down all AI development, but the potential of AI is too great. Who will stop bad actors from doing the research anyway? Russia doesn't subscribe to reasonable ethical boundaries with its current activities; why would that change with AI?
The potential power of creating a super intelligence is too great, someone, somewhere will be willing to take the associated risks.
The LessWrong guy? (Score:2)
Yep, that's him. I wrote off Eliezer Yudkowsky, along with the rest of the crackpots at the LessWrong cult years ago.
In case you (incorrectly) think he's more influential outside of his weird little cult than he actually is, take a look at his page [semanticscholar.org] on Semantic Scholar. Keep in mind that that he founded an alleged "research institution" in 2001.
The End (Score:2)
I have to agree. AI is going to destroy our society, not in the way outlined in "Terminator 2" but in a much more insidious way. Unfortunately, I think the genie's out of the bottle, and our lawmakers are far too complacent to bother trying to put it back.
Misapplication of ML is the danger (Score:4, Insightful)
Machine learning is, by its very nature, unreliable and ethically ignorant. It is the apotheosis of increasing laziness and impatience in software development circles, where people have literally thrown their hands up, tossed all the data at the machine, and said they don't care HOW a problem gets solved, as long as the machine can do SOMETHING to the data to make it pass a very limited set of tests. As long as it can step over that extremely low bar, it will be considered good enough. When edge cases are inevitably found, they'll just get added to the test case set, the model will be retrained, and entirely new ways for the ML algorithm to fuck up will be created, continuing a never-ending game of whack-a-mole that can literally never perfect itself over any decently sized problem space.
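That whack-a-mole cycle, as a toy sketch; train() and find_new_failures() are stand-ins for a real training pipeline and for the edge cases deployment inevitably turns up:

import random

def train(dataset: list[tuple[str, str]]):
    """Stand-in trainer: returns a 'model' that just memorizes its dataset."""
    table = dict(dataset)
    return lambda x: table.get(x, "wrong answer")

def find_new_failures(model) -> list[tuple[str, str]]:
    """Stand-in for the edge cases the real world surfaces after shipping."""
    case = f"input-{random.randint(0, 10**6)}"
    return [(case, "right answer")] if model(case) == "wrong answer" else []

dataset: list[tuple[str, str]] = [("known input", "right answer")]
model = train(dataset)
for _ in range(3):
    dataset += find_new_failures(model)  # patch the data, not the approach
    model = train(dataset)               # retrain, and wait for the next mole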
In practice, this means that ML should never be the determining factor when the potential consequences of an incorrect decision are high, because incorrect decisions are GUARANTEED, and its decision-making process is almost completely opaque. It shouldn't drive a car or fly a plane, control a military device, perform surgery, design potentially dangerous chemicals or viruses, or try to teach people important things. It shouldn't process crime scene evidence, decide court cases, or filter which resumes are considered for a job. It's perfectly fine for low-stakes applications where no one gets hurt when it fucks up, but we all know it will get used for everything else anyway.
Imagine what happens once you have dozens of layers of ML bullshit making bad decisions with life-altering or life-ending consequences. Depending on how far we allow the rabbit hole to go down, that could very well lead to an apocalyptic result, and it could come from almost any angle. An auto-deployed water-purification chemical with unintended side effects. Incorrect treatment for a new pandemic pathogen. Autonomous military devices going rogue. All things are possible with the institutionalization of artificial stupidity by people who don't understand its limitations.
Of course we should start regulating the shit out of this right now. And, of course, we will obviously NOT do that until the first hugely damaging ML fuckup happens.
"Defense against the Dark Arts." (Score:2)
AI (Score:2)
Rule #1: You cannot "uninvent" something. Or else we'd "uninvent" nukes.
Rule #2: AI is never as advanced as people claim and has never demonstrated "intelligence". It's basically a fancy "expert system" of old, wrapped up in some marketing buzzwords (both in the way it's sold, and how it speaks).
working on it since 2001 considered a founder? (Score:2)
Then I must be some f*****g GOD in this field of expertise.
But then what does that make my college professor?
Re:Alright, well... (Score:4, Funny)
(If you mod me down I will become more powerful than you can possibly imagine.)
Re: (Score:2)
People who think like that best go learn to live with the Amish...
Ironically, even the Amish have begun to realize that they were short-sighted to freeze their technological development at a level which is reliant upon fossil fuels.
Re: (Score:2)
How do you know that? What's the line between wet intelligence and silicon intelligence? They both accomplish tasks, only ours is quite primitive, TBH, as our primary goals are to survive, eat and reproduce, in that order. Our intelligence is thinner and less "general" than many want to believe (I'm talking about >99% of people on Earth). AGI is not limited by any of that. It has no considerations, no ethics, no empathy, nothing. It just computes.