AI Leaders Urge Labs To Halt Training Models More Powerful Than GPT-4 (bloomberg.com)
Artificial intelligence experts, industry leaders and researchers are calling on AI developers to hit the pause button on training any models more powerful than the latest iteration behind OpenAI's ChatGPT. From a report: More than 1,100 people in the industry signed a petition calling for labs to stop training powerful AI systems for at least six months to allow for the development of shared safety protocols. Prominent figures in the tech community, including Elon Musk and Apple co-founder Steve Wozniak, were listed among the signatories, although their participation could not be immediately verified. "Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one -- not even their creators -- can understand, predict, or reliably control," said an open letter published on the Future of Life Institute website. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
...that happened... (Score:5, Insightful)
Re: (Score:3)
They didn't sign it; the letter is effectively fake. Many people listed on it have since said they had never heard of the letter, never signed it, and disagree with it.
Re: (Score:2)
You can't expect people to notice and respond to every random person who misattributes something to them. The list initially claimed signatures from Xi Jinping (yes, the president of China), Sam Altman (CEO of OpenAI, essentially the one this letter aims to kneecap), and Yann LeCun (whose denial can be read here: https://twitter.com/ylecun/sta... [twitter.com]).
When I looked at the letter as the news started circulating, signing it was just a text form where you typed your name in. There was no verification or validation,
Well intentioned, but unrealistic petition (Score:5, Insightful)
King Canute was unsuccessful in holding back the sea. Similarly, the power and applications of AI are increasing at a rate that cannot be halted. The result is going to be a mix of the positive and negative. At best, we can try to create mechanisms that limit the negatives, though there will be limits to how successful we can be with this. AI is going to change the world out of all recognition over the next decade. Mostly, we can just pray that the world that emerges will be one that is positive for the majority of us.
Re:Well intentioned, but unrealistic petition (Score:5, Informative)
In the story, Canute/Knut/c'nut/insert spelling here heard people saying he was so powerful he could turn back the sea. He knew this was ludicrous, and showed them so. He wasn't actually trying to turn back the tide.
Re: (Score:3)
I do wonder if current AI's usefulness is overstated. We heard all this when deepfake videos first appeared, but the predicted effect on politics and on people's ability to determine whether something is true or not hasn't really happened.
ChatGPT is a bit harder to spot, simply because the output is just text.
Re: (Score:2)
So many people are already so bad at determining if something is true or not that it would be pretty hard to tease out the effect of AI and deepfakes on their gullibility.
Re: (Score:2)
Of course it's overstated. That doesn't mean it's not also understated. I will guarantee that it's both. Many of the claims are pure balloon juice, but it will be put to uses nobody has thought of...yet.
A 6 month moratorium would be a good idea if it could be agreed upon, but this is rather like a modified prisoner's dilemma, with LOTS of prisoners. How many of the signers just want others to slow up so that they can catch up to them? I doubt that the number is zero even for that selected group. As fo
Re: (Score:2)
I believe that's because the deepfake stuff wasn't quite ready for primetime then, nor was the tech as available to the masses. It is getting to that stage now.
You combine that with the AI that is rapidly developing, and that combination is getting to the point where it could re
Yeah I don't get what's so dangerous (Score:3)
I don't get why that can have such a dangerous effect.
We already have almost unlimited capacity to write plausible sounding unmitigated bullshit.
This is a minor uptick at worst.
Re: (Score:2)
The real threat will be to the rich. When all us normal people lose our jobs, we'll demand the rich take care of us at a certain acceptable level. When they say no, we will very likely put ourselves back in the dark ages. Considering there are so many normal people and a very small portion of rich, I'm going to say us normal people will pull the rich down and while doing that, end civilization.
Only time will tell. If drones advance quickly enough, the rich might be able to just kill us all off by walling of
Re: (Score:3)
I do wonder if current AI's usefulness is overstated
Wonder no longer! It is absolutely overstated. [techcrunch.com] We've been down this road before, but everyone seems to forget about previous AI hype cycles. We get a "this time, it's really different!" every single time. Things seem crazy right now, but only because expectations are still rising. We're a few years, I suspect, from the "trough of disillusionment".
I know that's not a terribly compelling argument, but let's think about it in more practical terms. Have you ever used a site called fiverr? It's like a mec
Musk wants control of AI (Score:4, Insightful)
Musk has been anti-AI for years and believes he is the "Trump" of AI, having effectively stated "I alone can fix it". In reality, I believe he sees it as a huge potential revenue source and wants to control it. He may also believe it could be the catalyst that actually lets full self-driving become a reality, which would naturally induce him to want to direct its development, control it, scare others away from it, own it, and get ahead of any government regulation or control.
Re: (Score:3)
The only thing limiting their reach is the physical means of getting there. That's the next thing to fix. Once that's done, your enemies and segregation
Re: (Score:2)
Not just an investor, a co-founder. The parent is confused. Elmo's "warning" is pretty damn cynical.
Hypocritical (Score:5, Interesting)
no one -- not even their creators -- can understand, predict, or reliably control
Impressive as they are, these algorithms are predictive text engines. Claiming their creators cannot reliably control them, presumably because they do not know what uses they will be put to, is more than a little hypocritical coming from people who have disrupted industries by coming up with unforeseen uses of technology. It was OK when they did it, but not now, when they are the ones likely to be disrupted?
Re: (Score:2)
The goal isn't control yet; often that comes later. But it feels valid both to set that aside for now and to worry about it.
We're trying to mimic the magic that happens in a biological human brain, without understanding how it all works. So we make a box with a few rules inside and ask it to get better.
It's not easy to later on split that box apart into functional units or something. We can't say "oh, that branch is for ..., while the other branch is ...". Just that it works all toge
When AI can code, it can code exploits (Score:4, Interesting)
When AI can code, it can code exploits. That's my immediate conclusion with this. A country that might not have coding expertise, but can write a chatGPT prompt like, "Write me some C code that will send ../../../passwd root to port 80" will now be in the running to cause havoc on the internet. Yes, that's an overly simplified example of something that's not a common exploit anymore, but the point is new exploits will be easier to proof of concept.
I don't think it will be at a stage where the AI will launch exploits just yet, unless it's told to.
Re:When AI can code, it can code exploits (Score:5, Insightful)
Maybe that same AI can scan the mountains of open-source code that power our digital infrastructure and find exploitable code, and fix it for us.
The fact that this can be weaponized does not eliminate the fact that it is a powerful and useful tool.
Re: When AI can code, it can code exploits (Score:4, Insightful)
Well, keeping it out of the hands of regular people, but leaving it open to intelligence services including foreign ones, doesn't seem like a solution.
Re: (Score:2, Insightful)
You have nothing to worry about. AI of the type everyone is freaking out about can't actually code. Lacking the capacity for anything like understanding or analysis, that simply isn't something these programs can do in any meaningful way. Just yesterday, for example, I asked ChatGPT to write a simple function that computes the perimeter of a trapezoid. This is what it produced (comments removed):
def calculate_perimeter(base1, base2, height, side1, side2):
    perimeter = base1 + base2 + side1 + side2
    return perimeter
Re: (Score:2)
I just did this.
Prompt: (yes, spelling errors were included)
Write a python gunvtion to calculate the perimeter of a trapeozoid
Response:
To calculate the perimeter of a trapezoid, you need to know the lengths of all four sides. Here's a Python function that takes 4 parameters (base1, base2, side1, side2) and calculates the perimeter of a trapezoid:
def trapezoid_perimeter(base1, base2, side1, side2):
    perimeter = base1 + base2 + side1 + side2
    return perimeter
Re: (Score:2)
Good for you, I guess? This is the complete output I got yesterday, with the prompt "write a function that computes the perimeter of a trapezoid":
def calculate_perimeter(base1, base2, height, side1, side2):
    """Calculate the perimeter of a trapezoid.
    Args:
        base1 (float): The length of the first base.
        base2 (float): The length of the second base.
        height (float): The height of the trapezoid.
        side1 (float): The length of the first slant side.
        side2 (float): The length of the second slant side.
    """
    perimeter = base1 + base2 + side1 + side2
    return perimeter
Re: (Score:3)
chatGPT is a tool. If you don’t make any effort to learn how it works and how to use it, you will, most likely, not get anything terribly useful. That’s your fault.
I run a website that has a support page (a manual, basically) and a FAQ page. I wrote a small chatGPT program where I feed the FAQ questions as json into the chatGPT API and used some python libraries to scan the manual page in as well. I added a bunch of guidelines for how to modulate the response (“be pithy, be polite, don
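For the curious, a minimal sketch of that kind of setup, assuming the 2023-era (pre-1.0) openai Python package; the file names, prompt wording, and example question are made up for illustration:

    import json
    import openai  # pip install openai (the 0.x-era API)

    openai.api_key = "sk-..."  # your API key here

    # Hypothetical files: FAQ entries as JSON, manual scanned to plain text
    with open("faq.json") as f:
        faq = json.load(f)
    manual_text = open("manual.txt").read()

    system_prompt = (
        "You are a support assistant. Be pithy and polite. "
        "Answer only from the FAQ and manual below.\n\n"
        "FAQ: " + json.dumps(faq) + "\n\nManual: " + manual_text
    )

    def answer(question):
        # One-shot call; no conversation history is kept
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    print(answer("How do I reset my password?"))

The tone guidelines go in the system message; stuffing the whole manual into the prompt only works while it fits in the model's context window.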
Yeah, no... (Score:2, Insightful)
The time for pressing the brake pedal has long passed - assuming that controls would ever have been effective. The cat's out of the bag, the gold-rush fever has taken hold, the horse is out of the barn - pick your favourite appropriate metaphor.
Now is the time to start planning mitigation, before non-savvy people - and even some of the savvy ones - start taking advice and direction from AI. Or - and here's the chilling part - putting it in charge of key infrastructure and perhaps even segments of financial
Re: (Score:2)
The cat's out of the bag, the gold-rush fever has taken hold, the horse is out of the barn - pick your favourite appropriate metaphor.
The Slash has gone dotty.
I resemble that remark!
Re: (Score:2)
That edge WILL cut throats and sever limbs, and we need to plan triage and damage control.
Cutting throats and severing limbs we can deal with. The risk here is extinction.
I'm not saying you're wrong in your assertion that the gold rush fever has taken hold and that it's unstoppable. Just pointing out that the risk is far higher than any new technology, ever, including nuclear weapons, and that your risk analogy is too weak.
1999: "IT expert hoards guns and food, moves into underground bunker"
I was young, and it took me a long time to figure out how anyone remotely intelligent could buy that hard into the Y2K doom and gloom, but I eventually figured out there are a LOT of smart idiots out there.
If you want to be that guy, fine. At least help your neighbors if the power goes out for a few days.
Re: (Score:2)
A lot of work went into fixing Y2k bugs prior to that happening. We knew what the problem was and a lot of people put in the hard, tedious work to get it fixed.
Some things still went wrong, but for the most part it was all fine because of all the hard work done beforehand. Had we done nothing, or had we not figured out HOW to fix the problems, things would have been a whole lot worse. Sure, it wouldn't have ended civilization, but it could have done significantly more damage.
This new AI probably isn't THAT threatening in i
AI will not be our destruction. (Score:5, Insightful)
Stupidity will be our undoing. People who don't want to learn or study things and just take whatever they are told without question and then act on it.
"Welcome to Costco, I love you!"
Re: (Score:2)
Stupidity will be our undoing. People who don't want to learn or study things and just take whatever they are told without question and then act on it.
Which is where the problem lies when it comes to AI as it exists today. Experts using these bots as guidance? Fine. Newblets and morons taking everything these bots say as gospel will blindly follow them right to the edge of extinction if allowed. Which is why we need some sort of guidelines. Granted, I don't think the guidelines will do much good when they come up against profit potential for some uber-corp somewhere. Ultimately, greed and stupidity will do us in. Combined, they form the most powerful god
Re: (Score:2)
Watching a 5 min youtube video has replaced learning things from first principles. Am working with kids (OK, 25 year olds - I'm ancient, sue me) who have 6 years of Java experience on paper; and do not understand what is really going on in Inheritance, and why "object assemblies" are better t
It's going to happen. (Score:2)
While I'm all for responsible parties attempting to set up some basic checks against the possibility of run-away emergence taking off in a direction we don't want it to? There will be some rich group of non-compliant assholes somewhere running their own. The singularity may not ever come in the form we "would like," but that doesn't mean emergent behavior can't take off in a way that could lead to, let's just say, "very bad things" for us, or the planet. Especially with how much of our infrastructure we've
Re: (Score:2)
While I'm all for responsible parties attempting to set up some basic checks against the possibility of run-away emergence taking off in a direction we don't want it to?
Don't worry. I've already taken the necessary steps to protect the whole of humanity from the threat of "run-away emergence". You can rest easy.
They lost me when they spouted (Score:3)
I’m all for responsible use of tech. But, AFAIK, chatgpt is basically an internet downloader combined with a cleverly designed randomizer that tosses internet info together and mixes it up just enough to avoid direct plagiarism or copyright infringement. That’s enough to OCCASIONALLY pass the Turing test, but that’s not sufficient to convince me we’re dealing with a consciousness.
Re: (Score:2)
This isn't consciousness. Not yet, and probably not even close. Probably not for centuries.
I agree with your first statement: "This isn't consciousness. Not yet." I'm not sure I agree with your next statement "probably not even close", and I strongly disagree with your statement "Probably not for centuries." This is advancing much faster than most people thought it would. First people believed computers could never beat grandmasters at chess. Then a computer beat the world champion. Then people said, computers may be good at chess, but computers would never beat the best Go players cause tha
Re: (Score:3)
As it currently stands, chatGPT is set up to mashup and regurgitate web content. I just don't see that as a red flag.
Properly aligned? (Score:2)
Now that sounds dangerous.
The White House is already on this (Score:4, Funny)
I understand that Biden has appointed Sarah Connor as our watchdog over self-aware AIs.
What are they afraid of? (Score:2)
Large language models as currently implemented can't learn online, have a completely deterministic runtime with no internal dialogue except their output tokens, which disappear after each session.
Sure, theoretically someone could be working on something far more advanced, but just adding more context and parameters to LLMs isn't going to allow them to escape on the internet and launch nukes.
Re: What are they afraid of? (Score:4, Interesting)
Large language models as currently implemented can't learn online, have a completely deterministic runtime with no internal dialogue
Well, that's the thing: What happens when you create a feedback loop? Have the model ask itself questions, and feed the results back into the model?
The current crop may not be up to this, and the type of questions and feedback needs research, but this has the potential to produce a dynamic system that is effectively capable of learning and change.
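A toy sketch of such a loop, with a placeholder generate() callable standing in for whatever model you would actually use (the prompt phrasing is invented for illustration):

    def self_dialogue(generate, seed_question, rounds=5):
        """Feed a model's answers back to it as new questions.

        generate: any callable mapping a prompt string to a response
        string (a stand-in for a real LLM call).
        """
        transcript = []
        prompt = seed_question
        for _ in range(rounds):
            answer = generate("Answer this: " + prompt)
            # Have the model interrogate its own answer
            prompt = generate("Ask one probing question about: " + answer)
            transcript.append((answer, prompt))
        return transcript

    # Trivial stand-in, just to show the plumbing:
    print(self_dialogue(lambda p: p.upper(), "Why pause AI training?"))

Whether anything interesting emerges depends entirely on the model; the loop itself is the easy part.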
Re: (Score:2)
eventually the relationships between the numbers and their operations would be generalized, by just a large language model all by itself with enough training.
That's extremely unlikely.
Re: (Score:3, Insightful)
The current models are NOT up to that. But I'm not sure that a small increment couldn't change that. They need to have their basis not in language, but in physical reality (or something that recognizably closely simulates it). This may be a small change in the software and a larger change in the training. And somebody could be doing it right now. There's no way we would know.
That said, a really intelligent AI, or even AGI, wouldn't automatically turn into a runaway cascade. The problem is that there ar
Re: (Score:3)
Large language models as currently implemented can't learn online, have a completely deterministic runtime with no internal dialogue
Well, that's the thing: What happens when you create a feedback loop? Have the model ask itself questions, and feed the results back into the model?
The model gets over trained and the quality of responses goes way down.
Re: (Score:2)
What will happen? At best, nothing. Though it's far more likely that the model will rapidly degrade.
Let's look at something simpler, so that you can really get a sense of the problem. Starting with a Markov chain text generator, train it on some text until you start getting decent output. Now, try training a second model only on the output from the first. What does the quality look like after you've trained the second model on a similar amount of text? What will happen if you train a third model on t
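A rough sketch of that experiment (word-level, order-2; the input file name is a placeholder):

    import random
    from collections import defaultdict

    def train(words, order=2):
        # Map each n-gram to the words observed right after it
        model = defaultdict(list)
        for i in range(len(words) - order):
            model[tuple(words[i:i + order])].append(words[i + order])
        return model

    def generate(model, length=10000, order=2):
        out = list(random.choice(list(model.keys())))
        for _ in range(length):
            successors = model.get(tuple(out[-order:]))
            if not successors:  # dead end: jump to a random state
                successors = model[random.choice(list(model.keys()))]
            out.append(random.choice(successors))
        return out

    corpus = open("some_text.txt").read().split()
    gen1 = generate(train(corpus))  # model 1: trained on real text
    gen2 = generate(train(gen1))    # model 2: trained on model 1's output
    gen3 = generate(train(gen2))    # model 3: watch the quality decay

Each generation narrows toward whatever the previous model happened to emit, which is the degradation the parent is describing.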
Re: (Score:2)
What are they afraid of? There are two unrelated issues:
First is the concern that AI will become "too intelligent" and disrupt humanity. This is, of course, absurd. Even GPT-4 is a glorified Clever Hans, mindlessly regurgitating and paraphrasing crap that it reads.
The other concern is that AI will either promote or suppress unpopular speech, depending on which side voices the concerns. We're already seeing guardrails on GPT that prevent it from voicing anything negative - particularly on topics favored by i
I say make them bigger! (emergent behaviors) (Score:5, Interesting)
The larger models have been exhibiting interesting, unexpected, "emergent" behaviors.
Example: a Linux "virtual" "virtual machine" (a paraphrase of the prompt is sketched after the links below):
https://www.engraved.blog/buil... [engraved.blog]
Links of Note:
https://www.quantamagazine.org... [quantamagazine.org]
https://www.jasonwei.net/blog/... [jasonwei.net]
https://openreview.net/forum?i... [openreview.net]
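The trick in the engraved.blog post boils down to a prompt along these lines (paraphrased from memory, not the post's exact wording):

    I want you to act as a Linux terminal. I will type commands and you
    will reply with what the terminal should show. Reply only with the
    terminal output inside one unique code block, and nothing else. Do
    not write explanations. My first command is pwd.

The model then plays along, fabricating a plausible filesystem, command output, and even the results of running programs, all from text prediction alone.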
21st century Luddites (Score:2)
At the moment, AI isn't quite sophisticated enough to do the things people say they fear. But it has the potential to eliminate lots of jobs currently held by humans. Moreover, it has the potential to kill off certain business revenue streams by eliminating monetization of status in favor of pure efficiency. Take, for example, air travel. AI could easily create not just optimal flight routes and schedules but it would also tell everyone that the current boarding process is grossly inefficient. It
Re: (Score:2)
I wouldn't say that we'll see all access to AI cut off from the plebes. We'll all probably be assigned AI "therapists" or "friends" depending on which marketing moron gets ahold of the concept, where we are encouraged to share *EVERYTHING* with them. Those therapist/friend bots will report back to the mothership, get minor tweaks and updates, and notify the authorities should our thoughts ever stray from "standard, non-deviant behavior patterns." For our own good, of course. And the ultimate goal of our ent
Re: (Score:2)
AI could easily create not just optimal flight routes and schedules
No, it can't. It's not magic.
The prisoner's dilemma (Score:4, Interesting)
AI Leaders Urge Labs To Halt Training Models More Powerful Than GPT-4
And this will lead to... exactly bupkis.
Let your imagination wander for a moment, and consider the impact that this announcement has on a meeting in the US military: do they decide to politely stop their research on AI applications?
How about a non-US military? Consider that meeting, know that they imagine the aforementioned US meeting. Do you think the non-US military will abide by the moratorium?
Now consider the several dozen startup companies working to adapt Chat-GPT to various use cases. Each has a stable of engineers diligently working on their application... will any of these voluntarily stop working for 6 months while incurring startup costs?
Consider Microsoft and Google, both racing to incorporate Chat-GPT into their products in a desperate attempt to stay relevant. Both are dying dinosaurs, both will take a long time to be eclipsed by more modern companies, but either might extend their corporate lifetime by incorporating AI. (I say *might* because it depends on what they implement and how - lots of people predict how awful a "Clippy" version of search would be, but true innovation sometimes happens.)
Consider researchers and professors. Will any of them put off publishing their next paper?
Essentially, this is an anonymous version of the prisoner's dilemma. Everyone everywhere will imagine what other groups will do, suspect that those groups are getting a jump on whatever AI aspect they're currently working on, and conclude that a) they need to continue in order to remain competitive, or b) if the other group stops, they can get a jump on them.
Is there anyone, anywhere, that would abide a moratorium?
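To make the dilemma concrete, a toy payoff table with made-up numbers (nothing here is measured; it only illustrates the structure):

    # Payoffs (you, rival) under assumed, purely illustrative values:
    # both pause: shared safety benefit; both race: status quo;
    # pause while the rival races: you fall behind.
    payoffs = {
        ("pause", "pause"): (2, 2),
        ("pause", "race"):  (0, 3),
        ("race",  "pause"): (3, 0),
        ("race",  "race"):  (1, 1),
    }

    for rival in ("pause", "race"):
        best = max(("pause", "race"), key=lambda me: payoffs[(me, rival)][0])
        print(f"If the rival chooses {rival}, your best reply is {best}")

    # Racing dominates either way, so everyone races. That's the dilemma.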
About 12 years ago I switched job focus to AI, and have been doing AI research ever since. I make a distinction between research and application, where implementing an application for Chat-GPT is an aspect of engineering, and not research. (I'm currently trying to make a program that counts/identifies the number of colors in an image - the number a human would say when presented with a frame from the Simpsons, 7 for instance, and not the count of RGB colors used, which is typically several hundreds of thousands. I do a lot of reading into brain physiology and human psychology as background for this.)
Early on I had a crisis of conscience about the bad results of strong AI. All the cautionary tales about AI are fictions, and I get that, but I can draw a direct line from where we are to a couple of fundamental dystopias using "best intentions" each step of the way(*).
I think everyone who works in AI and thinks deeply about the ramifications comes to the same conclusions and has to grapple with their conscience. Non-experts do this as well - Stephen Hawking did, so did Bill Gates, and now Elon Musk.
And yet... despite the dystopian conclusions, everyone continues working on AI.
I decided that it wouldn't make any difference if I worked on AI or not, because there are so many others doing exactly the same thing. I imagined the engineers at Google and thought about whether they would have any qualms about it. The software industry has people who make all sorts of bad (in the sense of evil) software in all sorts of ways, and any who refuse on philosophical grounds can be easily replaced by someone who won't. Ads, malware, spam, tracking, privacy intrusion, facial recognition... the list goes on.
AI research is something I enjoy, there's no upside to avoiding it, so I might as well continue.
Again, it's the prisoner's dilemma.
(I would enjoy reading other philosophical viewpoints people have on this, because I'm still a bit uncomfortable with the decision, but knowing this website and the current state of the 'net I expect a lot of ad-hominem attacks. Never talk about yourself in a post - it only opens you up to scathing criticism.)
(*) One obvious one: full self driving would eliminate about 25 million jobs in
Re: (Score:2)
(I'm currently trying to make a program that counts/identifies the number of colors in an image - the number a human would say when presented with a frame from the Simpsons, 7 for instance, and not the count of RGB colors used, which is typically several hundreds of thousands. I do a lot of reading into brain physiology and human psychology as background for this.)
Just curious, why couldn't you just extract the palette and use a clustering algorithm? I'm assuming you've already tried this but the results weren't satisfactory for some reason.
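For reference, the clustering approach described above might look roughly like this, assuming Pillow and scikit-learn (the file name is a placeholder, and picking n_clusters=7 is precisely the part that needs a human):

    import numpy as np
    from PIL import Image
    from sklearn.cluster import KMeans

    img = Image.open("simpsons_frame.png").convert("RGB")
    pixels = np.asarray(img).reshape(-1, 3)

    # Cluster all pixels into 7 colors - but choosing 7 is the hard part
    km = KMeans(n_clusters=7, n_init=10).fit(pixels)
    palette = km.cluster_centers_.astype(int)
    print(palette)  # 7 "representative" RGB colors, for better or worse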
Look at some histograms (Score:2)
(I'm currently trying to make a program that counts/identifies the number of colors in an image - the number a human would say when presented with a frame from the Simpsons, 7 for instance, and not the count of RGB colors used, which is typically several hundreds of thousands. I do a lot of reading into brain physiology and human psychology as background for this.)
Just curious, why couldn't you just extract the palette and use a clustering algorithm? I'm assuming you've already tried this but the results weren't satisfactory for some reason.
Exactly right: the results aren't satisfactory in a number of ways.
To get a feel for how hard this is, write a program to show histograms of the R, G, and B values in an image and imagine the results as curves with added noise.
Or, imagine an image with a background pattern consisting of pixels of two close colors, alternating randomly. The human will easily note that the two background colors work together to constitute the background pattern, and be able to distinguish between the background and any foregr
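A minimal version of the suggested histogram experiment, assuming Pillow and matplotlib (the file name is a placeholder):

    import numpy as np
    from PIL import Image
    import matplotlib.pyplot as plt

    img = np.asarray(Image.open("test_image.png").convert("RGB"))

    for i, color in enumerate("rgb"):
        # 256-bin histogram of one channel; note how noisy the curves are
        counts, _ = np.histogram(img[..., i], bins=256, range=(0, 256))
        plt.plot(counts, color=color, label=color.upper())

    plt.legend()
    plt.xlabel("channel value")
    plt.ylabel("pixel count")
    plt.show()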
Re: (Score:2)
Thanks for that. I imagine that I'll waste quite a bit of time playing with this later.
Do you keep a blog about this or plan to publish? I'd be interested in seeing where this ends up.
Contact me (Score:2)
My E-mail is at the bottom of my journal. Contact me and I can send you some images and histograms and stuff to show the problems.
One image I'm using is from the Darpa shredder challenge, which you can view at the link below. The first step in solving this is to distinguish shreds from background, which means you have to identify the background color, which has led to my current research. Yes, I'm still working on this puzzle 12 years later :-)
Lots of odd artifacts in this image that play hob with clusterin
Pharisees want to keep control of the temple (Score:5, Insightful)
More than 1,100 people in the industry signed a petition calling for labs to stop training powerful AI systems for at least six months to allow for the development of shared safety protocols.
Gee, what a surprise - a group almost certainly composed of current big tech stakeholders, people who have their personal wealth invested in big tech stakeholders, and persons with a very specific globalist social agenda, wants everyone to stop what they are doing so it can make rules that suit its interests.
The cows are already out of the barn. Society will not be served by permitting only the current 'digital nobility' to place yokes around the necks of these beasts. All that will do is what it always does: produce more divide between the haves and have-nots, and more calcification of who falls into which group. FUCK THAT. If you are building a large ML model, do all of us a favor and give this group the middle finger. The rest of the world will adapt like we have adapted to every other technology. The last thing we should want is for something that could be this century's printing press to be restricted to the hands of a chosen few.
too many safeties already (Score:2)
Can't even have the woketard thing write a story set in Chicago race riot eras; we need less snowflakery not more
Hah! Yeah. (Score:5, Insightful)
While the church was trying to control the printing press, people were absconding with bible pages despite the dangers of prosecution and even execution. You will NEVER control this. Cat's out of the bag. All you'll do is get the semi-respectable companies to pay lip service to the restrictions, while those with clearly nefarious motives do business as usual.
Re: (Score:2)
All you'll do is get the semi-respectable companies to pay lip service to the restrictions, while those with clearly nefarious motives do business as usual.
I doubt there are many with nefarious motives. They all think they're doing something good, and that very fact will lead them to ignore this, because they know they're being careful and anyway what they're building is important and worth the risk. Not that their good intentions will do anything at all to prevent them from unleashing disaster. The fact is that we don't know how to be careful, other than simply stopping, which will not happen.
Sign the referendum (Score:2)
at Luddites.com.
Google is winning (Score:2)
Well duh. If your competitor (Google) is winning, of course you want them to stop so you can catch up. Let me know if you find Demis Hassabis (DeepMind/Google AI leader) on the list.
AI leaders? (Score:2)
Musk and Wozniak as AI leaders? Hmm. OK, the first name on the actual petition [futureoflife.org] is Yoshua Bengio. Musk and Wozniak have name recognition, but are not AI leaders.
The petition asks, "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?". My first thought is why this is suddenly a pressing issue even though aut
Which industry? Tech as a whole? Business? (Score:2)
I don't care how venerated Elon Musk is; he is not an engineer. He is the face of some successful companies. They're just drinking their own Kool-Aid here, believing their own image.
Are these people who reliably, and single-handedly, predicted whole industries or catastrophes publicly beforehand? Not that I know of. They just have opinions, and their own goals and expectations to work with. "I don't understand this thing, and people say it might tank the economy... I think you should stop!"
And just
Re: (Score:2)
It's wise to evaluate the potential for danger.
That's antithetical to every aspect of US government in 200 years. We do not look before we leap. We leap, get impaled first by large steel spikes, then are burnt by lava, and lastly eaten by monsters. Then we pass a law banning shoes that may contain air pockets that are marketed as enabling better leaps, the Republican congressman from Kentucky wears the shoes and yells something like "come and take them", the law fails to pass, and we end up banning lava,
Re:Yes, we definitely need to STOP IT IMMEDIATELY (Score:5, Insightful)
The main issue right now is fake AI generated news sites.
We saw fake news sites starting to be a thing back in 2016, with Brexit and the US presidential election. People were making entire fake news sites that they could link to in social media posts, to give them some fake credibility. Real articles were either written by some work-for-hire writer, or simply copied and re-posted directly from legitimate sites. The additional fake articles were then added into the mix.
Now AI can generate an infinite supply of articles on news topics scraped from genuine news sites in real-time. It's not entirely new, but it has dramatically lowered the bar for creating a fake news bubble to trap people in.
Beyond that it's created problems for some organizations, like magazines getting flooded with AI generated crap, and schools getting AI generated homework submitted. AI detection tools are very poor. I can see the argument for a short pause to allow us time to get on top of this.
Then again I do wonder if some of those who signed this are just hoping they can catch up in those six months. ChatGPT was a bit of a shock to the industry.
Re:Yes, we definitely need to STOP IT IMMEDIATELY (Score:5, Insightful)
Blocking AI development because of what some people might do with it doesn't solve anything, or fix the underlying problem that people are idiots who will believe or seek out information that already confirms their beliefs. A lot of the election-cycle misinformation isn't from people who care about the results; it's from people trying to get clicks for ad impressions.
Arguments in favor of banning something because the lowest common denominator won't be able to handle it are stupid. Better ban alcohol because a few irresponsible idiots will get drunk and crash their car into a building. Better ban chips because some irresponsible idiots will eat too many and become so obese that they need medical care and supervision so they don't die. Better ban Slashdot because someone might post an idiotic take on an issue that some other idiot thinks is a great idea.
Someone will find a way to be a screw up. That's not an excuse to restrict everyone else's freedoms just because of what some idiot might do.
Re:Yes, we definitely need to STOP IT IMMEDIATELY (Score:4, Insightful)
The main issue right now is fake AI generated news sites.
I'm not terribly worried about that. There are countless fake news sites with regularly updated content produced by right-wing lunatics now. Our relationship to news has already changed thanks to the internet and it's going to continue to change, with AI or without.
Re: (Score:3)
All the data is from 1-2 years ago. Mostly because it takes so damn long to train. So no, the "AI" isn't going to be scraping the web, OR learning WHAT it scraped on the fly.
Because technology never advances, and never gets new features added. Until it does.
Re:Yes, we definitely need to STOP IT IMMEDIATELY (Score:5, Insightful)
because some idiot somewhere might manage to goad such an AI into praising the H guy, and post the result on Twitter for teh lulz. The horror! Humanity definitely won't survive that! Civilization collapse!
Well, one concern I definitely have is the various troll factories getting hold of these models: Russians, Chinese, 4chan, etc.
For social networks, including /., differentiating human from bot is annoying, but possible. But what happens when someone hooks up a ChatGPT equivalent to a dozen slashdot accounts with the instructions "Write a response to [Story Summary] that argues against US support for Ukraine"? If you start flooding social media with reasonable-sounding arguments, people start getting swayed by their "peers".
Re: (Score:3)
You don't need a chatbot for that ... or even a reasonable argument. There are a ton of people with seemingly limitless amounts of free time that will happily do all that for nothing. Spreading misinformation online to a massive audience apparently takes very little effort [theguardian.com].
Sure, but those people tend to be less than persuasive. They can sway a certain subset of people, but most folks can filter it out.
I'm thinking about a seemingly reasonable person who will respond to you empathetically and make you reevaluate your position. I mean you engaged with me so the bar clearly isn't that high to draw you into an exchange :)
Re: Yes, we definitely need to STOP IT IMMEDIATELY (Score:2)
Heh. Now you have me wondering if this is the end of Web 2.0.
Re: Yes, we definitely need to STOP IT IMMEDIATELY (Score:2)
Besides, to anyone making an argument like that, a call like this reads as a reason to accelerate development, and especially to keep it secret, rather than as a reason to halt anything.
Re: Yes, we definitely need to STOP IT IMMEDIATELY (Score:2)
Maybe GPT-5 will start arguing with Musk that he should learn to behave like an adult. Maybe GPT-6 will result in world peace
Re: (Score:2)
Since when is IQ more important than human life or divinely inspired values?