How Should AI Be Regulated? (nytimes.com)
A New York Times opinion piece argues people in the AI industry "are desperate to be regulated, even if it slows them down. In fact, especially if it slows them down." But how?
What they tell me is obvious to anyone watching. Competition is forcing them to go too fast and cut too many corners. This technology is too important to be left to a race between Microsoft, Google, Meta and a few other firms. But no one company can slow down to a safe pace without risking irrelevancy. That's where the government comes in — or so they hope... [A]fter talking to a lot of people working on these problems and reading through a lot of policy papers imagining solutions, there are a few categories I'd prioritize.
The first is the question — and it is a question — of interpretability. As I said above, it's not clear that interpretability is achievable. But without it, we will be turning more and more of our society over to algorithms we do not understand... The second is security. For all the talk of an A.I. race with China, the easiest way for China — or any country for that matter, or even any hacker collective — to catch up on A.I. is to simply steal the work being done here. Any firm building A.I. systems above a certain scale should be operating with hardened cybersecurity. It's ridiculous to block the export of advanced semiconductors to China but to simply hope that every 26-year-old engineer at OpenAI is following appropriate security measures.
The third is evaluations and audits. This is how models will be evaluated for everything from bias to the ability to scam people to the tendency to replicate themselves across the internet. Right now, the testing done to make sure large models are safe is voluntary, opaque and inconsistent. No best practices have been accepted across the industry, and not nearly enough work has been done to build testing regimes in which the public can have confidence. That needs to change — and fast.
The piece also recommends that AI-design companies "bear at least some liability for what their models do." But what legislation should we see — and what legislation will we see? "One thing regulators shouldn't fear is imperfect rules that slow a young industry," the piece argues.
"For once, much of that industry is desperate for someone to help slow it down."
Supreme Judicial Machine Law (Score:5, Funny)
A well regulated AI, being necessary for the common disinformation society; the rights of the robots shall not be infringed.
Prisoner's dilemma (Score:5, Insightful)
A well regulated AI, being necessary for the common disinformation society; the rights of the robots shall not be infringed.
The problem is that we're in a prisoner's dilemma. Imagine the various US military projects that have sprung up around ChatGPT: do you think any of them will pause development?
Now imagine the military of a different nation - enemy or ally - do you think any of *them* will pause development? And do they believe that the US military will actually pause development, even if the US military says they will?
And as has been pointed out in the OP, there are at least 3 major players rushing to play "catch up" before one of the other giants eats their lunch, and probably 50 or more "minor" players in the form of companies or people with "one good idea" working feverishly to get a demo product running.
Does anyone believe that *any* player in this field will abide a moratorium, knowing that the others probably won't?
We're in the prisoner's dilemma, where everyone would benefit if everyone acted against their individual self-interest, but there's a huge reward for any one player acting selfishly. It's especially bad because any player acting selfishly can simply keep quiet about it, and no one would know.
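That payoff structure is easy to make concrete. Here is a minimal Python sketch with purely illustrative payoffs (not from the article or any real lab): whichever move the rival makes, racing pays the individual player better, even though mutual pausing beats mutual racing.

```python
# Toy payoff matrix for the AI-race prisoner's dilemma.
# Numbers are illustrative; each entry is my payoff for (my_move, their_move).
PAYOFF = {
    ("pause", "pause"): 3,  # everyone slows to a safe pace
    ("pause", "race"):  0,  # I pause, the rival ships first: irrelevancy
    ("race",  "pause"): 5,  # I ship first: huge reward
    ("race",  "race"):  1,  # corner-cutting arms race
}

for their_move in ("pause", "race"):
    best = max(("pause", "race"), key=lambda me: PAYOFF[(me, their_move)])
    print(f"If the rival plays {their_move!r}, my best response is {best!r}")

# Racing strictly dominates, so self-interested players end up at
# (race, race), even though (pause, pause) pays both of them more.
```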
Just about every smart person who thinks deeply about AI comes to the conclusion that it will bring about widespread disaster of various sorts. Elon Musk did, Stephen Hawking did, Bill Gates did, and lots of others do as well.
I work on strong AI (as my day job) and I came to the same conclusion: a wide range of apocalyptic outcomes come from having infinite human-level labor, infinite capacity for prattle, or brain/computer interfaces.
My take was that even if I stopped researching and experimenting, the people at Google would have no such qualms and would continue to push the envelope beyond any sane Rubicon of danger, so I might as well do something that I enjoy and ignore the consequences.
Google certainly hasn't taken a high moral stance against invasive ads, or tracking personal behaviour, or manipulating opinions, or suppressing free speech... and it's likely they won't take the moral stance on AI.
Why should anyone?
Re:Prisoner's dilemma (Score:5, Informative)
Does anyone believe that *any* player in this field will abide a moratorium, knowing that the others probably won't?
Not if they have two functioning brain cells to rub together, lol.
Welcome to the Wild, Wild West of AI, where anything goes and the consequences are still unfathomable.
In 10 years this tech will be used everywhere, especially in places where it shouldn't. Scammers and corporations alike will be humping this as hard as they can.
Soon you simply won't be able to trust live audio/video (FaceTime, Skype, Discord, etc.), and you won't be able to be sure the 'person' on the other end is who you think it is unless you quiz them about some shared secret or bit of trivia.
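For what it's worth, the shared-secret quiz can even be done without speaking the secret aloud, where a cloned voice on a compromised call could capture it. Below is a minimal challenge-response sketch using Python's standard hmac module; the passphrase and the calling flow are made-up illustrations, not a protocol anyone has standardized for this.

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"passphrase agreed on in person"  # never spoken on the call

def make_challenge() -> bytes:
    return secrets.token_bytes(16)  # fresh random nonce per call, foils replays

def respond(challenge: bytes, secret: bytes) -> str:
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, secret: bytes) -> bool:
    return hmac.compare_digest(respond(challenge, secret), response)

challenge = make_challenge()                      # I send this to the caller
answer = respond(challenge, SHARED_SECRET)        # computed on the caller's device
print(verify(challenge, answer, SHARED_SECRET))   # True only with the real secret
```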
It'll get more subtle, more capable, and more adept.
Frankly I wouldn't be surprised if a couple of my coworkers could be replaced by a well-tuned AI instance.
Re:Prisoner's dilemma (Score:4, Informative)
Oh scammers ARE already humping this technology as hard as they can.
There's one going around at the moment involving phishing phone calls that use AI-rendered versions of a loved one's voice (including a horrifying variant where the loved one is claimed to have been kidnapped, with a million-dollar ransom demand, because random people who fall for phishing just happen to have a million dollars lying about).
And we've seen actual state-level disinfo attempts using it. Earlier in the Ukraine war, a deepfaked video of Zelensky surrendering was passed around in an attempt to fool Ukrainian soldiers into surrendering. That one failed pretty spectacularly, as it was an *extremely bad* deepfake that looked obviously altered.
And yeah, the poor old graphic designers and professional writers are already getting termination notices.
I might not be a fully unhinged doomer, but I do have some reservations about instrumental convergence and these things going haywire on a paperclip maximizer task.
Re: (Score:3)
You and the GP don't seem to understand what regulation would do.
Scammers can do this because ChatGPT is available to them. If the regulations allowed them to build ChatGPT, but they had to be more careful about making it publicly available so scammers couldn't prompt it to "write me 100 phishing emails in the persona of IT tech support requesting a password reset" then things wouldn't get so bad so quickly.
Same with the graphical ones, there is a difference between developing them and making them into easy to use open source tools that allow people to generate massive amounts of involuntary pornography.
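For illustration only, the kind of server-side gate the parent describes might look like the sketch below. The deny patterns are hypothetical stand-ins; a real deployment would use trained abuse classifiers, identity verification and rate limits rather than a keyword list.

```python
import re

# Hypothetical deny patterns; real gates use trained classifiers,
# account verification and rate limits, not keyword matching.
DENY_PATTERNS = [
    r"\bphishing\b",
    r"password\s+reset",
    r"\bscam\b",
]

def allowed(prompt: str) -> bool:
    """Refuse prompts that match an obvious-abuse pattern."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in DENY_PATTERNS)

prompt = ("write me 100 phishing emails in the persona of IT tech support "
          "requesting a password reset")
print(allowed(prompt))  # False: refused before it ever reaches the model
```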
Training costs as a function of capability are going batshit with no end in sight. Legislating access to trained models is not going to do jack except offer a momentary reprieve that may well become effectively worthless by the time it can be enacted.
Re:Prisoner's dilemma (Score:4, Informative)
it will bring about widespread disaster of various sort.
Maybe in the near future with AGI in general, but ChatGPT is not it, despite all the hysteria. It's just another baby step, and the biggest impact might be rising unemployment rates. And copyright issues. Which is serious, but not necessarily a catastrophic level of serious. The singularity will have to wait a bit.
GPT is amazing, and a startling plot twist, but let's not go nuts: it is A TEXT GENERATOR. Or speech generator if you want, but in essence just text. Text may be awful, and a bunch of lies, but who is supposed to actually read all that text generated by that exponential capability, or even give a shit? It will probably change how we go about a lot of things in daily life, work and research, but you can't disrupt a civilization merely by spewing out mountains of generated human-flavored text.
Even if you imagine a nightmare scenario where that text were to be made into law or policy, from what I have seen, and with just basic supervision, we wouldn't be fundamentally worse off than with our current assortment of human think tanks and lawmakers. The corruption would concentrate at a much higher level, and this would require careful management, but otherwise it would be just the same job done much more efficiently, and much less contaminated by spurious interests.
This is just fear. Self-driving cars? Regardless of the media circus around every isolated incident involving them, self-driving cars are already vastly safer than human drivers; the presence of human drivers is actually the problem, and if all cars were self-driving, the rate of accidents could probably be reduced to virtually zero. But self-driving cars will just drive us around, not destroy humanity. Again, the only negative impact is in the job market. Good riddance; let's just stop the stupid wars and the frantic working and enjoy life.
The bias in algorithms, the generation of information bubbles... all that simply feeds on existing human behavior and bias, so it's just more of the same, more efficiently. What's not to like? Well, the lack of transparency and accountability of the service providers. We knew about that a while ago. That's relatively easy to regulate, and we have barely started. Should we impose a six-month moratorium on Facebook too now?
Once we fabricate an AGI that has an infinite capability of producing, say, toxins or even weapons, we might be in deep shit, though.
Re: (Score:3)
This is just fear. Self-driving cars? Regardless of the media circus around every isolated incident involving them, self-driving cars are already vastly safer than human drivers; the presence of human drivers is actually the problem, and if all cars were self-driving, the rate of accidents could probably be reduced to virtually zero. But self-driving cars will just drive us around, not destroy humanity. Again, the only negative impact is in the job market. Good riddance; let's just stop the stupid wars and the frantic working and enjoy life.
I guess when you don't count all the times a human stopped the computer from doing something exceptionally stupid, self-driving is vastly safer.
The bias in algorithms, the generation of information bubbles... all that simply feeds on existing human behavior and bias, so it's just more of the same, more efficiently. What's not to like?
Once we fabricate an AGI that has an infinite capability of producing, say, toxins or even weapons, we might be in deep shit, though.
Yeah, it's all fun and games until someone asks the latest and greatest general AI model to hack into Moderna and covertly add a few tens of thousands of codons to the next batch of vaccines, unleashing a virus that kills everyone in the world.
Re: (Score:2)
A well regulated AI, being necessary for the common disinformation society; the rights of the robots shall not be infringed.
Does anyone believe that *any* player in this field will abide a moratorium
Well, that's rather the point of my post above. (However, lesser minds than yours modded it 100% Troll.)
Re: (Score:3)
The simple answer is we should NOT regulate AI development.
Because it will be done, as you say, one way or the other. In the worst case, our nation and friendlies abide by some set of rules while the CCP develops systems that give them all sorts of competitive advantages in secret, until they have such a commanding lead that they just show their cards and say: too darn bad, we have AI and will use it, and you can't catch up.
In the best case, all major powers ignore the rules, and the spooky three-letter-agency types build it
Lots of systems nobody understands (Score:2)
Re: (Score:2)
Oh COBOL is easy enough to understand. It just requires a lot of time to read through a lot of code.
But readability is actually COBOL's strongest point. It's in that very weird category of languages that are hard to write and easy to read.
Not that I've looked at any COBOL in nearly 30 years. God help me, that first job was like having my brain hacked out with a plastic spoon. COBOL is the devil.
Re: (Score:3)
well regulated
Translation (based on current usage of the phrase): Zero regulation.
I was actually making the same pun when I wrote that. "Well regulated" in the 2nd Amendment (18th-century English) meant "well practiced" (having nothing to do with laws restricting them, which is a very modern-language misinterpretation), in the same sense that a mechanical machine is "well regulated" when the gears are aligned and properly oiled.
A well regulated AI would be one where the training data and hand tuning is good, I suppose.
And there obviously can't be any "regulations" against the develop
Re: (Score:2)
One thing is certain. The AIs are here and there is no stopping them.
I for one welcome our new AI overlords....
No one knows enough yet? (Score:5, Insightful)
Re: (Score:2)
In soviet socialist Russia, AI regulates you!
Re: (Score:3)
We should just let AI regulate itself.
Given its stellar job on everything so far, I'm sure that will work out great!
Re: (Score:2)
That actually is their plan; AI regulation/morals etc. is called "alignment." The researchers are still working on it and are (literally) hoping they can get the AIs to regulate themselves, but they aren't certain.
Re: (Score:2)
"Researchers". Yeah, the LessWrong nuts and their pretend "research institute" aren't actually researchers.
Re:No one knows enough yet? (Score:5, Interesting)
The real question is why the industry wants to be regulated. They're free to make and follow any strictures they'd like.
My guess is so that they can delay the inevitable crash as we slide down the slope of disillusionment once we crest the top of the hype wave.
It makes for a nice excuse as well. "Sorry investors, it's these darn regulations!" There's about to be a noticeable lack of progress on that front as larger models become prohibitively expensive and the real limits of current models become too obvious to ignore.
Re:No one knows enough yet? (Score:5, Insightful)
The real question is why the industry wants to be regulated. They're free to make and follow any strictures they'd like.
A common reason industries beg, and even lobby, to be regulated is that regulation serves as a means of reducing competition by increasing barriers to entry. The big guys can afford the resources to jump through all the process hoops.
Re:No one knows enough yet? (Score:4, Informative)
They want to be regulated because they're some of the brightest brains on the planet, and they know that the next step, which may be just months from now, is a genius-level AI with an encyclopedic knowledge that far surpasses any human's.
People think that everything will be OK because they'll just be able to use AI as a tool to be more productive, but what they're missing is that AI will also be good enough to use AI as a tool to be more productive; the humans who want to be more productive by using AI won't be needed. Any job that doesn't require physical work is at risk of replacement within a handful of years.
Re:No one knows enough yet? (Score:4, Insightful)
the next step, which may be just months from now, is a genius-level AI with an encyclopedic knowledge that far surpasses any human's.
That's very obviously not going to happen. How did you come to such an absurd conclusion?
A lot of the other replies suggest that the real goal of all this regulation talk is to limit competition by increasing barriers to entry, which makes a lot more sense than experts in the field being afraid of imaginary monsters.
I've also thought that all this fearmongering could be part of a marketing stunt intended to make the technology appear far more advanced than it is, or the pace of development far faster than it is, without making any specific claims that will get them in trouble with the investors they're busy fleecing. Why would I think such a thing? Well, we know that the technology isn't nearly as advanced as people seem to think, the pace of development isn't even remotely as fast as people think, and investors are undoubtedly being fleeced.
Any job that doesn't require physical work is at risk of replacement within a handful of years.
Elmo has been claiming fully self-driving cars within a year every year [futurism.com] for the last 9 years. Every year, some new group of hopefuls, along with the hopelessly credulous from years past, go along with it under the delusion that this time things are different.
You're predicting a pretty optimistic timeline, even for one of the faithful. I wonder how many months or years it will take before you start pretending that you were always skeptical? What will be the excuses along the way?
Re: No one knows enough yet? (Score:3)
AI is over 100 IQ already, and yes, it is about to be genius level within months. White-collar jobs will be wiped out if AI isn't banned or very tightly regulated.
We are close to the point where AIs will be able to create more intelligent AIs.
Your assertion that AI won't be very intelligent is one of ignorance. I said the same up until recently, but what I've seen in the last couple of weeks has completely changed my mind. See the last month of videos on the AI channel I have linked in my sig and you'll under
Re: (Score:2)
I think one thing people tend to either gloss over or forget entirely is that we don't really *know* what intelligence is. Tie that together with humanity's strange ability to perceive everything as proof that we're somehow special, better, more, (add more descriptors here) than anything ever in the history of anything, and yeah, a lot of people are going to deny that machines can be "intelligent." Hell, a lot of scientists will insist that no animal other than humans has ever been intelligent, tool using m
Re: No one knows enough yet? (Score:2)
I'll take intelligent as being able to get a high score on any IQ test that is thrown at it. These AI machines will be able to replace coders, lawyers, customer service, anyone in finance or insurance: basically, most people with a desk job who commute to an office or similarly work from home.
Re: (Score:2)
I'll take intelligent as being able to get a high score on any IQ test that is thrown at it. These AI machines will be able to replace coders, lawyers, customer service, anyone in finance or insurance: basically, most people with a desk job who commute to an office or similarly work from home.
Agreed. For the most part, I'd think they could do a lot of that now if the people "training" them didn't have to spend so much time making sure they push the current identity-politics soup du jour above "knowledge" in its own right.
Re: (Score:2)
These AI machines will be able to replace coders, lawyers, customer service, anyone in finance or insurance: basically, most people with a desk job who commute to an office or similarly work from home.
There is absolutely no reason to believe that. That's just pure delusion. We know how these things work. We know what their actual capabilities and limitations are. They only seem mysterious to you because you don't actually know much about them. I can assure you that they are not even remotely close to being able to do the things that you think they're doing, let alone whatever future things you've imagined.
You might want to spend a bit less time with pop sci videos and a little more time with a text
Re: (Score:2)
Wow, you really believe this nonsense?
Let me know how it does on an IQ test that requires it to balance a set of parentheses or do basic arithmetic. LOL!
Re: (Score:2)
Wow, you really believe this nonsense?
Let me know how it does on an IQ test that requires it to balance a set of parentheses or do basic arithmetic. LOL!
All naysayers really need to watch this...
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
The ones screaming the most about wanting regulation aren't actually "the industry"; it was people like Musky, who felt left out, and that "AI researcher" guy whose name I forget but who hasn't actually done anything except write idiotic op-eds.
Re: (Score:3)
Because the way AI is going, it's going to run straight into existing laws. And if that isn't taken care of, it could sink the AI ship.
Think of it right now - let's say you use ChatGPT to make you a nice Mickey Mouse figure. You put it up on your website as art, but then Disney gets word of it and starts throwing their weight around and you suddenly see yourself at the end of a lawsuit cit
Re: No one knows enough yet? (Score:2)
I don't think AI can be regulated beyond banning its use for all but a tightly defined set of uses that won't make millions of people redundant.
Re: (Score:2)
We know plenty to begin regulating AI. We don't know enough to enact regulation which will govern the industry for decades to come, but that is a red herring. We know enough to get started now.
One area to start is regulating how companies can obtain and use training data. It is the wild west right now, and AI companies have no idea if they are breaking the law or not. The government needs to step in here. We also need regulators to start identifying where we need new laws and where existing laws can be used. Can
Re: (Score:2)
It Shouldn't (Score:3)
Let the marbles fall where they may. It will self-heal.
Re: (Score:3)
If you tried to purposefully create the laws of robotics today, you would end up with RoboCop's 400 directives, which is what drove him to electrocute himself.
Re:It Shouldn't (Score:4, Interesting)
I always think back to the times when game programmers have tried their hand at self-learning AI to generate the model which the realtime AI will use, and how inevitably they leave it overnight and come in to find that the AI has solved the problem, as denoted by the fitness function, but in a way that's not remotely useful for the purposes of the game. So they add more rules to the fitness function and try again. And the same thing happens: the AI exploits gaps in the logic to solve the problem in a way that's useless. By the time they realise they've pissed too much time up the wall, the fitness function starts to look a whole lot like what they should have written by hand in the first place...
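A toy version of that failure mode, sometimes called specification gaming, assuming a racing game and made-up numbers: the optimizer maximizes the fitness function as written, not as intended, until the function is rewritten to say what the designer actually wanted all along.

```python
# Two candidate behaviors the overnight optimizer might converge on.
candidates = [
    {"speed": 90, "progress": 2},   # spins in a tight circle: high speed, no progress
    {"speed": 40, "progress": 50},  # actually drives the track
]

def fitness_v1(agent):
    return agent["speed"]  # what the designer wrote: "faster is better"

def fitness_v2(agent):
    # After the overnight surprise: reward what was actually wanted.
    return agent["progress"] + 0.1 * agent["speed"]

print(max(candidates, key=fitness_v1))  # picks the useless circling agent
print(max(candidates, key=fitness_v2))  # picks the agent that races the track
```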
People are worried that GPT will become Skynet. I'm more worried that we'll have another dotcom crash where massive investment is thrown at companies who all have software that gets 90% of a problem solved and leaves that last pesky 10% up to the gods until the money runs out and one by one they start to fail and take large chunks of the economy with them.
Re: (Score:3)
There was a story that this happened during the development of Oblivion.
The devs tried to make an AI that worked based on goals and priorities, and it had all sorts of weird outcomes, like people murdering each other because somebody forgot to give an NPC a tool they needed, or characters wandering off somewhere and leaving their post deserted.
That's funny in the abstract, but it's not fun to play a game where you find that crucial NPC Bob is dead because he stole from the blacksmith, who killed Bob, and then
Re:It Shouldn't (Score:4, Insightful)
Isaac Asimov created the 3 laws to illustrate, over and over, the unintended consequences that arise from simple rules.
Really, a good chunk of his stories is about how the 3 laws have all sorts of unexpected snags and weird outcomes.
Re: (Score:2)
It was a plot device. Maybe it turned out to be prescient, but was he intending to be a futurist? That's something that's hard to know. But maybe somebody who's studied his life and letters, interviews, etc, knows better.
I'd say regardless of original intent, it has served well as a warning against arbitrary "laws" like that. But that's just my interpretation, and a good story has many.
Internationally. (Score:2)
The same Fear Of Missing Out drives the competition between nation-states on this front as well. Without correcting for this pressure, all any amount of domestic pressure can do is play kingmaker -- and not in a good way. Whoever looks the other way the longest probably "wins", but this may well be a case of "play stupid games, win stupid prizes". Only problem is that we all get the prize.
Hopeless (Score:5, Insightful)
Re: (Score:2)
Yeah, like land mines: everyone thinks it is criminal and inhumane to use land mines in modern warfare, and the country that refused to stop using land mines is... America. And then there is the international agreement to restrict bioweapons research; let's see, the only major country that had not signed it... again, it is America.
America is not the only country who has refused to sign these. I don't believe any of the top six militaries in the world have signed the mine ban treaty. Only smaller countries under the protection of these larger nations have signed it. And there are many countries who reject the bioweapons research bans on the grounds that inspections would compromise national security.
And the treaty to ban child labor? Yep, it is the US which refused to sign.
Bill Clinton signed the treaty in 1995, but we haven't had a strong enough Democratic supermajority to ratify it since then. Even right-
Re: (Score:2)
Yeah, like land mines: everyone thinks it is criminal and inhumane to use land mines in modern warfare, and the country that refused to stop using land mines is... America.
I disagree. Land mines are a useful area-denial technology that raises costs for the attacking party. Explosive remnants of war (ERW) are addressed by internal timers that detonate the weapon or render it safe after a preset interval.
Defensive area denial promotes peace by raising the cost of aggression.
Why Regulate It? (Score:4, Interesting)
Re:Why Regulate It? (Score:4, Insightful)
You have way more faith in the power of education than I do.
We've been educating people how to drive properly for decades. You can't get a license to drive without it. Do we see people driving safely and thoughtfully out there on the roads?
We've been educating people in reading, writing, and arithmetic for generations. Do we have a society where everyone can read, write, and calculate well?
We've been educating people about avoiding phishing emails for years now. How's that one going? In every pen-test my company does, about 10% of employees fail, and click the bait link.
Education does help some people, so it's worth doing. Just don't be overly optimistic.
Re: (Score:2)
Think about how bad things would be without any education!
Re: (Score:3)
Of course!
The point the OP made was that education could substitute for regulation.
I say that we need both education *and* regulation, because education alone isn't enough to keep abuses from happening.
Re: (Score:3)
If AI isn't regulated, then those 27.6 people can kiss their jobs goodbye and the gov't will have a huge tax hole: employees pay taxes; AIs don't.
Re: (Score:3)
See my sig:
AI: IQ, Sentience, danger, papers: https://tinyurl.com/3cc7wv9w [tinyurl.com] . . . . Ilya Sutskever: https://youtu.be/Yf1o0TQzry8 [youtu.be]
That tinyurl links to a YouTube channel that shows where AI is really at: GPT in combination with AutoGPT is already at average human-level IQ. GPT can code right now; it's not perfect, but the wrinkles are currently being ironed out by better models. The next iterations of GPT won't have the IQ of an average human; they will have genius-level IQ, a knowledge that'd take humans a thou
Re: (Score:3)
a youtube channel that shows where AI is really at
No, it doesn't.
Re: (Score:3)
I'm not wasting any more time on your knee-jerk crap.
Watch some of these:
https://www.youtube.com/@ai-ex... [youtube.com]
And then tell me this guy doesn't know what he's talking about.
Until then, it's you who doesn't have a clue.
Do not regulate! Samaritan coming (Score:2)
Already is. (Score:2)
If you use the program to hurt people, such as by creating discriminatory hiring practices or physically harming people, those things are already illegal.
I'll wait for the press to sacrifice its own freedom of speech first. Then maybe we can think about constraining the right to code.
license use (Score:2)
If it can be dangerous then license use. Log access, etc.
But what about printing? (Score:3, Insightful)
But what about printing? We need to deal with that too, and soon. People keep writing all kinds of stuff to each other and some of it is wrong or bad. The same with speaking. Some people say things which are not true and/or annoy or offend or inconvenience me. This has never before happened in the history of humanity. Something must be done!
Why would we? (Score:2)
It is just a tool. A powerful one. Learn to use it properly.
Re: Why would we? (Score:3)
Answers to these "why" questions come more easily if we dispense with the self-flattering delusion that we are a rational, enlightened, people prone to scientific thought...and instead embrace the unfortunate, but no less true, reality that we are a superstitious people naturally prone to cargo cult pseudoscience where superficial resemblance is paramount, and underlying truth be damned.
When you realize this, you understand why our politicians talk the way they do about things like gun bans and AI regulatio
Re: (Score:2)
Sure, but don't expect to be employed to use the tool, because that tool will have genius-level AI and will be able to do the job itself.
Regulate its use, not research or development (Score:3)
The only regulation, if any, needed is banning unmonitored AI from life/death decision processes... not because it would become Skynet, but because it is liable to make stupid mistakes. AI technology sucks really badly; it would be really dumb to even think about regulating AI for at least 30 years, if not longer. We still don't have decent humanoid robots... AI technology sucks. AI still cannot reason. Even ChatGPT is nowhere close to demonstrating reasoning ability. I just don't see it happening.
Elon is against AI, but doesn't mind telling us to have it drive us around? Well, that's the type of shit that I'd be scared of: that if the AI becomes sentient, it may get pissed off about playing the wrong music or do something rash.
Re: (Score:3)
AI technology sucks. AI still cannot reason. Even ChatGPT is nowhere close to demonstrating reasoning ability. I just don't see it happening.
After watching presentations on the GPT-4 model I would say the ability to reason has been demonstrated. Either that or there was some extreme cherry picking taking place.
Re: (Score:2)
And GPT-4 is just a stepping stone; it can be further tuned, and GPT-5 is expected to be far better.
A criticism has been that GPTs are not good at counting and math; the solution is literally to show GPT how to use a calculator and Wolfram|Alpha. Another deficit is the lack of experience; for this, memory is being added: learning on top of the learning. And to improve results, the GPTs can check their outputs before outputting them, often correcting mistakes they made; this can happen multiple times.
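A minimal sketch of that calculator pattern: the model is taught to emit a tool call instead of guessing at arithmetic, and a harness executes the call and splices the result back in. The [calc: ...] tag format and the harness are hypothetical, standing in for however a real system (a calculator or Wolfram|Alpha plugin) wires tools to the model.

```python
import re

def calculator(expression: str) -> str:
    # A trusted arithmetic evaluator stands in for a real tool backend.
    return str(eval(expression, {"__builtins__": {}}, {}))

def run_with_tools(model_output: str) -> str:
    # Replace each hypothetical [calc: ...] tag with the computed result.
    return re.sub(r"\[calc:\s*([^\]]+)\]",
                  lambda m: calculator(m.group(1)),
                  model_output)

# Instead of guessing the product, the model emits a tool call:
draft = "The invoice total is [calc: 137*68] dollars."
print(run_with_tools(draft))  # -> "The invoice total is 9316 dollars."
```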
Regulations always follow abuses (Score:2)
It makes no sense to regulate something that isn't a problem. Doing so wastes time and money. If, for example, we decided to regulate the colors of car paint, that would be absurd, unless some colors of car paint create a traffic hazard or there is some other compelling reason to do so. Even if we could imagine a scenario where paint color could cause an issue, doing so with no actual motivation takes time and money away from issues that are more urgent and more dire.
Regulating AI
Re: (Score:2)
When the actual issue is an intelligence great enough to be able to wipe out humankind, you might want to introduce the regulations before it kills everybody.
Re: (Score:2)
When the actual issue is an intelligence great enough to be able to wipe out humankind
Is there an intelligence not great enough to be able to wipe out humankind? It does not take any kind of AI to be that "great." The level of AI, or machine learning, or just regular software, was "great enough" to wipe out humankind... decades ago. It didn't, however, because the humans that created the software didn't intend for it to do such a thing.
Like regular software, AI has human creators, and those human creators are able to specify its capabilities. It's not excessive intelligence, by itself, that
Re: Regulations always follow abuses (Score:2)
Genius-level AI will only be stopped if it is banned now. Every job that can be done "work from home" style can be done by AI instead; that is tens of millions of jobs.
Without tax and spending from those workers, the economy will collapse.
There has not been general AI before now.
Re: (Score:2)
You've been reading way too much science fiction. What the heck is "genius-level AI"? Sentience?
And do you actually think that it can be "banned"? So we ban it in the US, and Europe, let's say. Who's going to stop China or India from marching headlong into AI, leaving us "responsible" nations in the dust?
Factories put blacksmiths worldwide out of business. Farm mechanization cost millions of farm workers their jobs. Yet somehow we still have more than enough jobs to go around, at least, in the developed wor
Re: (Score:3)
I agree with this, but the real issue isn’t that pre-emptive regulation is unnecessary, it’s that it’s futile.
If it were merely unnecessary the harm would only be to the benefits foregone, and foregoing some of those benefits might be thought a reasonable price to prevent, oh I don’t know, human extinction. (Yes, this is hyperbolic, but some people really claim that’s what we’re facing).
But without real world effects we are groping in the dark. We’ve no idea what th
Re: (Score:2)
Unnecessary, futile, and also inappropriate.
No (Score:2)
Politicians aren't smart enough to regulate AI.
Re: (Score:2)
Nope. The problem is, they're smart enough to take "campaign contributions" from the AI makers.
Don't believe it! (Score:5, Interesting)
"For once, much of that industry is desperate for someone to help slow it down."
Translation: The big players like Google, Microsoft, etc. are terrified that a disruptive technology like AI might challenge their dominance in the world of computing, so they want a big regulatory regime to slow potential startups from overtaking them. The big boys have the resources to work with (and help shape) the regulatory system; it's the new entrepreneurs that would most likely lack the resources to comply with the complex schemes they're suggesting. Think of the other disruptive technologies that have shaped the world we live in today: the semiconductor, the microcomputer, the internet, etc. None of these were burdened with heavy regulation, and as a result small companies like Apple (which started out of a garage) were able to grow and later change the world. Why would we now completely change direction and start regulating the tech industry?
Re:Don't believe it! (Score:4, Insightful)
Regulation is only a "burden" if you're unwilling to act responsibly in the first place.
"Begging to be regulated" (Score:4, Insightful)
usually means the "beggar" is well-resourced enough to game and/or capture the regulatory framework to his advantage to shut out competitors.
Discuss.
What does science fiction suggest? (Score:2)
Has it been a popular theme that a race to create AI contributed to problematic AI?
Colossus: The Forbin Project, which still packs a punch, imo.
Colossus: The Forbin Project (1970) - Clip 1: Missile Launched! (HD)
https://youtu.be/tzND6KmoT-c [youtu.be]
How did Iain M. Banks have the Culture Minds put it: "When a Mind goes bad, it goes really, really bad"?
You can try.. (Score:5, Insightful)
But all attempts at regulating popular products ended in failure, and you can't even use dogs to sniff for AIs.
Nuclear weapons (Score:2)
Why didn't they regulate nuclear weapons?
Re: (Score:2)
They kinda tried [wikipedia.org] but we all know how well that went. And for the same reasons that AI regulation won't work globally.
China... (Score:2)
Just regulate it like malicious code. (Score:2)
...because having done so, governments have completely stopped malicious code getting out in the wild.
Cherry picking much? (Score:4)
I call bullshit. Dollars to bagels, the Grey Lady cherry-picked a few developers who want to be regulated, just like they can find rich people who claim they want taxes raised.
If you did a statistically valid survey, I find it very hard to believe most AI researchers want government regulation. We talk about AI at my company all the time and I've never, ever heard someone suggest that as a reasonable next step. Caution, sure, but not asking regulators to make decisions for us. We're learning far too much too fast about how to make ML systems work for any regulatory framework to keep up.
They are desperate for a shield (Score:4, Insightful)
The open-ended question of interpolating from copyrighted training data scares the shit out of them. Regulation which recognizes the current status quo would give them some legal cover.
It's pretty simple, actually. (Score:3)
Honestly, it seems like the media is being paid to be obtuse about this. It's not rocket science.
Lessons from biological warfare (Score:2)
AI has many of the same attributes, plus much of the development (that we know about) is taking place in the public sphere. The same drivers apply too. That if the west decides to slow down
It regulates itself already (Score:2)
I'm already sick of all the disclaimers and reasons not to answer any question because of fears that somebody might be offended somewhere some time.
What regulation would that be? (Score:2)
Even if regulating AI were a realistic option (and many comments point out why it is not), I have to ask: Just which government would be competent to create such regulations?
Government is essentially always decades behind the times. Which is generally a good thing: slow-moving government is better than one that reacts to every trend and fashion. However, it also means that government completely lacks the competence to regulate emerging technologies. Sure, they could pay some expensive consultants, but thos
I have the solution (Score:2)
To Regulate AI ... (Score:2)
You first need a definition, or anyone who does not want to be regulated will redefine what they are doing as Not AI ...
It is not as clear cut as most people think ...
Suggestion (Score:2)
Regulations should require all schools to teach that pictures, videos and writings are all fabrications unless proved otherwise, and it's impossible to prove a negative.
Regulations should require teaching how to use tools for improving quality of life.
Regulations should provide funding for educating the elderly that no picture, video or writing is proof of anything.
Regulations should educate the justice system and update the rules of evidence.
There should be a regulation teaching regulators it's impossible to regulate tec
Rights, for one (Score:3)
I think that before we reach that point, we need to come up with objective criteria for determining if a program has reached this point (if such criteria are even possible to come up with; we're still rather vague on the whole matter as it pertains to ourselves), and more importantly, we need to think about the matter of "human" rights as it pertains to AI. Because really, what we're going to accomplish long-term (assuming we don't destroy ourselves first) is, for all intents and purposes, the creation of intelligent digital life. We must guarantee rights to such a "lifeform," as to do anything less would be morally equivalent to enslavement.
The discussion around artificial intelligence right now is centered around what it can do for us and fear of what people will do with it (hence regulation) because right now it's just an exceptionally powerful tool that we haven't had access to before; it's like the printing press or something. But we need to consider that we'll continue to improve upon this tool and we need to anticipate what comes next and not just fixate on the current problems (which admittedly we're also late in facing).
To loop back to the original topic of regulation, I think any regulation happening right now should also try to anticipate the evolution of the "tool" into something greater and build in guarantees for when we reach that point.
Some regulations I'd like to see (Score:2)
AI should be banned. (Score:2)
Out of all the things society doesn't need, quickly producing enormous amounts of low quality rehashed content with central control/injection enabled in it is among the top. We've already seen the extent of manipulation via subtle alteration of search and social media algorithms that is possible, and that doesn't even approach the realms of possibility a s
these are not general purpose AI's (Score:3)
Text prediction is not going to take over the world.
A long time ago, when we wanted to know something, we went to a library, and looked it up. A certain percentage of the books were crap, and weren't labeled, so sometimes we got good info, sometimes bad.
Then came search engines. Initially, a fancy card catalog of web pages, they eventually got better indexing, and a little information about the types of things you looked for before, to help guide you to the right web pages. Advertisements followed shortly based on what you were looking for. (yes, we're really sorry about that)
Phone autoprediction looked at what you'd typed in and tried to guess what you might want to type next, mostly because typing on phones sucks.
ChatGPT is basically a phone autopredict that is marginally smarter: it tries to guess what comes next, based on your prompt and similar things it's seen in as much internet-accessible info as it could get its hands on.
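A toy bigram model makes the comparison concrete: count which token follows which, then repeatedly sample a next token given the last one. GPT runs the same guess-the-next-token loop, only with a deep network conditioned on thousands of tokens of context instead of one word. A minimal sketch:

```python
import random
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat the cat ate the cream "
          "the dog sat on the log").split()

# Count which word follows which word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:
            break  # dead end: nothing ever followed this word
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the log"
```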
The uses to which ANY of these ways of accessing information can be put is what needs to be thought about. I can search for how to make explosives, or any number of things that are not good for society in general. That doesn't make the search engine evil. It makes it indifferent. The use that I put that knowledge to is what is potentially evil.
Does it mean that spammers can use AI to generate better looking spam that tries to evade the blockers? Yes. Does it mean that the AI is evil? No. No more than MS-Word is evil. (Maybe a bad example...)
Does it mean we get deepfake videos of our political leaders doing idiotic stuff? Yes. Will that be distinguishable from the usual idiotic stuff they do? Maybe not.
Does it mean that we should complain about copyright violation? Yes. We should.
Should we be scared that ChatGPT is going to take over the world? No. But the people that make optimal usage of it may find that task easier...
Have to define AI first (Score:2)
What's AI again?
Re: (Score:3)
Automated plagiarism should be illegal.
Silly question mayhap, but which part of the debate, and technology, is this in reference to?