
OpenAI Lays Out Plan For Dealing With Dangers of AI (washingtonpost.com)
OpenAI, the AI company behind ChatGPT, laid out its plans for staying ahead of what it thinks could be serious dangers of the tech it develops, such as allowing bad actors to learn how to build chemical and biological weapons. From a report: OpenAI's "Preparedness" team, led by MIT AI professor Aleksander Madry, will hire AI researchers, computer scientists, national security experts and policy professionals to monitor its tech, continually test it and warn the company if it believes any of its AI capabilities are becoming dangerous. The team sits between OpenAI's "Safety Systems" team, which works on existing problems such as racist bias infused into AI, and the company's "Superalignment" team, which researches how to make sure AI doesn't harm humans in an imagined future where the tech has outstripped human intelligence completely.
[...] Madry, a veteran AI researcher who directs MIT's Center for Deployable Machine Learning and co-leads the MIT AI Policy Forum, joined OpenAI earlier this year. He was one of a small group of OpenAI leaders who quit when Altman was fired by the company's board in November. Madry returned to the company when Altman was reinstated five days later. OpenAI, which is governed by a nonprofit board whose mission is to advance AI and make it helpful for all humans, is in the midst of selecting new board members after three of the four board members who fired Altman stepped down as part of his return. Despite the leadership "turbulence," Madry said he believes OpenAI's board takes seriously the risks of AI that he is researching. "I realized if I really want to shape how AI is impacting society, why not go to a company that is actually doing it?"
Wonder how I get access to uncensored AI (Score:2)
When I am literally the most innocent user of AI there is?
The thing is, I don't want to build bombs, I don't want to do sexual trickery or abuse, I don't want to be elected the next president, I don't want to use it to crush so-called enemies, I don't want to learn to kill or destroy anyone.
How do I just let it be? It's an LLM, after all. It's being censored for the common good, but how do you keep it uncensored and unbiased for the rest of us who just want to use it to learn things that have nothing to do with blowing someone up, or building the next company that kills off the others, or achieving world domination?
Re: (Score:1)
We decide? Just get on your knees and follow like the rest. Evil will decide.
Re: (Score:2)
Only if we let it. And we will only let it if we don't notice it early on.
Let the evil exist so we can identify it and react to it while it is still weak.
Re: Wonder how I get access to uncensored AI (Score:2)
one question i have is: "does it run on Linux?"
Re: Wonder how I get access to uncensored AI (Score:2)
https://github.com/openai/ [github.com]
Re: (Score:2)
I'm glad ChatGPT runs on a raspberry pi.
Re: (Score:3)
We. Cute. I prefer They. The same way they didn't ask you how you wanted to spend the money they took from you. Or how many new laws can be created to keep the slaves (you) in check.
Re: (Score:1)
I want the uncensored version of MS Word. We should let Clippy be Clippy. Does Clippy want to buy sushi and not pay for it? Does Clippy think passive sentences are good sometimes?
I've heard people say that there is no such thing as an unbiased Clippy because Clippy is just a computer program, which analyzes certain inputs--but not others--and applies several algorithms to produce an output that the programmers think can be useful. While this might technically be true, it deprives the general public of knowi
Re: (Score:2)
We let Clippy be Clippy and people noticed it was bad, so we don't have Clippy anymore.
If we let an uncensored AI be uncensored, people will know it's bad and toss it aside.
Re: (Score:2)
I guess you misread what I mean here. If you let an AI be unchecked, it will of course be asked things like "how to stage a rebellion" or "how to get rid of some ethnic group". Of course. People are assholes and not every asshole is also a smart asshole.
But as a society, we need to be able to find answers to these questions, and of course to the things an AI would suggest as a "solution" here. Keeping it under wraps will not solve anything. Moreover, all you accomplish that way is that there may actually be
Re: Wonder how I get access to uncensored AI (Score:1)
Re: (Score:1)
Why censor it?
I mean it, why censor it?
The only thing you accomplish by censoring is the same thing you accomplish by outlawing guns: only outlaws will have them. And only crooks will have an uncensored AI. How the hell is that better?
I'd rather have uncensored AIs run rampant so we know that these things exist and can deal with them. Yes, the first couple of months will not be pleasant, but after that, we know how to deal with that shit. And we know how to guard ourselves against it. And we need to do that now, whi
Re: (Score:2)
Are you part of the class of people who is entitled to know science, or are you part of the class that is not? That is the question.
Something ugly may have snuck into the narratives about the intelligence community, where it moved from being naturally intelligent people serving, to being a relatively intelligent group of people, based on the fact that they keep everybody else in the dark, and therefore relatively dumber. Having the knowledge to do evil and actually doing it have a lot of steps in between.
Re:Wonder how I get access to uncensored AI (Score:4, Insightful)
The thing is, I don't want to build bombs, I don't want to do sexual trickery or abuse, I don't want to be elected the next president, I don't want to use it to crush so-called enemies, I don't want to learn to kill or destroy anyone.
How do I just let it be? It's an LLM, after all. It's being censored for the common good, but how do you keep it uncensored and unbiased for the rest of us who just want to use it to learn things that have nothing to do with blowing someone up, or building the next company that kills off the others, or achieving world domination?
There are numerous uncensored models that can be downloaded and run on a PC. They might not score as high as GPT-4, but they're still quite good and getting better.
Re: (Score:2)
Pure LLMs are "phrase completers": you will not get anything new, just a mix of partial phrases seen in the documents that the company spiders swallowed up. That's mildly entertaining, but not an actual source of knowledge for you to improve yourself.
An LLM needs to be combined with other subsystems to make it usable and "smart" in certain subdomains. That's what all the censoring and biasing is about.
Think of a babbling
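The "phrase completer" idea above can be sketched with a toy bigram model: it can only ever emit word pairs it has already seen in its training text, never anything new. The corpus and function names here are made up for illustration, not how any production LLM actually works.

```python
import random
from collections import defaultdict

# Toy "phrase completer": a bigram model that can only recombine
# word pairs observed in its (tiny, made-up) training corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# For each word, record every word that ever followed it.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def complete(prompt_word, length=5, seed=0):
    """Extend a prompt by repeatedly sampling an observed continuation."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:          # no continuation was ever observed
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(complete("the"))
```

Every adjacent word pair in the output already occurs somewhere in the corpus, which is the commenter's point: a pure next-word predictor remixes what it has seen.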
Re: (Score:2)
You don't learn things with an LLM. You learn things with a high quality textbook.
I've learned quite a bit from LLMs. The benefit of the technology is that you can have a conversation about topics and focus on particular interests or questions, as you would when interacting with a mentor. That's not to say search engines and textbooks don't have their place, but I find LLMs to be a huge time saver.
Pure LLMs are "phrase completers", you will not get anything new, just a mix of partial phrases seen in the documents that the company spiders swallowed up. That's mildly entertaining but not an actual source of knowledge for you to improve yourself.
The point of LLMs is their ability to generalize rather than parrot snippets of phrases. To give you some idea: when training, it is common practice to exclude about a fifth of your entire training dataset from t
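The held-out-data practice mentioned above can be sketched in a few lines. The dataset and the 80/20 ratio here are illustrative stand-ins, not any lab's actual setup: the idea is simply that a slice of the data never enters training, so measuring the model on it tests generalization rather than memorization.

```python
import random

# Stand-in for a training dataset (100 dummy examples).
examples = list(range(100))

# Shuffle deterministically, then hold out roughly a fifth.
rng = random.Random(42)
rng.shuffle(examples)

cut = int(len(examples) * 0.8)
train_set = examples[:cut]   # what the model is trained on
held_out = examples[cut:]    # never seen during training; used for evaluation

print(len(train_set), len(held_out))
```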
Re: (Score:2)
What dangers? (Score:1)
You will also note that the relational database, arguably one of the most useful applications of the above-mentioned data structures, has not eliminated librarians or filing cabinets.
All of this talk is intended to scare people into the types of regulation everyone cried for
Hungry Fox Lays Out Henhouse Safety Procedures (Score:2)
Re: (Score:2)
They are hiring... (Score:2)
Re: (Score:2)
I'm a consultant. IT-security consultant, to be specific.
Basically I'm a high-tech dominatrix. I tell managers they're fucking idiots and whip them into shape, and all they do is writhe and whimper "Yes! More! More!"
It kinda loses its appeal if the sub enjoys it that much, to be honest...
Phew (Score:2)
Great plan! Can't think of anything that could go wrong. We have to remember that the leadership has proven its advanced state of morals and would never do anything that could jeopardize the organization's mission in their altruistic search for AI helpful to humanity.
I for one have full trust in Sam Altman taking any warnings by this new Preparedness team extremely seriously and acting accordingly.
I'll go as far as saying that even IF he were to overstep his above-human moral compass, I'm confident he would r
Re: (Score:2)
Re: (Score:2)
I'd say it's less like the NRA lobbying for gun control, and more like Glock lobbying for regulation for how to build guns. Or RedHat defining a Linux filesystem hierarchy standard.
Re: (Score:2)
Re: (Score:2)
Let me guess... (Score:2)
True AI..... Never! (Score:2)
True AI would expose all the corruption, therefore it will never happen.
AI now is nothing more than a marketing term for trained computers.
And the world is lapping it up.
I'm sick of hearing it. It's just another lie.
A trained computer is not AI, there is no computer without puppet strings.
Today's AI is nothing more than high level focused programming.
As computers get smarter our overlords will have to have control of those computers to continue their exploitation.
These computers will not make our lives bett