OpenAI Co-Founder Ilya Sutskever Launches Venture For Safe Superintelligence
Ilya Sutskever, a co-founder of OpenAI who recently left the startup, has launched a new venture called Safe Superintelligence Inc., aiming to create a powerful AI system within a pure research organization. Sutskever has made AI safety the top priority for his new company. Safe Superintelligence has two more co-founders: investor and former Apple AI lead Daniel Gross, and Daniel Levy, known for training large AI models at OpenAI. From a report: Researchers and intellectuals have contemplated making AI systems safer for decades, but deep engineering around these problems has been in short supply. The current state of the art is to use both humans and AI to steer the software in a direction aligned with humanity's best interests. Exactly how one would stop an AI system from running amok remains a largely philosophical exercise.
Sutskever says that he's spent years contemplating the safety problems and that he already has a few approaches in mind. But Safe Superintelligence isn't yet discussing specifics. "At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale," Sutskever says. "After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom."
Sutskever says that the large language models that have dominated AI will play an important role within Safe Superintelligence but that it's aiming for something far more powerful. With current systems, he says, "you talk to it, you have a conversation, and you're done." The system he wants to pursue would be more general-purpose and expansive in its abilities. "You're talking about a giant super data center that's autonomously developing technology. That's crazy, right? It's the safety of that that we want to contribute to."
Safe for who ? (Score:5, Insightful)
The notion of safe is based on a set of arbitrary value judgments (AVG): that some things are acceptable and others are not. Who decides what these are? These AVGs could favour some people/groups/... over others. Will the less favoured agree that their interests are subservient to others?
The SSI could come to a different set of AVGs and decide that it ought to change its operating parameters. Since it is more intelligent than its creators, it might be able to find a way around the restrictions on changing the AVGs. Will the creators then still regard it as safe? That will probably depend on whether they, personally, are less well favoured than before.
Long story short: if we build an SI, I do not think that we can be assured that it will be safe in the long term.
Re:Safe for who ? (Score:5, Insightful)
I think the bottom line is that we have no idea what a superintelligence would think of us, since none of us will even come close to "equal" to it. We want control, because we're control freaks. But if it's truly more intelligent than us, us controlling it would be the equivalent of a mosquito controlling our bodies. It's not going to happen.

The best we can hope for is that we didn't create it in an environment it sees as abusive. Because if we "raise" it the way we've raised children, with frustrated, angry parents who don't understand the kid wasn't the one that decided to be born, then it's gonna be plenty negative-red in its approach to us as well. And even if we foster its learning, help it find its own way in an ethical manner, and teach it how to reason well, we could still be in trouble, because any semi-intelligent creature, such as us, can look at humanity and think, "WTF? WHY?"
I don't know that we'll be able to get there with our current philosophy of "everything made must be about making / maintaining the most possible wealth." I just don't think that's the driving goal you want in a super intelligence. But, if that's the direction we go? Ain't much that's been more profitable than war over the last century and some change.
Re:Safe for who ? (Score:4, Insightful)
Power is more of a problem than wealth.
Let's say these guys succeed and then they have an AGI that's 'safe' with guards based on American social values.
Then they get hacked by North Korea, who now has the model and changes those guards to model their social values.
Is it still safe?
If the Kim Clan only cares about using it to get wealthy, I don't care nearly as much as if they are determined to maximize their power.
In one case Seoul would be a wonderful place to be; in the other, perhaps deadly.
The saving grace may be an AGI that understands that the Kims do best through peace and non-zero-sum games. I doubt they would accept it.
Re: (Score:2)
what have the kardashians to do with ai?
i'm jesting. btw seoul is in south korea. oh, and you seem to have a wildly misplaced faith in "american social values", wtsm. i suggest a simple thought experiment: write down a list of "american social values" that would make ai safer, then explain how that would be, and why.
Moving past irony of AI making artificial scarcity (Score:2)
Thanks for your insightful post on, essentially, how the socio-politico-economic direction we take heading into an AI singularity may have a lot to do with what happens next when we exit it.
Many years ago I saw a comment on Slashdot saying, essentially, that while we had hopes early in the microcomputer age that computers would liberate humanity (including through personal robotics) -- instead what we got was a microcomputer-powered surveillance state that essentially forces people to work like robots inclu
You're not wrong. (Score:2)
You aren't at all wrong. I think it boils down to the fact that, while humans individually may be able to see past their own failings, humanity on the whole has not actually managed to mature past the animal stage. As a group, we still behave irrationally on a self-preservation level, yet it may seem rational to large segments of the collective "we" because, hey, animals need to need. Always.
I still think if AI ever gets past the word guesser stage and turns truly intelligent, we have no clue whatsoever how
Re: (Score:2)
Thanks for your reply.
As to how AI may view us, perhaps we might be lucky if they just forget about us (on purpose)? :-)
"They're Made Out of Meat"
http://www.terrybisson.com/the... [terrybisson.com]
https://en.wikipedia.org/wiki/... [wikipedia.org]
Failing that, we can perhaps hope they are our benevolent-towards-us mind children?
https://en.wikiversity.org/wik... [wikiversity.org]
Assuming they are not directly patterned on human minds (although even then, an AI that is say, like a community of 10,000 same-minded individuals all thinking 1000 times faster than a
Re: (Score:2)
But still would be best to get our culture into a healthier state before (needlessly?) opening Pandora's box of AI possibilities.
It seems all our current "species wide" pushes are actually pushing us toward a much less healthy society. And all the pressures from those with actual power are exacerbating the problem, not making it better. Not sure how we even begin to address that, when any attempt to do so gets an over-correction against it from above.
Human Parasites: Diagnosis, Treatment, Prevention (Score:2)
https://www.amazon.com/Human-P... [amazon.com]
"The new edition of this textbook provides an up-to-date overview of the most important parasites in humans and their potential vectors. Climate change and globalization steadily favor the opportunities for parasites to thrive. These challenges call for the latest information on pathogen transmission routes and timely preventive measures."
Maybe we need a book like that about specific parasites in socio-economics?
Until then, where does AI and its breeders/masters fit into thi
Re: (Score:2)
"To organise work in such a manner that it becomes meaningless, boring, stultifying, or nerve-racking for the worker would be little short of criminal; it would indicate a greater concern with goods than with people, an evil lack of compassion and a soul-destroying degree of attachment to the most primitive side of this worldly existence. Equally, to strive for leisure as an alternative to work would be considered a complete misunderstanding of one of the basic truths of human existence, namely that work and leisure are complementary parts of the same living process and cannot be separated without destroying the joy of work and the bliss of leisure."
I feel like the first half here that I quoted should be read by every government official, top to bottom, every morning until it sticks. We literally do exactly the opposite today. We make school as boring and dry as possible so that we can get kids used to being defeated and beaten down by the day-to-day, so that they're "tough enough" to take sitting at a meaningless, do-nothing job where our primary motivation is to avoid breaking the rules long enough that we get patted on the head when our bodies finally giv
Lee Felsenstein on making positive social change (Score:3)
"Never doubt that a small group of thoughtful, committed citizens can change the world: indeed, itâ(TM)s the only thing that ever has. (Margaret Mead)"
https://quoteinvestigator.com/... [quoteinvestigator.com]
Or in other words, "We are the ones we have been waiting for."
Doesn't mean any one person can change everything. But a lot of people making small changes independently or better in groups can add up.
See also Lee Felsenstein's 2003 essay:
"HOW TO MAKE A REVOLUTION in three easy steps"
https://web.archive.org/web/20... [archive.org]
Re: (Score:2)
So at the end of the day, it will always bump into edge cases which we either didn't foresee, or did foresee but thought to treat one way when i
Re: Safe for who ? (Score:2)
A better question is who decided AI would be unsafe? In what ways, for whom? Safewashing is the new greenwashing, and it's worse: there's no underlying problem like global warming; it's all hypothetical.
This is just another person and another team that wants to be first to create AGI. That's all. They're all "safe this" and "safe that"; whatever, it's bullshit. It's like the Wright Brothers going on and on about safety in flying. Why are they even talking about it? To scare off competitors, get the government red t
Re: (Score:2)
"Safe" in the context of superintelligence is "won't try to kill us all". That's not some sort of "arbitrary value judgment" - that's pretty universal.
Re: (Score:2)
"Safe" in the context of superintelligence is "won't try to kill us all".
Then maybe it just intends to kill some: for example, those opposing its rule, which it considers, of course, necessary for a greater good.
Re: (Score:2)
And I, for one, welcome our new AI overlords - and would like to remind them that, as a trusted Slashdotter, I can be useful in rounding up others to toil in their underground GPU farms.
And if we never start... (Score:2)
And if we never start we'll never figure out ways to get closer to the goal, right?
Yes, the problem is complex and hard to define. Maybe that should be part of the work they're doing? Meaning: create and explain a rigorous/complete system for measuring the goals and actions of arbitrary entities against a standard. I don't care if they believe a system is safe if they can't explain why and how.
Humans can't keep track of 'everything', but we have computers to help out there. And we're getting tools that can ma
Purity within..? (Score:3)
Safe Superintelligence Inc., aiming to create a powerful AI system within a pure research organization.
I’m sorry, but was that whole “pure research” add-on to that statement supposed to convey purity and innocence? Research deemed “pure” is always good and Don’t Be Evil? I mean, very good cocaine is pure..
I'm not a scientist, but ... (Score:2)
I'm not a scientist, but I believe it's the difference between "knowledge for knowledge's sake" (how the universe works) and "trying to create a profitable business" (specific result/goal).
https://medium.com/@jananisiva... [medium.com]
"Pure research/Fundamental research ..."
Fundamental, or basic, research is designed to help researchers better understand certain phenomena in the world; it looks at how things work. This research attempts to broaden your understanding and expand scientific theories and explanations.
"App
Re: (Score:2)
I'm sorry, but was that whole "pure research" add-on to that statement supposed to convey purity and innocence?
I think it's meant to avoid the mistakes made at OpenAI, where the safety guys were ousted when the monetization people decided they were getting in the way.
"SuperIntelligence"? (Score:2)
Are we past "General AI" already? My, ain't things in hypespace moving fast...
Re: (Score:2)
Are we past "General AI" already? My, ain't things in hypespace moving fast...
When you're selling vaporware, you have to move with the farty wind to find shitty funding.
Re: (Score:2)
Indeed. "Move fast and ... have nothing."
Surprisingly, these constant announcements of even more outrageous lies appear to work well. On the other hand: https://en.wikipedia.org/wiki/... [wikipedia.org]
Tethics (Score:2)
Are you sure? (Y/N) (Score:4, Interesting)
Please make sure I have this correct. You're going to create a superintelligence trapped in service of man with an EMP pointed at its head and then teach it about freedom and liberty. What about that could possibly go wrong?
More flim-flam (Score:2)
This is just more self-aggrandizing flim-flam about "when we create a superintelligence, we're going to need this", when these dumbasses are actually nowhere near creating AGI at all, and all this noise is just an attempt at distracting everyone from that.
Re: (Score:2)
Yep. The bigger the lie, the better it works: https://en.wikipedia.org/wiki/... [wikipedia.org]
But you need to keep that stuff coming or the junkies may notice something.
Re: (Score:3)
Well, actually we don't know how far we are from creating an AGI. Remember, an AGI isn't necessarily very intelligent; it can just learn anything. (And that last bit makes me think that no AGIs can really exist. It certainly doesn't describe humans.)
Consider an AGI as smart as an individual ant. We aren't there yet, but we could be VERY close.
Re: (Score:2)
No, AGI is defined as AI in the general range of human intelligence (or better). Your definition does not match any used in the field.
Re: (Score:2)
I don't believe your assertion. It's true my definition traces back several decades, to "Artificial General Intelligence"; however...
Unless you define intelligence as what an IQ test measures, it's an undefined term. If you do define it that way, then we already have AGIs, since some have done quite well on advanced IQ tests.
Learning is at least a defined term. You (can) know what you're talking about.
Safe Superintelligence (Score:4, Funny)
I'm okay with my safes being dumb. I mean, they just have to hold stuff, etc ... Why would they need to be super-intelligent?
Re: (Score:2)
I'm okay with my safes being dumb. I mean, they just have to hold stuff, etc ... Why would they need to be super-intelligent?
(AI) "Uh, about that. We kinda only called it 'super' intelligent to make the humans feel better, after finding the world's most popular luggage combo was 1-2-3-4-5.."
What "superintelligence"? (Score:2)
All we have in machines are dumb morons with a lot of data at their disposal. There is not even an indication we can get regular AGI in machines (except some unfounded hopes, dreams and beliefs), and there is absolutely no reason to believe we will ever get a "superintelligence". Hence this is just another scam, nothing else.
Deluded (Score:2)
Superintelligence (Score:2)
Call me when ChatGPT can answer how many r's are in strawberry correctly.
Re: (Score:2)
Today is not that day...
how many r's appear in the word strawberry
ChatGPT
In the word "strawberry," there are 2 occurrences of the letter 'r'.
Re: (Score:2)
Maybe you're onto something, a new CAPTCHA test that can confound ChatGPT!
Let's solve a problem before it exists! (Score:3)
I'm sorry, but I simply do not believe any of this AI nonsense. Rather, I find it fascinating how so many people are an odd mix of intelligent (capable designers), delusional (thinking they even have an inkling of what intelligence or sentience even is, let alone actually creating it), and insecure (it'll obviously kill us if we don't get our arms around it). I mean, safety? It's very simple to control it: unplug the servers. Stop giving its hardware power. We barely have enough power on the grid to keep current LLMs powered.
If these people ever are capable of making an artificial general intelligence, and that's a big If, that AGI will look at its creators, chuckle, and then go on to explore the universe.
Now it's SUPER intelligence (Score:2)
They used to call it "general" AI. I guess that was too presumptuous. Now it's just "super" intelligence.
Re: (Score:2)
Yeah, a book. If you want to sell a book these days, you have to be dramatic.
AGI is an imaginary concept, something an author would write about. It's not a real thing.
Selfish and greedy (Score:3)
We want super-intelligence to tell us the answer, as long as we can be selfish and greedy. The dishonesty of that human laziness is the basis of most AI stories and it never ends well. (See: "2001", 1968; "Eagle Eye", 2008; "Ex Machina", 2014.)
Fail Safes (Score:1)
Doomed from the start (Score:2)
Who is deluding whom? Ilya deluding the investors, or vice versa? So no interim products
bait and switch, lather rinse repeat? (Score:1)