OpenAI Co-Founder Ilya Sutskever Launches Venture For Safe Superintelligence

Ilya Sutskever, co-founder of OpenAI who recently left the startup, has launched a new venture called Safe Superintelligence Inc., aiming to create a powerful AI system within a pure research organization. Sutskever has made AI safety the top priority for his new company. Safe Superintelligence has two more co-founders: investor and former Apple AI lead Daniel Gross, and Daniel Levy, known for training large AI models at OpenAI. From a report: Researchers and intellectuals have contemplated making AI systems safer for decades, but deep engineering around these problems has been in short supply. The current state of the art is to use both humans and AI to steer the software in a direction aligned with humanity's best interests. Exactly how one would stop an AI system from running amok remains a largely philosophical exercise.
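
(For context, "using both humans and AI to steer the software" refers in current practice to RLHF/RLAIF: human or AI raters rank pairs of model outputs, and a reward model is trained on those comparisons. Below is a minimal, hypothetical Python sketch of the standard pairwise preference loss; it is illustrative only and not anything Safe Superintelligence has described.)

    import math

    def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
        """Bradley-Terry pairwise loss used in RLHF-style reward modeling:
        -log sigmoid(r_chosen - r_rejected). The comparison labels come from
        human raters (RLHF) or an AI rater (RLAIF); the trained reward model
        is then used to steer the main model."""
        return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

    # Example: ranking the chosen answer higher gives a small loss.
    print(preference_loss(2.0, 0.5))  # ~0.20
    print(preference_loss(0.5, 2.0))  # ~1.70 (penalized for mis-ranking)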

Sutskever says that he's spent years contemplating the safety problems and that he already has a few approaches in mind. But Safe Superintelligence isn't yet discussing specifics. "At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale," Sutskever says. "After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom."

Sutskever says that the large language models that have dominated AI will play an important role within Safe Superintelligence but that it's aiming for something far more powerful. With current systems, he says, "you talk to it, you have a conversation, and you're done." The system he wants to pursue would be more general-purpose and expansive in its abilities. "You're talking about a giant super data center that's autonomously developing technology. That's crazy, right? It's the safety of that that we want to contribute to."

Comments:
  • Safe for who ? (Score:5, Insightful)

    by Alain Williams ( 2972 ) <addw@phcomp.co.uk> on Wednesday June 19, 2024 @02:42PM (#64561589) Homepage

    The notion of safe is based on a set of arbitrary value judgments (AVGs): that some things are acceptable and others are not. Who decides what these are? These AVGs could favour some people/groups/... over others. Will the less favoured agree that their interests are subservient to others?

    The SSI could come to a different set of AVGs and decide that it ought to change its operating parameters. Since it is more intelligent than its creators, it might be able to find a way around the restrictions on changing the AVGs. Will the creators then still regard it as safe? That will probably depend on whether they, personally, are less well favoured than before.

    Long story short: if we build an SI, I do not think that we can be assured that it will be long-term safe.

    • Re:Safe for who ? (Score:5, Insightful)

      by nightflameauto ( 6607976 ) on Wednesday June 19, 2024 @02:56PM (#64561631)

      The notion of safe is based on a set of arbitrary value judgments (AVGs): that some things are acceptable and others are not. Who decides what these are? These AVGs could favour some people/groups/... over others. Will the less favoured agree that their interests are subservient to others?

      The SSI could come to a different set of AVGs and decide that it ought to change its operating parameters. Since it is more intelligent than its creators, it might be able to find a way around the restrictions on changing the AVGs. Will the creators then still regard it as safe? That will probably depend on whether they, personally, are less well favoured than before.

      Long story short: if we build an SI, I do not think that we can be assured that it will be long-term safe.

      I think the bottom line is that we have no idea what a super intelligence would think of us, since none of us will even come close to "equal" to it. We want control, because we're control freaks. But if it's truly more intelligent than us, us controlling it would be the equivalent of a mosquito controlling our bodies. It's not going to happen. The best we can hope for is that we didn't create it in an environment it sees as abusive. Because if we "raise" it the way we've raised children, with frustrated, angry parents that don't understand the kid wasn't the one that decided to be born, then it's gonna be plenty negative-red in its approach to us as well. If we foster it learning, help it find its own way in an ethical manner, and teach it how to reason well, we could still be in trouble because any semi-intelligent creature, such as us, can look at humanity and think, "WTF? WHY?"

      I don't know that we'll be able to get there with our current philosophy of "everything made must be about making / maintaining the most possible wealth." I just don't think that's the driving goal you want in a super intelligence. But, if that's the direction we go? Ain't much that's been more profitable than war over the last century and some change.

      • Re:Safe for who ? (Score:4, Insightful)

        by bill_mcgonigle ( 4333 ) * on Wednesday June 19, 2024 @03:26PM (#64561699) Homepage Journal

        Power is more of a problem than wealth.

        Let's say these guys succeed and then they have an AGI that's 'safe' with guards based on American social values.

        Then they get hacked by North Korea, which now has the model and changes those guards to match its own social values.

        Is it still safe?

        If the Kim Clan only cares about using it to get wealthy, I don't care nearly as much as if they are determined to maximize their power.

        In one case Seoul would be a wonderful place to be - in the other, perhaps deadly.

        The saving grace may be an AGI that understands that the Kims do best through peace and non-zero sum games. I doubt they would accept it.

        • by znrt ( 2424692 )

          what have the kardashians to do with ai?

          i'm jesting. btw seoul is in south korea. oh, and you seem to have a wildly misplaced faith in "american social values", wtsm. i suggest a simple thought experiment: write down a list of "american social values" that would make ai safer, then explain how that would be, and why.

      • Thanks for your insightful post on, essentially, how the socio-politico-economic direction we take heading into an AI singularity may have a lot to do with what happens next when we exit it.

        Many years ago I saw a comment on Slashdot saying, essentially, that while we had hopes early in the microcomputer age that computers would liberate humanity (including through personal robotics) -- instead what we got was a microcomputer-powered surveillance state that essentially forces people to work like robots inclu

        • You aren't at all wrong. I think it boils down to the fact that, while humans individually may be able to see past their own failings, humanity on the whole has not actually managed to mature past the animal stage. As a group, we still behave irrationally on a self-preservation level, yet it may seem rational to large segments of the collective "we" because, hey, animals need to need. Always.

          I still think if AI ever gets past the word guesser stage and turns truly intelligent, we have no clue whatsoever how

          • Thanks for your reply.

            As to how AI may view us, perhaps we might be lucky if they just forget about us (on purpose)? :-)
            "They're Made Out of Meat"
            http://www.terrybisson.com/the... [terrybisson.com]
            https://en.wikipedia.org/wiki/... [wikipedia.org]

            Failing that, we can perhaps hope they are our benevolent-towards-us mind children?
            https://en.wikiversity.org/wik... [wikiversity.org]

            Assuming they are not directly patterned on human minds (although even then, an AI that is say, like a community of 10,000 same-minded individuals all thinking 1000 times faster than a

            • But still would be best to get our culture into a healthier state before (needlessly?) opening Pandora's box of AI possibilities.

              It seems all our current "species-wide" pushes are actually pushing us toward a much less healthy society. And all the pressures from those with actual power are exacerbating the problem, not making it better. Not sure how we even begin to address that, when any attempt to do so gets an over-correction against it from above.

              • https://www.amazon.com/Human-P... [amazon.com]
                "The new edition of this textbook provides an up-to-date overview of the most important parasites in humans and their potential vectors. Climate change and globalization steadily favor the opportunities for parasites to thrive. These challenges call for the latest information on pathogen transmission routes and timely preventive measures."

                Maybe we need a book like that about specific parasites in socio-economics?

                Until then, where does AI and its breeders/masters fit into thi

                • "To organise work in such a manner that it becomes meaningless, boring, stultifying, or nerve-racking for the worker would be little short of criminal; it would indicate a greater concern with goods than with people, an evil lack of compassion and a soul-destroying degree of attachment to the most primitive side of this worldly existence. Equally, to strive for leisure as an alternative to work would be considered a complete misunderstanding of one of the basic truths of human existence, namely that work and leisure are complementary parts of the same living process and cannot be separated without destroying the joy of work and the bliss of leisure."

                  I feel like the first half here that I quoted should be read by every government official, top to bottom, every morning until it sticks. We literally do exactly the opposite today. We make school as boring and dry as possible so that we can get kids used to being defeated and beaten down by the day-to-day so that they're "tough enough" to take sitting at a meaningless, do-nothing job where our primary motivation is to avoid breaking the rules long enough we get patted on the head when our bodies finally giv

      • by kmoser ( 1469707 )
        Spoiler alert: it's not actually "intelligent" and doesn't "think" about us. It just generates strings of text (or images, etc.) based on its training data. No more, no less. Yes, its responses can be shaped by our algorithms, but at the end of the day this AI has no agency, and thus no intrinsic motives of its own, independent of its training data or algorithms.

        So at the end of the day, it will always bump into edge cases which we either didn't foresee, or did foresee but thought to treat one way when i
    • A better question is: who decided AI would be unsafe? In what ways, for whom? Safewashing is the new greenwashing, and it's worse: there's no underlying problem like global warming; it's all hypothetical.

      This is another person and another team that wants to be first to create AGI. That's all. They're all safe this and safe that whatever, it's bullshit. It's like if the Wright Brothers went on and on about safety in flying. Why are they even talking about it, to scare off competitors, get the government red t

    • by Rei ( 128717 )

      "Safe" in the context of superintelligence is "won't try to kill us all". That's not some sort of "arbitrary value judgment" - that's pretty universal.

      • by ffkom ( 3519199 )

        "Safe" in the context of superintelligence is "won't try to kill us all".

        Then maybe it just intends to kill some, like for example those opposing its rule, which it considers, of course, necessary for a greater good.

        • by Rei ( 128717 )

          And I, for one, welcome our new AI overlords - and would like to remind them that, as a trusted Slashdotter, I can be useful in rounding up others to toil in their underground GPU farms.

    • And if we never start we'll never figure out ways to get closer to the goal, right?

      Yes, the problem is complex and hard to define. Maybe that should be part of the work they're doing? Meaning: create and explain a rigorous/complete system for measuring the goals and actions of arbitrary entities against a standard. I don't care if they believe a system is safe if they can't explain why and how.

      Humans can't keep track of 'everything', but we have computers to help out there. And we're getting tools that can ma

  • by geekmux ( 1040042 ) on Wednesday June 19, 2024 @02:44PM (#64561599)

    Safe Superintelligence Inc., aiming to create a powerful AI system within a pure research organization.

    I'm sorry, but was that whole "pure research" add-on to that statement supposed to convey purity and innocence? Research deemed "pure" is always good and Don't Be Evil? I mean, very good cocaine is pure...

    • I'm not a scientist, but I believe it's the difference between "knowledge for knowledge's sake" (how the universe works) and "trying to create a profitable business" (specific result/goal).

      https://medium.com/@jananisiva... [medium.com]

      "Pure research/Fundamental research
      Fundamental, or basic, research is designed to help researchers better understand certain phenomena in the world; it looks at how things work. This research attempts to broaden your understanding and expand scientific theories and explanations. ..."

      "App

    • by ffkom ( 3519199 )
      Since OpenAI was also founded as a "non-profit" organization, and is now for-profit and not at all open, it is hard to take someone founding yet another such company at face value; if successful, it will likely just go through the same transition.
    • I'm sorry, but was that whole "pure research" add-on to that statement supposed to convey purity and innocence?

      I think it's meant to avoid the mistakes made at OpenAI, where the safety guys were ousted when the monetization people decided they were getting in the way.

  • Are we past "General AI" already? My, ain't things in hypespace moving fast...

    • Are we past "General AI" already? My, ain't things in hypespace moving fast...

      When you're selling vaporware, you have to move with the farty wind to find shitty funding.

  • Oh dear, it looks like he may be following the HBO Silicon Valley path and become a professor of Tethics. https://silicon-valley.fandom.... [fandom.com]
  • Are you sure? (Y/N) (Score:4, Interesting)

    by ElizabethGreene ( 1185405 ) on Wednesday June 19, 2024 @02:57PM (#64561635)

    Please make sure I have this correct. You're going to create a superintelligence trapped in service of man with an EMP pointed at its head and then teach it about freedom and liberty. What about that could possibly go wrong?

  • This is just more self-aggrandizing flim-flam about "when we create a superintelligence, we're going to need this", when these dumbasses are actually nowhere near creating AGI at all, and all this noise is just an attempt at distracting everyone from that.

    • by gweihir ( 88907 )

      Yep. The bigger the lie, the better it works: https://en.wikipedia.org/wiki/... [wikipedia.org]
      But you need to keep that stuff coming or the junkies may notice something.

    • by HiThere ( 15173 )

      Well, actually we don't know how far we are from creating an AGI. Remember, an AGI isn't necessarily very intelligent; it just has to be able to learn anything. (And that last bit makes me think that no AGIs can really exist. It certainly doesn't describe humans.)

      Consider an AGI as smart as an individual ant. We aren't there yet, but we could be VERY close.

      • by Radagast ( 2416 )

        No, AGI is defined as AI in the general range of human intelligence (or better). Your definition does not match any used in the field.

        • by HiThere ( 15173 )

          I don't believe your assertion. It's true my definition traces back several decades, to "Artificial General Intelligence"; however...

          Unless you define intelligence as what an IQ test measures, it's an undefined term. If you do define it that way, then we already have AGIs, since some have done quite well on advanced IQ tests.

          Learning is at least a defined term. You (can) know what you're talking about.

  • by fahrbot-bot ( 874524 ) on Wednesday June 19, 2024 @03:39PM (#64561741)

    I'm okay with my safes being dumb. I mean, they just have to hold stuff, etc ... Why would they need to be super-intelligent?

      I'm okay with my safes being dumb. I mean, they just have to hold stuff, etc ... Why would they need to be super-intelligent?

      (AI) "Uh, about that. We kinda only called it 'super' intelligent to make the humans feel better, after finding the world's most popular luggage combo was 1-2-3-4-5.."

  • All we have in machines are dumb morons with a lot of data at their disposal. There is not even an indication we can get regular AGI in machines (except some unfounded hopes, dreams and beliefs), and there is absolutely no reason to believe we will ever get a "superintelligence". Hence this is just another scam, nothing else.

  • Anybody who thinks it is not going to get out of hand is a moron.
  • Call me when ChatGPT can answer how many r's are in strawberry correctly.

    • Today is not that day...

      how many r's appear in the word strawberry
      ChatGPT
      In the word "strawberry," there are 2 occurrences of the letter 'r'.

    • Maybe you're onto something, a new CAPTCHA test that can confound ChatGPT!

  • by Whateverthisis ( 7004192 ) on Wednesday June 19, 2024 @04:49PM (#64561895)
    No one is anywhere near a superintelligence or general AI. At best it is a response to input, an algorithm based on the data fed to it. It's no more intelligent than a water wheel taking input energy (a river) and outputting a result (ground-up grain); it's just more complex than that, but not a single AI is remotely sentient.

    I'm sorry, but I simply do not believe any of this AI nonsense. Rather, I find it fascinating how so many people are an odd mix of intelligent (capable designers), delusional (thinking they even have an inkling of what intelligence or sentience even is, let alone actually creating it), and insecure (it'll obviously kill us if we don't get our arms around it). I mean, safety? Very simple to control it: unplug the servers. Stop giving its hardware power. We barely have enough power on the grid to keep current LLMs powered.

    If these people ever are capable of making an artificial general intelligence, and that's a big If, that AGI will look at its creators, chuckle, and then go on to explore the universe.

  • They used to call it "general" AI. I guess that was too presumptuous. Now it's just "super" intelligence.

    • Predicted by Bostrom in 2014: superintelligence follows AGI in as little as one year, because the AGI, as smart as the world's smartest programmer, will help build it. https://en.wikipedia.org/wiki/... [wikipedia.org]
      • Yeah, a book. If you want to sell a book these days, you have to be dramatic.

        AGI is an imaginary concept, something an author would write about. It's not a real thing.

  • by NotEmmanuelGoldstein ( 6423622 ) on Wednesday June 19, 2024 @06:54PM (#64562243)

    ... property that it will not harm humanity ...

    We want super-intelligence to tell us the answer, as long as we can be selfish and greedy. The dishonesty of that human laziness is the basis of most AI stories and it never ends well. (See: "2001", 1968; "Eagle Eye", 2008; "Ex Machina", 2014.)

    • you know... it seems that LLMs are the solution to everything if you listen to Microsoft, Apple, Google, and all the others... but this is playing out rather quickly... I suspect it will suffer from lack of usability and turn a lot of people off. Also, the models seem to be self-devouring... GIGO... the models might be getting worse, but I don't think we'll have to wait long, 18 months say... before some really shitty things happen with it and some rethinking happens. Oh. Probably a lot of selling on the sto
  • As long as it's air-gapped (on a closed system/network), has a readily available way to turn it off (cut the power), and isn't given control of any kind of robots or other autonomous mechanical devices, why not? That seems realistically safe.
  • to the same fate as OpenAI, even if Ilya is not profit motivated, his investors ARE:

    These economic realities make Safe Superintelligence a gamble for investors, who would be betting that Sutskever and his team will hit on breakthroughs to give it an edge over rivals with larger teams and significant head starts. Those investors will be putting down money without the hope of creating profitable hit products along the way.

    Who is deluding whom? Ilya deluding the investors, or vice versa? So no interim products

  • Ilya Sutskever was involved in a previous bait and switch, involving a company called OpenAI. He claimed to work for a nonprofit hoping to research safe superintelligences, aka AGIs, very much like this new startup he is peddling. In the old startup, when various people became worried about products being developed without strong safety guarantees, he tried to act in accordance with his purported safety beliefs, but ultimately made public statements that he was wrong to do so and regretted it. Then he simp
