AI Businesses

OpenAI Has Quietly Changed Its 'Core Values' (semafor.com) 75

ChatGPT creator OpenAI quietly revised all of the "Core values" listed on its website in recent weeks, putting a greater emphasis on the development of AGI -- artificial general intelligence. From a report: CEO Sam Altman has described AGI as "the equivalent of a median human that you could hire as a co-worker." OpenAI's careers page previously listed six core values for its employees, according to a September 25 screenshot from the Internet Archive. They were Audacious, Thoughtful, Unpretentious, Impact-driven, Collaborative, and Growth-oriented. The same page now lists five values, with "AGI focus" being the first. "Anything that doesn't help with that is out of scope," the website reads. The others are Intense and scrappy, Scale, Make something people love, and Team spirit.
This discussion has been archived. No new comments can be posted.


  • Why use big words when diminutive suffice?

  • the equivalent of a median human that you could hire as a co-worker.

    I've had some doozies of co-workers, not sure we should be using that as a barometer. I guess you have to start low and work up from there.

    • Re: (Score:3, Insightful)

      by Nrrqshrr ( 1879148 )

      In the words of George Carlin: "Think of how stupid the average person is, and realize half of them are stupider than that."
      That's already half of humanity right there.

      • by Pascoea ( 968200 )
        One of his many truths.
      • by Anonymous Coward
        I always cringe at this statement. "The average person", meaning "average on the bell curve", is a range, not a point. And it's the widest range. Most people are average. A smaller number of people are slightly less than average or slightly more than average. That larger range, combined with the "average" range, makes up the huge "more-or-less average" range. A dwindling small number of people exist on the ranges at the edges of the curve, and of those, you don't really see the vastly unintellig
        • by Anonymous Coward

          If you assume a normal distribution, and insert "median" for "average", and ignore standard deviation...

          In other words.. Sheldon... it was a joke.

        • Re:Aiming low (Score:4, Insightful)

          by a5y ( 938871 ) on Friday October 13, 2023 @06:38AM (#63922231)

          I always cringe at this statement. "The average person", meaning...

          That reaction assumes Carlin didn't know the difference between a median and an average. What you should instead ask is whether a successful, quick-witted comedian knew that it was important to deliver a punchline succinctly, in a way that conveys instinctive understanding to the masses, and unimportant to specify things with a precision that would leave nothing to the pedant who'd dissect a joke like a frog.
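          As an aside on the statistics being argued over here: for a symmetric distribution the mean and the median coincide, so the line works either way. A minimal numerical sketch of that point; the IQ-style numbers (mean 100, standard deviation 15) and all names below are illustrative assumptions, not anything from the thread:

            # Sketch: for a (roughly) normal distribution, mean ~= median,
            # and about half the population falls below the mean.
            import random
            import statistics

            random.seed(42)
            scores = [random.gauss(100, 15) for _ in range(200_000)]

            mean_score = statistics.mean(scores)
            median_score = statistics.median(scores)
            below_mean = sum(s < mean_score for s in scores) / len(scores)

            print(f"mean   = {mean_score:.2f}")                   # ~100
            print(f"median = {median_score:.2f}")                 # ~100
            print(f"fraction below the mean = {below_mean:.3f}")  # ~0.5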

  • Is this a good idea? (Score:2, Interesting)

    by Viol8 ( 599362 )

    Something with general intelligence equivalent to a human is probably going to consider its own situation. And it might not like it. Of course this assumes some kind of emotion, and that intelligence and emotion are linked and/or one gives rise to the other. But given that the more intelligent an animal is, the richer an emotional inner life it seems to have, it can't be ruled out.

    • Anything that arose through evolution must have an instinct for self-preservation and reproduction, since otherwise its genes would have gone extinct.

      But I don't see why any of that would necessarily apply to an AI.

      • Everyone is blinded by the models and forgets about the dataset, the language corpus. What is a model if not language gradients stacked up? ... Language is an evolutionary process, it works on its own timeframe, much faster than biological evolution. You should consider AI the result of language evolution, and it is going to accelerate as AIs produce more language. Language evolves blindly, there is nobody in control, it is the result of billions of language agents.
        • For now, I think a great value of AI (for businesses) is basically the opposite. Almost by default, LLMs can strip text of individuality and character, and replace it with homogeneous, flat, grey corporate-speak. That sounds awful, I guess, but when you have to interact with other people who speak another language or can't even write coherently in your shared language, and you need to get work done with them anyway... that's value, for someone at least.

    • It is software, stop anthropomorphizing AI. It is not human, it will never be human. Coworker is a misnomer, it is more like a tool or an instrument. It does not eat or breathe or get tired, all of the human-ness is missing. Silicon does not feel anything.

      • by Rei ( 128717 )

        Stop anthropomorphizing AI.

        Yeah, it hates it when you do that.

        Coworker is a misnomer, it is more like a tool

        So... like plenty of coworkers I've had over the years? ;)

    • Our minds are an emergent property of our instincts and environment affecting a neural network implemented in meat.

      We don't know why consciousness emerges from it, so we have to consider that we might create a conscious mind once we have created a sufficiently complex, adaptive, interactive AI.

      That's not actually the part to worry about. What you want to worry about is designing your AI's basic drives such that it is happy to do what you want. Figure out that trick, and AI can only be a threat if directed

      • What you want to worry about is designing your AI's basic drives such that it is happy to do what you want.

        The corporate sector has loads of experience with designing people's drives so that they are happy to do what corporations want. Both advertising and influencing of school curricula are good examples. So maybe companies developing AI have a handle on designing its basic drives as well?

        • The corporate sector does not design people's drives, it studies them and designs ever more effective techniques for manipulating people using those drives.

          • The corporate sector does not design people's drives, it studies them and designs ever more effective techniques for manipulating people using those drives.

            Part of me wants to say "Fair point - I conflated drives and desires". Another part of me wants to say "That's a distinction without a difference". I'm leaning toward the latter, since operationally speaking, drives and desires manifest in pretty much the same ways.

            • What you're actually conflating is the creation of something with the use of something that already exists.

    • by HiThere ( 15173 )

      FWIW, I'm convinced that emotion and intelligence are linked, but that doesn't say anything about motivations. Evolution has tended to construct a large commonality there from spiders to people, but AGIs would be outside that domain.

      • by youn ( 1516637 )

        Traditionally, the thought was that there was one big intelligence for everything... people are more and more in favor of multiple types of intelligence.

        Is there a correlation? Sure... but small; there are some people with a very high IQ and low EQ, or very low IQ and high EQ. Cult leaders and politicians with very high EQ and not necessarily high IQ. Engineers with very low social game... but none of these are valid generalizations for either example.

        As far as motivation goes, that's indeed a different ball game.

  • by liqu1d ( 4349325 )
    ChatGPT is already better than a lot of "median" humans. Not in a promising-for-the-future style
  • by xack ( 5304745 ) on Thursday October 12, 2023 @03:17PM (#63921083)
    The real goal is any human; in fact, smarter than whole universities full of smart people. It's a runaway process: once AI learns how to identify what it doesn't know yet and interrogate humans smarter than it until it does, then we will quickly have a rapid improvement phase. Imagine going from a blank neural net to post-PhD knowledge in hours. That's what I see as true AGI.
    • by gweihir ( 88907 )

      That is, of course, complete nonsense. If you want some god-resembling entity, you should not look at technology to deliver it.

      • by Rei ( 128717 )

        Sounds like a religious viewpoint.

        Hint: we're not made of magic. We're the result of the physical processes of our brains. Which can be modeled.

        Note that neural nets don't attempt to model the exact behavior of each neuron, but rather to model the general macroscopic picture. E.g. they don't do rhythmic pulses (unless you use a stepwise activation function), but they resemble the mean activation caused by a neuron pulsing at a given frequency. ANNs don't create new connections or lose them as neurons do,
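        A minimal sketch of that "mean activation" abstraction, with a standard weighted-sum-plus-sigmoid unit standing in for an average firing rate; the weights, inputs, and function names are made up for illustration and are not any particular library's API:

          # Rate-coded artificial neuron: instead of simulating individual
          # spikes over time, map a weighted sum of inputs through a smooth
          # activation function and read the output as a normalized mean
          # firing rate in (0, 1).
          import numpy as np

          def sigmoid(x):
              return 1.0 / (1.0 + np.exp(-x))

          def rate_coded_neuron(inputs, weights, bias):
              return float(sigmoid(np.dot(weights, inputs) + bias))

          # Three presynaptic "rates" feeding one unit (illustrative values).
          inputs = np.array([0.2, 0.9, 0.5])
          weights = np.array([0.8, -0.4, 1.1])
          print(rate_coded_neuron(inputs, weights, bias=0.1))  # ~0.61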

        • by Anonymous Coward

          Sounds like a religious viewpoint.

          Not religion, pure unadulterated denial.

          No coincidence the ones here constantly reminding others of the depths of their idiocy are the same ones constantly defecating on all things AI. They have a god complex and can under no circumstance bring themselves to accept the fact they are not in fact special.

        • by gweihir ( 88907 )

          Sounds like a religious viewpoint.

          Nope. I am an engineer, a scientist and an atheist. I am just pointing out how ridiculous these expectations are. This whole "exponential learning" idea is deeply religious and by that deeply stupid.

          Incidentally, one thing morons like you constantly get wrong: Physicalism is also religion and not Science and hence also deeply stupid. Actual Science says nothing like your claims. It says the question is open. But like the religious fuck-ups, you just make something up and then claim it is truth. What a fail.

          • For her claim not to be true, the implication is that the brain is performing some operation which is not computable. The only thing that's definitively the latter is random noise, which is of course technically super-Turing, but one can add that to the computation model, and indeed to real computers.

            People are treating "the mind is special" as a null hypothesis merely because it has thousands of years of history. Coming to it fresh, the only reasonable null hypothesis is that the brain is a complex pie

            • by gweihir ( 88907 )

              For her claim to not be true, then the implication is that the brain is performing some operation which is not computable.

              Well, physicalists just take "the brain does everything" and "the brain is a computer" as dogma, with zero actual evidence for either. The second one is at least somewhat plausible, although it assumes physical matter is limited in the way the current standard model (which we know to be flawed) tells us. The first one is simply an assumption with no proof. The usual claim is "What else could it be?". That approach only works in a fully understood system, because you can do an elimination argument like this o

              • Well, physicalists just make "the brain does everything" and "the brain is a computer" as dogma, with zero actual evidence for either.

                I don't know what you mean by "does everything",

                And you misunderstand what people mean by saying "the brain is a computer", and lack knowledge of the literature when you refer to it as "dogma". Firstly, the equivalence of computing systems was proven mathematically by Church and Turing. Secondly, the laws of physics and how they relate to computation have in fact been studied

                • by gweihir ( 88907 )

                  Well, physicalists just make "the brain does everything" and "the brain is a computer" as dogma, with zero actual evidence for either.

                  I don't know what you mean by "does everything",

                  That is because you only pretend to be ignorant. You know perfectly well what I mean. After that intro, no value in reading the rest of your comment, you are likely just pushing your irrational dogma. Just like any other religious fundamentalist, just with a somewhat atypical religion.

                  • That is because you only pretend to be ignorant. You know perfectly well what I mean

                    Stop being a shit head. It's your fault if you can't write clearly. I have literally no idea what you mean.

                    blah blah pointless wankery blah blah

                    Then again, you're into the whole god-of-the-gaps thing with your god being "magic physics".

                    You're making an extraordinary claim that (a) the brain is outside of known physics despite being macroscopic and low energy and (b) beyond standard model physics can be used to construct a hy

                    • by gweihir ( 88907 )

                      That is because you only pretend to be ignorant. You know perfectly well what I mean

                      Stop being a shit head. It's your fault if you can't write clearly. I have literally no idea what you mean.

                      My apologies. Let me be more clear: I am convinced you are lying through your teeth about not understanding what I meant. Better?

                      Incidentally, I think and have written so right here on /. that "god-in-the-gaps" is just as stupid as any other religious ideas. The difference between you and me is that I am smart enough to see that Physicalism is just fundamentalist religion in disguise. Unlike you, I am opposed to the religious approach in general, no matter whether it tells me something I like or not. And th

                    • Let me be more clear: I am convinced you are lying through your teeth about not understanding what I meant. Better?

                      You already told me you're a wanker. No need to repeat yourself ad-nauseum.

                      I do not know what you mean by "does everything", but it does appear that your only way to "win" debates is to be aggressively unclear in your wording then act like a twat when someone asks for clarification. And then of course nearly skipping over all the points that torpedo your snivelling excuse for "reasoning" becaus

                    • Ps

                      You dialed up a minor misunderstanding all the way to 11. I'm happy to trade insults, it's quite fun; also happy to return to a normal chat if you stop being a dick head.

                    • by gweihir ( 88907 )

                      What I actually said is that the question of how consciousness works is _open_

                      No you actually didn't.

                      Actually I did. That is my whole point and has been my point all along. Yet you, like any fundamentalist, see me as trying to sneak in a religion in competition with yours. Which is something very much not true, but nicely shows where you actually come from: From a fundamentalist viewpoint which only pretends not to be religion.

                      My claim is that physics works very well (read as: better than we can test so far)

                      So? It does not cover every question. Which you seem to think it does, but an actually competent scientist would know it does not.

                      and maths works very well too (has not been proven inconsistent).

                      And that is just pure, unmitigated bullshit. Mathema

                    • by gweihir ( 88907 )

                      What can I say. I have a really low tolerance for fundamentalist bullshit.

                    • You have a hilarious brand of quasi-religious woo masquerading as science. You use this to paper over the huge gaps in your knowledge.

                      Actually I did.

                      Nope. Don't lie.

                      Mathematics does not apply to physical reality except as gross approximation which needs a lot of abstraction to apply at all.

                      And yet it works. Funny that.

                      In particular, they do not say that there are not-computable problems.

                      You are absolutely 100% unmitigated wrong here.

                      https://en.wikipedia.org/wiki/... [wikipedia.org]
                      https://en.wikipedia.org/wiki/... [wikipedia.org]

                      Thay sa

                    • Right, that's why you threw a tantrum when you wrote something unclear and I asked what you meant.

                      It's also why you are banging on about a god-of-the-gaps argument with woo physics as your deity.

    • AGI is a self-conscious AI that for all intents and purposes is sentient. That’s it. It understands “me, myself, and I”.

      What you are describing is the beginning stages of ASI, Artificial Super Intelligence, a la Colossus: The Forbin Project.

    • A median human that can self improve and never forget won't stay a median human for very long.
      Already, ChatGPT has more raw knowledge than any human alive.
      It just needs to learn to use that knowledge more intelligently.

  • by rsilvergun ( 571051 ) on Thursday October 12, 2023 @03:44PM (#63921129)
    and not employee. I don't hire my coworkers, the owner does.

    This is a statement to those owners: "You can replace your employees with my software"

    Can they? Probably not, but it's got them thinking about automation and they're now going top-down through the enterprise, automating everything they can.
    • by mjwx ( 966435 )

      and not employee. I don't hire my coworkers, the owner does.

      This is a statement to those owners: "You can replace your employees with my software"

      Can they? Probably not, but it's got them thinking about automation and they're now going top-down through the enterprise, automating everything they can.

      I suspect they're going to try.

      It's the same kind of MBA thinking that tells them "I can hire three people in India for one in the US/UK/EU" without taking into account that 3 people at that rate in India will just pass tickets around without actually fixing anything. They'll do it, pay themselves a nice fat bonus and jump ship before all the customers bleed out.

      I'm not against AI, even though what we have can't be called AI. A good automated phone system, even an IVR is going to be better than hiring

  • by gweihir ( 88907 ) on Thursday October 12, 2023 @03:49PM (#63921139)

    So they can rake in more money. Obviously, AGI is completely out of reach at this time. Nobody competent has the slightest clue whether or how it could be done. Obviously, the usual clueless ones think it is of course possible, and hence this lie-by-misdirection is nicely fueling their fantasies.

    • If it has happened in humans then it's possible to happen in our technology, eventually. It's not something we're currently geared to make happen, though, because we lack the hardware and software to mimic our own consciousness, and the patience to do so without expecting to see a payday out of it.

      The path to Artificial General Intelligence will likely require computable storage, exponential parallelization, a new paradigm for process interaction, cyclic input/output interaction, and modeling creation as

      • by gweihir ( 88907 )

        Nope. That is unproven conjecture. It comes from a quasi-religious viewpoint that is just as stupid as a proper religious one. Physicalism is religion in a somewhat unusual camouflage, nothing more. The actual scientific state-of-the art is that nobody has any clue how humans (well, some of them) generate interface behavior that looks like general intelligence and the question is completely open.

        • Don't be an idiot, we know exactly how -- neurons. Only an idiot thinks a person without neurons is able to think. None of your magic woo-woo can fix a brain injury.

        • Looking at the post you're replying to, I see "if", "it's possible", "will likely". Zimzat is not describing a religious or dogmatic belief, but only considering what might be possible. You seem to be the one with a religious viewpoint, just in the opposite direction as you accuse Zimzat of having.

    • into your boss's head. Get them thinking about automation in general. Then you can sell them what you've got that replaces not an entire employee but maybe 1/3 of one. If you've got 10 workers you make 30% more productive you can fire 3 of them. Hell, fire 4 and make the remaining 6 compete to see who gets to keep their job and not be homeless.
      • Automate 30% of the easy, boilerplate work, and you got to do more of the hard work instead? Yeah, that's gonna fly well. Have you considered that AI increases pressure on employees to create even more stuff instead of taking their work away?
      • by gweihir ( 88907 )

        Indeed. But this time around this is at best going to work for simplistic, no-decision white-collar work. And I am beginning to doubt it can even do that with the required reliability.

  • by Hoi Polloi ( 522990 ) on Thursday October 12, 2023 @03:52PM (#63921145) Journal

    Can it also replace executives?

    "How do I increase my bonus?"
    "Lay off more employees."

  • by qeveren ( 318805 ) on Thursday October 12, 2023 @04:17PM (#63921193)
    Now we can have full-on artificial people and not have to bother with those pesky rights.
  • The median human is a knuckle-dragging idiot but I suppose you need idiots to make a product that will push millions of other idiots out of work!
  • by toxonix ( 1793960 ) on Thursday October 12, 2023 @04:30PM (#63921213)
    Their core values sucked. Now they suck even more.
    Audacious - take bold risks, just don't break anything or cause us to lose any money
    Thoughtful - mokay.
    Unpretentious - sounds like being pretentious was a problem before
    Impact-driven - wtf does this mean. I have an impact driver for removing stubborn screws. You hit it really hard, it turns a little. Kinda like a manual impact wrench.
    Collaborative - no shit, team spirit?
    Growth-oriented - like no other corporation thought of this.
    AGI focus - ok we did the LLM thing, now we want Replicants. Drop everything and work on that.
    Intense and scrappy - Doing more Adderall is a really bad idea. Adding cocaine is even worse. Let's stick to coffee.
    Scale - shoulda thought of that earlier
    Make something people love - love an AI? I could love a median humanoid I guess.
    Team Spirit - he just re-named collaborative to make it sound more scrappy.
    • by HiThere ( 15173 )

      Actually, people are quite willing to love things that are rather limited. Depending, of course, on exactly how you define it. (For any definition that will still be true, but different definitions imply different meanings. Consider, e.g., the "real doll".)

  • Isn't there a name for that? Slive? Slove?
  • by Chelloveck ( 14643 ) on Thursday October 12, 2023 @05:39PM (#63921361)
    Does anyone here think that a company's statement of core values means anything? Anything at all? If so, please tell me what that string of adjectives from the original statement said about the company. At the very least, the new set specifies something concrete: They're going to focus on AGI. More power to them. The rest of it is marketing nonsense.
  • many people criticize LLMs for simply mimicking the complex patterns they are exposed to over the course of their training.

    on the other hand, this already puts them ahead of quite a few people...

  • It still can't maintain a codebase. They're still living in fantasy land. Until it can update my 20-year-old game code to use all modern API calls, I do not care one bit about what AI can and cannot do.
  • Honestly, I hope this AI bubble pops quickly. I know it won't, there's still too much cash sloshing around. But, honestly, human intelligence isn't really hard to find. It's frankly wasted.

  • "the equivalent of a median human that you could hire as a co-worker."

    Yes, don't ask him about Religion, Politics, Personal finances, Health issues, Weight and body image, Personal relationships and breakups, Divorce, Controversial or polarizing current events, Personal insecurities, Sensitive family matters, Gossip or rumors, Salary and income, Criticizing someone's children or parenting, Age and aging, Personal beliefs and values, Mental health struggles, Traumatic experiences, Criticizing or judging som

  • It ain't like they used to say "don't be evil"
