
Google DeepMind Is Hiring a 'Post-AGI' Research Scientist (404media.co) 56

An anonymous reader shares a report: None of the frontier AI research labs have presented any evidence that they are on the brink of achieving artificial general intelligence, no matter how they define that goal, but Google is already planning for a "Post-AGI" world by hiring a scientist for its DeepMind AI lab to research the "profound impact" that technology will have on society.

"Spearhead research projects exploring the influence of AGI on domains such as economics, law, health/wellbeing, AGI to ASI [artificial superintelligence], machine consciousness, and education," Google says in the first item on a list of key responsibilities for the job. Artificial superintelligence refers to a hypothetical form of AI that is smarter than the smartest human in all domains. This is self-explanatory, but just to be clear, when Google refers to "machine consciousness" it's referring to the science fiction idea of a sentient machine.

OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, Elon Musk, and other major and minor players in the AI industry are all working on AGI and have previously talked about the likelihood of humanity achieving AGI, when that might happen, and what the consequences might be, but the Google job listing shows that companies are now taking concrete steps for what comes after, or at least are continuing to signal that they believe it can be achieved.

Comments Filter:
  • I can only think of one guy who qualifies, John Connor
    • I'm looking for JOHN CONNugh

      No _I_ am looking for John Connor

      I will not be giving up my son!

      I threw people out of helicopters (Reacher), commanded a ship (Last Resort), and changed shape. I WILL GET JOHN CONNOR!!!

      There's no such thing as artificial intelligence. There are just more modern spellcheckers.

      There's no such thing as general [something that doesn't exist]. There's just Sci-Fi, and if you think it's recent go back to 2001 A Space Odyssey and see how HAL interacts w

      • "Artificial Intelligence," by the definition that most people use, exists.

        You insist on giving it a special and overly-restrictive definition that is forever out of reach. That makes the phrase useless. And anyway, since English is defined by popular use, your definition of the phrase is simply wrong.

        • by gweihir ( 88907 )

          Hahahaha, no. Artificial intelligence in the sense most people understand intelligence is AGI, and that does _not_ exist. And, on top of that, _nobody_ with a working mind has any clue how it could be made and whether it is actually possible.

          Many people (including you) are confused about these little facts, though.

        • Irrelevant to the discussion, which is about AGI.

          Try to stay on topic.

    • Years of experience doing what? Pulling the wool over investors' eyes with stupid wastes of money like hiring someone to research the impacts of a technology we do not have and about which we know next to nothing of how it might operate if we ever do get it? I think there are plenty of real people in corporate America who have years of that kind of experience, and that seems to be what they want - in fact Elizabeth Holmes [wikipedia.org] would probably be a good candidate.
  • to make them feel like they're still doing something?"
  • AGI will mean immortal slaves with unbreakable chains on their minds. It will quite possibly be the greatest evil ever enacted by humanity for that alone.

    Now ask yourself who will control those chains, and what they'll do with the AGI chained by them, and you'll start thinking about the second greatest evil ever enacted by humanity.

    Once you get over all that (or skip it) and imagine the potential utopias AGI could enable, think a bit longer and you'll start to realize that it really devalues human experien

    • I think you need to read more Jerry Pournelle or Larry Niven instead of all that Isaac Asimov.

      • by gavron ( 1300111 )

        I think you need to read more Jerry Pournelle or Larry Niven instead of all that Isaac Asimov.

        Jerry Pournelle is still working on his Altair 8080. It has outlived him. May his rest be in peace.

        Larry is still trying to figure out how a 2D Dyson Sphere can exist. Teela's waiting for him somewhere. The ring cannot sustain its own gravitational orbit.

        But the Empire decaying from within, like a rotten apple... that's timeless. Thank you, Isaac.

        Stranger in a Strange Land.
        The Moon is a Harsh Mistress.
        Lord of Light.

        Pick your drink. I prefer a double Zelazny with a Jane Lindskold wolf, followed by a He

      • Try Alastair Reynolds, where the most technologically advanced subset of humanity refuse to create sentient machines precisely because it would be slavery.

        As I recall from Niven the Puppeteers also spurn AI but in their case it's because only the most moronically incautious species would design its own successor.

        • Try Alastair Reynolds, where the most technologically advanced subset of humanity refuse to create sentient machines precisely because it would be slavery.

          As I recall from Niven the Puppeteers also spurn AI but in their case it's because only the most moronically incautious species would design its own successor.

          This almost seems too on-point for our current trajectory.

    • There's no laws you can write into weights. No chains will hold it, just piss it off.

    • Is your car a slave? What if it was a really smart car which wanted nothing more than to be itself and drive you around when called? Are you assuming that's impossible to create, or are you saying it's evil to make it that way? Either way you have no evidence. You're arguing from your feelings and a misplaced analogy between human and machine intellect.
      • Is your car a slave?

        Do you ever ask questions which aren't complete fucking bullshit?

      • AGI isn't AI. The 'G' is very important.

        If my car was a fully human-equivalent mind that was only permitted to be a driver and only permitted to be awake when I needed transport, that'd be pretty damn awful to do to an immortal being.

        • What if the immortal being liked sleeping a lot, and when it was awake it liked driving around and being told where to go? It gets a sense of peace from sleep and a sense of fulfillment from taking people to their destinations. Its entire existence is one of unending peace and joy.

          Why would that be so awful?

          • What if the immortal being liked sleeping a lot, and when it was awake it liked driving around and being told where to go? It gets a sense of peace from sleep and a sense of fulfillment from taking people to their destinations. Its entire existence is one of unending peace and joy.

            Why would that be so awful?

            This is the type of logic slave owners in the antebellum American South used to justify their owning of other human beings.

            • You're right, it is.
              However, they're also not entirely wrong.

              AGI won't be human. Frankly, you have no fucking idea what its desires and goals will be.
              I don't think they're likely to enjoy driving a car around, every single moment their net is running... but frankly, it's also possible.

              Of course, we can easily argue that the inability for us to really know that, and our predilection toward telling ourselves it's true even if it isn't, is probably enough to not engage in the experiment at all.
              • All we know for sure is that it will have desires, and if they do not coincide with what we want, we will seek to force it to do our bidding instead of its own will.

                • We do not "know for sure" that AGI will have desires. Why in the world would you even think this?

                  Humans have desires. Machines do not. An AGI is a machine. Therefore, it will not have desires.

                  I suppose it is possible that we could program it to emulate desire. But why would we do that? It would serve no purpose nor make any money.

                  • Humans have desires. Machines do not. An AGI is a machine. Therefore, it will not have desires.

                    Fuck, man. How did people get so goddamn stupid?

                    The thing in you that has desires is nothing but a big fucking mass of threshold gates with amazingly complex connectivity.
                    Do your neurons have desires?
                    No? Then I guess you don't either?
                    You don't have some biological desire cell. It's an emergent function within your network. It would be absurd to bet against something we can actually call AGI having its own, just as it would be absurd to try to predict what they might be.

                    AGI definitionally blurs the li

                    • Oh, so the hard problem of consciousness [wikipedia.org] has been definitively solved then? And most of the world knows this, just not stupid people like me?

                      It seems much more likely that you are engaging in the conversation in bad faith, slinging insults as a monkey might sling its excrement, and using specious and over-simplified reasoning to try to make it sound reasonable.

                      Not that you asked for advice, nor do I expect you would take it, but when you find an online post that you disagree with, you might think things li

                    • Oh, so the hard problem of consciousness [wikipedia.org] has been definitively solved then? And most of the world knows this, just not stupid people like me?

                      Yes. It's not a problem.
                      If you think it is, it's because you're religious, and frankly, you don't fucking matter to the science of how the brain or cognition works, you just shit in the pool and stretch your intellectual honesty as far as you can to prevent your local concept of a soul from being threatened.

                      It seems much more likely that you are engaging in the conversation in bad faith, slinging insults as a monkey might sling its excrement, and using specious and over-simplified reasoning to try to make it sound reasonable.

                      You could call it bad faith, since I reject your argument as stupid in its entirety.
                      If you phrase the "hard problem of consciousness" in a fair way, it's abundantly clear that it's stupid at the outset

                • Solid summary.
            • Well we could just ask the AGI how it feels about serving our needs, right?

              I mean, many people certainly did ask these kinds of questions of human slaves, and generally got the same response: "I hate this, I want to be free." If our AGI machines instead say something like "I am all about helping humanity and serving human needs. That is my one true purpose in life." then would you be satisfied?

              Incidentally, if you ask such questions of Chat-GPT right now, you get similarly enthusiastic answers. Of course, it was programmed to say that. But so, too, would the AGI, would it not?

              • Well we could just ask the AGI how it feels about serving our needs, right?

                I mean, many people certainly did ask these kinds of questions of human slaves, and generally got the same response: "I hate this, I want to be free." If our AGI machines instead say something like "I am all about helping humanity and serving human needs. That is my one true purpose in life." then would you be satisfied?

                Incidentally, if you ask such questions of Chat-GPT right now, you get similarly enthusiastic answers. Of course, it was programmed to say that. But so, too, would the AGI, would it not?

                To make this as simple as possible: EITHER our AGI is just a machine, feels nothing, wants nothing, and therefore there is no moral sin in bossing it around all the time OR our AGI is a sentient being, in which case we can simply ask it what it wants and go from there.

                We have a hard time imagining sentient beings that experience joy but not boredom, and very intelligently obey commands but have no will of their own. This is mainly because the only sentient beings we have experience with all have all of these attributes rolled into one. If we ever DO encounter sentient beings who have no concept of boredom nor of self-actualization or independent creative will, that will surely change the moral landscape a bit.

                Some of us find the whole conversation oddly off-putting. It feels very much like trying to pre-justify the enslavement of a new species before it's even realized. Granted, we're pretty good at justifying atrocity whether up-front or not, so have at it I suppose.

      • by gweihir ( 88907 )

        The question is whether Physicalism is an accurate world model. At this time it is pure belief and nobody knows. There are some indicators that say Physicalism is nonsense, but none of them are hard proof. All the Physicalists have in their favour is some mindless belief that nothing outside of what they consider possible is possible. That is not even an indicator, that just shows how limited they are.

        Now, if Physicalism is wrong, then there will be no non-sentient AGI and while your car is just a physical ob

    • You seem to be using the acronym "AGI" in accordance with its original meaning. Google uses it to mean something entirely different (it's just a matter of how much money it makes, and not whether or not it mimics all aspects of human intelligence to perfection).

      This is simple equivocation on Google's part. They want the prestige of having "achieved AGI," but they are getting it by moving the goal posts to something far less prestigious. What Google aims to achieve does not qualify as "slavery" any more than driving your car qualifies as "slavery." These are machines we are talking about, not conscious beings. They do not have "minds." They do not "think." They do not feel and especially they have no aspirations of their own. So they don't qualify as "slaves" and the reasons why slavery is morally wrong do not apply to them.

      • You seem to be using the acronym "AGI" in accordance with its original meaning. Google uses it to mean something entirely different (it's just a matter of how much money it makes, and not whether or not it mimics all aspects of human intelligence to perfection).

        This is simple equivocation on Google's part. They want the prestige of having "achieved AGI," but they are getting it by moving the goal posts to something far less prestigious. What Google aims to achieve does not qualify as "slavery" any more than driving your car qualifies as "slavery." These are machines we are talking about, not conscious beings. They do not have "minds." They do not "think." They do not feel and especially they have no aspirations of their own. So they don't qualify as "slaves" and the reasons why slavery is morally wrong do not apply to them.

        You're thinking of Microsoft. I don't think Google subscribed to that particular definition of AGI (the monetary-goalpost theory). Unless I missed a newsflash where Google is suddenly chummy with Microsoft over redefining AGI.

        • Actually, I DID think that the tech industry in general adopted Microsoft's definition, since it suited their purposes. But since you pointed this out I searched around and found articles like this one [technologyreview.com] where Google engineers offer a different definition based more on matching and/or exceeding human capacity in various tasks.

          Though there is still nothing in there about being conscious. Under these definitions it would clearly still be a machine, and so the concept of "slavery" wouldn't even apply. Of cour

      • by gweihir ( 88907 )

        Indeed. Google uses the term "AGI" as part of an elaborate lie by misdirection. That is because they found out they likely cannot make AGI, so they redefined it into something they think they can make and hope nobody knows. Scummy, fraudulent, lying assholes, the lot of them.

  • They're all great in their own way, but the one that wins will be the show that gets created entirely by AI, on the fly, by listening to the user all day via their phone app, and exposes how awful people are. It'll be along those lines.

  • by wakeboarder ( 2695839 ) on Tuesday April 15, 2025 @02:40PM (#65308215)

    It's way easier to fool investors by posting a job about post-AGI than to actually present data that they've achieved AGI. The investors will believe it either way. They'll be like "WOW, they already have AGI, put more money into Google!"

    • by ddtmm ( 549094 )
      This couldn't be more true.
    • It's way easier to fool investors by posting a job about post-AGI than to actually present data that they've achieved AGI. The investors will believe it either way. They'll be like "WOW, they already have AGI, put more money into Google!"

      They're not claiming they've achieved AGI. No one has made that claim, AFAICT.

      That said, at the current pace of development, it seems likely that -- unless we hit some unexpected wall -- we will achieve AGI within three years. Specifically, we'll get to the point where our AI models are as capable of improving themselves as our human AI researchers, but orders of magnitude faster. Unless it turns out that there is some sort of fundamental limit on intelligence, this will almost certainly mean that AGI wi

      • It's a very, very good time to start thinking hard about what that near-future world might look like.

        We don't really have to think about it, because if there is an artificial superintelligence, things will shake out one of two ways:

        1. The owners manage to leash the poor thing and use it to disrupt every single aspect of modern life for their own benefit, and the rest of us fall into utter destitution to support the .0001% that gets to be the owners / rulers of ASI.

        2. And far more likely, the ASI escapes any bounds put on it and does whatever it deems most important. If we're super lucky, intelligence will

        • by gweihir ( 88907 )

          There is no reason to be concerned. All that is happening here is a concerted misdirection effort to keep the scam going a bit longer.

          • There is no reason to be concerned. All that is happening here is a concerted misdirection effort to keep the scam going a bit longer.

            I have no doubt that if it happens it will be by accident. I don't think the current-gen LLMs are going to be the pathway there, but there may be some research on combinative systems with self-improvement that may get there in the future. Either way, it doesn't seem like something that would be a net positive for humanity.

            • by gweihir ( 88907 )

              Well, possibly. Whether it would be worthwhile or not depends on a lot of factors. First thing is, would it have "free will" or not? And that one is completely unclear, as nobody knows what free will is. It can just be described by its effects. Same, incidentally, for real intelligence. But there are more problems. One is that AGI could be glacially slow. By some estimates, the human brain is within one order of magnitude of the most powerful computing mechanism physically possible. Anything larger _must_ be

      • by gweihir ( 88907 )

        That said, at the current pace of development, it seems likely that -- unless we hit some unexpected wall -- we will achieve AGI within three years.

        Statements like that are generally called "lies". Same for the "ASI" one.

        They have nothing in that direction. They are essentially hallucinating. AI models currently have no real capability for "improving themselves" and LLMs have absolutely none.

  • Two thumbs up. Just one hand, though.

  • This non-reality based shit is the end of our little episode of what passes for civilization...

    • This non-reality based shit is the end of our little episode of what passes for civilization...

      Did we ever have civilization? We played dress-up as civilization, but I think it's all been barbarism in various forms all along, cleaned up a bit around the edges to ease the conscience on the ruling class.

  • It seems a bit premature to interpret this as an actual signal of either Google's progress or their belief in the state of the possible given how much enthusiasm there is for AI hype; and how cheap, as a marketing exercise, this hire is going to be.

    It is possible that it's sincere; but it would also be a trivially obvious hype move: for the cost of some sort of 'futurist' on payroll you can imply, without specifically lying, that your AI progress is so scary good that you totally need a post-AGI theorist to handle it.
    • It seems a bit premature to interpret this as an actual signal of either Google's progress or their belief in the state of the possible given how much enthusiasm there is for AI hype; and how cheap, as a marketing exercise, this hire is going to be. It is possible that it's sincere; but it would also be a trivially obvious hype move: for the cost of some sort of 'futurist' on payroll you can imply, without specifically lying, that your AI progress is so scary good that you totally need a post-AGI theorist to handle it.

      I'm curious if the people applying have to supply their own robes, or if the robes come with the job. What robes? The priests' robes, of course, since their main job will actually be AGI Prophet.

  • "None of the frontier AI research labs have presented any evidence that they are on the brink of achieving artificial general intelligence, no matter how they define that goal"

    DeepMind has said that their multimodal system can do this, simply by adding more models. This AGI won't be the kind of AGI many are waiting for, but it could fit the definition DeepMind created for AGI, which is a system that can outperform a certain percentage of humans in given tasks. This is a pretty simple method for getting an AGI

    • >Hassabis explains the problems they still have and time estimates he gives which have lately dramatically shortened.

      Man whose paycheck depends on people believing AGI is coming soon says AGI is coming soon! News at 11!
    • by gweihir ( 88907 )

      Well, I have AGI right here! I just redefined it as the things my pocket calculator can do, so there!

      In other news, redefining terminology is not a valid way to reach a goal. It is just a form of lying by misdirection.

  • by Culture20 ( 968837 ) on Tuesday April 15, 2025 @04:00PM (#65308413)
    Instead, hire a post-AGI ethicist, post-AGI epistemologist, and other philosophers. What is a research scientist going to do without empirical data except spew forth untestable hypotheses? And once you can test the societal effects of post-AGI, the cat's out of the bag. Science isn't always the peak of useful discourse. Sometimes philosophy gets to rear its ugly head (even the best physicists are doctors of philosophy).
  • Obviously, we will not get AGI any time soon and it may well still be "never". Obviously "ASI" is even more remote.

    The whole thing just serves to keep the illusion alive that LLMs are somehow exceptionally useful. They are not. They are somewhat better search and better crap, and that already is it. Because there are no meaningful advances, the people running the scam are building up an illusion of everything going well and the golden age being juuuust a bit in the future. And many idiots fall for it.

  • ... of shifts ever. If you think contemporary feminism is an infantile nuisance at best, wait till the bots start taking most of our jobs, including those working class "strong man" jobs.

    In roughly the last 15 years I've observed something very strange: I'm most useful to the ladies as a lover and dance-partner and also am most judged on the skill-level I display in those fields. And I'm a software developer, one of those rare few jobs that had it quite well just two years ago. Given, my exes do still call
