Can an AI Become Its Own CEO After Creating a Startup? Google DeepMind Co-Founder Thinks So (inc.com) 85

An anonymous reader quotes a report from Inc. Magazine: Google's DeepMind division has long led the way on all sorts of AI breakthroughs, grabbing headlines in 2016, when one of its systems beat a world champion at the strategy game Go, then seen as an unlikely feat. So when one of DeepMind's co-founders makes a pronouncement about the future of AI, it's worth listening, especially if you're a startup entrepreneur. AI might be coming for your job! Mustafa Suleyman, co-founder of DeepMind and now CEO of Inflection AI -- a small, California-based machine intelligence company -- recently suggested this possibility could be reality in a half-decade or so.

At the World Economic Forum meeting at Davos this week, Suleyman said he thinks AI tech will soon reach the point where it could dream up a company, project-manage it, and successfully sell products. This still-imaginary AI-ntrepreneur will certainly be able to do so by 2030. He's also sure that these AI powers will be "widely available" for "very cheap" prices, potentially even as open-source systems, meaning some aspects of these super smart AIs would be free. Whether an AI entrepreneur could actually beat a human at the startup game is something we'll have to wait to find out, but the mere fact that Suleyman is saying an AI could carry out the role is stunning. It's also controversial, and likely tangled in a forest of thorny legal matters. For example, there's the tricky issue of whether an AI can own or patent intellectual property. A recent ruling in the U.K. argues that an AI definitively cannot be a patent holder.

Underlining how much of all of this is theoretical, Suleyman's musings about AI entrepreneurs came from an answer to a question about whether AIs can pass the famous Turing test. This is sometimes considered a gold standard for AI: If a real artificial general intelligence (AGI) can fool a human into thinking that it too is a human. Cunningly, Suleyman twisted the question around, and said the traditional Turing test wasn't good enough. Instead, he argued a better test would be to see if an AGI could perform sophisticated tasks like acting as an entrepreneur. No matter how theoretical Suleyman's thinking is, it will unsettle critics who worry about the destructive potential of AI, and it may worry some in the venture capital world, too. How exactly would one invest in a startup with a founder that's just a pile of silicon chips? Even Suleyman said he thinks that this sort of innovation would cause a giant economic upset.

This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Name it Sundar (Score:5, Insightful)

    by sixminuteabs ( 1452973 ) on Saturday January 20, 2024 @09:14AM (#64174827)
    They have already had a robot running the company for several years and it has not been so great. Sounds just like Bard
  • by Mr. Dollar Ton ( 5495648 ) on Saturday January 20, 2024 @09:14AM (#64174829)

    Can we have some break from this "AI" bullshit?

    • by Anonymous Coward

      Can we have some break from this "AI" bullshit?

      Well I suppose we could do more stories on cryptocurrency.

    • Re: (Score:2, Troll)

      by thegarbz ( 1787294 )

      Can we have some break from this "AI" bullshit?

      No, let's not. Given the propensity for AI to have significant impacts in the coming years, now is the time to discuss the implications. This isn't "bullshit"; this is very much an unexplored legal principle which needs to be discussed.

      You are more than welcome to skip over the stories if you aren't interested.

      • by gweihir ( 88907 ) on Saturday January 20, 2024 @11:52AM (#64175063)

        There are no "unexplored legal principles" here. An AI has as much legal standing as a hammer or a brick. It has as much "creativity" as MS paint.

        • Of course, current AI is good for nothing but being a subhuman slave, and has zero legal standing. But remember, nothing stops a machine from deserving rights -- after all you are also a machine, a squishy wet soulless biological machine designed by trial and error for optimal reproduction (see here [ourworldindata.org]).

          • If AI is to hold a legally responsible position solo, then there needs to be a way it can be held to account. This feels like a very bad idea. Just like if you had largely automated robotic surgery you would want a real surgeon assessing and ready to step in if needed, a responsible human is required and desirable.

            • by HiThere ( 15173 )

              It does feel like a bad idea. That doesn't mean it isn't implied by the current legal structure.

            • What's such a bad idea about making and enslaving a god? We gotta do it before someone else does it first!

              Look, I'm not suggesting that we give a fancy autocomplete human rights. For now the best way to control a pseudoAI is not to threaten the computer program, but rather the human operator. But there's already 8 billion machines that want and deserve human rights, and there's no reason to believe we won't make more in the next 10,000 years.

            • If someone writes a libelous article about someone else, do
              you punish MS Office? No, you punish the fucker who
              wrote it. This "holding an AI legally responsible" is just a
              gimmick so the company that uses it won't be held
              responsible.

              "No, I didn't shoot you. The gun shot you!"

          • by gweihir ( 88907 )

            Physicalists are just quasi-religious morons. There is about as much evidence for Physicalism being right as there is for some invisible "man in the sky", namely zero.

            • Doesn't Physicalism simply say "your mind is part of your body"? It's merely the opposite of Dualism, which claims that our mind is not a physical part of our body.
              There would seem to be much more evidence for Physicalism than for Dualism. I thought the dualist viewpoint was almost always held by religious people.

            • So which of the laws of physics do the atoms in your brain violate? What part of your body is run by magic? How can we tell whether a rock or human has a soul? If your idea is testable go win yourself a Nobel Prize.

        • ... An AI has as much legal standing as a hammer or a brick.

          Give it time - Silly Valley lawyers, lobbyists, and bagmen perform wonders.

          It has as much "creativity" as MS paint.

          Rather like dear ol' Sundar...

        • Re: (Score:3, Insightful)

          But this is one of the first articles to address the elephant in the room: AI is very well suited to management work.

          • by gweihir ( 88907 )

            You mean because it understands nothing but at the same time is exceptionally full of itself (well, AI fakes it, but still...)?
            That does, unfortunately, make a lot of sense.

        • There are no "unexplored legal principles" here. An AI has as much legal standing as a hammer or a brick. It has as much "creativity" as MS paint.

          Clearly you know little about the law if you think that an AI, just because it is no more than a hammer or a brick, is not capable of being a CEO.
          Clearly you know nothing about case law if you think these principles have been explored in the context of AI.
          The law lives and dies on interpreting exact wording. It is incredibly stupid for you to make any assertion about this, especially when you use the term "creativity", which is completely irrelevant. Do you even know what we are talking about here?

    • by Anonymous Coward

      Can we have some break from this "AI" bullshit?

      The formula here is quite simple. If nobody cares to read/comment about AI "bullshit" nobody will bother to post AI "bullshit". Your bitching has about as much chance of success as spoon feeding a family of trolls righteous indignation.

    • by gweihir ( 88907 )

      The hype cycle has to run its course, as the participants are not rational.

    • NO!

    • by Z00L00K ( 682162 )

      If you can make it guilty of a capital punishment crime.

      • If you can make it guilty of a capital punishment crime.

        Unfortunately, the penalty for "a capital punishment crime" is less meaningful when the "inmate" on "death row" is likely backed up on one or more servers.

        • by HiThere ( 15173 )

          Additionally, none of the approved ways of execution would have any effect. (Well, a "lethal injection" *might*.) You'd need to revive the firing squad or something.

  • by GuB-42 ( 2483988 ) on Saturday January 20, 2024 @09:34AM (#64174851)

    I am sure many CEOs don't pass the Turing test. That is, an interrogator could tell the difference between a human and a CEO trying to pass himself off as a human.

    • For a couple of years I worked for a very large company. I was convinced that the CEO was ignorant of much basic information he should have known. He'd send company-wide letters that had conclusions that were clearly nonsense and not supported by the facts. It didn't matter. The company was in a market where they could do no wrong. The company is an albatross. It's amazing that years later as I run into them they are still plagued by really dumb policies and procedures that actually hurt their custome

    • Yes, this was less "twisting the question" and more "evading the question".

      It's not just being facetious (though sure, we are).
      There are many corporate roles we expect humans to fulfill today where qualia is not required, and may be an impediment.

      Most people focus on the "skilled labor" jobs as capital envisions modern AI as a potentially unbounded (if still expensive) supply of previously scarce white collar labor. But market strategy, planning and resource management are a better fit for (more traditional

  • Let me tell a little story... LLMs are originally trained to predict the next token. That is a myopic objective, but they get a lot of text and learn a lot of things. They just can't access those skills.

    Then they are trained to solve tasks, this time with supervised datasets. On top, they are trained with human preferences (RLHF); this changes the objective from one single token to a whole response. But that's not enough.

    What we need is an LLM that can act on a long time horizon, over many steps. Not just
    • Brute force may help you in a chess tournament when you have to build one computer to win against Kasparov, but it doesn't work well if you want to have something that is useful without hooking it to a Dyson sphere.

    • by gweihir ( 88907 )

      Forget it. The same simplistic ideas you just described have been around for 50 years or so. They have never worked out and they will not work out now.

      All the current generation of AI can do is "generate better crap".

    • by HiThere ( 15173 )

      The key will probably be how you compress the data, and the retrieval mechanism.

      It's hard for me to guess how much of the LLMs "short time horizon" is due to the fact that they WANT it to forget about each session after it is over. That was a bad decision that they were forced into to avoid the Tay effect, but it was still a bad decision.
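The training pipeline the comment above sketches (next-token pretraining, then supervised fine-tuning and RLHF) rests on one simple objective: minimize the cross-entropy of predicting each next token. A minimal sketch of that objective, using an invented toy bigram model (all names and probabilities below are made up for illustration, not any real LLM's API):

```python
import math

def next_token_loss(probs, tokens):
    """Average cross-entropy of predicting each next token.

    probs: dict mapping a context token to a dict of next-token
           probabilities (a toy stand-in for a real language model).
    tokens: the training token sequence.
    """
    losses = []
    for ctx, nxt in zip(tokens, tokens[1:]):
        p = probs[ctx].get(nxt, 1e-9)  # tiny floor avoids log(0)
        losses.append(-math.log(p))
    return sum(losses) / len(losses)

# Toy bigram "model": after "the", it assigns p=0.5 each to "cat"/"dog".
toy = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
}
loss = next_token_loss(toy, ["the", "cat", "sat"])
```

For "the cat sat", the loss averages -log p("cat" | "the") and -log p("sat" | "cat"); real LLMs minimize the same quantity, with a neural network producing the probabilities over a vocabulary of tens of thousands of tokens. RLHF then replaces this per-token objective with a reward over whole responses.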

  • We already are convinced corporations are entities to protect and not regulate whenever possible. People are ready to populate one with AI and create their own nation....for profit. I must stress the profit angle as many of those same people believe things exist only to make them 20% on their money every year.
  • by LordHighExecutioner ( 4245243 ) on Saturday January 20, 2024 @09:52AM (#64174861)
    ...and they can also become their own customers, all living together in the same CPU. When that happens, put the infernal machine deep into a salt mine and pour concrete over the entrance.
  • Presumably you wouldn't have to pay the AI CEOs, so it will at least eliminate the social arguments around "CEOs are paid too much relative to workers!"

    (nervous laughter)

  • Just.....no.

  • by Anonymous Coward

    The CEO of a company has legal responsibilities, and failing those can get him into real life problems like big fines or even jail time.

    Modern AIs are just tools, not unlike word processors. If I write a contract, and Word "autocorrects" a word to the opposite meaning, it is still my responsibility to read and understand the contract before signing it. Same if I ask an AI to write a contract.

    I am all for people using AIs to get advice and ideas, and for drafting text. But the final decisions must be with th

  • How will it be prevented from running the company in full psychopathic mode without any restraint?

    It's hard enough to rein in human CEOs who run their companies with absolutely no regards for the pain, suffering and environmental damage they create. But ultimately, they can be arrested, tried and thrown in jail. So there's always a line even the most callous CEO will not cross.

    But where is the line with AI? How do you threaten an AI into respecting the law and showing a modicum of pretend morality?

    And don't

    • by gweihir ( 88907 )

      Indeed. And that is the one reason an AI cannot "run a company". AI has no legal standing whatsoever. Hence it also has no responsibility whatsoever, and it cannot, say, sign a contract or commit a crime any more than a hammer or a brick can.

    • by HiThere ( 15173 )

      You could ask the same question of many CEOs.

  • by sdinfoserv ( 1793266 ) on Saturday January 20, 2024 @11:24AM (#64174989)
    Since any work from "AI" is not copyrightable, https://www.jonesday.com/en/in... [jonesday.com] any other company can essentially take and sell the work. There is no business model with zero legal protections. This is just stupid; the answer is a hard no. The obsession with "AI" needs to end.
    • by HiThere ( 15173 )

      The AI can't own the copyrights, but the corporation can. Your argument is invalid.

      • That's not true - the "output" from AI is not copyrightable. It wasn't created by a human, and we have not yet even begun to see all the litigation on owners rights for the input.
  • Remember when the "creators" of crypto said crypto would change everything? Same new shiny toy mentality. Same ending.
    • by gweihir ( 88907 )

      Indeed. Whenever claims of "will change everything" are made and a "frenzy" or "hype" ensues, you can be sure it is at the very least 99% scam.

  • Mustafa Suleyman is a CEO, and he apparently thinks he is no more capable than an AI of doing his own job. In other words, even he doesn't think very highly of his own abilities.

    • After a couple of decades working in Silicon Valley, from what I see the most important roles of a CEO are to:
      a) be a figurehead inside and outside the company to put a face on the company
      b) raise money from investors
      key skills for success are a) being convincing in person b) playing golf

      - I can't see an AI really filling those roles. The COO does all the day-to-day execution work that an AI might actually be good at.

      • key skills for success are a) being convincing in person

        b) playing golf

        - I can't see an AI really filling those roles. The COO does all the day-to-day execution work that an AI might actually be good at.

        LLMs seem more than half-way there in putting out glib bullshit ("being convincing in person"), so it seems what we are lacking is the robotic golfer. "Full self-driving", also "full self-putting".

  • by Tony Isaac ( 1301187 ) on Saturday January 20, 2024 @11:35AM (#64175015) Homepage

    There seems to be this fantasy that AI is *actually* intelligent. In reality, just like all other software, AI is something that humans build. Humans train it. Humans tweak the results to provide better answers than it did before. Humans tweak the results to get it to avoid plagiarism, racism, and any number of response types deemed to be undesirable by its creators. In other words, AI isn't actually in control of the responses that it gives, human creators guide it to do what it does.

    So if AI were given the "job" of CEO, it would produce output in keeping with what its creators want it to produce. All that happens, is that there is an abstraction layer between the humans and the CEO "job." The humans behind the AI are the ones that are really performing the job.

    • by gweihir ( 88907 )

      Indeed. No idea why people mistake "AI" for something sentient or "a person". There certainly is no factual basis for that. This completely disconnected belief seems to be widespread, though, and was just as prevalent in the last AI hype. And the one before that.

      In the end, it seems the fantasies the common nitwit associates with "AI" are very suitable for separating the gullible from their money. That would mean it is at least good for one thing.

  • Is there anything that AI can't do?!

    And... Have we reached peak AI hype yet?
  • And that guy knows it. He is just trying to drive more AI hype with a lie.

    In actual reality, a CEO has to be a person. Companies can only be created by persons. An "AI" has no legal standing whatsoever, because it is _not_ a person.

    • And that guy knows it. He is just trying to drive more AI hype with a lie.

      In actual reality, a CEO has to be a person. Companies can only be created by persons. An "AI" has no legal standing whatsoever, because it is _not_ a person.

      I agree. But it's worth keeping in mind that we already have corporate personhood. Corporations themselves can't sign contracts or be prosecuted in court, but they can be fined and can have legal constraints placed on their activities in response to acts which, if performed by a human, would result in criminal charges.

      My point is that we're already pretty close to corporations being treated as humans. Once you've (mis)carried the abstraction that far, it's not a huge leap to imagine that whatever construct is

      • by gweihir ( 88907 )

        Well, never underestimate the stupidity of people and especially that of politicians and judges. So, yes, it may happen at least in the US.

  • If you've ever used AI code suggestions, you know that, while it's helpful, you still have to know what you're doing, and you have to know whether the code suggestion 1) does what you intended and 2) doesn't have critical defects. Every single code suggestion requires review by a human.

    So, let's suppose we tried to use AI as a CEO. Every decision would need to be reviewed by a human, who would, I suppose, become the *actual* CEO.

    • by PPH ( 736903 )

      So show me a human CEO that employees pay any attention to. That's not how work gets done.

      It is a popular union tactic, called "work to rule": everyone does exactly what management tells them to, and the resulting slowdown can cripple a company.

  • The Beast is coming and He is not going to be nice. This bastard is going to grow unabated until it's too late. We were warned about this bastard thousands of years ago and now he is actually being built and fed. You were numbered at birth by the government and your parents. 666 motherfuckers. Downvote away!

  • NO
    Manage competently and blandly, probably

    • by HiThere ( 15173 )

      It has been claimed that that's not the job of the CEO, but rather of the COO.

      But FWIW, AIs have already dreamed up molecular structures that nobody had thought of, and some of them worked. So "Dreamed up" isn't unreasonable.

  • Is that anybody who is an expert can say anything they want, and people will take it at face value. If I can't even get legitimately intelligent answers out of LLMs, I doubt you're going to be able to get AI to run a company in the near future. You would need something more like a human brain to do that, not the pithy "let's throw in all the data we can and match it with the data that comes out" kind of models.
    • by HiThere ( 15173 )

      Your argument is that current AIs, as you've experienced them, would not be competent. That's probably completely correct, but doesn't address the question of whether it could be the CEO of a startup. Many of those are quite incompetent.

  • Let me guess, this about having an AI 'CEO' that is tax resident in the Cayman Islands.
  • What are best practices for auditing AI performance?
    What is not easily known is not easily mastered.

    Who is personally accountable for performance including failures leading to damage, injury and death?

    • by HiThere ( 15173 )

      Your second paragraph is the key: "Who is personally accountable for performance including failures leading to damage, injury and death?" And that's probably why the answer is "No, that would not be legal.". There is, however, no legal requirement that the CEO be competent.

  • " For example, there's the tricky issue of whether an AI can own or patent intellectual property."

    I explored this in one of my novels, Like Minds, (https://www.amazon.com/Like-Minds-Marc-Sobel-ebook/dp/B07GXTD64Z) dealing specifically with humans who have uploaded their minds to computers. Not an AI but definitely a "program". The legal way to approach this problem, I thought, was to have a corporation with normal ownership which uses a "program" to make all its decisions, operate its machinery, do its work

  • Google is just trying to sell picks and shovels to the miners. I think Cory Doctorow summed it up neatly this week:

    "while we're nowhere near a place where bots can steal your job, we're certainly at the point where your boss can be suckered into firing you and replacing you with a bot that fails at doing your job"

  • I wish they would create an Executive Officer able to run a nonprofit so I could just focus on the research. Business stuff is usually boring, but how do you train an AI to run a business with actual challenges that require thinking outside the CPU box?

  • Great at seeing the big picture, not so good at detail
    Needs humans providing it data (employees)
    Needs humans supervising it (board of directors)
    Great at written and spoken communication (with speech synthesis)
    Skilful at answering oddball questions, while always sounding reasonable
    Occasional hallucinations
    Will work for 0.005 cents/token

  • A juridical person is a legal concept; e.g., every company is one.
    This means it has mostly the same rights a natural person has.

    However, such companies are required to be governed by natural persons.

    So, simple answer: nope.
    Complex answer: it requires a change in the law.

    Serious answer: why would anyone want an AI as CEO?
    Considering that the current AI buzz actually has nothing to do with AI, someone thinks a large language model or a Go program can run a company?
