AI

SoftBank CEO Says AGI Will Come Within 10 Years (reuters.com)

SoftBank CEO Masayoshi Son said he believes artificial general intelligence, artificial intelligence that surpasses human intelligence in almost all areas, will be realised within 10 years. From a report: Speaking at the SoftBank World corporate conference, Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence. He noted the rapid progress in generative AI, which he said has already exceeded human intelligence in certain areas. "It is wrong to say that AI cannot be smarter than humans as it is created by humans," he said. "AI is now self learning, self training, and self inferencing, just like human beings." Son has spoken for some years of the potential of AGI - typically using the term "singularity" - to transform business and society, but this is the first time he has given a timeline for its development. He also introduced the idea of "Artificial Super Intelligence" at the conference, which he claimed would be realised in 20 years and would surpass human intelligence by a factor of 10,000.

Comments Filter:
  • by youn ( 1516637 ) on Wednesday October 04, 2023 @10:13AM (#63899249) Homepage

    AI is making good progress, and generative AI does really cool stuff.

    Not to rain on his parade, but AI people have been saying AGI would be achieved and the AI problem solved within 10 years for at least 50 years.

    We'll likely achieve AGI... but right now all we have is something like a very good parrot (parrots have some intelligence, but I wouldn't have one perform surgery on me).

    It might be 10 years, it might be 100 years... it's kind of like the ITER fusion situation: we'll likely get AGI, but there isn't enough evidence to make a reliable prediction.

    • Re: (Score:3, Insightful)

      by Baron_Yam ( 643147 )

      Fundamentally, our intelligence and self-awareness seem to be emergent properties of a bunch of interconnected neural nets with inputs, outputs, and some basic 'programming'.

      I think the challenge is in getting enough complexity in an artificial system to cross whatever threshold needs to be crossed for us to call it intelligent. That comes with a secondary challenge of doing it with enough efficiency to run on a dozen watts in a volume of around 1300 ccs.

      If we get there (or ignore efficiency), the step aft

      • If this happens in my lifetime, I hope I'm able to be retired and off the grid as much as possible and just spend my days enjoying photographing things, brewing beer and firing up my smoker to do some good BBQ.

        I'd just like to be away from AI and all it will entail if at all possible.

      • by narcc ( 412956 )

        Fundamentally, our intelligence and self-awareness seem to be emergent properties of a bunch of interconnected neural nets with inputs, outputs, and some basic 'programming'.

        Prove it. :)

        The simple fact is we don't have a clue how any of this works. People can't stand saying "I don't know", so they'll latch on to anything they think is plausible and insist that this must be how we work. This usually coincides with the current state of the art and will change when something more advanced comes along.

        Things can get really stupid, however, when people mistake the state-of-the-art for something that ... isn't. For example, we know for a fact that we aren't a complex but otherwise o

        • by vyvepe ( 809573 )

          For example, we know for a fact that we aren't a complex but otherwise ordinary feed-forward neural network, like the kind used in the latest and greatest generative AI baubles

          LLMs are not simply feed-forward only. Output tokens are fed back to the input. This feedback is in the range of tens of megabytes for GPT-4.

          • by narcc ( 412956 )

            That's simply not true. What gets "fed back" is the output text, not the output tokens. That is not guaranteed to result in the same tokens at input. For example, if the model outputs the tokens "per" (525) or " per" (583) and "son" (1559), feeding that text back in might get you " person" (1048) or "person" (6259).

            Further, that outer loop doesn't change anything about the fundamental nature or capabilities of the system which absolutely is an ordinary feed-forward NN.
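
            A minimal sketch of that re-tokenization point, assuming the open-source tiktoken library and its cl100k_base encoding (the particular token IDs quoted above may come from a different tokenizer):

```python
# Minimal sketch: re-encoding decoded text need not reproduce the original token IDs.
# Assumes the tiktoken library; the specific IDs printed are whatever cl100k_base uses,
# not necessarily the values quoted in the comment above.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

generated_ids = enc.encode("per") + enc.encode("son")   # tokens emitted piecewise
text = enc.decode(generated_ids)                        # "person" -- the text that gets fed back
reencoded_ids = enc.encode(text)                        # tokenizer may merge differently

print(generated_ids, reencoded_ids)                     # the two lists can differ
```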

            • by vyvepe ( 809573 )

              You are right, it is text, not tokens. It would not make sense to compute embeddings again (as the flow chart indicates) if it was not text. That limits the amount of hidden information passed from output to input which could serve as a memory.

              I think that the outer loop is a textual memory for the inner neural network. That is a big difference from a simple feed-forward network.

              A naive look at a Turing machine is input, "random" access memory and a state transition function. An LLM has input, size limited

              • by narcc ( 412956 )

                There's a lot wrong here that will take quite a bit of time to explain. I'm not sure that you're interested in a real answer anyway, so this will be fairly broad. NNs are universal function approximators, but that does not mean they can approximate any function as you might understand functions from computer programming. They simply map inputs to outputs. That's all they do and all they can do. They do not retain state. This might help [neuralnetw...arning.com], it's surprisingly beginner friendly.

                While it should be possible to
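
                A minimal sketch of the "map inputs to outputs, retain no state" point, using a hypothetical two-layer network in NumPy (the weights and sizes are made up for illustration):

```python
# Minimal sketch of a feed-forward pass: a pure function of its input.
# Weights and dimensions are illustrative; nothing here persists between calls.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    h = np.tanh(x @ W1 + b1)      # hidden layer
    return h @ W2 + b2            # output layer; no state is stored

x = rng.normal(size=4)
print(forward(x))
print(forward(x))                 # identical output: the network retains nothing between calls
```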

                • by vyvepe ( 809573 )

                  First, thanks for responding. If you feel I'm bothering you then just stop responding. No problem for me. I'm reacting to your posts lately because you seem very educated and yet totally dismiss any possibility of reasoning from an LLM. That is fascinating, because other educated people think it is possible LLMs may eventually reason (e.g. Geoffrey Hinton - Two Paths to Intelligence [youtube.com]).

                  Yes, correct, I meant "function" in the math sense (not a function as in a programming language). Ok, so it looks like NN can tak

                  • by narcc ( 412956 )

                    You're right that I'm very dismissive of the idea of an LLM "reasoning", however you want to define it, though my position is hardly unique. You'll find quite a few smart and well-educated people on both sides. Though it's worth pointing out that you'll find smart and well-educated people who believe all sorts of ridiculous nonsense. Now, I don't blame any layperson for being taken in by some of the impressive output. Experts, however, should know better. (I'm a bit cynical, so I suspect that many of t

                    • by vyvepe ( 809573 )

                      Thanks for the response. It was insightful. My amateur opinion is that LLMs likely will not reason. I just think that there is a small chance that they might (likely after modification to their design). I have two reasons for that. One is that they are likely almost Turing complete (without an unbounded tape). The second is that maybe reasoning can be somehow embedded in natural language and training can distill this feature.

                      Option one does not help much because of the way LLMs are trained. Second option does not l

      • Seriously, how would we tell if it was intelligent? Emergent intelligence may not be anything we recognise - after all, it doesn't need to watch the Kardashians.
        • I'm not so worried about 'intelligent'. Anything that can figure stuff out - take a complex stimulus and deliver an appropriate response - is intelligent.

          What about sentience, self-awareness? We don't understand how those things emerge from the minds of animals, so how are we supposed to tell if a machine has them or is just really good at replicating the results? We just assume other people have them because we do... we'll have no such common ground with a true AGI.

      • Neurons aren't binary circuits, and we don't understand where all types of memories are stored; recent evidence suggests perhaps in neural membranes, besides the more talked about synaptic connections.

        I'll also go out on a limb and say Boolean circuits, which could be electric, mechanical or fluidic, can't experience pain, pleasure or emotions; they can only simulate the appearance of doing so at best. Digital AI can only be as self aware and feeling as a rock.

    • In 2018, almost every car company CEO said that we'd have fully self-driving cars by 2020. That obviously happened, so why wouldn't this also come true?
      AGI will be powered by clean, fusion, energy.
      • by ls671 ( 1122017 )

        In 2018, almost every car company CEO said that we'd have fully self-driving cars by 2020. That obviously happened, so why wouldn't this also come true?

        AGI will be powered by clean, fusion, energy.

        I don't understand why CEOs get into that crap, at least they should let the CTOs burn themselves...

      • by taustin ( 171655 )

        AGI will be powered by clean, fusion, energy.

        Running on desktop Linux, no doubt.

    • by w3woody ( 44457 ) on Wednesday October 04, 2023 @11:18AM (#63899479) Homepage

      I agree.

      For example, consider that presently ChatGPT and other LLMs need to be trained on ginormous data sets, beyond what any human being could read in hundreds of years. Yet in many ways humans still do better than ChatGPT at a lot of logic problems and math problems, even though we are trained on a very small fraction of the data used to train LLMs.

      And at some level I can't help but think we're basically anthropomorphizing a parlor trick; a pattern matcher that is so good at predicting how words should go together it almost seems alive to us.

      • Each AI generation has generally required a model about ten times the size of the previous generation. This obviously can't go on forever, so new methods will be needed to deliver equivalent or better results with smaller models. But it does speak to the growing complexity. The difference between GPT-2 and GPT-3 is easily visible to anyone who used them. The difference between 3 and 4 is there, but it's not the same visible growth. There are likely diminishing returns to the current growth chart such that re

      • by taustin ( 171655 )

        I finally found something ChatGPT is actually good at. Make up the most ridiculous, insane tabloid headline you can think of, and tell it to write the article. It will be indistinguishable from the real thing.

      • by narcc ( 412956 )

        I can't help but think we're basically anthropomorphizing a parlor trick

        That's because that's exactly what's happening. It's a very human thing to do. Joe Weizenbaum's secretary famously wanted her sessions with Eliza to be kept confidential. She, like many others, was convinced that the program understood and empathized with her problems.

        That was with Eliza -- a simple program that simulated a Rogerian therapist by simply turning the user's statements into questions, using filler statements when a sentence couldn't be parsed, and occasionally repeating something saved from ea
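
        A minimal sketch of that reflection trick, in the spirit of Eliza rather than Weizenbaum's actual program (the word list and canned fillers here are made up):

```python
# Minimal sketch of the Eliza-style trick: reflect statements back as questions,
# fall back to canned filler. Illustrative only, not Weizenbaum's original code.
import random

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
FILLERS = ["Please go on.", "How does that make you feel?", "Tell me more."]

def respond(statement: str) -> str:
    words = statement.lower().rstrip(".!?").split()
    if words and words[0] == "i":
        reflected = " ".join(REFLECTIONS.get(w, w) for w in words)
        return f"Why do you say {reflected}?"
    return random.choice(FILLERS)

print(respond("I am worried about my job"))   # "Why do you say you are worried about your job?"
print(respond("Nothing makes sense"))         # a canned filler response
```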

    • by HiThere ( 15173 )

      FWIW (not much) I've been predicting an early AGI in 2035 for over a decade, and haven't seen any reason to change my time estimate.
      Note that it will NOT be a human equivalent. It will have different motivations. It will be better than humans at many tasks (they already are) and worse at others. But it WILL be able to generalize its learning to handle the physical universe.

      This is said, sort of, tongue-in-cheek, because I don't believe a real AGI is possible, and I also include humans in "not a real ge

    • Not to rain on his parade...

      Why not? He made a moronic statement that has exactly 0% chance of being true in the next thousand years (unless we devise a radically different form of computing). His parade should be wiped off the face of the earth by nuclear forces.

      • He made a moronic statement that has exactly 0% chance of being true in the next thousand years

        You might want to reconsider that statement in light of this statement [reddit.com]
        • by narcc ( 412956 )

          Why? A fake newspaper headline doesn't tell us anything about the parent's proclamation. It's also worth pointing out that man had already flown by 1903, with both balloons and gliders. What are you claiming anyway? Because one moron said something stupid one time, any similarly structured claim must necessarily be wrong?

    • We have no idea what consciousness even is. How would we even recognize it? Generative AI basically repeats what it learned, and has already claimed to be "conscious."
  • by itsme1234 ( 199680 ) on Wednesday October 04, 2023 @10:14AM (#63899253)

    ... Bard is very upset with me (it reads as if it parsed and learned from top Reddit trolls, which it probably did) if I tell it it got the very first digits wrong on basic arithmetic questions.

  • by XXongo ( 3986865 ) on Wednesday October 04, 2023 @10:14AM (#63899255) Homepage
    He can say it, but there is no evidence that this is true.

    There are a lot of things lately being called "AI". They are not intelligent (not even "approaching intelligence") by any reasonable meaning of the word "intelligent". In general, these are pattern recognition devices: they input a vast amount of human-generated material (books and Wikipedia articles, for example) and find the patterns of what intelligent behavior looks like. They then blindly apply those patterns, without any understanding (or even any attempt at understanding) of what the actual thinking is.
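
    A toy sketch of "find patterns, then blindly replay them", using a hypothetical bigram model (real systems are vastly larger, but the predict-the-next-token-from-statistics idea is the same in spirit):

```python
# Toy bigram "language model": count word pairs in a corpus, then generate text by
# replaying the most frequent follower. Illustrative only; real LLMs are far richer.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

word, output = "the", ["the"]
for _ in range(6):
    word = followers[word].most_common(1)[0][0]   # replay the most common pattern
    output.append(word)

print(" ".join(output))   # e.g. "the cat sat on the cat sat" -- patterns, no understanding
```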

    • by Brain-Fu ( 1274756 ) on Wednesday October 04, 2023 @10:27AM (#63899295) Homepage Journal

      Almost everything called "AI" is using the word "artificial" in the sense of "fake." Just as artificial leather is not real leather, artificial intelligence is not real intelligence. That is what the term has come to mean in common use. So, something does not need to qualify as intelligent in order to qualify as "artificially intelligent."

      And that broad meaning is exactly what makes the word useful. If we restricted it to only those things which equal human intelligence in every way, there would be nothing at all. This special meaning implied by "artificial general intelligence" refers to something that doesn't exist and is nowhere near existing, but that is why AGI is not a common-use marketing buzzword.

      • by taustin ( 171655 )

        The issue isn't using "artificial" that way. The problem is using it that way while telling potential investors it means something completely different.

      • by dargaud ( 518470 )
        What I really want to know is: does artificial intelligence beat natural stupidity?!?
    • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday October 04, 2023 @10:51AM (#63899357) Journal

      He can say it, but there is no evidence that this is true.

      More than that, there is no theoretical basis for this claim.

      The difference between current machine learning techniques and truly general intelligence is something we simply don't understand. What's most likely is that there is some crucial theory of general intelligence that we have not yet discovered. Once we discover it, building AGI will probably be easy (assuming it doesn't depend on yet other theoretical breakthroughs). Until we discover it, building AGI will be impossible.

      How far are we from that theoretical advance? We cannot know. What would a knowledgeable person making predictions around the time of Isaac Newton's birth have said about when we would understand how things fall? How difficult would it be to build an atomic bomb without Einstein's work?

      Someone could find the crucial ideas tomorrow, or it could take centuries. Or maybe they found it yesterday. We simply cannot know. We can be pretty sure they didn't find and recognize it months or years ago.

      That said, there is an intensive amount of effort and brainpower going into the search, and our tools for analyzing and understanding the existing form of general intelligence and for quickly building and testing proposed new strategies are advancing at a breakneck pace. Also, there is always the possibility that we accidentally succeed, without first developing the necessary theory -- after all, evolution did it via randomized variation and selection.

      So I think it's reasonable to say that AGI will be created, but no one can say when. We best hope that it doesn't happen too soon, though, or that the same theory that teaches us how to build AGI also teaches us how to solve the alignment problem, or that the theory puts an upper bound on possible intelligence that isn't too far above human level. Because otherwise, we're toast.

      • Why does it have to be so arcane? Why is an axiomatic proof necessary when some things may actually be true despite no formal proof existing? It strikes me as fairly obvious there will be no eureka moment where it suddenly becomes easy. In the reasonably near future, say 50 years, we might have thousands of “smart” algorithms like a better Midjourney and ChatGPT, each accomplishing its own tasks but running concurrently; we may be able to entrench that sought after common sense into an
        • You're talking about task-specific models. I was talking about the leap to actual AGI.
          • You're talking about task-specific models. I was talking about the leap to actual AGI.

            No, having a thousand task-specific models is the leap allowing for AGI to actually be implemented. That’s not to say it’s the only way it could be done, but why is that approach any less valid? After all, life likely just optimized some simple systems at first and gradually diffused through possibility space until thousands of systems were being self-regulated with interdependencies, long before intelligence took root.

            • I suppose it's possible that what we call general intelligence is just the combined efforts of lots of specialized features. Even in that case, though, the key observation we may be missing is exactly what set of specialized features is needed.
    • by Anonymous Coward

      It's very easy to assume current AI is on "the ladder", "the road", and simply needs to ascend from amoeba to insect to ape to superhuman. It just needs to keep incrementing, right?

      The Chinese room is not on that ladder. You could sooner build up your computer's immune system by exposing it to small viruses. There is a gross misunderstanding of what's under the hood.

      It is indeed possible to create hatchery conditions to grow along the ladder that has intelligence at the end, just not with our shitty crude f

    • by noodler ( 724788 )

      They are not intelligent (not even "approaching intelligence") by any reasonable meaning of the word "intelligent"

      Please provide at least one reasonable meaning of the word "intelligent", because nothing you have said above is motivated in any way. As it stands, it is just something you say without any actual value.
      I'm also saddened by how such a flimsy, unmotivated post gets +5, Insightful.

  • by Rosco P. Coltrane ( 209368 ) on Wednesday October 04, 2023 @10:17AM (#63899257)

    The man ranges from criminally bad at picking good investment opportunities to mildly insane. I wouldn't trust him to predict when he's gonna take his next dump.

  • by oldgraybeard ( 2939809 ) on Wednesday October 04, 2023 @10:17AM (#63899259)
    There is zero cognitive intelligence in anything the Marketers and Salespeople are calling AI today. And that will not change in the next 10 or 20 years. It is just automation.
    • Re:No it won't! (Score:4, Insightful)

      by Rosco P. Coltrane ( 209368 ) on Wednesday October 04, 2023 @10:24AM (#63899285)

      There is zero cognitive intelligence in anything the Marketers and Salespeople are calling AI today

      Yeah, but in fairness, how would they know? There isn't a lot of intelligence in marketers and salespeople either, and it takes one to know one.

    • by taustin ( 171655 )

      There is zero cognitive intelligence in anything the Marketers and Salespeople are calling AI today.

      There's very little cognitive intelligence in the Marketers and Salespeople.

  • Maybe they can set it to work on the fusion problem.
    • by PPH ( 736903 )

      No. We'll need to develop fusion so as to have enough power to train the AGI.

      • I think this. Maybe it was an article on /. or somewhere else, but the article was saying how Microsoft was investigating small fission reactors to power training AI systems; it was that power intensive. People may end up being more energy efficient than robots at this rate. And do we really want M5 to decide it needs more power and fry the human in its way?
    • Most if not all serious fusion endeavors are doing this. Lawrence Livermore Labs used it to address problems achieving net positive output [nvidia.com] at the NIF. DeepMind trained a model to control fusion reactions [cnbc.com] in a tokamak reactor. Other stories have discussed researchers tasking AI to help develop reaction chamber shapes or parts to reduce the need for physical iteration.

  • 75% of what SoftBank does makes no sense. If AGI does come along, it's going to eat SoftBank for a snack.

  • by Baron_Yam ( 643147 ) on Wednesday October 04, 2023 @10:21AM (#63899275)

    I know I trust the CEO of a bank over a credentialed AI researcher to advise me on how the technology is progressing...

    • by Anonymous Coward

      SoftBank isn't a bank...

  • Well, certainly... (Score:5, Insightful)

    by OpenSourced ( 323149 ) on Wednesday October 04, 2023 @10:22AM (#63899279) Journal

    would surpass human intelligence by a factor of 10,000

    I guess it will depend on the human. Some humans are apparently only intelligent enough to utter meaningless statements, and even so, they reach high positions in the world, like CEO of a big bank.

    Lacking a clear definition of intelligence, the statement is not even wrong. If the idea is that some computer will solve an IQ test in a 10,000th of the time a human needs, then, I suppose, it's true. Computers already beat us at chess, considered a brainy game, so they are already more intelligent than us; no need to wait. The word "intelligence" is used as a throw weapon, like "terrorist" or "nazi". Its meaning is reduced to whatever the speakers want to say.

    Of course there will be computers more intelligent, in almost any sense, than a human being. However, if that computer takes three stadium-sized data centers and consumes the power of a hefty nuclear station, I'd question what the point is. Just breed a more intelligent human being, who will consume just a couple of sandwiches.

    • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday October 04, 2023 @10:55AM (#63899367) Journal

      Of course there will be computers more intelligent, in almost any sense, than a human being. However, if that computer takes three stadium-sized data centers and consumes the power of a hefty nuclear station, I'd question what the point is. Just breed a more intelligent human being, who will consume just a couple of sandwiches.

      If the vastly-smarter-than-humans computer is huge and power-hungry, you just direct it to design a more efficient version of itself. Maybe you can't get 10,000X smarter without 10,000X size and power consumption, but the size and power consumption of 10,000 human brains is a lot smaller than three stadiums and a nuclear power plant's output. And probably you can do better than what evolution managed to find via random walk.

      • by PPH ( 736903 )

        It depends on your metrics. When comparing the energies needed to train various intelligences, it's difficult to beat something that runs on Cheetos and Mountain Dew.

        • It depends on your metrics. When comparing the energies needed to train various intelligences, it's difficult to beat something that runs on Cheetos and Mountain Dew.

          Today.

      • If the vastly-smarter-than-humans computer is huge and power-hungry, you just direct it to design a more efficient version of itself

        Well, I don't know. You are intelligent, but can you design a more efficient version of yourself? If not, why do you assume the computer will be able to?

        • If the vastly-smarter-than-humans computer is huge and power-hungry, you just direct it to design a more efficient version of itself

          Well, I don't know. You are intelligent, but can you design a more efficient version of yourself? If not, why do you assume the computer will be able to?

          If humans are smart enough to design and build a smarter-than-human intelligence, then pretty much by definition that intelligence will be capable of doing an even better job, particularly when it's given a headstart by handing it everything that humans have already discovered, including everything we know about our own brains.

    • by jd ( 1658 )

      Some of the largest supercomputers on offer can simulate ten million or so neurons at a speed of 1 simulated second every 10 wall clock minutes. You could build a computer today that simulated the entire human brain with a biologically accurate simulation, but it would be roughly 5 miles in diameter, 200 feet high, and consume a lot of power.

      Now, supposedly, the human brain shrank around 12,000 years ago. This has been put down to greater social structures making personal brain power less useful and higher

      • by noodler ( 724788 )

        It should be possible to find the mutations involved, if this hypothesis is correct, and I'm fairly sure there are unethical geneticists who would be fine with reversing any such reduction.

        It should be possible, but I'm not sure if humanity is served with more intelligence points in throwing rocks and distinguishing shadows from predators...
        In other words, fat chance the lost brainpower was originally employed purely for general intelligence. Much more likely it was used for specialized skills required for dealing with the realities of survival in the wilderness.

    • by HiThere ( 15173 )

      It does depend on the human, but it depends a lot more on how you measure the intelligence. Recall that ChatGPT passed the bar exam, which lawyers study for years to pass. And few lawyers are really stupid. (Greedy and short-sighted are different from stupid.)

  • Is there a reason that we should pay attention to claims like this from a VC? Son is not a researcher -- he is a raiser of shareholder value. The main tools for raising shareholder value in the VC world are narratives and hype. This is less a prediction about AGI as a technology and more a prediction about the media hype about AGI that we'll have to endure for the next few news cycles.
    • by HiThere ( 15173 )

      There's no real reason to take his predictions seriously, but this time I think that parts of his prediction are correct. I do expect an elementary AGI to be extant in around 2035. (Plus or minus 5 years.) But it will only be "smarter than human" in some areas. It will be considerably weaker than human in other areas. A key word here is "general". That's what we don't have so far. Another problematic area is motivations. AFAIK, we're still flailing around in the dark in that area. Motivations need t

  • My prediction (Score:4, Insightful)

    by MTEK ( 2826397 ) on Wednesday October 04, 2023 @10:45AM (#63899343)

    AI will increasingly train on its own hallucinated datasets, eventually becoming a techno-intellectual inbred. Remarkably, it'll still be smarter than many people.

  • by TomGreenhaw ( 929233 ) on Wednesday October 04, 2023 @10:48AM (#63899347)
    The popularity of certain politicians proves we are already there. Surpassing average human intelligence has proven to be a very low bar :-)
  • If SoftBank thinks AGI will arrive in 10 years, that means it will arrive in either 5 or 50 years.

  • I looked up recent attempts to simulate the brain. About ten million simulated neurons at 1 simulated second every few minutes, on one of the top supercomputers. And that won't be a biological neuron system, that'll be a classic neural net program. The brain has around 86 billion neurons, and just to reach the same speed as the brain you need to clock in at 1 simulated second per second.

    Based on the current rate of progress, I honestly don't see full brain NNs being simulated in real time this side of 2063. And bi
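
    A back-of-the-envelope check on those figures, using roughly 86 billion neurons and the 10-wall-clock-minutes-per-simulated-second rate quoted earlier in the thread (all numbers illustrative):

```python
# Rough scaling estimate from the figures quoted in the thread; purely illustrative.
simulated_neurons = 10e6          # ~ten million neurons in current large simulations
brain_neurons = 86e9              # ~86 billion neurons in a human brain
slowdown = 10 * 60                # ~10 wall-clock minutes per simulated second

scale_up = brain_neurons / simulated_neurons      # ~8,600x more neurons to simulate
speed_up = slowdown                               # ~600x faster to reach real time
total_factor = scale_up * speed_up                # ~5.2 million times today's capability

print(f"Need roughly {total_factor:,.0f}x current simulation capability for real time")
```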

    • by HiThere ( 15173 )

      You are definitely right that that approach will not be successful within the decade. Your mistake is thinking that's the only viable approach. That might be the optimal approach if we wanted to build an artificial human ... but we don't know enough to even get started in that direction. Lots more basic research would be needed. But when you interact with someone (say over the internet) you can't analyze things at that level anyway. An implementation of a higher level of analog should suffice to provid

  • intelligence
    /inˈteləj(ə)ns/
    noun
    1. the ability to acquire and apply knowledge and skills.

    I suppose it depends upon your arbitrary definition of intelligence. By the standard definition above and having passed the Turing Test, we already have machine intelligence.

    Consciousness is another matter open to debate.
    consciousness
    /ˈkänSHəsnəs/
    noun
    1. the state of being awake and aware of one's surroundings.

    In my view, consciousness is a combination of intelligence and awareness of the real world through st
    • by HiThere ( 15173 )

      No, the actual Turing test has never been passed by a computer. (OTOH, close analogs have often been failed by a human.)

      There are lots of "weak versions of the Turing test" that have been passed. If you weaken it enough, the first version of Eliza passed it. (The caller tried to get her fired for being insubordinate.) But the actual Turing test, or a close analog, has never been passed by a computer. And several weak versions have been failed by various humans.

      The Turing test, however, was not intended

      • Hmm... Seems there may be some controversy, but it has clearly been passed unless the goal posts have been moved: https://www.mlyearning.org/cha... [mlyearning.org]
        • by ceoyoyo ( 59147 )

          The goal post has been moved. Turing's actual test was passed quite a long time ago. A "strong" version with a knowledgeable inquirer was passed quite publicly by that Google engineer who insisted their language model was sentient.

          The comments here are fairly typical. They insist that machine learning algorithms are "parrots," "just statistics" or "Chinese rooms;" basically, they can't be intelligent because we know how their components work. This is a silly argument. It's also factually incorrect in the "Ch

          • by HiThere ( 15173 )

            When? Where? That's a claim to a specific kind of challenge, not the general "fool someone who isn't expecting things". It could include questions like "What makes a vorpal blade better than a broadsword?", and other things specifically designed to reveal the difference between humans and computers (but which humans often fail, oops!).

            • by ceoyoyo ( 59147 )

              Turing wrote a paper. You can look it up.

              • by HiThere ( 15173 )

                Turing wrote a paper, but he did not have a computer that would pass the test. Nobody has built a computer+program that would do so thus far.

  • An earlier post suggested that current AI is just pattern recognition within the searchable data. I tend to agree here. I've been trying to pair program with GitHub Copilot the last few months; I can get code snippets that are 80% complete at best, and I'm never able to give a query that puts it across the finish line.

    Some observations:

    As I request changes to the code snippets, I see changes to variable names and other program logic unrelated to my last request. This suggests that it's not actually rememb

  • The correct answer is 42. I don't need AGI to tell me that.

  • by electroniceric ( 468976 ) on Wednesday October 04, 2023 @11:15AM (#63899463)

    Current AI, for all its cleverness, is basically regression. As a number of AI experts have noted, the work on inference and reasoning basically got stalled when progress on the neural network approaches started to take off.

    The problem is that this approach assumes that there is clear, unambiguous, objectively definable truth that can be used to define a training set for the AI. In reality, many if not most interesting problems, and certainly the hard ones, do not lend themselves to this at all. For example, imagine training an AI on the scientific literature of the past 100 years. Much of that literature will be considered wrong by present standards, and much of the rest will be small-scale and speculative. The truth isn't something that exists objectively, it's something that we construct out of a combination of verifiable facts, philosophical and epistemological frameworks, our own biases, our own emotions, and often randomness.

    It is possible that a general AI could emulate all that, but there's a pretty decent chance that that would bind that AI to all the problems and biases that exist in human intelligence. And we know almost nothing about other intelligences, like what and how dolphins or elephants take hold of the world. We've mostly assumed away that concern by counting on historical dismissal of these beings' intelligences.

    My guess is that AI will rapidly start to go in circles. It's pretty much already consumed much of human writing and still has no concept of truth whatsoever. This is likely to lead to a torrent of bullshit - basically spam in everything - that will make it that much harder to engage in truth-seeking and truth-making.

    It may get better at some things that involve searching parameter spaces and combinatorics; that will doubtless be useful.

    I just am not convinced that reality, knowledge, and epistemology actually lend themselves to the kind of AI that people are envisioning.

  • Seriously, why is this news? Who cares what some clueless CxO thinks? He knows as much about the topic as Joe Sixpack.
  • Predictions are hard, especially about the future

  • AGI will be here in ten years and it will be used to design a working power plant employing nuclear fusion.

  • and I say true AI is at least 50 years off, after your flying car and your Mr. Fixit fusion reactor.
  • A CEO whose degrees are in exactly what? And what computer science has he studied?

    How different is this to a self-proclaimed expert on vaccinations, who's done all his "research" on Faux Noise?

  • ...and always will be.

  • Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence

    So, what metric is that 10x intelligence measured by? IQ? And what does a sum of intelligence mean? Is the sum of total human intelligence in a large country orders of magnitude greater than the smartest individual human?

    It is wrong to say that AI cannot be smarter than humans as it is created by humans

    Perhaps my intuition is different than Son's, but I think that a creation is generally not as smart as the creator. In fact, I can't think of any creation that is smarter than its creator.

    Then again, the thought is intriguing. If a creation could surpass the intelligence of its creator

    • by vyvepe ( 809573 )

      Perhaps my intuition is different than Son's, but I think that a creation is generally not as smart as the creator. In fact, I can't think of any creation that is smarter than its creator.

      Cannot a child be more intelligent than the parents who created it?

  • What exactly is AGI? This prediction relies heavily on the precise definition of AGI, which is not clearly defined. So in 10 years, you can say that the prediction was confirmed, by defining AGI to be whatever AI technology we have achieved, after 10 years.

    In some ways, AI is already 10x smarter than humans. It can write code in just about every programming language known to man. It can write job descriptions and summarize long articles in a flash. It can search the web for answers on any subject and quickl

  • Let's look at Einstein's thought experiments that produced special and general relativity. Thought experiments.

    When a computer can gather information and cogitate on it for a while and say, "Hey, guys, here's a new thought ..."

    The messy part is that the computer would be thinking only about the work humans have already produced. That would be useful, but the computer, in order to get "intelligent," would have to "think" on its own. Einstein used prior human work products, but the thought experiments were tr

  • ... for what is considered "intelligence." A lot of comments about "this is just pattern recognition" seem to miss the point that most of human cognition is pattern recognition.

    In fact, I bet most of you poo-pooing these comments by Masayoshi Son couldn't even give a proper definition for intelligence (without researching a specific counter example) that deviates substantially from what GPT-4 is already doing. And in that research process, you would probably find that GPT-4 can provide the same -or better

  • Aliens too! Same BS since 1968; meanwhile we're still fighting to keep a regular job. Bigodagem trophy (from Rasta news) to him!
