SoftBank CEO Says AGI Will Come Within 10 Years (reuters.com) 106
SoftBank CEO Masayoshi Son said he believes artificial general intelligence - artificial intelligence that surpasses human intelligence in almost all areas - will be realised within 10 years. From a report: Speaking at the SoftBank World corporate conference, Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence. He noted the rapid progress in generative AI that he said has already exceeded human intelligence in certain areas. "It is wrong to say that AI cannot be smarter than humans as it is created by humans," he said. "AI is now self learning, self training, and self inferencing, just like human beings." Son has spoken of the potential of AGI - typically using the term "singularity" - to transform business and society for some years, but this is the first time he has given a timeline for its development. He also introduced the idea of "Artificial Super Intelligence" at the conference, which he claimed would be realised in 20 years and would surpass human intelligence by a factor of 10,000.
Great prediction from guys who brought you WeWork (Score:5, Interesting)
AI is making good progress, and generative AI does really cool stuff.
Not to rain on his parade, but AI people have been saying that AGI would be achieved and the AI problem solved within 10 years for at least 50 years.
We'll likely achieve AGI... but right now all we have is something like a very good parrot (parrots have some intelligence, but I wouldn't have one perform surgery on me)
It might be 10 years, it might be 100 years... it's kind of like the ITER fusion situation: we'll likely get AGI, but there isn't enough evidence to make a reliable prediction.
Re: (Score:3, Insightful)
Fundamentally, our intelligence and self-awareness seem to be emergent properties of a bunch of interconnected neural nets with inputs, outputs, and some basic 'programming'.
I think the challenge is in getting enough complexity in an artificial system to cross whatever threshold needs to be crossed for us to call it intelligent. That comes with a secondary challenge of doing it with enough efficiency to run on a dozen watts in a volume of around 1300 ccs.
If we get there (or ignore efficiency), the step aft
Re: (Score:1)
I'd just like to be away from AI and all it will entail if at all possible.
Re: Great prediction from guys who brought you wew (Score:2)
Re: (Score:2)
Fundamentally, our intelligence and self-awareness seem to be emergent properties of a bunch of interconnected neural nets with inputs, outputs, and some basic 'programming'.
Prove it. :)
The simple fact is we don't have a clue how any of this works. People can't stand saying "I don't know," so they'll latch on to anything they think is plausible and insist that this must be how we work. This usually coincides with the current state of the art and will change when something more advanced comes along.
Things can get really stupid, however, when people mistake the state-of-the-art for something that ... isn't. For example, we know for a fact that we aren't a complex but otherwise o
Re: (Score:2)
For example, we know for a fact that we aren't a complex but otherwise ordinary feed-forward neural network, like the kind used in the latest and greatest generative AI baubles
LLMs are not simply feed-forward. Output tokens are fed back to the input. This feedback is in the range of tens of megabytes for GPT-4.
Re: (Score:2)
That's simply not true. What gets "fed back" is the output text, not the output tokens. That is not guaranteed to result in the same tokens at input. For example, if the model outputs the tokens "per" (525) or " per" (583) and "son" (1559), feeding that text back in might get you " person" (1048) or "person" (6259).
Further, that outer loop doesn't change anything about the fundamental nature or capabilities of the system, which absolutely is an ordinary feed-forward NN.
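A quick sketch of that text-vs-tokens round trip (assuming the tiktoken library and its cl100k_base encoding; exact token ids vary by encoding):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    # Suppose the model emitted "per" and "son" as separate tokens.
    emitted = enc.encode("per") + enc.encode("son")

    # Feeding the decoded *text* back in re-tokenizes it from scratch,
    # which typically merges the pieces into a single "person" token.
    round_trip = enc.encode(enc.decode(emitted))

    print(emitted)      # two ids
    print(round_trip)   # usually one id, not the two that were emitted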
Re: (Score:2)
You are right, it is text, not tokens. It would not make sense to compute embeddings again (as the flow chart indicates) if it were not text. That limits the amount of hidden information passed from output to input that could serve as a memory.
I think that the outer loop is a textual memory for the inner neural network. That is a big difference from a simple feed-forward network.
A naive look at a Turing machine is input, "random" access memory, and a state transition function. An LLM has input, size limited
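As a toy sketch of that outer loop (model() here is an invented stand-in, not a real LLM): the network itself is a pure function of its input, and the only thing that persists between calls is the growing text buffer.

    def model(context: str) -> str:
        # Toy "next word" rule standing in for a feed-forward pass.
        last = context.split()[-1]
        return " " + (last.upper() if last.islower() else last.lower())

    def generate(prompt: str, steps: int) -> str:
        context = prompt
        for _ in range(steps):
            context += model(context)  # the text buffer is the only memory
        return context

    print(generate("hello", 4))  # hello HELLO hello HELLO hello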
Re: (Score:2)
There's a lot wrong here that will take quite a bit of time to explain. I'm not sure that you're interested in a real answer anyway, so this will be fairly broad. NNs are universal function approximators, but that does not mean they can approximate any function as you might understand functions from computer programming. They simply map inputs to outputs. That's all they do and all they can do. They do not retain state. This might help [neuralnetw...arning.com]; it's surprisingly beginner-friendly.
While it should be possible to
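A minimal sketch of that stateless input-to-output picture (numpy only; the weights are made-up constants): the forward pass is a pure function, and nothing carries over between calls.

    import numpy as np

    W1 = np.array([[0.5, -0.2], [0.1, 0.8]])   # invented weights
    b1 = np.array([0.0, 0.1])
    W2 = np.array([[1.0], [-1.0]])

    def forward(x: np.ndarray) -> np.ndarray:
        h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer
        return h @ W2                      # linear output

    x = np.array([1.0, 2.0])
    print(forward(x))   # identical on every call: no retained state
    print(forward(x))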
Re: (Score:2)
First, thanks for responding. If you feel I'm bothering you, then just stop responding; no problem for me. I'm reacting to your posts lately because you seem very educated and totally dismiss any possibility of reasoning from an LLM. That is fascinating, because other educated people think it is possible LLMs may eventually reason (e.g. Geoffrey Hinton - Two Paths to Intelligence [youtube.com]).
Yes, correct, I meant "function" in the math sense (not a function as in a programming language). Ok, so it looks like NN can tak
Re: (Score:2)
You're right that I'm very dismissive of the idea of an LLM "reasoning", however you want to define it, though my position is hardly unique. You'll find quite a few smart and well-educated people on both sides. Though it's worth pointing out that you'll find smart and well-educated people who believe all sorts of ridiculous nonsense. Now, I don't blame any layperson for being taken in by some of the impressive output. Experts, however, should know better. (I'm a bit cynical, so I suspect that many of t
Re: (Score:2)
Thanks for the response. It was insightful. My amateur opinion is that LLMs likely will not reason. I just think there is a small chance that they might (likely after modifications to their design). I have two reasons for that. One is that they are likely almost Turing complete (without the unbounded tape). The second is that maybe reasoning can somehow be embedded in natural language, and training can distill this feature.
Option one does not help much because of the way LLMs are trained. Second option does not l
Re: (Score:2)
Re: (Score:2)
I'm not so worried about 'intelligent'. Anything that can figure stuff out - take a complex stimulus and deliver an appropriate response - is intelligent.
What about sentience, self-awareness? We don't understand how those things emerge from the minds of animals, so how are we supposed to tell if a machine has them or is just really good at replicating the results? We just assume other people have them because we do... we'll have no such common ground with a true AGI.
Re: Great prediction from guys who brought you wew (Score:2)
Re: Great prediction from guys who brought you wew (Score:2)
Neurons aren't binary circuits, and we don't understand where all types of memories are stored; recent evidence suggests some may be stored in neural membranes, besides the more talked-about synaptic connections.
I'll also go out on a limb and say Boolean circuits, which could be electric, mechanical, or fluidic, can't experience pain, pleasure, or emotions; at best they can only simulate the appearance of doing so. Digital AI can only be as self-aware and feeling as a rock.
Re: (Score:1)
AGI will be powered by clean, fusion, energy.
Re: (Score:2)
In 2018, almost every car company CEO said that we'd have fully self-driving cars by 2020. That obviously happened, so why wouldn't this also come true?
AGI will be powered by clean, fusion, energy.
I don't understand why CEOs get into that crap; at least they should let the CTOs burn themselves...
Re: (Score:2)
AGI will be powered by clean, fusion, energy.
Running on desktop Linux, no doubt.
Re:Great prediction from guys who brought you wewo (Score:4, Interesting)
I agree.
For example, consider that presently ChatGPT and other LLMs need to be trained on ginormous data sets, beyond what any human being could read in hundreds of years. Yet in many ways humans still do better than ChatGPT at a lot of logic and math problems, even though we are trained on a very small fraction of the data used to train LLMs.
And at some level I can't help but think we're basically anthropomorphizing a parlor trick; a pattern matcher that is so good at predicting how words should go together it almost seems alive to us.
Re: (Score:3)
Each AI generation has generally required a model about ten times the size of the previous generation. This obviously can't go on forever, so new methods will be needed to deliver equivalent or better results with smaller models. But it does speak to the growing complexity. The difference between GPT-2 and GPT-3 is easily visible to anyone who used them. The difference between 3 and 4 is there, but it's not the same visible growth. There are likely diminishing returns to the current growth chart such that re
Re: (Score:2)
I finally found something ChatGPT is actually good at. Make up the most ridiculous, insane tabloid headline you can think of, and tell it to write the article. It will be indistinguishable from the real thing.
Re: (Score:2)
I can't help but think we're basically anthropomorphizing a parlor trick
That's because that's exactly what's happening. It's a very human thing to do. Joe Weizenbaum's secretary famously wanted her sessions with Eliza to be kept confidential. She, like many others, was convinced that the program understood and empathized with her problems.
That was with Eliza -- a simple program that simulated a Rogerian therapist by simply turning the user's statements into questions, using filler statements when a sentence couldn't be parsed, and occasionally repeating something saved from ea
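For flavor, a minimal Eliza-style sketch (the patterns below are invented, not Weizenbaum's original script), just to show how shallow the trick is:

    import random
    import re

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
    FILLERS = ["Please go on.", "How does that make you feel?",
               "Tell me more about that."]

    def respond(statement: str) -> str:
        # Turn "I am/feel ..." statements into questions by reflecting pronouns.
        m = re.match(r"i (?:am|feel) (.+)", statement.lower().rstrip(".!?"))
        if m:
            reflected = " ".join(REFLECTIONS.get(w, w) for w in m.group(1).split())
            return f"Why do you feel {reflected}?"
        # Couldn't parse: fall back to a canned filler.
        return random.choice(FILLERS)

    print(respond("I feel nobody listens to me"))  # Why do you feel nobody listens to you?
    print(respond("The weather is awful"))         # e.g. "Please go on."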
Re: (Score:2)
FWIW (not much) I've been predicting an early AGI in 2035 for over a decade, and haven't seen any reason to change my time estimate.
Note that it will NOT be a human equivalent. It will have different motivations. It will be better than humans at many tasks (they already are) and worse at others. But it WILL be able to generalize its learning to handle the physical universe.
This is said, sort of, tongue-in-cheek, because I don't believe a real AGI is possible, and I also include humans in "not a real ge
Re: (Score:2)
Not to rain on his parade...
Why not? He made a moronic statement that has exactly 0% chance of being true in the next thousand years (unless we devise a radically different form of computing). His parade should be wiped off the face of the earth by nuclear forces.
Re: (Score:2)
You might want to reconsider that statement in light of this headline [reddit.com]
Re: (Score:2)
Why? A fake newspaper headline doesn't tell us anything about the parent's proclamation. It's also worth pointing out that man had already flown by 1903, with both balloons and gliders. What are you claiming anyway? Because one moron said something stupid one time, any similarly structured claim must necessarily be wrong?
Re: Great prediction from guys who brought you wew (Score:2)
Meanwhile in the real world ... (Score:3)
... Bard gets very upset with me (it reads like it parsed and learned from top Reddit trolls, which it probably did) if I tell it that it got the very first digits wrong on basic arithmetic questions.
No evidence for this (Score:5, Insightful)
There are a lot of things lately being called "AI". They are not intelligent (not even "approaching intelligence") by any reasonable meaning of the word "intelligent". In general, these are pattern-recognition devices: they take in a vast amount of human-generated input (books and Wikipedia articles, for example) and find the patterns of what intelligent behavior looks like. They then blindly apply these patterns, without any understanding (or even any attempt at understanding) of what the actual thinking is.
Re:No evidence for this (Score:4, Insightful)
Almost everything called "AI" is using the word "artificial" in the sense of "fake." Just as artificial leather is not real leather, artificial intelligence is not real intelligence. That is what the term has come to mean in common use. So, something does not need to qualify as intelligent in order to qualify as "artificially intelligent."
And that broad meaning is exactly what makes the word useful. If we restricted it to only those things which equal human intelligence in every way, there would be nothing at all. This special meaning implied by "artificial general intelligence" refers to something that doesn't exist and is nowhere near existing, but that is why AGI is not a common-use marketing buzzword.
Manipulation [Re:No evidence for this] (Score:5, Insightful)
It doesn't have to kill us - manipulation through media is all that is necessary. Have us do it to ourselves and the rest are sheep. Not hard to do when just about everything comes through the internet.
That's the part that nobody foresaw. Manipulating people is, at the heart of it, two things: pattern recognition, and access. Pattern recognition is what "AI" is good at-- recognizing what messages work and what don't (and what makes a message one that people pay attention to), and computers can spew out millions of messages across every possible medium that people use to communicate.
(It used to be that spam was copy & paste: flood everybody with copies of an identical message. But with AI, each message can be individually tailored to the person targeted, and the AI will have access to pretty much everything about that individual and what works to make the message hit the target.)
The floodgates are open, the flood isn't here yet, but the storm is coming, and we are completely vulnerable.
Re: (Score:2)
The issue isn't using "artificial" that way. The problem is using it that way while telling potential investors it means something completely different.
Re: (Score:2)
Re:No evidence for this (Score:5, Interesting)
He can say it, but there is no evidence that this is true.
More than that, there is no theoretical basis for this claim.
The difference between current machine learning techniques and truly general intelligence is something we simply don't understand. What's most likely is that there is some crucial theory of general intelligence that we have not yet discovered. Once we discover it, building AGI will probably be easy (assuming it doesn't depend on yet other theoretical breakthroughs). Until we discover it, building AGI will be impossible.
How far are we from that theoretical advance? We cannot know. What would a knowledgeable person making predictions around the time of Isaac Newton's birth have said about when we would understand how things fall? How difficult would it be to build an atomic bomb without Einstein's work?
Someone could find the crucial ideas tomorrow, or it could take centuries. Or maybe they found it yesterday. We simply cannot know. We can be pretty sure they didn't find and recognize it months or years ago.
That said, there is an intensive amount of effort and brainpower going into the search, and our tools for analyzing and understanding the existing form of general intelligence and for quickly building and testing proposed new strategies are advancing at a breakneck pace. Also, there is always the possibility that we accidentally succeed, without first developing the necessary theory -- after all, evolution did it via randomized variation and selection.
So I think it's reasonable to say that AGI will be created, but no one can say when. We best hope that it doesn't happen too soon, though, or that the same theory that teaches us how to build AGI also teaches us how to solve the alignment problem, or that the theory puts an upper bound on possible intelligence that isn't too far above human level. Because otherwise, we're toast.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
You're talking about task-specific models. I was talking about the leap to actual AGI.
No, having a thousand task-specific models is the leap allowing AGI to actually be implemented. That's not to say it's the only way it could be done, but why is that approach any less valid? After all, life likely just optimized some simple systems at first and gradually diffused through possibility space until thousands of systems were being self-regulated, with interdependencies, long before intelligence took root.
Re: (Score:2)
Re: (Score:1)
It's very easy to assume current AI is on "the ladder," "the road," and simply needs to ascend from amoeba to insect to ape to superhuman. It just needs to keep incrementing, right?
The Chinese room is not on that ladder. You could sooner build up your computer's immune system by exposing it to small viruses. There is a gross misunderstanding of what's under the hood.
It is indeed possible to create hatchery conditions to grow along the ladder that has intelligence at the end, just not with our shitty crude f
Re: (Score:2)
They are not intelligent (not even "approaching intelligence") by any reasonable meaning of the word "intelligent"
Please provide at least one reasonable meaning of the word "intelligent," because nothing you have said above is motivated in any way. As it stands, it is just something you say without any actual value.
I'm also saddened that such a flimsy, unmotivated post gets +5, Insightful.
I totally believe him (Score:3)
The man ranges from criminally bad at picking good investment opportunities to mildly insane. I wouldn't trust him to predict when he's gonna take his next dump.
No it won't! (Score:3)
Re:No it won't! (Score:4, Insightful)
There is zero cognitive intelligence in anything the Marketers and Salespeople are calling AI today
Yeah, but in fairness, how would they know? There isn't a lot of intelligence in marketers and salespeople either, and it takes one to know one.
Re: (Score:2)
There is zero cognitive intelligence in anything the Marketers and Salespeople are calling AI today
There's very little cognitive intelligence in the Marketers and Salespeople.
Good (Score:2)
Re: (Score:3)
No. We'll need to develop fusion so as to have enough power to train the AGI.
Re: (Score:2)
Re: (Score:3)
Most if not all serious fusion endeavors are doing this. Lawrence Livermore Labs used it to address problems achieving net-positive output [nvidia.com] at the NIF. DeepMind trained a model to control fusion reactions [cnbc.com] in a tokamak reactor. Other stories have discussed researchers tasking AI to help develop reaction chamber shapes or parts to reduce the need for physical iteration.
Let's just say he's right for a sec (Score:2)
75% of what SoftBank does is nonsensical. If AGI does come along, it's going to eat SoftBank for a snack.
Trust the source! (Score:3)
I know I trust the CEO of a bank over a credentialed AI researcher to advise me on how the technology is progressing...
Re: (Score:1)
SoftBank isn't a bank ...
Well, certainly... (Score:5, Insightful)
would surpass human intelligence by a factor of 10,000
I guess it will depend on the human. Some humans are apparently only intelligent enough to utter meaningless statements, and even so, they reach high positions in the world, like CEO of a big bank.
Lacking a clear definition of intelligence, the statement is not even wrong. If the idea is that some computer will solve an IQ test in a 10,000th of the time a human needs, then, I suppose, it is true. Computers already beat us at chess, considered a brainy game, so they are already more intelligent than us; no need to wait. The word "intelligence" is used as a throwing weapon, like "terrorist" or "nazi". Its meaning is reduced to whatever the speaker wants it to mean.
Of course there will be computers more intelligent, in almost any sense, than a human being. However, if that computer takes three stadium-sized data centers and consumes the power of a hefty nuclear station, I'd question what the point is. Just breed a more intelligent human being, who will consume just a couple of sandwiches.
Re:Well, certainly... (Score:4, Interesting)
Of course there will be computers more intelligent, in almost any sense, than a human being. However, if that computer takes three stadium-sized data centers and consumes the power of a hefty nuclear station, I'd question what the point is. Just breed a more intelligent human being, who will consume just a couple of sandwiches.
If the vastly-smarter-than-humans computer is huge and power-hungry, you just direct it to design a more efficient version of itself. Maybe you can't get 10,000x smarter without 10,000x the size and power consumption, but the size and power consumption of 10,000 human brains is a lot smaller than three stadiums and a nuclear power plant's output. And you can probably do better than what evolution managed to find via random walk.
Re: (Score:2)
It depends on your metrics. When comparing the energies needed to train various intelligences, it's difficult to beat something that runs on Cheetos and Mountain Dew.
Re: (Score:2)
It depends on your metrics. When comparing the energies needed to train various intelligences, it's difficult to beat something that runs on Cheetos and Mountain Dew.
Today.
Re: (Score:2)
If the vastly-smarter-than-humans computer is huge and power-hungry, you just direct it to design a more efficient version of itself
Well, I don't know. You are intelligent, but can you design a more efficient version of yourself? If not, why do you assume the computer will be able to?
Re: (Score:2)
If the vastly-smarter-than-humans computer is huge and power-hungry, you just direct it to design a more efficient version of itself
Well, I don't know. You are intelligent, but can you design a more efficient version of yourself? If not, why do you assume the computer will be able to?
If humans are smart enough to design and build a smarter-than-human intelligence, then pretty much by definition that intelligence will be capable of doing an even better job, particularly when it's given a head start by handing it everything that humans have already discovered, including everything we know about our own brains.
Re: (Score:2)
Some of the largest supercomputers on offer can simulate ten million or so neurons at a speed of 1 simulated second every 10 wall-clock minutes. You could build a computer today that simulated the entire human brain with a biologically accurate simulation, but it would be roughly 5 miles in diameter, 200 feet high, and it would consume a lot of power.
Now, supposedly, the human brain shrank around 12,000 years ago. This has been put down to greater social structures making personal brain power less useful and higher
Re: (Score:2)
It should be possible to find the mutations involved, if this hypothesis is correct, and I'm fairly sure there are unethical geneticists who would be fine with reversing any such reduction.
It should be possible, but I'm not sure humanity is served by more intelligence points in throwing rocks and distinguishing shadows from predators...
In other words, fat chance the lost brainpower was employed purely for general intelligence. Much more likely it was used for specialized skills required for dealing with the realities of survival in the wilderness.
Re: (Score:2)
It does depend on the human, but it depends a lot more on how you measure the intelligence. Recall that ChatGPT passed the bar exam that lawyers study for years to pass. And few lawyers are really stupid. (Greedy and short-sighted are different from stupid.)
Let's stop taking VC predictions seriously (Score:1)
Re: (Score:3)
There's no real reason to take his predictions seriously, but this time I think parts of his prediction are correct. I do expect an elementary AGI to be extant around 2035 (plus or minus 5 years). But it will only be "smarter than human" in some areas. It will be considerably weaker than humans in other areas. A key word here is "general"; that's what we don't have so far. Another problematic area is motivations. AFAIK, we're still flailing around in the dark in that area. Motivations need t
My prediction (Score:4, Insightful)
AI will increasingly train on its own hallucinated datasets, eventually becoming a techno-intellectual inbred. Remarkably, it'll still be smarter than many people.
AI surpassing average human intelligence? (Score:3)
10 years? (Score:2)
If SoftBank thinks AGI will arrive in 10 years, that means it will arrive in either 5 or 50 years.
Not convinced (Score:2)
I looked up recent attempts to simulate the brain: about ten million simulated neurons at 1 simulated second every few minutes, on one of the top supercomputers. And that won't be a biological neuron system; that'll be a classic neural-net program. The brain has roughly 86 billion neurons, and just to reach the same speed as the brain you need to clock in at 1 simulated second per second.
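As a rough back-of-envelope (every number here is a loose assumption: ~86 billion neurons, ~10-million-neuron sims at about a 300x slowdown, compute doubling every two years):

    import math

    neurons_brain = 86e9
    neurons_sim = 1e7
    slowdown = 5 * 60          # wall-clock seconds per simulated second

    shortfall = (neurons_brain / neurons_sim) * slowdown
    years = 2 * math.log2(shortfall)   # doublings needed, ~2 years apiece

    print(f"~{shortfall:.1e}x short of real-time whole-brain simulation")
    print(f"~{years:.0f} years of doubling to close the gap")  # ~43 years, i.e. the 2060s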
Based on the current rate of progress, I honestly don't see full brain NNs being simulated in real time this side of 2063. And bi
Re: (Score:3)
You are definitely right that that approach will not be successful within the decade. Your mistake is thinking that's the only viable approach. That might be the optimal approach if we wanted to build an artificial human... but we don't know enough to even get started in that direction. Lots more basic research would be needed. But when you interact with someone (say, over the internet) you can't analyze things at that level anyway. An implementation of a higher-level analog should suffice to provid
We are already there (Score:2)
intelligence
noun
1. the ability to acquire and apply knowledge and skills.
I suppose it depends upon your arbitrary definition of intelligence. By the standard definition above, and having passed the Turing Test, we already have machine intelligence.
Consciousness is another matter open to debate.
consciousness
noun
1. the state of being awake and aware of one's surroundings.
In my view, consciousness is a combination of intelligence and awareness of the real world through st
Re: (Score:2)
No, the actual Turing test has never been passed by a computer. (OTOH, close analogs have often been failed by a human.)
There are lots of "weak versions of the Turing test" that have been passed. If you weaken it enough, the first version of Eliza passed it. (The caller tried to get her fired for being insubordinate.) But the actual Turing test, or a close analog, has never been passed by a computer. And several weak versions have been failed by various humans.
The Turing test, however, was not intended
Re: (Score:2)
Re: (Score:2)
The goalposts have been moved. Turing's actual test was passed quite a long time ago. A "strong" version with a knowledgeable inquirer was passed quite publicly by that Google engineer who insisted their language model was sentient.
The comments here are fairly typical. They insist that machine learning algorithms are "parrots," "just statistics," or "Chinese rooms"; basically, they can't be intelligent because we know how their components work. This is a silly argument. It's also factually incorrect in the "Ch
Re: (Score:2)
When? Where? That's a claim to a specific kind of challenge, not the general "fool someone who isn't expecting things". It could include questions like "What makes a vorpal blade better than a broadsword?", and other things specifically designed to reveal the difference between humans and computers (but which humans often fail, oops!).
Re: (Score:2)
Turing wrote a paper. You can look it up.
Re: (Score:2)
Turing wrote a paper, but he did not have a computer that would pass the test. Nobody has built a computer+program that would do so thus far.
GitHub Copilot suggests otherwise (Score:2)
An earlier post suggested that current AI is just pattern recognition within the searchable data. I tend to agree. I've been trying to pair program with GitHub Copilot for the last few months; I can get code snippets that are 80% complete at best, and I'm never able to give a query that puts it across the finish line.
Some observations:
As I request changes to the code snippets, I see changes to variable names and other program logic unrelated to my last request. This suggests that it's not actually rememb
42 (Score:2)
The correct answer is 42. I don't need AGI to tell me that.
AI cannot overcome inherent problems (Score:5, Interesting)
Current AI, for all its cleverness, is basically regression. As a number of AI experts have noted, work on inference and reasoning stalled when progress on neural-network approaches took off.
The problem is that this approach assumes there is clear, unambiguous, objectively definable truth that can be used to define a training set for the AI. In reality, many if not most interesting problems, and certainly the hard ones, do not lend themselves to this at all. For example, imagine training an AI on the scientific literature of the past 100 years. Much of that literature will be considered wrong by present standards, and much of the rest will be small-scale and speculative. The truth isn't something that exists objectively; it's something we construct out of a combination of verifiable facts, philosophical and epistemological frameworks, our own biases, our own emotions, and often randomness.
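A minimal illustration of the point (invented data): a least-squares fit treats its labels as ground truth, so if the labels encode yesterday's consensus, the model faithfully reproduces yesterday's consensus.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 3.9, 6.2, 7.8])    # the labels simply ARE "the truth" here

    slope, intercept = np.polyfit(x, y, deg=1)
    print(slope * 5.0 + intercept)         # extrapolates the fitted pattern
    # Nothing in the procedure can ask whether y was right to begin with.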
It is possible that a general AI could emulate all of that, but there's a pretty decent chance that doing so would bind that AI to all the problems and biases that exist in human intelligence. And we know almost nothing about other intelligences - what and how dolphins or elephants take hold of the world, for example. We've mostly assumed away that concern by counting on the historical dismissal of these beings' intelligences.
My guess is that AI will rapidly start to go in circles. It has pretty much already consumed much of human writing and still has no concept of truth whatsoever. This is likely to lead to a torrent of bullshit - basically spam in everything - that will make it that much harder to engage in truth-seeking and truth-making.
It may get better at some things that involve searching parameter spaces and combinatorics; that will doubtless be useful.
I just am not convinced that reality, knowledge, and epistemology actually lend themselves to the kind of AI that people are envisioning.
Re: (Score:2)
What is the job of this assistant?
If it's to recognize simple tasks and put them in a task list, we're just about there, and the stuff you describe is about improving the assistant's ability to communicate with you.
If it's to start doing the task, the assistant will rapidly run into problems of decision-making. Should it buy you ice cream at the store because you're feeling a little down and could use some ice cream, or should it buy you kale because you haven't really done as good a job eating your vegetables
Who cares? (Score:2)
As the great philosopher said... (Score:2)
Predictions are hard, especially about the future
Fusion soon afterward (Score:1)
AGI will be here in ten years and it will be used to design a working power plant employing nuclear fusion.
well my qualifications are in buggy whip making (Score:2)
Riddle me this... (Score:2)
A CEO whose degrees are in exactly what? And what computer science has he studied?
How different is this from a self-proclaimed expert on vaccinations who's done all his "research" on Faux Noise?
Re: (Score:2)
His degree is in economics. He knows how to make money, and this is just part of that.
AGI is ten years away... (Score:2)
...and always will be.
At least this is entertaining (Score:2)
Son said he believes AGI will be ten times more intelligent than the sum total of all human intelligence
So, what metric is that 10x intelligence measured by? IQ? And what does a sum of intelligence mean? Is the sum of total human intelligence in a large country orders of magnitude greater than that of the smartest individual human?
It is wrong to say that AI cannot be smarter than humans as it is created by humans
Perhaps my intuition is different from Son's, but I think that a creation is generally not as smart as its creator. In fact, I can't think of any creation that is smarter than its creator.
Then again, the thought is intriguing. If a creation could surpass the intelligence of its creator
Re: (Score:2)
Perhaps my intuition is different from Son's, but I think that a creation is generally not as smart as its creator. In fact, I can't think of any creation that is smarter than its creator.
Cannot a child be more intelligent than the parents who created it?
Defining AGI (Score:2)
What exactly is AGI? This prediction relies heavily on the precise definition of AGI, which is not clearly established. So in 10 years, you can say the prediction was confirmed by defining AGI to be whatever AI technology we have achieved after 10 years.
In some ways, AI is already 10x smarter than humans. It can write code in just about every programming language known to man. It can write job descriptions and summarize long articles in a flash. It can search the web for answers on any subject and quickl
That's not how any of this works ... (Score:2)
Let's look at Einstein's thought experiments that produced special and general relativity. Thought experiments.
When a computer can gather information and cogitate on it for a while and say, "Hey, guys, here's a new thought ..."
The messy part is that the computer would be thinking only about the work humans have already produced. That would be useful, but the computer, in order to get "intelligent," would have to "think" on its own. Einstein used prior human work products, but the thought experiments were tr
Most of you set the bar too high (Score:2)
... for what is considered "intelligence." A lot of comments saying "this is just pattern recognition" seem to miss the point that most of human cognition is pattern recognition.
In fact, I bet most of you pooh-poohing these comments by Masayoshi Son couldn't even give a proper definition of intelligence (without researching a specific counterexample) that deviates substantially from what GPT-4 is already doing. And in that research process, you would probably find that GPT-4 can provide the same - or better
Remember when... (Score:2)
Should arrive about the same time as... (Score:2)
SoftBank CEO Says AGI Will Come Within 10 Years. (Score:1)