Ray Kurzweil Still Says He Will Merge With AI
Renowned futurist Ray Kurzweil, 76, has doubled down on his prediction of the Singularity's imminent arrival in an interview with The New York Times. Gesturing to a graph showing exponential growth in computing power, Kurzweil asserted humanity would merge with AI by 2045, augmenting biological brains with vast computational abilities.
"If you create something that is thousands of times -- or millions of times -- more powerful than the brain, we can't anticipate what it is going to do," Kurzweil said. His claims, once dismissed, have gained traction amid recent AI breakthroughs. As Kurzweil ages, his predictions carry personal urgency. "Even a healthy 20-year-old could die tomorrow," he told The Times, hinting at his own mortality race against the Singularity's timeline.
"If you create something that is thousands of times -- or millions of times -- more powerful than the brain, we can't anticipate what it is going to do," Kurzweil said. His claims, once dismissed, have gained traction amid recent AI breakthroughs. As Kurzweil ages, his predictions carry personal urgency. "Even a healthy 20-year-old could die tomorrow," he told The Times, hinting at his own mortality race against the Singularity's timeline.
Meet them half way (Score:3, Insightful)
"Only if we meet them half way." - Dave Snowden
I see the Silicon Valley hype machine is still in top gear. I guess they need to raise more funds for even bigger LLMs. Making them bigger won't make them any less dumb.
Kurzweil's buffoonery pops up every time (Score:2, Insightful)
Re: (Score:2)
Well, in fairness, most predictions suggest far too large/widespread changes in the short term and far too small changes in the long term. So what if he was off by 10-15 years? Most of these have come to pass in some form or another. Regarding your comment about boots on the ground: looking at the thousands of drones going about in Ukraine, he got the "unmanned" part right. They just don't have the autonomy or "intelligence" yet.
Comment removed (Score:4, Insightful)
Re: (Score:2)
Well, going by the list... we have 11 predictions. Most of them hadn't happened by 2009, but they had by 2024 for sure.
Failed ones first:
- Europe is several years ahead of Japan and Korea in adopting the American emphasis on venture capital, employee stock options, and tax policies that encourage entrepreneurship, although these practices have become popular throughout the world. Yeah, nope. Europe is still mostly going with lots of regulation.
- Personality designers are in demand, and the field constitutes a growth
Re: (Score:3)
I'd say the borderline ones are just off: there's not a whiff of brainwave-based music, except maybe one unpopular thing that purported to do so, without any evidence it actually did anything intelligent. The economy has blown up and come back together, and one would *of course* expect record-high numbers in a system built around targeting low inflation; the numbers go up by design. The dot-com bust and the 2007 recession both happened within his ten-year prediction window and went way beyond "corrections".
Yes, d
Re: (Score:2)
Of the things you say have happened already, most if not all had already happened in some form before he made his 'predictions'.
-You could 'stroll' for information 'online' back in the 80s over a modem connection on your TV. It already existed, just not very widely implemented.
-Phone sex was there before, just not with high-quality live video. In this fashion you can put the word 'sex' after every single mode of human communication and it will be true. If we invent a way to communicate with each o
Re: (Score:3)
Let's go one by one:
- Translating Telephone technology (where you speak in English and your Japanese friend hears you in Japanese, and vice versa) is commonly used for many language pairs.
Well, still not there (in fact it can never fully get there: truly real-time translation is impossible, since differences in sentence structure force a latency before translation can begin). We might be "close enough", with a margin of over 2x the interval of the prediction. The prediction is hardly novel, though; even the most casual viewers were exposed to the concept through Star Trek's universal translator. At the time of writing there were already speech-to-text systems being demonstrated/
Oh lord (Score:2)
It's wonderful, isn't it? (Score:2)
People who can't come to grips with mortality are easy marks.
IMO (Score:2, Informative)
Re: (Score:2)
Re: (Score:2)
Inadequate in relation to what?
Physics (Score:2)
LOL (Score:3, Interesting)
Intelligence requires 3 things:
1. memory/knowledge base
2. reasoning outside of immediate context
3. imagination
AI is so stupid that if a bunch of idiots say the world is flat, it will believe them.
Yeah but humans are also gullible (Score:2)
Re: (Score:2)
Don't fall for it. The fact that reasoning and imagination are not defined by the OP isn't an invalidation of what was said. In fact, that we have words for those concepts underscores the huge gap between human intelligence and AI.
Also, statement 2 said "reasoning outside of immediate context", not just reasoning. It's the remaining words that carry the meaning. The idea that we can consider one context and how it might apply to another is remarkable and requires "imagination". An LLM could do this, ac
Re: (Score:2)
LOL, you read Schopenhauer's "Art of Being Right" or something? I said "immediate context", you changed it to "queried context" .. that's stratagem #3 from Schopenhauer, "generalize your opponent's specific statements". Notice how I caught that?
Re: (Score:2)
That ain’t bad.
Probably Unlikely (Score:5, Interesting)
We still don't know how the human brain works - our present understanding of it is extremely clunky and mostly based on conjecture. Rapid progress is being made, but like many parts of life science, the more you dig, the more complex you realise it is. Just consider how there are now theories that all the 'junk' sequences in DNA may actually have uses in other parts of gene expression.
What we have built with computers is really impressive, but just making some kind of blind connection that 'computers are powerful, so we should be able to make a human brain soon' is dumb. Classical computers can't even fold protein sequences of any significant length, and without some kind of algorithmic or quantum breakthrough they never will, no matter how much we keep increasing their FLOP rate. The same thing happens with any kind of molecular simulation - you quickly run up against big-O issues that cannot be fixed by increasing computing power. Yet your body can fold protein sequences in milliseconds. It may turn out that the ability to do this sort of chemical processing is a requirement for sentience, or it might not. But we have no idea really.
Until we better understand the problem (what consciousness is), making these sorts of predictions is dumb. At the moment we can barely predict when computers will be able to fold a pile of washing.
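To put a rough number on the big-O point, here's a Levinthal-style back-of-envelope sketch (the ~3 conformations per residue is a toy assumption, not a measured figure):

    # Back-of-envelope illustration of the big-O wall in brute-force protein
    # folding. Assumes ~3 backbone conformations per residue (toy figure).
    def conformations(n_residues: int, states_per_residue: int = 3) -> int:
        return states_per_residue ** n_residues

    for n in (10, 50, 100, 300):
        print(f"{n:>3} residues: ~{conformations(n):.3e} conformations")
    # Even at 10^15 evaluations/second, the ~5e47 states of a 100-residue
    # chain would take ~1.6e25 years to enumerate. No FLOP-rate increase
    # closes that gap; only a better algorithm does.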
Re: (Score:2)
Classical computers can't even fold protein sequences of any significant length, and without some kind of algorithmic or quantum breakthrough they never will, no matter how much we keep increasing their FLOP rate. The same thing happens with any kind of molecular simulation - you quickly run up against big-O issues that cannot be fixed by increasing computing power. Yet your body can fold protein sequences in milliseconds. It may turn out that the ability to do this sort of chemical processing is a requirement for sentience, or it might not. But we have no idea really.
It seems that you are comparing a calculation with a physical event... Turned the other way: the human brain cannot calculate the exact way a CPU fails when it overheats, but the CPU itself can fail in milliseconds if overheated. Does that mean the human brain is inferior?
This actually has little to do with "AI" (Score:3)
Augmenting your mind with a computer is something a lot of people have already been doing... for decades. A question pops into your mind you can't answer, so you get to a computer keyboard and ask it. This ranges from simple web search engines, via specialized query languages, to writing a short program to answer the question.
It's a trend that has been slowed by the advent of "smart"-phones where the input is limited almost to the point of uselessness.
Of course what we now also see is that unrestricted capitalism will essentially ruin those ideas. While in the past we thought that brain interfaces would stream ads directly into your brain, we now see people enslaved by "smart"-phone notifications.
Re: (Score:2)
That's simply using tools to access knowledge created by other humans, a thing we have done since the first written word. The only difference is that now it's a bit faster.
And AI is the same: a tool for retrieving info we create, Ray's delusions notwithstanding.
Re: (Score:2)
Well, so far AI text generators are fairly bad at storing knowledge. However, maybe they could translate "natural language" into query languages in an interactive way, asking the user whatever questions are needed to clarify the query.
Arms race (Score:2)
>"Kurzweil asserted humanity would merge with AI by 2045, augmenting biological brains with vast computational abilities."
I think that is nonsense. No way that is happening in 25 or so years. Not saying it won't happen at some point, but not that quickly. And it is likely we will have enough fear of the danger of that type of technology, and rightfully so, to prevent mass adoption. That is on top of just how little we really know about the human brain/mind.
>"If you create something that is thousan
Money (Score:2)
AI... (Score:2)
battle of the big thinkers (Score:2)
I tend to agree more with this one:
https://techcrunch.com/2024/06... [techcrunch.com]
That moron is still around? (Score:2)
Well.
Re: That moron is still around? (Score:3)
Re: (Score:2)
Indeed. The comparison to Chopra does fit. But, like every peddler of pseudo-profound bullshit, he finds fanbois who completely fail at fact-checking but want desperately to believe.
Re: (Score:2)
Hey, smart people do not always stay that smart for all time. He could have been super smart in the past. Oh, and I am not saying old people can't be way smarter than the average young adult - some people just degrade much faster than others.
Furthermore, a smart computer person can remain brilliant and capable but be a complete idiot OUTSIDE their domain. Unlike most things, computers overlap with absolutely every topic, so you can have a brilliant person and their work wasted on idiotic goals. Such as tryin
Re: (Score:2)
Ok, can you live better with "moron savant"? Although from my reading it looks more like he is more limited than a savant would be. He did pick some low-hanging fruit in his early career, but he certainly has a flawed self-evaluation and no clue how wrong he is on many things. He does have some "guru" skills, so many mistake his statements for great wisdom. They clearly are not, as becomes obvious as soon as you have actual fact-checking abilities.
Nope (Score:2)
Be EMULATED by AI, sure.
Merge with? No. At his age, it's far too far off for him to be augmented with AI in any but the most deluded fantasy, and augmentation is the first step to 'merging'.
My kids might one day, if they're unlucky enough, have accidents that end up with bits of their brains replaced by AI.
Death vs Duplication (Score:2)
Yea, no... (Score:2)
For endless reasons. I mean, if he wants to train an LLM on all his, err, musings, go ahead, but nobody is keeping that going for long.
Frankly, a bot that just spews out variations of "I'm so smart" and "the singularity is a thing because (random)" would be just as good.
People get old, they die. When they can die well, that's a good thing.
Common Denominator (Score:2)
As he feels the cold hand of death creeping closer, his belief in salvation strengthens.
But he will find the same fate as all who have striven for immortality. The same fate that awaits us all. There's nothing more natural than death. We were born for it.
Re: (Score:2)
Meanwhile, a newborn AI is getting groomed, er..., trained, to merge with a 97-year old man in 2045.
Re: (Score:2)
Oh don't worry, it'll do the rational thing and reject him.
Capitalism and AI and Kurzweil / Accelerationalism (Score:2)
About twenty years ago I wrote to Kurzweil about how the capitalist-driven vision he had of AI being created by hyper-competitive venture-capital-funded corporations was problematic for various reasons, but he would have trouble seeing that since he himself had succeeded financially in that setting.
My position remains that our moral trajectory coming out of any singularity may have a lot to do with our moral direction going into one -- and so we should do everything we can right n
Re: (Score:2)
I have always considered Roger Penrose to be deeply wrong and misguided. As I age more, however, I gradually begin to hope more and more that he's right and AI is basically computationally impossible.
Re: (Score:2)
And all of his aches and pains will be instantly relieved, as he is relieved of all feelings.
Sounds cold and boring.
Re: (Score:2)
Re: (Score:2)
It's that last bit. This is one case where historical performance does predict the future.
Re: (Score:2)
Re: (Score:2)
Who is the oldest person in your family? In any family? Historic performance says you are going to die, everyone is going to die. So, yes, wormfood if you like.
Re: (Score:2)
you would still be sensing the world/universe and one would presume you would be able to speak as well
Thus creating a species of immortal opinionated entities, forever confined in an online world.
Re: (Score:2)
Descartes' Error (book) & Einstein on emotions (Score:3)
https://en.wikipedia.org/wiki/... [wikipedia.org]
"Damasio refers to Rene Descartes' separation of the mind from the body (the mind/body dualism) as an error because reasoning requires the guidance of emotions and feelings conveyed from the body."
Related by Einstein: https://sacred-texts.com/aor/e... [sacred-texts.com]
"For the scientific method can teach us nothing else beyond how facts are related to, and conditioned by, each other. The aspiration toward such objective knowledge belongs to the highest of which man is capabIe, and you will ce
Re:Kurzweil has an impressive track-record ... (Score:5, Insightful)
What metric, exactly?
Current "AI" is a very clever pattern extraction and next word calculation machine. There is absolutely zero intelligence going on, artificial or otherwise.
My puppy is infinitely more intelligent than any LLM will -ever- be.
There has (literally) been zero progress in developing AGI. None. Zip. Zero. Nada. We don't even know what technologies would be required to build an AGI.
Put down the sci-fi books and avoid the self-promoting, book-selling cultists like the intellectually toxic plague they are. Read some real books by people who are in the field, not some sci-fi author's nonsense.
Re:Kurzweil has an impressive track-record ... (Score:5, Interesting)
*You* are a very clever pattern extraction and next-word calculation machine.
This isn't an exaggeration. Every time you listen to language, read, or even think about anything linguistic, your brain is doing next-word prediction. It is directly observable in studies of the brain. And indeed, transformer models are superb at modeling human next-word prediction, with some studies showing error rates of effectively zero, within the noise threshold.
The entire process of human learning is based on prediction. Every neuron is constantly attempting to predict the behavior of the neurons along its axon connections. "Training" feeds back from the ground truth of peripheral neurons. Every time you reach out to put your hand on a table, you have a prediction of what every sensory nerve is going to feel, and when. If your hand touches the table at a slightly different point than predicted, the error flows back to update your model of where the table is. By contrast, if your hand were to outright pass through the table, the error is going to continue to propagate back and radically alter your entire world model. And dreams are what happens when data from peripheral nerves is absent; predictions continue, but are ungrounded by external inputs, and as a result steadily drift without guidance.
The inputs of your auditory nerve are predicted. These predictions are based in part on next-word prediction. Next-word prediction is based in part on next-topic prediction, and on and on. Let me reiterate: you ARE a predictive engine. It is how you learn. You could not function without a continuous stream of predictions.
With respect to words specifically: they do not exist in a vacuum. Words are a reflection of the world that created them. You cannot reliably predict text without a good world model and all of the conditional fuzzy-logic paths that weigh into every decision. Nor can you do coherent text prediction by only considering the immediate subsequent word. If your next word is either "a" or "an", you have to know in advance what word is going to come after it, or you're going to commit yourself to a nonsensical subsequent word half the time.
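A toy sketch of that lookahead problem (the heuristic is invented for illustration and deliberately crude):

    # The article can't be chosen greedily: it depends on the word that
    # comes *after* it, so the predictor must look ahead.
    def choose_article(next_word: str) -> str:
        # Crude spelling heuristic, fine for a toy example (real usage
        # depends on pronunciation: "an hour", "a unicorn").
        return "an" if next_word[0].lower() in "aeiou" else "a"

    for noun in ("apple", "banana", "orange"):
        print(choose_article(noun), noun)  # an apple / a banana / an orange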
What you get without a world model, or without operating in a conceptual space that considers the entire task, is Markov-chain text prediction, like autocomplete, with its random rambling. What you don't get is Transformers.
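For contrast, this is roughly all an autocomplete-grade Markov chain does - a minimal order-1 sketch, with a made-up corpus:

    from collections import defaultdict
    import random

    # Minimal order-1 Markov chain text predictor. It only counts which
    # word follows which; there is no world model anywhere in it.
    def train(corpus: str) -> dict:
        chain = defaultdict(list)
        words = corpus.split()
        for cur, nxt in zip(words, words[1:]):
            chain[cur].append(nxt)
        return chain

    def ramble(chain: dict, word: str, n: int = 8) -> str:
        out = [word]
        for _ in range(n):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    chain = train("the cat sat on the mat and the dog sat on the rug")
    print(ramble(chain, "the"))  # e.g. "the cat sat on the dog sat on the mat"

There is no conceptual space anywhere in there; it can only parrot transitions it has literally seen.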
Every neuron in an ANN is a fuzzy binary classifier that subdivides its input field with a fuzzy hyperplane. Each in effect asks a question, or a superposition of questions, about its inputs and yields some point on the continuum between "yes" and "no". The subsequent layer builds its new questions and answers off the results of the previous layer's questions and answers, thus building increasingly complex questions with each subsequent layer. The term for asking and answering questions, and branching decisions based on them, is logic, not statistics.
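One such fuzzy hyperplane classifier, as a sketch (the weights here are toy values):

    import numpy as np

    # One artificial "neuron": w and b define a hyperplane; the sigmoid
    # turns signed distance from that plane into a soft yes/no.
    def neuron(x: np.ndarray, w: np.ndarray, b: float) -> float:
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # in (0, 1): "no" .. "yes"

    w, b = np.array([2.0, -1.0]), 0.5
    print(neuron(np.array([1.0, 0.0]), w, b))  # ~0.92: confidently "yes"
    print(neuron(np.array([0.0, 3.0]), w, b))  # ~0.08: confidently "no"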
Ever since Word2Vec and GloVe we've combined this with the concept of latents / embeddings / hidden states, which - while nominally vectors - are basically equivalent to a pinch in a network that forces the data to be generalized by limiting the size of the passthrough space. These are conceptual spaces in which mathematical operations can be conducted on concepts themselves. King - Man + Woman ~= Queen, etc. Cosine distance measures the degree of conceptual relationship between concepts. Scaled dot products unify disjoint latents. Etc.
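The classic arithmetic, sketched with invented 3-d vectors standing in for real Word2Vec/GloVe embeddings:

    import numpy as np

    # Made-up toy embeddings; real ones have hundreds of dimensions.
    emb = {
        "king":  np.array([0.9, 0.8, 0.1]),
        "man":   np.array([0.5, 0.1, 0.1]),
        "woman": np.array([0.5, 0.1, 0.9]),
        "queen": np.array([0.9, 0.8, 0.9]),
    }

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    target = emb["king"] - emb["man"] + emb["woman"]
    best = max(emb, key=lambda w: cosine(emb[w], target))
    print(best)  # "queen" -- the nearest concept to king - man + woman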
With Transformers, we add the attention mechanism to each token at each layer. This lets the network choose to look at whatever weighted mix of other tokens it wants. In effect, instead of just asking questions and branching, as in a standard linear DNN, it can effectively implement algorithms. It also bypasses a weakness of non-attention models, wherein the impacts of recent tokens dominate your current state; a Transformer can just as readily look back at the start of a text thousands of tokens ago as at the previous token. And each attention block
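For reference, scaled dot-product attention itself is small enough to sketch in a few lines (toy shapes, not from any real model):

    import numpy as np

    # Each token's query is compared against every key, and the resulting
    # softmax weights pick a weighted mix of values -- near or thousands
    # of positions back, it makes no difference to the mechanism.
    def attention(Q, K, V):
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                   # (seq, seq) similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax per row
        return weights @ V                              # weighted mix of values

    rng = np.random.default_rng(0)
    seq, d = 5, 4
    Q, K, V = (rng.normal(size=(seq, d)) for _ in range(3))
    print(attention(Q, K, V).shape)  # (5, 4): one mixed vector per token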
Re: (Score:2)
That part about dreams you completely made up.
Mimicking the output of a process, mostly, sometimes, does not mean you have modeled the process; it might mean you've found another way to do about the same thing.
Re: (Score:2)
I did not. [philpapers.org]
Re: (Score:2)
https://www.nature.com/article... [nature.com]
Re: Kurzweil has an impressive track-record ... (Score:2)
And? Any reason you posted an article having nothing to do with the topic?
"Cause" and "purpose" are entirely different topics. There's a growing consensus around the *cause* (the same predictive processing that hapoens while waking). But there's a million and one theories as to the *purpose* (if there is any purpose at all).
Re: (Score:2)
> there's a million and one theories
Precisely. Theories.
Re: (Score:2)
In case you've forgotten, you are the one who changed the topic from the cause to the theories of purpose.
Re: (Score:2)
Are you claiming that your link contains proven facts?
Re: (Score:2)
From your link:
"If our proposal proves to be theoretically robust, it might serve as a springboard for a more general theory of cognition..."
The author calls it a proposal and goes on to say there are areas for research. More research, might, maybe... yep, you've got it all wrapped up in a little code. Next week you'll have consciousness and emotions, in silicon, I'm sure.
Re: (Score:2)
You have zero reading comprehension. That sentence is about IIT, not PP (IIT is about consciousness, something entirely irrelevant to the topic of discussion here). I linked that paper because I didn't want to spend more than 2 minutes digging through the research for an example and it mentioned a number of papers and a brief history of PP - not for the paper itself.
Re: (Score:3)
Re: (Score:2)
Possibly. Predicting the near future has the advantage of natural, cheap feedback, so it avoids the problem of labeled training data.
Agreed, a Markov chain that uses a small context window will return simple information-theory predictions based on a few words. A Markov Ch
Re: (Score:3)
You cannot reliably predict text without a good world model and ...
I agree with everything else you wrote, but the above statement is false.
The whole point of LLMs (remember the "Large Language" part?) is that, with a large enough corpus, you CAN predict text from the symbols and syntax alone.
This was already demonstrated ages ago with LLM precursors like LSA (Latent Semantic Analysis) and other similar models that led into Word2Vec etc...
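The LSA demonstration is easy to reproduce in miniature; the term-document matrix below is invented, but the similarity structure falls out of the symbol statistics alone:

    import numpy as np

    # Sketch of the LSA idea: factor a term-document co-occurrence matrix
    # with SVD, and word "meanings" emerge from pure symbol statistics.
    terms = ["cat", "dog", "pet", "stock", "bond"]
    #            doc1 doc2 doc3 doc4   (two pet docs, two finance docs)
    X = np.array([[2,   1,   0,   0],   # cat
                  [1,   2,   0,   0],   # dog
                  [2,   2,   0,   0],   # pet
                  [0,   0,   2,   1],   # stock
                  [0,   0,   1,   2]])  # bond

    U, S, Vt = np.linalg.svd(X.astype(float), full_matrices=False)
    latent = U[:, :2] * S[:2]           # each term as a 2-d latent vector

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(cosine(latent[0], latent[1]))  # cat vs dog: ~1.0 (same topic)
    print(cosine(latent[0], latent[3]))  # cat vs stock: ~0.0 (unrelated)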
Re: (Score:2)
"You can pile on all the new code and CPUs and ram and whatever else you'd like. It will still lack basic human traits."
At least until we understand what those traits are and how they are implemented. Your argument is religious. There is every reason to believe that many of those traits, possibly all, are implementable by "piling on" new code and CPUs, but that doesn't mean bigger LLMs will do it. A human, or your puppy, is a lot more than a big neural network.
Re: (Score:2, Insightful)
Let me know when we understand those traits, where they come from, and when someone is actually making progress duplicating them in a machine. Until then...
Still waiting for links to backup the false claims about current AGI tech research. I suspect I'm going to be waiting well past my grandchildren's lifetimes with no answers. In the meantime all these cultists continue to spew noise about the wonders of AGI as if it's a real thing today that's near product quality and not 100% pure Hollywood fantasy an
Re: (Score:3)
There is every reason to believe that many of those traits, possibly all, are implementable by "piling on" new code and CPUs,
I'm not sure how you can say this, since the human brain contains an incredible collection of intertwined, specific circuits. You can't just pile on stuff; it needs to be structured in a particular way to generate a particular dynamic. And that is only the base neural network. How about the influence of hormones? How about the dynamics of production and consumption of neurotransmitters? Etc., etc., etc.
So I would say that there is very little reason to believe you can achieve a human type of intelligence by ju
LLMs are ML, not intelligence. However: (Score:2)
Yes, absolutely. LLMs in and of themselves are not even close to human-level cognitive capabilities. Although it's worth noting that in some ways they demonstrably exceed ours, most notably in the breadth and depth of the retrievable and expressible data in their models. If only LLMs could think, imagine the resulting cognitive power (which is what current "AI" fanboi types are
Re: (Score:3)
All of the things we actually have evidence for can be implemented procedurally in hardware. All.
Well, I would say no.
What is missing from your base notions is dynamics. You not only need to replicate the connectome etc.; you also have to build the system in a way that replicates all the interactions going on between neurons and other neurons or the environment, sometimes down to the molecular level.
About that option (b) in your last sentence, there IS such a thing, and that is the various emergent behaviors that our brains exhibit. Those features only assemble under certain conditions and we ha
Re: (Score:3)
There is absolutely zero intelligence going on, artificial or otherwise.
You are splitting semantic hairs.
Artificial crab meat is not crab meat.
Artificial leather is not leather.
Artificial intelligence is not intelligence.
See?
The definition of "artificial intelligence" is mimicry of intelligence...not actually being intelligent. Like chess-playing programs, for example. In any other context, only intelligent beings can play chess (rocks certainly can't). So chess-playing bots are imitating intelligent beh
Re: (Score:2)
Never heard an LLM referred to as "tiny", nor that generative AI is much larger than merely the LLM part. At least someone recognizes that LLM is merely part of it.
However, it has become clear that generative AI isn't really even generative. While "zero progress" may be an exaggeration, that "lot more happening" isn't AI either.
Re: Kurzweil has an impressive track-record ... (Score:4, Insightful)
Re: (Score:2)
The reason for the inflated prognosticating (and anger) is that they have money in this current pump and dump.
This is crypto all over again.
Re: (Score:2)
Re: (Score:2)
Seeing your tagline about plumbing made me think about the limitations of computer-based anything. I mean you could get a computer to cough up lengths of pipe and fittings you should use for a task, but will that computer figure out where your leak is, and how to address it? Will it recommend thread sealant where appropriate? Will it tell you how many times to wind teflon tape? How much to torque fittings? And will it be right?
Turns out plumbing has a fair bit of complexity to it, if you aren't using s
Re: (Score:2)
Also, as human beings, when we try to work together, or in larger teams, on any kind of project, good communication is a real problem. People misunderstand instructions or forget, take too long, have conflicting requirements from multiple sources, have mismatching expertise, etc.
I am convinced this human friction is the source of most issues in society, not a lack of individual expertise. Adding domain expert AI into the mix does nothing to address this fundamental issue of communication and t
Re: (Score:2)
So... you think the military is NOT working on AI tech that is not made public? Seriously? You need a link before you'll believe that the military has secret high tech programs?
We think that they are developing great programs for making drones effectively identify and track Russian tanks in forests without killing Ukrainians and many many other clever things including many we can't think of. We also think that likely none of these things are AGI.
Re: (Score:2)
What you experience as one hour, i experience as 1.5324 seconds.
So you have a clock frequency(*) of a whopping 425.66 microhertz? You must be running on a RCA 1802 [hobby-site.com] or an Intel 80386EX [wikipedia.org] to reach such a speed!
(*) Assuming one instruction per clock cycle, so give or take an order of magnitude.
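The gag arithmetic checks out, for what it's worth (the 1 Hz human baseline is, of course, invented for the joke):

    # Experiencing 3600 real seconds as 1.5324 subjective seconds:
    experienced, actual = 1.5324, 3600.0   # seconds
    print(f"{experienced / actual * 1e6:.2f} microhertz")  # ~425.67 microhertz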
Re: (Score:3, Insightful)
I myself am quite curious about the arrival of AGI and intelligences orders of magnitude greater than ours
Problem is that we don't really know how to make AGI.
The "AI" currently paraded out are all rooted in discoveries made in the 80s, if not earlier. It's just that we now have computing power to do that. LLMs are very impressive, but ultimately they are predictive text input on steroids. Same with others.
With AGI, no one knows how to get there. I suppose if you just throw enough computing power at something (li
Re: (Score:3)
There is an even more fundamental problem: we don't even have a consensus on what intelligence is. You have dictionary definitions, of course, but how does it work? When exactly is something intelligent and when is it not?
I even have a source: Artificial Intelligence: A Very Short Introduction by Margaret Boden, an Oxford professor who researches both AI and cognitive science.
https://global.oup.com/ukhe/pr... [oup.com]
This leads to a situation where true AGI research hasn't really moved anywhere since the 1950s
Re: (Score:3, Insightful)
Here we get to the AI effect [wikipedia.org].
Psychologically, it seems that the only definition of intelligence that makes the public happy is "whatever computers can't yet do". People will happily describe a task as "intelligence" if computers currently suck at it, but once computers get great at it, people get mad at the task being described as "intelligence".
People get even more defensive with the word "creativity". Now that we're running out of creativity tests that AI can't trounce us at - tests we've been relyin
Re: (Score:2)
People are religious; they believe there is more to them than just a machine cranking out deterministic results. They think a machine will always be less than they are, and they view AI as the contradictory concept of a machine that is equivalent to what they are.
Interestingly, the components that contribute to this "religious" thinking are at least part of what's missing in AI.
"Now that we're running out of creativity tests that AI can't trounce us at..."
LOL. "Creativity tests" and "creativity" are not t
Ah, well, religion. (Score:2)
Yes, well, "religious" is just a way to put imaginary lipstick on the imaginary pig of superstition.
This is the rational take on the division between the things we know how to explain and the things we don't. The line between the two moves constantly, and every bit of movement closes the window on superstition that much further. The problem -- and yes, I am pretty confident that it's a problem -- is that where actual understanding is lacking, unfettered imagination tends to achie
Re: (Score:2)
If a human can't look at two things and have an opinion of one thing being more creative than another, the word "creativity" has no meaning and should be stricken from our vocabulary.
Re: (Score:2)
What would define intelligence beyond human ability then? Your definition is asymptotic, among its several flaws.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Then let a human be the standard. When there is an AI that always interprets reality as well as a human and does not make mistakes more than a human (or at least in the same way as a human), and when they can go outside of what they have seen and actually create things rather than just form a complex mesh of things people already know and have published. Let that be the mark.
That's basically the (somewhat modified and improved) Turing test, but it turns out that there are some details, which are a large part of artificial intelligence research over the past few decades.
Re: (Score:2)
Re: (Score:2)
Like, what?
Embeddings / latent spaces are from the 1980s?
The attention mechanism is from the 1980s?
Transformers is from the 1980s?
What?
Come back to Earth, please. The ML field hardly even resembles what it did in the 1980s. The vast majority of ML terms today didn't even exist in the 1980s (heck, most didn't even exist in the 1990s). Backpropagation itself didn't even become mainstream until the 1980s (there were
Re: (Score:2)
"This has been one of my biggest long-time criticisms of Musk's take on AI..."
You thinking that Musk even has a take on AI worth considering is a giant failure on your part. Musk is a fraud; his only take on AI is that he should steal it and claim he invented it, both of which he has now done. Of course Musk thinks he can get to AI by merely scaling up; Musk doesn't understand anything. All Musk is good at is regurgitating things he has heard. Musk is not an engineer, he is a con man. Whether it is
Re: (Score:2)
I said "rooted", as in based on. Of course there has been lots of stuff built on that, but we really haven't seen any massive paradigm shifts.
Most of the things used today, including the ones in your list, can be traced to recurrent neural networks.
Re: (Score:2)
Re: (Score:2)
There was a recent story about training being accelerated by removing a dependence on matrix multiplies. The real story was entirely buried there: training was accelerated by NOT doing a bunch of work that produced little to no value. What was generated through training changed; it was not merely a faster way to train. It was faster because it used ternary weights, not float or integer weights.
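For anyone who hasn't seen it, the ternary trick is simple to sketch (threshold and values below are invented for illustration):

    import numpy as np

    # Quantize weights to {-1, 0, +1} so "multiplies" collapse into adds,
    # subtracts, and skips.
    def ternarize(w: np.ndarray, thresh: float = 0.3) -> np.ndarray:
        t = np.zeros_like(w)
        t[w > thresh] = 1.0
        t[w < -thresh] = -1.0
        return t

    w = np.array([0.9, -0.05, -0.7, 0.2])
    x = np.array([1.5, 2.0, 0.5, -1.0])
    tw = ternarize(w)
    print(tw)  # [ 1.  0. -1.  0.]
    # Dot product without any real multiplications: add where +1,
    # subtract where -1, ignore where 0.
    print(x[tw == 1].sum() - x[tw == -1].sum())  # 1.0

Which is exactly the point: it's not "the same training, faster" - it's a different computation that skips work of little value.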
And this fact, entirely ignored, tells us a LOT about AI research and AI researchers. They don't rea
Re: (Score:2)
Well, FWIW, and I have no track record:
I expect a minimal AGI by 2035. Minimal means it may initially be less intelligent than a toad, but AGI means it can continue learning in multiple domains for an indefinite period of time. (At least several years.)
Therefore I put 2045 a bit past the "Technological Singularity". So saying "humans will merge with AI" around then isn't implausible, but it also isn't very plausible... it's undecidable.
For that matter, it's also ill-defined. I could argue that anyone with
Re: (Score:2)
Re: (Score:2)
... when it comes to AI predictions.
Not really.
Re: (Score:2)
But, when it comes to his predictions about physical existenc
Perhaps (Score:2)
Not necessarily.
People can learn to throw a baseball, which, in terms of understanding, requires math at the calculus level. However, we can incrementally develop the neural skill(s) required through iterative practice, that is, many attempts and incremental corrections, entirely without the actual top-level intellectual understandings
Re: (Score:3)
Perhaps something will stick.
In some sense it already has. There are plenty of systems using neural networks for image recognition, including drone targeting and Google lens.
Still, my analogy is that we are stuck in the alchemy level of physics and are waiting for the atomic theory of chemistry to come along. The alchemists also came up with a bunch of useful recipes but they didn't solve the real problems before atomic theory came along.