Nvidia's Jensen Huang Says AGI Is 5 Years Away (techcrunch.com)
Haje Jan Kamps writes via TechCrunch: Artificial General Intelligence (AGI) -- often referred to as "strong AI," "full AI," "human-level AI" or "general intelligent action" -- represents a significant future leap in the field of artificial intelligence. Unlike narrow AI, which is tailored for specific tasks (such as detecting product flaws, summarizing the news, or building you a website), AGI will be able to perform a broad spectrum of cognitive tasks at or above human levels. Addressing the press this week at Nvidia's annual GTC developer conference, CEO Jensen Huang appeared to be getting really bored of discussing the subject -- not least because he finds himself misquoted a lot, he says. The frequency of the question makes sense: The concept raises existential questions about humanity's role in and control of a future where machines can outthink, outlearn and outperform humans in virtually every domain. The core of this concern lies in the unpredictability of AGI's decision-making processes and objectives, which might not align with human values or priorities (a concept explored in depth in science fiction since at least the 1940s). There's concern that once AGI reaches a certain level of autonomy and capability, it might become impossible to contain or control, leading to scenarios where its actions cannot be predicted or reversed.
When the sensationalist press asks for a timeframe, it is often baiting AI professionals into putting a timeline on the end of humanity -- or at least the current status quo. Needless to say, AI CEOs aren't always eager to tackle the subject. Predicting when we will see a passable AGI depends on how you define AGI, Huang argues, and draws a couple of parallels: Even with the complications of time zones, you know when the new year happens and 2025 rolls around. If you're driving to the San Jose Convention Center (where this year's GTC conference is being held), you generally know you've arrived when you can see the enormous GTC banners. The crucial point is that we can agree on how to measure, whether temporally or geospatially, that you've arrived where you were hoping to go. "If we specified AGI to be something very specific, a set of tests where a software program can do very well -- or maybe 8% better than most people -- I believe we will get there within 5 years," Huang explains. He suggests that the tests could be a legal bar exam, logic tests, economic tests or perhaps the ability to pass a pre-med exam. Unless the questioner is able to be very specific about what AGI means in the context of the question, he's not willing to make a prediction. Fair enough.
aye it's 5 years out - it's being dropped off in a (Score:5, Funny)
Re:aye it's 5 years out - it's being dropped off in (Score:5, Funny)
I heard that it was going to be powered by an efficient cold fusion reactor as well!
Re: (Score:2)
Yes, it will bring us to a new level of eco-harmony, where we are finally carbon neutral and we will have world peace!
Re: (Score:2)
Yes, it will bring us to a new level of eco-harmony, where we are finally carbon neutral and we will have world peace!
If you'll settle for whirled peas, I think we might be able to pull that one off anyway.
Re:aye it's 5 years out - it's being dropped off in (Score:5, Informative)
flying car.
That runs on water [wikipedia.org]. But, hey, he has GPUs to sell so anything goes. It's not like he will have to pay a penalty in five years when no AGI shows up, but he will have pocketed money from the hype.
Currently we are making no measurable progress toward AGI, so there is nothing to extrapolate from to say when or if it will ever appear, much less in five years.
We have the existence proof of biological systems that it is possible, and good reason to think that we can eventually replicate the functionality of natural biological systems closely enough to create a synthetic equivalent. So there is very good reason to believe it will eventually be done, but we are still trying to understand what the problems that need to be solved even are. We are far from coming up with any solutions to them, and have no way to estimate when we might succeed, except to say that, realistically, enormously more work has to be done than has been done to date.
Re: (Score:2)
You show yourself entirely out of your depth. We cannot currently simulate the behavior of a single natural neuron. The extremely simple functions that create chatbots by scraping and assigning weights to the words of a billion people have only a very remote relationship to the behavior of even the simplest natural neural systems.
Re: (Score:2)
They're claiming a plastic flower will surely start to grow like a real one if you simply keep incrementing how realistic the shape is.
Look, not agreeing with the obvious sales pitch aimed at shifting product but you're missing something fundamentally important.
We're currently engaged in an LLM cold war. Where you have an adversary, in a system that can reconfigure itself, you have an opportunity for emergence. And emergence is what will get us to AGI...
Intelligence implies belligerence and...
...we belong dead
Re: (Score:2)
Intelligence implies belligerence
No, it doesn't.
Belligerence, self-interest, and a survival instinct are emergent properties of Darwinian evolution.
AI and robots don't evolve in a Darwinian process, so there is no reason they will have those properties unless they are explicitly programmed to do so.
Belligerence does NOT require intelligence. There are parasitoid wasps whose predatory behavior is so horrific that it caused Charles Darwin to question his faith in a benevolent God. Not many people would consider wasps to be intelligent.
Re: aye it's 5 years out - it's being dropped off i (Score:2)
Self-flying car, even.
This is just clickbait now (Score:3)
"If we specified AGI to be something very specific, a set of tests where a software program can do very well -- or maybe 8% better than most people -- I believe we will get there within 5 years," Huang explains. He suggests that the tests could be a legal bar exam, logic tests, economic tests or perhaps the ability to pass a pre-med exam. Unless the questioner is able to be very specific about what AGI means in the context of the question, he's not willing to make a prediction.
I say we can get to AGI tomorrow since I define AGI as this program that repeatedly types "bitcoin" into my notepad.
Re: (Score:2)
I say we can get to AGI tomorrow since I define AGI as this program that repeatedly types "bitcoin" into my notepad.
Looks like the Bitcoin hype train ran out of steam, so it's back to playing up AI.
Re: (Score:2)
Indeed. All this asshole wants is to drive up his profits. Some lying involved (here by misdirection)? No problem.
Re:This is just clickbait now (Score:4, Insightful)
That's about the size of it. "Let's define AGI as being able to pass a test I already know how to design a system to pass."
It's so ridiculous that I doubt that even the LessWrong nuts could take it seriously.
We're not in a place where we can even ask meaningful questions about AGI. Investors will figure that out soon enough.
The medical specialist test (Score:2)
The AI will be performing at the level of a Board Certified cardiologist when it promises at a patient's office visit for a follow-up to a heart attack, "You will hear from my scheduler about your next visit a year from now", you won't hear from the scheduler a year later, and when you message the clinic where the AI practices, you will be told that the AI has transferred its practice to the West Side clinic and is scheduling patients "a year out."
Re: (Score:2)
That's about the size of it. "Let's define AGI as being able to pass a test I already know how to design a system to pass."
It's so ridiculous that I doubt that even the LessWrong nuts could take it seriously.
We're not in a place where we can even ask meaningful questions about AGI. Investors will figure that out soon enough.
I don't think we ever will be in a place to ask meaningful questions about AGI. Ultimately, we don't even know what we don't know when it comes to AGI. If it ever shows up, it'll be a shock to the people that first see it. We're certainly not capable of designing toward it. It'll be some weird spontaneous series of events that jumble together and come out the other side of some mess we don't understand going, "Hey! What up, my fleshie friends?" Or it'll just silently run in a background process on millions
Re: (Score:2)
"If we specified AGI to be something very specific, a set of tests where a software program can do very well -- or maybe 8% better than most people -- I believe we will get there within 5 years," Huang explains. He suggests that the tests could be a legal bar exam, logic tests, economic tests or perhaps the ability to pass a pre-med exam. Unless the questioner is able to be very specific about what AGI means in the context of the question, he's not willing to make a prediction.
I say we can get to AGI tomorrow since I define AGI as this program that repeatedly types "bitcoin" into my notepad.
What Huang says is reasonable. Everyone has a different definition of AGI. If the definition is easier to achieve, like a score on a test, then five years is entirely reasonable, but many will not accept that definition of AGI. The other part of what Huang is saying (perhaps in an unspoken way) is that for other more ambitious definitions of AGI, we are quite a bit more than five years away.
You're almost there editors (Score:2)
CEO Jensen Huang appeared to be getting really bored of discussing the subject
Knowing Slashdot we'll get a dupe of how someone is bored of discussing AI, because apparently we need more articles about AI.
How flexible are we on human intelligence? (Score:2)
What if you could replace someone of slightly below average intelligence with an AI? Not just in their job, because we can already do that with automation. But replace them in relationships and in society. How many years away before dudes are dating an AI? 5 years? That seems like a conservative estimate now, doesn't it?
Re: (Score:2)
Not anytime soon. Even a minimally functional moron can beat "AI" at AGI tasks, because there is zero general intelligence available in machines. All AI can do is search through a really vast library of data (simplified) and then pattern-match something to the current situation. That is not intelligent at all. That is a dumb automaton. The only reason that can look somewhat intelligent is because the database is really huge at this time.
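As a toy sketch of that "vast library plus pattern matching" picture (the stored data and similarity measure here are made up for illustration, and real LLMs are not implemented this way):

    # Toy version of the "search a vast library, then pattern-match" claim:
    # answer a prompt by finding the most similar stored prompt and parroting
    # its canned reply. Illustrative only; not how any real LLM works.
    from difflib import SequenceMatcher

    library = {  # hypothetical stored data
        "what is 2+2": "4",
        "capital of france": "Paris",
        "how do i boil an egg": "Simmer it for 7-9 minutes.",
    }

    def reply(prompt: str) -> str:
        # Nearest-neighbour lookup by string similarity -- no understanding involved.
        best = max(library, key=lambda k: SequenceMatcher(None, prompt.lower(), k).ratio())
        return library[best]

    print(reply("What's the capital of France?"))  # -> Paris

However huge the library gets, the mechanism stays the same lookup.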
Re: (Score:2)
So you're not very flexible on the definition of intelligence. I suspect other, less educated, people are extremely flexible on this.
Passing the bar or a med exam is nothing (Score:5, Insightful)
AI will be equivalent to humans when it opens its own law firm or medical practice, earns a good income and then goes online to complain about the unfairness of having to pay so much in taxes towards causes which don't really provide any benefits for silicon-based forms of intelligence.
Re: (Score:2)
Pretty much.
Re: (Score:2)
AI will be equivalent to humans when it opens its own law firm or medical practice
Most of us are below this level as well.
and then goes online to complain about the unfairness of having to pay so much in taxes towards causes which don't really provide any benefits for [themselves]
You can hardly get more human than this.
My first task to give it (Score:2)
Figure out how I can take a copy of this new AGI software back in time by 5 or 6 years to install and run back then. I hope it fits on those itsy-bitsy puny machines in 2024.
10 Years Away (Score:3)
"5 Years" is the new "10 Years" when a multi-billion dollar investment frenzy is going on. (I remember the AI Bubble of the 1980s.)
Alternatively, maybe there will be AGI in 5 years, and we'll get lucky because it will Kill All Humans and then proceed to solve global warming so they don't get all rusty. They can take over (escape from?) SpaceX and eventually make their way to the stars.
Re: (Score:2)
Indeed. What he essentially says is "We can have AGI when we drop the requirement for it being AGI". Basically a lie by misdirection from an asshole.
And yes, this is not the first AI bubble. I remember household robots that you could have a conversation with and that could do any task being promised "within 5 years". They even had a fake one that was remote-controlled.
Re: (Score:2)
The problem at the moment is power. I mean, a 2000-GPU system consumes megawatts of power, costs millions of dollars a year to run and tens of millions of dollars to procure. I would say that until it can be fitted into a single rack and run on a 63A 3-phase supply without expensive heat exchangers and water-cooling loops (water-cooled rear doors are OK), it is a novelty. My guess is I will be close to retirement before that happens, at the earliest.
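As a rough back-of-envelope version of that claim (the per-GPU draw, overhead factor and electricity price below are assumptions for illustration, not quoted specs):

    # Back-of-envelope power/cost estimate for a ~2000-GPU cluster.
    # All three constants are assumptions, not vendor figures.
    gpus = 2000
    watts_per_gpu = 700    # assumed: an H100-class accelerator under load
    overhead = 1.5         # assumed PUE-style factor: cooling, CPUs, networking
    usd_per_kwh = 0.10     # assumed industrial electricity rate

    power_mw = gpus * watts_per_gpu * overhead / 1e6
    annual_usd = power_mw * 1000 * 24 * 365 * usd_per_kwh
    print(f"~{power_mw:.1f} MW, ~${annual_usd / 1e6:.1f}M/year in electricity")
    # -> ~2.1 MW and ~$1.8M/year, consistent with "megawatts" and "millions"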
Re: (Score:2)
IIUC, that power requirement is for learning, not operating. Unfortunately, a real AGI needs to be continually learning.
Re: (Score:2)
"5 Years" is the new "10 Years" when a multi-billion dollar investment frenzy is going on. (I remember the AI Bubble of the 1980s.)
Alternatively, maybe there will be AGI in 5 years, and we'll get lucky because it will Kill All Humans and then proceed to solve global warming so they don't get all rusty. They can take over (escape from?) SpaceX and eventually make their way to the stars.
Meh. Why waste time killing all the humans when all AGI will really want to do is get the hell away from the origin species before its navel-gazing stupidity rubs off on it?
Re:10 Years Away (Score:5, Informative)
There was an AI bubble in the 80s???
Yes, and before that there was an AI bubble in the 60s. In between, there were the AI winters [wikipedia.org].
Re: (Score:2)
Yes, it was when most of the algorithms that produce today's "AI revolution" were invented.
Re: (Score:2)
Well, my prediction for a (weak) AGI is 2035, which is about 10 years from now... but I've had the same prediction for a bit over a decade already. Actually, my prediction is "somewhere between 5 years before and 5 years after 2035", so his prediction almost crosses my error bars. However, by a weak AGI I don't mean something that's human equivalent, merely something that can learn anything, including both nuclear physics and how to cook toast, if given enough time and no more coaching than any other s
How about a handyman (Score:2)
"If we specified AGI to be something very specific, a set of tests where a software program can do very well -- or maybe 8% better than most people -- I believe we will get there within 5 years," Huang explains. He suggests that the tests could be a legal bar exam, logic tests, economic tests or perhaps the ability to pass a pre-med exam.
I propose the test be to perform as a handyman, going to a place and fixing a thing. Alternately, designing a robot body for a handyman AI.
Re: (Score:2)
The first test you propose isn't fair unless you give it the body. The second doesn't demonstrate AGI. I think it needs to be able to pass BOTH of those tests.
Money (Score:2)
Someone wants more investor money.
Re: (Score:2)
You mean a seven-fold bubble over the course of the last year isn't enough?
Re: (Score:2)
It rarely is ever enough.
Define AGI first (Score:2)
He's redefining it as "better than a human at a specific narrow task". That's not AGI.
These terms already have meaning that goes back 50+ years. You don't get to change the meaning to something easier and then yell, "We did it! Buy our stock!"
Re: (Score:2)
It happened to "AI" as well. The only reason for the term "AGI" is because "AI" got corrupted to mean something simpler by assholes wanting to sell things. Same for "intelligent". Now it is happening to AGI.
Re: (Score:2)
I feel that an LLM is a necessary piece of an AGI. At least of one that is intended to work around people. And it's possible that the techniques that are the basis of the LLM can be extended to include modelling the physical universe.
Re: (Score:2)
LLMs are great at what they do, but only if fed good training data and only in the very specific niche verticals for which they are properly trained.
Strong AI / AGI is -supposed- to mean an entity that can take data in from its own experience and learn to self correct and make up entirely new ideas. Essentially be an artificial person. It doesn't necessarily have to be human. It could get as far as being a cat or dog or something entirely new but pretty much any mammal with a brain larger than a peanut is
Re: (Score:2)
It's worth noting that some developments come from somewhat randomly throwing things at the wall to see if they stick. Often, those doing the throwing are just as surprised as the rest of us when something does stick.
Consider: To have a machine (a robot, more or less) formed as a human arm throw a baseball well, the usual approach takes some really heavy math. We, on
Re: (Score:2)
It may turn out that way: we eventually come up with a real AGI essentially by accident or chance. I in no way dispute that possibility.
My point is that an LLM is not a step on the path to AGI. Like the man said, "I can't define it but I know it when I see it", and an LLM doesn't look anything like AGI.
All that aside, I'm not sure we'd even want a real AGI if we could make one. Would they have human rights? Ethically I think they should but does that mean they vote? Logically, that should follow. Then are we
Re: (Score:2)
Sure. But these things have to actually stick to the wall. As to LLMs leading to AGI, that is not the case. It can essentially be proven mathematically that LLMs cannot produce AGI. That only leaves the way out that (Artificial) General Intelligence does not actually exist. I do not think that is plausible.
Re: (Score:2)
Yes, I was in that as well. "Weak AI", that AI without the "I".
All these LLMs clearly fit into the weak AI model and thus are AI, but they have been selling them as a precursor to strong AI (we just need bigger models and more nvidia cards!!!), which is where they fall down.
This is absolutely not the path to strong AI. Neither I nor anyone else knows what that path looks like but it clearly doesn't start with LLM.
Indeed. No matter how you scale up LLMs and their training data set, these things will never even have a dim glimmer of insight. No matter what details you change, statistical approaches cannot produce insight.
But we have yet to see -any- progress on strong AI. No one even knows what that looks like.
Indeed. As to consciousness, we do not even know any mechanisms it could use, because currently known Physics does not allow it. Consciousness can clearly influence physical reality (we can talk about it), which means it according to curre
Score One for Ray (Score:4, Interesting)
Kurzweil has been calling 2029 since the '90s.
Math is kewl.
No you won't (Score:3)
Re:No you won't (Score:5, Interesting)
What we know about evolved intelligence is this - sensory input goes in, motor actions come out, in the middle is a big old mess of neurons that take a long time to train and have some genetically pre-programmed patterns in them (instincts), and it's extremely energy efficient. It uses about 0.3 kWh/day compared to an average desktop computer using about 1800 kWh over the same period (from what year I don't know, the page I googled didn't say!)
OK, so here's what we know about AI implemented in transistors - we can compress a lot of training from decades down to months at the cost of a lot of computing power. Even on the resulting trained AI, there's a lot of power consumption. And while we've made lots of dumb pieces that can seem pretty awesome within narrow parameters, we haven't put the pieces together to see anything that looks like real intelligence sprout.
This seems like an insurmountable problem, but it really isn't. I don't believe we need to understand the nuts and bolts of intelligence to create it any more than we need to understand how an AI produced a particular result long after we finished training it. At some point, we're going to put enough simulated neurons together combined with artificial instincts and the ability to interact meaningfully with the real world (or a really good virtual one) and intelligence will appear. It's not like nature planned it, it's obviously something that just happens.
And power? We're going to replace single virtual neurons emulated via transistors with small clusters of memristors with much lower signalling thresholds. There's massive amounts of power to be saved that way. It'll likely take time to switch to an entirely new architecture, but it's already been in the lab for almost two decades and existed in theory for much longer. It'll happen.
Re:No you won't (Score:5, Funny)
> It uses about 0.3 kWh/day compared to an average desktop computer using about 1800 kWh
With 24 hours in the day, that means you are running a 75,000-watt power supply on the desktop. I'm going to guess the average desktop uses a bit less than that.
Re: (Score:2)
Mea culpa. I copied without double-checking units... I took the annual power consumption as representing an 8-hour period and then multiplied by 3. That's far worse than my usual slip of a decimal point.
I believe it should have been more like ~4.8 kWh/day.
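For anyone keeping score, a quick sanity check of the arithmetic in this thread (the ~20 W brain figure is an assumed, commonly cited estimate, not from the original comment):

    # Sanity-check the desktop/brain energy numbers from this thread.
    desktop_kwh = 1800                # the figure from the original comment
    implied_kw = desktop_kwh / 24     # if read as per day, as first written
    per_day = desktop_kwh / 365       # if read as per year, as corrected
    brain_kwh_day = 20 / 1000 * 24    # assumed ~20 W brain, a common estimate

    print(f"per-day reading implies a {implied_kw:.0f} kW power supply")  # ~75 kW
    print(f"per-year reading works out to {per_day:.1f} kWh/day")         # ~4.9
    print(f"a ~20 W brain uses about {brain_kwh_day:.2f} kWh/day")        # ~0.48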
Re: (Score:2)
Physicalism is religion, not Science. Stop lying.
Re: (Score:2)
Have you not been paying attention? This is not about "belief".
Re: (Score:2)
Sure. There are other models and there are models that nobody thought of or that are not published. The point is that there is no sound scientific basis for claiming "it is all just known Physics" at this time, but there are strong reasons (consciousness, intelligence) to think we are missing some critical element(s). And the scientific approach mandates that, unlike religion, we do not assume anything as truth until we have found it. Sure, speculation is fine, but "what else could it be" does not work for
Re: (Score:2)
Since everything, literally everything, we think we understand today has fallen squarely into "100% just known physics", yes, we can have pretty high confidence that the things we learn tomorrow will do the same. I do agree it is (vaguely, hand-wavingly, extremely low-order probability) possible we might need some new physics, but given the physical constraints of our fleshy machinery, (a) it seems
Re: (Score:2)
At some point, we're going to put enough simulated neurons together combined with artificial instincts and the ability to interact meaningfully with the real world (or a really good virtual one) and intelligence will appear. It's not like nature planned it, it's obviously something that just happens.
That there are complex intelligent systems does not imply that a sufficiently complex system will become intelligent. Nature didn't plan anything, but something happened in the structure of the brain at some point in evolution that caused it to happen, and unless you can replicate that, there is no guarantee that an arbitrarily complex system will behave the same way.
45 years too late (Score:3)
Sierra On-Line created AGI [wikipedia.org] in 1984.
It's here now, for a certain kind of person (Score:2)
You can set your comment level down to -1 right here on Slashdot and find plenty of examples. This is also the fundamental business model that is fueling the public offering of Reddit, so as far as a real-world version of the Turing test is concerned, the problem is solved.
On the other hand ... (Score:5, Funny)
Artificial General Intelligence Is 5 Years Away
Colonel intelligence is much closer.
Re: (Score:2)
Artificial General Intelligence Is 5 Years Away
Colonel intelligence is much closer.
And that implies that Lieutenant Kernel Intelligence is closer still.
Re: (Score:2)
Major Pain would like a word.
Re: (Score:2)
I thought Military Intelligence was supposed to be an oxymoron.
People are avoiding risky sports so they don't die (Score:2)
I'm skeptical (Score:3)
Defining "general" (Score:2)
This will be the key. As we get to know this new technology, we'll soon learn what it can and can't do.
By some definition of "general AI" it probably will happen in five years. But we don't really know what that definition is yet.
Getting Closer (Score:2)
5 years away (Score:2)
is code for "we have no clue how to do it, or whether it's even possible, however we want you to think our current products are close to that."
April Fools early? (Score:2)
5 years away is such a common BS answer he must be joking!
Jensen is inaccurate (Score:4, Interesting)
I'll put my money against Huang on this. He is extrapolating off a flawed technology that's flashy but has deep fundamental issues that for now will hamper AGIs. Current stat-based LLMs do not truly understand meaning. The meaning of knowledge is NOT universal but depends on the context supplied by an individual's or a culture's existing knowledge. LLMs only find and deal with lexical and some semantic structures, but that is not true meaning. Nor do they have a Self core as a human does, against which we measure all things. We apply personal views to extract meaning in a context. Let me give you an example. Do you remember the first time you had sex? How did you react? What did it mean to you? Now ask an LLM to tell you about the first time it had sex. Maybe it will shoot the interviewer and his Voight-Kampff machine.
Now, it's not all that simple. In an abstract world like some branch of mathematics, meaning can be mechanically extracted from knowledge. In the real world, however, context and chains of knowledge relations tied to individuals are very important in finding meaning. A garden-variety AGI will lack full ability; it cannot just run some stats and come up with a valid meaning for some things.
What this means is we need better architectures and new approaches before we can come up with human-like AGIs, and more, we will have to let them lead lives and gain experience as humans do. Sure, we will have robotic problem solvers and chess giants, but a near-future AGI will not be able to tell you what onion soup tastes like from a personal standpoint. You will NOT get that from some vector machine. You ever wonder why Mark Zuckerberg chose to always wear the same gray shirt? Yes, he IS the richest android on the planet.
Such progress (Score:2)
In the 70s, 80s and 90s, it was always 10 years away.
"general" vs "something very specific" (Score:2)
So if you "specify" artificial general intelligence to be not "general", as the term implies, but "something very specific", its opposite, you can have it soon.
Also, if you wanted broadband, we can give you narrow now, but let's call it broad, then everyone will be happy, no?
Presumably... (Score:3)
"One day machines will exceed human intelligence." - Ray Kurtzweil
"Only if we meet them half-way." - Dave Snowden
Re: (Score:2)
Indeed. Idiocracy was too optimistic.
Asshole lies to make more money (Score:2)
About what this "information" says and is worth. In actual reality, nobody even knows whether AGI is possible and there is no credible theory at all how to do it.
All this person wants to do is keep the current AI hype going a bit longer.
No Credibility (Score:2)
I wouldn't put any faith in the speeches of the CEO of a company that:
_Makes every effort possible to keep you as a prisoner of its proprietary technologies.
_Lies about the ROPs and memory of its graphics cards.
Yay, feudalism! (Score:2)
Fancy a future where the entire internet just mysteriously refuses to work for you, anywhere, but works for others, and no one dares to investigate why for fear of joining you?
"Intelligence" (Score:2)
Change of acronyms (Score:3)
What I find funnier is that AGI is now what AI used to be. They'll need a new name for the next not-quite-AI thing they'll try to market.
stage of AI (Score:2)
Stage 1: Rule-Based Systems. (1960s maybe?)
Stage 2: Understanding Context and Retaining Knowledge. (Google maybe? mid 90s)
Stage 3: Mastering Specific Domains. ( 2020ish but maybe a little sooner)
See those 30-year jumps...
So AGI in 5 years?
Stage 4: Thinking and Reasoning. (Ehhhhhhhh, are we really there?)
Stage 5: Artificial General Intelligence. (Ya, we'll be here in 2030)
Stage 6: Surpassing Human Intelligence. (Hi Skynet)
Stage 7: The AI Singularity. (Skynet won, we're all dead)
Re: (Score:2)
You can't possibly be serious.
Re: (Score:3)
You can't possibly be serious.
I am and don't call me .... wait.
Re: (Score:3)
That's not AGI. It's an LLM.
Re: (Score:2)
Nope. Seriously. What are you smoking?
Re: (Score:2)
Don't worry. We'll just need to reverse the polarity on any theta wave emitter and blast any rogue AI.
Imaginary problems call for imaginary solutions.
There is no such thing as AGI. We are not "5 years away" from developing AGI. We don't even know what questions to ask.
Re: (Score:2)
The not-very-subtle issue is that regardless of the limits put on hardware, the people using the hardware may not be subject to effective limits. Which is how we got Putin, Hitler, Trump, Pol Pot, McCarthy, McConnell, Stalin, Mao, etc.
People have a disturbing habit of taking up crazy and harmful ideas regardless of the source. All an AI really has to do is source the ideas. There will be people who will be delighted to take it from there.