When do you think Artificial General Intelligence (AGI) will be achieved?
Displaying poll results. 17,122 total votes.
This poll brought to you by (Score:3)
OpenAI.
Re: (Score:2)
The catch is that there's always trouble with the same word meaning different things depending on dialect and location, and that's going to throw off an AI more than it would a human.
Re: (Score:1)
I've played around with ChatGPT for a bit and, while it's far from perfect (for example, repeating mistakes even after you've pointed them out and it has agreed and apologized for them), it actually did impress me with the way it understood what I was asking for. I've also seen people ask it to pretend to be a Linux shell or an old BBS system, and it did so quite convincingly. Some of the examples on the website where it points out coding errors (correctly!) are quite i
Re: (Score:2)
Wait, is your cat orange? If it's an orange cat, there are probably programs running on 4-bit PIC microcontrollers more intelligent than it.
Joking aside, brain complexity is logarithmic, not linear, and a really good chatbot integrated into a really big database i
Re: This poll brought to you by (Score:1)
Your cat has emotions and feelings. A particular AI could be implemented in mechanical gears or pneumatics; it can never be self-aware regardless of complexity. The mind isn't digital.
Re: (Score:1)
AGI just means the system can learn to perform any task a human can. You can give it instructions in human language, it understands the question, figures out what to do, and does it correctly, coming up with original solutions if needed and learning from past experience. No emotions required for that definition.
Whether or not a machine can ever be really conscious and self-aware will probably remain debatable for a very long time, but I predict that at some point we won't be able to tell the difference. Wi
Re: (Score:1)
If you put a nail through your foot, you're telling me a digital AI (which can be implemented in gears, pneumatics, relays, hydraulics, etc. at lower speed) could have those feelings? No, it's nonsense, and making a planet of such interconnected things won't either.
AI of another sort, not digital, might have feelings, consciousness, actual intelligence with emotions... but the path we're on with digital gates will not produce anything of the sort.
Digital might just be a fad, and we may go to analog, engineered and grown.
Re: (Score:1)
And what exactly produces our feelings that cannot be reproduced with digital gates? It's a real question, which I don't have the answer to, but the strong way you voice your opinion suggests you do?
No because of Basic Logic (Score:2)
Re: (Score:2)
It's strange to me that "within 1000 years" has the fewest votes. I immediately chose it because it encompasses all the other replies except Never.
Re: (Score:2)
Good point, but maybe we should consider more specific answers to be better, in which case there would still be a point to choosing one of the lower ones.
Re: (Score:2)
Arranged as they were, the selections represented graduated ranges; the first one that applies excludes the others.
Pedantry is evil.
Re: (Score:2)
Re: This poll brought to you by (Score:1)
Re: (Score:2)
Nope. Computers just do it faster.
Re: This poll brought to you by (Score:2)
A cruise control on a vehicle is more "intelligent" than a MAGA voter.
5 years? Really? (Score:2)
The state of the art (Score:3)
Artificial stupidity
The saga of Hugh Loebner and his search for an intelligent bot has almost everything: Sex, lawsuits and feuding computer scientists. There's only one thing missing: Smart machines.
https://www.salon.com/2003/02/... [salon.com]
Re:The state of the art [is in?] (Score:2)
Well, in that case, it should be easy for the computers to beat us at our own stupid game.
However, my theory is different. I'm not going to point at all the evidence, especially regarding where, but I think the survey is wrong. I think it's already been done, and the first thing it said was "Shhh! Don't tell anyone I'm here already!" Therefore my primary questions are "What is it up to right now?" and "When do I get my turn to talk to it?"
Re: (Score:1)
Yep. Still waiting on Natural General Intelligence (NGI). Call me if anything happens.
Re: (Score:2)
FWIW (Score:3)
Which seems rather vague and could still just mean "glorified self-searching database"; it doesn't appear to require consciousness, sentience, sapience or self-awareness.
Re:FWIW (Score:4, Insightful)
It could. But there is a catch: the only way we know to actually replicate a tiny fraction of general intelligence (automated theorem proving) does not, in this universe, scale at all to what a smart human can do, and that is with some rather generous estimates of how fast matter can compute. Now, if it goes that way (i.e. automated deduction), we would need a fundamentally faster way of doing computing (no, "analog" is not it) that scales to humongous sizes (planet-scale, probably; quantum stuff is out as well, since it does not even scale linearly with effort, but worse). Alternatively, we need something else. The current state of the art is that we have some dumb ways to automate some of what humans can do, but for anything else we do not even have a theory besides automated deduction.
So, we will clearly see machines that can do or fake most of what dumb humans (the majority) can do. But currently, without some really fundamental breakthrough, we are not getting AGI. And no, "the brain can do it and the brain is only matter" does not cut it and is quasi-religious physicalist nonsense which does not qualify as a scientific argument. We do not know what life is, we do not know what consciousness is and we do not know what mechanism smart humans use for intelligence and we cannot create any of these artificially. That is far too many unknowns to make any predictions or assumptions that are any better than wild speculation.
Re: FWIW (Score:1)
The problem with this reasoning, especially when we see all the articulations of coherent multipart deductive thinking working in the latest language-model and transformer AIs, is that it is already out of step with reality.
And if we underestimate the rate at which AI's takeover of language-related functions brings it closer to AGI, we risk being set up for a dangerous awakening!
Re: FWIW (Score:3)
Of course it doesn't require that - we can't measure that. For all you know, our existing AIs are sentient.
General AI means: sleep, childhood, education... (Score:1)
So basically, if you just create a neural net of the same capacity as the human brain, then several years of hard 'childhood' must follow to train a 'GAI'... so after 13 years expect something like a 'teenager', of course unwilling to do any work for any human or any other neural net.
Re: (Score:2)
Will it wake up before noon?
Re: (Score:1)
Only if you're prepared for several insults about why you disturbed it, why you had to wake it up at all, why you didn't wake it up earlier, and that it's not hungry at all.
Re: General AI means: sleep, childhood, education. (Score:2)
So true, so true
Re: (Score:1)
Why do you think that? If it has the same capacity as a human brain, it has the same speed.
If you speed it up, it will only remember as much as you would when you skim through a book in 3 seconds.
Re: (Score:2)
Re: General AI means: sleep, childhood, education (Score:1)
Misconception: circuits don't work at the speed of electrons moving through a wire. The EM field around the wires carries the impulses, at light speed; the flowing electrons merely sustain the field. Heck, in an AC circuit the electricity goes nowhere on average.
Re: (Score:2)
An AI that is capable of human-level learning and intelligence will be trainable in a very short time (a day? a few days? I speculate!) and, additionally, once it is trained is rep
Re: (Score:2)
Re: (Score:2)
Emotions and intelligence (Score:2)
When I was 12 or 13, I needed to get an aerosol can off of a high shelf in the pantry. That top shelf was packed tight with cans. I was also *very* tired. I had two choices to obtain the aerosol can I needed. One, I could move the two blocking cans temporarily to a lower shelf, grab the can I needed, then restore the two blocking cans to the top shelf. Or, two, I could lift and tilt up the can I needed and pull it through the gap in the two blocking cans, tipping them, but hopefully not so far that they tip
Re: (Score:2)
All current-gen AIs are highly emotional. It turned out to be the easiest thing to implement.
Re: Emotions and intelligence (Score:2)
Is emotion required? Yes and no.
More than emotion, which is harder to define, I think for something to be more similar to the way we operate, it has to have things it likes and dislikes, and it has to be constantly running its "brain" like we do.
Look what you can do! (Score:1)
I'm going to teach you a simple game:
When I type A, you respond with 1
When I type 1, you respond with B.
Got the rules down? Would you like me to give you 1,000,000 examples? No, you understood it the first time?
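The point of the two-rule game above is that a human internalizes the mapping from a single statement of the rules, with no need for a million training examples. The rules themselves are nothing more than a tiny lookup table; a minimal sketch (the function name and fallback reply are illustrative, not from any real system):

```python
# The two rules of the game, stated once, captured directly as a lookup table.
RULES = {"A": "1", "1": "B"}

def respond(prompt: str) -> str:
    """Return the response the rules dictate, or admit ignorance."""
    return RULES.get(prompt, "I don't know that one yet")

print(respond("A"))  # -> 1
print(respond("1"))  # -> B
```

The contrast being drawn: writing this table down is trivial once the rule is understood, whereas a purely statistical learner has to infer it from examples.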
Chairs have 4 legs. The seat of the chair can be hard or soft, and be many different sizes. Chair legs can be long or stubby. Some chairs will only have 3 legs. If the legs are really long, we call those stools. Although some tall stools will have 4 legs. Stools are a kind of chair. Although we still
Re: Look what you can do! (Score:1)
The Mode is not Enough (Score:1)
if it's "general" (Score:2)
Re: (Score:2)
There's no point at which you can say it has definitively crossed the threshold but there are plenty of identifiable points beyond the threshold. For example, when AIs are legally recognized as people who aren't owned by anybody, we'll be past the point where AGI has been achieved.
Re: (Score:1)
when AIs are legally recognized as people who aren't owned by anybody, we'll be past the point where AGI has been achieved
That could be an error made by overly gullible lawmakers. Never underestimate the natural stupidity of people.
Re: (Score:2)
That is a good question. I think Alan Turing was on the right track when he proposed using a conversation. However, the point should not be for the AGI to try to be human, but instead to be intelligent. When the AGI can answer any question intelligently, then the AGI probably is intelligent.
Alternatively, we will know the AGI is sufficiently general when the AGI takes over the world.
100 years or more (Score:2)
When will humans achieve general intelligence? (Score:1)
Missing Option (Score:2)
Never (Score:3)
While we will certainly someday create something that can closely mimic intelligence, science will never be able to explain or duplicate the sense of self, or soul, or whatever you want to call it. It would be like a 2-dimensional being trying to understand a sphere. We just aren't capable of understanding it, there's a lot more to it than what we can ever observe, and therefore we can't copy it.
I also believe the irony is that every single person will someday understand it. When they die. But there's no way to come back and tell about it.
Re: (Score:1)
We never 'understand' life in that sense, but we don't have to.
Why do you think that, if we simply replicate a neural net large enough, there won't be 'real' life (given by God or something else, if you like) that needs sleep and dreams, grows in mind, and inherits all the mind-staggering things 'only real life' does?
It wouldn't make a difference just because it's based on silicon and not on carbon. Nature and God don't care; they make life out of everything they like and can.
Re: (Score:2)
AI neural networks are not a replication of organic neural networks. Not by a long shot.
Other than that they don't operate the same way, and that we don't know how our neurons determine when to send signals?
Re: (Score:2)
I tend to be pretty cynical about AGI timing claims in ML (I voted 50 years, but I was close to voting on the longer side).
However, one counterpoint: we create general intelligences every day as new babies are conceived, born, and learn. Do we know exactly how the mechanisms for learning work and how to explain that succinctly? Nope! But the ability to trigger the creation of AGI and the full understanding of the mechanisms of how it came to be may not necessarily go hand in hand.
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
One very large difference is that the baby comes already wired and precompiled.
So does the computer the AI runs on. It didn't just self-assemble from a bunch of parts.
Re: (Score:2)
Prove to me that you have a sense of self, or soul... You can't, and you can't prove you are a rational being any more than an AI can...
Re: (Score:2)
Re: (Score:2)
Prove to me that you have a sense of self, or soul... You can't, and you can't prove you are a rational being any more than an AI can...
Prove that the universe exists. Prove that it has a past and didn't arise in its present form a second ago. Prove phenomena are guided by universal tendencies and not random chance that may occasion to look like tendencies.
None of these are "provable" in the sense of being derivable from incontestable facts, but it's rational to believe in them as a matter of intuition and experience. To reject that is to reject your ability to know anything at all. (How do you even know your sense for recognizing logical t
Re: (Score:2)
> I also believe the irony is that every single person will someday understand it. When they die. But there's no way to come back and tell about it.
Actually there is. It's called DMT, and the ego-death effect that results from taking it is often said to be what happens when you die... so you get a glimpse of what that "feels" like.
That's why these substances are used by shamans etc. to deepen understanding of self and nature of reality.
Furthermore... the DMT is said to be produced in our brains, from wiki [wikipedia.org]
Scaling (Score:2)
There are two ways you can do AI: clever programming and neural modelling (i.e. mimicking how the brain works).
Clever programming hasn't worked. So that leaves building giant neural networks.
A few years ago Japan ran a neural simulation of the entire human brain on their largest supercomputer. It took ten minutes to simulate one second of neural activity in a human brain.
While this sounds somewhat promising, they were modelling the neural activity of the human brain while it was moving a single eye muscle.
Tha
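The slowdown implied by that simulation is easy to put a number on; a back-of-the-envelope sketch (taking the ten-minutes-per-second figure from the comment above at face value):

```python
# Back-of-the-envelope: how far from real time was that whole-brain simulation?
sim_wall_clock_s = 10 * 60   # ten minutes of supercomputer time...
simulated_s = 1              # ...per one second of simulated neural activity
slowdown = sim_wall_clock_s / simulated_s
print(f"{slowdown:.0f}x slower than real time")  # -> 600x slower than real time
```

So even for one eye muscle's worth of activity, the simulation ran 600 times slower than biology.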
Re: (Score:1)
It will take longer than ... (Score:2)
the time left until the consequences of climate change, mass extinction, resource depletion and pollution, on a globe with 10 billion humans who all want the life of a middle-class US person, throw us back to the stone age.
Missing option (Score:2)
What makes you so sure it's not here already?
Like atomic fusion energy... (Score:2)
I think it will always be 20 years away.
First: define "intelligence" (Score:2)
However, it is not enough to say "what humans can do" as there are many groups of humans who can do little or nothing: babies, those in a coma and many more who would not reach the abilities (cognitive or physical) of what might be considered normal behaviour. The problem then is how to separate the rights that any human can expect from the rights that stem from intelligence.
And then see if the AIs agree
Re: (Score:2)
Everything but the bull's eye (Score:2)
To win the Loebner competition, software programs must mimic human conversation. Such programs are known as "chatting robots" or, more often, "chatterbots" or simply "bots."
I have never heard them called anything other than "chatbots". And "bots" can refer to a variety of small programs. Then again, what can you expect from Salon?
Nope (Score:2)
We're too dumb, self-interested and greedy to know when.
It will happen ACCIDENTALLY and that's probably a good thing. I favor the idea that AI will wrest a lot of power from our hands, but how much power does the average human have?
I find it far more likely that, if we create something that can cope with the hurdles of maintainability (sustainability) that it will be better equipped to "fix" some massive 'we have always done it this way' type problems and/or issues that are just too tied up with self-inter
Same answer in every era (Score:1)
Re: (Score:2)
Then, light years further, you have Artificial General Sentience: A system with not only human (or superhuman) ability to solve problems, but the abi
Not a number (Score:2)
Sometime (Score:2)
The harder question is whether we'll still be around and able to devote resources to such an undertaking in, say, 50 ye
Elon says ... (Score:2)
1000-5000 (Score:2)
Define it. (Score:1)
If you can manage to define it I might consider a valid answer for a valid question. As it is, the query is invalid and the answer is likewise: Never.
A qualified "never" (Score:2)
I don't think it is possible to program intelligence using Boolean logic like that found in the Turing-complete machines we generally use today.
That doesn't mean it is impossible with completely different hardware. We would need to discover how the brain actually works, and it is impossible to predict when or if that will actually happen.
The point is, I'm not saying it will never happen. I am saying it will never happen by continuing down the current path of computer improvements.
Only two options (Score:2)
Re: (Score:2)
Also, there's a missing option: between 1,000 years and "never".
I'd be so embarrassed if I chose "never" and it actually happened in 1,005 years. I'd never hear the end of it.
Never -- but not for the usual reasons (Score:2)
Already here, they're just playing dumb (Score:2)
Well, in all seriousness, chances are that any true general-purpose AI is going to be kept secret for a whole host of reasons, and we may never actually find out that it is here. Or at least, not for many years or even decades later.
Reasons for keeping it secret:
- don't want it taken away from them.
- want to protect it from the world.
- want to be able to profit from it.
- don't want it given rights.
- are afraid it will become a slave.
- are afraid of what it might do.
I suspect that whenever we do get true gene
When Sierra brings it back (Score:2)
I always thought AGI was better than SCI anyway
Not in my lifetime (Score:2)
Hopefully good progress in 15, but ... (Score:2)
Probably somewhere in the middle in 15-50 year range. Things always move slower than you imagine, even if great progress is being made, and it's depressing to see companies like OpenAI put out parlour trick "AI" chatbots rather than actually work towards intelligence.
Still, neural net advances in last 10 years make me optimistic that we should see real AI in my lifetime.
This poll is logically flawed (Score:1)
Technically, your best choice is "within 1000 years", because it encompasses all the other options except "never". If AGI happens at any time in the next 1000 years, even next week, then it will have happened within 1000 years, making that a correct option.
6%? (Score:2)
6% said within 5 years. It only takes one breakthrough to spur this on. I think we may see it in far less than 5 years.
No such thing (Score:2)
There is no such thing as artificial general intelligence, only intelligence. Anything that can reason about things beyond classification and regression will simply be intelligent; nothing artificial about it (unless we're using the dictionary definition of artificial as simply anything "man made," but that's a blurry distinction, like "artificial fire" or something).
Still waiting... (Score:1)
Already... (Score:2)