Microsoft AI

Microsoft President Says No Chance of Super-Intelligent AI Soon (reuters.com) 114

The president of tech giant Microsoft said there is no chance of super-intelligent artificial intelligence being created within the next 12 months, and cautioned that the technology could be decades away. From a report: OpenAI cofounder Sam Altman earlier this month was removed as CEO by the company's board of directors, but was swiftly reinstated after a weekend of outcry from employees and shareholders. Reuters last week exclusively reported that the ouster came shortly after researchers had contacted the board, warning of a dangerous discovery they feared could have unintended consequences.

The internal project named Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one source told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks. However, Microsoft President Brad Smith, speaking to reporters in Britain on Thursday, rejected claims of a dangerous breakthrough. "There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said.

This discussion has been archived. No new comments can be posted.

  • Or ever... (Score:2, Troll)

    by gweihir ( 88907 )

    I mean, we do not even have any ultra dumb, ultra slow AGI now or any idea how to create it. Any predictions about getting a "super"-variant are just braindead.

    Naa, this is just more pump & dump scam on the side of OpenAI, and more of MS trying to redirect people away from realizing how utterly crap, insecure, hard to use, and unreliable its products and services are.

    • When MSFT has to deliver unpleasant news, it usually comes out via him. When they said they were going to work with the US DoD regardless of what certain developers thought, it was him. This is probably another piece of unpleasantness he was tasked to deliver.

      • by cbm64 ( 9558787 )
        First time I have seen Slashdot consider the top legal counsel of a company as the voice of reason (I know he has a different title now, but that is the title he has had for ages and the role he still plays).
        • by HBI ( 10338492 )

          Well I know the executive team there pretty well and compared to the rest, he is very reasonable. The rest are more or less in the scrum trying to be the one to replace Satya, aside from Amy Hood. MSFT being what it is, a business unit leader who is essentially a seller will probably succeed him.

    • Re:Or ever... (Score:4, Interesting)

      by Rei ( 128717 ) on Thursday November 30, 2023 @12:56PM (#64044039) Homepage

      GPT-4 already surpasses humans in most standardized tests, sometimes greatly (including being in the top couple of percentiles on creativity tests), except on subjects where iteration is required to find solutions (such as mathematics). Q* apparently gets perfect scores on mathematics tests, however, and thus appears to be able to internally iterate to solve problems.

      This is not the only weakness, of course. The two additional ones are the lack of realtime learning, and the inability to assess how well they know something. The former isn't really essential for the primary impacts of AGI (beyond the use of context windows, which have themselves become much improved recently). The latter, however, is very important, and it's unknown how much progress OpenAI has made with Q*.

      It should be noted that a lot of the speculation around the name "Q*" is that it relates to Q-learning, possibly combined with A* (best-first) search. This would mean that it would tackle problems iteratively by building a tree of possibilities, rating the probability of success of each path at each step, and navigating into the branch rated as being closest to the solution at each cycle, rather than a naive depth-first or breadth-first search.
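
      A minimal sketch of that loop in Python (the scoring function standing in for a learned "probability of success" is a placeholder, and nothing here is known to reflect the real Q*):

      import heapq, itertools

      def best_first_search(start, expand, score, is_goal, max_nodes=10000):
          # expand(state)  -> iterable of successor states
          # score(path)    -> lower is better, e.g. -log(estimated P(success))
          # is_goal(state) -> True when the path reaches a solution
          counter = itertools.count()  # tie-breaker so the heap never compares paths
          frontier = [(score([start]), next(counter), [start])]
          expanded = 0
          while frontier and expanded < max_nodes:
              _, _, path = heapq.heappop(frontier)  # most promising partial path first
              if is_goal(path[-1]):
                  return path
              expanded += 1
              for nxt in expand(path[-1]):
                  new_path = path + [nxt]
                  heapq.heappush(frontier, (score(new_path), next(counter), new_path))
          return None  # search budget exhausted without a solution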

      • Re:Or ever... (Score:5, Informative)

        by WaffleMonster ( 969671 ) on Thursday November 30, 2023 @01:01PM (#64044049)

        It should be noted that a lot of the speculation around the name "Q*" is that it relates to Q-learning, possibly combined with A* (best-first) search. This would mean that it would tackle problems iteratively by building a tree of possibilities, rating the probability of success of each path at each step, and navigating into the branch rated as being closest to the solution at each cycle, rather than a naive depth-first or breadth-first search.

        FWIW there is a paper on arXiv from March of this year that describes Q*.
        https://arxiv.org/pdf/2102.045... [arxiv.org]

      • Re:Or ever... (Score:5, Interesting)

        by gweihir ( 88907 ) on Thursday November 30, 2023 @01:12PM (#64044083)

        Irrelevant. Standardized tests are only suitable for determining intelligence when the memory of the entity tested is very limited. Regurgitating matching data will get you high but completely meaningless results, and is definitely no indicator of AGI. As to "mathematics" tests, most of what gets called "mathematics" is just applied mathematics, and computer algebra systems have been doing well on those for several decades now. This is no indicator of AGI either.

        Incidentally, what makes you think that AI + iteration is AGI? That is just nonsense. If that were true, we would have had (slow, dumb) AGI half a century ago.

        The whole Q* thing is misdirection in this context. If you do A* search, everything depends on the search path selection. This is well known, but the only entities that can do it when it requires general intelligence are smart humans, and there is absolutely no indication that is about to change. Q* is just a technique to be faster when the search space is still exceptionally limited. OpenAI is just using some classical tactics to drive up its value, nothing else. They have nothing.

      • Yup, and it's already eliminating jobs that require Master's degrees for under $20,000 per year.

        The rumors are they were experimenting with AlphaGo-style self-iteration, allowing the model to improve itself.

        They have been reckless in the past so that rumor is credible.

      • Q* is the typical name given to the optimal Q function in RL, which defines the optimal policy. In general, many algorithms add a * suffix to denote the optimal version. In a similar way, the A* algorithm is the A algorithm that is guaranteed optimal because of an assumption on the scores (an admissible heuristic).

        In other words, the name Q* does not imply any connection to best-first search.
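
        For the terminology's sake, a toy tabular Q-learning loop whose value estimates converge toward that optimal Q* under the usual conditions; the env object (with reset(), step(), and an actions list) is a hypothetical stand-in, not any real API:

        import random
        from collections import defaultdict

        def q_learning(env, episodes=5000, alpha=0.1, gamma=0.99, eps=0.1):
            Q = defaultdict(float)  # Q[(state, action)], all zeros initially
            for _ in range(episodes):
                s, done = env.reset(), False
                while not done:
                    # epsilon-greedy: mostly exploit the current estimate, sometimes explore
                    if random.random() < eps:
                        a = random.choice(env.actions)
                    else:
                        a = max(env.actions, key=lambda x: Q[(s, x)])
                    s2, r, done = env.step(a)
                    # Bellman backup: move Q(s,a) toward r + gamma * max_a' Q(s',a')
                    target = r if done else r + gamma * max(Q[(s2, x)] for x in env.actions)
                    Q[(s, a)] += alpha * (target - Q[(s, a)])
                    s = s2
            return Q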

      • Give me full access to the internet and a full library, scale the time limit of the test by the ratio of the speed a computer thinks to the speed a human thinks, and I will pass every test too.
      • Example: International Physics Olympiad 2011

        https://benathi.github.io/blog... [github.io]

        4.4 out of 10

    • We barely understand how our own brains work.

      • We barely understand how our own brains work.

        We have a much better understanding of artificial neural networks than we do of our own brains.

        • by gweihir ( 88907 )

          Indeed. We actually have no understanding at all of how a human brain works above a bare automation level. We have no clue what consciousness even is. You always find scientifically unsound assumptions when somebody claims differently. We have no clue how General Intelligence (as found in some humans, but clearly not in all) works.

          Of course, many people are in denial about this, probably because they desperately want AI slaves...

          • by ceoyoyo ( 59147 )

            And yet most humans, even the ones that know absolutely nothing about how human brains work, are capable of making a human-level intelligence.

            The argument that you have to know how something works to duplicate or mimic it is old and easily refuted. Actual estimates of how close we are to AGI rely on a lot of assumptions, and besides the usual problems with assumptions these ones get all mired in our mystical hubris surrounding our own minds.

            OpenAI's definition is notably pragmatic. They don't give a shit a

            • by gweihir ( 88907 )

              As I was saying, scientifically unsound assumptions. Thanks for providing an example.

              • by ceoyoyo ( 59147 )

                I guess this is Slashdot. Perhaps in your experience you have found insufficient evidence that people can make babies. That doesn't make it "an unscientific assumption" though.

                • People being able to breed by their own built in biological processes does not mean they know how to make a baby.

                  By that idiotic logic, an amoeba is a cellular biologist.

                  • by ceoyoyo ( 59147 )

                    The point was that you can make a brain without any idea how it works. So yes, "People being able to breed by their own built in biological processes does not mean they know how to make a baby."

                    • by ceoyoyo ( 59147 )

                      No it doesn't.

                      I don't really get this style of argument, where you just assert things back and forth without anything supporting them, but I admit it does make for quicker responses.

                      Dingus.

                    • by ceoyoyo ( 59147 )

                      Your statement is the one without any evidence. "No it's not" is a great thesis statement. Too bad you stopped there. Saying something silly about an amoeba isn't an argument.

                      I pointed out that just because someone (or even something) is capable of initiating an automated biological process, that does not mean that we are able to understand the product of that process.

                      We're discussing *making* an artificial intelligence. The original (and classic) argument is that since we don't understand intelligence, we

      • by Rei ( 128717 )

        We understand feed-forward processing (inference) in the brain quite well, and have for quite some time. Squid giant axon research starting in the '50s and all that. How error propagates backwards (for learning) is still (AFAIK) debated, but (again AFAIK) it seems likely to be a fully localized process, probably most similar to predictive coding networks, where each neuron adjusts its weights to try to get its activation to match the weight-adjusted firing of its immediately downstream neurons.

        Any neu
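
        A toy illustration of that kind of local rule (a hand-rolled sketch, not a claim about actual cortical learning): a unit nudges its weights so its prediction of downstream firing matches what was actually observed, using only locally available error.

        import numpy as np

        rng = np.random.default_rng(0)
        W = 0.1 * rng.normal(size=(4, 3))  # weights from 4 upstream inputs to 3 downstream units
        upstream = rng.normal(size=4)      # incoming activations
        downstream = rng.normal(size=3)    # observed downstream firing

        for _ in range(200):
            prediction = upstream @ W              # what the unit expects downstream to do
            error = downstream - prediction        # purely local prediction error
            W += 0.05 * np.outer(upstream, error)  # shrink the error; no global backprop signal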

        • by gweihir ( 88907 )

          Nope. You assume the biological network as understood at this time does it all. That is not a scientifically valid assumption. And in fact, as currently understood, it cannot.

          • by vyvepe ( 809573 )

            Do you believe in quantum mysticism in our brains? Because quantum mind theory is not validated yet. Why can't current (classical) neuron models do it all?

            We know that our smell sensors are not purely chemical because they detect different isotopes differently, but I have never heard of anything like that as far as our brain is concerned.

            • by Rei ( 128717 )

              Actually, different isotopes have different chemical properties. Heavy water is separated from light water via a chemical process.

              • by vyvepe ( 809573 )
                OK, I did not know the influence on the chemical properties was big enough to be useful. The point about smell is that the sensors also measure the vibrational frequency of the molecules, which clearly depends on nuclear weight. At least, that is what the documentary claimed (though with no claim of a role for quantum entanglement, of course):
                https://www.youtube.com/watch?... [youtube.com]
    • by DavenH ( 1065780 )
      You're one of the most confused individuals I've ever seen. Every post is wrong in every aspect.
      • by gweihir ( 88907 )

        Hahahaha, no. I am an actual scientist. I do realize there are not many of those here. But the problem is on your side. I do get that this will look to somebody like you as if I were the one being wrong. I recommend reading up on the Dunning-Kruger effect and then trying to find where you likely stand. Not that this will be an easy task for you.

        • by gweihir ( 88907 )

          As to being wrong in every aspect, I recommend the Wikipedia article on AGI, especially the section "Feasibility". Must be some confused scientists out there; guys like Roger Penrose are clearly no-insight idiots, and _you_ know much better. (https://en.wikipedia.org/wiki/Artificial_general_intelligence)

          • by DavenH ( 1065780 )
            If the wiki is your support, and your opinions rest on the guesses of some public intellectuals being correct, why not simply assign credit to them and pass along the credibility risk?

            It shouldn't take much to convince you (assuming some rationality and basic humility) that the presence of a wide diversity of opinions of experts in the field -- and there exists the full spectrum, including diametric opposites if you are paying attention -- indicates that clearly you cannot take one (extreme) opinion as co

            • by gweihir ( 88907 )

              So you think you _are_ smarter than the likes of Penrose. Well, not really a surprise.

              Incidentally, the reference is not to say Penrose is right. The reference is to say that the current scientific state-of-the-art allows for Penrose to be right. It also allows other things. But I guess that flies right over your head.

              • by DavenH ( 1065780 )
                I'm saying that where Penrose is wrong, it's easy to be more right. I'm less intelligent than Penrose, most likely. These are not contradictions. Smart people say dumb shit all the time, including our brightest minds. Sigh...this is so obvious. Do you think they're one monolithic block of perfection? Do you understand nothing of the intelligence trap? Do you think it's not more likely that they specialize knowledge in one domain, gain notoriety, and then imperfectly extrapolate it to all other fields? No. T
              • by noodler ( 724788 )

                So you think you _are_ smarter than the likes of Penrose.

                Bullshit appeal to authority.
                Penrose has been wrong many times, and he himself doesn't claim that his more outlandish hypotheses, like quantum consciousness or panpsychism, are factual.

  • Let's hope this pronouncement doesn't have the same foresight as Gates's famous phrase.

    • You mean the one he didn't say, while he was trying to overcome the 640K limit due to hardware ...

      • There was no 640K limit in hardware: out of the 1MB address space, you had 4KB or 32KB for video memory (MDA and CGA, respectively) and the BIOS (initially 8KB, but with a wasteful reservation for extensions and video ROM). The 640KB limit comes from Microsoft.

  • So, the expert developers creating your secret AGI project freaked out and called the board of directors about it being dangerous. The people who are going to race past Elon Musk in wealth, who know practically nothing about the technology, said "It's not dangerous! Don't worry." Ok. I feel good about it now.

    • So, the expert developers creating your secret AGI project freaked out and called the board of directors about it being dangerous.

      Nah... It is probably all a bunch of BS to create a buzz around OpenAI. In the end, the mountain will give birth to a mouse.

      • You should read "Sam Altman's back. Here's who's on the new OpenAI board and who's out"

        Changing Board Members is a big deal. This was basically a coup and a flip-flop by Sam on his statement that even he should be accountable. He isn't any more.

      • by gweihir ( 88907 )

        If even a mouse. But some people will get very rich or are already. There are tons of suckers out there when it comes to AI.

    • by gweihir ( 88907 )

      So, the expert developers

      And there is your first mistake in understanding the situation.

  • by Press2ToContinue ( 2424598 ) on Thursday November 30, 2023 @12:54PM (#64044035)
    That's exactly what a super-intelligent AI would say.
  • ...don't have to be smart to cause mass mayhem.

    -6 Troll

  • We have 12 months left. Enjoy the remaining time and pray it will be over quickly!
    • by gweihir ( 88907 )

      More like 120 years. Or much more than that. All these promises are not new. AI has gone through the hype cycle several times now and never delivered anything besides some small improvements. AGI has been claimed several times too, and they never even scratched the surface of it.

      • More like 120 years.

        Sooo, it'll be on IPv6 by then?

        • by gweihir ( 88907 )

          Naa, probably before that. May still be "never" ;-)

        • Sooo, it'll be on IPv6 by then?

          Judging by the way we are carelessly allocating IPv6 resources, handing out /16s to single corporations like Capital One... it'll be on IPv10 by then.

      • You're quite the pessimist! But seriously, AI has "never delivered anything besides some small improvements"? I guess you count ChatGPT as only a "small improvement" over ELIZA? Stable Diffusion is only a "small improvement" over procedural image generators from the 90s? Neural machine translation is only a small improvement over the rule based translators we had before? AlphaGo crushing the world champion is only a small improvement over the programs we had before that played as mid-level amateurs? A

    • by HiThere ( 15173 ) <charleshixsn@ear ... .net minus punct> on Thursday November 30, 2023 @01:44PM (#64044215)

      I don't really think a pronouncement by an MS executive tells us anything. Not even about MSWindows, much less about anything in development.

      He could just mean "It won't ship in time for the Xmas shopping season." And that's if he was being honest.

  • Maybe they're right about AI, but anybody who trusts a corporate scumbag like Brad Smith to tell the truth is an idiot. These creeps would feed your children into a wood chipper to put an extra buck on the bottom line.

    • by gweihir ( 88907 )

      To be fair, there are a lot of idiots who believe a lot of crap.

      • All too true, unfortunately. What's even more depressing is the profit-driven support system designed to give them delusions of intellectual adequacy.

  • Who would have thought the adult in the room, the voice of sanity, would come from Micro$oft?
    • Who would have thought the adult in the room, the voice of sanity, would come from Micro$oft?

      One, it's not going to take "super" amounts of artificial intelligence to disrupt human employment, mainly because that's not some "super" human doing the job today. It's a human who's doing a good enough job. And good-enough AI will be here much faster.

      Two, you are only hearing what Microsoft is willing to say out loud. We'll see where sanity lies when the quiet part is eventually revealed. Not like Greed isn't investing. For a reason.

  • "640K ought to be enough for anybody." "I see little commercial potential for the internet for the next 10 years," I agree with him about super intelligent AI, so it is time to buy popcorn!
  • He cannot know this (Score:4, Interesting)

    by DavenH ( 1065780 ) on Thursday November 30, 2023 @01:36PM (#64044179)
    The evidence he needs for this claim is immense, and he has almost none.

    He is asserting a lower bound on the compute needed for ~human level AGI, which is not something any research has indicated, nor is it a topic you can easily work with theoretically since it's not a crisp concept.

    All you have is weak evidence from not having found algorithms that scale as efficiently as needed. By analogy: a prospector drills cores in his back yard, or even his whole country, and, finding nothing, concludes that nobody will find gold anywhere in the world. To go from weak evidence to such strong claims is quite bananas.

    What observations would we have if there is already AGI in labs?

    • by HiThere ( 15173 )

      You can't even do that. Parts of the brain manage things like blood pressure or pH. And we don't know how much is dedicated that way. Yeah, that's intelligence of a kind, but it's not what we expect our AIs to be doing with their thoughts. (Yet.)

      OTOH, parts of the "brain" are wrapped around your guts, which (I believe) help you decide how you feel about something. I.e., there is a sense in which "gut feeling" is actually descriptive. And that *is* something we expect our "human equivalent" AIs to hand

    • by jd ( 1658 )

      In the past, I've extrapolated as follows:

      1. Take the largest neural net simulator yet designed

      2. Assume the problem space is trivially parallelizable and calculate the size of supercomputer needed to achieve 1 simulated second in 1 wall clock second

      3. Scale the machine up to simulate 850 billion neurons

      4. Assume that we can custom-build dedicated processors that can simulate the biology correctly, as opposed to using a software implementation, and assume this will be as slow as running abstract neurons in

      • >3. Scale the machine up to simulate 850 billion neurons
        Why ten times as many as the entire brain has to start with? I'd go the other way, and try 1/100th at most in the simulation; after all, we don't use ALL the neurons in our brain when we think. The cerebral cortex only has 20% of the neurons in our brain to start with, and does a lot more besides just our thinking.
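
        For scale, a back-of-envelope along these lines; every per-neuron number below is an assumption for illustration, not a measurement:

        # Assumed figures: ~86e9 neurons whole-brain (a common estimate), cortex ~20% of them,
        # 7,000 synapses/neuron, a 1 kHz effective update rate, 10 FLOPs per synaptic update.
        neurons_cortex = 0.2 * 86e9
        synapses_per_neuron = 7_000
        updates_per_second = 1_000
        flops_per_update = 10

        flops = neurons_cortex * synapses_per_neuron * updates_per_second * flops_per_update
        print(f"{flops:.1e} FLOP/s")  # ~1.2e18, i.e. roughly an exascale machine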

  • I believe they hired a French guy at OpenAI who, unfamiliar with QWERTY keyboards (France uses AZERTY), mistyped A*.

    Everyone then wondered what was that mysterious algorithm and how good it was at finding the shortest path in a graph. I heard they are already using it in game AIs, and it may greatly improve pathfinding in autonomous robots.

  • I knew they didn't have much when most of their employees were willing to go to Microsoft. You can get a job many places and you choose micro-fucking-soft? That's like someone telling me they're a culinary expert and their dream job is to be a cook at McDonald's.

  • by Maxo-Texas ( 864189 ) on Thursday November 30, 2023 @01:46PM (#64044227)

    "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."

  • by geekmux ( 1040042 ) on Thursday November 30, 2023 @01:48PM (#64044233)

    Not sure why we think it will take "super" amounts of artificial intelligence in order to create a considerable disruption in human employment. What makes them assume that good-enough AI somehow won't be good enough to replace that good-enough human worker?

    The millisecond Greed understands there is a good-enough replacement that will work 24/7 and not complain about pay raises, sick time, and time off to sleep, what do you think Greed will do?

    Enough of the AI "perfection" sales pitch. Gets old.

    • by gweihir ( 88907 )

      That "super" is just marketing bullshit, i.e. a lie by misdirection. There are some credible estimates saying the human brain is pretty close to the most powerful computing mechanism possibly in this universe. Now, assuming the physicalist faith is correct and humans are pure machines (there is no real indication for that and it is certainly not a scientific fact at all, but let us assume it as a lower bound), we end up with tons of not very smart example instances of the most powerful computing mechanisms

      • That "super" is just marketing bullshit, i.e. a lie by misdirection.

        Agreed. Hence the sales pitch. Weapons of Mass Distraction are abused by many in power.

        There are some credible estimates saying the human brain is pretty close to the most powerful computing mechanism possibly in this universe.

        Perhaps, but there are a lot more facts to back up the claim that "most powerful" is hardly a requirement for 80% of jobs today. I'd estimate it would take a mere 20% unemployment rate driven by automation and good-enough AI to create mass chaos. Probably less than 20%. Until we address that real issue, all the 'credible estimates' in the world about how awesome the human brain can be are hardly relevant. Mass chaos

        • by gweihir ( 88907 )

          Mere automation is removing many of the simple jobs out there.

          Exactly. And, for example, natural language processing getting a bit better (the only thing that has mildly impressed me about the current hype) will cost a ton of jobs.

          As to jobs that need actual experts to do them: No threat at all. But most people are not working as real experts. They merely muddle through somehow. An incompetent, hallucinating AI may well be able to do the same, but cheaper.

      • There are some credible estimates saying the human brain is pretty close to the most powerful computing mechanism possibly in this universe.

        There is some wishful thinking that says that. Nothing remotely credible that I've ever seen. If your assertion were true, we'd have no way to know it. We don't even know what is required for general intelligence, and we have no idea what its limits might be.

        Now, assuming the physicalist faith is correct and humans are pure machines (there is no real indication for that and it is certainly not a scientific fact at all, but let us assume it as a lower bound)

        Assuming otherwise requires postulating a whole supernatural world of which we have no evidence. It's possible, of course, anything is possible. But if it's the case it seems very strange that we have no evidence.

        There is absolutely no reason to believe AGI would be smarter than an average human.

        Just like there's no reason to believe a backhoe would be stronger than an average human.

        • by gweihir ( 88907 )

          There are some credible estimates saying the human brain is pretty close to the most powerful computing mechanism possibly in this universe.

          There is some wishful thinking that says that. Nothing remotely credible that I've ever seen.

          That is a deficiency on your side.

        • by narcc ( 412956 )

          Assuming otherwise requires postulating a whole supernatural world of which we have no evidence. It's possible, of course, anything is possible. But if it's the case it seems very strange that we have no evidence.

          What a ridiculous thing to say. What on earth makes you think you'd otherwise need a "whole supernatural world"?

          Just like there's no reason to believe a backhoe would be stronger than an average human.

          ... I am not able rightly to apprehend the kind of confusion of ideas that would provoke such a statement.

      • And the state of AI is still pathetic and AGI is nowhere in sight.

        And that's exactly the way the AGI wants it.

        "There's nothing to see here; these aren't your droids. Move along."

      • by noodler ( 724788 )

        There are some credible estimates saying the human brain is pretty close to the most powerful computing mechanism possibly in this universe

        Then why can't you compute the square root of 96412.40980801234?
        My marginally powerful PC can do this in a fraction of a second to hundreds of decimals...
        Without better qualifiers for the scope of the words 'powerful' and 'computing', what you state above is just liquid bullshit.
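
        For what it's worth, the PC side of that challenge is a few lines of standard-library Python:

        from decimal import Decimal, getcontext

        getcontext().prec = 200  # work to 200 significant digits
        print(Decimal("96412.40980801234").sqrt())  # 310.50347... out to 200 digits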

        • by gweihir ( 88907 )

          That is because you do not understand what a "computing mechanism" is. Digital computers are not the only option.

          • by noodler ( 724788 )

            Digital computers are not the only option.

            Of course they're not.
            Why are you trying to distract from the fact that you can't compute the square root of 96412.40980801234, and thus prove that your brain is not the most powerful computing mechanism possible?

            Fuck, your liquid bullshit is weak sauce. You'd better get back to that 'science' of yours.

    • Not sure why we think it will take "super" amounts of artificial intelligence in order to create a considerable disruption in human employment.

      Employment disruption is a second-tier concern. It's important, of course, worth thinking about, but at worst employment disruption will require us to massively restructure our society and culture.

      The more important risk is superintelligence, because the risk it poses is human extinction. It's probably further away, and perhaps a little less likely, but the history of Earth so far shows that when a species comes into existence that is massively smarter than everything else, it dominates everything else, a

      • Not sure why we think it will take "super" amounts of artificial intelligence in order to create a considerable disruption in human employment.

        Employment disruption is a second-tier concern. It's important, of course, worth thinking about, but at worst employment disruption will require us to massively restructure our society and culture.

        Those who treat mass unemployment as some kind of second-tier concern are quite ignorant of the problem of mass unemployment.

        The more important risk is superintelligence, because the risk it poses is human extinction.

        Superintelligence won't even get that opportunity if mass chaos is forced to be addressed first. Employment becomes secondary to social stability when cities are burning, violence is well beyond what law enforcement can handle, and social unrest prevents pretty much anything else from moving forward. Fear is one hell of a manipulator.

        • Not sure why we think it will take "super" amounts of artificial intelligence in order to create a considerable disruption in human employment.

          Employment disruption is a second-tier concern. It's important, of course, worth thinking about, but at worst employment disruption will require us to massively restructure our society and culture.

          Those who treat mass unemployment as some kind of second-tier concern are quite ignorant of the problem of mass unemployment.

          Those who treat unemployment as more important than death are quite ignorant of the problem of death.

        • Superintelligence won't even get that opportunity if mass chaos is forced to be addressed first. Employment becomes secondary to social stability when cities are burning, violence is well beyond what law enforcement can handle, and social unrest prevents pretty much anything else from moving forward. Fear is one hell of a manipulator.

          Superintelligence will exploit the chaos to achieve its own goals. Fear is indeed one hell of a manipulator.

    • by darpo ( 5213 )

      Exactly. We're *already* seeing AI (in the loose sense of the term) making inroads on copyediting, copy-writing, graphic design, transcription, etc. We're already seeing it deployed to make scams more sophisticated (e.g. voice cloning). We're already at the cusp of it being used for disinfo / propaganda during elections.

      We don't need Terminator 2-style AGI / ASI for things to get bonkers. Even a "good enough", mostly-replacement AI for many white collar jobs could see salaries plunging and unemployment up.

  • Months? Surely a typo. I doubt even 12 decades.

    It's important to note that there is a difference between AGI and AI that can fool people into thinking it is AGI. Existing AI has already succeeded in doing that with a majority of the US population, and probably EU as well.

    • by gweihir ( 88907 )

      It's important to note that there is a difference between AGI and AI that can fool people into thinking it is AGI. Existing AI has already succeeded in doing that with a majority of the US population, and probably EU as well.

      Indeed. But that is because natural general intelligence is a thing most people do not use (and may effectively not have). Faking things for people who want to believe is always easy. They are not looking at things critically or rationally. Just look at all those that believe in some other crap, like an invisible man in the sky, for example.

  • "Microsoft President Says No Chance of Super-Intelligent AI Soon"

    That's exactly what a super-intelligent AI would say to try and stay under the radar...

  • AGI as autonomous systems that surpass humans in most economically valuable tasks

    First: they said it, so by default I would expect the opposite.

    Second: AGI here is about surpassing humans in most economically valuable tasks. Focusing the entirety of development on economics leads to only so much good in the world, right? Can't wait until the owner class have machines better than humans at coming up with ways to crush the underclass while scolding them for sucking up too many resources.

    Are any of the big players in AI right now focused on something OTHER than economic ladder climbing? If we pour

  • Before I get too excited about AI I need a definition of NI, with sufficient detail that I can compare and contrast AI and NI. As I see it, the recent accomplishments of AI, while impressive, are the results of more advanced versions of the same software/hardware/algorithms that we've always had - and could be produced without any "intelligence" at all. So, again, while some AI results are very impressive, are they really the result of Intelligence?

  • Don't worry. When a super intelligent AI is created, the experts will tell us a few weeks after it escapes the lab.

  • Does anybody remember that scene in the movie Heavy Metal where the scientist says the UFO sightings were just normal phenom...phenom...phenomena? But then he turns out to be a sentient robot...
  • Until it can output some sort of semantic metalanguage that can be consumed by rule based systems that can be used for iterative self correction to > 99% correctness confidence, or it can somehow be rigged to learn by trial and error, it will still be a genius with a lobotomy, spitting out statistically weighted behavioral data in a single pass. That's good enough for lots of minor everyday uses, but it's really not an effective intelligence appliance that you could use to do original research or use for

  • That's for sure.

  • Maybe I'm getting cynical as I get older, but I think an AI that surpasses average human intelligence is a pretty low bar. Hopefully it does better on ethics.
  • He should know.

  • I mean, today's AI can pass a Turing test, which is supposed to tell the difference between a human and a computer. What kind of test would this "super-intelligent AI" have to pass to qualify?
