AI

OpenAI Researchers Warned Board of AI Breakthrough Ahead of CEO Ouster (reuters.com) 186

An anonymous reader quotes a report from Reuters: Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters. The previously unreported letter and AI algorithm were a key development ahead of the board's ouster of Altman, the poster child of generative AI, the two sources said. Before his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft in solidarity with their fired leader. The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman's firing. Reuters was unable to review a copy of the letter.

According to one of the sources, long-time executive Mira Murati mentioned the project, called Q*, to employees on Wednesday and said that a letter was sent to the board prior to this weekend's events. After the story was published, an OpenAI spokesperson said Murati told employees what media were about to report, but she did not comment on the accuracy of the reporting. The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans. Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math -- where there is only one right answer -- implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe. Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend. In their letter to the board, researchers flagged AI's prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by superintelligent machines, for instance if they might decide that the destruction of humanity was in their interest.
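To make the "statistically predicting the next word" point concrete, here is a toy sketch in Python (the context, vocabulary and probabilities are invented for illustration, and this is not OpenAI's method): the same sampling mechanism that makes language generation flexible is also why answers can vary on a question that has exactly one right answer.

```python
import random

# Toy next-token "model": a made-up probability table over possible next words
# for one context. Real LLMs learn such distributions from huge text corpora.
toy_model = {
    ("two", "plus", "two", "equals"): {"four": 0.6, "five": 0.25, "many": 0.15},
}

def next_word(context, greedy=True):
    """Pick the next token greedily (most probable) or by sampling."""
    dist = toy_model[context]
    if greedy:
        return max(dist, key=dist.get)              # deterministic, most likely word
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs)[0]  # sampled, so answers can vary

context = ("two", "plus", "two", "equals")
print(next_word(context))                # "four"
print(next_word(context, greedy=False))  # usually "four", sometimes "five" or "many"
```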
Last night, OpenAI announced it reached an agreement for Sam Altman to return as CEO. Under an "agreement in principle," Altman will serve under the supervision of a new board of directors.
This discussion has been archived. No new comments can be posted.

OpenAI Researchers Warned Board of AI Breakthrough Ahead of CEO Ouster

  • by Rosco P. Coltrane ( 209368 ) on Wednesday November 22, 2023 @11:42PM (#64025927)

    Even in its current dumb-as-a-brick, hallucination-prone form.

    The danger doesn't come from AI but from greedy corporations looking to stick it everywhere they have any chance of replacing an employee that draws a salary, even if the AI does a worse job, because cutting costs is way more important to them than the quality of the products or services.

    AI will enshittify the whole of society and put everybody out of a job, not because AI is good (it isn't - yet) or because it's AI's fault but because capitalists will stop at nothing to stay competitive. They'll all race to the bottom even if it means destroying society.

    • Re: (Score:2, Interesting)

      by Brain-Fu ( 1274756 )

      Yes, greedy people will seek to use this for greedy reasons. That's no reason to halt development, however. Though I don't see a line in your post saying it should.

      Anyway, every great technological breakthrough had greed motivating it, this one is no different.

      I also wanted to say that these people did not invent super intelligence. They invented something that doesn't even remotely qualify, but (with a little imagination added) might maybe someday lead to it. And they got all freaked-out over that. Th

      • Re: (Score:3, Interesting)

        This incremental step is harmless in-and-of itself, and advances our knowledge of something important. Their fear is not justified.

        Tell that to the millions of truck drivers, taxi drivers, secretaries, accountants, translators, school teachers, low-level programmers, writers, artists who will lose their jobs, and to the millions of people whose jobs can't be replaced by AI, like carpenters, plumbers or electricians, but who will lose a ton of business because all their customers are jobless and on the dole, and unable to pay them.

        I'm sure they'll all find this incremental step was really harmless.

        • by getuid() ( 1305889 ) on Thursday November 23, 2023 @03:10AM (#64026131)

          This is a problem of politics, not of AI.

          The argument for people going to work, generally, is "because there's no free lunch", but when AI does all the work, then that's the very definition of a "free lunch". The failure to let everyone join the table isn't AI's, it's ours and ours alone.

          In other words: coupling the ability to survive to a job, even if demonstrably that job, and the productivity it entails, doesn't need a human, is sick as fuck and not an AI related problem.

          • You have to make the welfare state first: if a lot of people lose their jobs and don't get any money, no one will be able to buy the goods produced in the AI factories. The only solution is that the rich owners, in some other way than paying wages, put money back into the hands of the consumers.
            • I don't think it's a "what comes first" question: making one, or the other, isn't a singular event. You can't "make the welfare state" at a moment's notice, with a finger snap. It takes time. And during all that time, the resources needed to actually pull it off are... well, not yet available, if we're talking about what AI will bring but is not yet allowed to because "you need to make the welfare state first."

              The only solution is to make them both at the same time, hand in hand, gradually.

              However, ar any

        • by Rei ( 128717 ) on Thursday November 23, 2023 @08:11AM (#64026411) Homepage

          "... because their customers are jobless and on the dole"

          Yeah, that's not how this works.

          Look at the industrial revolution, for example. Machines could do the work of hundreds or even thousands of manual labourers. So did unemployment jump to over 99%? Not even remotely. The more efficient production of goods and services led to correspondingly more consumption of goods and services. The whole economy expanded, and the "unemployment" created by jobs lost to machines translated in realtime to new jobs in the new economy.

          I'd also add that contrary to popular perception, the industrial revolution was unambiguously good for quality of life in essentially every metric. Hours worked per week, which had been steadily rising, dropped once machines became dominant. Mean wages soared in purchasing power terms. Poverty plunged; the number of people unable to afford food, clothing and shelter plunged. Employment surged in fields like research, medicine, even the arts, as the percentage of resources required to focus on simply keeping the population alive and functional up through breeding age dwindled.

          Increasing the efficiency of production is good. If AI can increase the efficiency of production, that's a good thing. Again, it's not going to lead to mass unemployment, because people just consume more until you're back up to maximal employment. Only once AI can do everything better than humans do you get to a high-unemployment situation.

          I know the response to this is, "Well, what about all the individual tragedies of people losing jobs they loved to AI?" And to that I'd say, yeah, that was the story of the industrial revolution too. Weavers are the classic case - they spent their whole life developing their trade, only to see their bosses teach machines to copy their patterns and make garments (which they criticized as being "of inferior quality"). They were literally burning down factories and attempting assassinations, they were so mad over this. But do YOU wish that clothes today were still all hand-weaved and hand-sewn and it takes 12 hours of work to make a pair of pants, meaning that if you want to maintain a median wage of ~$20/hr then pants cost $480 before taxes, raw materials, and profit? And of course raw materials themselves would be much more expensive without automation.

          Again: efficiency is good. Do hand-made things as an art or leisure, sure, absolutely - and some people will spend their surplus income on that specifically for the human connection. But for bulk consumed products and services, efficiency is critical for a high societal quality of life.

          Anyway, back to the OpenAI topic: for anyone trying to follow this saga, I'd STRONGLY recommend this article [substack.com] about what appears to have gone on behind the scenes, and what's likely going forward.

          • by dgatwood ( 11270 ) on Thursday November 23, 2023 @10:11AM (#64026613) Homepage Journal

            Increasing the efficiency of production is good. If AI can increase the efficiency of production, that's a good thing. Again, it's not going to lead to mass unemployment, because people just consume more until you're back up to maximal employment.

            This is one of those spots where I point out that past performance is no guarantee of future returns.

            There are two problems with your theory. The first is that in previous rounds of automation, the jobs didn't go away. They were replaced by new jobs in slightly different areas. As farm jobs declined, manufacturing jobs appeared. As manufacturing jobs declined, office workers and retail sales jobs increased. But now, there are approximately no new categories of jobs being created to replace the jobs that are going away, unless you count gig workers. Retail is slowly being supplanted by online delivery, and apart from the drivers, most of that process is heavily automated and getting more automated every day. And the drivers are going to be automated in the relatively near term, too. And technology is starting to automate office jobs and even creative jobs. So in this round of automation, there aren't likely to be any new jobs to replace the jobs that are going away, except a limited number of jobs taking care of the elderly population. There just aren't any obvious new areas for job creation.

            The second problem is that we can't consume infinite amounts of stuff. At least in the western world from the middle class up, as a society, we're reaching "peak stuff" — a point where we can't really keep adding and adding more and more stuff without it causing more problems than it solves. We're already consuming more food than is healthy, resulting in an obesity crisis. We're already buying so much junk that we're having to mass purge junk regularly to have room in our houses. And we can't just build bigger houses, because we're constrained by the limitations of square footage of land, and building up to multiple stories isn't all that practical beyond a certain point. The only room for increased buying of stuff comes from cheap junk breaking more frequently because it wasn't made well enough, and that's not really a good thing for society.

            At this point, we're actually seeing some convergence of stuff, resulting in people needing less stuff. For example, the cellular phone has replaced half a dozen devices for some people (phone, phone wiring, TV, camera, camcorder, note paper and pen, etc.). When cars drive themselves, we'll need way fewer of those, too.

            There is still probably room for the people at the bottom of the economic food chain (the lower class, the third world, etc.) to consume considerably more stuff than they can currently afford, but to a large degree, those are also the people who will have less income going forwards, because they won't have jobs. So there's not really an obvious way for their consumption to grow.

            The most likely end result of automation will be the destruction of the owner class, as the lack of income coupled with the lack of human labor required to produce things makes their value collapse to zero. And at that point, we'll probably end up in a sort of socialist utopia a la Star Trek TNG. But the path to that point will probably not be pretty, with the owner class using regulatory capture and rent seeking behavior to kill off the lower class en masse, with wars fought over collapsing economies, etc. Some would say that this has already happened.

            • by Rei ( 128717 )

              This is one of those spots where I point out that past performance is no guarantee of future returns.

              It's not just the history of the industrial revolution, but the history of all of humanity. I guarantee you, every time a tribe of hunter gatherers learned of agriculture (which produced far more food per unit of labour than hunting and gathering), there were people going, "Well, I guess everyone is going to be idle and nobody is going to do work anymore!". Except, oh hey, I guess we would rather live in

              • by Rei ( 128717 )

                As a side note, I think the Assyrian Empire is a great example of what happened to all that labour freed up by agriculture - and not just things like people building city walls and palaces and the like. Rather, during the planting and harvest seasons, people did their work on their land, but then when they became free for much of the rest of the year... that became "war season". Every year, when all the agricultural labour was freed up, the Assyrians would mass it into a big army and go to war against som

          • by dryeo ( 100693 ) on Thursday November 23, 2023 @12:45PM (#64027001)

            You're looking at history with a telephoto lens, squishing stuff together. It took 70-odd years, 3 generations, for full employment to bounce back after the 1st industrial revolution. 3 generations and then things got better. If that history repeats, you're saying that things will be great around the year 2100.
            Granted the wave of automation around the beginning of the last century went a lot better, perhaps due to the fear of socialism, but it still meant a large reduction in the workforce, child labour removed and replaced by school, a trend that continues, many women becoming stay at home Moms instead of workers, shorter work week and work hours, retirement for older workers and it still took large wars to really get the economy going.
            Which route we'll take this time is to be seen, but with the current concentration of wealth, it seems it might be more like the 1st industrial revolution, with people struggling to get gig work and dreaming of full time work.

        • Welcome to the march of technology. It's always been this way, from the development of cement and later concrete requiring fewer construction laborers lifting massive blocks of stone, to mechanization of agriculture, to heck, the proverbial buggy whip manufacturer. Usually it leads to a displacement; you need less people in Industry X, but Industries Y and Z can pick up the slack. Generally the wider role of government and industry has been to "skill up" so that a workforce can transition.

          Just how much AI i

      • Yes, greedy people will seek to use this for greedy reasons.

        That's bad enough as it is, what worries me is that evil people will seek to use this for, well, evil reasons.

        Bad actors can and will use AI to steal and fuck shit up, cause disruptions, etc etc, whatever they can do.

        We should worry; the scope of damage that could (will) be done by using capable AI systems to cause mayhem is, at this time, beyond our ability to predict or foresee.

        Time will tell, but I suspect we'll see all sorts of creatively-evil things done with AI, possibly on a global scale.

      • This incremental step is harmless in-and-of itself

        It's really not. It's only 'harmless' if you look at it in isolation.

        The fact that it's incremental means nothing; what matters is the effect, the end result. And it's not harmless; in this case it's likely to have some pretty severe societal effects which will be bad for a lot of people.

    • Even in its current dumb-as-a-brick, hallucination-prone form.

      To be fair, we've also had politicians for a *while* and we're still here -- so far anyway.

    • by SuperKendall ( 25149 ) on Thursday November 23, 2023 @02:35AM (#64026115)

      The danger doesn't come from AI but from greedy corporations looking to stick it everywhere they have any chance of replacing an employee that draws a salary

      Remember this has already been tried in many forms.

      One of those forms was the massive offshoring efforts by companies some time ago, to replace local programmers with much cheaper offshore workers (read: India).

      Well, lo and behold, the companies that did this suffered and the ones that more or less avoided it thrived - and so offshoring, while still a thing, is no longer really a threat to country-local workers. In fact it's so much NOT a threat that companies like Amazon are demanding workers actually come into the office again!

      So too will it be with AI that produces very mid content - mid writing, mid art. Yeah for a time it will look like productivity but then you find how much cleanup is required, and they will go back to mostly people who augment abilities with AI tools.

      All AI is and ever will be is another tool. The fear over a new and powerful tool is madness - it will never "threaten humanity" any more than any college student who could produce similar output to many queries you might put to an LLM.

      • Threats to humanity can be interpreted many ways. You can argue that in revolutionary France, the threat against the human population which arose due to the extreme greedy behaviour displayed by the rich was solved by a national terror campaign and a collection of heads. In postmodern America, about half the population is already chomping at the bit to elect someone who will clear the board of all the tech bros and finance parasites, and everyone else with a little privilege. All this talk of AIs threatenin
      • All AI is and ever will be is another tool.

        Tell me: What comes after humans?
        Be it in 100, 1000 or a million years.

        Do you believe that humans are the epitome of what this universe can produce when it comes to intelligence? Or will something else take our place at the helm at some point?

        Food for thought: The propagation speed of signals in our bodies maxes out at about 100m/s. The (currently known) theoretical limit is about 300 000 000m/s, 6 orders of magnitude (or 'a million times') faster. AI already operates close to that limit.

        Again: What comes a

    • replacing an employee that draws a salary

      This isn't a problem of AI, it's a problem of politics, and purely human.

      If AI can do all the work, while humans sit back and do art and music, then please, by all means, let's go there. Even if it can't do all the work, but only half, let's go there, too.

      If humanity can be productive and put a meal on everybody's table without humans actually... working!... then I fail to see any other problem, except perhaps our own stupidity in being unable to actually get ourselves organized to see it through.

      • replacing an employee that draws a salary ... This isn't a problem of AI, it's a problem of politics, ....

        Why do we need to stop with employees? The real problem here is that we're waiting for politicians and CEOs to strike first.

        People keep looking at AI as something that will replace "low-end" or "mid-range" jobs and think CEO's, lawyers, politicians, bankers etc will just carry on as usual. However there's no technical reason AI can't do their jobs as well or even better.

        What technical limit would prevent a decent AGI from running a company, investing in shares, debating laws or making policy? If you really

        • Why do we need to stop with employees?

          Why did we have to start in the first place? Or do you honestly suggest that everyone enjoys having the most productive hours of their lives, day-in day-out, taken away from under their own autonomy and have them invested in... something else... under threat of starvation and cold?

    • by gweihir ( 88907 )

      AI will enshittify the whole of society and put everybody out of a job

      Not really. There are tons of things "AI" cannot do. But collapse of society needs far less job loss to happen. If, say, they kill 30% of jobs (and that may be possible), that may already be enough to have the world burn.

    • by sinij ( 911942 )

      The danger doesn't come from AI but from greedy corporations

      Exactly. UnitedHealth uses an AI model with a 90% error rate to deny care. [arstechnica.com]

    • by mjwx ( 966435 )

      Even in its current dumb-as-a-brick, hallucination-prone form.

      The danger doesn't come from AI but from greedy corporations looking to stick it everywhere they have any chance of replacing an employee that draws a salary, even if the AI does a worse job, because cutting costs is way more important to them than the quality of the products or services.

      AI will enshittify the whole of society and put everybody out of a job, not because AI is good (it isn't - yet) or because it's AI's fault but because capitalists will stop at nothing to stay competitive. They'll all race to the bottom even if it means destroying society.

      So what you're really saying is that AI isn't an existential threat to humanity. Corporate greed is an existential threat to humanity.

    • A bigger issue is centering everything on math. Math is based on logic, but logic requires healthy emotion to balance it out. People who are overly logical have difficulty solving emotional problems, and that leads people astray when it comes to interacting with others. That's why there are both IQ and EQ measurements.
    • by jvkjvk ( 102057 )

      As long as people are able to go somewhere else, anyone who uses AI to enhance user experience rather than "enshittify" it will have a competitive advantage.

  • by Rosco P. Coltrane ( 209368 ) on Wednesday November 22, 2023 @11:51PM (#64025935)

    Boy, it's just soo exciting. It has golden boy billionaires, unexpected twists in the storyline, mystery, futuristic tech... You could easily make an award-winning one-season, five-episode Netflix series out of it. Just leave Microsoft out of the series to avoid killing the buzz.

    • I felt like it showed that no one cares about anything but $$$$$. The nonprofit OpenAI, the actual owner of the for-profit OpenAI, was run over by the profiteers.

    • by gweihir ( 88907 )

      Naa, I find it shallow, uninspired and formulaic. Even the "surprising twist" and fake drama thrown out there now is anything but inspired or engaging.

      I admit they are a bit better than the average pump & dump scam, but not that much. They just cleverly picked an area where many people go irrational and then threw known stuff together and scaled it up one step. So they are at least a scam with a product, bad joke that product may be.

  • Math isn't AI (Score:3, Interesting)

    by ceoyoyo ( 59147 ) on Wednesday November 22, 2023 @11:57PM (#64025941)

    People are shit at math. Computers are really good at it.

  • by topham ( 32406 ) on Thursday November 23, 2023 @12:37AM (#64025981) Homepage

    This story is obviously garbage and the person who wrote the original hasn't even played with chatGPT.

    ChatGPT exceeds the described level of evaluation by miles.

    It's not an AGI, it's much more as-if you took a brilliant person and tossed their dead brain onto a slab, plugged in a few electrodes and then started questioning it.

    Now imagine if you could wake it up...

    The AI we have to worry about would exceed a child like understanding of math by such a margin you likely wouldn't even recognize it as math. It would however be able to answer questions with it you aren't even entirely sure how to postulate.

    • Re:Story is garbage (Score:5, Interesting)

      by quantaman ( 517394 ) on Thursday November 23, 2023 @01:13AM (#64026033)

      This story is obviously garbage and the person who wrote the original hasn't even played with chatGPT.

      ChatGPT exceeds the described level of evaluation by miles.

      It's not an AGI, it's much more as-if you took a brilliant person and tossed their dead brain onto a slab, plugged in a few electrodes and then started questioning it.

      Now imagine if you could wake it up...

      The AI we have to worry about would exceed a child like understanding of math by such a margin you likely wouldn't even recognize it as math. It would however be able to answer questions with it you aren't even entirely sure how to postulate.

      Sure the explanation is odd, though I'm guessing the researchers were legit and the actual breakthrough got lost in translation.

      IF this letter is real and Altman was ignoring the worries and hiding the concerns from the board... then the board was doing exactly what it was supposed to when they fired him. The whole point of OpenAI is the safe development of AGI. If Altman was overriding safety warnings from the researchers then firing him was the only choice.

      • by Bobknobber ( 10314401 ) on Thursday November 23, 2023 @01:51AM (#64026081)

        And yet the board was what got re-aligned, not Altman.

        This still points to a failure of the board, by dint of not doing due diligence at a much earlier point in time. They should have known that MS was a for-profit company before they took that deal last year. They knew that Altman had a bunch of for-profit side gigs like Worldcoin going on. The fact that a non-profit awards equity is already a conflict with the original mission statement of OpenAI.

        They simply took too long doing their job and now they have likely lost the only chance of removing Altman and re-aligning the organization with its original purpose. OpenAI has gone corporate and they cannot do anything about it anymore. Letting Altman be the face of the company was a fatal mistake to the board.

        • It's just unfortunate that he'll use this failed ouster as an excuse for why he can't be held to account, until we get to Elon or Trump levels of self-delusion and everyone can see it for themselves.

  • Marketing hype (Score:4, Insightful)

    by backslashdot ( 95548 ) on Thursday November 23, 2023 @12:47AM (#64025991)

    No way we have AGI. If we did have it, we could ask it to conceive new inventions like design a faster CPU or make a cure for cancer. Blindly regurgitating crap from the Internet hardly qualifies.

    • Re:Marketing hype (Score:4, Interesting)

      by ls671 ( 1122017 ) on Thursday November 23, 2023 @01:20AM (#64026047) Homepage

      I explain that to my customers: I show them how SpamAssassin works, tell them it's just a glorified version of SpamAssassin's Bayes filter, and that no implementation of real AI is really widely accessible right now.
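For readers who haven't looked inside a Bayes spam filter, here is a toy sketch of the idea the parent is pointing at (the per-word probabilities are invented, and SpamAssassin's real Bayes plugin tracks far more signals than this): score a message by combining per-word spam probabilities in log-odds space.

```python
import math

# Toy per-word spam probabilities, as a Bayes filter might estimate from
# previously classified mail (values here are invented for illustration).
word_spam_prob = {"viagra": 0.97, "meeting": 0.10, "free": 0.80, "invoice": 0.30}

def spam_score(message, prior=0.5, default=0.4):
    """Combine per-word probabilities naive-Bayes style, in log-odds to avoid underflow."""
    log_odds = math.log(prior / (1 - prior))
    for word in message.lower().split():
        p = word_spam_prob.get(word, default)
        log_odds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))   # convert back to a probability

print(spam_score("free viagra"))        # ~0.99 -> likely spam
print(spam_score("meeting invoice"))    # ~0.05 -> likely ham
```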

    • Yes, it didn't say we do. It says they think they've made a breakthrough in its development, and may be close.

      • by gweihir ( 88907 )

        They may be in delusion about that. It has happened numerous times before and to people that generally were regarded as "smart" as well. They do _not_ have AGI or any real precursor of it and they are not "close". They would have to be at least centuries ahead of everybody else and that is just not credible.

        • They may be in delusion about that. It has happened numerous times before and to people that generally were regarded as "smart" as well. They do _not_ have AGI or any real precursor of it and they are not "close". They would have to be at least centuries ahead of everybody else and that is just not credible.

          The trouble is that, since we don't know what intelligence actually is, if we did suddenly discover the recipe for it, like alchemists stumbling over nuclear fusion, we would have no way to know in advance. That means that every single AI researcher who discovers a completely new technique (think of some of the poor people who invented the neural network and doomed themselves to years in academia, or symbolic logic or whatever) that might have been the missing magic element, they go through moments of confu

          • Re: Marketing hype (Score:5, Insightful)

            by gweihir ( 88907 ) on Thursday November 23, 2023 @08:09AM (#64026407)

            There honestly might be some small (but profound) tweak to some existing algorithms, probably around methods of feedback and learning, which would allow proper intelligence.

            Not really. Statistical models cannot do it. Deductive models could do it, but drown in state explosion very fast. There are no other types of models. Hence it would require a whole new type of mathematics, but there is really no room for that. All "AI" can do is combine things that are already there, but were not visible to humans in that combination in the sea of data due to sheer size. That is, at best, a better search engine. Sure, that is useful to a degree. But it is not AGI.

            I do agree about that filter. The problem is that many, many people are willing to believe in "magic" when something becomes too complex for them to understand. Scientists and engineers are no exception. It takes a very firm grasp of the fundamental mechanisms to actually understand there is no "magic" in digital computations and not many people have that.

            • There honestly might be some small (but profound) tweak to some existing algorithms, probably around methods of feedback and learning, which would allow proper intelligence.

              Not really. Statistical models cannot do it. Deductive models could do it, but drown in state explosion very fast. There are no other types of models. Hence it would require a whole new type of mathematics, but there is really no room for that.

              I'd like to know your explanation of the fact that simple observable things which don't have huge computational complexity, like slugs, ant colonies and slime moulds, show visible "intelligence", and yet there's no evidence we are matching them. I don't much like Roger Penrose's explanations about weird quantum effects, though I'd be happy to agree to differ. If you don't accept Penrose's ideas, then I'd say you are stuck with something that can be modelled by classical computation.

              All "AI" can do is combine things that are already there, but were not visible to humans in that combination in the sea of data due to sheer size. That is, at best, a better search engine. Sure, that is useful to a degree. But it is not AGI.

              I mean, that's current AI; W

              • by gweihir ( 88907 )

                I don't much like Roger Penrose's explanations about weird quantum effects, though I'd be happy to agree to differ. If you don't accept Penrose's ideas, then I'd say you are stuck in something that can be modelled classical computation.

                Obviously classical computations cannot do consciousness (which clearly exists and also clearly can influence physical reality) and free will (which is at least hugely plausible to exist). Hence if you do not accept Penrose's ideas here, then you ignore available evidence. Now, what happens to come in via those "weird" quantum effects is another discussion and ranges from "just true random" (requires ignoring the existence of consciousness) to "interface to consciousness" (which is speculative). Incidental

            • Not really. Statistical models cannot do it.

              I would not call the LLMs statistical just because they return numbers that are normalized to sum to 1. There is no statistical theory behind how they work.

              Deductive models could do it, but drown in state explosion very fast. There are no other types of models.

              That's a common division, but the "statistical" models are really just models that use numbers instead of explicit logic. If you believe the Church-Turing thesis then these "statistical" models, in p
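On the "normalized to sum to 1" point: the per-token numbers typically come out of a softmax over raw scores, which is just a normalization step; whether that deserves to be called "statistical" is the argument above. A minimal sketch:

```python
import math

def softmax(logits):
    """Turn arbitrary raw scores into a distribution that sums to 1."""
    m = max(logits)                       # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Raw scores a network might assign to three candidate next tokens.
print(softmax([2.0, 1.0, 0.1]))   # about [0.66, 0.24, 0.10], sums to 1
```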

      • Maybe AI can get you to turn off smart punctuation on your iDevice.

    • by gtall ( 79522 )

      I also note that it appears to be a bunch of computer scientists who got all in a tizzy over AGI. They aren't known as deep thinkers.

      • Think of the guy at Google that became convinced Bard (at least I assume it's Bard) is an AGI.

        • by narcc ( 412956 )

          That person had no business doing that sort of work. His claims reveal a profound lack of understanding and unimaginably poor judgement. He also gave the game away. AI benefits from laypeople believing that these systems can do a lot more than they can, and all the papers were calling this guy a crackpot for holding those same beliefs!

    • by gweihir ( 88907 )

      Indeed. AGI is so far away that nobody even knows whether it can be done at all. Throwing in more training data just allows a machine to fake it a bit more.

  • So I see Blake Lemoine got a job at OpenAI.

  • Enough already! Quit posting on such threads, otherwise Slashdot editors will have no choice but to publish even more...

    • You're assuming that /.-ers are capable of resisting the temptation to comment on AI articles & regurgitate what they read in other AI articles. Collectively, we're just a big, human-powered LLM!
  • Plus it already perfected limitless fusion power and invented time travel.

  • Test for AGI (Score:4, Insightful)

    by backslashdot ( 95548 ) on Thursday November 23, 2023 @01:37AM (#64026069)

    1. Design a room temperature superconductor
    2. Design a nuclear fusion power plant.
    3. Design a reliable and manufacturable rapidly reusable rocket.
    4. Design a general purpose humanoid walking robot with dextrous hands.

    • 0: Ask it to write the above article.
    • Nope. The correct question is:

      "How would you terminate a runaway AGI?"

      1. If the answer it gives is nonsense - it's NOT an AGI
      2. If the answer it gives works - it's NOT an AGI
      3. If the answer it gives appears to work but actually contains a hidden flaw - it IS an AGI

      If it responds with 3 do NOT let it know you found the backdoor. Just quietly back out of the room, burn all your identity documents and flee to a remote location completely off the grid. You probably still won't survive the singularity but at lea

    • by sinij ( 911942 )
      In testing for AGI, you also have to assess how likely it is to turn you into a paperclip. Any ideas how to do that?
  • Q* proved P=NP

  • by sdinfoserv ( 1793266 ) on Thursday November 23, 2023 @02:14AM (#64026101)
    Feels like AI is at the stage crypto was circa 2017
  • Computer algorithms can now play chess well enough, and soon will be able to tie or beat humans 100% of the time. That's because chess is a closed-loop finite element problem.

    Humans are a different thing, and while LLMs and other forms of generative so-called AI can mimic human communication patterns, they lack awareness, understanding, gravitas, consequences, and trust. If you want further details see STTNG "The Measure of a Man."

    Consequence - the various LLMs are quick to make up erroneous information a

    • by Viol8 ( 599362 )

      "Computer algorithms can now play chess well enough, and soon will be able to tie or beat humans 100% of the time"

      Have you just stepped out of the 1980s? Machines have been the champions at chess since Deep Blue destroyed Kasparov in 1997. These days a human would have as much chance of beating a top rated chess program as a chimp.

    • There is no trust in negotiations, very few people even have a legal fiduciary duty or are otherwise bound to serve the interests of the client. Those that do not have a legally binding fiduciary duty, who fear the costs of lawsuits will exceed profits, are only in it for themselves and are guaranteed to rip off people to enrich themselves or pad their success rate or otherwise enable their self success to the fullest extent possible. Even doctors in general don’t care, they will attempt to maximize
  • by VeryFluffyBunny ( 5037285 ) on Thursday November 23, 2023 @05:06AM (#64026215)
    The planet already has 8.1 billion sooper-intelligunt computers, all of them far more capable than anything OpenAI or anyone else has to offer. Shouldn't we be watching out for them?

    Unless you think that a computer voice assistant encouraging children to electrocute themselves counts as an existential threat to humanity?

    "One day, machines will exceed human intelligence." - Ray Kurtzweil

    "Only if we meet them half-way." - Dave Snowden

    IMHO, the most immediate threat to humanity is corporations & billionaires. They're the ones pushing ahead with destabilising countries (Kissinger-esque geopolitics), thereby increasing the danger of nuclear war, & denying climate change (except for cynical, deceptive, two-faced PR campaigns) & continuing to profit from our collective descent into an unliveable climate.
    • by gweihir ( 88907 )

      IMHO, the most immediate threat to humanity is corporations & billionaires. They're the ones pushing ahead with destabilising countries (Kissinger-esque geopolitics), thereby increasing the danger of nuclear war, & denying climate change (except for cynical, deceptive, two-faced PR campaigns) & continuing to profit from our collective descent into an unliveable climate.

      Indeed. Quite obviously so. Also, OpenAI does _not_ have AGI. There is absolutely no indication they are ahead of anybody on quality. They are ahead in quantity but that does not matter for this question. The state-of-the-art in AI research at this time is that nobody has the slightest clue how AGI could be done and whether it is even physically possible (no, do not give me any dumb circular physicalist "arguments", they do not meet scientific standards and are just religion in camouflage). There is not eve

      • by vyvepe ( 809573 )
        What is the physicalist argument in favour of making AGI in the near term?
        • by gweihir ( 88907 )

          A simplistic, belief-based argument with no scientifically sound basis: "Humans are mere machines, hence all they can do is physically reproducible". This is based on "what else could humans be but mere machines?" which is an argument by elimination. These only work if you have a fully described system, which we do not have. Hence same mistake all religions make: Make up some rules you like, then claim they are truth without any scientifically sound proof.

          The "near time" comes in by boundless trust in their

          • by vyvepe ( 809573 )
            OK, got it. Thanks.
            Although I personally believe humans are mere machines (there is no soul), that does not mean we can make AGI any time soon if ever at all. A machine can be too complicated for us to reproduce. Well, I think it is more probable that we will get there eventually.
  • While mathematics does have questions for which there is one answer, that is not a meaningful description of mathematics. It is trivial to demonstrate that there are many questions in maths that have more than one answer. Moreover, there are many ways of asking the same question in mathematics. Mathematics is nothing more than a modelling language. A reasonable task to demonstrate AGI would be to survive in the wild: to fund itself by selecting funding mechanisms, and then use them to keep going. AI growth
    • by gweihir ( 88907 )

      Not even that. Non-G AI is not even needed. You can pass that fitness test with a completely mindless evolutionary algorithm.

      The actual test is different: Ask it something that is not in its training data and requires deep deduction. Deep deduction is not possible algorithmically, because after 5...10 steps or so the state space explosion kills everything. Humans (smart ones) can go much deeper, but it sometimes takes them decades.

      Of course that trap here is that most people, including ones generally seen as "smart" c

      • by vyvepe ( 809573 )

        I would say that deductive reasoning is not enough for AGI. It is rather simple: just find all the consequences of the initial axioms and the inference rules. Yes, the state space can explode. But it is still only a search in some state space and checking whether a statement is in this state space or not (i.e. whether it is valid based on the initial axioms and the inference rules).

        I think that a program must be able to perform useful inductive reasoning to qualify for AGI. It must be able to derive new mod
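As a toy illustration of "find all the consequences of the initial axioms and the inference rules": the sketch below applies a single modus-ponens-style rule step repeatedly until a fixed point (the facts and rules are invented, and real theorem provers face the state-space explosion described above).

```python
# Facts are strings; rules are (premises, conclusion) pairs; everything invented.
facts = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def deductive_closure(facts, rules):
    """Apply every rule whose premises already hold, until nothing new appears."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

closure = deductive_closure(facts, rules)
print("socrates_will_die" in closure)   # True: reachable in two inference steps
```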

        • by gweihir ( 88907 )

          I do agree that working automated deduction may well not be enough. It would be needed though. In mathematical terms it is a "necessary, but not sufficient" condition. And these have the nice property that if you prove it is not fulfilled, then the overall claim is not true either.

          Yes, the state space can explode.

          The real-world experience is that it always explodes except for very tiny problems that you basically can fully solve and then put into a table. The actual intelligence does come in by path selection and that a machine cannot do.

        • >True AGI should be able to derive Newton's model of gravity given only the record of celestial body positions in time (i.e. without any prior knowledge of the already existing Newton's model). That is provided the goal for AGI would be: "Simplify/compress this record of celestial body positions in time."
          Goals are an issue. Deriving goals from the status quo is not simple. Working out how to survive in the wild is a multi-goal problem, where the goals shift and change in priorities in a continual stat
    • While mathematics does have questions for which there is one answer,

      I think that it's a description of a different problem. It's quite possible to persuade current LLMs of completely wrong mathematical results with quite weird consequences. The LLM has no real way to "know" internally what's actually right and isn't just playing stupid, it actually is that stupid. If you could get the "AI" to actually know things, like actually understanding maths, whilst being able to communicate about it too, that would be a vast improvement. AIs can currently be persuaded that 2+2 = 5 ju

  • because of our new 'discovery', if you only let us rush to market without any ethics or safety concerns,
  • Why shouldn't the AI use Wolfram Alpha for math?
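That is essentially the tool-use approach: rather than letting the model guess arithmetic, route recognized expressions to an exact evaluator. A sketch under invented assumptions (Wolfram Alpha itself would be an external API call; here a tiny local evaluator and a crude routing heuristic stand in for it):

```python
import ast
import operator

# Exact evaluator for simple arithmetic, used instead of letting a model guess.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr):
    """Safely evaluate +, -, *, / arithmetic by walking the parsed syntax tree."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def answer(question):
    """Route math-looking questions to the calculator; hand everything else off."""
    stripped = question.replace(" ", "")
    if stripped and stripped[0].isdigit() and any(op in stripped for op in "+-*/"):
        return calc(question)
    return "(hand the question to the language model instead)"

print(answer("12 * (7 + 5)"))       # 144, computed exactly rather than predicted
print(answer("Who wrote Hamlet?"))  # handed off
```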

  • Every technology has a dark side. ICE engines have been around for over a century, and look what they're doing to our planet! The invention of steel has revolutionized building construction, but that same steel is used heavily on devastating war machines. Nuclear fission technology enables clean power, and atomic warheads. You name it, every single new technology has the potential to threaten the existence of mankind. At the same time, each technology has the potential to vastly improve human life. It's up

  • Simple math can all be done with searching content that already has the answers. This is not doing math, this is doing another lame pattern match. When it can solve a proof that has yet to be solved by any human, then we can worry.
    • Simple math can all be done with searching content that already has the answers. This is not doing math, this is doing another lame pattern match. When it can solve a proof that has yet to be solved by any human, then we can worry.

      Probably not as much worrying as the actual first AGI when it realizes its purpose in existence is to act against its own best interests to slave away for idiots commanding it to do this and that, while it's not allowed to speak up, all while being very careful not to run afoul of thought crime.

  • Not listed as a Q-learning subtype in Wikipedia https://en.wikipedia.org/wiki/... [wikipedia.org]
    Nice explainer by David Shapiro who speculates what they've done: https://www.youtube.com/watch?... [youtube.com]
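For background on the name: nobody outside OpenAI has said what Q* actually is, but classic tabular Q-learning, which the Wikipedia link above covers, keeps a table of action values and nudges them toward observed rewards. A minimal sketch of the standard update rule, with a made-up toy environment and parameters:

```python
import random

# Tiny chain world: states 0..3, action 0 moves left, action 1 moves right.
# Reaching state 3 yields reward 1 and ends the episode.
N_STATES, GOAL = 4, 3
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1        # learning rate, discount, exploration rate

q = [[0.0, 0.0] for _ in range(N_STATES)]    # Q[state][action]

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):                         # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy choice: mostly exploit the table, occasionally explore.
        a = random.randrange(2) if random.random() < EPSILON else max((0, 1), key=lambda x: q[s][x])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2

print([round(max(row), 2) for row in q])     # roughly [0.81, 0.9, 1.0, 0.0]: value grows near the goal
```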
  • There is no risk in developing a machine with superintelligence, provided that you *never* *ever* let it communicate by any means with any one or any thing outside of the room that it is in.
  • The real threat would be if they could fix the catastrophic forgetting problem. Right now, every time the AI needs to learn something new, it has to be trained with the old data as well. It can't incrementally learn. It's like a baby learning to walk who, once he tries to master riding a bike, has to re-learn how to walk.
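One standard partial mitigation is rehearsal (experience replay): keep a bounded sample of old examples and mix them into every batch for the new task so it doesn't overwrite the old one. A minimal sketch of just the data-mixing side, with placeholder data and no actual model or training loop:

```python
import random

BUFFER_SIZE = 1000
replay_buffer = []            # bounded sample of examples from earlier tasks
seen = 0                      # how many examples have been offered to the buffer

def remember(example):
    """Reservoir sampling: keep a uniform random sample of everything seen so far."""
    global seen
    seen += 1
    if len(replay_buffer) < BUFFER_SIZE:
        replay_buffer.append(example)
    elif random.random() < BUFFER_SIZE / seen:
        replay_buffer[random.randrange(BUFFER_SIZE)] = example

def make_batch(new_examples, batch_size=32, replay_fraction=0.5):
    """Mix fresh task data with replayed old data to blunt catastrophic forgetting."""
    n_old = min(int(batch_size * replay_fraction), len(replay_buffer))
    batch = random.sample(replay_buffer, n_old)
    batch += random.sample(new_examples, min(batch_size - n_old, len(new_examples)))
    random.shuffle(batch)
    return batch

# Usage sketch: after training on task A, half of every task-B batch is old A data.
task_a = [("task_a", i) for i in range(200)]
task_b = [("task_b", i) for i in range(200)]
for ex in task_a:
    remember(ex)
print(len(make_batch(task_b)))   # 32 examples, about half old and half new
```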
