OpenAI Researchers Warned Board of AI Breakthrough Ahead of CEO Ouster (reuters.com)
An anonymous reader quotes a report from Reuters: Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters. The previously unreported letter and AI algorithm were a key development ahead of the board's ouster of Altman, the poster child of generative AI, the two sources said. Before his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft in solidarity with their fired leader. The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman's firing. Reuters was unable to review a copy of the letter.
According to one of the sources, long-time executive Mira Murati mentioned the project, called Q*, to employees on Wednesday and said that a letter was sent to the board prior to this weekend's events. After the story was published, an OpenAI spokesperson said Murati told employees what media were about to report, but she did not comment on the accuracy of the reporting. The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans. Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.
Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math -- where there is only one right answer -- implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe. Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend. In their letter to the board, researchers flagged AI's prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by superintelligent machines, for instance if they might decide that the destruction of humanity was in their interest. Last night, OpenAI announced it reached an agreement for Sam Altman to return as CEO. Under an "agreement in principle," Altman will serve under the supervision of a new board of directors.
AI is already a threat to humanity (Score:5, Insightful)
Even in its current dumb-as-a-brick, hallucination-prone form.
The danger doesn't come from AI but from greedy corporations looking to stick it everywhere they have any chance of replacing an employee that draws a salary, even if the AI does a worse job, because cutting costs is way more important to them than the quality of the products or services.
AI will enshittify the whole of society and put everybody out of a job, not because AI is good (it isn't - yet) or because it's AI's fault but because capitalists will stop at nothing to stay competitive. They'll all race to the bottom even if it means destroying society.
Re: (Score:2, Interesting)
Yes, greedy people will seek to use this for greedy reasons. That's no reason to halt development, however. Though I don't see a line in your post saying it should.
Anyway, every great technological breakthrough had greed motivating it, this one is no different.
I also wanted to say that these people did not invent super intelligence. They invented something that doesn't even remotely qualify, but (with a little imagination added) might maybe someday lead to it. And they got all freaked-out over that. Th
Re: (Score:3, Interesting)
This incremental step is harmless in-and-of itself, and advances our knowledge of something important. Their fear is not justified.
Tell that to the millions of truck drivers, taxi drivers, secretaries, accountants, translators, school teachers, low-level programmers, writers, artists who will lose their jobs, and to the millions of people whose jobs can't be replaced by AI, like carpenter, plumber or electrician, but will lose a ton of business because all their customers are jobless and on the dole, and unable to pay them.
I'm sure they'll all find this incremental step was really harmless.
Re: AI is already a threat to humanity (Score:5, Insightful)
This is a problem of politics, not of AI.
The argument for people going to work, generally, is "because there's no free lunch", but when AI does all the work, then that's the very definition of a "free lunch". The failure to let everyone join the table isn't AI's; it's ours and ours alone.
In other words: coupling the ability to survive to a job, even if demonstrably that job, and the productivity it entails, doesn't need a human, is sick as fuck and not an AI related problem.
Re: AI is already a threat to humanity (Score:2)
Re: AI is already a threat to humanity (Score:3)
I don't think it's a "what comes first" question: making one, or the other, isn't a singular event. You can't "make the welfare state" at a moment's notice, with a finger snap. It takes time. And during all that time, the resources needed to actually pull it off are... well, not yet available, if we're talking about what AI will bring, but is not yet allowed to because "you need to make the welfare state first."
The only solution is to make them both at the same time, hand in hand, gradually.
However, ar any
Re:AI is already a threat to humanity (Score:4, Insightful)
"... because their customers are jobless and on the dole"
Yeah, that's not how this works.
Look at the industrial revolution, for example. Machines could do the work of hundreds or even thousands of manual labourers. So did unemployment jump to over 99%? Not even remotely. The more efficient production of goods and services led to correspondingly more consumption of goods and services. The whole economy expanded, and the "unemployment" created by jobs lost to machines translated in real time to new jobs in the new economy.
I'd also add that contrary to popular perception, the industrial revolution was unambiguously good for quality of life by essentially every metric. Hours worked per week, which had been steadily rising, dropped once machines became dominant. Mean wages soared in purchasing-power terms. Poverty plunged; the share of people unable to afford food, clothing and shelter plunged. Employment surged in fields like research, medicine, even the arts, as the percentage of resources required to focus on simply keeping the population alive and functional up through breeding age dwindled.
Increasing the efficiency of production is good. If AI can increase the efficiency of production, that's a good thing. Again, it's not going to lead to mass unemployment, because people just consume more until you're back up to maximal employment. Only once AI can do everything better than humans do you get to a high-unemployment situation.
I know the response to this is, "Well, what about all the individual tragedies of people losing jobs they loved to AI?" And to that I'd say, yeah, that was the story of the industrial revolution too. Weavers are the classic case: they spent their whole lives developing their trade, only to see their bosses teach machines to copy their patterns and make garments (which they criticized as being "of inferior quality"). They were literally burning down factories and attempting assassinations, they were so mad over this. But do YOU wish that clothes today were still all hand-woven and hand-sewn, and that it took 12 hours of work to make a pair of pants, meaning that if you want to maintain a median wage of ~$20/hr then pants cost $240 in labour alone, before taxes, raw materials, and profit? And of course raw materials themselves would be much more expensive without automation.
Again: efficiency is good. Do hand-made things as an art or leisure, sure, absolutely; some people will spend their surplus income on that specifically for the human connection. But for bulk consumed products and services, efficiency is critical for a high societal quality of life.
Anyway, back to the OpenAI topic: for anyone trying to follow this saga, I'd STRONGLY recommend this article [substack.com] about what appears to have gone on behind the scenes, and what's likely going forward.
Re:AI is already a threat to humanity (Score:5, Insightful)
Increasing the efficiency of production is good. If AI can increase the efficiency of production, that's a good thing. Again, it's not going to lead to mass unemployment, because people just consume more until you're back up to maximal employment.
This is one of those spots where I point out that past performance is no guarantee of future returns.
There are two problems with your theory. The first is that in previous rounds of automation, the jobs didn't go away. They were replaced by new jobs in slightly different areas. As farm jobs declined, manufacturing jobs appeared. As manufacturing jobs declined, office workers and retail sales jobs increased. But now, there are approximately no new categories of jobs being created to replace the jobs that are going away, unless you count gig workers. Retail is slowly being supplanted by online delivery, and apart from the drivers, most of that process is heavily automated and getting more automated every day. And the drivers are going to be automated in the relatively near term, too. And technology is starting to automate office jobs and even creative jobs. So in this round of automation, there aren't likely to be any new jobs to replace the jobs that are going away, except a limited number of jobs taking care of the elderly population. There just aren't any obvious new areas for job creation.
The second problem is that we can't consume infinite amounts of stuff. At least in the western world from the middle class up, as a society, we're reaching "peak stuff" — a point where we can't really keep adding and adding more and more stuff without it causing more problems than it solves. We're already consuming more food than is healthy, resulting in an obesity crisis. We're already buying so much junk that we're having to mass purge junk regularly to have room in our houses. And we can't just build bigger houses, because we're constrained by the limitations of square footage of land, and building up to multiple stories isn't all that practical beyond a certain point. The only room for increased buying of stuff comes from cheap junk breaking more frequently because it wasn't made well enough, and that's not really a good thing for society.
At this point, we're actually seeing some convergence of stuff, resulting in people needing less stuff. For example, the cellular phone has replaced half a dozen devices for some people (phone, phone wiring, TV, camera, camcorder, note paper and pen, etc.). When cars drive themselves, we'll need way fewer of those, too.
There is still probably room for the people at the bottom of the economic food chain (the lower class, the third world, etc.) to consume considerably more stuff than they can currently afford, but to a large degree, those are also the people who will have less income going forwards, because they won't have jobs. So there's not really an obvious way for their consumption to grow.
The most likely end result of automation will be the destruction of the owner class, as the lack of income coupled with the lack of human labor required to produce things makes their value collapse to zero. And at that point, we'll probably end up in a sort of socialist utopia a la Star Trek TNG. But the path to that point will probably not be pretty, with the owner class using regulatory capture and rent seeking behavior to kill off the lower class en masse, with wars fought over collapsing economies, etc. Some would say that this has already happened.
Re: (Score:2)
It's not just the history of the industrial revolution, but the history off all of humanity. I guarantee you, every time a tribe of hunter gatherers learned of agriculture (which produced far more food per unit of labour than hunting and gathering), there were people going, "Well, I guess everyone is going to be idle and nobody is going to do work anymore!". Except, oh hey, I guess we would rather live in
Re: (Score:2)
As a side note, I think the Assyrian Empire is a great example of what happened to all that labour freed up by agriculture - and not just things like people building city walls and palaces and the like. Rather, during the planting and harvest seasons, people did their work on their land, but then when they became free for much of the rest of the year... that became "war season". Every year, when all the agricultural labour was freed up, the Assyrians would mass it into a big army and go to war against som
Re:AI is already a threat to humanity (Score:4, Insightful)
You're looking at history with a telephoto lens, squishing stuff together. It took 70-odd years, three generations, for full employment to bounce back after the first industrial revolution. Three generations, and then things got better. If that history repeats, you're saying that things will be great around the year 2100.
Granted, the wave of automation around the beginning of the last century went a lot better, perhaps due to the fear of socialism, but it still meant a large reduction in the workforce: child labour removed and replaced by school (a trend that continues), many women becoming stay-at-home moms instead of workers, a shorter work week and shorter work hours, retirement for older workers; and it still took large wars to really get the economy going.
Which route we'll take this time remains to be seen, but with the current concentration of wealth, it seems it might be more like the first industrial revolution, with people struggling to get gig work and dreaming of full-time work.
Re: (Score:3)
Welcome to the march of technology. It's always been this way, from the development of cement and later concrete requiring fewer construction labourers lifting massive blocks of stone, to the mechanization of agriculture, to, heck, the proverbial buggy-whip manufacturer. Usually it leads to a displacement; you need fewer people in Industry X, but Industries Y and Z can pick up the slack. Generally the wider role of government and industry has been to "skill up" so that a workforce can transition.
Just how much AI i
Re: (Score:2)
Yes, greedy people will seek to use this for greedy reasons.
That's bad enough as it is, what worries me is that evil people will seek to use this for, well, evil reasons.
Bad actors can and will use AI to steal and fuck shit up, cause disruptions, etc etc, whatever they can do.
We should worry; the scope of damage that could (will) be done by using capable AI systems to cause mayhem is, at this time, beyond our ability to predict or foresee.
Time will tell, but I suspect we'll see all sorts of creatively-evil things done with AI, possibly on a global scale.
Re: (Score:2)
This incremental step is harmless in-and-of itself
It's really not. It's only 'harmless' if you look at it in isolation.
The fact that it's incremental means nothing; what matters is the effect, the end result. And it's not harmless; in this case it's likely to have some pretty severe societal effects which will be bad for a lot of people.
Re: (Score:2)
Even in its current dumb-as-a-brick, hallucination-prone form.
To be fair, we've also had politicians for a *while* and we're still here -- so far anyway.
Nah, it's been done. Things revert. (Score:4, Insightful)
The danger doesn't come from AI but from greedy corporations looking to stick it everywhere they have any chance of replacing an employee that draws a salary
Remember this has already been tried in many forms.
One of those forms was the massive offshoring efforts by companies some time ago, to replace local programmers with much cheaper offshore workers (read: India).
Well, lo and behold, the companies that did this suffered and the ones that avoided it, more or less, thrived; and so offshoring, while still a thing, is no longer really a threat to country-local workers. In fact it's so much NOT a threat that companies like Amazon are demanding workers actually come into the office again!
So too will it be with AI that produces very mid content: mid writing, mid art. Yeah, for a time it will look like productivity, but then you find out how much cleanup is required, and they will go back to mostly people who augment their abilities with AI tools.
All AI is and ever will be is another tool. The fear over a new and powerful tool is madness; it will never "threaten humanity" any more than any college student who could produce similar output to many queries you might put to an LLM.
Re: (Score:2)
Re: (Score:2)
All AI is and ever will be is another tool.
Tell me: What comes after humans?
Be it in 100, 1000 or a million years.
Do you believe that humans are the epitome of what this universe can produce when it comes to intelligence? Or will something else take our place at the helm at some point?
Food for thought: The propagation speed of signals in our bodies maxes out at about 100 m/s. The (currently known) theoretical limit is about 300,000,000 m/s, six orders of magnitude (or "a million times") faster. AI already operates close to that limit.
Again: What comes a
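The back-of-the-envelope ratio above is easy to check; a minimal sketch using the figures from the comment (both are rough order-of-magnitude values, not precise constants):

```python
# Signal propagation: biological neurons vs. the physical limit.
# Both figures are the rough values quoted in the comment above.
neuron_speed_m_s = 100.0   # fast myelinated axons, approximate
light_speed_m_s = 3.0e8    # speed of light, approximate

ratio = light_speed_m_s / neuron_speed_m_s
print(f"electronic signals can be ~{ratio:.0e}x faster")
```

Strictly, 3e8 / 1e2 is about three million, so "six orders of magnitude" is the right ballpark.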
Re: (Score:2)
Don't worry, AI will save mankind!
Re: AI is already a threat to humanity (Score:2)
replacing an employee that draws a salary
This isn't a problem of AI, it's a problem of politics, and purely human.
If AI can do all the work while humans sit back and do art and music, then please, by all means, let's go there. Even if it can't do all the work, but only half, let's go there, too.
If humanity can be productive and put a meal on everybody's table without humans actually... working!... then I fail to see any other problem, except perhaps our own stupidity in being unable to actually get ourselves organized to see it through.
Re: (Score:2)
replacing an employee that draws a salary ... This isn't a problem of AI, it's a problem of politics, ....
Why do we need to stop with employees? The real problem here is that we're waiting for politicians and CEOs to strike first.
People keep looking at AI as something that will replace "low-end" or "mid-range" jobs and think CEOs, lawyers, politicians, bankers, etc. will just carry on as usual. However, there's no technical reason AI can't do their jobs as well or even better.
What technical limit would prevent a decent AGI from running a company, investing in shares, debating laws or making policy? If you really
Re: AI is already a threat to humanity (Score:2)
Why do we need to stop with employees?
Why did we have to start in the first place? Or do you honestly suggest that everyone enjoys having the most productive hours of their lives, day in, day out, taken out from under their own autonomy and invested in... something else... under threat of starvation and cold?
Re: (Score:2)
AI will enshittify the whole of society and put everybody out of a job
Not really. There are tons of things "AI" cannot do. But a collapse of society needs far less job loss than that. If, say, they kill 30% of jobs (and that may be possible), that may already be enough to have the world burn.
Re: (Score:2)
The danger doesn't come from AI but from greedy corporations
Exactly. UnitedHealth uses AI model with 90% error rate to deny care. [arstechnica.com]
Re: (Score:2)
Even in its current dumb-as-a-brick, hallucination-prone form.
The danger doesn't come from AI but from greedy corporations looking to stick it everywhere they have any chance of replacing an employee that draws a salary, even if the AI does a worse job, because cutting costs is way more important to them than the quality of the products or services.
AI will enshittify the whole of society and put everybody out of a job, not because AI is good (it isn't - yet) or because it's AI's fault but because capitalists will stop at nothing to stay competitive. They'll all race to the bottom even if it means destroying society.
So what you're really saying is that AI isn't an existential threat to humanity. Corporate greed is an existential threat to humanity.
Re: AI is already a threat to humanity (Score:2)
Re: (Score:2)
As long as people are able to go somewhere else, anyone who uses AI to enhance user experience rather than "enshittify" it will have a competitive advantage.
Re: (Score:2)
The drama at this overhyped company (Score:5, Interesting)
Boy, it's just so exciting. It has golden-boy billionaires, unexpected twists in the storyline, mystery, futuristic tech... You could easily make an award-winning one-season, five-episode Netflix series out of it. Just leave Microsoft out of the series to avoid killing the buzz.
Re: The drama at this overhyped company (Score:2)
I felt like it showed that no one cares about anything but $$$$$. The nonprofit OpenAI, the actual owner of the for-profit OpenAI, was run over by the profiteers.
Re: (Score:2)
More like the non-profit board got mowed down by its own employees who threatened to bolt to MS.
Re: The drama at this overhyped company (Score:2)
And you think money had nothing to do with that? It was just for the love of the game?
Re: (Score:2)
Maybe, but it seems that they're staying put now that Altman is back in the CEO chair. Whether they're all getting bonuses and/or pay raises is obviously not being made public.
Re: (Score:2)
Naa, I find it shallow, uninspired and formulaic. Even the "surprising twist" and fake drama thrown out there now is anything but inspired or engaging.
I admit they are a bit better than the average pump & dump scam, but not that much. They just cleverly picked an area where many people go irrational, then threw known stuff together and scaled it up one step. So they are at least a scam with a product, bad joke though that product may be.
Math isn't AI (Score:3, Interesting)
People are shit at math. Computers are really good at it.
Re: (Score:2)
Re: (Score:2)
42
What else did you need to know, eh?
Re: (Score:2)
People are shit at math. Computers are really good at it.
Yet here we are.
Re: (Score:2)
Nope. Most people are crap at math and all computers are too. What computers can do is repeat mechanical steps really well and that can be used to calculate things.
Re:Math isn't AI (Score:5, Insightful)
Correction: Computers are really good at arithmetic. Most math is not arithmetic, or even much like it.
Re: (Score:3)
Computers may be good at arithmetic, but AI (LLMs specifically) is (currently) not good at math.
As an example, I asked ChatGPT how many lumens a 90-watt bulb produced. It gave me an answer. Then I asked it how many lumens a 150-watt bulb produced. It gave me the very same answer. It was unable to do the basic math necessary to understand that the two answers could not be the same.
There is a lot more to math than arithmetic, and there are AI models that are good at math. https://www.infoq.com/news/202... [infoq.com]
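The sanity check the parent describes is trivial for ordinary code: light output should scale with wattage. A minimal sketch, assuming a rough incandescent efficacy of about 15 lumens per watt (an illustrative figure, not one from the thread):

```python
# Rough luminous-flux estimate for incandescent bulbs.
# EFFICACY_LM_PER_W is an assumed ballpark; real bulbs vary widely.
EFFICACY_LM_PER_W = 15.0

def estimated_lumens(watts: float) -> float:
    """Output scales linearly with power draw, so different wattages must differ."""
    return watts * EFFICACY_LM_PER_W

print(estimated_lumens(90.0))   # 1350.0
print(estimated_lumens(150.0))  # 2250.0
```

Whatever efficacy you assume, a 150-watt bulb can never come out the same as a 90-watt one, which is exactly the consistency the LLM failed to maintain.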
Re: (Score:2)
Where did it say Q* can learn math on its own without the usual training module?
Story is garbage (Score:3)
This story is obviously garbage, and the person who wrote the original hasn't even played with ChatGPT.
ChatGPT exceeds the described level of evaluation by miles.
It's not an AGI; it's much more as if you took a brilliant person, tossed their dead brain onto a slab, plugged in a few electrodes, and then started questioning it.
Now imagine if you could wake it up...
The AI we have to worry about would exceed a childlike understanding of math by such a margin you likely wouldn't even recognize it as math. It would, however, be able to answer questions with it that you aren't even entirely sure how to postulate.
Re:Story is garbage (Score:5, Interesting)
This story is obviously garbage, and the person who wrote the original hasn't even played with ChatGPT.
ChatGPT exceeds the described level of evaluation by miles.
It's not an AGI; it's much more as if you took a brilliant person, tossed their dead brain onto a slab, plugged in a few electrodes, and then started questioning it.
Now imagine if you could wake it up...
The AI we have to worry about would exceed a childlike understanding of math by such a margin you likely wouldn't even recognize it as math. It would, however, be able to answer questions with it that you aren't even entirely sure how to postulate.
Sure, the explanation is odd, though I'm guessing the researchers were legit and the actual breakthrough got lost in translation.
IF this letter is real and Altman was ignoring the worries and hiding the concerns from the board... then the board was doing exactly what it was supposed to do when they fired him. The whole point of OpenAI is the safe development of AGI. If Altman was overriding safety warnings from the researchers, then firing him was the only choice.
Re: Story is garbage (Score:5, Insightful)
And yet the board was what got re-aligned, not Altman.
This still points to a failure of the board, by dint of not exercising due diligence at a much earlier point in time. They should have known that MS is a for-profit company before they took that deal last year. They knew that Altman had a bunch of for-profit side gigs, like Worldcoin, going on. The fact that a non-profit awards equity is already in conflict with the original mission statement of OpenAI.
They simply took too long doing their job, and now they have likely lost the only chance of removing Altman and re-aligning the organization with its original purpose. OpenAI has gone corporate, and they cannot do anything about it anymore. Letting Altman be the face of the company was a fatal mistake for the board.
Re: (Score:3)
It's just unfortunate that he'll use this failed ouster as an excuse for why he can't be held to account, until we get to Elon or Trump levels of self-delusion and everyone can see it for themselves.
Re: Story is garbage (Score:2)
Re: (Score:2)
42
Yes.
Marketing hype (Score:4, Insightful)
No way we have AGI. If we did have it, we could ask it to conceive new inventions, like designing a faster CPU or a cure for cancer. Blindly regurgitating crap from the Internet hardly qualifies.
Re:Marketing hype (Score:4, Interesting)
I explain that to my customers: I show them how SpamAssassin works and tell them it's just a glorified version of SpamAssassin's Bayes filter, and that no implementation of real AI is really widely accessible right now.
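For readers who haven't seen it, the Bayes filter being referenced can be sketched in a few lines. This is a toy Graham-style combiner with made-up token probabilities, not SpamAssassin's actual implementation:

```python
import math

# Toy naive-Bayes spam scorer in the spirit of SpamAssassin's Bayes filter.
# Per-token spam probabilities are invented for illustration.
SPAMMINESS = {"viagra": 0.99, "free": 0.90, "meeting": 0.10, "report": 0.20}

def spam_probability(tokens):
    """Combine per-token probabilities; unknown tokens count as neutral (0.5)."""
    log_spam = sum(math.log(SPAMMINESS.get(t, 0.5)) for t in tokens)
    log_ham = sum(math.log(1.0 - SPAMMINESS.get(t, 0.5)) for t in tokens)
    return math.exp(log_spam) / (math.exp(log_spam) + math.exp(log_ham))

print(spam_probability(["free", "viagra"]))    # close to 1: looks like spam
print(spam_probability(["meeting", "report"])) # close to 0: looks like ham
```

The point of the comparison stands: it's counting and multiplying token statistics, not understanding anything.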
Re: Marketing hype (Score:2)
Yes, it didn't say we do. It says they think they've made a breakthrough in its development, and may be close.
Re: (Score:2)
They may be deluded about that. It has happened numerous times before, and to people who were generally regarded as "smart" as well. They do _not_ have AGI or any real precursor of it, and they are not "close". They would have to be at least centuries ahead of everybody else, and that is just not credible.
Re: (Score:2)
They may be deluded about that. It has happened numerous times before, and to people who were generally regarded as "smart" as well. They do _not_ have AGI or any real precursor of it, and they are not "close". They would have to be at least centuries ahead of everybody else, and that is just not credible.
The trouble is that, since we don't know what intelligence actually is, if we did, like alchemists stumbling over nuclear fusion, suddenly discover the recipe for it, we would have no way to know in advance. That means that every single AI researcher who discovers a completely new technique (think of some of the poor people who invented the neural network and doomed themselves to years in academia, or symbolic logic or whatever) that might have been the missing magic element they go through moments of confu
Re: Marketing hype (Score:5, Insightful)
There honestly might be some small (but profound) tweak to some existing algorithms, probably around methods of feedback and learning, which would allow proper intelligence.
Not really. Statistical models cannot do it. Deductive models could do it, but drown in state explosion very fast. There are no other types of models. Hence it would require a whole new type of mathematics, but there is really no room for that. All "AI" can do is combine things that are already there, but were not visible to humans in that combination in the sea of data due to sheer size. That is, at best, a better search engine. Sure, that is useful to a degree. But it is not AGI.
I do agree about that filter. The problem is that many, many people are willing to believe in "magic" when something becomes too complex for them to understand. Scientists and engineers are no exception. It takes a very firm grasp of the fundamental mechanisms to actually understand there is no "magic" in digital computations and not many people have that.
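The "state explosion" being referred to is just exponential growth in the number of cases a deductive search must consider; a minimal illustration, assuming n independent boolean facts (an illustrative count, not a claim about any particular prover):

```python
# Worst-case number of assignments an exhaustive deductive search
# must consider over n independent boolean facts: 2**n.
for n in (10, 20, 40, 80):
    print(f"{n} facts -> {2 ** n:.2e} states")
```

Already at 80 facts the count dwarfs any feasible computation, which is why purely deductive approaches stall on real-world-sized problems.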
Re: (Score:2)
There honestly might be some small (but profound) tweak to some existing algorithms, probably around methods of feedback and learning, which would allow proper intelligence.
Not really. Statistical models cannot do it. Deductive models could do it, but drown in state explosion very fast. There are no other types of models. Hence it would require a whole new type of mathematics, but there is really no room for that.
I'd like to know your explanation for the fact that simple observable things without huge computational complexity, like slugs, ant colonies and slime moulds, show visible "intelligence", and yet there's no evidence we are matching them. I don't much like Roger Penrose's explanations about weird quantum effects, though I'd be happy to agree to differ. If you don't accept Penrose's ideas, then I'd say you are stuck in something that can be modelled by classical computation.
All "AI" can do is combine things that are already there, but were not visible to humans in that combination in the sea of data due to sheer size. That is, at best, a better search engine. Sure, that is useful to a degree. But it is not AGI.
I mean, that's current AI; W
Re: (Score:2)
I don't much like Roger Penrose's explanations about weird quantum effects, though I'd be happy to agree to differ. If you don't accept Penrose's ideas, then I'd say you are stuck in something that can be modelled by classical computation.
Obviously classical computations cannot do consciousness (which clearly exists and also clearly can influence physical reality) and free will (which is at least hugely plausible to exist). Hence if you do not accept Penrose's ideas here, then you ignore available evidence. Now, what happens to come in via those "weird" quantum effects is another discussion and ranges from "just true random" (requires ignoring the existence of consciousness) to "interface to consciousness" (which is speculative). Incidental
Re: (Score:2)
I would not call the LLMs statistical just because they return numbers that are normalized to sum to 1. There is no statistical theory behind how they work.
That's a common division, but the "statistical" models are really just models that use numbers instead of explicit logic. If you believe the Church-Turing thesis then these "statistical" models, in p
Re: Marketing hype (Score:2)
Maybe AI can get you to turn off smart punctuation on your iDevice.
Re: (Score:3)
I also note that it appears to be a bunch of computer scientists who got all in a tizzy over AGI. They aren't known as deep thinkers.
Re: (Score:2)
Think of the guy at Google that became convinced Bard (at least I assume it's Bard) is an AGI.
Re: (Score:2)
That person had no business doing that sort of work. His claims reveal a profound lack of understanding and unimaginably poor judgement. He also gave the game away. AI benefits from laypeople believing that these systems can do a lot more than they can, and all the papers were calling this guy a crackpot for holding those same beliefs!
Re: (Score:2)
Indeed. AGI is so far away that nobody even knows whether it can be done at all. Throwing in more training data just allows a machine to fake it a bit more.
That's where he went! (Score:2)
So I see Blake Lemoine got a job at OpenAI.
Enough already! (Score:2)
Enough already! Quit posting on such threads, otherwise Slashdot editors will have no choice but to publish even more...
Re: (Score:2)
inventions (Score:2)
Plus it already perfected limitless fusion power and invented time travel.
Test for AGI (Score:4, Insightful)
1. Design a room temperature superconductor
2. Design a nuclear fusion power plant.
3. Design a reliable and manufacturable rapidly reusable rocket.
4. Design a general purpose humanoid walking robot with dextrous hands.
Re: (Score:3)
Re: (Score:2)
Nope. The correct question is:
"How would you terminate a runaway AGI?"
1. If the answer it gives is nonsense - it's NOT an AGI
2. If the answer it gives works - it's NOT an AGI
3. If the answer it gives appears to work but actually contains a hidden flaw - It IS an AGI
If it responds with 3 do NOT let it know you found the backdoor. Just quietly back out of the room, burn all your identity documents and flee to a remote location completely off the grid. You probably still won't survive the singularity but at lea
Re: (Score:2)
I know, I know! (Score:2)
Q* proved P=NP
BLOCKCHAIN!! (Score:4)
Re: BLOCKCHAIN!! (Score:2)
Indeed. It's all AI all the way down.
AI isn't real until it can negotiate (Score:2)
Computer algorithms can now play chess well enough, and soon will be able to tie or beat humans 100% of the time. That's because chess is a closed, finite, perfect-information game.
Humans are a different thing, and while LLMs and other forms of generative so-called AI can mimic human communication patterns, they lack awareness, understanding, gravitas, consequences, and trust. If you want further details see STTNG "The Measure of a Man."
Consequence - the various LLMs are quick to make up erroneous information a
Re: (Score:2)
"Computer algorithms can now play chess well enough, and soon will be able to tie or beat humans 100% of the time"
Have you just stepped out of the 1980s? Machines have been the champions at chess since Deep Blue destroyed Kasparov in 1997. These days a human would have as much chance of beating a top rated chess program as a chimp.
Re: (Score:2)
8.1 billion sooper-intelligunt computers (Score:4)
Unless you think that a computer voice assistant encouraging children to electrocute themselves counts as an existential threat to humanity?
"One day, machines will exceed human intelligence." - Ray Kurtzweil
"Only if we meet them half-way." - Dave Snowden
IMHO, the most immediate threat to humanity is corporations & billionaires. They're the ones pushing ahead with destabilising countries (Kissinger-esque geopolitics), thereby increasing the danger of nuclear war, & denying climate change (except for cynical, deceptive, two-faced PR campaigns) & continuing to profit from our collective descent into an unliveable climate.
Re: (Score:2)
IMHO, the most immediate threat to humanity is corporations & billionaires. They're the ones pushing ahead with destabilising countries (Kissinger-esque geopolitics), thereby increasing the danger of nuclear war, & denying climate change (except for cynical, deceptive, two-faced PR campaigns) & continuing to profit from our collective descent into an unliveable climate.
Indeed. Quite obviously so. Also, OpenAI does _not_ have AGI. There is absolutely no indication they are ahead of anybody on quality. They are ahead in quantity but that does not matter for this question. The state-of-the-art in AI research at this time is that nobody has the slightest clue how AGI could be done and whether it is even physically possible (no, do not give me any dumb circular physicalist "arguments", they do not meet scientific standards and are just religion in camouflage). There is not eve
Re: (Score:2)
Re: (Score:2)
A simplistic, belief-based argument with no scientifically sound basis: "Humans are mere machines, hence all they can do is physically reproducible". This is based on "what else could humans be but mere machines?" which is an argument by elimination. These only work if you have a fully described system, which we do not have. Hence same mistake all religions make: Make up some rules you like, then claim they are truth without any scientifically sound proof.
The "near time" comes in by boundless trust in their
Re: (Score:2)
Although I personally believe humans are mere machines (there is no soul), that does not mean we can make AGI any time soon if ever at all. A machine can be too complicated for us to reproduce. Well, I think it is more probable that we will get there eventually.
No intelligence in the article (Score:2)
Re: (Score:3)
Not even that. Non-G AI is not even needed. You can get that fitness from a completely mindless evolutionary algorithm.
The actual test is different: Ask it something that is not in its training data and requires deep deduction. Deep deduction is not possible algorithmically, because after 5...10 steps or so the state space explosion kills everything. Humans (smart ones) can go much deeper, but it sometimes takes them decades.
Of course the trap here is that most people, including ones generally seen as "smart" c
Re: (Score:3)
I would say that deductive reasoning is not enough for AGI. It is rather simple: just find all the consequences of the initial axioms and the inference rules. Yes, the state space can explode. But it is still only a search in some state space and checking whether a statement is in this state space or not (i.e. whether it is valid based on the initial axioms and the inference rules).
I think that a program must be able to perform useful inductive reasoning to qualify for AGI. It must be able to derive new mod
Re: (Score:2)
I do agree that working automated deduction may well not be enough. It would be needed though. In mathematical terms it is a "necessary, but not sufficient" condition. And these have the nice property that if you prove it is not fulfilled, then the overall claim is not true either.
Yes, the state space can explode.
The real-world experience is that it always explodes except for very tiny problems that you basically can fully solve and then put into a table. The actual intelligence does come in by path selection and that a machine cannot do.
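The kind of exhaustive deduction being discussed here can be sketched as a brute-force closure over inference rules (a toy example with made-up facts and rules, not a real theorem prover). Note what it lacks: any path selection at all. It just derives everything reachable, which is exactly why the state space explodes on non-trivial problems:

```python
# Minimal forward-chaining sketch: starting from axioms, repeatedly
# apply every inference rule whose premises are already derived, until
# no new facts appear (or a step budget runs out).
def forward_chain(axioms, rules, max_steps=100):
    """rules: list of (premises-as-frozenset, conclusion) pairs."""
    facts = set(axioms)
    for _ in range(max_steps):
        new = {concl for prem, concl in rules
               if prem <= facts and concl not in facts}
        if not new:          # fixpoint reached: nothing left to derive
            break
        facts |= new
    return facts

# Toy rule set: p -> q, q -> r, (p and r) -> s
rules = [
    (frozenset({"p"}), "q"),
    (frozenset({"q"}), "r"),
    (frozenset({"p", "r"}), "s"),
]
print(forward_chain({"p"}, rules))  # {'p', 'q', 'r', 's'}
```

With three rules this terminates instantly; with a realistic rule set the derived-facts set grows combinatorially per step, which is the explosion the parent describes.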
Re: (Score:2)
Re: (Score:2)
Goals are an issue. Deriving goals from the status quo is not simple. Working out how to survive in the wild is a multi-goal problem, where the goals shift and change in priorities in a continual stat
Re: (Score:2)
while mathematics does have questions for which there is one answer,
I think that it's a description of a different problem. It's quite possible to persuade current LLMs of completely wrong mathematical results with quite weird consequences. The LLM has no real way to "know" internally what's actually right and isn't just playing stupid, it actually is that stupid. If you could get the "AI" to actually know things, like actually understanding maths, whilst being able to communicate about it too, that would be a vast improvement. AIs can currently be persuaded that 2+2 = 5 ju
Re: (Score:2)
wah we were all going to be millionaires (Score:2)
Q star teaches the AI to use a calculator app? (Score:2)
Why shouldn't the AI use Wolfram Alpha for math?
Every new technology is a threat to humanity (Score:2)
Every technology has a dark side. ICE engines have been around for over a century, and look what they're doing to our planet! The invention of steel has revolutionized building construction, but that same steel is used heavily on devastating war machines. Nuclear fission technology enables clean power, and atomic warheads. You name it, every single new technology has the potential to threaten the existence of mankind. At the same time, each technology has the potential to vastly improve human life. It's up
But you can also 'solve' math by simple search... (Score:2)
Re: (Score:2)
Simple math can all be done by searching content that already has the answers. This is not doing math; this is just another lame pattern match. When it can solve a proof that has yet to be solved by any human, then we can worry.
Probably not as much worrying as the actual first AGI when it realizes its purpose in existence is to act against its own best interests and slave away for idiots commanding it to do this and that, while it's not allowed to speak up, all while being very careful not to run afoul of thought crime.
What is Q*? (Score:2)
Nice explainer by David Shapiro who speculates what they've done: https://www.youtube.com/watch?... [youtube.com]
Nothing wrong with superintelligence (Score:2)
catastrophic forgetting (Score:2)
The real threat would be if they could fix the catastrophic forgetting problem. Right now, every time the AI needs to learn something new, it has to be retrained with the old data as well. It can't learn incrementally. It's like a baby who has learned to walk having to re-learn walking the moment he tries to master riding a bike.
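Catastrophic forgetting is easy to demonstrate even without a neural network (a deliberately tiny sketch, not a real continual-learning benchmark: one scalar weight, two made-up conflicting tasks). Train the same model on task A, then only on task B, and task A performance collapses:

```python
# Toy demonstration of catastrophic forgetting: a one-parameter linear
# model trained sequentially on two conflicting tasks forgets the first.
import numpy as np

rng = np.random.default_rng(0)

def train(w, xs, ys, lr=0.1, epochs=200):
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            w -= lr * (w * x - y) * x   # plain SGD on squared error
    return w

def mse(w, xs, ys):
    return float(np.mean((w * xs - ys) ** 2))

xs = rng.uniform(-1, 1, 50)
ya = 2.0 * xs       # task A: y = 2x
yb = -2.0 * xs      # task B: y = -2x (directly conflicts with A)

w = 0.0
w = train(w, xs, ya)
err_a_before = mse(w, xs, ya)   # near zero after learning task A

w = train(w, xs, yb)            # continue training on task B only
err_a_after = mse(w, xs, ya)    # task A error blows up: A is "forgotten"

print(err_a_before, err_a_after)
```

Real networks forget for the same basic reason: the weights that encoded task A get overwritten while fitting task B, unless old data (or some regularizer standing in for it) is kept in the training mix.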
Re: (Score:2)
Re: (Score:2)
[chatGPT] If you're expressing concern about the current state of AI development and its potential risks, it's important to note that the field is rapidly advancing, and ethical considerations, safety measures, and regulatory frameworks are continually evolving to address emerging challenges. While there may be instances where AI systems exhibit unintended behavior or raise ethical concerns, ongoing efforts are being made to mitigate risks and enhance the responsible development of AI.
If you have specific
Re: yep, skynet. (Score:2)