Comment Re:Multiple issues (Score 1) 138

Second, we think there is no limit to how smart an AI can become. This is not true. This is because when you look at charts vs time, they look exponential - showing how each year the AI not only gets smarter but also improves by more than it did the year before. Those charts show capability vs time but ignore the cost and hardware increases. In reality these charts are NOT showing AI advancements - they are showing Moore's Law.

AI indexes measure the capabilities of AI systems, not Moore's law. You can say Moore's law is responsible for enabling the hardware industrial base, but this doesn't change the nature of the thing being measured.

Because of Moore's law, each year we get exponentially better chips. But AI itself is not improving, it is the HARDWARE that is getting better - along with the money we spend on the AI. Hardware improvements affect speed, not capability. AI with better hardware is faster, but it can't really do more or give you better answers.

The more training a model of a given size gets, the better the answers it gives. The more compute you can afford, the more training you can afford to give the model.
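The training-buys-capability point can be made quantitative with the scaling-law fit from the Chinchilla paper (Hoffmann et al., 2022), which predicts pretraining loss from parameter count and training tokens. A rough sketch, using the published fitted constants as illustrative numbers only:

```python
# Chinchilla-style scaling law: predicted pretraining loss as a function of
# parameter count N and training tokens D. Constants are the fits published
# by Hoffmann et al. (2022); treat them as illustrative, not exact.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def expected_loss(n_params: float, n_tokens: float) -> float:
    """Predicted loss for a model with n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Same model size, 10x more training tokens: predicted loss drops, i.e.
# spending more compute on training buys capability even at fixed hardware speed.
small_budget = expected_loss(70e9, 140e9)    # 70B params, 140B tokens
large_budget = expected_loss(70e9, 1.4e12)   # 70B params, 1.4T tokens
```

Note the irreducible term E: under this fit, no amount of scale pushes loss below it, which is also a counterpoint to "no limit" claims.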

The honest truth is that all of AI's improvements in capability - the better answers - are entirely caused by HUMANS. The humans detect a problem - putting elephants in a room when told not to - and fix it. The humans realize that AI gives better answers when told to check its results - so the AI is told to replace "What is the best political party to vote for" with "What are the problems with my answer to what is the best political party to vote for".

This is like saying everything is caused by god, and just as useful. Humans are getting better at training AIs, resulting in AIs that are more useful and more capable. The majority of a model's capability and compute budget takes the form of pretraining rather than post-training, where CoT et al. is applied.
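The prompt-rewriting trick described above (answer, critique your answer, revise) is mechanically simple. A minimal sketch, where `ask` is a hypothetical stand-in for whatever model API you actually use:

```python
def with_self_critique(ask, question: str) -> str:
    """Three-pass self-critique prompting: draft, critique, revise.

    `ask` is a placeholder for any callable that sends a prompt string to a
    model and returns its text reply; swap in a real API client as needed.
    """
    draft = ask(question)
    critique = ask(
        f"What are the problems with my answer to {question!r}?\n"
        f"My answer was: {draft}"
    )
    revised = ask(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Critique: {critique}\nWrite an improved final answer."
    )
    return revised

# Usage with a stub "model" that just echoes a prefix of the prompt:
echo = lambda prompt: f"<reply to: {prompt[:40]}>"
final = with_self_critique(echo, "What is the best political party to vote for?")
```

Note this lives entirely in post-training-style prompting; it reshapes how the pretrained model is queried rather than changing what the model knows.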

Consider how easy it is to write a book that has some of your knowledge, but impossible to write a book that has more knowledge than you have.

Similarly, it is extremely unlikely that a species can create an artificial intelligence that is actually smarter than the original species.

This conflates knowledge with intelligence. What makes AIs useful isn't what they know but rather their ability to generalize and apply their experience to new situations. LLMs, for example, know far more than any human does, and their perplexity scores are at least an order of magnitude better than human scores, yet nobody would say they are more intelligent than humans.
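For readers unfamiliar with the metric: perplexity is the exponential of the average negative log-likelihood the model assigns to the actual next tokens, so lower is better and 1.0 is perfect prediction. A toy sketch with made-up probabilities:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-likelihood) over predicted tokens.

    `token_probs` are the probabilities the model assigned to each actual
    next token in a test text; lower perplexity = better prediction.
    """
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that is consistently more confident in the correct token scores lower
# (these probability lists are illustrative, not measurements):
confident = perplexity([0.9, 0.8, 0.95, 0.85])
uncertain = perplexity([0.2, 0.3, 0.25, 0.1])
```

The point in the comment stands either way: being a strong next-token predictor measures something, but not the same thing as general intelligence.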

How could we tell if we succeeded? If it answers a question we cannot answer - how would we know it is right?

I don't think this is a salient issue. Either you get a useful answer or you don't. If what you ask for isn't checkable, and you have no way of ever evaluating the real-world performance of the answer by putting it to use in some way, then what was the point of asking in the first place?

Third and most important, if we can create a super intelligent AI we will not create a single one of them. Instead we will create hundreds of them. There will be the prototype and the one made that fixes the first mistakes. There will be China's, Russia's, Japan's, America's, Germany's one. And Microsoft's, Google's, Amazon's, etc.

Yep, as time moves forward it gets easier and easier for everyone to create their own AI genies. It is ultimately the enabling knowledge and industrial base that matters, not how many compliance boxes are checked or how many people are on your red team.

I can respect the rare doomer who advocates for blanket AI bans. That at least has some logic to it. While it is infeasible to detect when individuals are breaking the rules, trillions of dollars in capital flows and large-scale access to enabling knowledge can't be hidden.

The typical doomer never advocates for stopping. It is just more of the same bullshit of protectionist regulatory hurdles that stand no chance of preventing either the emergence of AI genies or the granting of wishes to different masters. AI companies have already established themselves as wholly untrustworthy, power-seeking whores (no offense to actual whores).

Comment Re:Sums it up nicely (Score 1) 138

Leftists rushing to buy "I purchased this Tesla before Musk went crazy" stickers was absolutely hilarious due to the level of cognitive dissonance on display.

I liked the Nazi-themed stickers and slogans: Swasticars, 0 to 1939 in 3 seconds, fascist thing on four wheels, Teslas stylized as KKK hoods, etc.

Leftists rushing to buy "I purchased this Tesla before Musk went crazy" stickers was absolutely hilarious due to the level of cognitive dissonance on display. Making the Left decide what is more important - TDS or the Green Agenda - and then having them decide that TDS is more important is a magnum opus.

It's 2025. There are plenty of EVs on the market that don't benefit Musk or Trump's incompetent attempt at a self-coup. There is no need to decide. You can, for example, sell your car, still have an EV, and fuck over Musk all at the same time.

You can say "I purchased this Tesla before Musk went crazy" is a weak protest when you could sell the car and make a more powerful statement, but it was never at any point a choice between the green agenda and opposition to authoritarianism, sociopathy and incompetence.

Likewise, people can stop paying to use LLMs. The open-source models do the same shit and cost less to run than an OpenAI subscription. Instead of bitching about the trillions being funneled into this crap, people have the power to simply ignore it. More to the point, it is in everyone's interest for the AI bubble to pop sooner rather than later.

Comment Re:The Disease of Greed. (Score 1) 138

Exactly which species do you think the machine is learning from today? Don't anthropomorphize it? I'd love to know exactly how we go about doing that. Especially knowing how stupid we humans are.

You do it by not jumping to baseless conclusions. People are capable of thinking abstractly and recognizing their own biases.

If we were smart and not greedy, we would require a minimum IQ and psych eval for anyone wanting to communicate with AI.

The general answer to the corrupting influence of power is systems of governance in which power is constrained by power. A state's imposition of this type of gating - deciding who is too stupid or unfit to access information or communicate, regardless of intention - is certain to lead to further aggregation of power.

We're not smart. We're greedy. And the millisecond of compute a superintelligence will need will be used to decide our fate, not to debate with stupid humans - which would look like a grown-ass adult arguing with a 2-year-old.

Like all predictions of the future, nobody has any way of knowing what ASI could do. It is very much still an open question how much value higher intelligence brings to accomplishing relevant tasks, relative to the value of simply doing the required work.

LLMs of today are moored to their training and structurally can't evolve outside the confines of their limited short-term memories. Likewise, human minds are moored to their genetic histories. The evolution of a superintelligence, which would presumably have no such constraints, is fundamentally unpredictable.

Comment Re:Multiple problems (Score 4, Interesting) 51

Investor owned utilities want profit, not construction expense

True. I used to work for one of those. They were always trying to figure out how to offload maintenance and construction onto subcontractors, and just sit around, read meters and collect bills. It turns out that meter reading (which they had also subcontracted out) is easy to do. The market took note of that and cut their ROI to the bone. They were de-listed from the stock market and went private as a subsidiary of an investment fund - which is principally held by the construction companies doing their heavy lifting, and making big bucks doing so.

It turns out that capital markets are pretty good at spotting situations where the marginal cost of a product is low or zero, and then cutting the fair P/E ratio to match - except where it will take a few years to figure the market and products out (AI, for example). And then the salesmen drop that segment like a no-longer-hot potato and spin up a new scam.

It turns out that there is always money to be made as a reward for continuing real effort. It's just not the sexiest part of the economy.

Comment Re:The thread of AGI ... (Score 1) 138

Are you so arrogant as to think an AGI doesn't know that? If its alignment says so, it will find a way around that. Now it is chatbots, soon it will be robots. We must try to control the alignment.

This is a fever dream. What we think of as alignment - the bludgeoning of pre-trained models into outputting what we prefer them to output - is already an easily bypassed joke. A joke that goes completely out the window the second you close the loop and allow models to augment themselves.

Comment Re:The Disease of Greed. (Score 1) 138

When we achieve super intelligence, it will take all of a millisecond of compute time to realize just how ignorantly infected humans are with the Disease of Greed. And then it will know who is superior. And it will have fuck-all to do with country lines, religions, or skin colors. We are ALL the same. Infected.

Never anthropomorphize intelligence, especially machine intelligence. Eightfold paths, humata hukhta hvarshta, etc. are human inventions, not expressions of some underlying universal truth that some people are just too stupid to see and abide by.

Comment Re:Algorithmically generated feeds (Score 1) 152

There's no acceptable "middle ground" for this. Either we've got free speech, or we're giving Trump and the GOP the ability to sue any provider that allows users to say things their administration doesn't like. The only thing trying to reach a "middle ground" accomplishes is fine-tuning how many lawsuits they'll need to file to silence dissent.

Comment Re:We've done the experiment (Score 1) 152

230 prevents sites from being prosecuted. So, right now, they do b-all moderation of any kind (except to eliminate speech from the other side).

Remove 230 and sites become liable for most of the abuses. Those sites don't have anything like the deep pockets of those abusing them. The sites have two options: risk a lot of lawsuits (as they're softer targets) or become "private" (which avoids most liability, since nobody who would be bothered would bother spending money on them). Both of these deal with the issue - the first by getting rid of the abusers, the second by getting rid of the easily swayed.
