
Comment Re:Inference will get cheaper (Score 1) 69

The difference between the AI slop machine and Amazon or Uber is that even when those were losing money, it was nonetheless clear that if they scaled up then scaling efficiencies would yield a lower cost/unit and they'd become profitable. The pathway to making money instead of setting it on fire clearly existed. It existed because it was clear, even before they super-scaled, that Amazon and Uber were doing something useful for which there existed a demand.

So far all we are seeing with the generative AI delusion is an exponentially exploding waste of resources used to pollute my YouTube feed with slop. Every enterprise is trying "AI", and essentially all of them are finding it does not do what the people selling it claim on the tin.

There were no Amazon, Uber, or Internet evangelists trying to convince everyone that those things were useful, or inventing uses for them, because there was no need: the value was obvious and real.

Isn't Uber still losing money?

Amazon had a plan for profitability, so much so that they took on more debt in the early days to scale up. A gamble that paid off because they had a solid plan to begin with, not the "hope the magic beans drop into our laps before we run out of money" type of plan that AI companies have. Uber's business plan was "let's keep doing illegal shit that our competitors can't, and just hope we become too big to fail".

Comment Re:It's not supposed to be profitable (Score 1) 69

The wealthy prefer a dystopian hellhole for 99.9% of the population and extraordinary, god-like opulence for themselves. They want to be able to control who lives and who dies on such a fundamental level that they are like the Pharaohs of old, literally exalted to godhood.

You cannot, as a regular person, comprehend the kind of greed that a man like Elon Musk or Bill Gates experiences as his normal state of being. It is way past just wanting money or yachts or any of that, to the point where they want to be transhuman.

And you need to understand that they do not think of you as a human being. In their eyes you are not at the same level, organically or as a species. You aren't even at the level at which you, for example, perceive a chimpanzee. To a guy like Elon Musk you're more like a slime mold: an utterly alien existence that might occasionally be useful.

Comment Re:It's not supposed to be profitable (Score 1, Troll) 69

I mean, you could stop voting for right-wing politicians because you don't like queer people or brown people or whoever the fuck it is you don't like (in Japan it's certain job descriptions, because the Japanese can't tell each other apart well enough to create racism).

You could also get over that stupid 12-year-old feeling of "it's not fair" when you see somebody having food and shelter without being miserable 40 or more hours per week.

But you're not going to do that. Or if you do, your friends and family aren't. So, like crabs in a bucket, we are going to destroy ourselves.

I'm not acting like there's anything that can be done about it; I'm just venting. Flaws in human reasoning and emotions mean our species is doomed, and it is incredibly frustrating that we're all going to die for such a stupid and idiotic reason.

Who knows maybe one of the other species will take over after we kill ourselves. Smart money is on raccoons. A few more mutations and they'll have opposable thumbs. Beavers are also in the running.

Comment Not enough time (Score 1) 69

The population decline from low birth rates isn't drastic enough. You can look up how the math works out, but there is a long tail of continued population growth before you see the crash: all the people already of childbearing age are still working their way through their lives.
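
Here's a minimal, illustrative cohort model of that "population momentum" effect; every number in it (age structure, fertility, survival) is a made-up round figure, not real demographic data:

    import numpy as np

    # Fertility is below replacement from year 0, yet the total keeps rising
    # for decades, because the large cohorts already born have not yet aged
    # out of their childbearing years.
    AGE_GROUPS = 20                     # 5-year age groups: 0-4 ... 95-99
    STEP_YEARS = 5
    CHILDBEARING = list(range(3, 10))   # roughly ages 15-49
    FERTILITY_PER_STEP = 0.15           # sub-replacement (made-up value)
    SURVIVAL = 0.97                     # crude per-step survival, same at all ages

    # A young population pyramid: bigger cohorts at younger ages.
    pop = np.linspace(2.0, 0.5, AGE_GROUPS)

    totals = []
    for step in range(20):                    # simulate 100 years
        totals.append(pop.sum())
        births = FERTILITY_PER_STEP * pop[CHILDBEARING].sum()
        pop = np.roll(pop, 1) * SURVIVAL      # everyone ages one step
        pop[0] = births                       # newborn cohort replaces the wrap-around

    peak = int(np.argmax(totals))
    print(f"start {totals[0]:.1f}, peak {max(totals):.1f} at year {peak * STEP_YEARS}, "
          f"year {(len(totals) - 1) * STEP_YEARS}: {totals[-1]:.1f}")

Even with fertility set below replacement from the start, the total keeps climbing for a few decades before it rolls over, which is the "long tail" referred to above.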

So long before our population could adjust, we're going to get hit with huge waves of layoffs that will cause massive social strife. There's no getting away from it.

Comment Re:The experiment to train LLMs on LLM output begi (Score 1) 54

There won't be much of an experiment per se. In practice it will quickly devolve into a few big players that control the platforms people use, so that they can continuously access new training material.

So Microsoft, maybe Apple, Facebook, and possibly but probably not Twitter (since we just learned 80% of the accounts on Twitter are Russians and Bangladeshis pretending to be American conservatives) will continue to thrive, because they will be able to tell the difference between a bot and a human being thanks to their control of the platform.

Everyone else who was just accessing free training data goes tits up soon. Some of them will be bought out.

In addition to devastating the job market and devouring electricity and water, AI is also going to produce huge monopolies, because it's a technology that inherently lends itself to monopolies.

Comment It's not supposed to be profitable (Score 3, Insightful) 69

It's supposed to be the answer to the question "if nobody buys the wealthy's products how are they going to stay rich?"

The goal here is to replace as many workers as possible and eliminate the dependency on consumers.

The ultra wealthy want to go back to being like kings. Basically feudalism.

They will have a very tiny number of guildsmen and scribes, and a handful of knights to keep them in line.

Everyone else gets a lifestyle below that of a medieval peasant, because you're not even needed to tend the land anymore; they will have machines for that.

It never ceases to amaze me how many people don't realize what's happening here. Even more so, there are the people who realize it but just put it out of their minds, because the idea of the ultra wealthy dismantling capitalism is so far outside what people view as possible that they can't emotionally comprehend it even if they can understand it intellectually.

And of course there are the numbskulls who think that they are somehow going to profit from the collapse of modern civilization. It's a big club, boys, and you ain't in it.

Comment You can't cut off cheap Chinese goods (Score 1, Interesting) 81

Europe, like America, gives too much money to its 1%. The only way to maintain their economies is with cheap goods made by slave labor in China. That's the only way to offset the increasingly large amounts of money being moved from the bottom to the top.

If you want to fix that, you have to cut off the flow of money to the top, and we're not going to do that. There are a variety of terrible reasons why that is the case, but it just is.

I honestly do not know of a solution that prevents human civilization from collapsing. I suspect that within 10 or 20 years we are going to hand nuclear launch codes over to religious lunatics, and that's going to be game over for humanity.

I definitely do not know how we avoid regressing back into feudalism even if we don't destroy our species. People just like worshiping rulers and kings, and the ones that don't, don't have the tendencies toward violence that the ones that do have. If there's one thing Afghanistan taught us, it's that a very small number of idiots willing to use terrible violence can install a very, very small number of people as absolute rulers.

We could counter this with education and critical thinking, but even among people who should be well educated, all I'm hearing is how we should all go into the trades and be plumbers or whatever. Anti-intellectualism and a hatred and disdain for experts dominate discourse now. That overpowering 12-year-old urge not to be told what to do has completely overwhelmed society, and I do not know how you push back against that.

Basically don't tell me what to do.

Comment Re:PR article (Score 2) 238

Sure do :) I can provide more if you want, but start there, as it's a good read. Indeed, blind people are much better at understanding the consequences of colours than they are at knowing what colours things are.

Comment AI data centers aren't going anywhere (Score 2) 54

and neither are their power demands. AI exists to automate white-collar jobs. The demand for that is huge.

When the AI bubble bursts, yes, you and I are going to bail out the banks that loaned dodgy AI companies hundreds of billions (either that or they'll crash the global economy; remember, you're a hostage, not a consumer).

But all that infrastructure you paid for with your tax dollars will just be bought up cheap by whoever survives, and you'll lose your job to it.

But hey, look over there! It's a DEI trans girl playing high school sports carrying a happy holidays sign!

Comment Re:PR article (Score 1) 238

The congenitally blind have never seen colours. Yet in practice they're nearly as good at answering questions about, and reasoning about, colours as the sighted.

One may raise questions about qualia, but the older I get, the weaker the qualia argument gets. I'd argue that I have qualia about abstracts, like "justice". I have a visceral feeling when I see justice and injustice, and experience it; it's highly associative for me. Have I ever touched, heard, smelled, seen, or tasted an object called "justice"? Of course not. But the concept of justice is so connected in my mind to other things that it's very "real", very tangible. If I think about "the colour red", is what I'm experiencing just a wave of associative connection to all the red things I've seen, some of which have strong emotional attachments to them?

What's the qualia of hearing a single guitar string? Could thinking about "a guitar string sounding", shortly after my first experience with a guitar string, when I don't have a good associative memory of it, count as qualia? What about when I've heard guitars played many times and now have a solid memory of guitar sounds, and I then think about the sound of a guitar string? What if it's not just a guitar string, but a riff, or a whole song? Do I have qualia associated with *the whole song*? The first time? Or once I know it by heart?

Qualia seem like a flexible thing to me, merely a connection to associative memory. And sorry, I seem to have gotten off-topic in writing this. But to loop back: you don't have to have experienced something to have strong associations with it. Blind people don't learn of colours through seeing them. While there certainly is much to life experience that we don't write much about (if at all) online, and so someone who learned purely from the internet might have a weaker understanding of those things, by and large our life experiences and the thought traces behind them very much are online. From billions and billions of people, over decades.

Comment Re:PR article (Score 1, Insightful) 238

Language does not exist in a vacuum. It is a result of the thought processes that create it. To produce language, particularly about complex topics, you have to be able to recreate the logic, or at least *a* logic, that underlies those topics. You cannot build an LLM from a Markov model. If you could store one state transition probability per unit of Planck space, a different one at every unit of Planck time, across the entire universe, throughout the entire history of the universe, you could only represent the state transition probabilities for the first half of the first sentence of A Tale of Two Cities.
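
A rough back-of-envelope sketch of that combinatorial explosion (the vocabulary size and the cosmological constants are all assumed round figures, and the comparison is illustrative rather than rigorous):

    import math

    VOCAB = 50_000                    # assumed LLM-style vocabulary size
    PLANCK_VOLUME_M3 = 4.2e-105
    PLANCK_TIME_S = 5.4e-44
    UNIVERSE_VOLUME_M3 = 3.6e80       # observable universe, rough
    UNIVERSE_AGE_S = 4.4e17

    # One "storage cell" per Planck volume, refreshed every Planck time.
    cells = (UNIVERSE_VOLUME_M3 / PLANCK_VOLUME_M3) * (UNIVERSE_AGE_S / PLANCK_TIME_S)
    print(f"Planck volume-times available: ~10^{math.log10(cells):.0f}")

    # A full Markov transition table over a context of n tokens needs
    # VOCAB**n entries. How long a context could the universe hold?
    max_context = int(math.log(cells) / math.log(VOCAB))
    print(f"Longest context a full lookup table could cover: ~{max_context} tokens")

With those assumed numbers the table maxes out at a context of roughly fifty tokens, i.e. a few dozen words, which is about the scale of half of Dickens' opening sentence; the exact figure obviously depends on the constants you plug in.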

For LLMs to function, they have to "think", for some definition of thinking. You can debate the terminology, or how closely it matches our thinking, but what it's not doing is some sort of "the most recent states were X, so let's look up some statistical probability Y". Statistics doesn't even enter the system until the final softmax, and even then only because you have to go from a high-dimensional (latent) space down to a low-dimensional (linguistic) space, so you have to "round" your position to nearby tokens, and there are often many tokens nearby. It turns out that you get the best results if you add some noise into your roundings (indeed, biological neural networks are *extremely* noisy as well).
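
For what that final "rounding" step looks like in practice, here is a minimal sketch of softmax sampling with a temperature knob (illustrative only; real decoders typically add top-k/top-p filtering and other refinements):

    import numpy as np

    def sample_next_token(logits: np.ndarray, temperature: float = 0.8) -> int:
        """Turn the model's final-layer scores over the vocabulary into a
        probability distribution and draw one token from it."""
        scaled = logits / temperature                  # <1 sharpens, >1 flattens the distribution
        scaled = scaled - scaled.max()                 # subtract max for numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
        return int(np.random.default_rng().choice(len(probs), p=probs))

    # Toy example: three candidate tokens sit near the model's latent-space
    # "position"; sampling (rather than always taking the argmax) is the
    # "noise in the rounding" mentioned above.
    print(sample_next_token(np.array([2.1, 2.0, 0.3])))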

As for this article, it's just silly. It's a rant based on a single cherry-picked contrarian paper from 2024, and he doesn't even represent it right. The paper's core premise is that intelligence is not linguistic - and we've known that for a long time. But LLMs don't operate on language. They operate on a latent space, and are entirely indifferent as to what modality feeds into and out of that latent space. The author takes the paper's further argument that LLMs do not operate in the same way as a human brain, and hallucinates that into "LLMs can't think". He goes from "not the same" to "literally nothing at all". Also, the end of the article isn't about science at all; it's an argument Riley makes from the work of two philosophers, and is a massive fallacy that not only misunderstands LLMs but the brain as well (*you* are a next-everything prediction engine; to claim that being a predictive engine means you can't invent is to claim that humans cannot invent). And furthermore, that's Riley's own synthesis, not even a claim by his cited philosophers.

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper, the argument is: language contains an "imprint" of reasoning, but not the full reasoning process; it's a lower-dimensional space than the reasoning itself (nothing controversial there with regard to modern science). Fedorenko argues that this implies the models don't build up a deeper structure of the underlying logic but only the surface logic, which is a far weaker argument. If the text reads "The odds of a national of Ghana conducting a terrorist attack in Ireland over the next 20 years are approximately..." and it is to continue with a percentage, what the model needs in order to perform well at that task is not "surface logic". It's not just "what's the most likely word to come after 'approximately'". Fedorenko then extrapolates his reasoning to conclude that there will be a "cliff of novelty". But this isn't actually supported by the data; novelty metrics continue to rise, with no sign of his supposed "cliff". Fedorenko notes that in many tasks the surface logic between the model and a human will be identical and indistinguishable - but he expects that to generally fail with deeper tasks of greater complexity. He thinks that LLMs need to change architecture and combine "language models" with a "reasoning model" (ignoring that the language models *are* reasoning - heck, even under his own argument - and that LLMs have crushed the performance of formal symbolic reasoning engines, whose rigidity makes them too inflexible to deal with the real world).

But again, Riley doesn't just take Fedorenko at face value; he runs even further with it. Fedorenko argues that you can actually get quite far just by modeling language. Riley, by contrast, argues - or should I say, next-word predicts with his human brain - that because LLMs are just predicting tokens, they are a "Large Language Mistake" and the bubble will burst. The latter does not follow from the former. Fedorenko's argument is actually that LLMs can substitute for humans in many things - just not everything.
