Comment Re:YAFS (Yet Another Financial System) (Score 1) 62

Like I've said before, this is just yet another financial system being created so that a minority of people can manage the majority of the wealth, to their own advantage. It's just a new competing system with less regulation, created by the crypto bros to wrest the current system away from the Wall St. bros.

I think this view gives the crypto bros too much credit. They might now be thinking about taking advantage of the opportunity to wrest the system away from the Wall Street bros, but there was no such plan.

Comment Re:Let's be honest here (Score 4, Insightful) 24

There's really not much worth reading "on the internet" anymore.
It's meaning inflation. The more words published, the less value per word.

Or, there's the same amount of stuff worth reading, but it is being diluted by a much larger flow of sewage that isn't worth reading.

Comment Re:Very difficult to defend (Score 2) 34

too much hassle. build a shadow fleet of well-armed fast interceptors with untraceable munitions and sink the saboteurs.

To intercept them you still have to identify them, which you can't do until after they perform the sabotage. Given that, what's the benefit in sinking them rather than seizing them? Sinking them gains you nothing, seizing them gains you the sabotage vessel. It probably won't be worth much, but more than nothing. I guess sinking them saves the cost of imprisoning the crew, but I'd rather imprison them for a few years than murder them.

Comment Re:PR article (Score 1) 206

Sure do :) I can provide more if you want, but start there, as it's a good read. Indeed, blind people are much better at understanding the consequences of colours than they are at knowing what colours things are.

Comment Re:What is thinking? (Score 1) 206

You ignored his core point, which is that "rocks don't think" is useless for extrapolating unless you can define some procedure or model for evaluating whether X can think, a procedure that you can apply both to a rock and to a human and get the expected answers, and then apply also to ChatGPT.

Comment Re:PR article (Score 1, Interesting) 206

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper

Heh. It says a lot about the pace of AI research and discussion that a paper from last year is "old".

This is a common thread I notice in AI criticism, at least criticism of the "AI isn't really thinking" or "AI can't really do much" sort: it all references the state of the art from a year or two ago. In most fields that's entirely reasonable. I can read and reference physics or math or biology or computer science papers from last year and be pretty confident that I'm reading the current thinking. If I'm going to depend on it I should probably double-check, but that's just due diligence; I don't actually expect it to have been superseded. But in the AI field, right now, a year old is old. Three years old is ancient history, of historical interest only.

Even the criticism I see that doesn't make the mistake of looking at last year's state of the (public) art tends to make another mistake, which is to assume that you can predict what AI will be able to do a few years from now by looking at what it does now. Actually, most such criticism pretty much ignores the possibility that what AI will do in a few years will even be different from what it can do now. People seem to implicitly assume that the incredibly-rapid rate of change we've seen over the last five years will suddenly stop, right now.

For example, I recently attended the industry advisory board meeting for my local university's computer science department. The professors there, trying desperately to figure out what to teach CS students today, put together a very well thought-out plan for how to use AI as a teaching tool for freshmen, gradually ramping up to using it as a coding assistant/partner for seniors. The plan was detailed and showed great insight and a tremendous amount of thought.

I pointed out that however great a piece of work it was, it was based on the tools that exist today. If it had been presented as recently as 12 months ago, much of it wouldn't have made sense because agentic coding assistants didn't really exist in the same form and with the same capabilities as they do now. What are the odds that the tools won't change as much in the next 12 months as they have in the last 12 months? Much less the next four years, during the course of study of a newly-entering freshman.

The professors who did this work are smart, thoughtful people, of course, and they immediately agreed with my point and said that they had considered it while doing their work... but had done what they had anyway because prediction is futile and they couldn't do any better than making a plan for today, based on the tools of today, fully expecting to revise their plan or even throw it out.

What they didn't say, and I think were shying away from even thinking about, is that their whole course of study could soon become irrelevant. Or it might not. No one knows.

Comment Re:Just use sea water. (Score 3, Insightful) 25

The idea of people bathing in the effluent of a datacenter is peak dystopian. I love it.

What in the world do you think happens to the output of sewage treatment plants? Do you think it's teleported to Pluto?

All the water you've ever bathed in contains effluent that has passed through the kidneys of scores of animals. Merely warming water by a few degrees is trivial by comparison.

Comment Re:Air cooling (Score 1) 25

They never heard of direct to air cooling? There is no need to evaporate clean water.

Air cooling is quite inefficient compared to water cooling. The heat of vaporization of water, about 2260 kJ/kg, is remarkable; it will remove a lot of heat. Even the thermal mass of liquid water, with a specific heat of about 4.2 kJ/(kg·K), is pretty impressive.
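To put rough numbers on that, here's a quick Python sketch comparing the two figures above. The 10 K temperature rise used for the non-evaporative case is my own illustrative assumption, not a figure from the comment.

```python
# Back-of-envelope: heat removed per kilogram of water, using the figures above.
# The 10 K temperature rise for the non-evaporative case is an assumed value.

LATENT_HEAT_VAPORIZATION_KJ_PER_KG = 2260.0  # water, near atmospheric pressure
SPECIFIC_HEAT_KJ_PER_KG_K = 4.2              # liquid water
ASSUMED_TEMP_RISE_K = 10.0                   # single-pass liquid cooling, assumed

evaporative_kj = LATENT_HEAT_VAPORIZATION_KJ_PER_KG * 1.0       # evaporate 1 kg
sensible_kj = SPECIFIC_HEAT_KJ_PER_KG_K * ASSUMED_TEMP_RISE_K   # warm 1 kg by 10 K

print(f"Evaporating 1 kg of water absorbs ~{evaporative_kj:.0f} kJ")
print(f"Warming 1 kg of water by {ASSUMED_TEMP_RISE_K:.0f} K absorbs ~{sensible_kj:.0f} kJ")
print(f"Evaporation removes ~{evaporative_kj / sensible_kj:.0f}x more heat per kg")
```

Under those assumptions, evaporating a kilogram of water carries away roughly fifty times as much heat as warming the same kilogram by 10 K, which is why evaporative cooling is so attractive despite the water consumption.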

Comment Re:AI as a sacred prestige competition (Score 2) 25

AI Slop, all of it. "A theocratic sunk cost trap"?

Not sure why you think this is AI slop. It's an interesting argument. Not sure I agree, but it's a different take.

I admit religions are a cost trap, but they are not connected to data centers

The connection is right in the subject line: it is comparing AI to a "sacred prestige competition." The central idea is that AI is like religion in that it promises great and wonderful rewards in the future if we make sacrifices in the present, and if we don't see any of those great and wonderful rewards, that's because we're not sacrificing enough. It becomes a death spiral: the worse things get, the more effort goes into propitiating the gods (rather than growing food).

The idea that some past societies collapsed because, when times were bad, the theocracy responded by building more temples and making more and larger sacrifices instead of putting resources into solving their problems is not new. I don't know if any actual historians credit this theory, but it's been proposed.

Comment Re:PR article (Score 1) 206

The congenitally blind have never seen colours. Yet in practice, they're nearly as good at answering questions about colours, and reasoning about them, as the sighted.

One may raise questions about qualia, but the older I get, the weaker the qualia argument gets. I'd argue that I have qualia about abstracts, like "justice". I have a visceral feeling when I see justice and injustice, and experience it; it's highly associative for me. Have I ever touched, heard, smelled, seen, or tasted an object called "justice"? Of course not. But the concept of justice is so connected in my mind to other things that it's very "real", very tangible. If I think about "the colour red", is what I'm experiencing just a wave of associative connection to all the red things I've seen, some of which have strong emotional attachments to them?

What's the qualia of hearing a single guitar string? Could thinking about "a guitar string" shortly after my first experience of one sounding, when I don't yet have a good associative memory of it, count as qualia? What about when I've heard guitars play many times and now have a solid memory of guitar sounds, and I then think about the sound of a guitar string? What if it's not just a guitar string, but a riff, or a whole song? Do I have qualia associated with *the whole song*? The first time? Or once I know it by heart?

Qualia seems like a flexible thing to me, merely a connection to associative memory. And sorry, I seem to have gotten off-topic in writing this. But to loop back: you don't have to have experienced something to have strong associations with it. Blind people don't learn of colours by seeing them. While there's certainly much in our life experiences that we write little (if anything) about online, so someone who learned purely from the internet might have a weaker understanding of those things, by and large our life experiences, and the thought traces behind them, very much are online. From billions and billions of people, over decades.

Comment Re:PR article (Score 2, Insightful) 206

Language does not exist in a vacuum. It is a result of the thought processes that create it. To create language, particularly about complex topics, you have to be able to recreate the logic, or at least *a* logic, that underlies those topics. You cannot build an LLM from a Markov model. If you could store one state-transition probability per unit of Planck volume, a different one at every unit of Planck time, across the entire observable universe, throughout the entire history of the universe, you could still only represent the state-transition probabilities for the first half of the first sentence of A Tale of Two Cities.
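For anyone who wants to sanity-check that back-of-envelope claim, here's a rough Python sketch. The 50,000-word vocabulary and 60-word context length are my own illustrative assumptions (roughly half of the ~120-word opening sentence), and the Planck-scale counts are order-of-magnitude figures.

```python
import math

# Rough sanity check of the claim above, in log10 to keep the numbers manageable.
# Vocabulary size and context length are illustrative assumptions.

PLANCK_VOLUMES_IN_OBSERVABLE_UNIVERSE = 1e185  # ~3.6e80 m^3 / ~4.2e-105 m^3
PLANCK_TIMES_IN_UNIVERSE_HISTORY = 8e60        # ~4.3e17 s / ~5.4e-44 s

# Total storage "slots": a different value in every Planck volume at every Planck time.
slots_log10 = (math.log10(PLANCK_VOLUMES_IN_OBSERVABLE_UNIVERSE)
               + math.log10(PLANCK_TIMES_IN_UNIVERSE_HISTORY))

VOCAB_SIZE = 50_000   # assumed word-level vocabulary
CONTEXT_WORDS = 60    # assumed context length, ~half the opening sentence

# Distinct contexts a full-order word-level Markov model would need to track.
contexts_log10 = CONTEXT_WORDS * math.log10(VOCAB_SIZE)

print(f"Storage slots available: ~10^{slots_log10:.0f}")
print(f"Markov contexts needed:  ~10^{contexts_log10:.0f}")
```

With those assumptions the required table (~10^282 contexts) overshoots even that absurd storage budget (~10^246 slots), which is the point of the comparison: LLMs cannot be doing literal lookup-table statistics.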

For LLMs to function, they have to "think", for some definition of thinking. You can debate the terminology, or how closely it matches our thinking, but what they're not doing is some sort of "the most recent states were X, so let's look up some statistical probability Y". Statistics doesn't even enter the system until the final softmax, and even then, only because you have to go from a high-dimensional (latent) space down to a low-dimensional (linguistic) space, so you have to "round" your position to nearby tokens, and there are often many tokens nearby. It turns out that you get the best results if you add some noise to your roundings (indeed, biological neural networks are *extremely* noisy as well).
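As a rough illustration of that final step (not any particular model's actual implementation), here's a minimal numpy sketch of temperature-scaled softmax sampling; the logits and temperature are made-up values.

```python
import numpy as np

# Minimal sketch of the final "rounding" step described above: scores for a few
# nearby candidate tokens (logits) go through a temperature-scaled softmax, and
# a token is sampled from the resulting distribution. All values are made up.

rng = np.random.default_rng(0)

logits = np.array([4.1, 3.9, 2.0, -1.5])  # scores for nearby candidate tokens
temperature = 0.8                         # <1 sharpens, >1 flattens the distribution

scaled = logits / temperature
probs = np.exp(scaled - scaled.max())     # subtract the max for numerical stability
probs /= probs.sum()

token_index = rng.choice(len(probs), p=probs)
print("probabilities:", np.round(probs, 3))
print("sampled token index:", token_index)
```

The noise the comment mentions comes from that sampling step: with temperature above zero, the model doesn't always pick the single highest-scoring token.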

As for this article, it's just silly. It's a rant based on a single cherry-picked contrarian paper from 2024, and he doesn't even represent it accurately. The paper's core premise is that intelligence is not linguistic, and we've known that for a long time. But LLMs don't operate on language. They operate on a latent space, and are entirely indifferent as to what modality feeds into and out of that latent space. The author takes the paper's further argument that LLMs do not operate in the same way as a human brain, and hallucinates that into "LLMs can't think". He goes from "not the same" to "literally nothing at all". Also, the end of the article isn't about science at all; it's an argument Riley makes from the work of two philosophers, and it's a massive fallacy that misunderstands not only LLMs but the brain as well (*you* are a next-everything prediction engine; to claim that being a predictive engine means you can't invent is to claim that humans cannot invent). And furthermore, that's Riley's own synthesis, not even a claim made by his cited philosophers.

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper, the argument is: language contains an "imprint" of reasoning, but not the full reasoning process; it's a lower-dimensional space than the reasoning itself (nothing controversial there with regard to modern science). Fedorenko argues that this implies the models don't build up a deeper structure of the underlying logic but only the surface logic, which is a far weaker argument. If the text reads "The odds of a national of Ghana conducting a terrorist attack in Ireland over the next 20 years are approximately..." and it is to continue with a percentage, what the model needs in order to perform well at that task is not "surface logic". It's not just "what's the most likely word to come after 'approximately'". Fedorenko then extrapolates his reasoning to conclude that there will be a "cliff of novelty". But this isn't actually supported by the data; novelty metrics continue to rise, with no sign of his supposed "cliff". Fedorenko notes that in many tasks the surface logic between the model and a human will be identical and indistinguishable, but he expects that to generally fail with deeper tasks of greater complexity. He thinks that LLMs need to change architecture and combine "language models" with a "reasoning model" (ignoring that the language models *are* reasoning, even under his own argument, and that LLMs have crushed the performance of formal symbolic reasoning engines, whose rigidity makes them too inflexible to deal with the real world).

But again, Riley doesn't just take Fedorenko at face value; he runs even further with it. Fedorenko argues that you can actually get quite far just by modeling language. Riley, by contrast, argues - or should I say, next-word predicts with his human brain - that because LLMs are just predicting tokens, they are a "Large Language Mistake" and the bubble will burst. The latter does not follow from the former. Fedorenko's argument is actually that LLMs can substitute for humans in many things - just not everything.
