
Comment Re:AI as a sacred prestige competition (Score 1) 13

I think the parent commenter was proposing an analogy to the various temples-overtaken-by-jungle and cathedrals-and-hovels societies, where the competing C-suites of the Magnificent Seven and aspirants suck our society dry to propitiate the promised machine god.

I have to say, datacenters will not make for terribly impressive ruins compared to historical theological white elephant projects. Truly, the future archeologists will say, this culture placed great value on cost-engineered sheds for the shed god.

Comment Re:Air cooling (Score 1) 13

At least for new builds and major conversions, it's often a matter of incentives.

There's certainly some room for shenanigans with power prices, but unless it's an outright in-kind subsidy you normally end up paying something resembling the price an industrial customer would. Water prices, though, vary wildly: from basically-free/plunder-the-aquifer-and-keep-what-you-find arrangements that were probably a bad idea even when people were farming there a century or two ago, to something that might at least resemble a commercial or residential water bill.

If the purpose is cooling you can fairly neatly trade off between paying for it in power and paying for it in water; and when the prices differ enormously, people usually choose accordingly if they can get away with it. In the really smarmy cases they'll even run one of the power-focused datacenter efficiency metrics and pat themselves on the back for their bleeding-edge 'power usage effectiveness' (just don't ask about 'water usage effectiveness').

You can run everything closed loop, dumping heat either to air or to some large or sufficiently fast-moving body of water if one is available; but the electrical costs will be higher, so you typically have to force people to do that, whether by fiat or by ensuring that the price of water is suitable.
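
To make that trade-off concrete, here's a minimal sketch of the two standard metrics (the facility figures below are hypothetical, purely for illustration): a power-only metric like PUE rewards evaporative cooling because the water it consumes never shows up in the number, while WUE is where it would.

```python
# Minimal sketch of PUE vs WUE with made-up facility numbers.
# Evaporative cooling trades water for electricity; closed-loop dry
# cooling trades electricity for water. PUE only sees the electricity.

def pue(total_facility_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy / IT energy."""
    return total_facility_kwh / it_kwh

def wue(site_water_liters, it_kwh):
    """Water Usage Effectiveness: site water (liters) / IT energy (kWh)."""
    return site_water_liters / it_kwh

it_kwh = 10_000_000  # hypothetical annual IT load

# Evaporative cooling: little extra electricity, lots of water.
print("evaporative:  PUE", pue(it_kwh * 1.15, it_kwh), " WUE", wue(18_000_000, it_kwh))

# Closed-loop dry cooling: more electricity, negligible water.
print("closed loop:  PUE", pue(it_kwh * 1.40, it_kwh), " WUE", wue(0, it_kwh))
```

With these made-up numbers the evaporative site wins handily on PUE (1.15 vs 1.4) while drinking 1.8 liters of water per IT kWh; the closed-loop site loses the PUE bragging rights but uses essentially none.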

Comment Re:PR article (Score 1) 163

The congenitally blind have never seen colours. Yet in practice they're nearly as good at answering questions about colours, and reasoning about them, as the sighted.

One may raise questions about qualia, but the older I get, the weaker the qualia argument gets. I'd argue that I have qualia about abstracts, like "justice". I have a visceral feeling when I see justice and injustice, and experience it; it's highly associative for me. Have I ever touched, heard, smelled, seen, or tasted an object called "justice"? Of course not. But the concept of justice is so connected in my mind to other things that it's very "real", very tangible. If I think about "the colour red", is what I'm experiencing just a wave of associative connection to all the red things I've seen, some of which have strong emotional attachments to them?

What's the qualia of hearing a single guitar string? Could thinking about "a guitar string" sounding, shortly after my first experience with one, when I don't yet have a good associative memory of it, count as qualia? What about when I've heard guitars play many times and now have a solid memory of guitar sounds, and I then think about the sound of a guitar string? What if it's not just a guitar string, but a riff, or a whole song? Do I have qualia associated with *the whole song*? The first time? Or once I know it by heart?

Qualia seems like a flexible thing to me, merely a connection to associative memory. And sorry, I seem to have gotten off topic in writing this. But to loop back: you don't have to have experienced something to have strong associations with it. Blind people don't learn of colours through seeing them. While there certainly is much to life experiences that we don't write much about (if at all) online, and so someone who learned purely from the internet might have a weaker understanding of those things, by and large our life experiences and the thought traces behind them very much are online. From billions and billions of people, over decades.

Comment Re:PR article (Score 2) 163

Language does not exist in a vacuum. It is a result of the thought processes that create it. To create language, particularly about complex topics, you have to be able to recreate the logic, or at least *a* logic, that underlies those topics. You cannot build an LLM from a Markov model. If you could store one state transition probability per unit of Planck space, a different one at every unit of Planck time, across the entire universe, throughout the entire history of the universe, you could only represent the state transition probabilities for the first half of the first sentence of A Tale of Two Cities.
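
For anyone who wants to sanity-check that claim, here's a minimal back-of-envelope sketch (the 50,000-token vocabulary and the rounded physical constants are my own assumptions for illustration):

```python
import math

# How long a context could a brute-force Markov model cover if every Planck
# volume of the observable universe stored a different transition probability
# at every Planck time since the Big Bang? All figures are rough, assumed values.

planck_volume = 4.2e-105   # m^3
universe_volume = 3.6e80   # m^3, observable universe
planck_time = 5.4e-44      # s
universe_age = 4.35e17     # s

storage_slots = (universe_volume / planck_volume) * (universe_age / planck_time)

vocab = 50_000             # assumed token vocabulary size
# A context of n tokens has vocab**n possible states; solve vocab**n = slots.
max_context = math.log(storage_slots) / math.log(vocab)

print(f"storage slots: ~{storage_slots:.0e}")                 # ~1e246
print(f"max exact Markov context: ~{max_context:.0f} tokens")  # a few dozen
```

It comes out to roughly fifty-odd tokens of exact context, which is in the ballpark of half of that famously long opening sentence.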

For LLMs to function, they have to "think", for some definition of thinking. You can debate over terminology, or how closely it matches our thinking, but what it's not doing is some sort of "the most recent states were X, so let's look up some statistical probability Y". Statistics doesn't even enter the system until the final softmax, and even then only because you have to go from a high-dimensional (latent) space down to a low-dimensional (linguistic) space, so you have to "round" your position to nearby tokens, and there are often many tokens nearby. It turns out that you get the best results if you add some noise into your roundings (indeed, biological neural networks are *extremely* noisy as well).
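
As a minimal sketch of that final "rounding with noise" step (the tokens, logits, and temperature below are made-up illustrative values, not anything taken from a real model):

```python
import numpy as np

# The model emits a logit per token; softmax turns them into probabilities;
# sampling with a temperature is the controlled noise added to the "rounding".
# Vocabulary and logits here are made up for illustration.

vocab = ["red", "crimson", "scarlet", "blue"]
logits = np.array([2.1, 1.9, 1.7, -3.0])   # several tokens are "nearby"

def sample(logits, temperature=0.8, rng=np.random.default_rng(0)):
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

idx, probs = sample(logits)
print(dict(zip(vocab, probs.round(3))), "->", vocab[idx])
```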

As for this article, it's just silly. It's a rant based on a single cherry-picked contrarian paper from 2024, and he doesn't even represent it right. The paper's core premise is that intelligence is not linguistic - and we've known that for a long time. But LLMs don't operate on language. They operate on a latent space, and are entirely indifferent as to what modality feeds into and out of that latent space. The author takes the paper's further argument that LLMs do not operate in the same way as a human brain, and hallucinates that into "LLMs can't think". He goes from "not the same" to "literally nothing at all". Also, the end of the article isn't about science at all; it's an argument Riley makes from the work of two philosophers, and it's a massive fallacy that misunderstands not only LLMs but the brain as well (*you* are a next-everything prediction engine; to claim that being a predictive engine means you can't invent is to claim that humans cannot invent). And furthermore, that's Riley's own synthesis, not even a claim by his cited philosophers.

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper, the argument is: language contains an "imprint" of reasoning, but not the full reasoning process; it's a lower-dimensional space than the reasoning itself (nothing controversial there with regard to modern science). Fedorenko argues that this implies the models don't build up a deeper structure of the underlying logic but only the surface logic, which is a far weaker argument. If the text reads "The odds of a national of Ghana conducting a terrorist attack in Ireland over the next 20 years are approximately...." and it is to continue with a percentage, what the model needs in order to perform well at that task is not "surface logic". It's not just "what's the most likely word to come after 'approximately'". Fedorenko then extrapolates his reasoning to conclude that there will be a "cliff of novelty". But this isn't actually supported by the data; novelty metrics continue to rise, with no sign of his supposed "cliff". Fedorenko notes that in many tasks the surface logic between the model and a human will be identical and indistinguishable - but he expects that to generally fail with deeper tasks of greater complexity. He thinks that LLMs need to change architecture and combine "language models" with a "reasoning model" (ignoring that the language models *are* reasoning - heck, even under his own argument - and that LLMs have crushed the performance of formal symbolic reasoning engines, whose rigidity makes them too inflexible to deal with the real world).

But again, Riley doesn't just take Fedorenko at face value; he runs even further with it. Fedorenko argues that you can actually get quite far just by modeling language. Riley, by contrast, argues - or should I say, next-word predicts with his human brain - that because LLMs are just predicting tokens, they are a "Large Language Mistake" and the bubble will burst. The latter does not follow from the former. Fedorenko's argument is actually that LLMs can substitute for humans in many things - just not everything.

Comment Re:What is thinking? (Score 1) 163

None of your examples are examples of "not thinking." They're examples of things that you think don't think.

The problem with that is that it's entirely useless for extrapolating, as much as your prejudice would like you to think the opposite. It's also generally agreed that rocks don't do arithmetic, but if you arrange them in just the right way they're actually awfully good at it.

Comment Re: Really? (Score 3, Insightful) 163

Funny, but the entire human population spends most of their time not "thinking."

From coordinating complex movements like walking, through routines like driving to work, to, yes, knee-jerk reactions to most things, most of what our brains do is subconscious. Only the weird justifies the effort of actual executive control. Whatever it is that we call "conscious thought" is even rarer.

Comment Dumb (Score 1) 163

Einstein's theory of relativity was not based on scientific research.

Well, you can stop reading there. I don't necessarily agree with the thesis, but the supporting arguments seem to range from wrong to kind of dumb.

Comment Re:Anything for money (Score 1) 89

BYD, at least, has a reasonable footprint in Mexico (I've taken several Uber rides there in BYDs). And unless you're already in the western US, Mexico City is as close as LA or closer - and that's one of the farther-south places in the country. The one driver I asked about his car was very happy with it, and it certainly seemed nice enough. Not a luxury car, but comfortable and spacious.

Comment Re:It WILL Replace Them (Score 4, Insightful) 43

The illusion of intelligence evaporates if you use these systems for more than a few minutes.

Using AI effectively requires, ironically, advanced thinking skills and abilities. It's not going to make stupid people as smart as smart people; it's going to make smart people smarter and stupid people stupider. If you can't outthink the AI, there's no place for you.

Comment Re:Not so odd (Score 2) 33

It's pretty important if you're working in a developing field. The original TPU couldn't do floating point, so it wasn't really useful for training. IIRC they also work best with matrices whose dimensions are multiples of fairly big numbers (128? 256?), with later generations working best with bigger matrices.

That's great for the current focus on gigantic attention matrices but not so great if the next big thing can't be efficiently shoehorned into that paradigm.
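
As a rough sketch of that shape constraint (the 128 tile size follows the recollection above, and the matmul dimensions are made up): when a matrix has to be padded up to the hardware's preferred multiple, a large fraction of the multiply-accumulates can end up doing no useful work.

```python
# Pad matmul dimensions up to a multiple of the tile size and see how much
# of the padded computation is actually useful. TILE and shapes are assumptions.

TILE = 128

def padded_dim(n, tile=TILE):
    return ((n + tile - 1) // tile) * tile   # round up to a multiple of tile

m, k, n = 300, 70, 500                       # hypothetical matmul shape
pm, pk, pn = (padded_dim(d) for d in (m, k, n))

useful = m * k * n
padded = pm * pk * pn
print(f"padded matmul: ({pm} x {pk}) @ ({pk} x {pn})")
print(f"useful fraction of the work: {useful / padded:.0%}")
```

In this made-up case only about 40% of the padded computation is doing real work, which is the kind of mismatch that bites if the next big thing doesn't happen to want big, nicely aligned attention matrices.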

Comment Re:Better if... (Score 1) 156

I'll stick with them, as long as they aren't that iPhone17 orange abomination.

I'm with you on this one....WTF was up with that orange color???

That AND...no Space Grey or Black?!?!

That's pretty much one of the only things keeping me from upgrading my 12 Pro Max to the 17 Pro Max.

I'm hoping in a few months maybe they'll offer better colors....?

Comment Re:Better if... (Score 1) 156

I tend to keep my phones a long time.

I tend to buy top-of-the-line, fully loaded ones....I'm currently on the iPhone 12 Pro Max...with the most storage available at the time...I think 1 TB?

Before that I had the iPhone 6 Plus (did they have a Pro?)....and an iPhone 3GS before that....

Right now, I'm not seriously in the market....my phone still has plenty of space on it, runs as fast as I need, and I don't see any speed or battery degradation on it yet.

I will admit I'm looking at the 17's camera and ability to shoot RAW video...that is starting to tempt me.

I guess my phones are now well over $1K, but I generally just put it on Apple Pay, get my 3% cash back, and pay it off interest-free over 12 months.

I have the cash, but figure why not use "free money" if given the opportunity, eh? I keep the cash for it in an interest-bearing account or invested, etc...

I frankly don't give a fuck what anyone thinks of my phone, if they think anything at all.

As I'd written earlier, I think that in the US phones are such a commodity that no one looks at them as any sort of status symbol, and hasn't for a long time now...
