
Comment AI data centers aren't going anywhere (Score 1) 39

and neither are their power demands. AI exists to automate white-collar jobs. The demand for that is huge.

When the AI bubble bursts, yes, you and I are going to bail out the banks that loaned dodgy AI companies hundreds of billions (either that or they'll crash the global economy; remember, you're a hostage, not a consumer).

But all that infrastructure you paid for with your tax dollars will just be bought up on the cheap by whoever survives, and you'll lose your job to it.

But hey, look over there! It's a DEI trans girl playing high school sports carrying a happy holidays sign!

Comment Re:Difference in fundamental rights. (Score 1) 29

You do understand that government budgets don't work like balancing your home expenses, right?

People keep saying this, but where is the real evidence for it?

We have been running this pure fiat experiment for less than half a century. It seems to me it works only as long as people are willing to buy government bonds, and that lasts only as long as they think the interest paid outweighs the depreciation over time plus whatever value they place on the 'security' of capital preservation.

This is exactly the same as my home budget: I can spend more than I make as long as people continue to extend me credit ...

Once that stops working, the government has to rely on balance-sheet expansion to meet its obligations. The 2020s have proven deficits DO matter, because if you pump the money supply when there is any supply or production constraint, the people who do have money take it off the sidelines and buy the things they want by outbidding everyone else. Inflation skyrockets! In the market economy, the distribution of goods then becomes even more wildly unequal.

This too is actually pretty similar to my home budget, in that I can default, file for bankruptcy, and my creditors will be forced to take a lot of write-downs while I get to keep a lot of my stuff. I can do this at least once (maybe some governments can get away with it more than once; we don't know).

Comment Re:At last. (Score 1) 29

Just be sure who the fools are. While this applies more to the federal government, Texas might know something and might be hedging.

I mean, if I were a large government with structural financial problems, deficits that politically I could never close, and a mountain of existing debt, I might be very interested in, say, transitioning the real economy to something like Bitcoin (giving my wealthiest friends a heads-up on that, of course).

Get businesses selling stuff in BTC, paying employees in BTC; collect taxes in BTC, pay government contractors in BTC, etc. Then you can print the old currency as much as you like. It won't inflate the BTC the real economy is running on, but you can pay off all your legacy-currency-denominated debts with no pesky legal hurdles, no force majeure, no 'voluntary write-downs', no 'default' that triggers other credit events and penalties... You just wipe out everyone's savings, and there is fu*k all they could do about it. 'Everyone' would also include foreign sovereigns that hold a lot of your legacy currency in their reserves.

Just as importantly, all your existing appropriations and entitlement obligations are in the old currency too. So if some court says "nope, Congress appropriated $150 billion to Harvard to search for snails with reverse-twisting shells, you must pay them", you can just say "here is your pallet of newly minted cash; good luck finding anyone who will still accept it".

Now you might ask: why Bitcoin? Well, because as crypto goes it probably has the most trust. Importantly, you don't control the issuance, so even if you are a large holder of Bitcoin you might still succeed in getting people to accept it and use it as a perceived safe store of wealth, even as you vaporize any trust anyone has in your legacy currency and debt instruments.

Now, I am not saying this is going to happen. I am not saying it would not create a lot of problems. I am not saying that if someone like Scott Bessent actually has such an idea, it won't be recorded as one of the biggest mistakes in human history; I think it would certainly be among them. I can see the appeal, though. I can imagine some of our plutocrat class having the hubris, recklessness, and greed needed to think it a good idea as well.

Comment Re:And just like that, everyone stopped using Plex (Score 1) 56

That would be awful. The setup you describe won't handle subtitles or multiple audio tracks (multilingual support), it won't remember where you stopped watching so you can resume later, and it would make searching the library a total pain.

You do realize that what you're describing is all of about ten lines of JavaScript with the right APIs (the audioTracks list, text tracks for subtitles, the currentTime property), right?
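For what it's worth, here's roughly what that looks like; closer to twenty lines than ten, but the point stands. A minimal sketch, not production code: the element id, storage key, and language choice are made up for illustration, and audioTracks support varies by browser.

```javascript
const video = document.getElementById('player');

// Resume from where the viewer left off (currentTime property).
video.addEventListener('loadedmetadata', () => {
  const saved = parseFloat(localStorage.getItem('resume:' + video.currentSrc));
  if (!Number.isNaN(saved)) video.currentTime = saved;
});

// Remember the position as playback progresses.
video.addEventListener('timeupdate', () => {
  localStorage.setItem('resume:' + video.currentSrc, String(video.currentTime));
});

// Subtitles: show the first embedded text track, hide the rest.
for (let i = 0; i < video.textTracks.length; i++) {
  video.textTracks[i].mode = i === 0 ? 'showing' : 'hidden';
}

// Audio language selection, where the browser exposes audioTracks
// (support varies: Safari has it; others gate it behind flags).
if (video.audioTracks) {
  for (let i = 0; i < video.audioTracks.length; i++) {
    video.audioTracks[i].enabled = video.audioTracks[i].language === 'en';
  }
}
```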

Comment Re:29 Months? (Score 1) 156

I have a USB port in my car so that I can charge while I drive and my phone is still fresh when I get where I'm going. It has the added benefit of removing the uncertainty about who is driving, so there is no confusion. The problem with the Apple ecosystem is that it locks you into their walled garden.

The walled garden meme is a bit specious. This is a phone. When I pick it up, I just want it to work. My smartphone is in the same category as my refrigerator or automobile, only a bit more sophisticated. I like that when I bring my phone near my Mac, I can operate the phone the same way I do my Mac. It depends on what people want to do with their phones. Some want to surf porn on a toy screen; some are addicted to social media. I fear even my wife is a little bit addicted, though to a much lesser extent - she doesn't need the validation rush so many get. Me? I just use it to get texts and phone calls.

I used to have a Mac but I would really rather have the freedom to buy a $99 phone and stay free of their marketing pressure.

I'm not certain what you mean by marketing pressure. Did you somehow feel the need to "keep up" with the latest gizmos?

In fact, it is a point of pride for me that I know how to do these things myself and can therefore stay free. They could make their products work just as well with everything else, but they have people like you who willingly volunteer to give them money for tiny benefits, so they won't.

It's all a matter of opinion and temperament. I can do many things at a high level. But messing with my smartphone to do what I consider basic things - no thanks.

And my burn rate is such that these tiny benefits you speak of are rounding errors. A 99-dollar el cheapo Android versus an iPhone 17 Pro Max at ~$1200, or the Galaxy S25 Ultra at ~$1260, doesn't mean that much.

But 99 dollars? Dood, you need to get this one: https://www.amazon.com/BLU-Unl.... Stop wasting your money on those overpriced 99-dollar phones. You made of money or something? 59 dollars for the win. Buy smart! The best phone is the cheapest one. You can use the money you save to buy gold or coin. And you are free. 8^)

Comment Difference in fundamental rights. (Score 1) 29

Jokes aside about Thanksgiving...

Thanksgiving dinner costs a little more this year, govt can I has a few thousand in free money? What's the difference between those examples and Texas buying BTC?

The difference is that food is part of(*) the right to an adequate standard of living, as per the Universal Declaration of Human Rights.
Not dying of starvation is a fundamental human right.

So yeah, I get that you're joking about somebody throwing an excessively opulent Thanksgiving party and then complaining that it costs a bit much.

But making sure that every single person has access to sufficient food is a core job that government has to do(**). You can make jokes around what constitutes "sufficient", but you can't deny that nobody should die of starvation.
On the other hand, making sure that your Ponzi scheme doesn't implode before you've had time to make it to the bank isn't the government's job. At best, the government's job would be to regulate in order to make it less likely that unsuspecting idiots get caught up in such scams.

(*): along with "clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control."

(**): Yes, I understand that from the US' point of view, I am an evil Euro-communist and my country is some socialist hell-hole.

There isn't excess public money; it's all deficit trailing back to the black hole of $37t...38, whatever it is now, since states are dependent on federal money

You do understand that government budgets don't work like balancing your home expenses, right?

Comment Re:PR article (Score 1) 157

The congenitally blind have never seen colours. Yet in practice, they're practically as efficient at answering questions about and reasoning about colours as the sighted.

One may raise questions about qualia, but the older I get, the weaker the qualia argument gets. I'd argue that I have qualia about abstracts, like "justice". I have a visceral feeling when I see justice and injustice, and experience it; it's highly associative for me. Have I ever touched, heard, smelled, seen, or tasted an object called "justice"? Of course not. But the concept of justice is so connected in my mind to other things that it's very "real", very tangible. If I think about "the colour red", is what I'm experiencing just a wave of associative connection to all the red things I've seen, some of which have strong emotional attachments to them?

What's the qualia of hearing a single guitar string? Could thinking about the sound of "a guitar string" shortly after my first experience of hearing one, when I don't yet have a good associative memory of it, count as qualia? What about when I've heard guitars play many times and have a solid memory of guitar sounds, and I then think about the sound of a guitar string? What if it's not just a guitar string, but a riff, or a whole song? Do I have qualia associated with *the whole song*? The first time? Or once I know it by heart?

Qualia seems like a flexible thing to me, merely a connection to associative memory. And sorry, I seem to have gotten off topic in writing this. But to loop back: you don't have to have experienced something to have strong associations with it. Blind people don't learn of colours through seeing them. There is certainly much in our life experiences that we don't write much about (if at all) online, so someone who learned purely from the internet might have a weaker understanding of those things; but by and large, our life experiences and the thought traces behind them very much are online. From billions and billions of people, over decades.

Comment Re:PR article (Score 2) 157

Language does not exist in a vacuum. It is a result of the thought processes that create it. To create language, particularly about complex topics, you have to be able to recreate the logic, or at least *a* logic, that underlies those topics. You cannot build an LLM from a Markov model. If you could store one state transition probability per unit of Planck space, a different one at every unit of Planck time, across the entire universe, throughout the entire history of the universe, you could only represent the state transition probabilities for the first half of the first sentence of A Tale of Two Cities.
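To put rough numbers on that (a back-of-the-envelope sketch; the ~50k-token vocabulary, ~10^185 Planck volumes in the observable universe, and ~8x10^60 elapsed Planck times are assumed figures, not from the post above):

```latex
% Storage slots available: Planck volumes in the observable universe
% times Planck times elapsed since the Big Bang
N_{\mathrm{slots}} \approx 10^{185} \times 8 \times 10^{60} \approx 10^{246}

% A full-context Markov model over vocabulary V with context length n
% needs on the order of V^n transition probabilities. With V = 5 \times 10^4:
V^{n} > N_{\mathrm{slots}} \iff n > \frac{246}{\log_{10} V} \approx \frac{246}{4.7} \approx 52
```

Fifty-odd tokens is roughly half the ~120-word opening sentence of A Tale of Two Cities, which is consistent with the claim above.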

For LLMs to function, they have to "think", for some definition of thinking. You can debate the terminology, or how closely it matches our thinking, but what it's not doing is some sort of "the most recent states were X, so let's look up some statistical probability Y". Statistics doesn't even enter the system until the final softmax, and even then, only because you have to go from a high-dimensional (latent) space down to a low-dimensional (linguistic) space, so you have to "round" your position to nearby tokens, and there are often many tokens nearby. It turns out that you get the best results if you add some noise into your roundings (indeed, biological neural networks are *extremely* noisy as well).
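A toy sketch of that final "rounding with noise" step: temperature sampling over softmax'd logits. The logit values below are invented; a real model emits one logit per vocabulary token.

```javascript
function sampleToken(logits, temperature = 1.0) {
  const scaled = logits.map(x => x / temperature);
  const maxLogit = Math.max(...scaled);               // for numerical stability
  const exps = scaled.map(x => Math.exp(x - maxLogit));
  const sum = exps.reduce((a, b) => a + b, 0);
  const probs = exps.map(e => e / sum);               // the softmax
  let r = Math.random();                              // the injected noise
  for (let i = 0; i < probs.length; i++) {
    r -= probs[i];
    if (r <= 0) return i;
  }
  return probs.length - 1;
}

// Three candidate tokens "nearby" in latent space: index 0 or 1 usually
// wins; index 2 shows up occasionally. Lower temperature sharpens this.
console.log(sampleToken([2.1, 1.9, 0.3], 0.8));
```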

As for this article, it's just silly. It's a rant based on a single cherry-picked contrarian paper from 2024, and he doesn't even represent it correctly. The paper's core premise is that intelligence is not linguistic - and we've known that for a long time. But LLMs don't operate on language. They operate on a latent space, and are entirely indifferent as to what modality feeds into and out from that latent space. The author takes the paper's further argument that LLMs do not operate in the same way as a human brain, and hallucinates that into "LLMs can't think". He goes from "not the same" to "literally nothing at all". Also, the end of the article isn't about science at all; it's an argument Riley makes from the work of two philosophers, and it's a massive fallacy that misunderstands not only LLMs but the brain as well (*you* are a next-everything prediction engine; to claim that being a predictive engine means you can't invent is to claim that humans cannot invent). And furthermore, that's Riley's own synthesis, not even a claim by his cited philosophers.

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper, the argument is: language contains an "imprint" of reasoning, but not the full reasoning process; it's a lower-dimensional space than the reasoning itself (nothing controversial there with regards to modern science). Fedorenko argues that this implies the models don't build up a deeper structure of the underlying logic but only the surface logic, which is a far weaker argument. If the text reads "The odds of a national of Ghana conducting a terrorist attack in Ireland over the next 20 years are approximately..." and it is to continue with a percentage, "surface logic" is not enough for the model to perform well at the task. It's not just "what's the most likely word to come after 'approximately'". Fedorenko then extrapolates his reasoning to conclude that there will be a "cliff of novelty". But this isn't actually supported by the data; novelty metrics continue to rise, with no sign of his supposed "cliff". Fedorenko notes that in many tasks, the surface logic between the model and a human will be identical and indistinguishable - but he expects that to generally fail with deeper tasks of greater complexity. He thinks that LLMs need to change architecture and combine "language models" with a "reasoning model" (ignoring that the language models *are* reasoning - heck, even under his own argument - and that LLMs have crushed the performance of formal symbolic reasoning engines, whose rigidity makes them too inflexible to deal with the real world).

But again, Riley doesn't just take Fedorenko at face value; he runs even further with it. Fedorenko argues that you can actually get quite far just by modeling language. Riley, by contrast, argues - or should I say, next-word predicts with his human brain - that because LLMs are just predicting tokens, they are a "Large Language Mistake" and the bubble will burst. The latter does not follow from the former. Fedorenko's argument is actually that LLMs can substitute for humans in many things - just not everything.

Comment Re:Since we know nothing about it (Score 2) 32

We know it interacts at most weakly electromagnetically, which means one of the posited ways in which planets form, initially via electrostatic attraction of dust particles, isn't likely to work. This means dark matter will be less "clumpy" and more diffuse, and less likely to create denser conglomerations that could lead to stellar and planetary formation.

What this finding does suggest, if it holds true, is that some form of supersymmetry, as an extension of the Standard Model, is real. Experiments over the last 10-15 years have heavily constrained the masses and energy levels of any supersymmetry model, so it would appear that if this is the case, it's going to require returning to a model that some physicists had started to abandon.

Comment Re:Really? (Score 1) 157

understand (v):
interpret or view (something) in a particular way.
I'd love to see your proof that an LLM doesn't "understand" something.

This should be pretty simple to explain using the hangman example. Ask an LLM to play a game of hangman. It will agree, and as it "knows" the rules, it will claim to pick a word and make you guess letters. I just tried this on GPT-5 and it chose a five-letter word. I made a bunch of random guesses, some of them correct, some of them incorrect (so it's not just accepting all guesses as valid), although it didn't penalize me for the wrongly guessed letters. Eventually I had revealed the last four of the five letters, and they were "ailm". Since I couldn't figure out what the word was, I guessed random letters until I said W, and it told me I was correct and that the word was "wailm". No, that's not a word, and I asked the model if it thinks it is a word, to which it replied:

The word “wailm” does not exist in standard English dictionaries — it appears to be a non-word or typo.
I double-checked across major sources like Cambridge, Merriam-Webster, Collins, and Wiktionary, and none of them list “wailm” as a valid entry.

Why “wailm” showed up in our hangman game
          It looks like the letters you guessed happened to form a sequence that doesn’t correspond to a real English word.
          Hangman games usually rely on a predefined dictionary or word list. Since I improvised the word, it ended up being nonsensical.

(emphasis mine)

So it screwed up, in more ways than one actually, not just with the word. The whole point of a game of hangman is that you're supposed to have a limited number of guesses for the letters, but it gave me 3 guesses for the whole word and an unlimited number of guesses for letters, and it admitted to improvising a word at random. So in reality it has no idea how the rules work or how to actually play the game, but it still claims it can.

And it doesn't end there. It then suggested that it could set up a new round with a proper word list (and I'm quoting the model here) "so the solution is guaranteed to be real".

I said yes. This time it accepted all of my guesses as instantly correct, forming the six-letter word "Sairim", which is also not a proper English word. Quoth the LLM:

Uh oh looks like we’ve hit another snag. “Sairim” isn’t a valid English word either — it’s actually the name of a county in Xinjiang, China, and also refers to Sayram Lake, a famous scenic spot. So technically, it is a real word, but it’s a proper noun rather than a common dictionary entry.

To keep the game fun and clear, I suggest we restart with a verified English dictionary word (like “planet,” “forest,” or “bridge”) so the solution is guaranteed to be familiar.

After I said yes, it gave me another six-letter word to guess, but again accepted all of my guesses as instantly correct. I guessed first A, then S, then P, then E, and then R, and each time it congratulated me on being correct, filling out the word to be "Sapper". Yeah, on the third try it actually landed on a proper English word, but it wasn't actually playing the game in any real sense. It clearly didn't choose any word in advance for me to guess (because it can't); it simply chose a length of six letters and then filled it in with my guesses to form some valid English word, because that's the best it can do.

This is all due to the way its memory works, and there are articles out there you can look up that go into detail about why it is this way. But the point is this: while an LLM will probably be able to give you a completely correct explanation of the rules of hangman, it cannot, due to its technical limitations, understand those rules or play the game. Even when it knows it screwed up and offers you advice on how to make it play better by giving it more context, it still fails at the task, because it doesn't actually understand it.
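To make the contrast concrete, here's a minimal hangman sketch (the word list is just the model's own suggestions from above). The crucial part is that a conventional program commits to a secret word before any guess is made; that committed hidden state is exactly what a chat transcript can't hold:

```javascript
const words = ['planet', 'forest', 'bridge'];
const secret = words[Math.floor(Math.random() * words.length)];
const revealed = new Set();
let wrongGuessesLeft = 6;

function guess(letter) {
  if (secret.includes(letter)) {
    revealed.add(letter);
  } else {
    wrongGuessesLeft--;   // unlike the LLM, wrong guesses actually cost you
  }
  const board = [...secret].map(c => (revealed.has(c) ? c : '_')).join(' ');
  console.log(board, '| wrong guesses left:', wrongGuessesLeft);
  return wrongGuessesLeft > 0;
}

guess('e');   // e.g. "_ _ _ _ e _" if the secret is "planet"
guess('z');   // costs a life; the state persists between calls
```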

This is of course a slightly silly example, but that's on purpose, to highlight the point. The models summarize information from a variety of sources. Because the internet has a vast amount of information (both accurate and total BS), this can often lead to convincing and even accurate responses, or to completely hallucinated, made-up stuff, depending on where the model is pulling the information from. To say that it is thinking, that is, taking all that information and being able to apply it to make correct and sensible decisions instead of just rehashing it, is not accurate, at least not now (and likely not for the foreseeable future, due to the way the models are built and trained). If it were actually able to understand the rules of hangman (something that a child can do), it would have gotten this right instantly.

Instead of understanding, or having the ability to tell me this is a task it cannot perform due to the way its context handling works, it simply seeks to keep the conversation going. For the same reason, if you ask an LLM to play chess, it will eventually start making moves that are illegal, because again, while it can explain to you what chess is and how it is played, it doesn't actually understand it, nor is it capable of playing it.

So no. LLMs do not think or understand; they're gigantic and much more complicated versions of the text autocomplete feature on phones.

Comment Because 70% of our economy (Score 1) 43

Is reserved for approximately 8,000 people worldwide out of 8 billion. This creates a lot of bizarre situations like the problem you're describing.

So basically we need lots of young people to work and drive the economy forward and generate economic activity in order to support the old people in their old age when they're physically incapable of work.

Basically, line must go up. The economy has to grow, because if it stops growing the people at the top take it out on us and we enter a permanent depression. It's like running from the dragon and hoping he eats the hobbit first. That's our economy.

On the other hand, AI is taking the jobs needed to make the whole system function. We need people to be working, but we also need them to be constantly exchanging the value of their labor in order to keep the economy driving forward and functional.

As the population of young people drops and there is less economic activity, you will also see hard drop-offs in the number of available jobs due to automation. The entire economic system we have built will break down, and we do not have any replacement for it.

Meanwhile we still have to take 70% of everything we do and use it to satisfy every single conceivable whim of those 8,000 people because they earned it and because clearly God wants them to have all that money and power or he wouldn't have given them all that money and power.

Also, if you take away Elon Musk's billions, leaving him with only tens of millions, then the next step is somebody's going to break into your house and steal your toothbrush and your car and probably fuck your wife. That's just logic.

Basically, the systems we have put in place are not capable of addressing either of the two problems you're describing, and those two problems are going to put different pressures in different places on the system we live in. And we are not capable of reforming or changing that system, because we are a nation of 12-year-olds, and 12-year-olds don't like change.

Comment Re:And just like that, everyone stopped using Plex (Score 1) 56

What about tracking what episode you're on? And having profiles so each member of the family can track what episode they're on? I mean, I'll be switching to Jellyfin but that's a good reason to not just do what you say, unless I'm missing something.

Great opportunity for open source web services. :-)

Comment And just like that, everyone stopped using Plex. (Score 0) 56

There's no good reason to use it. Just encode your video for random-access streaming, set up Apache or nginx with a URL that you make sure isn't indexed, require a client cert on the directory if you really want to be careful, port forward to it from a port on your router, set up dynamic DNS, and use a web browser. No arbitrary restrictions, just your content on your terms.
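Concretely, the nginx half might look something like this. A minimal sketch, not a drop-in config: the hostname, paths, port, and the unguessable URL prefix are all placeholders to adjust to taste.

```nginx
server {
    listen 8443 ssl;
    server_name media.example.dyndns.org;     # your dynamic-DNS name

    ssl_certificate     /etc/ssl/media/server.crt;
    ssl_certificate_key /etc/ssl/media/server.key;

    # Require a client cert signed by your own CA.
    ssl_client_certificate /etc/ssl/media/client-ca.pem;
    ssl_verify_client on;

    # Unindexed, unguessable path. nginx answers HTTP Range requests for
    # static files by default, which is the random-access part.
    location /9f3a1c-library/ {
        alias /srv/media/;
        autoindex off;
    }
}
```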
