Comment Re:Europe has itself to blame for this (Score 3, Insightful) 122

Eastern Europe was screaming about how dangerous this was, but they weren't listened to.

One of the most insane things is how, after Russia's surprisingly poor military performance in the Georgian war, the Merkel government was disturbed not that Russia had invaded Georgia, but at the level of disarray in the Russian army, and pursued a deliberate policy of improving the Russian military. They perceived Russia as a bulwark against e.g. Islamic extremism, and as a potential strategic partner. They supported, for example, Rheinmetall building a modern training facility in Russia, and sent trainers to work with the Russian military.

With Georgia I could understand (though adamantly disagreed with) how some dismissed it as a "local conflict", because it could be spun as "Georgia attacking an innocent separatist state and Russia just keeping its alliances". But after 2014 there was no viable spin that could disguise Russia's imperial project. Yet so many kept sticking their fingers in their ears going, "LA LA LA, I CAN'T HEAR YOU!" and pretending we could keep living as we were before. It was delusional and maddening.

The EU has three times Russia's population and an economy an order of magnitude larger. In any normal world, Russia should be terrified of angering Europe, not the other way around. But our petty differences, our shortsightedness, our adamant refusal to believe deterrence is needed, much less to pay to actually deter or even understand what that means... we set ourselves up for this.

And I say this in no way to excuse the US's behavior. The US was doing the same thing as us (distance just rendered Russia less of a US trading partner), and every single president wanted to do a "reset" of relations with Russia, which Russia repeatedly used to weaken western defenses in Europe. And it's one thing for the US to say to Europe "You need to pay more for defense" (which is unarguable), even to set realistic deadlines for getting defense spending up, but it's an entirely different thing to just come in and abandon an ally right in the middle of their deepest security crisis since World War II. It's hard to describe to Americans how betrayed most Europeans feel by America right now. The US organized and built the world order it desired (even the formation of the EU was strongly promoted by the US), and then just ripped it out from under our feet while we're under attack.

A friend once described Europe in the past decades as having been "a kept woman" to America. And indeed, life can be comfortable as a kept woman, and both sides can benefit. America built bases all over Europe to project global power; got access to European militaries for its endeavours; got reliable European military supply chains; and yet remained firmly in control of NATO policy; maintained the world's reserve currency; was in a position where Europe could never stop it from doing things Europeans disliked (for example, invading Iraq); and on and on - while Europe decided that letting the US dominate was worth being able to focus on ourselves. But a kept woman has no real freedom, no real security, and her entire life can come crashing down if she crosses her keeper or he no longer wants her.

Comment Re:Spreadsheets and databases (Score 1) 73

They do. Some people simply don't use them; the disciplined ones use one or more worksheets to store data and refer to it purely internally, while the rest just sort of ad-hoc mix data and formulas.

In some cases a database connection is where the data comes from; but the number of cells grows because it's conceptually easier (and in practice often less opaque, given the ugliness of displaying very large cell contents) to munge on the data step by step rather than trying to ram everything into one transformation.

Coming from the IT side, and having to field questions from the perpetrators of some absolutely hideous Excel sheets from time to time (no, I didn't even know that there was a way of creating a type of embedded image that actually quietly triggers the print spooler subsystem to do something that generates a new image based on the contents of another region of the spreadsheet; still don't know how they did that, but it's objectively depraved), I understand the hate; but I do have to admit that spreadsheets are pretty good for napkin-math, thinking-it-through type processes.

Like when you work it out on paper; you've got your input, then you have a cell with the contents of the first transformation you wanted to make, then the second, then the third, and so on, and at each step you can think "does this value make sense?"

It rapidly gets out of hand in quantity; but as a rapid sketchpad for thinking something through you could do a whole lot worse. It's also tempting (again, tempting down the path of darkness in quantity) for dealing with jobs that need both a bit of string munging and a pretty-printed output.

You send the intern down to storage with a barcode scanner and have them start snagging SNs and MACs and stuff from the shipment of new gear. Turns out various vendors use different prefixes on different barcode values to inform their own ERP/inventory system/warehouse people which of the 5 closely spaced barcodes their scanner hit. And each vendor uses a different set of conventions, and while obvious enough they aren't documented. Ok, no problem; intern comes back with raw list; all the Lenovo SNs get a 'last x characters' substring; all the Cisco MACs get another transform, whatever.

Obviously if it were your inventory/warehouse system you wouldn't be treating the barcode scanner as a raw HID device and doing ad-hoc transformations; there would be a program that automatically uses the prefixes to populate the correct parts of the form. But do you want to stick your head into ERP project hell rather than come up with maybe a dozen lightweight string manipulations? Obviously, you could also do it in your choice of scripting language and iterate through one CSV to create another; but that mostly just conceals what you did from anyone who doesn't use that scripting language, while you can walk basically anyone employable through the logic of the spreadsheet prettifying.
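To make the "dozen lightweight string manipulations" concrete, here's a rough sketch of the kind of per-vendor cleanup described above. The prefixes and substring rules are made-up illustrations (real vendor conventions vary and, as noted, aren't documented), not any actual vendor's scheme:

```python
# Hypothetical cleanup of raw barcode scans: strip vendor-specific
# prefixes to recover clean serial numbers and MAC addresses.
# All prefix conventions below are invented for illustration.

def normalize_scan(raw: str) -> tuple[str, str]:
    """Map one raw barcode value to a (field, cleaned_value) pair."""
    if raw.startswith("11S"):      # e.g. a part-number+serial convention:
        return ("serial", raw[-8:])  # keep only the last 8 characters
    if raw.startswith("S"):        # e.g. plain 'S' + serial
        return ("serial", raw[1:])
    if raw.startswith("M"):        # e.g. 'M' + undelimited MAC address
        mac = raw[1:].upper()
        # reformat AABBCCDDEEFF -> AA:BB:CC:DD:EE:FF
        return ("mac", ":".join(mac[i:i + 2] for i in range(0, 12, 2)))
    return ("unknown", raw)
```

In a spreadsheet the same logic is just a column of `RIGHT()`/`MID()` formulas next to the raw column, which is exactly why anyone employable can audit it at a glance.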

Comment Re:AI detectors remain garbage. (Score 1) 28

They didn't even use a proper image generator - that's clearly the old, crappy ChatGPT-builtin image generator. It's not like it's a useful figure with a few errors - the entire thing is sheer nonsense, and the more you look at it, the worse it gets. And this is Figure 1 in a *paper in Nature*. Just insane.

This problem will decrease with time (here are two infographics from Gemini 3 I made just by pasting in an entire very long thread on Bluesky and asking for infographics, with only a few minor bits of touchup). Gemini successfully condensed a really huge amount of information into infographics, and the only sorts of "errors" were things like, I didn't like the title, a character or two was slightly misshapen, etc. It's to the point that you could paste in entire papers and datasets and get actually useful graphics out, in a nearly-finished or even completely-finished state. But no matter how good the models get, you'll always *have* to look at what you generate to see if it's (A) right, and (B) actually what you wanted.

Comment AI detectors remain garbage. (Score 5, Interesting) 28

At one point last week I pasted the first ~300 words or so of the King James Bible into an AI detector. It told me that over half of it was AI generated.

And seriously, considering some of the god-awful stuff passing peer review in "respectable" journals these days, like a paper in AIP Advances that claims God is a scalar field becoming a featured article, or a paper in Nature whose Figure 1 is an unusually-crappy AI image talking about "Runctitiononal Features", "Medical Fymblal", "1 Tol Line storee", etc... at the very least, getting a second opinion from an AI before approving a paper would be wise.

Comment Re:I thought we were saving the planet? (Score 1) 187

FYI, their statement about Iceland is wrong. BEV sales were:

2019: 1000
2020: 2723
2021: 3777
2022: 5850
2023: 9260
2024 (first year of the "kílómetragjald" and the loss of VAT-free purchases): 2913
2025: 5195

Does this look to anyone here like the changes had no impact? It's a simple equation: if you increase the cost advantage of EVs, you shift more people from ICEs to EVs, and if you decrease it, the opposite happens. If you add a new mileage tax but don't add a new tax to ICE vehicles, you're reducing that cost advantage. And Iceland's mileage tax was quite harsh.

The whole structure of it is nonsensical (they're working on improving it...), and the implementation was so damned buggy that it has, among other things, turned the alerts in my inbox for government documents into spam, as they keep sending "kílómetragjald" notices, and you can't tell from the email (without taking the time to log in) whether it's kílómetragjald spam or something that actually matters. What I mean by the structure is that it's claimed to be about road maintenance, yet passenger cars on non-studded tyres do negligible road wear. Tax vehicles by axle weight to the fourth power times mileage, make them pay for a sticker for the months they want to use studded tyres, and charge flat annual fees (scaled by vehicle cost) for non-maintenance costs. Otherwise, you're inserting severe distortion into the market - transferring money from those who aren't destroying the roads to subsidize those who are, and discouraging the people who aren't destroying the roads from driving to places they want to go (quality of life, economic stimulus, etc).
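The "axle weight to the fourth" rule above is the classic fourth-power law from highway engineering. A quick back-of-envelope (with illustrative axle loads, not actual Icelandic vehicle data) shows why taxing light passenger cars for "road maintenance" is distortionary:

```python
# Fourth-power law: pavement wear scales roughly with (axle load)^4,
# so a heavy truck axle does vastly more damage per km than a car axle.
# Loads below are illustrative round numbers.

def relative_wear(axle_load_tonnes: float, reference_tonnes: float = 1.0) -> float:
    """Wear relative to a reference axle, per the fourth-power law."""
    return (axle_load_tonnes / reference_tonnes) ** 4

car_axle = relative_wear(1.0)     # ~2 t car on two axles -> ~1 t per axle
truck_axle = relative_wear(10.0)  # a heavily loaded 10 t truck axle
print(truck_axle / car_axle)      # -> 10000.0: one truck axle ~ 10,000 car axles
```

Under that exponent, a flat per-kilometre charge on passenger cars is almost entirely unrelated to the maintenance costs they actually impose.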

Comment I hate this cliche. (Score 1, Offtopic) 18

I suspect that it's more symptom than cause, and probably not at the top of the list of causes; but I cannot overstate how much I loathe the hyperbolic use of the term 'unthinkable' in these sorts of situations. Both because it's false; and because it often acquires a sort of implicitly exculpatory implication that is entirely undeserved.

Not only is it 'thinkable'; having something awful happen when you perform a procedure that requires long-term, hardcore immunosuppression and then let the patient fall through the cracks is trivially predictable. It's the expected behavior. Successfully reconnecting a whole ton of little blood vessels and nerves is fairly exotic medicine; predicting that things will go poorly without substantial follow-up is trivial even by washout premed standards.

This isn't to say that it isn't ghastly, or that I could imagine being in that position; but 'unthinkable' is closer to being a claim of unpredictability or unknowability; which is wholly unwarranted. None of this was unthinkable; but nobody really cared to check or wanted to know all that much.

Comment Re:PR article (Score 2) 282

Sure do :) I can provide more if you want, but start there, as it's a good read. Indeed, blind people are much better at understanding the consequences of colours than they are at knowing what colours things are.

Comment Re:Easy Fix... (Score 1) 39

Especially when basically all methods of sabotaging cables (except possibly very near shore) are 'remote'/disposable; if only at the tech level of 'put anchor on rope because water deep'. Nobody is going to give a damn about losing an inert metal chunk.

Reportedly, none of that is public, the business of tapping a fiber line underwater is considerably more fiddly, and enough mines might make that a hassle; but it would also make install and repair far more expensive and probably just theatre when you consider the risk that someone at the telco isn't updating their ASAs.

Comment Re:AI as a sacred prestige competition (Score 2) 26

I think the parent commenter was proposing an analogy to the various temples-overtaken-by-jungle and cathedrals-and-hovels societies; where the competing c-suites of the magnificent seven and aspirants suck our society dry to propitiate the promised machine god.

I have to say; datacenters will not make for terribly impressive ruins compared to historical theological white elephant projects. Truly, the future archeologists will say, this culture placed great value in cost engineered sheds for the shed god.

Comment Re:Air cooling (Score 1) 26

At least for new builds/major conversions; it's often a matter of incentives.

There's certainly some room for shenanigans with power prices; but unless it's an outright subsidy in-kind you normally end up paying something resembling the price an industrial customer would. Water prices, though, vary wildly from basically-free/plunder-the-aquifer-and-keep-what-you-find stuff that was probably a bad idea even when they were farming there a century or two ago; to something that might at least resemble a commercial or residential water bill.

If the purpose is cooling you can (fairly) neatly trade off between paying for it in power and paying for it in water; and when the price differs enormously people usually choose accordingly if they can get away with it. In the really smarmy cases they'll even run one of the power-focused datacenter efficiency metrics and pat themselves on the back for their bleeding edge 'power usage effectiveness'(just don't ask about 'water usage effectiveness').

You can run everything closed loop; either dumping to air or to some large or sufficiently fast moving body of water if available; but the electrical costs will be higher; so you typically have to force people to do that; whether by fiat or by ensuring that the price of water is suitable.

Comment Re:PR article (Score 1) 282

The congenitally blind have never seen colours. Yet in practice, they're nearly as proficient at answering questions about, and reasoning about, colours as the sighted.

One may raise questions about qualia, but the older I get, the weaker the qualia argument gets. I'd argue that I have qualia about abstracts, like "justice". I have a visceral feeling when I see justice and injustice, and experience it; it's highly associative for me. Have I ever touched, heard, smelled, seen, or tasted an object called "justice"? Of course not. But the concept of justice is so connected in my mind to other things that it's very "real", very tangible. If I think about "the colour red", is what I'm experiencing just a wave of associative connection to all the red things I've seen, some of which have strong emotional attachments to them?

What's the qualia of hearing a single guitar string? Could thinking about the sound of "a guitar string" shortly after my first experience with one, when I don't yet have a good associative memory of it, count as qualia? What about when I've heard guitars play many times and now have a solid memory of guitar sounds, and I then think about the sound of a guitar string? What if it's not just a guitar string, but a riff, or a whole song? Do I have qualia associated with *the whole song*? The first time? Or once I know it by heart?

Qualia seems like a flexible thing to me, merely a connection to associative memory. And sorry, I seem to have gotten offtopic in writing this. But to loop back: you don't have to have experienced something to have strong associations with it. Blind people don't learn of colours through seeing them. While there certainly is much to life experiences that we don't write much about (if at all) online, and so one who learned purely from the internet might have a weaker understanding of those things, by and large, our life experiences and the thought traces behind them very much are online. From billions and billions of people, over decades.

Comment Re:PR article (Score 1, Insightful) 282

Language does not exist in a vacuum. It is a result of the thought processes that create it. To create language, particularly about complex topics, you have to be able to recreate the logic, or at least *a* logic, that underlies those topics. You cannot build an LLM from a Markov model. If you could store one state transition probability per unit of Planck space, a different one at every unit of Planck time, across the entire universe, throughout the entire history of the universe, you could only represent the state transition probabilities for the first half of the first sentence of A Tale of Two Cities.
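The back-of-envelope behind that claim: an order-n Markov model over a vocabulary of V tokens needs V**n context states, which outruns any physical storage almost immediately. A rough check, using a typical LLM vocabulary size and generous round figures for the Planck-scale counts:

```python
# How long a context can a word-level Markov model have before its state
# count exceeds (Planck volumes in the observable universe) x (Planck
# times since the Big Bang)? Figures are rough orders of magnitude.

V = 50_000                              # typical LLM vocabulary size
planck_volumes = 10**185                # ~ observable universe / Planck volume
planck_times = 8 * 10**60               # ~ age of universe / Planck time
budget = planck_volumes * planck_times  # generous upper bound on "slots"

n = 1
while V**n < budget:
    n += 1
print(n)  # -> 53: a mere ~53-token context already exceeds the budget
```

Dickens's opening sentence runs well over a hundred words, so "the first half of the first sentence" is about right.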

For LLMs to function, they have to "think", for some definition of thinking. You can debate the terminology, or how closely it matches our thinking, but what it's not doing is some sort of "the most recent states were X, so let's look up some statistical probability Y". Statistics doesn't even enter the system until the final softmax, and even then, only because you have to go from a high-dimensional (latent) space down to a low-dimensional (linguistic) space, so you have to "round" your position to nearby tokens, and there are often many tokens nearby. It turns out that you get the best results if you add some noise into your roundings (indeed, biological neural networks are *extremely* noisy as well).
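That final step can be sketched in a few lines. This is a minimal, generic softmax-and-sample illustration of the "rounding with noise" idea, not any particular model's decoding code:

```python
import math
import random

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Turn per-token scores into a probability distribution.

    Subtracting the max before exponentiating is the standard trick
    for numerical stability; it doesn't change the result.
    """
    m = max(logits)
    exps = [math.exp((x - m) / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs: list[float]) -> int:
    """Pick a token index at random, weighted by probability."""
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Three nearby candidate tokens: sampling (rather than always taking
# the argmax) is exactly the "noise in the rounding" described above.
probs = softmax([2.0, 1.0, 0.1])
```

Everything upstream of `logits` is deterministic latent-space computation; the statistics live entirely in this last projection-and-sample step.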

As for this article, it's just silly. It's a rant based on a single cherry-picked contrarian paper from 2024, and he doesn't even represent it correctly. The paper's core premise is that intelligence is not linguistic - and we've known that for a long time. But LLMs don't operate on language. They operate on a latent space, and are entirely indifferent as to what modality feeds into and out from that latent space. The author takes the paper's further argument that LLMs do not operate in the same way as a human brain, and hallucinates that into "LLMs can't think". He goes from "not the same" to "literally nothing at all". Also, the end of the article isn't about science at all; it's an argument Riley makes from the work of two philosophers, and it's a massive fallacy that misunderstands not only LLMs, but the brain as well (*you* are a next-everything prediction engine; to claim that being a predictive engine means you can't invent is to claim that humans cannot invent). And furthermore, that's Riley's own synthesis, not even a claim by his cited philosophers.

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper, the argument is: language contains an "imprint" of reasoning, but not the full reasoning process - it's a lower-dimensional space than the reasoning itself (nothing controversial there with regards to modern science). Fedorenko argues that this implies the models don't build up a deeper structure of the underlying logic, but only the surface logic, which is a far weaker argument. If the text reads "The odds of a national of Ghana conducting a terrorist attack in Ireland over the next 20 years are approximately...." and it is to continue with a percentage, "surface logic" is not enough for the model to perform well at the task. It's not just "what's the most likely word to come after 'approximately'". Fedorenko then extrapolates his reasoning to conclude that there will be a "cliff of novelty". But this isn't actually supported by the data; novelty metrics continue to rise, with no sign of his supposed "cliff". Fedorenko notes that in many tasks, the surface logic between the model and a human will be identical and indistinguishable - but he expects that to generally fail with deeper tasks of greater complexity. He thinks that LLMs need to change architecture and combine "language models" with a "reasoning model" (ignoring that the language models *are* reasoning - heck, even under his own argument - and that LLMs have crushed the performance of formal symbolic reasoning engines, whose rigidity makes them too inflexible to deal with the real world).

But again, Riley doesn't just take Fedorenko at face value; he runs even further with it. Fedorenko argues that you can actually get quite far just by modeling language. Riley, by contrast, argues - or should I say, next-word predicts with his human brain - that because LLMs are just predicting tokens, they are a "Large Language Mistake" and the bubble will burst. The latter does not follow from the former. Fedorenko's argument is actually that LLMs can substitute for humans in many things - just not everything.
