Comment Re:Very incomplete analysis (Score 1) 57

Well said.

Most use cases of LLMs/GenAI are actually just process automation by another name. The pivot point of nearly all automation efforts is process control, especially in the handoffs between systems or between systems and humans. Because humans are so adept at handling exceptions, most automation projects I've seen deal with only the most common exceptions.

Well-run projects think carefully about how to structure handoffs so that the exceptions don't eat up all the labor saved by automating the base case (which is the easy part). Poorly-run projects focus only on throughput of the base case and leave exception handling for later, resulting in extremely cumbersome situations that either degrade quality or require nearly as much labor as the pre-automation state. I think many enterprises are about to get a crash course in this, which will dramatically affect their labor picture going forward.
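To make the handoff point concrete, here's a toy sketch (Python; every name, the threshold, and the queue itself are made up for illustration) of the structure I mean: the automated path handles the base case it's confident about, and everything else gets escalated with context rather than forced through.

# Toy illustration only: names, threshold, and the queue are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Result:
    value: str
    confidence: float                      # 0.0 .. 1.0

@dataclass
class Pipeline:
    automate: Callable[[str], Result]      # the "base case" automation
    human_queue: list = field(default_factory=list)
    threshold: float = 0.9                 # below this, escalate

    def process(self, item: str) -> Optional[str]:
        result = self.automate(item)
        if result.confidence >= self.threshold:
            return result.value            # cheap, automated path
        # Exception path: hand off with enough context that the human
        # doesn't have to redo the whole case from scratch.
        self.human_queue.append((item, result))
        return None

The economics live in how often process() falls through to the queue and how much context the human gets when it does.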

Another area where the job-loss analysis is pretty thin: the jobs linked to the so-called AI-exposed jobs (e.g., upstream and downstream in the process) are implicitly assumed to stay the same. This is almost certainly false.

One example I know well from healthcare is clinical documentation and payment. There are a bazillion AI companies that claim applying AI to clinical documentation "allows healthcare providers to focus more on clinical tasks". The latter part is mostly marketing fluff, supported by a few trial studies; most of the labor-savings claim is what people hope for or think should happen.

What really happens is that when AI documents something, the provider can code for those services and try to get paid more. That's the quickest way to get an AI rollout to pay for itself. But insurers don't just sit still; they adjust their payment rules and systems to deal with this, and now somebody on the provider side has to deal with THAT. The system has changed, but often toward more complexity rather than less effort.

I've never seen any of these job loss models try to account for that phenomenon.

Comment Re:Why? (Score 1) 52

SpaceX can probably accelerate their flight schedule to accommodate Russian crew needs. The question is whether Russia is able or willing to pay nearly $100m per seat. Their flights on Crew Dragon are currently covered through NASA's seat-exchange program, under which they provide Soyuz flights from this site for US astronauts; they don't actually pony up the cash.

This launch site is also essential to ISS attitude control. Refueling the station's thrusters and holding it steady while the control moment gyroscopes are periodically desaturated requires Progress vehicles launched from there. There isn't currently a backup plan for those services.

Submission + - Swiss Illegal Cryptocurrency Mixing Service Shut Down (europa.eu)

krouic writes: From 24 to 28 November 2025, Europol supported an action week conducted by law enforcement authorities from Switzerland and Germany in Zurich, Switzerland. The operation focused on taking down the illegal cryptocurrency mixing service ‘Cryptomixer’, which is suspected of facilitating cybercrime and money laundering.
Three servers were seized in Switzerland, along with the cryptomixer.io domain. The operation resulted in the confiscation of over 12 terabytes of data and more than EUR 25 million worth of the cryptocurrency Bitcoin. After the illegal service was taken over and shut down, law enforcement placed a seizure banner on the website.

Comment Re:What's old is new again (Score 1) 41

You cannot bypass solving the Navier-Stokes equations with transformers. You will, of course, get some predicted flows from a black-box model, and you can, if you choose, claim that prediction accuracy is close enough on 85% of the random samples from your test data, but that will not get you new propulsion physics.
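For what it's worth, that "85% of samples" style of claim usually boils down to something like this (a toy sketch; a polynomial fit stands in for the transformer surrogate, and the data and tolerance are made up):

# Toy sketch: a cheap surrogate fitted to "simulation" data, then scored by
# the fraction of held-out samples it predicts within some tolerance.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = np.sin(6 * x) + 0.05 * rng.normal(size=x.size)   # stand-in for solver output

coeffs = np.polyfit(x[:400], y[:400], deg=7)          # the "black box"
pred = np.polyval(coeffs, x[400:])

tolerance = 0.1
hit_rate = np.mean(np.abs(pred - y[400:]) < tolerance)
print(f"{hit_rate:.0%} of test samples within tolerance")

A high hit rate only says the surrogate reproduces data it was trained near. It says nothing about regimes the solver (or reality) was never sampled in, which is exactly where new propulsion physics would have to come from.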

There's a serious danger today that a lot of the science which relies on simulated outcomes is subtly wrong in ways that can't be rejected outright in peer review but will take many years to discover.

Comment Re:What is thinking? (Score 1) 289

The quote you provided didn't say LLM; it said neural network. Neural networks, like any model, can interpolate or extrapolate, depending on whether the inference point lies between training samples or not.

LLMs are neural networks. You seem to be referring to a particular method of producing output, where they predict the next token based on their conditioning and their previously generated text. It's true in the simplest sense that they're extrapolating, and reasonable for pure LLMs, but probably not really true for the larger systems that use LLMs as their input and output stages. The models have complex internal states that have been shown to represent concepts larger than just the next token.
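The interpolation/extrapolation distinction above is easy to see with any fitted model (a small scikit-learn MLP here purely as a stand-in; the whole setup is illustrative, nothing to do with production LLMs):

# Illustrative only: a tiny network fitted on x in [0, 6], then queried
# inside and outside that range. In-range error stays small; out-of-range
# error generally does not.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 6, 2000).reshape(-1, 1)
y_train = np.sin(x_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(x_train, y_train)

x_interp = np.array([[1.0], [3.0], [5.0]])    # between training samples
x_extrap = np.array([[8.0], [10.0], [12.0]])  # outside the training range

for name, pts in [("interpolation", x_interp), ("extrapolation", x_extrap)]:
    err = np.abs(model.predict(pts) - np.sin(pts).ravel())
    print(name, np.round(err, 3))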

Submission + - Oracle's credit status under pressure (latimes.com)

Bruce66423 writes: 'A gauge of risk on Oracle Corp.’s debt reached a three-year high in November, and things are only going to get worse in 2026 unless the database giant is able to assuage investor anxiety about a massive artificial intelligence spending spree, according to Morgan Stanley.'

First sign of the boom's inevitable collapse?

Comment It's not a dome (Score 4, Interesting) 33

No amount of AI can make up for having too few interceptors. The entire dome metaphor is stupid and misleading. Layered air defense systems have been around forever; slapping the AI and dome labels on them doesn't change that one bit. It's just marketing.

The only genuinely new capability in air defense is laser-based systems such as the British DragonFire. But those aren't a dome either.

https://en.wikipedia.org/wiki/...

Comment p-value hacking (Score 1) 61

Sometimes, social scientists under pressure to publish anything, no matter what, to increase their publication count will propose stupid experiments that don't cost much to run, don't measure any intrinsic human behaviour, and can be modified trivially to generate alternative papers. The trick is to brainstorm and try out a lot of these until a p-value finally fits.
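The mechanics are easy to demonstrate with a toy simulation (pure noise, not any real study): run enough null experiments and one of them will "work".

# Toy simulation of the garden of forking paths: 20 "experiments" on pure
# noise, keep the one with the smallest p-value, write that one up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
p_values = []
for _ in range(20):
    group_a = rng.normal(size=30)    # no real effect in either group
    group_b = rng.normal(size=30)
    p_values.append(stats.ttest_ind(group_a, group_b).pvalue)

print(f"smallest p-value out of 20 null experiments: {min(p_values):.3f}")
# With 20 tries at alpha = 0.05 you expect at least one "significant" result
# about 64% of the time (1 - 0.95**20), even though nothing is there.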

Comment Re:Wrong question. (Score 1) 185

Investment is a tricky one.

I'd say that learning how to learn is probably the single most valuable part of any degree, and anything that has any business calling itself a degree will make this a key aspect. That alone makes a degree a good investment, as most people simply don't know how to learn. They don't know where to look, how to look, how to tell what's useful, how to connect disparate research into something that could be used in a specific application, etc.

The actual specifics tend to be less important, as degree courses are well behind the cutting edge and are necessarily grossly simplified; it's still really only crude foundational knowledge at that point. Students at the undergraduate level simply don't know enough to know the truly interesting stuff.

And this is where it gets tricky, because an undergraduate 4-year degree is aimed at producing thinkers. Those who want to do just the truly depressingly stupid stuff can get away with the 2-year courses. You do 4 years if you are actually serious about understanding. And, in all honesty, very few companies want entry-level hires who are competent at the craft; they want people who are fast and mindless. Nobody puts in four years of network theory or (Valhalla forbid) statistics for the purpose of being mindless. Not unless the stats destroyed their brain - which, to be honest, does happen.

The humanities don't make things easier. There would be a LOT of benefit in having technical documentation written by folk with some sort of command of the language they're using. Half the time, I'd settle for stuff written by people who are merely passing acquaintances of the language. Vague awareness that a language exists would sometimes be an improvement. But that requires people take a 2x4 to the usual cultural bias that you cannot be good at STEM and the arts at the same time. (It's a particularly odd bias, too, given how highly Leonardo is esteemed and how neoclassical universities are at or near the top in every country.)

So, yes, I'll agree that a lot of degrees are useless for gaining employment and a lot are useless for actually doing the work, but the overlap between the two is vague at times.

Comment Re:Directly monitored switches? (Score 1) 53

There is a possibility of a short circuit causing an engine shutdown. Apparently there is a known fault whereby a short can result in the FADEC "fail-safing" to engine shutdown, and this is one of the competing theories, as the wiring apparently runs near a number of points in the aircraft where water is present (which is a really odd design choice).

Now, I'm not going to sit here and tell you that (a) the wiring actually runs there (the wiring block diagrams are easy to find, but block diagrams don't show actual wiring paths), (b) that there is anything to indicate that water could reach such wiring in a way that could cause a short, or (c) that it actually did so. I don't have that kind of information.

All I can tell you, at this point, is that aviation experts are saying that a short at such a location would cause an engine shutdown and that Boeing was aware of this risk.

I will leave it to the experts to debate why they're using electrical signalling (it's slower than fibre, heavier than fibre, can corrode, and can short) and whether the FADEC fail-safes are all that safe or just plain stupid. For one thing, they get paid to shout at each other, and they actually know which specifics to shout about.

But if the claims are remotely accurate, then there were a number of well-known flaws in the design, and I'm sure Boeing will just love answering questions about why these weren't addressed. The problem, of course, is that none of us knows which of said claims are actually accurate, and that makes it easy for air crash investigators to go easy on manufacturers.
