
Comment Re:Very incomplete analysis (Score 1) 57

Well said.

Most use cases of LLMs/GenAI are actually just process automation by another name. The pivot point of nearly all automation efforts is process control, especially in the handoffs between systems or between systems and humans. Because humans are so adept at handling exceptions, most automation projects I've seen deal with only the most common exceptions. Well-run projects think carefully about how to structure handoffs such that the exceptions don't eat up all the labor saved by automating the base case (which is easy to do). Poorly-run projects just focus on throughput of the base case, and leave exception handling for later, resulting in extremely cumbersome situations that either degrade quality or require nearly as much labor to handle as the pre-automation state. I think many enterprises are about to get a crash course in this, which will dramatically affect how their labor picture looks going forward.

Another area where the job loss analysis is pretty thin: jobs linked to the so-called AI-exposed jobs (e.g., upstream and downstream in the process) are implicitly assumed to stay the same. This is almost certainly false.

One example I know well from healthcare is clinical documentation and payment. There are a bazillion AI companies claiming that applying AI to clinical documentation "allows healthcare providers to focus more on clinical tasks". The latter part is mostly marketing fluff, supported by a few trial studies. Most of the labor-saving claim is just what people hope for, or think should happen.

What really happens is that when AI documents something, the provider can code for those services and try to get paid more. That's the quickest way to get an AI rollout to pay for itself. But insurers don't just sit still; they adjust their payment rules and systems to deal with this, and now somebody on the provider side has to deal with THAT. The system has changed, but often toward more complexity rather than less effort.

I've never seen any of these job loss models try to account for that phenomenon.

Submission + - Poll (highspeedinternet.com)

olddoc writes: How many non-satellite options do you have for high speed internet where you live?
0, 1, 2, 3 or more.

[ignore the URL I needed one in order to submit. Also, sorry for the format for a poll submission. Hopefully a human figures this out and likes the idea for a poll]

Comment Re: I wonder (Score 3, Informative) 11

All of the headline changes go in during a two-week window at the start of the cycle, having been developed previously. Several people write articles during that window about what got merged, so the list is already known when the release actually comes out two months later. (That two-month period is used for testing in more unusual situations and checking for incompatibilities among the set of changes that got merged for the cycle.)

So this article is really reporting that two compact weeks of merge decisions in early October are now officially considered tested and ready; the article itself was written and checked over a while ago.

The part that's harder to track is ongoing development work, which happens continuously without a set schedule, but it happens in separate trees and only goes into the official tree when it's complete, has been reviewed, and has gone through various testing in systems managed by kernel developers. All of the work described here was done before 6.17 was released, and developed during several releases before that, but it didn't need to affect Linus's tree until he decided it would land in 6.18.

Comment Re:Wrong question. (Score 1) 188

Investment is a tricky one.

I'd say that learning how to learn is probably the single most valuable part of any degree, and anything that has any business calling itself a degree will make this a key aspect. That alone makes a degree a good investment, as most people simply don't know how. They don't know where to look, how to look, how to tell what's useful, how to connect disparate research into something that could be used in a specific application, etc.

The actual specifics tend to be less important, as degree courses are well behind the cutting edge and are necessarily grossly simplified, because it's still really only crude foundational knowledge at this point. Students at undergraduate level simply don't know enough to know the truly interesting stuff.

And this is where it gets tricky. An undergraduate 4-year degree is aimed at producing thinkers. Those who want to do just the truly depressingly stupid stuff can get away with the 2-year courses. You do 4 years if you are actually serious about understanding. And, in all honesty, very few companies want entry-level hires who are competent at the craft; they want people who are fast and mindless. Nobody puts in four years of network theory or (Valhalla forbid) statistics for the purpose of being mindless. Not unless the stats destroyed their brain - which, to be honest, does happen.

Humanities does not make things easier. There would be a LOT of benefit in technical documentation being written by folk who had some sort of command of the language they were using. Half the time, I'd accept stuff written by people who are merely passing acquaintances of the language. Vague awareness that there is a language would sometimes be an improvement. But that requires that people take a 2x4 to the usual cultural bias that you cannot be good at STEM and arts at the same time. (It's a particularly odd cultural bias, too, given the high esteem in which Leonardo is held and how neoclassical universities are either top or near-top in every country.)

So, yes, I'll agree a lot of degrees are useless for gaining employment and a lot of degrees are useless for actually doing the work, but the overlap between those two groups is vague at times.

Comment Re:Directly monitored switches? (Score 1) 54

There is a possibility of a short-circuit causing an engine shutdown. Apparently, there is a known fault whereby a short can result in the FADEC "fail-safing" to engine shutdown, and this is one of the competing theories, as the wiring apparently runs near a number of points in the aircraft where water is present (which is a really odd design choice).

Now, I'm not going to sit here and tell you that (a) the wiring actually runs there (the wiring block diagrams are easy to find, but block diagrams don't show actual wiring paths), (b) that there is anything to indicate that water could reach such wiring in a way that could cause a short, or (c) that it actually did so. I don't have that kind of information.

All I can tell you, at this point, is that aviation experts are saying that a short at such a location would cause an engine shutdown and that Boeing was aware of this risk.

I will leave it to the experts to debate why they're using electrical signalling (it's slower than fibre, heavier than fibre, can corrode, and can short) and whether the FADEC fail-safes are all that safe or just plain stupid. For a start, they get paid to shout at each other, and they actually know what specifics to shout at each other about.

But, if the claims are remotely accurate, then there were a number of well-known flaws in the design, and I'm sure Boeing will just love to answer questions on why these weren't addressed. The problem, of course, is that none of us knows which of said claims are indeed remotely accurate, and that makes it easy for air crash investigators to go easy on manufacturers.

User Journal

Journal Journal: Audio processing and implications 1

Just as a thought experiment, I wondered just how sophisticated a sound engineering system someone like Delia Derbyshire could have had in 1964, and so set out to design one using nothing but the materials, components, and knowledge available at the time. In terms of sound quality, you could have matched anything produced in the early-to-mid 1980s. In terms of processing sophistication, you could have matched anything produced in the early 2000s. (What I came up with would take a large comple

Comment Re:Don't blame the pilot prematurely (Score 4, Insightful) 54

It's far from indisputable. Indeed, it's hotly disputed within the aviation industry. That does NOT mean that it was a short-circuit (although that is a theory that is under investigation), it merely means that "indisputable" is not the correct term to use here. You can argue probabilities or reasonableness, but you CANNOT argue "indisputable" when specialists in the field in question say that it is, in fact, disputed.

If you were to argue that the most probable cause was manual, then I think I could accept that. If you were to argue that Occam's Razor required that this be considered H0 and therefore a theory that must be falsified before others are considered, I'd not be quite so comfortable but would accept that you've got to have some sort of rigorous methodology and that's probably the sensible one.

But "indisputable"? No, we are not at that stage yet. We might reach that stage, but we're not there yet.

Comment Re: Single-region deployments by regulated industr (Score 2) 25

They generally use a primary and standby system, just because it's a lot harder to avoid consistency problems with multiple primaries. This means that you need to direct traffic to the current primary, and redirect it to a standby when necessary, which is fine except that the system you're switching away from and the configuration interface for your DNS provider are both in us-east-1, because everything normally is. That's why they're looking for the ability to make a different region primary specifically during an AWS outage.
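As a toy illustration of that trap (the endpoints and the naive health check below are hypothetical, not real AWS tooling): the failover logic itself is simple, but note that this code has to run *somewhere*, and if that somewhere, or the DNS control plane it would update, lives in the failed region, the standby never gets promoted.

```python
# Minimal primary/standby selection sketch. Endpoints are made up;
# a real deployment would do this via the DNS provider's health
# checks, not an ad-hoc script.
import urllib.request

ENDPOINTS = [
    "https://primary.us-east-1.example.com/health",   # hypothetical
    "https://standby.eu-west-1.example.com/health",   # hypothetical
]

def healthy(url, timeout=2.0):
    """Naive liveness probe: HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_endpoint(checks=healthy):
    """Return the first healthy endpoint, in priority order."""
    for url in ENDPOINTS:
        if checks(url):
            return url
    raise RuntimeError("no healthy region available")
```

The `checks` parameter is injectable so the decision logic can be exercised without the network; the point stands that whatever executes `pick_endpoint` is itself a single point of failure.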

Comment Let's Go to Science-Hating Slashdot (Score -1) 113

And see what the Pokemon shirt crowd thinks: Aww, too bad we don't get to talk about the science part because Slashdot is still crying about Trump.

Let's take all that money we were going to use to do something scientific and instead give it to the 500 pound unemployed single mom with nine kids and a $300 manicure so she can buy Oreos and bongs with it.

Comment Re:Current LLM's (Score 1) 211

Yes, exactly.

If you want to automate something, the automation has to not only be faster per unit task or output, but also make up for the extra time spent checking or re-doing work when the automated way failed. To do that, you usually need to constrain the problem to the parts where the automated approach will succeed nearly always and where failures can be identified and mitigated quickly. That requires building a bunch of process oversight machinery, which in turn requires a big investment in instrumenting the current and future process to identify the exceptions and handle them correctly before failures move downstream and become much harder to address.
Additionally, work outputs that have a lot of unpredictability, or require persuasion or consensus (such as defining what problem to solve), or situations where there's no pre-defined correct future state, only a series of choices and murky outcomes, are just hard to automate period.
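The break-even point described above can be made concrete with a toy expected-value calculation (the function and every number in it are hypothetical, purely to show how failure rates eat the savings):

```python
# Toy break-even model for automation: expected minutes saved per
# task once review and rework of failures are counted.

def net_time_saved(manual_min, auto_min, review_min,
                   failure_rate, rework_min):
    """Minutes saved per task by automating.

    manual_min   - human time to do the task by hand
    auto_min     - machine time per automated task
    review_min   - human time to check every automated output
    failure_rate - fraction of tasks the automation gets wrong
    rework_min   - human time to fix one failed task
    """
    expected_auto = auto_min + review_min + failure_rate * rework_min
    return manual_min - expected_auto  # positive => automation wins

# 10 min manually; automation takes 1 min plus 2 min of review,
# fails 10% of the time at 15 min of rework per failure:
print(net_time_saved(10, 1, 2, 0.10, 15))  # 5.5 min saved per task

# Same setup with a 40% failure rate: the exceptions eat the gain.
print(net_time_saved(10, 1, 2, 0.40, 15))  # 1.0 min saved per task
```

The rework term grows linearly with the failure rate, which is why unpredictable failure modes (the LLM case below) are so corrosive to the business case.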

LLMs not only have regular failures, they have highly unpredictable failures. Yet they're being sold as though they can automate anything.

The reason the "agentic OS" stuff will fail is the same reason we didn't automate away our daily work using VBScript - the automation will be clunkier and more annoying than just doing the steps on our own.
