
Comment Re:We've seen technological revolutions before.... (Score 1) 73

I look at this through the lens of automation, regardless of the technology.
Automation tends to be successful when routine aspects of a process can be handled by a machine in such a way that the effort of finding and handling exceptions doesn't swamp the productivity gains of the mainline automation. But that's actually quite hard in practice, because it means that one has to identify and cordon off the parts of a process that are repeatable and where failures can be readily detected, and then create ways of switching to non-automated processes that are efficient enough not to swamp the gains of the automated work. In environments with physical automation, that's slow, incremental work involving developing process controls and efficient online-offline handoffs. It isn't just a question of building a machine, but of learning and rehearsing the process between automation and people.
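
To make that balance concrete, here's a back-of-the-envelope model (every number below is invented for illustration):

    # Back-of-the-envelope automation payoff model; all numbers are made up.
    # Effective time per item = automated time + (exception rate * exception cost).
    manual_minutes = 10.0       # a person handling one item end to end
    automated_minutes = 1.0     # the machine handling one item
    exception_rate = 0.15       # fraction of items the machine gets wrong
    exception_minutes = 45.0    # detecting, handing off, and redoing a failure

    effective = automated_minutes + exception_rate * exception_minutes
    print(f"effective: {effective:.2f} min/item vs manual: {manual_minutes} min/item")
    # 1.0 + 0.15 * 45 = 7.75; barely better than manual, and a small rise in
    # the exception rate or detection cost erases the gain entirely.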

The theory the current AI companies are advancing is that their LLM-based technologies can somehow magically eliminate that incremental road to automation and replace the varied tasks that people do. I think this is likely to be untrue on a lot of levels.

First, as we know from software development, a lot of the hard work of building a system is deciding what it should do and how it should work. Those kinds of decisions are not particularly amenable to LLMs because they are usually about generating consensus and shared knowledge among people, as well as making intuitive predictions about what will be needed and useful in the future.

Second, the history of automating routine office work is already littered with various forms of automation. Countless platforms and languages were supposed to automatically filter our email, generate replies, track tasks, etc. A lot of the use cases where that is straightforward are already automated by more conventional tools (ticketing systems, chatbots, phone trees, web forms, etc). Sure, there are some use cases where more automation can be done, like creating skeletal code or prototype code. But specifying how things work remains a central job no matter how the code is built.

Third, in human groups and organizations, many decisions are not really "computable". That is, they aren't just some form of inferential or statistical logic mapping from the priors to the output decision. Rather, they involve people forming perceptions, views, and feelings, and from those defining an acceptable decision. Human decision-making involves the nervous system and the amygdala, not just cognition. That's not a bug, that's a feature - it keeps us in sync with our agency as living and sentient beings.

The fast and easy road to automation is what AI companies are banking on to increase productivity by the amounts that justify the stratospheric valuations. I'd be pretty surprised if the LLMs actually enable automation in this way rather than the "slow boring" way.

Comment Re:"Profit" on one side of the scale... (Score 1) 64

Is there a word that combines laughable and infuriating?

Because this is both. The fact that MetaZuck has the gall to say openly that they can be trusted to find a "balance" around privacy, when their business model is surveillance, makes me want to scream. And then, when I'm done screaming, to laugh.

The only glimmer of hope is that the downsides of this will probably show up so quickly, and they will respond with the usual mealy-mouthed platitudes and lies, that it may finally force some sort of real oversight of their data collection and sales practices. Things I expect we will see soon include:

  • Crimes and murders recorded both intentionally and inadvertently
  • Corporate, government, and military espionage
  • Non-consensual sexual recordings
  • Illegal behavior with, by, and involving minors

Yes, much of this could have been recorded with a phone, but with these glasses, determining whether someone is recording will be effectively impossible. And because of Facebook's business model, they will always want to record as much as possible.

Buckle up.

Comment The right term is "free-riding" (Score 2) 21

Crypto is free-riding on the banking system.

Banks do an awful lot more than just put money in accounts and move it around. They are responsible for detecting fraud, mediating disputes over claimed funds, and collaborating with law enforcement to prevent illegal activities, as well as engaging in well-understood and sound lending and risk management practices. They also provide separation between financial activities like storing money, lending, and investing. These are all critical functions that ensure regular people can count on stable and safe vehicles for storing and using their money. The banks' failure in these functions in the 1920s was a major cause of the Great Depression, and the reason we brought the industry into the bargain of government guarantees in exchange for regulations ensuring stability and soundness.

Do they succeed at all of these all of the time, or even as much as they should? Doubtless they do not.

But crypto provides none of these functions, and in fact depends on them. When people want real money from the real financial system, they get it out of crypto and leverage everything the banking system provides. Crypto wants to free-ride on the whole system, offering the investment and profit upsides without the financial system responsibilities. Pure leechery.

"Just be a bank" is 100% correct.

Comment Re:Bull (Score 1) 48

Super interesting explanation, thanks.

I have heard of the Jevons Paradox in cases like the introduction of LED lights, which did indeed create an explosion of lighting use everywhere. I guess I've always thought of this in terms of latent demand (people actually wanted to light up more things than they could afford to). The way you're describing it sounds more like demand creation (people didn't even think about all the things they could do with lights until LEDs made them incredibly affordable).

I think in this case both are operative. I certainly agree that AI has not, is not, and IMHO will not cause software development costs to fall the way LEDs made lighting costs fall. I'm also not convinced that, even if it could, the latent demand is there.

That's because the primary use case the AI bulls envision is still a variation of process automation. The success of process automation hinges on the balance between reducing costs on highly automatable steps and the costs of identifying exceptions, handing them off to non-automated processes, and then re-integrating all that into the mainline workflow. A lot of knowledge worker stuff looks automatable at first blush, but when one starts examining the exceptions and handoffs, it gets a lot messier and more expensive.
The technology to do email filtering and automated replies has been around for at least 20 years. The problem is that most of the time the discussion in an email requires thought, action or accountability that is not obvious. I struggle to see how LLMs can overcome that.
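
To underline how old this tech is, the core of a rule-based mail filter fits in a few lines; procmail was doing this sort of thing in the 1990s. A minimal sketch (the rules and addresses below are invented for illustration):

    # Minimal rule-based mail routing; rules and addresses are hypothetical.
    def route(message):
        sender = message["from"].lower()
        subject = message["subject"].lower()
        if "unsubscribe" in subject or "newsletter" in subject:
            return "bulk"
        if sender.endswith("@example-vendor.com"):
            return "vendors"
        if "invoice" in subject:
            return "accounting"
        return "inbox"  # the rest needs thought and accountability no rule can supply

    msg = {"from": "billing@example-vendor.com", "subject": "Invoice 123"}
    print(route(msg))  # vendors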

Comment Who is going to listen to all those podcasts? (Score 2) 20

I won't repeat the excellent "who asked for this?" comments, but one corollary worth discussing is the supply-and-demand aspect of "summarize this into a blog post or podcast".
If Adobe (and Microsoft and everyone else, AFAICT) make it easy to turn any document into a podcast, the natural result is that there will be a ton more podcast audio and video posted online. Which means that getting someone to actually care about your podcast will get even harder.
We're already seeing AI slop take over the YouTubes and Facebooks. While this isn't quite the same as Shrimp Jesus, it's still low-effort stuff that adds to the noise and makes the signal harder to find.
I imagine Microsoft and Adobe would shrug and say "not my problem", but it sure seems to me like it makes features like this a lot less useful.

Comment Re:Climate Change (Score 1) 203

Actually, you'd be surprised. Even though these people are smart, many of them are incredibly blinkered. They think that because they're smart, when they get excited about some theory it must be true, rather than recognizing that they're just humans trying to figure out a big, complicated, messy world.

Doubleplusgood if the theory accords with their desires and biases.

And I certainly agree that they also are perfectly willing to overlook damage they are doing, or find some way to justify it.

Comment Re:Bull (Score 1) 48

Not sure I totally follow the Jevons Paradox argument with respect to software engineers, but I agree that demand has either dropped or is not growing.

And because so many CEOs, including Benioff, went all in on how much AI was going to deliver, they have to claim that it is in fact delivering.

This statement just combines those two problems into one neat, empty press quote.

Comment Re:Very incomplete analysis (Score 1) 59

Well said.

Most use cases of LLMs/GenAI are actually just process automation by another name. The pivot point of nearly all automation efforts is process control, especially in the handoffs between systems or between systems and humans. Because humans are so adept at handling exceptions, most automation projects I've seen deal with only the most common exceptions.

Well-run projects think carefully about how to structure handoffs so that the exceptions don't eat up all the labor saved by automating the base case (which is the easy part). Poorly-run projects just focus on throughput of the base case and leave exception handling for later, resulting in extremely cumbersome situations that either degrade quality or require nearly as much labor as the pre-automation state.

I think many enterprises are about to get a crash course in this, which will dramatically affect how their labor picture looks going forward.

Another area where the job loss analysis is pretty thin is that it implicitly assumes the jobs linked to the so-called AI-exposed jobs (e.g. upstream and downstream in the process) stay the same. This is almost certainly false.

One example I know well from healthcare is clinical documentation and payment. There are a bazillion AI companies who claim that applying AI to clinical documentation "allows healthcare providers to focus more on clinical tasks". The latter part is mostly marketing fluff, supported by a few trial studies. Most of the labor-saving assertion is what people hope for or think should happen.

What really happens is that when AI documents something, the provider can code for those services and try to get paid more. That's the quickest way to get an AI rollout to pay for itself. But insurers don't just sit still, they adjust their payment rules and systems to deal with this, and now somebody on the provider side has to deal with THAT. The system has changed, but often toward more complexity rather than less effort.

I've never seen any of these job loss models try to account for that phenomenon.

Comment Re:Current LLM's (Score 1) 211

Yes, exactly.

If you want to automate something, the automation has to be not only faster per unit task or output, but it also has to make up for the extra time of checking or re-doing the work when the automated way failed. To do that, you usually need to constrain the problem to the parts where the automated approach will succeed nearly always and where failures can be identified and mitigated quickly. That requires building a bunch of process oversight machinery, which in turn requires a big investment in instrumenting the current and future process to identify the exceptions and handle them correctly before failures move downstream and become much harder to address.
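
A hedged sketch of what that constraining looks like in code: gate the automated path behind a confidence check and queue everything else for a person. The threshold and classifier below are hypothetical stand-ins, not any particular product's API:

    # Confidence-gated automation with a human fallback; classify() and the
    # threshold are invented for illustration.
    CONFIDENCE_THRESHOLD = 0.95  # tuning this IS the hard process work

    def handle(item, classify, human_queue):
        label, confidence = classify(item)
        if confidence >= CONFIDENCE_THRESHOLD:
            return label              # mainline automated path
        human_queue.append(item)      # exception: hand off to a person
        return None

    def toy_classify(item):
        return ("ok", 0.99 if len(item) < 5 else 0.5)  # pretend model

    queue = []
    print(handle("abc", toy_classify, queue))          # "ok": automated
    print(handle("a long item", toy_classify, queue))  # None: queued for review
    print(queue)                                       # ['a long item']

Everything that lands in that queue costs the handoff, the human review, and the re-integration - exactly the instrumentation described above.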
Additionally, work that has a lot of unpredictability, that requires persuasion or consensus (such as defining what problem to solve), or that has no pre-defined correct future state, only a series of choices and murky outcomes, is just hard to automate, period.

LLMs not only have regular failures, they have highly unpredictable failures. Yet they're being sold as though they can automate anything.

The reason the "agentic OS" stuff will fail is the same reason we didn't automate away our daily work using VBScript - the automation will be clunkier and more annoying than just doing the steps on our own.

Comment Re: Clippy on steroids (Score 2) 26

No kidding. I don't know if you've ever tried using Explorer to search files in a directory for a filename, but it's unusable.

Everything from Void Tools does it in milliseconds. It does exactly what you'd expect - builds a list of filenames and searches them.
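
For the curious, the naive version of that approach is only a few lines (this sketch walks the directory tree; my understanding is Everything gets its speed by reading the NTFS master file table instead):

    # Naive "build a list of filenames and search them" sketch.
    import os

    def build_index(root):
        index = []
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                index.append(os.path.join(dirpath, name))
        return index

    def search(index, needle):
        needle = needle.lower()
        return [p for p in index if needle in os.path.basename(p).lower()]

    index = build_index(".")
    print(search(index, "readme"))  # matches names only, never file contents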

AFAICT there is nothing you can do in Explorer to make it only search the filenames - apparently it's necessary to search the web, the registry and everything else to find files by filename.
Can't wait for the agentic AI solution to ask Copilot what to do as well...

Comment Re:Damning (Score 2) 65

So true. Sadly, these days the market can stay irrational almost indefinitely. The factors that helped constrain this irrationality - actual government oversight and discipline of equities and debt, willingness to let large corporations fail, and investor discipline in discerning real growth from financial games - have all been eroded.

I have no idea when valuations will fall, and I wouldn't want to bet on a market that has no basis in fundamentals at all.

Comment Re:Fixed that for ya (Score 1) 98

HR often has an Orwellian aspect to its communication. They say things in a way that sounds like they are there to help you, but they are really there to gatekeep. Not everyone can have the salary, promotion, office, etc. that they want, and HR is there to control those things and minimize the company's legal exposure in doing so. The doublespeak and gatekeeping make them incredibly frustrating to deal with.

On top of that they also know a lot of private info, from salary to disciplinary actions to disputes they got involved in, so they're often in a position of quite a lot of leverage.

Comment Tyler Cowen is an AI fanboi (Score 5, Insightful) 69

He is mostly writing for attention *now*, nothing to do with immortality.

The whole premise is ridiculous, like SEO slop dressed up as something intellectual. Odds are high that OpenAI's "authoritativeness" is just Google PageRank under another name. That means it will move as traffic moves, or whenever one of them changes the rules.
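
If it really is PageRank-style link analysis, the score is just an eigenvector of the link graph, so it shifts whenever the links or the damping rules change. A toy power-iteration sketch on an invented three-page graph:

    # Toy PageRank via power iteration; the graph is invented.
    damping = 0.85
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}

    for _ in range(50):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for src, outs in links.items():
            for dst in outs:
                new[dst] += damping * rank[src] / len(outs)
        rank = new

    print(rank)  # change one link or the damping and the ordering moves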

If you want to write for immortality, figure out something to say that is meaningful across human lifetimes.

That's pretty hard to do, which is why only a few works become and stay "classics". The way to even have a shot is not internet clout-seeking; it is true thought and creativity.
