Re: We've seen technological revolutions before...
I look at this through the lens of automation, regardless of the technology.
Automation tends to be successful when the routine parts of a process can be handled by a machine in such a way that the effort of finding and handling exceptions doesn't swamp the productivity gains of the mainline automation. But that's actually quite hard in practice, because it means identifying and cordoning off the parts of a process that are repeatable and where failures can be readily detected, and then creating ways of switching to non-automated processes that are efficient enough to preserve those gains. In environments with physical automation, that's slow, incremental work involving process controls and efficient online-offline handoffs. It isn't just a question of building a machine, but of learning and rehearsing the handoff between automation and people.
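To see how exceptions can eat the gains, here's a back-of-envelope sketch in Python (the function and every number in it are hypothetical, purely to illustrate the arithmetic):

def net_gain_per_item(p_exception, manual_cost, automated_cost, exception_cost):
    # Expected minutes saved per item versus doing everything by hand.
    # p_exception: fraction of items the machine can't handle (hypothetical)
    # manual_cost: minutes of human effort per item with no automation
    # automated_cost: minutes (machine time plus oversight) per automated item
    # exception_cost: minutes to detect a failure and hand it back to a person
    automated = (1 - p_exception) * automated_cost
    exceptions = p_exception * (exception_cost + manual_cost)  # handoff, then redo by hand
    return manual_cost - (automated + exceptions)

print(net_gain_per_item(0.05, 10, 1, 15))  # ~7.8 minutes saved per item
print(net_gain_per_item(0.30, 10, 1, 15))  # ~1.8 minutes saved per item

With a 10-minute manual task, a 1-minute automated path, and a 15-minute detect-and-handoff penalty, the saving is nearly wiped out once roughly a third of items are exceptions, and past about 37% the automation is a net loss. That's why cordoning off the detectable, repeatable work matters so much.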
The theory the current AI companies are advancing is that their LLM-based technologies can somehow magically eliminate that incremental road to automation and replace the varied tasks that people do. I think this is likely to be untrue on several levels.
First, as we know from software development, a lot of the hard work of building a system is deciding what it should do and how it should work. Those kinds of decisions are not particularly amenable to LLMs because they are usually about generating consensus and shared knowledge among people, as well as making intuitive predictions about what will be needed and useful in the future.
Second, the history of routine office work is already littered with attempts at automation. Countless platforms and languages were supposed to automatically filter our email, generate replies, track tasks, and so on. A lot of the use cases where that is straightforward are already covered by more conventional tools (ticketing systems, chatbots, phone trees, web forms, etc.). Sure, there are some use cases where more automation can be done, like generating skeletal or prototype code. But specifying how things should work remains a central job no matter how the code is built.
Third, in human groups and organizations, many decisions are not really "computable". That is, they aren't just some form of inferential or statistical logic mapping from priors to an output decision. Rather, they involve people forming perceptions, views, and feelings, and from those defining an acceptable decision. Human decision-making involves the nervous system and the amygdala, not just cognition. That's not a bug, that's a feature: it keeps us in sync with our agency as living and sentient beings.
This fast and easy road to automation is what AI companies are banking on to increase productivity by amounts that justify their stratospheric valuations. I'd be pretty surprised if LLMs actually enable automation that way rather than the "slow, boring" way.