Comment Re:People Seem to Forget Problems Existed Pre-AI (Score 1) 92
By employing a load of rats. How did you do it?
Indeed. All this instead of doing proper testing, good architecture, and careful coding. MS has no long-term future.
Throwing out slide rules was a pretty expensive mistake. As a competent slide rule user, you do not make errors that are off by orders of magnitude. As a calculator user, that is a main risk. That does not mean you always have to use a slide rule, but if it were still taught, you would have a fast way to check calculations with a different tool and could stay in practice with very little effort.
Indeed. Once again, non-tech personnel think they know how tech works and can make competent decisions about it. All that shows is that software engineering is a very immature discipline and that the "managers" are still (as they always were) generally really bad at their jobs. Imagine a "manager" telling a construction engineer that a bridge will definitely take a certain load when the engineer knows that is not true. What would happen is that the engineer escalates or quits. Non-tech personnel cannot make competent engineering decisions. The only problem is that for software, this has not been established well enough, because the discipline is too young and the managers are clueless. But the cost of bad software is rising and slowly getting unsustainable. LLMs will accelerate that.
And in actual reality, LLMs cannot do "requirements compilation". That one requires General Intelligence.
That is really nonsense. With actual intelligence, you get better at things and the tech debt gets smaller. With code reviews, you evaluate not only the code but the coder. Not all juniors turn into competent coders, and you steer those onto other paths.
None of that works for LLMs.
Indeed. As you get better as a coder, debugging may get harder but you need far less of it. LLMs killed that and, on top of that, produce "review resistant" code. I expect we will see a lot of LLM-caused burnouts in the next few years and that will reduce the number of desperately needed good coders even further.
That is a misuse of rote work. Rote work exists to let junior devs get into a codebase and develop a general feel for it. If you are not slowly educating junior devs, you (or rather your organization) are doing it wrong.
As to "research new solutions", absolutely not. LLMs are really bad at giving necessary context, limitations, caveats, and the like. At most, use an LLM to find actual information sources faster.
But these are smart people, and you can only fool them for a while. They are starting to notice that something is really badly off. Good.
Nuclear reactors mostly use surface water, not ground water.
Datacentres are no pickier. You can even cool a datacentre with saltwater, you just need a heat exchanger.
Also, closed loop does not evaporate. The loop is not closed if stuff escapes from it.
You're arguing with the actual terminology used in the nuclear industry. "Closed loop" or "closed cycle" designs have the water pumped in a cycle through cooling towers. The towers lose water to evaporation, which carries heat away, but the rest of the water is returned to be reheated again. "Open loop" or "open cycle" designs have no cooling towers. The water is heated and just discharged hot. They consume much more water (over an order of magnitude more), but most of that is returned. Closed loop designs are more common, but you see open loop in some older designs and in seawater-cooled reactors.
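To make the distinction concrete, here is a small sketch of the withdrawal-vs-consumption pattern described above. All per-MWh figures are hypothetical order-of-magnitude placeholders, not data for any real plant; the point is only the shape of the comparison: open loop withdraws far more water, but evaporates (consumes) less of it.

```python
# Illustrative only: rough water-use model for once-through ("open loop")
# vs. cooling-tower ("closed loop") thermal plants.
# The litre-per-MWh numbers below are assumed placeholders.

def water_use(withdrawal_l_per_mwh, consumption_l_per_mwh, mwh):
    """Return (total withdrawn, total consumed) in litres for a given output."""
    return withdrawal_l_per_mwh * mwh, consumption_l_per_mwh * mwh

# Open loop: huge withdrawal, but almost all of it is returned to the source.
open_withdrawn, open_consumed = water_use(150_000, 1_000, mwh=1)

# Closed loop: small withdrawal, but the cooling towers evaporate much of it.
closed_withdrawn, closed_consumed = water_use(4_000, 2_500, mwh=1)

print(open_withdrawn / closed_withdrawn)   # withdrawal ratio: well over 10x
print(open_withdrawn - open_consumed)      # litres the open loop returns
```

The asymmetry is why "consumes much more water" and "most of that is returned" are both true of open-loop designs: the headline withdrawal number is large, while the evaporative loss is actually smaller than a cooling tower's.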
"How often do you think I print?"
Seemingly not very often.
I've printed many hundreds of kg on my P1S, thanks.
I do not consider having to write data out to a card and transport it back and forth between the printer and the computer to be the pinnacle of convenience. That's something that would be considered embarrassingly inconvenient for a 1980s printer, let alone a modern net-connected device. And it's designed to be inconvenient for non-cloud prints for a reason.
The "decision makers" suffer from the delusion that _they_ are the source of all success. Whenever they are confronted with somebody doing actual work, they get a jolt of strong cognitive dissonance. Hence they essentially try to get rid of people doing actual work so that they can better maintain the illusion that they matter. And that is why they are so in awe of these not very impressive chatterbots and statistical parrots.
Just shows that "decision makers" are generally pretty dumb. The actual facts are looking worse and worse for LLMs. Sure, the "better search" aspect is nice, but the failure to produce anything but slop on almost all other tasks is getting harder and harder to ignore. And the business numbers are still absolutely catastrophic.
I expect general LLMs to go the same way as flying cars (possible, but makes no sense to do), because the effort-to-benefit ratio is just not there.
Congratulations! You are the one-millionth user to log into our system. If there's anything special we can do for you, anything at all, don't hesitate to ask!