Comment Re:Will they then apply for H-1B visas? (Score 1) 31
What, he expected something else to happen? Seems he is not very smart...
Not quite. In Germany, if you have a "desirable" specialty, you can get a residence permit with the prospect of eventually becoming a citizen. On the other hand, German bureaucracy is so slow and inept that most people go elsewhere.
Remember that Intel survived the Itanium 64-bit disaster (Itanic).
They basically did that only because of the AMD64 architecture.
Ah, yes, the overpriced and incompetent "technology consultants" from IBM. Those that think arrogance can replace insight. That will go well.
Oh, Intel wants to fail again in the discrete graphics market? After having done so, what, 5 times? I guess they are incapable of learning.
I disagree. Bad tech killed Intel. At some point no amount of hype and mindless believers can hide that one.
And yes, I agree that Intel is dead. They will continue to walk around as a zombie for some time though.
A bit extreme, but not really wrong.
Expert guidance and review (a) requires an actual expert, and hence is expensive, and (b) requires that expert to invest significant time, and hence is expensive.
If you do this right, your process will be significantly more expensive than just having that expert write the code directly, above a very low complexity threshold. Code review is hard and slow, and it gets even harder when the object under review is LLM code that looks good but is insightless crap.
But, but, that is where the money is! What do you mean, we cannot fire all sysadmins and replace them with LLMs?
Ah, but you are mistaken! Obviously, LLM sledgehammers can competently do brain surgery, write complex documents, make insightful decisions and are generally better than humans at everything!
Simple, and it can already be observed: LLMs stagnate, and then slowly become unusable. This is made worse by the fact that training LLMs on LLM output is a very bad idea, and the Internet is now flooded with LLM crap.
That said... I envision a day in the near future where the vast majority of code is cattle, not pets: easily regenerated or recreated, requiring some herders to ensure QA, deployments, and objectives work as expected, tuning a bit as needed. It's a rare day to see cowboys sleeping on the range eating beans, and it will be a rare day to see a human writing sorting algorithms in Python.
If you have visions, you should get your head examined.
Incidentally, humans write sorting algorithms in Python when the ones available (!) in the library (!) do not cut it. Hence it always was a rare thing to do, no connection to AI. I have done it, because a 2-dimensional sort with intricate border conditions and heuristic elements is not available in the library (also far outside of what an LLM could deliver). But that is it. Incidentally, I have designed and implemented special-purpose hash tables several times as well, for similar reasons.
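To make the point concrete: a minimal sketch of the kind of sort the library almost, but not quite, covers. The band size and the row-major ordering here are invented for illustration; a genuinely 2-dimensional sort with intricate border conditions and heuristics (as described above) goes well beyond this, but even this simple case already needs a custom ordering on top of the built-in sort.

```python
# Hypothetical example: order 2-D points "row-major", i.e. by coarse
# y-band first, then left to right within a band. A plain sort on
# (x, y) or (y, x) gets this wrong whenever y values jitter slightly.

BAND = 10  # invented band height: points within 10 y-units form one "row"

def row_major_key(p):
    """Sort key: (y-band, x). Points in the same band sort by x."""
    x, y = p
    return (y // BAND, x)

points = [(5, 22), (1, 3), (9, 7), (2, 25)]
points.sort(key=row_major_key)
# Band 0 (y in 0..9) comes first, ordered by x; then band 2 (y in 20..29).
print(points)  # [(1, 3), (9, 7), (2, 25), (5, 22)]
```

Once the ordering cannot be expressed as a key at all (e.g. it depends on neighbors, or on heuristic tie-breaking), you are writing the sorting algorithm yourself, which is exactly the rare case described above.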
Depends. Bad programmers have become obsolete with the Internet age, where almost every computer system is reachable and attackable via the Internet. Bad, insecure and hard-to-maintain code does not cut it. The MBA-morons still usually do not get that, but the pressure is rising and is pretty high by now. For example, in Germany in 2023, the damage from IT attacks was one average salary per person per year. That is not a minor thing anymore, but a massive negative economic factor. And that number is likely too low, as it is self-reported. Hence NIS2, KRITIS, regulation and liability will soon become a reality, because they are needed for economic survival and to reduce all the damage that incompetent management is doing by having incompetent people write code.
Now good programmers, ones that at least border on being justifiably called engineers, are in no way at risk of being replaced. LLMs can, at best, augment a bad programmer; for a good one they are somewhere between a minor convenience and a waste of time. And there really is no reason to think this will change anytime soon.
Incidentally, LLMs will make the gap between bad and good programmers larger. Bad programmers will find it much harder to learn anything because of the (currently) ready availability of the crutch. But just as using a crutch does not teach you how to walk, using an LLM to do coding for you does not teach you how to be a better coder.
Obviously, the usual "believers" will find numerous reasons why this is not a failure of the AI tools. You know what? I agree. This is humans being stupid and seeing things that are not there. Just as these "believers" do. In actual reality, LLMs have some limited use and they are a small step in the direction of capable AI, but they are nowhere near as good or revolutionary or a breakthrough as claimed.
Their problem is that the damage their insecure crap causes is getting higher and higher. It will soon be unsustainable. But the greedy assholes in charge of such enterprises never understand that little fact.