BTW I still don't understand what's with all the Rust hate. There are so many programming languages out there. Each has a different purpose; use the right tool for the job, and don't be obsessive about a single tool.
To me, the hate on Rust is about irrevocable change. Once accepted, even if later rejected, you can never go back to what was before. At best, you can start something new, in the spirit of what was. After so much change, "what was" hasn't kept up, since it isn't being maintained. You don't see many development efforts willing to sacrifice progress to dust off obsolescence. You could distill this further, to forecasting the direction a change will bring. The change before you may not be so bad; but the aftermath of that change is systemic. This is more of an issue in large open-source code bases. Otherwise, who cares?

Object-oriented programming was an issue, too. But it easily benefited large commercial code bases. A single person just didn't need to know as much to provide their part. Along with other benefits, it was much easier to compartmentalize. We may not realize it today, but there were losses, and not all of them good losses.

No one should really take issue with OOP or Rust as much as they do (or did). Both are well-thought-out approaches to a chosen solution. Both may be the best solutions. But even if you are using them where they shine, it doesn't mean they are the solution everyone else wants. I guess C++ also had its time with this. Some people are probably just hopping on a complaining bandwagon, or fear that it is a shift to clear out the old and invigorate the young. There is likely a substantial body of people who see Rust as a project-management dream for progress and promotion, while having no clue what will still be required.

I still prefer ASM, Pascal, Oberon, and Scheme. But only one of those is even a little useful, career-wise. Those that hate on Rust aren't missing the direction of computing I would have preferred. LOL
A good place for people to start is Bash, Zsh, Yash, or PowerShell. Notepad, Nano, and Vi are often pre-installed.
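For instance, a hypothetical first script (the filename and greeting here are just illustrative) needs nothing beyond a pre-installed editor and a few lines of Bash:

```shell
#!/usr/bin/env bash
# hello.sh — a hypothetical first script for a newcomer.
# Takes an optional name as the first argument, defaulting to "world".
name="${1:-world}"
echo "Hello, $name!"
```

Saved as hello.sh and run with `bash hello.sh`, it prints `Hello, world!` — which is about as gentle an on-ramp as programming gets.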
One large difference is the complexity and undocumented nature of the hardware.
What might be more interesting is a system not much more complex than an Amiga or Atari ST (and they are complex enough), but a good bit faster. That way you can do some useful things with it. There are projects like the standalone Vampire, an Amiga-like system, and the Atari has the FireBee. The Vampire V4 is said to have H.264 playback. And these machines are very compatible with their dated origins.
For the C64 there is the MEGA65, which can use a Pi Zero for more power and memory. There is the ZX Spectrum Next, too.
I personally would like to see Vampire make something for the Atari systems. The Atari has plenty of open-source tools, like a bootable multitasking OS.
It all depends on the type of programmer you are looking to breed.
To me, it looks like we are heading in the direction of extremely compartmentalized programming. You only need to know how your code relates to the other parts of the project. You learn more and more about what your specific role as a developer is. Then you are used to duplicate that role from project to project. Understanding the detailed ins and outs of the entire system would then be kind of a distraction. That was one of the goals behind OOP. We are not totally there right now, but unless you are blind, you can see that this is where we are going.

For the sake of "security," systems programming will become a very esoteric art, and not what we commonly refer to when we say programmer. If you look at the closed nature of most devices used today, you see that the average user is very separated from the OS. For all intents and purposes, most UIs are not much different than a web browser. Soon that difference will not exist. The system will present the user with a UI that connects them to a cloud-stored userspace. You will have that same userspace from device to device, and will not have access to it without a network connection (network-based workstations, basically). Your device will have a high-powered GUI, and the CPU will crunch local JIT code the way JavaScript or Python does now. But all of that code will come from a cloud-like service.

So development, by and large, will mainly take place in very high-level programming environments. Essentially, programmers will basically be writing scripts. There will still be people writing compiled code. But more and more, we are going to see scripting as the default meaning behind programmer.
So to understand how to write scripts, you just need a very limited knowledge of the syntax of the scripting language you are using, and of the commands relevant to the task you are performing. You could walk in knowing almost nothing about programming and write a simple GUI or task; what you most need to know is how to access the information that provides you with the commands you need. In a more specific and complex scenario, you'll need to know how to work with a database or utilize a graphics engine.
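As a sketch of that kind of minimal-knowledge task (the log filename and the "ERROR" marker here are just assumptions for illustration), a beginner only has to learn that `grep` searches text and that its `-c` flag counts matching lines:

```shell
#!/usr/bin/env bash
# count_errors.sh — hypothetical one-task script: count ERROR lines in a log.
# The only "programming" knowledge required is one command and one flag.
logfile="${1:-app.log}"
grep -c "ERROR" "$logfile"
```

That is the whole program: look up the one relevant command, plug in the file you care about, done.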
I totally agree with the article. But that is because I mean something less modern when I say programmer. I'd like to see systems where you still have the freedom to PEEK and POKE memory. But I'd also like "user" to mean someone who can compile their system for their specific CPU within an architecture family, and administer which services they do and do not need running in the background of their system.
In any problem, if you find yourself doing an infinite amount of work, the answer may be obtained by inspection.