A good place for people to start is Bash, Zsh, Yash, or PowerShell. Editors like Notepad, Nano, and Vi are often pre-installed.
One large difference is the complexity and undocumented nature of the hardware.
What might be more interesting is a system not much more complex than an Amiga or Atari ST (and they are complex enough), but a good bit faster. That way you can do some useful things with it. There are projects like the standalone Vampire, an Amiga-like system, and the Atari has the FireBee. The Vampire V4 is said to have H.264 playback. And these machines are very compatible with their dated origins.
For the C64 there is the Mega64, which can use a Pi Zero for more power and memory. There is the ZX Spectrum Next too.
I personally would like to see Vampire make something for the Atari systems. That side has plenty of open-source tools, like a bootable multitasking OS.
It all depends on the type of programmer you are looking to breed.
To me, it looks like we are heading in the direction of extremely compartmentalized programming. You only need to know how your code relates to the other parts of the project. You learn more and more about what your specific role as a developer is, and then you are expected to duplicate that role from project to project. Understanding the detailed ins and outs of the entire system would then be kind of a distraction. That was one of the goals behind OOP. We are not totally there right now, but unless you are blind, you can see that this is where we are going. For the sake of "security", systems programming will become a very esoteric art, and not what we commonly refer to when we say "programmer".

If you look at the closed nature of most devices used today, you see that the average user is very separated from the OS. For all intents and purposes, most UIs are not much different than a web browser. Soon that difference will not exist. The system will present the user with a UI that connects them to a cloud-stored userspace. You will have that same userspace from device to device, and will not have access to it without a network connection (network-based workstations, basically). Your device will have a high-powered GUI, and the CPU will crunch local JIT code like JavaScript or Python does now, but all of that code will come from a cloud-like service.

So development, by and large, will mainly take place in very high-level programming environments. Essentially, programmers will be writing scripts. There will still be people writing compiled code, but more and more we are going to see scripting as the default meaning behind "programmer".
So to understand how to write scripts, you just need a very limited knowledge of the syntax of the scripting language you are using, and the commands relevant to the task you are performing. You could walk in knowing almost nothing about programming and write a simple GUI or task; what you need to know most is how to access the information that provides you with the commands you need. In a more specific and complex scenario, you'll need to know how to work with a database or utilize a graphics engine.
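To make that concrete, here is a minimal sketch of the kind of task a near-beginner can script: all it takes is a loop and one everyday command (`wc`). The function name and file names are made up for illustration.

```shell
#!/bin/sh
# count_lines: print "<file>: <n> lines" for each file given.
# No deep programming knowledge needed -- just a loop, wc, and printf.
count_lines() {
    for f in "$@"; do
        # $(( ... )) strips the padding some wc implementations add
        lines=$(( $(wc -l < "$f") ))
        printf '%s: %s lines\n' "$f" "$lines"
    done
}
```

Used as `count_lines notes.txt todo.txt` (hypothetical names), it reports a line count per file; the hard part is knowing that `wc -l` exists, not the syntax around it.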
I totally agree with the article. But that is because I mean something less modern when I say "programmer". I'd like to see systems where you still have the freedom to poke and peek memory. But I'd also like "user" to mean someone who can compile their system for their specific CPU within an architecture family, and administer which services they do and do not need running in the background.
So the real problem is that the device is so locked down that the user can't install IRC. Shouldn't have bought it if you wanted any rights like that. If everyone knew better, eventually they would sell one you could install IRC on. If you are unprivileged because your employer won't allow you to install software, maybe you shouldn't be.
There is already a program or two for chatting. Why do I need it in my web browser?
The real problem is that most of us that know Javascript sucks, are not developing alternatives. And when we do, no one uses them because they don't know why Javascript sucks.
People used to avoid computers because you had to learn in order to do something useful with them.
Instead of moving people past the barrier of ignorance, we made them ignorant of what their devices are doing. Then we locked them into a development-cycle scam, so that they would buy new devices to replace ones that would likely still function great.
The examples you give are examples built on top of other examples of doing things in ways we don't need to. But if you want your service or game to get any attention, you have to get it where people can reach it. So you have to provide it to them in a way that will likely require JavaScript.
Bad user experience means bad consumer experience. Some of us don't want to pay for a locked-in commercial playing device that refuses to function because someone decided it was time for you to buy a new one. I agree that not many of the masses feel this way. But maybe they should.
A good but dated computer bogged down with JavaScript is a horrible reason to need a new device. Especially if you are just a consumer using it for socializing and shopping; it takes very little to get a device doing that. In the days of Win98 and XP it was millions of system tray apps and unseen services convincing you that you needed a new computer. Now it's JavaScript.
Those of us who dislike JavaScript and WebAssembly are not learning fast enough that we need to establish and "USE" alternatives. But it will be like Linux was in the beginning: only the similarly brave will understand why you just don't buy convenience.
I don't see it happening anytime soon. Even those of us who are fed up are unlikely to subject ourselves to the barren wasteland of efficient computing. Most of the younger hobbyists are starting off their adventures inspired by the very technology developments that are locking this into the commercial realm. Since the Internet and computing are more valuable to us than just tools of commerce, we should be giving more attention to forking the industry. But because there isn't a lot of money in it, that isn't likely to happen. And if it starts, it suddenly becomes a target of the industry it forked from. Money usually wins.
Even if you scale JavaScript down, what it did will just be consumed by something like it. And the drive for innovation will always create insecurity in implementation. This is where the Unix philosophy shines some: do one thing and do it well. If you need some new function, only one piece of software needs to be there for that. Or we can tie a bunch together and load them all at once in one application; then whenever we mess up a small part, the whole becomes weakened.
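The "do one thing well" style is easiest to see in a shell pipeline, where each tool does exactly one job and any stage can be replaced without touching the others. A small sketch, with a made-up request-log format for illustration:

```shell
#!/bin/sh
# top_get_paths: rank the most-requested paths in a tiny GET log read
# from stdin. Each stage does one job; swap any stage independently.
top_get_paths() {
    grep '^GET '     |  # keep only GET request lines
    awk '{print $2}' |  # extract just the path field
    sort             |  # group identical paths together
    uniq -c          |  # count each group
    sort -rn            # most requested first
}
```

If one of these small tools has a bug, only that stage is weakened; contrast that with one monolithic application where a flaw in any part weakens the whole.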
Good thing they can be checked, since it's open source.
You can also check all of the code that suddenly relies on systemd as its only init system. If the systemd code alters the system in a way you don't like, you can fork systemd to work the way you'd like. But then you will, in most cases, need to fork all the programs that depend on systemd working the way you don't like. Being open source doesn't do you any good when the design being implemented is what some people are wary of. Having access to the code only lets you read how they are doing exactly what you do not like. Fixing it would mean designing in a different direction.

It's not the same as checking to see if you can trust the code they are writing. It's that you don't trust the direction the code is going, as it will slowly alter the direction of everything that relies on it. Then you won't have a choice: you'll just have to take it, or do tons of coding to work around all of the things that systemd reliance has altered in your system. It is a large undertaking, and I am glad someone forked Debian to do it.
The reason computer chips are so small is computers don't eat much.