Don't forget avahi, which reliably causes shutdown to take O(n) time (in the number of network interfaces and IP aliases).
TSA trying oh so very hard to appear effective.
If it doesn't feature the Carnot cycle, it isn't actually A/C, IMO.
Suing likely will cost more than just eating the remainder of the service contract, AND has no guarantee of success.
This is why the whole "proprietary software is supported, so it has a lower TCO" argument is often a myth.
Lockin always has risk, and to that extent, presents a very real cost.
He has a choice:
1. Pay lawyers to get out of the contract (and run the risk of paying lawyers and STILL being on the contract).
2. Eat the remainder of the support costs, learn from his mistakes, and move on.
Guess which probably costs less and has a better EV?
Either way, somebody made a poor decision, and it is going to cost money and time.
Now, of course most compilers warn you when you do that...
Absolutely. The situation is not sustainable.
Even worse, because every SOC is a haphazard pile of random, arbitrarily buggy peripherals, there is no deterministic way (at run time) to enumerate them all, and thus no way to determine which driver variants (and, even worse, binary blobs) are required to make them work.
So by definition, none of this can EVER go into the mainline. Every kernel fork is its own disconnected universe, dedicated to a single snapshot of a single SOC and its particular collection of peripherals.
But if you try to explain this to a PHB (or, say, TI), you'll get nothing but blank stares. There is nobody home.
The reason embedded device kernels never get updated is that the source code for them lives on some SOC vendor's way-out-there fork of some ancient kernel that nobody with a clue actively develops for anymore.
And the vendor (say, TI) hired a bunch of clueless interns to write the "BSP"s (an old acronym from the binary-blob-obsessed asshats at VxWorks et al.) for their SOCs and the cluster of shoddily designed peripherals crowbarred into them.
And those interns wrote code so toxic and broken that no sane kernel developer would ever accept any of their garbage into any mainline kernel tree.
So there are all these embedded devices out there with kernels from the 90s, and it would take time (and expertise) that none of the vendors have (including the SOC suppliers, like TI) to merge the changes into something even remotely contemporary.
All of this because the requirements for these embedded projects (dictated by clueless PHBs) are only "linux support", not "mainline kernel support", so SOC vendors (like TI) just don't have the incentive to develop SOC peripheral driver code suitable for mainline inclusion.
You don't have to be intelligent/reasonable to control intelligent/reasonable people, you just have to convince them that you are. You can go on being a clueless dipshit in all other respects.
Because if you aren't incompetent, you won't get yelled at.
Unlike a corporate structure, where you don't get yelled at if you play the game right.
If you are incompetent, please don't develop linux kernel code. Go work for a corporation. You'll find you're a better fit, and if you play your cards right, you won't get yelled at no matter how bad you are at your job.
It's a shame that browsers have such freakouts over self-signed certs, because there is really little difference between them and officially signed certs.
Exactly. Especially since you can get a "real" cert from one of many, many, free cert signing services. What is the point?
FEC algorithms do not treat an unknown number of lost bits as a stream of 0 bits, since the decoder can't know how many bits were lost.
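To make that concrete: erasure-style FEC can only reconstruct missing data when the decoder knows *where* (and how much) data is missing, so block boundaries stay aligned. A minimal sketch, assuming fixed-size blocks and a single XOR parity block (my own illustration, not any particular FEC library or standard):

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"hell", b"o wo", b"rld!"]
parity = xor_blocks(data)  # parity = data[0] ^ data[1] ^ data[2]

# Erase one block, but crucially the receiver KNOWS which one is gone.
received = [data[0], None, data[2], parity]
survivors = [b for b in received if b is not None]

# XOR of the survivors cancels out everything except the missing block.
recovered = xor_blocks(survivors)
assert recovered == data[1]
```

If instead an unknown run of bits had simply vanished mid-stream, every subsequent block boundary would shift and the parity relationship would no longer line up, which is why FEC codes are specified in terms of erasures at known positions (or symbol errors), not silently missing bits.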