There are a few reasons I can think of. First, Oracle has bundled proprietary tools and features with its Java for a long time, and any software depending on those tools needs a rewrite to work with other Java distributions. Second, I have seen software using/abusing the bundled libraries to the point where even a minor update/bugfix in a library could break it, and a JRE/JDK provided by anyone other than Oracle would not work at all. Probably not as common today, but I can still see it happening for some older commercial software with less frequent release schedules, locked to a specific version of Oracle's Java.
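To make the lock-in concrete, here is a minimal sketch of the kind of runtime pinning I mean. The property names (`java.vendor`, `java.version`) are real standard system properties; the pinned values are hypothetical examples of what such software might check for, not taken from any actual product:

```java
// Sketch of vendor/version pinning: software that refuses to run on
// anything but the exact Oracle runtime it was tested against.
public class VendorCheck {
    // Returns true only for the pinned runtime (hypothetical values).
    static boolean isSupportedRuntime(String vendor, String version) {
        return vendor.contains("Oracle") && version.startsWith("1.8.0_");
    }

    public static void main(String[] args) {
        String vendor = System.getProperty("java.vendor");
        String version = System.getProperty("java.version");
        if (!isSupportedRuntime(vendor, version)) {
            System.err.println("Unsupported JRE: " + vendor + " " + version);
            // A real product would often bail out here entirely,
            // which is exactly what blocks a move to OpenJDK.
        }
    }
}
```

Run on an OpenJDK build (vendor strings like "Eclipse Adoptium"), a check like this fails even though the code itself would work fine.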
This is especially a problem for those who have bought software without being able to modify it themselves, where replacing it would be so expensive and time-consuming that they would prefer to avoid it. Or, as mentioned in the story: "He estimated that about half of the customers his team talks to are able to easily move to OpenJDK. Sometimes, customers have third-party applications that are written for Java and unchangeable as opposed to custom applications that in-house engineers can just rewrite."
From what I understand, the big problem is that "non revocable" is omitted from section 4, which means the license itself could possibly be revoked. Also, if they manage to argue that 1.1 is the only authorized version under section 9, that would prevent distribution under any previous version. That in turn opens section 13 to the interpretation that everyone gets 30 days to switch to version 1.1, since continued distribution under 1.0a would be a breach of section 9.
I'm no legal expert, but as there are possible legal loopholes with the license, it could be risky to continue distributing anything until this has either been settled in court or WotC/Hasbro has made things sufficiently clear that there is nothing to worry about.
This is a very US-centric idea. Daylight saving time might make some sense in the zone 20-40 degrees from the equator (and even there I see limited rationale for it), while outside that zone the length of the day either varies too much or not enough. With most of the continental US inside this zone, most of the rest of the world's population outside it (except China, which does not observe DST anyway), and much of the world already advocating for abolishing DST, I find this idea really ignorant.
(For those who failed geography and are too lazy to look at a map: New York is at a similar latitude to Madrid, which is in southern Europe, and Nashville is south of the capital of the African nation of Tunisia. Sweden, Norway and Finland all have capitals close to 60 degrees north.)
I assume you are looking at confirmed cases and drawing conclusions from the numbers alone, just like most foreign media. What most people fail to realise is that from mid-March, testing was limited to care personnel and people who needed hospitalisation - limited testing capacity meant using it where it made the most sense. By early June, testing had ramped up enough for everyone to get tested, resulting in a spike in new confirmed cases (helped slightly by people starting to enjoy summer and letting their guard down). So any conclusions based on "confirmed cases" before mid-June are likely to be wrong. Conclusions from later data are also likely to be wrong, but slightly less so.
A more reliable number would be people needing intensive care. After a peak in early April (with almost 50 new cases a day), the numbers have steadily decreased, levelling out at about 2-3 new cases a day by the end of June. The same trend shows in the number of deaths, now at about 0.2 per million per day - similar to other European countries and significantly lower than the US (at about 3.2 deaths per million per day).
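For anyone who wants to check the per-capita arithmetic, here is the conversion as a small sketch. The populations are rough 2020 estimates (Sweden ~10.3 million, US ~330 million), and the daily death counts are back-of-envelope figures implied by the quoted rates, not official statistics:

```java
public class PerCapita {
    // Deaths per million people per day, from a daily count and a population.
    static double perMillionPerDay(double dailyDeaths, double population) {
        return dailyDeaths / (population / 1_000_000.0);
    }

    public static void main(String[] args) {
        // Sweden: ~2 deaths/day over ~10.3M people is ~0.2 per million per day.
        System.out.println(perMillionPerDay(2, 10_300_000));
        // US: a rate of ~3.2 per million per day over ~330M people
        // corresponds to roughly 1,000+ deaths per day.
        System.out.println(perMillionPerDay(1_056, 330_000_000));
    }
}
```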
I think what we will see is seamless synchronization of apps and their running state between the phone and other devices, so having more powerful laptops and screens/TVs is still a possibility. However, that will require data to be stored in a central location to work well, and that location will most likely be decided by the application developers - and you can probably guess what they prefer. Add to that the trend of subscription software/services (like Adobe, Netflix and many others) and software partially or fully running server side (like Google Docs or Stadia), and there is little need for a powerful personal computer.
Those of us who would like to have full control over our own data and run all applications on our own computers are already a minority, so I do not expect Apple to care about that when they have the option to lock people deeper into their ecosystem by making things easier for the majority of their customers.
Sure, I believe there will be options - I just do not believe they will come from Apple.
I would not be surprised if in 5-10 years you have your iPhone - with everything else as input/output devices and extenders for it. A laptop that is essentially a screen and keyboard (and cpu/gpu/battery) for the apps on your phone and data in the cloud? Same for your office computer - a dumb terminal that just extends your phone, possibly with some extra computational power? And your TV could either be used to watch movies or to edit that presentation you need for a meeting the next day - if there is even a difference between TVs and office computers in the future?
I have never owned an Apple device in my life and do not plan to change that, but I am not entirely opposed to the idea of having a single device that I can carry everywhere and just connect to screens/keyboards whenever I need it.
One good reason why computers can do more work than people is that they never have to stop and answer the phone.