The FAT32 example might be an antitrust violation. Most of the other stuff probably has nothing to do with the Windows desktop OS, however. Remember, Microsoft was a big player in phones for many years, until the iPhone and Android.
I'm not actually experiencing what you're saying. Where I've seen sites use Bootstrap, or use one of the new Wordpress themes etc, they've actually been pretty usable on a mobile device.
The real problems are getting to be the non-WWW stuff people forget about, like responsive HTML emails.
Ask Linux Torvalds what he thought of what people in operating system design (namely, Andrew Tanenbaum, who famously called Linux "obsolete") thought.
I think Linus (not Linux) Torvalds is actually an operating system designer. He's also one of many who disagree with Tanenbaum. Neither of which has anything to do with anything at this point: he's not the one designing the whole of Ubuntu or Fedora; his work is on a small part of it that doesn't handle the userland start-up process.
The people who are designing Ubuntu, Fedora, etc, are saying init, both in its bad System V version, and in its "Scales for everything you need a 386 to do" BSD variant, is not up to the job in a world that has USB, Bluetooth, e-SATA, et al, in it. I think they're right, personally. And to be honest, I think they've been right since the early 1990s, when Internet service protocols were kinda grafted on, moved to inetd, augmented by Sun RPC services, NFS, blah, etc, and the phenomenon of a Unix system that wouldn't boot up due to anything other than hardware failure or disk corruption suddenly became very real and very common.
"We" haven't done much about it since then largely due to a combination of inertia and the fact an average Unix admin was skilled in shell scripting. The latter hasn't been true, however, for a good ten years, which is why Apple has LaunchD, and why Ubuntu also threw out init in favor of modern alternatives, initially Upstart, and now SystemD.
No idea, but just because you don't see their name on the product doesn't mean it's not theirs. In the past they've been very big on data centers, leasing the equipment and supporting it. You wouldn't know that just because Amazon's name is on the product, any more than you would know that IBM's hardware powered Ford's data centers when you bought your Ford Escort.
I'm not seeing that. There was a gradual move to decentralization that peaked in the late eighties/early nineties, but since then it's been gradual centralization, partially due to ubiquitous office networking (early nineties), and then to ubiquitous Internet connectivity (mid-nineties on).
There may have been slight ripples during that time that affected the acceleration of the curve, but the broad curve itself was never interrupted.
My history would show:
1950s-1970s: Era of the highly centralized mainframe, with minis used in occasional scientific applications.
1970s-1980s: Increasing use of minicomputers, plus rise of the micro, some of which made their way into businesses. It's slightly less centralized but users are still sharing common computer resources.
Early-to-late 1980s: Rise of the home micro and PC, with almost all applications local save for occasional use of terminal emulators to access "legacy" applications on a central mainframe or minicomputer. Most new development is of decentralized, disconnected tools.
Late 1980s-1995: Rise of the network. Client-server application development starts to take off. Development in business starts to be for partially distributed, but partially centralized, applications.
1995-2005: Rise of the Internet and associated standards. Businesses start to move all their core applications to the web, leaving a handful of Office type apps as the sole remaining decentralized stuff.
2005+: Rise of the cloud. Driven by a combination of mature web standards, the explosion of interest in non-PC devices, the increasing use and popularization of hosting services, and businesses that run data centers finding they're both hellishly expensive (yet unavoidable) and inevitably end up with huge amounts of unused capacity, there's a huge movement to move core business applications to services like AWS.
If there's a move against the grain (either towards centralization before the late 1980s, or away from centralization after 1995) I missed it.
Don't be fooled, if you have been, by the occasional post-1994 move toward more client devices, frequently out of the control of IT (such as the BYOD movement); those initiatives only work because the core business applications are centralized and accessible using standard clients.
Every other new site I see developed these days tends to be written using Bootstrap. Older sites have the entirely reasonable excuse that overhauling an existing design takes time. But I'm seeing older sites switch over to newer technologies. Newer default Wordpress themes are also generally responsive by default, and I assume the same is true of other CMS systems.
Bootstrap isn't perfect but it's pretty good at making it easy to set up a professional looking website that happens to be responsive too.
I do, however, care about their OS, the stability and performance of which has been degrading steadily since the loss of Jobs.
That's just false. OSX stability and performance in 10.10 is far, far better than in, say, 10.4-10.6. Take, for example, the complexity of the video subsystem required to overlay 3 different screens for Retina displays. The video subsystem's handling of high-performance video cards wasn't finished until 10.4, and wasn't stable or usable then; what 10.7 does only became possible with 10.7. The memory handling required for battery life demands a tremendously complex kernel. 10.10 is advanced over 10.9, which is advanced over 10.8, and really, before that you don't have anything remotely as complex.
So I'm going to throw it out this way: what subsystem is less stable or lower-performance today than, say, 5 years ago? Let's hit your list:
Issues like the keyboard and trackpad freezing
That's a bug that gets fixed soon. Apple had bugs in 10.2, 10.3, 10.4...
Messages (which is now part of the OS) using over 2GB of RAM for its own process while making use of another kernel-level process that manages to eat 5GB (watching kernel_task go from over 6GB of RAM to 1.1GB just by closing Messages is freaking silly),
That's not the OS doing that; you are loading something else. Run a diagnostic like EtreCheck.
I experienced none of these issues in any version of OS X released while Jobs was active within the company.
There were many more bugs in Jobs's day. You sound like you have a worm or something; that isn't OSX.
Care to give any examples of what was unbalanced about Apple's machines under Jobs?
The G4 had terrible throughput for memory and hard drives relative to CPU speed. The result was that the machine pulled a lot of no-ops. It was a bad CPU in a period when Intel CPUs were cheap and much more powerful. The G5 was excellent but then Jobs wouldn’t commit to a laptop version so just as his CPU problems were fixed he migrated away.
Another area where Jobs made sacrifices was on his memory sourcing. Apple customers often had to pay 5x or more the street price for memory.
2nd or 3rd in every category isn't beating Android. The players are iPhone, Android, Windows Phone, and BlackBerry.
By 2nd or 3rd I meant compared to individual phones, e.g. the HTC One M9, Samsung Galaxy S6, HTC Desire Eye, Motorola Moto X, Lumia 1520.
and major apps that exist on both platforms (like Adobe's suite) are routinely found to perform better on Windows.
While the opposite is true on Android vs. iOS. If this were about Tim Cook that shouldn’t be happening.
In the end we disagree that there has been slippage in the software to any great degree. I don't disagree with your point philosophically: were OSX's all-around experience worse than Windows', the hardware wouldn't make up for that. What I disagree with you on is a matter of fact: that OSX's experience is worse.
A trusted application is trusted to authorize applications. That's what it means to trust. If you want applications that are only semi-trusted, look at capability computing, sandboxing, virtual machines... permissions systems are not the way to go.
Ubuntu isn't adding another. They're switching from Upstart, of which they were pretty much the last remaining user, to SystemD.
BSD init isn't remotely scalable, and requires knowledge of shell scripting from any admin configuring their system or installing software the OS's maintainers didn't plan for. It's actually a worse choice than Sys V init. Hence Apple's (absolutely right) decision to do LaunchD.
You'll have to ask the maintainers of SystemD as to why specifically they saw the other solutions as lacking, but at a guess: LaunchD is too tied to Mac OS X and BSD, SMF too tied to Solaris, and Upstart doesn't solve all the problems SystemD is designed to solve.
the rest of us who have used and managed unix since the 80's have to dump WHAT WORKED WELL and move to some new shit that clearly has issues, does not fit in or belong very well and is being forced on us.
SystemD replaces components that Ubuntu already replaced long ago. The question here for the Ubuntu team was:
1. Do they keep Upstart, when it offers few, if any, advantages over SystemD, and they're the only people using it and thus they have to maintain it, and Ubuntu stays non-standard?
2. Do they switch back to "init" because it used to be the standard, and it kinda works, except it's kind of convoluted and a huge source of problems, which is why Upstart was written in the first place?
3. Do they look at what everyone else is switching to (i.e. SystemD), see if it does the same job as Upstart just as effectively, and switch to it?
They chose 3. I'd choose 3 too. There's nothing wrong with SystemD; it's just that the developers have no PR skills, and some old Unix people are harking back to a past that was never actually that great to begin with. SysV init sucked. It didn't sendmail.cf-suck, but it definitely CNEWS-sucked: complicated, over-burdened by shell scripts and hacks to try to keep it going. SystemD isn't perfect, but it's undeniably an improvement.
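For a sense of what the switch buys you, here's a hypothetical sketch (not from any particular distro) of a SystemD service unit; the daemon name and paths are made up. A few declarative lines replace the pidfile bookkeeping, dependency ordering, and respawn logic that every SysV init script had to re-implement by hand in shell:

```ini
# /etc/systemd/system/mydaemon.service -- hypothetical example unit
[Unit]
Description=Example daemon
After=network.target          ; ordering dependency, declared, not scripted

[Service]
ExecStart=/usr/sbin/mydaemon --foreground
Restart=on-failure            ; supervision/respawn for free

[Install]
WantedBy=multi-user.target    ; roughly "runlevel" membership
```

The point isn't any one directive; it's that none of this is a shell script an admin has to read, debug, or get right.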
NBC Universal, Time Warner (not to be confused with TWC, a different company), ABC/Disney, CBS/Viacom, Fox...
Don't kid yourself as to why the regulators were tough on this one...
Well clearly it is not the best OS in 2015. Yet they continued using it for years. Ergo...
Yeah, that turned out to be one of the big problems with IPv6 address aggregation - sounds great in the ivory tower, doesn't meet the needs of real customers, which is too bad, because every company that wants their own AS and routable address block is demanding a resource from every backbone router in the world.
IPv6's solution to the problem was to allow interfaces to have multiple IPv6 addresses, so you'd advertise address 2001:AAAA:xyzw:: on Carrier A and 2001:BBBB:abcd:: on Carrier B, both of which can reach your premises routers and firewalls, and if a backhoe or router failure takes out your access to Carrier A, people can still reach your Carrier B address. Except, well, your DNS server needs to update pretty much instantly, and browsers often cache DNS results for a day or more, so half your users won't be able to reach your website, and address aggregation means that you didn't get your own BGP AS to announce route changes with, but hey, your outgoing traffic will still be fine.
My back-of-a-napkin solution to this a few years ago was that there's an obvious business model for a few ISPs to conspire to jointly provide dual-homing. For instance, if you've got up to 256 carriers, 00 through FF, each pair aa and bb can use BGP to advertise a block 2222:aabb:/32 to the world, and have customer 2222:aabb:xyzw::/48, so the global BGP tables get 32K routes for the pairs of ISPs, and each pair of ISPs shares another up-to-64K routes with each other, using either iBGP or other local routing protocols, to deal with their customers' actual dual homing. (Obviously you can vary the number of ISPs, size of the dual-homed blocks, amount of prefix for this application (since
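The arithmetic in that napkin sketch checks out; here's a quick verification, assuming 256 carriers and /48 customer blocks inside /32 pair blocks (the 2222: prefix and the concrete pair/customer values are just illustrative, as in the text):

```python
# Sanity-check the route counts for the shared dual-homing scheme.
import math
from ipaddress import ip_network

carriers = 256
# One /32 announcement per unordered pair of ISPs.
pair_routes = math.comb(carriers, 2)
print(pair_routes)            # 32640 -- the "~32K global routes"

# Each /32 pair block subdivides into 2**(48-32) customer /48s.
customers_per_pair = 2 ** (48 - 32)
print(customers_per_pair)     # 65536 -- the "up-to-64K" local routes per pair

# And a customer /48 really does nest inside its pair's /32,
# so the pair's single announcement covers it.
pair_block = ip_network("2222:aabb::/32")        # hypothetical pair aa,bb
customer = ip_network("2222:aabb:1234::/48")     # hypothetical customer
print(customer.subnet_of(pair_block))            # True
```

So the global table carries ~32K aggregate routes instead of one route per dual-homed customer, and the fine-grained state stays between each pair of ISPs.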