They are just blaming it on Apple's lack of effort on the OpenGL front. They're hardly pushing the boat out on it. Either way, it's no excuse for such a poor port.
You don't need Apple's Boot Camp drivers for the GPU - you can just install the ones that AMD and Nvidia supply for Windows.
The driver package Apple ships with Boot Camp (the one you install from a USB stick when you first set up Windows, which has everything you need for the keyboard, networking, Bluetooth, etc.) includes one of those OEM drivers from AMD or Nvidia - it just tends to be an older one, since Apple doesn't update the package all that often.
Once you have Windows installed, though, it's no different from any other Windows machine in terms of GPU drivers.
Sure, no problem. If you dislike systemd that much, it certainly makes sense to move to a different software platform.
I don't particularly dislike systemd per se. But I do observe the controversy around it, and the picture of it and its project painted by its opponents (some of whom have enough creds that they're unlikely to be talking through their hats) indicates that the claimed issues are likely to be real problems, and that this may be a tipping point for Linux adoption and for user choice among distributions and OSes.
Your Snowden argument isn't particularly applicable in this instance, as you have access to the full source code for systemd. If you're not comfortable looking through C code, then any init system would be a problem for you.
I did my first Unix drivers (a PROM burner and a Selectric-with-solenoids printer) on my personal Altos ACS 68000 running System III, wrote a driver for a block-structured tape drive for A/UX - working from my own decompilation of their SCSI disk driver (since the sources weren't available to me initially) - ported and augmented a mainframe RAID controller from SVR3 to SVR4, and so on, for nearly three decades, through hacking DeviceTree on my current project. I don't think C has many problems left for me, nor does moving to yet another UNIX environment - especially to one that is still organized in the old, familiar, fashion. B-)
As for trying to learn how systemd works, that's not the proper question. Instead, I ask what is so great about it that I should spend the time to do so, distracting me from my other work, and how doing this would meet my goals (especially the understand-the-security-issues goal), as compared to moving to a well-supported, time-proven, high-reliability, security-conscious alternative (which is also under a license that is less of a hard sell to the PHBs when building it into a shippable product.)
Snowden's revelations show that the NSA, and others like them, are adept at taking advantage of problems in obscure corners of systems and using that obscurity to avoid detection. The defence against this is simplicity and clarity: avoiding the complexity that creates subtle bugs and hides them by burying them in distractions. Bigger haystacks hide more needles.
The configuration for systemd isn't buried. It's there for all to see and change, in plain text. Logging in binary form is _optional_. You can choose to direct logged messages to syslog, or use both syslog and binary, to have the "best of both worlds" - albeit with the worst of disk usage.
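For what it's worth, keeping both worlds really is a two-line, plain-text change. A sketch of the relevant journald settings (see the journald.conf man page for the authoritative option list):

```
# /etc/systemd/journald.conf
[Journal]
# Keep the binary journal on disk...
Storage=persistent
# ...but also hand every message to the local syslog socket, so a
# classic syslogd can write plain-text log files alongside it.
ForwardToSyslog=yes
```

Which, of course, is exactly the kind of choice the next post points out you don't always get to make yourself.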
Unfortunately, I don't get to make that choice myself. It's made by the distribution maintainers. My choice is to accept it, open the can of worms and redo the work of entire teams (and hope their software updates don't break things faster than I fix them), or pick another distribution or OS.
Again, why should I put myself on such a treadmill of unending extra work? If I could trust the maintainers to mostly make the right choices I could go along - with no more than an audit and perhaps an occasional tweak. But if they were making what I consider the right choices, I wouldn't expect to see such a debacle.
Entangling diverse processes into an interlocking mass is what operating systems are all about!
No, it's not. The job of an operating system is to KEEP them from becoming an interlocking mass, while letting them become an interacting system only to the extent appropriate. It isolates them in their own boxes, protects them from each other, and facilitates their access to resources and ONLY their LEGITIMATE interaction, wherever appropriate and/or necessary. The job is to Keep It Simple while letting it work.
Your phrasing, and making a joke of this issue, is symptomatic of what is alleged to be wrong with systemd and the engineering behind it.
At a large scale, the internet was designed to route around individual problems such as this.
Can't this same principle be applied on a smaller scale?
Yes, it can. Just dig a whole bunch MORE trenches around the country at enormous cost.
The SONET fiber networks were designed to be primarily intersecting rings. Most sites have fiber going in opposite directions (with a few having more than two fibers going off in more than two directions, so it's not just ONE big, convoluted, ring.) This is built right into the signaling architecture: Bandwidth slots are pre-assigned in both directions around the ring. Cut ONE fiber run and the signals that would have crossed the break are folded back at the boxes at each end of the break, run around the ring the other way, and get to where they're going after taking the long route. The switching is automatic and takes place in milliseconds. The ring approach means that the expensive cable runs are about as short and as separated as it's possible to make them.
But cut the ring in TWO places and it partitions into two, unconnected, networks. To get from one to the other you have to hope there's another run between the two pieces, and there's enough switching where they join to reroute the traffic.
IP WANs have, in some portions, also adopted the ring topology as they move to fiber, rather than sticking to the historic "network of intersecting trees" approach everywhere. That's partly because much of the long haul is done on formerly "dark fiber" laid down in bundles with the SONET rings from the great fiber buildout (or is carried in assigned bandwidth slots on the SONET networks themselves), partly because the same economics of achieving redundancy while minimizing costly digging apply to high-bandwidth networking regardless of the format of the traffic, and partly because routers that KNOW they're on a ring can reroute everything quickly when a fiber run fails, rather than rediscovering what's still alive and recomputing all the routing tables.
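The fold-back behavior is easy to see in a toy model. Here's a minimal sketch (illustrative only - real SONET protection switching works on pre-assigned bandwidth slots, not path search): traffic between two nodes normally takes the short way around the ring, one cut folds it back the long way, and two cuts partition the ring.

```python
# Toy model of SONET-style ring protection. Nodes 0..n-1 sit on a ring;
# a "span" is the fiber between two adjacent nodes. Cut one span and
# traffic folds back the other way around; cut two and the ring
# partitions into disconnected pieces.

def ring_path(n, src, dst, cut_spans):
    """Walk the ring from src to dst in each direction, skipping any
    direction blocked by a cut span; return the shortest surviving
    path, or None if the ring has partitioned between src and dst."""
    best = None
    for step in (1, -1):                 # clockwise, counter-clockwise
        path, node = [src], src
        while node != dst:
            nxt = (node + step) % n
            if frozenset((node, nxt)) in cut_spans:
                path = None              # this direction is blocked
                break
            path.append(nxt)
            node = nxt
        if path and (best is None or len(path) < len(best)):
            best = path
    return best

one_cut = {frozenset((2, 3))}
print(ring_path(8, 0, 4, set()))     # healthy ring: the short way
print(ring_path(8, 0, 4, one_cut))   # folded back the long way
print(ring_path(8, 0, 4, one_cut | {frozenset((6, 7))}))  # partitioned
```

Running it shows `[0, 1, 2, 3, 4]` on the healthy ring, `[0, 7, 6, 5, 4]` after one cut, and `None` after two - exactly the failure mode described above.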
= = = = =
Personal note: Back when Pacific Bell was stringing its fibers around the San Francisco Bay Area, I was living in Palo Alto. They did their backbone as two rings. There was only one section, perhaps a mile long, where BOTH rings ran along the same route. It happened to go right past my house, with the big, many-manhole repeater vault right next to the house. (I used to daydream of running my own fiber the few feet into the vault. B-) The best I had available, in those pre-DSL days, were dialup with Telebit PEP modems (18-23 k half-duplex) and basic-rate (128k) ISDN.)
Anyway, I digress. Advantages of systemd are: [long list]
Those are all very nice things to have.
Unfortunately, for my needs, simplicity and understandability are far more important than a fast boot and feature-rich management of the runtime environment. I need to KNOW that things are being handled properly and securely. That's become far more important since Snowden showed us, not that the spooks were getting into our computers (which we'd already figured was happening), but how DEEPLY and EFFECTIVELY their technology and personnel are able to do so.
If the improved functionality is at the cost of burying the configuration and logging in non-human-readable form and entangling diverse processes into an interlocking mass under a complex and ever growing manager, the shark has been jumped.
Though Linux has been becoming (MUCH!) more usable with time, its configuration has been buried progressively more deeply under more and more "convenient and simplifying", but non-transparent, configuration management tools. Systemd is the continuation of the trend. But it is also a quantum leap, rather than another thin slice off the salami. So it has apparently created the "Schelling point" where a lot of frogs simultaneously figure out that NOW is the time to jump out of the pot.
It's been a great ride. It had the potential to be even greater. But I think this is where it took the wrong turn and it's time for me to get serious about switching.
There's good reason to switch to NetBSD at work, on the product. (The code supporting the secret sauce is on the user side of the API and is Posix compatible, so it should be no big problem.) Porting my laptop, home servers, and desktops to OpenBSD now looks like it's worth the effort - and less effort than trying to learn, and keep abreast of, the internals of systemd.
Call me if somebody comes up with a way to obtain the key benefits of systemd in a simple and transparent manner, rather than creating an opaque mass reminiscent of Tron's Master Control Program. (Unfortunately, the downsides of systemd's approach seem to be built into its fundamental structure, so I don't expect it to evolve into something suitable, even if it's forked.)
As I understand the three major forks:
One (OpenBSD) is for having as secure a desktop/server/embedded platform as the maintainers can manage - important in this post-Snowden era (as it was, all unknown, in the era preceding Snowden B-) ). It is based outside the US so it can incorporate strong encryption without running afoul of US export controls.
One (NetBSD) is for developing network internals software and networking platforms (typically ported, when possible and not part of a proprietary product, to the others and other OSes.)
One (FreeBSD), now that its original purpose of getting the code disentangled from proprietary licensing has been accomplished and the other two projects have forked from it, is for making an open unix-like system run on the widest range of hardware platforms and devices possible.
Unless you're using your machine for building networking equipment or it's a new hardware platform under development, the choice seems clear.
Are we allowed to say that out loud?
According to the first amendment, the government of the United States can't stop you.
If the denizens of the largest religion of the United States (Progressivism), or at least their media spokespreachers, decide to gang-shun you, there's still the other half of the population to interact with.
Fortunately, techies usually have to deal with real-world more than social issues. Unfortunately, PHBs have control of the money and have to interact with the fanatics. Fortunately, techies are noted for not being skilled at social fads and are given much slack. Unfortunately, that slack sometimes comes with a hook: The PHB tells his techies not to be a "lightning rod" and say/post things, in a way traceable to a particular employee of The Company, that might bring down the wrath of the pressure groups, make it look like his "herd of cats" really IS crazy and repel funders and customers, or otherwise make his job harder than it already is.
Which (mainly the "crazy cats" case) is why I started posting anything that MIGHT be controversial under pseudonyms. A reference to the PHB's order is the origin of the slashdot pseudonym "Ungrounded Lightning Rod" (since slashed down to "Ungrounded Lightning" by changes to the slashcode that limited pseudonym size). And it's why, now that "ULR" has a large and valuable reputation (and even though that reputation might help with job searches), I STILL don't out the corresponding "True Name" on any electronic medium.
(So now you know.)
In Linus' case, I doubt that even a gang-shun by the Politically Correct would have an impact, on his finances, his social standing, or the adoption of his work or technical ideas.
(Can you imagine, for instance, the luddites , or even Microsoft's PR department, trying to get people to avoid Linux and switch to Windows or MacOS, or avoid git and switch to Clearcase, Bitkeeper,
So have faith. Either he's right, and systemd will not turn out to be that bad, or his faith in systemd will end in tears, and then, he'll sit down and write a new startup management system that will kick everybody else's collective asses!
Or maybe somebody ELSE will write a kick-ass init system, and Linus will say "Hey, that's cool!" and promote it. Or the maintainers of a major distribution will adopt it. Or those of a MINOR distribution will - and users will migrate.
Linus is great. But why does THIS have to be HIS problem? The init system may have a bit of extra-special status and privilege, but it's largely NOT the kernel's problem. Along with the system call API it is THE boundary between the kernel guts and the user/daemon firmament. It says to the kernel: "Thanks, I'll take it from here."
You seem to have missed the sarcasm inherent in my original comment.
The GP was claiming that they could just hose the wings down rather than using an anti-bug coating.
I was just wondering out loud how that would work when the plane is in flight given that the hose probably has a finite length.
You think there are piles of answers to this question, but as with all armchair quarterbacks you seem to think that the people who are actually working on the problem are stupid.
Or we build storage.
You don't HAVE to generate it within a fraction of a second of when it's used - that's just been convenient so far. So that doesn't work with renewables? Fine! Store that juice!
When Bill Gates says:
"There's no battery technology that's even close to allowing us to take all of our energy from renewables and be able to use battery storage in order to deal not only with the 24-hour cycle but also with long periods of time where it's cloudy and you don't have sun or you don't have wind."
he's totally wrong.
For starters, there's Vanadium Redox. A flow battery (pumped electrolyte): Power is limited by the size of the reaction device's electrode and membrane assembly. Energy storage is limited by the size of the tanks. It's mainly used for utility-level energy storage down under (Oz or NZ, I think), because the patents are still fresh and the little startup doesn't want to license the technology to others. Vanadium is reasonably abundant in the Earth's crust, so there's no shortage. Using the same element (in different sets of oxidation states - vanadium has (at least) 6 of 'em) for BOTH electrodes means leakage of small amounts of the element through the dielectric membrane doesn't poison the battery.
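The power/energy decoupling is the key property, and a quick back-of-the-envelope sketch makes it concrete. The density figures below are illustrative assumptions, not vendor specs:

```python
# Why flow batteries decouple power from energy: power scales with the
# stack (electrode/membrane) area, energy with the electrolyte tank
# volume. Both constants below are made-up illustrative densities.

STACK_KW_PER_M2 = 1.0     # assumed power density of the cell stack
TANK_KWH_PER_M3 = 25.0    # assumed energy density of the electrolyte

def flow_battery(stack_area_m2, tank_volume_m3):
    """Return (power in kW, energy in kWh, hours at full power)."""
    power_kw = STACK_KW_PER_M2 * stack_area_m2
    energy_kwh = TANK_KWH_PER_M3 * tank_volume_m3
    return power_kw, energy_kwh, energy_kwh / power_kw

# Doubling the tanks doubles the storage hours without touching the
# (expensive) stack at all:
print(flow_battery(100, 40))   # (100.0, 1000.0, 10.0)
print(flow_battery(100, 80))   # (100.0, 2000.0, 20.0)
```

That's why the same stack can cover anything from a 24-hour cycle to a long cloudy stretch: you just buy bigger tanks.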
Lithium cells are already good enough to run laptops, cars, and houses, and are improving at a Moore's-Law-like rate. The elements are also not rare, and the use of several nanotech techniques on the electrodes has drastically increased the lifetime and other useful properties. (We just had reports of yet another breakthrough within the last day or so, doubling the capacity and extending the life.) The fast-charge/discharge cells are also extremely efficient. (They have to be, because every horsepower is 3/4 kW, so even a few percent of loss would translate to enormous heat in an automotive application.) The main problem is to get companies to "pull the trigger" on deploying them - and risk their new production line being rendered obsolete before the product hits the market by NEXT month's breakthroughs.
Lead-acids need to be replaced once or twice per decade. But they have been the workhorses for off-grid since Edison's and Nikola Tesla's days, and still are today (though not for long, if Elon Musk and the five billion dollars of investments in his lithium battery plant have anything to say about it).
Nickel-Iron wet cells are a technology developed by Edison. They have more loss than lead-acids. But they literally last for centuries. If you have a moderately steady renewable source (like some combination of enough wind and a big enough windmill, enough sun and a big enough solar array, or a stream and a big enough hydro system) you'll have enough surplus power to keep them topped off. They're just fine for covering days, or even a couple weeks, of bad generation weather, or down-for-maintenance situations. That IS what they did in at least one hydro plant I know of. (The problem is finding them: They last so long you only need to buy them ONCE, so there aren't many plants making them.)
That's just four FAMILIES of entirely adequate solutions. There ARE more.
So Bill is either uninformed, talking through his hat, or starting on the "embrace" stage of yet another:
So how do you wash the wings right after takeoff?
I think that's going to add quite a lot of time if the plane has to circle really low for multiple passes each time for Jose to hose the wings off.
We have a way to turn electricity directly into heat. But there is no direct way to turn heat into electricity. It has to go through a second step of mechanical energy to spin a magnet to create electricity.
You can go from electricity directly to heat because that increases entropy. You can't go from heat to anything useful because that decreases entropy, and entropy of a closed system only increases. The best you can do is a heat engine, working off a temperature DIFFERENCE. (Some of them also work backward as heat pumps, to go from electricity to heat more effectively, by also grabbing some heat from elsewhere to include in the hot end output.)
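The "temperature DIFFERENCE" point can be made quantitative: no heat engine of any kind can beat the Carnot limit, which depends only on the hot- and cold-side temperatures. A small sketch (the example temperatures are illustrative):

```python
# Carnot limit: the maximum fraction of input heat that ANY heat engine
# can convert to work is eta = 1 - T_cold / T_hot, with both
# temperatures in kelvin. Heat alone, with no gradient, yields nothing.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on heat-to-work conversion efficiency."""
    return 1.0 - t_cold_k / t_hot_k

# A steam-plant-scale gradient vs. a modest waste-heat gradient:
print(round(carnot_efficiency(850.0, 300.0), 2))   # ~0.65
print(round(carnot_efficiency(350.0, 300.0), 2))   # ~0.14
```

Note that with no temperature difference at all (T_hot = T_cold) the limit is exactly zero - which is the entropy argument above in one formula.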
There ARE at least two major forms of electronic heat engines - direct from temperature differences to electricity, with only charge carriers as the moving parts: thermoelectrics (thermocouples, Peltier junctions, and thermopiles of them) and thermionics (both heat-driven vacuum diode generators and an FET-like semiconductor analog of them). Both are discussed in other responses to the parent post.
TEs are ridiculously inefficient and aren't looking to be much better anytime soon
Because thermoelectric effect devices leak heat big time.
However, there's also thermionics. The vacuum-tube version is currently inefficient - about as inefficient as slightly behind-the-curve solar cells - due to space charge accumulation discouraging current. But I've seen reports of a semiconductor close analog of it (as an FET is a semiconductor close analog of a vacuum triode) that IS efficient, encouraging the space charge to propagate through the drift region by doping tricks (that I don't recall offhand). The semiconductor version beats the problems that plague thermoelectrics because the only charge carriers crossing the temperature gradient are the ones doing so in an efficient manner, so the bulk of the thermal leakage is mechanical rather than electrical, and the drift region can be long enough to keep that fraction down.