God already told us about the 4 corners of the earth
The study compared neonicotinoid-laced pollen to sugar water, which means it was not a fair comparison. There needs to be a comparison between neonicotinoid-laced pollen and unlaced pollen.
No, the study compared neonicotinoid-laced sugar water with sugar water:
Individual foraging-age worker bumblebees or cohorts of 25 forager honeybees were housed in plastic boxes for 24 h and given access to two types of food tubes: one containing sucrose solution and one containing sucrose solution laced with a specific concentration of the[sic] IMD, TX, or CLO.
(If you follow the "bees prefer nectar laced with neonicotinoids" link in the
So, no, it's not a comparison between neonicotinoid-laced pollen and pollen, but it's also not a comparison between (neonicotinoid-laced) pollen and sugar water.
I can't even get past the fact that the TWC - AOL merger was labeled the worst in the entire history of the US and then they went for a second identical title with Comcast. Who the hell is running things at Time Warner?
Different people from the ones who are running things at Time Warner Cable, as Time Warner Cable was spun off from Time Warner in 2009. (And Time Warner has nothing to do with Time Magazine; that's now a product of Time Inc.)
... the forward looking understanding of technology that Time Warner, a copyright focused company would have brought to the relationship.
So why would a cable company be "copyright-focused"?
"our skills actually create value." But legal skills create money. Most people would choose money over value (would you like $100 in fiat currency or 40 loaves of bread?).
I might go for the 40 loaves of bread, if I could sell them for more than $100.
On the other hand, I might choose a Fiat 500 over either of them.
FUCK you and your god damned bullshit.
Fuck you culture warrior identity politics fuckface asshole prick.
BC/AD is part of the fucking culture ass grabbing fuck puke.
FUCK YOU, history changer Goebbels re-writer.
I vote for renaming them BFC ("Before Fucking Christ") and AFD ("Anno Fucking Domini").
OS X actually has perfectly fine support for shared libraries. They are supposed to be installed under
...and OS X keeps track of which installed applications use them, and either prevents uninstallation of shared libraries/frameworks that are used by installed applications or at least warns about it?
No, it doesn't - it doesn't even have an official uninstaller (although the ambitious can whip up a script to do that, such as the Wireshark uninstaller script I have; it's not sophisticated, though, as it doesn't dump the file list from the package manifest to figure out what needs to be removed, it just has that knowledge wired into it).
And it definitely does not have a way to tie installable application A to installable shared library/framework X, so that installing A automatically installs X if A and X are separate packages from separate vendors.
That is the sort of "[management of] shared libraries/frameworks as installable objects separate from applications that use them" to which I was referring.
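A toy sketch in Python of the kind of dependency tracking described above; the class, its API, and the package names are all invented for illustration, not any real package manager:

```python
# Toy sketch: installing an app pulls in the shared frameworks it
# declares, and a framework can't be removed while an installed app
# still depends on it. All names here are hypothetical.

class PackageManager:
    def __init__(self):
        self.deps = {}        # package name -> set of required packages
        self.installed = set()

    def register(self, name, requires=()):
        self.deps[name] = set(requires)

    def install(self, name):
        for dep in self.deps.get(name, ()):
            self.install(dep)          # install prerequisites first
        self.installed.add(name)

    def uninstall(self, name):
        # refuse if some other installed package still depends on it
        dependents = [p for p in self.installed
                      if p != name and name in self.deps.get(p, ())]
        if dependents:
            raise RuntimeError(f"{name} still required by {dependents}")
        self.installed.discard(name)

pm = PackageManager()
pm.register("SomeFramework")
pm.register("AppA", requires=["SomeFramework"])
pm.install("AppA")   # also installs SomeFramework
```

With that bookkeeping, uninstalling "SomeFramework" while "AppA" is still installed raises an error, which is the missing piece being described.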
And, yes, that does sound a bit like the packaging management systems on some Linux distributions. From my limited experience with various Linux VMs on my Mac, they seem to work OK, but I don't have enough experience with them to say that there aren't problematic failure modes.
At least OS X frameworks have some support for versioning, so that if application A is tested with version n of X and set up to require version n, and application B is tested with version m of X and set up to require version m, the two versions could be installed "side-by-side".
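The side-by-side idea can be sketched as a toy in Python; the paths, app names, and version strings are hypothetical, not Apple's actual bookkeeping (real frameworks keep versions under a `Versions/` directory inside the bundle, which is what the fake paths mimic):

```python
# Toy sketch: each app pins the exact framework version it was tested
# against, and both versions coexist because they live under separate
# version directories. Names and versions are hypothetical.

installed_frameworks = {}   # (framework name, version) -> install path

def install_framework(name, version):
    # each version gets its own directory, so versions don't collide
    path = f"/Library/Frameworks/{name}.framework/Versions/{version}"
    installed_frameworks[(name, version)] = path
    return path

# application A requires version n of X; application B requires version m
app_requirements = {"AppA": ("X", "n"), "AppB": ("X", "m")}

for framework, version in app_requirements.values():
    install_framework(framework, version)

# both versions of X are now installed side by side
```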
There's no reason why they couldn't do that on iOS as well, just let developers share frameworks on the App Store and build a mechanism into the App Store where an app can require other apps or frameworks.
And set up a framework (no pun intended) so that they can do the same sort of vetting of frameworks that they do on applications. Yes, that would be a Good Thing, but I'm not about to assume, without further information, that it's not that hard.
Non system libraries are statically linked
I suppose they could support providing dynamically-linked libraries as part of an app bundle. However, it's not clear why that would be any better than statically linking the library, as Apple probably wouldn't allow those dynamically-linked libraries to be shared between applications (apps being sandboxed, they couldn't pull in a
Neither OS X nor iOS are really set up to manage shared libraries/frameworks as installable objects separate from applications that use them. Perhaps they should be set up to do so, but that might need to be done carefully to avoid, well, DLL hell.
Trains don't have security theater yet because of the lower perceived potential impact - you can't crash a train into something, for example. This is, of course, a display of a lack of imagination.
These days you don't see the same hype around microkernels that you did back then
No, but they are still in use. HURD, FreeBSD, OS X, and iOS all use the Mach microkernel to some extent.
For FreeBSD, presumably you mean "FreeBSD is based on 4.4-Lite, and 4.4BSD picked up the virtual memory system from Mach", rather than "FreeBSD uses the Mach messaging code", which it doesn't. So it doesn't use any of the microkernelish parts of Mach.
(Not that OS X or iOS make much traditionally-microkernelish use of them, either.)
That explains why Windows NT and OS X never got anywhere, considering that one was based on Mach and the other actually uses Mach.
Now, in Windows NT and OS X all the modules ran in the same address space. But they didn't call each other directly. They used the same generic messaging API that modules would from user space; there was just less overhead in passing the messages. But those examples are ancient history.
Not sure what "modules" you're referring to, but if you're referring to "modules" such as network protocols and file systems in OS X, they most definitely are called directly from the system call layer. Go take a look at the kern, net, and vfs directories of XNU, as well as the netinet directory of XNU and the source to the FAT file system kernel module, for examples of code that plugs into the BSD-flavored socket layer and VFS mechanism.
As for the drivers they sit atop, those are called by Boring Old Procedure Calls (and method calls, given that IOKit uses a restricted flavor of C++), not by Mach message passing.
As far as I know, network protocols, file systems, and network and storage device drivers work similarly in NT.
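The direct-call pattern being described can be sketched as a toy dispatch table in Python; this is a loose analogy for a VFS-style operations table, not actual XNU or NT code, and all the names are invented:

```python
# Toy sketch: a file system registers a table of operations, and the
# "system call layer" invokes them as ordinary function calls -- no
# message passing involved. A simplification, not real kernel code.

vfs_table = {}   # file system name -> dict of operation functions

def register_filesystem(name, ops):
    vfs_table[name] = ops

def sys_read(fs_name, path):
    # direct procedure call into the registered module's read operation
    return vfs_table[fs_name]["read"](path)

def fat_read(path):
    return f"FAT contents of {path}"

register_filesystem("msdos", {"read": fat_read})
print(sys_read("msdos", "/tmp/file"))   # → FAT contents of /tmp/file
```

The point of the sketch is that `sys_read` reaches the module through a plain function call via a table of function pointers, which is how monolithic and "hybrid" kernels typically hook file systems in, rather than by sending a message to a separate server.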
though they have something to do with microkernels
Which isn't that much.
Great, can we agree now that not much is something and not nothing?
Sure, if we'll also agree that "[introducing] (un)loadable modules" to a monolithic kernel "to address maintainability and extendability" does not in the least make that kernel any closer to a microkernel (because, in fact, it doesn't).
In other news, thylacines and jackals have nothing to do with each other, except that they both look like canids and fill similar ecological niches. Apples and oranges . . .
In other other news, Felis catus and Loxodonta africana have nothing to do with each other, except that they have four legs and bear live young.
Srsly, "both are kernels" and "both let you load and unload stuff" isn't much of an ecological niche. True microkernels (not "hybrid kernels" like the NT kernel or XNU) and monolithic kernels (with or without loadable modules) are sufficiently different from one another that "you can add or remove stuff at run time" isn't much in the way of commonality.
I can't find the article now. It was years ago. Perhaps I misunderstood it. But I think it meant something like:
- Microkernels allow non-fundamental features (such as drivers for hardware that is not connected or not in use) to be loaded and unloaded at will. This is mostly achievable on Linux, through modules.
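As a userspace analogy for that load-and-unload-at-will behavior, here is a toy Python sketch; real Linux kernel modules are loaded with insmod/modprobe and unloaded with rmmod, and this merely mimics the idea with Python's module machinery:

```python
# Userspace analogy for loadable kernel modules: a "driver" is brought
# into the running program on demand and removed when no longer needed.
# The driver source and names are invented for illustration.
import sys
import types

driver_src = "def probe():\n    return 'device ready'\n"

# "load" the module: build it at run time and register it
driver = types.ModuleType("toy_driver")
exec(driver_src, driver.__dict__)
sys.modules["toy_driver"] = driver

print(sys.modules["toy_driver"].probe())   # → device ready

# "unload" it again: the core program keeps running without it
del sys.modules["toy_driver"]
```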
That's more like "mechanisms X or Y both allow Z to be accomplished"; the only thing that says X and Y have to do with one another is that they both allow Z to be accomplished, which isn't that much.
I'm sure you're right, though they have something to do with microkernels. There was a Linus interview from a few years back explaining his preference for the monolithic approach, and he explained that modules were introduced to give most of the benefits of the microkernel, without the drawbacks.
I'd have to see that interview to believe that's exactly what he said. In this essay by him, he says
With the 2.0 kernel Linux really grew up a lot. This was the point that we added loadable kernel modules. This obviously improved modularity by making an explicit structure for writing modules. Programmers could work on different modules without risk of interference. I could keep control over what was written into the kernel proper. So once again managing people and managing code led to the same design decision. To keep the number of people working on Linux coordinated, we needed something like kernel modules. But from a design point of view, it was also the right thing to do.
but doesn't at all tie that to microkernels.
> There are over 100 separate
The last time I looked, which was quite a few years ago TBH, the BSDs had, IIRC, fewer than 100 lines of x86 assembly, in the bootstrap.
From relatively-recent FreeBSD:
$ find sys/i386 -name '*.[sS]' -print | xargs wc -l
$ find sys/amd64 -name '*.[sS]' -print | xargs wc -l
It's about 45,000 lines in Linux 3.19's arch/x86. A fair bit of that is crypto code, presumably either generally hand-optimized or using various new instructions to do various crypto calculations.