Comment Re:Microkernal Boner (Score 1) 229

These days you don't see the same hype around microkernels that you did back then

No, but they are still in use. HURD, FreeBSD, OS X, and iOS all use the Mach microkernel to some extent.

For FreeBSD, presumably you mean "FreeBSD is based on 4.4-Lite, and 4.4BSD picked up the virtual memory system from Mach", rather than "FreeBSD uses the Mach messaging code", which it doesn't. So it doesn't use any of the microkernelish parts of Mach.

(Not that OS X or iOS make much traditionally-microkernelish use of them, either.)

Comment Re:Microkernal Boner (Score 2) 229

That explains why Windows NT and OS X never got anywhere, considering that one was based on Mach and the other actually uses Mach.

Now, in Windows NT and OS X all the modules ran in the same address space. But they didn't call each other directly. They used the same generic messaging API that modules would from user space; there just was less overhead in passing the messages. But those examples are ancient history.

Not sure what "modules" you're referring to, but if you're referring to "modules" such as network protocols and file systems in OS X, they most definitely are called directly from the system call layer. Go take a look at the kern, net, and vfs directories of XNU, as well as the netinet directory of XNU and the source to the FAT file system kernel module, for examples of code that plugs into the BSD-flavored socket layer and VFS mechanism.

As for the drivers they sit atop, those are called by Boring Old Procedure Calls (and method calls, given that IOKit uses a restricted flavor of C++), not by Mach message passing.

As far as I know, network protocols, file systems, and network and storage device drivers work similarly in NT.

Comment Re:Should be micro kernel (Score 1) 209

though they have something to do with microkernels

Which isn't that much.

Great, can we agree now that not much is something and not nothing?

Sure, if we'll also agree that "[introducing] (un)loadable modules" to a monolithic kernel "to address maintainability and extendability" does not in the least make that kernel any closer to a microkernel (because, in fact, it doesn't).

In other news Thylacines and Jackals have nothing to do with each other, except they both look like canids and fill similar ecological niches. Apples and oranges . . .

In other other news, Felis catus and Loxodonta africana have nothing to do with each other, except that they have four legs and bear live young.

Srsly, "both are kernels" and "both let you load and unload stuff" isn't much of an ecological niche. True microkernels (not "hybrid kernels" like the NT kernel or XNU) and monolithic kernels (with or without loadable modules) are sufficiently different from one another that "you can add or remove stuff at run time" isn't much in the way of commonality.

Comment Re:Should be micro kernel (Score 1) 209

I can't find the article now. It was years ago. Perhaps I misunderstood it. But I think it meant something like:

  • Microkernels allow non-fundamental features (such as drivers for hardware that is not connected or not in use) to be loaded and unloaded at will. This is mostly achievable on Linux, through modules.

That's more like "mechanisms X or Y both allow Z to be accomplished"; the only thing that says X and Y have to do with one another is that they both allow Z to be accomplished, which isn't that much.

Comment Re:Should be micro kernel (Score 4, Insightful) 209

I'm sure you're right, though they have something to do with microkernels. There was a Linus interview from a few years back explaining his preference for the monolithic approach, and he explained that modules were introduced to give most of the benefits of the microkernel, without the drawbacks.

I'd have to see that interview to believe that's exactly what he said. In this essay by him, he says

With the 2.0 kernel Linux really grew up a lot. This was the point that we added loadable kernel modules. This obviously improved modularity by making an explicit structure for writing modules. Programmers could work on different modules without risk of interference. I could keep control over what was written into the kernel proper. So once again managing people and managing code led to the same design decision. To keep the number of people working on Linux coordinated, we needed something like kernel modules. But from a design point of view, it was also the right thing to do.

but doesn't at all tie that to microkernels.

Loadable kernel modules in UN*Xes date back at least to SunOS 4.1.3 and AIX 3.0 in the early 1990s. I'm not sure they were introduced to compete with microkernels.

Comment Re:Speedups? (Score 3, Informative) 209

> There are over 100 separate ... speedups

The last time I looked, which was quite a few years ago TBH, the BSDs had, IIRC, fewer than 100 lines of x86 assembly, in the bootstrap.

From relatively-recent FreeBSD:

$ find sys/i386 -name '*.[sS]' -print | xargs wc -l
208 sys/i386/acpica/acpi_wakecode.S
40 sys/i386/bios/smapi_bios.S
396 sys/i386/i386/apic_vector.s
78 sys/i386/i386/atpic_vector.s
160 sys/i386/i386/bioscall.s
470 sys/i386/i386/exception.s
900 sys/i386/i386/locore.s
279 sys/i386/i386/mpboot.s
831 sys/i386/i386/support.s
538 sys/i386/i386/swtch.s
179 sys/i386/i386/vm86bios.s
37 sys/i386/linux/linux_locore.s
127 sys/i386/linux/linux_support.s
32 sys/i386/svr4/svr4_locore.s
202 sys/i386/xbox/pic16l.s
494 sys/i386/xen/exception.s
361 sys/i386/xen/locore.s
5332 total
$ find sys/amd64 -name '*.[sS]' -print | xargs wc -l
282 sys/amd64/acpica/acpi_wakecode.S
326 sys/amd64/amd64/apic_vector.S
73 sys/amd64/amd64/atpic_vector.S
541 sys/amd64/amd64/cpu_switch.S
906 sys/amd64/amd64/exception.S
88 sys/amd64/amd64/locore.S
236 sys/amd64/amd64/mpboot.S
56 sys/amd64/amd64/sigtramp.S
732 sys/amd64/amd64/support.S
75 sys/amd64/ia32/ia32_exception.S
161 sys/amd64/ia32/ia32_sigtramp.S
38 sys/amd64/linux32/linux32_locore.s
124 sys/amd64/linux32/linux32_support.s
246 sys/amd64/vmm/intel/vmx_support.S
42 sys/amd64/vmm/vmm_support.S
3926 total

It's about 45,000 lines in Linux 3.19's arch/x86. A fair bit of that is crypto code, presumably either generally hand-optimized or using various new instructions to do various crypto calculations.
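Counts like the FreeBSD ones above can be reproduced with roughly the same pipeline; this is just a sketch, with `arch/x86` being the Linux layout mentioned in the previous paragraph:

```shell
# Sketch: tally assembly sources the same way as the FreeBSD counts
# above; run from the top of a kernel source tree.
count_asm() {
    find "$1" -name '*.[sS]' -print | xargs wc -l   # per-file line counts (and a total)
}
# e.g., from a Linux 3.19 checkout:
#   count_asm arch/x86
```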

Comment Re:Should be micro kernel (Score 2) 209

As I understand it, NeXT / OSX started with a micro-kernel philosophy and then introduced some monolithic kernel concepts to address the performance bottleneck of messaging between true micro modules.

Meanwhile Linux starts as a monolithic kernel, but introduced (un)loadable modules to address maintainability and extendability.

So if we described it as a continuum with 'pure microkernel' being a '1' and pure monolithic kernel being a '10', then OSX would be something like a '3' and Linux would be a '7'.

Loadable kernel modules have nothing to do with microkernels. A truly micro microkernel wouldn't need loadable kernel modules because all the loadable functionality would run in userland; plenty of monolithic kernels have loadable kernel modules.

And OS X is a lot further from "pure microkernel" than 3. The "monolithic kernel concepts" include "running the networking stack in the kernel, standard BSD-style", "running the file systems in the kernel, standard BSD-style", and "implementing most process and VM management operations with standard system calls in the kernel".

Comment Re:Should be micro kernel (Score 2) 209

To add, your claim that XNU does not follow any microkernel rules is simply false. XNU uses microkernel-style message passing.

XNU has system calls to allow messages to be sent between processes, including sending large amounts of data by page flipping.

It just doesn't happen to use that to implement very much of the UNIX API; it's not used to implement file I/O (that goes through ordinary BSD system calls to in-kernel file systems that are called through a BSD-style VFS) or network I/O (that goes through ordinary BSD system calls to in-kernel networking stacks that are called through a BSD-style kernel socket layer) or much of the process/thread management or VM code (that goes through ordinary system calls that end up calling Mach task, thread, and VM management calls).

It is used for communication between user processes, and for some kernel user communication, but that's the same sort of use that happens in systems with Boring Old Monolithic Kernels.

Comment Re:And yet, no one understands Git. (Score 1) 203

Yes, I have multiple trees for multiple projects. Why on earth would my life be better if I didn't?

Presumably you're not saying something such as "one directory tree for multiple projects is better than multiple directory trees for multiple projects"; that's just the moral equivalent of tabbed browsing vs. non-tabbed browsing, and some people quite legitimately don't find tabbed browsing to be an improvement for them.

Of course not. Every project should have their own repo. SVN, Git or Otherwise. I've seen many places that try to stick it all into one giant SVN project - and the result is that they end up using it as if it was just a network share.

Sorry, by "project" I didn't mean "software project", I meant "development project for a software project". I do not, for example, do all my libpcap work in a single directory tree; I have a tree checked out (cloned, tracking the remote repository) for remote-capturing work, and a tree checked out for directly talking to netlink rather than going through libnl, and so on.
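A minimal sketch of that one-tree-per-effort setup (a local bare repository stands in for the real upstream, and the tree names are hypothetical):

```shell
# Sketch of "one checked-out tree per development effort"; a local
# bare repository stands in for the real upstream.
cd "$(mktemp -d)"
git init -q --bare upstream.git
git clone -q upstream.git work-remote-capture 2>/dev/null  # one effort
git clone -q upstream.git work-netlink 2>/dev/null         # another effort
# Work proceeds on master in each tree; push from whichever is ready:
#   (cd work-netlink && git push origin master)
```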

There is precisely one person touching my repository, and that's me, so there's nobody to trip over. Other people's work is separated from my work by being on other people's machines. The ultimate goal of all of our work is to get our changes into the official common repository on the trunk or a release branch; how do branches either on my machine or in the common repository simplify this task?

How do branches make it more difficult?

More commands to type.

You want to do all your work in master and periodically push to a central server?

Yes.

I have my devs working on different branches per bug/feature because they may need to pass incomplete work between different developers depending on skillset or what layers of the application stack are being changed.

That's never come up as an issue for me, so that doesn't benefit me.

I want to keep them in a branch so the work doesn't slip into a release.

I avoid doing that by not running "git push" or "git review" until it's ready.

When their work is complete, reviewed, and tested, then we merge into a release candidate branch for client review.

So your organization's workflow is more formal than that of the projects on which I work with Git. On those projects, core developers can just check things in, and others can put up patches for review and pulling on GitHub (libpcap, tcpdump) or on Gerrit (Wireshark). Neither appears to require the contributor to create branches in their own repositories.

For you, branches per change may be useful. For me, not so much.

So "branch per bug" isn't "the right way to work with Git", it's merely "something Git lets you do if you choose".

If your preferred workflow doesn't map to the Git model, then don't use Git.

That's not a choice I was offered, unless there's something such as "svn-git" to let me avoid using Git even when developing for a project that uses Git (i.e., using some other VCS as a front end to the master Git repository).

And I do make my workflow work with Git. Said workflow avoids "git branch", as it's just extra keystrokes to type.

Over the years I have often found that having a flexible workflow and allowing it to fit the capabilities of the tools being leveraged often introduces more efficient ways of working that I never would have otherwise considered.

Unless the workflow ends up trying to sleep in Procrustes' bed. Personally, I prefer more flexible tools.

Comment Re:And yet, no one understands Git. (Score 1) 203

Without branching, you can only have branch-like behavior by having multiple checkouts.

You say that as if it were a bad thing.

Yes, I have multiple trees for multiple projects. Why on earth would my life be better if I didn't?

Presumably you're not saying something such as "one directory tree for multiple projects is better than multiple directory trees for multiple projects"; that's just the moral equivalent of tabbed browsing vs. non-tabbed browsing, and some people quite legitimately don't find tabbed browsing to be an improvement for them.

And when you go to resolve those (at least in Git) you will still be doing a merge operation that is identical to merging a branch.

A "merge operation" that I actually bother to notice is what happens if somebody else changed some stuff in ways that collide with my changes. If that happens, it's my job to figure out what to do - including "discard my work because their changes do it better" or "discard my changes because their changes make my changes no longer necessary".

How would branches make a positive difference there?

When you have multiple people touching the same project, branching helps keep people's work separated so they aren't tripping over each other,

There is precisely one person touching my repository, and that's me, so there's nobody to trip over. Other people's work is separated from my work by being on other people's machines. The ultimate goal of all of our work is to get our changes into the official common repository on the trunk or a release branch; how do branches either on my machine or in the common repository simplify this task?

and merging provides a place for reviewing and synchronizing changes across different efforts.

Either the merge produces no conflicts - or it produces conflicts, in which case I have to resolve them, as per the above. Again, how do branches improve things here?

Even CVS and SVN have branching because it is useful.

Yes, but "having a branch for fixes to a stable release line", to keep feature work for the next release separate from fixes that go into stable release lines (possibly backported from the trunk), is different from "branching for every change you're working on". The former is useful to the project; I've yet to see any scenario in which the latter is useful.

(And, yes, I've worked at a company where you'd check your fixes into a CVS or SVN branch and have somebody else merge it with the trunk. Perhaps it worked for the organization, but it was just a nuisance for me. I did most of my work on a checked-out-from-the-trunk tree, and created the branch when it was time to submit.)
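The stable-release-branch pattern described above can be sketched in a throwaway repository (version numbers, file names, and messages are made up):

```shell
# Sketch: a branch for a stable release line, with a fix backported
# from the trunk; everything here is hypothetical.
cd "$(mktemp -d)" && git init -q .
git config user.email dev@example.com && git config user.name dev
echo r1 > app.c && git add app.c && git commit -qm "1.0 release"
git branch release-1.0                     # the stable line forks here
echo feature >> app.c && git commit -qam "feature work for the next release"
echo fix > hot.c && git add hot.c && git commit -qm "critical fix"
fix=$(git rev-parse HEAD)
git checkout -q release-1.0
git cherry-pick -x "$fix"                  # backport only the fix
```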

You are complaining that you refuse to use one of the very key features of Git, and that somehow that refusal on your part means that Git is hard to use.

No, I'm complaining that fans of Git seem to think that, because they believe that some particular way of using Git happens to work better for them, it's the Right Answer For Everybody.

Comment Re:And yet, no one understands Git. (Score 1) 203

And use as many dev branches as you want in Git.

I do. I don't want to use any dev branches, as they provide no obvious benefit to me, so I don't use any.

Well if you don't want to branch, then you're arbitrarily enforcing a condition on yourself that makes things far more difficult than they need to be.

If you insist on branching, you're arbitrarily enforcing a condition on yourself.

So why does arbitrarily choosing to work on a branch make things easier? Please give explicit examples of the downside of not explicitly creating branches.

Comment Re:And yet, no one understands Git. (Score 1) 203

About 20 years ago, I worked for a company which I shall not name, which used CVS as its source repository. All of the developers' home directories were NFS mounted from a central Network Appliance shared storage

...

One day, the NA disk crashed. I don't know if it was a RAID or what,

It was a NetApp box, so it used RAID 4 (and had more than one disk - the minimum was 2 disks). Perhaps the failure was something more than just a single-disk failure, as a single-disk failure shouldn't have lost the data.

Comment Re:And yet, no one understands Git. (Score 1) 203

Sounds to me like you try to make huge complex commits. Try rethinking your work as smaller modular commits. It makes life so much easier to do diffs and handle complex branch/merging behaviors.

The commits I do are "what's necessary to fix the problem". Perhaps that means that change A depends on change B, which depends on change C, and if none of those changes represent a regression, I do them as separate commits, pushing each one upstream as soon as the commit is done.
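That "separate commits, each pushed as soon as it's done" flow amounts to something like this (file names and messages are hypothetical):

```shell
# Sketch of the incremental-commit workflow described above, in a
# throwaway repository.
cd "$(mktemp -d)" && git init -q .
git config user.email dev@example.com && git config user.name dev
echo helper > util.c && git add util.c && git commit -qm "change C: helper"
echo layer  > mid.c  && git add mid.c  && git commit -qm "change B: depends on C"
echo fix    > top.c  && git add top.c  && git commit -qm "change A: the fix, depends on B"
git log --oneline    # three self-contained commits, each pushable on its own
```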

But what does that have to do with this "branch early, branch often" stuff I keep hearing from Git fans?

And use as many dev branches as you want in Git.

I do. I don't want to use any dev branches, as they provide no obvious benefit to me, so I don't use any.

You can always cherry pick which changes you want to bring over into your master/release branch.

If I didn't want to bring them over, I wouldn't have committed them in the first place.

Comment Re:And yet, no one understands Git. (Score 1) 203

Branch per bug? Why not just do the bug fix, and commit it after you've tested the fix?

What if it takes multiple commits to fix a bug

I.e., you tested the fix, and committed it, but it wasn't good enough and you needed to do some more fixing?

What if you want to switch away to a different branch to work on something else without losing where you're at?

Separate source trees in separate directories. I occasionally use "git stash", but it has two problems: 1) with too many stash entries, or too-old ones, I lose context (so I'd be better off doing the work in a separate directory), and 2) on the occasions when I've managed to get my local repository into some weird state where it's not obvious how to fix it, my usual "clone another tree, move my changes over to that tree, and then nuke the tree with the messed-up repository from orbit, just to be sure" strategy loses the stashed stuff. So I try to use "git stash" only for 1) checking the behavior of the code without the fix I'm working on and 2) doing some relatively quick fix in the tree, so that the stashed changes don't get forgotten.
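For what it's worth, the "check the behavior without the fix" use of "git stash" looks like this in a throwaway repository (the file name is hypothetical):

```shell
# Sketch: using "git stash" to test the code without an in-progress
# change, then resuming.
cd "$(mktemp -d)" && git init -q .
git config user.email dev@example.com && git config user.name dev
echo v1 > code.c && git add code.c && git commit -qm "committed state"
echo v2 > code.c       # in-progress change, not yet committed
git stash -q           # set it aside: the tree is back to the commit
grep -q v1 code.c      # ...so the un-fixed behavior can be tested here
git stash pop -q       # resume work: the in-progress change is back
```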
