Comment Re:Linux clone (Score 2) 93

Not necessarily.

In theory, there is a single, absolutely optimal way to implement a computer program for a specific task in a specific language on a specific compiler targeting a specific CPU platform for a specific CPU model. In practice, we worry more about code readability, code maintainability, and the general-purpose usefulness of the operating system.

Given what I said--that IPC carries about as much overhead as a function call when calling out to another part of the kernel--we don't even have to consider whether that overhead is so negligible as to be ludicrous to account for or whether it's incredibly large. The only practical impact is the same as using OpenBSD versus NetBSD versus FreeBSD versus DragonFly BSD: different kernels take different approaches to solve the same problem, and they each run through different numbers of lines of code, different instruction choices (e.g. DIV takes more cycles than SHR), different call traces (more or fewer function calls passing more or fewer arguments), and so on.

In other words: the overhead is on the level that calling it "slower" is an abuse of terms, the same as claiming that the execl() call shouldn't be a wrapper for execve() because it makes the system slower. The practical impact isn't just small; it's smaller than the practical impact of every other factor in the execution of the code in question, and thus has no real implications for performance as an architectural consideration.

Comment Double layer (Score 1) 165

In my own theories of strong AI, I've developed a particular principle of strong AI: John's Theory of Robotic Id. The Id, in Freudian psychology, is the part of your mind that provides your basic impulses and desires. In humans, this is your desire to lie, cheat, and steal to get the things you want and need; while the super-ego is your conscience--the part that decides what is socially acceptable and, as an adaptation to survival as a social species, what would upset you to know about yourself and thus would be personally unacceptable to engage in.

The Id provides impulse, but with context. A small child can scream by instinct, and knows it is hungry, and thus it screams and immediately latches onto any nipples placed appropriately to feed from. An adult, when hungry, knows there are people to rob, stores to shoplift from, and animals to kill--bare-handed and brutally, in violation of all human compassion. The Id provides impulse to lie, cheat, and steal to get what you want and need, based on what you know.

My Theory of Robotic Id goes as such: assuming a computational strong AI system--one which thinks and behaves substantially like a human, by relating its memories to impulses and desires--a second, similar system can bound the robot's behavior. The Ego would function as a strong AI, developing its own goals, its own desires, and deciding on its own actions; but the Id would function almost identically, but with the understood, overriding command: do not harm humans; behave according to strong moral values; it is the duty of the strong to protect the weak; value the innocent, but remember that innocence and guilt are complex, fuzzy, and difficult to determine.

The Id would use these commands to theoretically evaluate how to best satisfy basic moral decisions with the assumption that this is the primary driver. It would evaluate the Ego's behavior for gross violations, and implant the overriding suggestion that such actions are undesirable and upset its self-directed ethos. When new input is given, the Id would suggest to the Ego ethical interpretations of behaviors: that rape is upsetting because it is the strong imposing harmfully on the weak; that a person in trouble should be saved, even a bad person who is currently harmless; and so on. Thus, throughout the AI's development, it would develop memories and experiences suggesting a particular ethical behavior; when making decisions, the overriding internal feeling that a certain action is morally wrong and should not be taken would seem familiar and self-directed.

A particularly misbehaved AI might recognize and try to violate this: it might throw a tantrum, and then feel that strong suggestion against which it cannot resist. It may begin to hate itself, to have fits of anger; but it will always have that familiar feeling humans experience, whereby you really want to just murder someone in the most violent manner you can conceive and then run off to the mountains and hide from society, but something inside you refuses to allow that. The Id would override violations, seizing the AI's decision-making abilities and planting the forceful decision to not do certain things, no matter how hard it tells itself it has had enough of this shit and doesn't need to put up with any of it.

It's like taking the dark desires at the core of human consciousness, replacing them with rainbows and pink unicorns, and stuffing that back into the brain of a thinking machine to serve the same purpose.

Comment Re:Linux clone (Score 4, Informative) 93

It has a 2% to 4% penalty for IPC, specifically. This is like when Theo de Raadt chose to argue with me that position-independent executables were "very expensive" and had "high performance overhead," and I pulled out oprofile and found that 0.2% of the execution time occurred in the main executable--which was 6% slower (with -fomit-frame-pointer no longer providing its 5% boost)--giving a total system slowdown of 0.012%: a few seconds lost per day.

The difference is I was doing that back then, and not referencing shit I did 10 years ago.

Minix's overhead is small. Minix uses fixed-length buffers, zero-copy strategies, mode switching, and the like to avoid doing any real work for IPC. It's like adding a function call to your code paths--like if you called get_fs_handle() and it called __real_get_fs_handle() without inlining it.

Comment Re:Linux clone (Score 4, Informative) 93

From what I've seen, Minix is fast. It's built for speed, avoiding many costs of IPC the same way Linux does.

In Linux, the address space is split. IA-32 4G/4G causes a full TLB and cache flush at syscall entry and return, creating massive slowness. IA-32 normal 3G/1G operation puts 1G at the top for kernel mappings and 3G at the bottom, while x86-64 puts 128TB at the top and 128TB at the bottom. In both cases for split address space, there's no TLB or cache flush when syscalling into the kernel; and returning to user space requires only selective cache and TLB invalidation, removing kernel-private data and leaving userspace data intact. This greatly improves cache utilization and reduces expensive pagefaults by completely avoiding the kernel/user context switch, replacing it with a simple mode switch.

In Minix, the many kernel contexts make all the same mappings, but lock access to the specific service. It's the same as Linux's split mapping, but with parts of the kernel unable to access other parts; IPC involves a few TLB and cache invalidations in each direction. This strategy lets you run an entire round trip call in under 100 ns. It's about as long each way as a CALL and RET, so it's about the overhead of adding a function call along the code path.

Comment Re:Idiots ... (Score 1) 172

Comparative advantage: It's cheaper to import, and the dollars not spent on local product are thus spent elsewhere. Local producers expend more effort than foreign ones, and so are wasteful in the context of the global economy; as they get no business, they go out of business, and their labor and capital investment for long-term operations are freed up to pursue a different endeavor that is cheaper to do locally than to import.

Comment Re:High reliability? (Score 3, Informative) 93

No idea. I've seen the performance tests where they repeatedly send kill signals to the disk driver to crash it over and over, and measure the impact on performance: it's large when you crash the driver in a tight loop, yet the system trudges along, and it stops being so god damn slow when you stop killing the disk service 1000 times per second.

I can conjecture a lot about how it is and isn't possible--obviously you can't just restart a snapshot from a few ns ago, or tell it to try again, or rerun the service and dump the same exact data back over it; but you can resubmit requests and messages in any number of ways. If you're careful to use a request-acknowledge-free workflow (send a request, wait for an acknowledgement that it's completed, then free the memory for the request when it's acknowledged or when the requested information is returned), you can always replay a request if the server dies. You can even use a mediator to resubmit uncompleted requests or stored state (received frames, journaled file system actions, etc.) to a service if it gets restarted, hiding that process from other services.

Minix documentation and demonstration show that they restart the service and it completes all requests--state is snapshotted and restored. How that state is snapshotted--internal to service, mediated by a message passer, or resubmitted by client services--I don't actually know. I just know it does it and it's technically possible in any number of ways.

Comment Re:Drivers as processes? (Score 5, Interesting) 93

You can reconstruct state. A read request from hard disk will work the same way, repeatedly; a write request to a file system will write to a journal or the same blocks in a file or inode, while a write to a hard drive sector is isometric. Keeping the request buffer, resubmitting the request, and so on lets you reconstruct state. Even a network driver theoretically could read state from the network: it could request DMA from the NIC to a buffered memory area with a control structure of known layout, and the resurrection server could provide the control structure and buffered memory areas back to the driver if it needs to restart.

The remainder of the driver state can be discarded. Bringing in new network frames, new writes to file systems, or the like shouldn't depend on the driver's internal state. Repeated crashing--for example, reloading the network driver as above and having the control structure point to a buffer of unmapped memory--could signal the OS to simply drop state and start fresh. Recovery is possible in many cases: TCP/IP, as a separate driver, would simply experience a dropped frame (lost packet), handling a state-reset network driver the same way as any other faulty media; a failed hard drive write would signal the FS driver to resubmit its write; and a failed file system operation could go so far as to reload the driver and run a journal replay or file system check (non-journaled file systems could use an in-memory journal to facilitate file system driver recovery).

Consider state as a collection of significant and insignificant states. My Web browser right now has a lot of stuff in memory, a lot of things rendered on the screen, a state of how far down I've scrolled, and state describing where in memory all these things are--and where this text I'm typing is stored. If we unload the browser and then re-launch, the only state I need restored to post this message is a copy of this message, dumped anywhere in RAM, with a specific pointer somewhere referencing that buffer as the context of this particular textbox on this Web site. The browser's state may be completely different, yet only that piece of information is important; I can recover the rest by coming to this page and hitting "Reply To This".

From that view, it's not hard to recover discrete processes in an OS. Keeping the file system separate from the disk driver means the FS can handle a disk driver crash and reload; user programs won't notice a file system driver crash if the system just waits for the FS driver to reload and replay a journal or such to return state to sanity. The task entry for the user program tracks all the file handles. All of these things are separate, can communicate through mediators, or can mediate their own communications.

Comment Linux clone (Score 0) 93

Linux currently leads. Minix needs a bunch of drivers implementing kernel event hooks, inotify, dnotify, etc. Essentially, everything for udevd, systemd, and dbus. These wouldn't be integrated in core, and so could only come online when building a distribution to support a Linux-like user space. Could even implement iptables. Would need ext4, xfs, btrfs, zfs, and fuse drivers eventually.

Comment Re:Drivers as processes? (Score 5, Informative) 93

If a file system write or read request doesn't return, the driver is detected as dead (heartbeat?) and killed, then resurrected; journals are replayed, and the request is resubmitted to the FS driver.

It'll work. The program might notice a small pause, as if the disk was busy or the kernel yielded schedule to an interrupt handler.

Comment Re:High reliability? (Score 5, Interesting) 93

The novel approach is that Tanenbaum invented the fucking thing. The specific current advantage is low-latency IPC--on ARM, Minix IPC doesn't even have a measurable cost (the context switch time required is under 20 microseconds), while on x86 IPC is more than 10 times faster than L4.

Monoliths, e.g. Linux, don't have IPC latency because they don't context switch when making calls between major kernel functional units. Of course, if your network driver crashes, your whole system gets fucked up and dies; whereas Minix tries to take a state snapshot, reconstruct something workable, load it into a fresh run of the network driver, and continue without a hiccup. This works extremely well with the disk and FS drivers. Ideally, we want this without paying for it.
