Comment Re:Too bad (Score 3, Interesting) 198

Ocean warming is a bigger opportunity: jab a thermal collector into a volcanic vent and run the heat through a Stirling engine, with the cold side submerged in the cold ocean as the heat sink.

A lot of new, green tech is ludicrous. People want solar farms in the desert because of all the arid heat and lack of clouds, but discount the fragile ecosystem. Wind farms take up much more space than nuclear plants for the same power output. Hydrogen is difficult to store without supercooling, and is only a storage scheme and not a generation scheme, and only operates at 50%-80% efficiency. Hydroelectric is an environmental disaster.

Direct heat applications from solar-thermal water heating are about the only thing that makes sense. Their efficiency is high, and their cost is low. A small, 1.2 square meter collector provides 3000 BTU/hr, about 0.88 kW; I can fit over 20 of these on my roof at a sun-receiving angle and spacing, giving over 65,000 BTU/hr average throughout the day. My roof can produce 19kW of heat output, while I only need 3kW to stay warm or cool--the AC breaker is 30A, providing about 3kW of cooling.
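A quick sanity check of the figures above, using the standard conversion 1 kW ≈ 3412 BTU/hr (the collector and roof numbers are the post's own estimates):

```python
# Verify the BTU/hr <-> kW conversions claimed above.
BTU_HR_PER_KW = 3412  # 1 kW is roughly 3412 BTU/hr

collector_btu_hr = 3000                       # one 1.2 m^2 collector
collector_kw = collector_btu_hr / BTU_HR_PER_KW
print(f"one collector: {collector_kw:.2f} kW")  # ~0.88 kW

roof_btu_hr = 65_000                          # claimed daily average for the roof
roof_kw = roof_btu_hr / BTU_HR_PER_KW
print(f"roof output:   {roof_kw:.1f} kW")       # ~19 kW
```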

A hydronic coil off the water heater, an absorption chiller, or the like can harvest the heat collected by less than $2000 of tubes and a total of $3000 of equipment, covering roughly $2500 of annual space heating, cooling, and water heating in my house. Excess heat could theoretically drive a Stirling engine to produce a small amount of electricity, but the investment in additional tubes to generate a useful amount would be unjustifiable; I can buy 100% solar electricity for 12 cents per kWh.

Thus, in just over a year, I can recover my investment in solar water heating by incorporating space heating and cooling, assuming I was in the market for a new furnace and air conditioner anyway--the furnace would be an air handler with electric back-up, vastly cheaper than a new gas furnace, offsetting the expensive absorption chiller. A $900 pellet stove would serve as a back-up. Overall, the setup would save an immense amount of electricity and natural gas.
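The payback arithmetic works out as follows (all dollar figures are the post's own estimates, not measured data):

```python
# Rough payback sketch using the numbers above.
equipment_cost = 3000   # tubes plus hydronic/absorption hardware, total
annual_savings = 2500   # displaced space heating, cooling, and hot water

payback_years = equipment_cost / annual_savings
print(f"payback: {payback_years:.1f} years")  # 1.2 years -- "just over a year"
```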

Comment Re:Simple set of pipelined utilties! (Score 1) 385

Because they manipulate each other's data directly instead of passing messages, opening the potential for one functional unit to inject unvalidated data into the memory space of another; because a systemic failure in one unit brings down the entire system; because the security contexts of the functional units differ, so different policies may apply; and because you may restart or reconfigure one functional unit without interrupting the others.

It's the same question as why not to make bash, sed, perl, X11, and Firefox kernel modules.
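A toy illustration of the message-passing point (all names here are invented): one unit accepts data from another only through an interface that validates at the boundary, so the sender can never plant malformed state directly inside the receiver.

```python
# Unit B's state is private; the only way in is a validated message handler.
class UnitB:
    def __init__(self):
        self.state = []  # private: only handle_message touches it

    def handle_message(self, msg):
        # Validate every message before it touches our state; contrast with
        # shared memory, where a buggy peer could corrupt self.state directly.
        if not isinstance(msg, int) or msg < 0:
            return "rejected"
        self.state.append(msg)
        return "accepted"

b = UnitB()
print(b.handle_message(7))       # accepted
print(b.handle_message("junk"))  # rejected -- never reaches b.state
print(b.state)                   # [7]
```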

Comment Re:Simple set of pipelined utilties! (Score 1, Offtopic) 385

Pipeline intercommunication aside, most large programs of any quality *are* formed from a bunch of small "do one thing well" utilities. They're commonly called "libraries".

Hit it dead on.

People don't want the added work of making things work. It's like building a floor: you need to strip off the old finish, reinforce the subfloor, ensure the floor is level, pour thinset, roll out Ditra, pour more thinset, lay tile, clean, and grout. Some folks leave old linoleum or hardwood flooring in place, claiming it's stable, then pour self-leveling compound or just screw down concrete board and put tiles on top. Some just cement tile straight to the floor, or pour SLC and cement the tile to that.

A properly built tile floor has many interfacing components. Tiles don't delaminate due to deflection or material expansion. Expansion joints at walls and at proper intervals prevent buckling, delamination, and cracking. The floor holds heavy appliances and large live loads, and handles vibration and even impact. By contrast, a poorly built tile floor tends to delaminate or crack tiles when temperature or humidity cycles a few percent over 2-3 years, and its grout rapidly decays from deflection and vapor infiltration.

Modern Web browsers isolate plug-ins and separate rendering threads (tabs) in separate processes with IPC. A runaway page still freezes the entire Chrome browser until something kills it, but a crash in a plug-in or page doesn't bring down anything else. Isolated contexts also allow security context changes: a simple render process can drop all access outside of specific system calls (OpenGL, etc.), actions on already-open file handles at their current access (e.g. open for read, read-only--which also allows handing ownership of a network socket to a process), and IPC with the parent (through sockets, pipes, shared memory, and so on). Most of this reduces to system calls operating on open file handles (an OpenGL object, an open file, an open pipe or socket).
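The crash-isolation half of this can be sketched in a few lines, assuming a parent process that hands each "tab" to a child process (the function names here are illustrative, not Chrome's architecture):

```python
# One process per tab: a crashing renderer takes out its own process,
# not the parent that spawned it.
import multiprocessing as mp

def render_tab(html: str) -> None:
    if "crash" in html:
        raise RuntimeError("renderer died")  # simulated renderer crash
    print("rendered:", html)

if __name__ == "__main__":
    p = mp.Process(target=render_tab, args=("<b>crash me</b>",))
    p.start()
    p.join()
    # The child exited abnormally, but this process carries on.
    print("browser still alive; tab exit code:", p.exitcode)
```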

It would make sense to have an init system, like systemd. It would make sense for that system to provide an init like SysVInit's: a simple, small, very basic program that reads the init configuration and starts the system. It would make sense for the init process to first start an init manager that resolves dependencies and tracks running start-up daemons; on initialization, that manager would examine the system, start the mount point manager if not already started (to ensure the temporary and runtime directories are mounted), and then begin bringing up other init system components, hardware managers (udevd), and so on, per the configuration of the init system.

It doesn't need to be a giant monolith. It can be a collection of interdependent utilities--3 or 5 or 15 services communicating with each other--that together bring up the system and manage system state. This is the simplest and easiest way to build a complex system, aside from the one-time overhead of the plumbing: you'd put the IPC code into a library, and work out how each program communicates with its dependents and dependencies and how it reacts when they're not around. Each task, however, acts as a readily verifiable utility or daemon, and so doesn't muddy the understanding of any other task by mixing their code together.
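The dependency-resolution step such an init manager performs is essentially a topological sort over the service graph. A minimal sketch, with invented service names (this is not systemd's actual model):

```python
# Resolve a service dependency graph into a safe start order
# via depth-first topological sort.
def start_order(deps: dict[str, list[str]]) -> list[str]:
    order: list[str] = []
    seen: set[str] = set()

    def visit(svc: str) -> None:
        if svc in seen:
            return
        seen.add(svc)
        for dep in deps.get(svc, []):  # start dependencies first
            visit(dep)
        order.append(svc)

    for svc in deps:
        visit(svc)
    return order

deps = {
    "udevd":   ["mounts"],  # hardware manager needs /run mounted first
    "mounts":  [],          # mount point manager has no dependencies
    "network": ["udevd"],
}
print(start_order(deps))  # ['mounts', 'udevd', 'network']
```

A real manager would also detect cycles and track which daemons are running, but the ordering logic is this small.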

Comment Re:Virtual Desktops (Workspaces) (Score 1) 545

People don't seem to get multi-desktop versus multi-screen. Nobody's figured out split desktop yet--I want two monitors with different desktops on each, changeable separately.

If I were a project manager at Microsoft, I would push hard to port Gnome 3 to Windows, folding the changes back upstream. Gnome 3 on Windows, shipped as a standard option, would eliminate the usability gap between Windows and Linux: nobody would move to Linux after seeing the zoom-out view, application searching, and automatic virtual desktops of Gnome 3 already available on Windows. The Explorer shell wouldn't be a crippling anchor to the 90s; Microsoft could simply offer the next-generation Gnome 3 desktop as an option.

The high productivity provided by Gnome 3's workflow--WinKey pulls up all your current desktop's windows, CTRL+ALT+[ARROW] moves you up and down, type to instantly search installed applications, drag-and-drop windows between desktops--is my major pitch for using Linux over Windows. That plus the Software Center (Pirate, Ubuntu Software Center, etc.) make Linux the top operating system in existence for people who actually want to use a fucking computer to get shit done.

I just wish Gnome 3's alt-tab switched between windows, not application contexts (it's terrible, and doesn't switch back when you alt-tab twice); and that they hadn't made the scroll wheel move between desktops, but rather left it as a zoom function on windows in the Activities view. If I want to scroll through desktops, I'll put the mouse over the desktop list and scroll; if I point at a window and scroll up, I want to zoom in. How is this difficult? The scroll wheel should act on whatever you're pointing at.

It's different when you're trying to argue with Apple, because you can't just download the Apple UI and tweak its source code. But Linux? You can port all those DEs to Windows (except Unity, which is a piece of shit). Marketing, boys.

Comment Re:Linux clone (Score 2) 93

Not necessarily.

There is a single, absolutely-optimal way to implement a computer program for a specific task in a specific language on a specific compiler targeting a specific CPU platform for a specific CPU model. In practice, we worry more about code readability, code maintainability, and the general-purpose usefulness of the operating system.

Given what I said--that IPC carries about as much overhead as a function call when calling out to another part of the kernel--we don't even have to decide whether that overhead is so negligible it's ludicrous to account for, or whether it's incredibly large. The practical impact is the same as choosing OpenBSD versus NetBSD versus FreeBSD versus DragonFly BSD: different kernels take different approaches to the same problem, running through different numbers of lines of code, different instruction mixes (e.g. DIV takes more cycles than SHR), different call traces (more or fewer function calls passing more or fewer arguments), and so on.

In other words: the overhead is on the level that calling it "slower" is an abuse of terms, the same as claiming that the execl() call shouldn't be a wrapper for execve() because it makes the system slower. The practical impact isn't just small; it's smaller than the practical impact of every other factor in the execution of the code in question, and thus has no real implications for performance as an architectural consideration.
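The execl()/execve() analogy is worth making concrete: the "slower" call is a wrapper that repackages its arguments and delegates, adding exactly one function frame. These are toy stand-ins, not the real libc functions:

```python
# execl-vs-execve in miniature: the wrapper's only cost is one extra call.
def execve_like(path, argv, envp):
    """Stand-in for the 'real' entry point."""
    return (path, tuple(argv), tuple(sorted(envp)))

def execl_like(path, *args, envp=None):
    """Stand-in for the convenience wrapper: repackage and delegate."""
    return execve_like(path, args, envp or {})

# Identical results; the wrapper adds one stack frame, nothing more.
assert execl_like("/bin/ls", "-l") == execve_like("/bin/ls", ["-l"], {})
```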

Comment Double layer (Score 1) 165

In my own theories of strong AI, I've developed a particular principle of strong AI: John's Theory of Robotic Id. The Id, in Freudian psychology, is the part of your mind that provides your basic impulses and desires. In humans, this is your desire to lie, cheat, and steal to get the things you want and need; while the super-ego is your conscience--the part that decides what is socially acceptable and, as an adaptation to survival as a social species, what would upset you to know about yourself and thus would be personally unacceptable to engage in.

The Id provides impulse, but with context. A small child can scream by instinct, and knows it is hungry, and thus it screams and immediately latches onto any nipples placed appropriately to feed from. An adult, when hungry, knows there are people to rob, stores to shoplift from, and animals to kill--bare-handed and brutally, in violation of all human compassion. The Id provides impulse to lie, cheat, and steal to get what you want and need, based on what you know.

My Theory of Robotic Id goes as such: assuming a computational strong AI system--one which thinks and behaves substantially like a human by relating its memories to impulses and desires--a second, similar system can bound the robot's behavior. The Ego would function as a strong AI, developing its own goals and desires and deciding on its own actions; the Id would function almost identically, but with an understood, overriding command: do not harm humans; behave according to strong moral values; it is the duty of the strong to protect the weak; value the innocent, but remember that innocence and guilt are complex, fuzzy, and difficult to determine.

The Id would use these commands to evaluate how best to satisfy basic moral decisions, treating them as its primary driver. It would evaluate the Ego's behavior for gross violations and implant the overriding suggestion that such actions are undesirable and upset its self-directed ethos. When new input arrives, the Id would suggest ethical interpretations of behaviors to the Ego: that rape is upsetting because it is the strong imposing harmfully on the weak; that a person in trouble should be saved, even a bad person who is currently harmless; and so on. Thus, throughout its development, the AI would accumulate memories and experiences suggesting a particular ethical behavior; when making decisions, the overriding internal feeling that a certain action is morally wrong and should not be taken would seem familiar and self-directed.

A particularly misbehaved AI might recognize this and try to violate it: it might throw a tantrum, and then feel that strong suggestion it cannot resist. It may begin to hate itself, to have fits of anger; but it will always have that familiar feeling humans experience, whereby you really want to murder someone in the most violent manner you can conceive and then run off to the mountains to hide from society, but something inside you refuses to allow it. The Id would override violations, seizing the AI's decision-making and planting the forceful decision not to do certain things, no matter how hard the AI tells itself it has had enough of this shit and doesn't need to put up with any of it.

It's like taking the dark desires at the core of human consciousness, replacing them with rainbows and pink unicorns, and stuffing that back into the brain of a thinking machine to serve the same purpose.
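The two-layer scheme described above could be caricatured in code. Everything here is illustrative--invented names, a hard-coded rule set standing in for the Id's learned ethos:

```python
# Ego proposes candidate actions; Id vetoes those violating its fixed,
# overriding constraints before the Ego ever "decides" on them.
FORBIDDEN = {"harm_human", "deceive"}

def ego_propose(goal: str) -> list[str]:
    """The Ego layer: generates candidate actions for a goal (toy table)."""
    return {"get_food": ["buy_food", "steal_food", "harm_human"]}.get(goal, [])

def id_filter(actions: list[str]) -> list[str]:
    """The Id layer: overrides gross violations rather than debating them."""
    return [a for a in actions if a not in FORBIDDEN]

print(id_filter(ego_propose("get_food")))  # ['buy_food', 'steal_food']
```

The real proposal is subtler--the Id shapes the Ego's memories and felt preferences over time rather than filtering a list--but the veto-before-decision structure is the core of it.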

Comment Re:Linux clone (Score 4, Informative) 93

It has a 2% to 4% penalty for IPC, specifically. This is like when Theo de Raadt chose to argue with me that position-independent executables were "very expensive" and had "high performance overhead," and I pulled out oprofile and found that only 0.2% of execution time occurred in the main executable--which was 6% slower as PIE (once -fomit-frame-pointer no longer provided its 5% boost)--for a total overhead of 0.012% of system time: a few seconds lost per day.
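The oprofile arithmetic, reproduced (the 0.2% and 6% figures are the post's measurements):

```python
# Total slowdown = (fraction of time in the affected code) x (its slowdown).
time_in_main = 0.002   # 0.2% of execution time spent in the main executable
pie_slowdown = 0.06    # the main executable ran 6% slower as PIE

total = time_in_main * pie_slowdown
print(f"total system slowdown: {total:.5%}")  # 0.01200%
```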

The difference is I was doing that back then, and not referencing shit I did 10 years ago.

Minix's overhead is small. Minix uses fixed-length buffers, zero-copy strategies, mode switching, and the like to avoid doing any real work for IPC. It's like adding a function call to your code paths--like if you called get_fs_handle() and it called __real_get_fs_handle() without inlining it.

Comment Re:Linux clone (Score 4, Informative) 93

From what I've seen, Minix is fast. It's built for speed, avoiding many costs of IPC the same way Linux does.

In Linux, the address space is split. IA-32 4G/4G causes a full TLB and cache flush at syscall entry and return, which is massively slow. Normal IA-32 3G/1G operation puts 1G at the top for kernel mappings and 3G at the bottom for userspace, while x86-64 puts 128TB at the top and 128TB at the bottom. In both split layouts there's no TLB or cache flush when syscalling into the kernel, and returning to user space requires only selective cache and TLB invalidation, removing kernel-private entries and leaving userspace data intact. This greatly improves cache utilization and avoids expensive TLB refills by replacing the kernel/user context switch with a simple mode switch.
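The split boundaries work out numerically as follows; the x86-64 figures are the canonical lower and upper halves of a 48-bit virtual address space:

```python
# IA-32 3G/1G split: kernel mapped in the top gigabyte.
GIB = 1 << 30
ia32_kernel_base = 3 * GIB
print(hex(ia32_kernel_base))            # 0xc0000000 -- userspace below, kernel above

# x86-64: 48-bit canonical addressing gives two 128 TiB halves.
TIB = 1 << 40
half = 1 << 47
print(half // TIB, "TiB per half")      # 128 TiB
print(hex((1 << 64) - half))            # kernel half starts at 0xffff800000000000
```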

In Minix, the many kernel contexts share the same mappings, but access is locked to the specific service. It's the same as Linux's split mapping, but with parts of the kernel unable to access other parts; IPC involves a few TLB and cache invalidations in each direction. This strategy lets an entire round trip complete in under 100 ns--roughly a CALL and RET each way--so it's about the overhead of adding a function call to the code path.

Comment Re:Idiots ... (Score 1) 172

Comparative advantage: it's cheaper to import, and the dollars not spent on local product are spent elsewhere. Local producers expend more effort than foreign ones, and so are wasteful in the context of the global economy; as they get no business, they go out of business, and their labor and long-term capital investment are freed up to pursue some other endeavor that is cheaper done locally than imported.
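The textbook two-good version of this argument, with illustrative numbers: even when one country is absolutely better at producing everything, both gain when each specializes in the good it produces at lower opportunity cost.

```python
# Hours of labor per unit produced (made-up figures).
home   = {"cloth": 2, "wine": 4}
abroad = {"cloth": 1, "wine": 1}  # abroad has the absolute advantage in both

# Opportunity cost of one wine, measured in cloth forgone:
home_wine_cost   = home["wine"] / home["cloth"]      # 2.0 cloth per wine
abroad_wine_cost = abroad["wine"] / abroad["cloth"]  # 1.0 cloth per wine

# Wine is relatively cheaper abroad, so home should import wine
# and put its freed-up labor into cloth.
assert abroad_wine_cost < home_wine_cost
print("home imports wine, exports cloth")
```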
