Comment Re:Should be micro kernel (Score 4, Insightful) 209

I'm sure you're right, though they have something to do with microkernels. There was a Linus interview from a few years back explaining his preference for the monolithic approach, and he explained that modules were introduced to give most of the benefits of the microkernel, without the drawbacks.

I'd have to see that interview to believe that's exactly what he said. In this essay by him, he says:

With the 2.0 kernel Linux really grew up a lot. This was the point that we added loadable kernel modules. This obviously improved modularity by making an explicit structure for writing modules. Programmers could work on different modules without risk of interference. I could keep control over what was written into the kernel proper. So once again managing people and managing code led to the same design decision. To keep the number of people working on Linux coordinated, we needed something like kernel modules. But from a design point of view, it was also the right thing to do.

but doesn't at all tie that to microkernels.

Loadable kernel modules in UN*Xes date back at least to SunOS 4.1.3 and AIX 3.0 in the early 1990s. I'm not sure they were introduced to compete with microkernels.
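For concreteness, here's a minimal sketch of what a loadable module looks like on present-day Linux (the file name hello.c, the module name, and the messages are all made up); the point is simply that a module is kernel code loaded into the running kernel, not a userland server:

/* hello.c: minimal sketch of a Linux loadable kernel module (names and
 * messages made up).  Once loaded, this runs in the kernel's address
 * space -- it is not a userland server. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

static int __init hello_init(void)
{
        pr_info("hello: loaded into the running kernel\n");
        return 0;
}

static void __exit hello_exit(void)
{
        pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");

Built against the kernel headers with a one-line Kbuild makefile (obj-m += hello.o), it's loaded with insmod and removed with rmmod; the mechanism is the same whether or not anything about the kernel is microkernel-ish.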

Comment Re:Speedups? (Score 3, Informative) 209

> There are over 100 separate ... speedups

The last time I looked, which was quite a few years ago TBH, the BSDs had, IIRC, fewer than 100 lines of x86 assembly in the bootstrap.

From relatively-recent FreeBSD:

$ find sys/i386 -name '*.[sS]' -print | xargs wc -l
208 sys/i386/acpica/acpi_wakecode.S
40 sys/i386/bios/smapi_bios.S
396 sys/i386/i386/apic_vector.s
78 sys/i386/i386/atpic_vector.s
160 sys/i386/i386/bioscall.s
470 sys/i386/i386/exception.s
900 sys/i386/i386/locore.s
279 sys/i386/i386/mpboot.s
831 sys/i386/i386/support.s
538 sys/i386/i386/swtch.s
179 sys/i386/i386/vm86bios.s
37 sys/i386/linux/linux_locore.s
127 sys/i386/linux/linux_support.s
32 sys/i386/svr4/svr4_locore.s
202 sys/i386/xbox/pic16l.s
494 sys/i386/xen/exception.s
361 sys/i386/xen/locore.s
5332 total
$ find sys/amd64 -name '*.[sS]' -print | xargs wc -l
282 sys/amd64/acpica/acpi_wakecode.S
326 sys/amd64/amd64/apic_vector.S
73 sys/amd64/amd64/atpic_vector.S
541 sys/amd64/amd64/cpu_switch.S
906 sys/amd64/amd64/exception.S
88 sys/amd64/amd64/locore.S
236 sys/amd64/amd64/mpboot.S
56 sys/amd64/amd64/sigtramp.S
732 sys/amd64/amd64/support.S
75 sys/amd64/ia32/ia32_exception.S
161 sys/amd64/ia32/ia32_sigtramp.S
38 sys/amd64/linux32/linux32_locore.s
124 sys/amd64/linux32/linux32_support.s
246 sys/amd64/vmm/intel/vmx_support.S
42 sys/amd64/vmm/vmm_support.S
3926 total

It's about 45,000 lines in Linux 3.19's arch/x86. A fair bit of that is crypto code, presumably either hand-optimized or using newer instructions to accelerate various crypto calculations.
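That figure can be reproduced with the same sort of count (assuming a Linux 3.19 source tree; the exact total depends on which .s/.S files the tree ships):

$ find arch/x86 -name '*.[sS]' -print | xargs wc -l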

Comment Re:Should be micro kernel (Score 2) 209

As I understand it, NeXT / OSX started with a micro-kernel philosophy and then introduced some monolithic kernel concepts to address the performance bottleneck of messaging between true micro modules.

Meanwhile, Linux started as a monolithic kernel, but introduced (un)loadable modules to address maintainability and extensibility.

So if we described it as a continuum with 'pure microkernel' being a '1' and pure monolithic kernel being a '10', then OSX would be something like a '3' and Linux would be a '7'.

Loadable kernel modules have nothing to do with microkernels. A truly micro microkernel wouldn't need loadable kernel modules, because all the loadable functionality would run in userland; conversely, plenty of monolithic kernels have loadable kernel modules.

And OS X is a lot further from "pure microkernel" than 3. The "monolithic kernel concepts" include "running the networking stack in the kernel, standard BSD-style", "running the file systems in the kernel, standard BSD-style", and "implementing most process and VM management operations with standard system calls in the kernel".

Comment Re:Should be micro kernel (Score 2) 209

To add, your claim that XNU does not follow any microkernel rules is simply false. XNU uses microkernel-style message passing.

XNU has system calls to allow messages to be sent between processes, including sending large amounts of data by page flipping.

It just doesn't happen to use that to implement very much of the UNIX API; it's not used to implement file I/O (that goes through ordinary BSD system calls to in-kernel file systems that are called through a BSD-style VFS) or network I/O (that goes through ordinary BSD system calls to in-kernel networking stacks that are called through a BSD-style kernel socket layer) or much of the process/thread management or VM code (that goes through ordinary system calls that end up calling Mach task, thread, and VM management calls).

It is used for communication between user processes, and for some kernel-to-user communication, but that's the same sort of use that happens in systems with Boring Old Monolithic Kernels.
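For anyone curious what that message-passing primitive looks like from user space, here is a minimal, self-contained sketch in C (the payload struct, the message id, and the send-to-your-own-port pattern are purely illustrative, and a real program would check every kern_return_t):

#include <mach/mach.h>
#include <stdio.h>

/* Illustrative message layout: a Mach header followed by an inline payload. */
typedef struct {
    mach_msg_header_t header;
    int               payload;
} demo_msg_t;

/* On receive, the kernel appends a trailer, so leave room for it. */
typedef struct {
    mach_msg_header_t  header;
    int                payload;
    mach_msg_trailer_t trailer;
} demo_recv_msg_t;

int main(void)
{
    mach_port_t port;

    /* Create a receive right in this task; sending to ourselves keeps the
       sketch self-contained (real IPC hands a send right to another task). */
    mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &port);

    demo_msg_t msg = { 0 };
    msg.header.msgh_bits        = MACH_MSGH_BITS(MACH_MSG_TYPE_MAKE_SEND, 0);
    msg.header.msgh_size        = sizeof(msg);
    msg.header.msgh_remote_port = port;
    msg.header.msgh_local_port  = MACH_PORT_NULL;
    msg.header.msgh_id          = 1234;          /* arbitrary message id */
    msg.payload                 = 42;

    /* Send the message... */
    mach_msg(&msg.header, MACH_SEND_MSG, sizeof(msg), 0,
             MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);

    /* ...and read it back off the same port. */
    demo_recv_msg_t rcv = { 0 };
    mach_msg(&rcv.header, MACH_RCV_MSG, 0, sizeof(rcv),
             port, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);

    printf("received payload %d via mach_msg()\n", rcv.payload);
    return 0;
}

None of this is how file or network I/O gets done on OS X, though; those go through the ordinary BSD system calls described above.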

Comment Re:the superbowl of stupidity (Score 1) 290

Does the title "the superbowl of stupidity" refer to the content of your post? Because you seem to be forgetting how Apple did after entering the lackluster MP3 player market, the lackluster smartphone market, and the lackluster tablet market -- each time, entering an existing space and selling orders of magnitude more products than their competitors. Maybe, just maybe, they deserve the benefit of the doubt here.

Comment Re:Trade off tape vs HD (Score 1) 229

A good backup strategy involves reading the tapes back on a fairly frequent basis to make sure they are still readable. Regardless of their reputation for reliability, tapes at rest drop bits at a rate similar to hard drives. You're very lucky to have read 100% of that DAT tape without any issue (or maybe you just didn't notice; bit errors are hard to spot until you need that particular bit). Also, most companies really don't care about their data from 10 years ago (and if they do, it's because they want it GONE in case of discovery).

Back then, tape WAS the cheaper option. Today, hard drive storage is on par in terms of up-front cost. Most tape strategies involve a large(r) tape robot and multiple heads, which is where the expense comes in (energy costs, too). With hard drives that is less of an issue, and hard drives can also be spun down. Hard drives are also better at random access, which is generally what you need when restoring day-to-day backups. Massive failures are usually not the problem; backup restoration usually boils down to a user wanting a version of a Word document from a year ago, having forgotten what it was called back then. Reading through a tape for that kind of thing is SLOW to the point of being infeasible.

Tape backup is still a good solution, but it's losing ground because hard drives are faster and, for most people, what they provide is 'good enough'. Tape is great if you have so much data that you need its density (a rack can hold hundreds of PB worth of tape but only ~5PB worth of hard drives).


Report: Apple Watch Preorders Almost 1 Million On First Day In the US 290

An anonymous reader writes: The launch of the Apple Watch has gotten off to a good start, with an estimated 1 million pre-orders in the U.S. on Friday. "According to Slice's Sunday report, which is based on e-receipt data obtained directly from consumers, 957,000 people preordered the Watch on Friday, with 62% purchasing the cheapest variant, the Apple Watch Sport. On average, each buyer ordered 1.3 watches and spent $503.83 per watch."
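Back-of-the-envelope, using only the figures quoted (and assuming the per-watch average applies across all orders): 957,000 buyers at 1.3 watches each is roughly 1.24 million watches, and at $503.83 per watch that works out to on the order of $627 million in first-day preorders.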

Comment Re:Holy Fuck (Score 1) 304

The parameterizations in climate models are there to cover things that don't fit the scale* the models have to run at, or things that are not well enough understood to put into code in the model. The parameterizations basically emulate things we know about that can't be directly included in the code. Even though they are derived from measurements that have error, it makes no sense to carry that error into the models, because the parameters are only an emulation of reality. It would make more sense to just vary the parameters within the measurement error across different model runs to see how that affects the output.

* By scale I mean the grid size and time divisions they have to use due to the limitations of computer power.
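A minimal sketch of that vary-the-parameters suggestion, in C, with a made-up parameter value, error bar, and stand-in 'model' (nothing here comes from a real climate model): draw the parameter from within its measurement uncertainty on each run and look at the spread in the output.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* One normally distributed sample via Box-Muller. */
static double gauss(double mean, double sigma)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return mean + sigma * sqrt(-2.0 * log(u1)) * cos(2.0 * acos(-1.0) * u2);
}

/* Stand-in for a parameterized process in one grid cell; entirely fictional. */
static double toy_model(double param)
{
    return 3.2 * param + 0.1 * param * param;
}

int main(void)
{
    const double measured = 1.7;  /* hypothetical measured parameter value */
    const double error    = 0.2;  /* hypothetical measurement uncertainty  */

    srand(42);
    for (int run = 0; run < 10; run++) {
        double p = gauss(measured, error);  /* perturb within the error bar */
        printf("run %2d: param=%.3f output=%.3f\n", run, p, toy_model(p));
    }
    return 0;
}

Compile with -lm; with enough runs, the spread of the output is an estimate of how sensitive the result is to that one parameter's measurement error.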

Comment Re:Adaptation versus Mitigation (Score 1) 304

I think we're using two different definitions of forcing, and researching the matter doesn't really clear it up very well. I was using forcing in the sense that the 280 ppm of CO2 that was in the atmosphere before the recent rise is a forcing, and by adding more CO2 we've increased the forcing. A quote from a 2005 paper, "Earth's Energy Imbalance: Confirmation and Implications," by James Hansen et al., supports this:

The largest forcing is due to well-mixed greenhouse gases (GHGs)—CO2, CH4, N2O, CFCs (chlorofluorocarbons)—and other trace gases, totaling 2.75 W/m^2 in 2003 relative to the 1880 value (Table 1).

Notice the paper says "2.75 W/m^2 in 2003 relative to the 1880 value", which implies they're taking the existing natural forcing into account. But I can see where you might consider forcing to mean just the part that's over and above the natural forcing that preexisted anthropogenic climate change.

Also, I don't think your math of subtracting the current anthropogenic forcing of 2.9 W/m^2 from the energy imbalance is valid. In the first place, if the 2.9 W/m^2 forcing is relative to sometime in the 1800s, then we've already realized a fair amount of the warming it caused, so the energy imbalance comes from only the part of that forcing that hasn't been realized yet, not the whole 2.9 W/m^2. To me that implies that if the energy imbalance continues to remain the same over time, then the forcing must be increasing to keep the imbalance going. Otherwise the energy imbalance would cause temperatures to eventually catch up to the existing forcing (natural and anthropogenic), reducing the imbalance to zero.
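To put that in the usual energy-balance terms (a standard textbook relation, not something quoted from the paper), the imbalance is roughly

N = F - lambda * dT

where F is the total forcing, dT is the warming realized so far, and lambda is the climate feedback parameter, so lambda * dT is the part of the forcing the surface has already responded to. As long as dT > 0, N is only the unrealized remainder of F, which is why subtracting the full 2.9 W/m^2 from the imbalance doesn't work; and if N holds steady while dT keeps rising, F has to be rising as well.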

The only way we could reduce the anthropogenic forcing of 2.9 W/m^2 is by reducing the concentration of CO2 in the atmosphere. If all we did was stop emitting CO2, the excess we've added would remain, and the anthropogenic forcing would still exist.

Comment A matter of priorities (Score 3, Insightful) 212

The US government has lost sight of the larger issue here. The tail (NSA and law enforcement) is wagging the dog.

The NSA and law enforcement agencies want to be able to intercept anything, since it makes their jobs easier. However, this runs counter to the larger national interest of the United States.

Which country has the highest level of connectedness and dependence on the Internet? Which country would be hurt worst if a sophisticated attacker were able to penetrate the systems connected to the Internet and use them for malicious actions? The US, that's who. It is overwhelmingly in the US's overall national interest to properly secure the Internet and communications infrastructure. Eavesdropping on everyone else is a secondary benefit by comparison.

The proper role of the President and the Attorney General is to separate the desire of the NSA and law enforcement to make their jobs easier from the greater benefit to the country as a whole. They need to tell the ambitious underlings "NO" in unequivocal terms, then bitch slap them if they keep whining about it.

--Paul

Comment Re:Never consumer ready (Score 1) 229

Tapes are actually cheaper per TB if you need at least a 4U tape robot's worth of tape ($20-30k, several PB). Otherwise, disk storage is the way to go. Most enterprises don't need tape, though; they have it grandfathered in from 'mainframe' systems and 90s Sun systems, and a 'we don't know any better' mentality.

Comment Re:LHC Too (Score 2) 229

They have 1PB worth of disk cache in front of their tape storage... so yeah, quite a few PB. Tape isn't dead, but it's not worth it for small quantities (~100TB), and many companies don't even have 100TB worth of centralized storage. Most companies can get away with storing stuff in the 'cloud', which is very expensive per GB/TB compared to either local or colocated disk storage or tape.
