
Comment Windows NT Microkernel, by David Cutler et al (Score 2, Insightful) 497

In 2000 they should have copied Apple again and based their next Windows (the one that would become Vista) on a BSD or Linux kernel.

I have never heard anyone say a bad word about the actual NT Microkernel, or, for that matter, about Cutler et al's work on VMS [which, to this day, has a reputation as being one of the most rock-solid, 24x7x365, 5/6/7/8/9-sigma operating systems known to man].

Even the old embedded versions of NT, although they never gained all that much market share [vis-a-vis VxWorks], had a reputation for being very solid operating systems.

Now you might not like some of the cruft which has been bolted on top of the NT Microkernel [Win32, Win64, NTVDM's, DirectX, etc etc etc], but if anyone has a beef with the underlying microkernel, then I haven't heard about it.

Comment right, that was my original point (Score 1) 240

It's probably more likely that kernel developers will just need to adjust defense mechanisms to account for a new set of attack vectors.

Right - that was my original point, up at something like the GGP or GGGP level of this thread.

The kernel guys [and/or the Intel guys] were really sloppy when Intel first introduced dual-cores with shared cache - we had all sorts of exploits where one core could sniff from a cache which [ostensibly] was supposed to have been under the purview of another core.

And I'm saying that the kernel/microkernel guys - in conjunction with the hardware guys writing the drivers [ATi and the various "free"-lancers], and even the "application" guys, like the DirectX team at Microsoft, and the OpenGL crew - will all need to buckle down and put on their thinking caps and ask themselves: How are we going to harden the kernel [microkernel] against any incestuous attack vectors coming from a GPU core which lives on the same silicon as the CPU cores? [And then they need to burn a little midnight oil to produce a stable implementation of their plans.]

Eventually they will get it right [and hopefully AMD has put a fair amount of thought into this already], but if anyone gets sloppy [from the AMD CPU team to the kernel/microkernel teams to the ATi driver team to the "applications" guys at DirectX and OpenGL], then we could be looking at some great big gaping holes in the security model.
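The kind of shared-cache sniffing I'm talking about can be sketched as a toy prime+probe model. This is purely a simulation - the cache is just a Python set of line indices, and names like `prime_probe` are invented for the example; a real attacker would measure reload timing on actual hardware rather than inspect a set:

```python
# Toy model of a prime+probe attack across two cores sharing a cache.
# Everything here is simulated: cache sets are a Python set of indices,
# and a "missing" set stands in for a slow (cache-missing) probe.

NUM_SETS = 16  # hypothetical number of cache sets

def victim_access(cache, secret):
    # The victim's memory access pattern depends on its secret:
    # it touches exactly one cache set, evicting the attacker's line.
    cache.discard(secret % NUM_SETS)

def prime_probe(secret):
    # Prime: the attacker fills every cache set with its own lines.
    cache = set(range(NUM_SETS))
    # The victim runs on the sibling core, sharing the same cache.
    victim_access(cache, secret)
    # Probe: any set no longer holding the attacker's line was touched
    # by the victim - a real attacker detects this as a slow reload.
    evicted = [s for s in range(NUM_SETS) if s not in cache]
    return evicted[0]  # leaked: secret mod NUM_SETS

print(prime_probe(secret=5))   # recovers 5
print(prime_probe(secret=21))  # recovers 21 % 16 == 5
```

The victim never "sends" anything to the attacker - the leak rides entirely on which cache set got evicted, which is exactly why a shared cache between mutually-untrusting cores [or a CPU and a GPU] is such a headache.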

Comment no "chipset" anymore; pr0n cache sniffers? (Score 2, Interesting) 240

In the old days, there was a physical chipset which sat between the GPU and the CPU.

But in this architecture, there is no physical barrier - they're on the same silicon.

Look for the bad guys to try to force the graphics drivers to sneak over and sniff the memory of the CPUs - I can imagine how they might be able to load some code in a pr0n movie that could tell some pointer in a GPU driver to point to addresses of cache which [at least ostensibly] belong to a CPU, at which point they should be able to read the cache.

And if they're lucky, their specially-crafted pr0n-videos might even be able to WRITE to the CPU cache, at which point they can probably pwn the entire operating system.

Hopefully AMD has put some thought into their implementation, and has some sort of hardware safeguards that force the GPU to always act as the "slave" of its masters [the CPUs], but, if not, then all Hades could break loose.

[And Intel probably won't put nearly as much thought into their implementation as AMD did with theirs.]
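The "GPU always acts as the slave" safeguard amounts to something like an IOMMU: a gatekeeper that faults any GPU memory request outside the ranges the kernel explicitly granted. Here's a minimal toy model - the class, method names, and address ranges are all invented for illustration, not AMD's actual design:

```python
# Toy sketch of an IOMMU-style gatekeeper: every GPU memory request is
# validated against ranges the kernel explicitly mapped for the GPU.
# Anything outside those windows faults instead of leaking data.

class IommuError(Exception):
    pass

class Iommu:
    def __init__(self):
        self.granted = []  # list of (start, end) ranges mapped for the GPU

    def grant(self, start, length):
        # The kernel hands the GPU a window of memory (e.g. a framebuffer).
        self.granted.append((start, start + length))

    def check(self, addr):
        # A GPU (or a compromised GPU driver) can only touch memory the
        # kernel granted it; everything else is blocked.
        for start, end in self.granted:
            if start <= addr < end:
                return True
        raise IommuError(f"GPU access to {addr:#x} blocked")

iommu = Iommu()
iommu.grant(0x1000, 0x1000)       # hypothetical framebuffer window
print(iommu.check(0x1800))        # True: inside the granted window
try:
    iommu.check(0xdead0000)       # kernel memory: blocked
except IommuError as e:
    print(e)
```

Whether the shipping silicon actually enforces something this strict - rather than trusting the driver to behave - is exactly the question.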

Comment Security implications for kernels & drivers (Score 1) 240

Many of the improvements stem from eliminating the chip-to-chip linkage that adds latency to memory operations and consumes power - moving electrons across a chip takes less energy than moving these same electrons between two chips. The co-location of all key elements on one chip also allows a holistic approach to power management of the APU.

Dual-core shared-cache architectures wreaked havoc on kernel security when they were first introduced - and we still aren't entirely certain that our operating systems are fully secure against shared-cache exploits - we seem to get a new one about every six to twelve months.

So are the kernel [& micro-kernel] and driver guys fairly confident that we won't be getting incestuous security problems when kernels and drivers start sharing the same silicon?

I predict an initial round of exploits as the kernel guys have to re-learn their approaches to hardening their operating systems against the graphics drivers.

Comment deduction, induction, pragmatism, fanaticism... (Score 1) 73

The big difference between design and academia is that when you build something it is judged by Reality. In academia another person is the judge. A person can be manipulated into agreeing with your theoretical ideas. Reality doesn't care.

The underlying questions, to which you allude here, are [or once were] the passionate obsession of professional philosophers, from David Hume [in the mid-18th Century], through Immanuel Kant & Friedrich Schiller [at the turn of the 19th Century], and on to Charles Sanders Peirce [in the late 19th and early 20th Centuries].

If you are at all interested in these topics, then you ought to read a little Peirce, on questions of common sense and pragmatism [or pragmaticism, as Peirce liked to call it].

Of course, during the remainder of the 20th Century, all of that progress came to a screeching halt, with the rise of the fanatical nihilism of Sartre, Derrida, Foucault, Chomsky, and their ilk...

Comment MBAs, meet Novell BorderManager, circa 1997 (Score 2, Informative) 414

What about tying a firewall into an authentication system so that when jdoe logs in, only then are the firewalls opened to pass her traffic?

Novell was doing much of what the OP was asking for, back circa 1997, with their BorderManager product.

Unfortunately, Novell always seemed to have the evil MBAs running the company [is there such a thing as a good MBA?], and, the last I heard, BorderManager was allowed [decreed? required?] to wither on the vine.

But BorderManager, as originally envisioned [and it was a helluva nice vision], provided a spectacular framework for dealing with these problems.

Oh well, only the good die young.
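For the curious, the core of the BorderManager-style idea - firewall rules keyed to an authenticated identity rather than a bare IP address - fits in a few lines. The class and method names below are invented for illustration and don't reflect Novell's actual API:

```python
# Sketch of an identity-aware firewall: traffic is permitted only while
# an authenticated user owns the source IP, and only for the ports that
# user's policy allows. Rules appear at login and vanish at logout.

class IdentityFirewall:
    def __init__(self):
        self.sessions = {}   # user -> source IP of their session
        self.policy = {}     # user -> set of allowed destination ports

    def login(self, user, src_ip, allowed_ports):
        # Authentication event: open the firewall for this user's traffic.
        self.sessions[user] = src_ip
        self.policy[user] = set(allowed_ports)

    def logout(self, user):
        # Session ends: the user's rules disappear with it.
        self.sessions.pop(user, None)
        self.policy.pop(user, None)

    def permit(self, src_ip, dst_port):
        # Pass traffic only if some logged-in user owns this source IP
        # and their policy covers the destination port.
        for user, ip in self.sessions.items():
            if ip == src_ip and dst_port in self.policy[user]:
                return True
        return False

fw = IdentityFirewall()
print(fw.permit("10.0.0.5", 443))          # False: nobody logged in yet
fw.login("jdoe", "10.0.0.5", {80, 443})
print(fw.permit("10.0.0.5", 443))          # True: jdoe's session opens it
fw.logout("jdoe")
print(fw.permit("10.0.0.5", 443))          # False again after logout
```

The design point is that the rule's lifetime is the session's lifetime - no stale holes left open for whoever grabs that IP next.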

Comment Monolithic Kernel = Death of Self-Teaching (Score 5, Insightful) 742

Have any of you guys ever looked at a picture of the Linux kernel?

My best guess [and I am not trying to be facetious] is that unless you were in on kernel development in the very early days [so that you had some hope of learning it when it was still tractable], then the thing has gotten so big now [what is it - like 20,000 files which get compiled in the basic kernel?], and the learning curve has gotten so steep, that no new developers have any realistic hope of grokking it anymore.

Seriously - at this point, just learning the kernel would be akin to a 6- or 8-year PhD project [in something like a Department of Archaeology, studying ancient Egyptian hieroglyphics].

Comment looks to be $75 to $100 per month (Score 0) 178

1. Cable and FTTH
2. DSL
3. Satellite and 3G
4. Dial-up

It's an imperfect world, but all of those media [with the exception of dial-up] seem to be settling in the general vicinity of $75 to $100 per month.

Which I guess is what the free market is telling us is the cost of delivering high [or high-ish] speed "last mile" access to a nation with a population as widely-dispersed as the USA.

If you want significantly cheaper access, then I guess you would need to move to downtown Tokyo, or downtown Shanghai, and live like a sardine in a tin can.

Comment what Turing & Church proved (Score 1) 345

The assertion made by the Toyota representative was that it was impossible for software to ever be proven scientifically. This is unquestionably false.

What Turing [& Church] proved is that algorithms CANNOT be examined "scientifically" - that there can exist no [interesting, non-trivial] algorithm for examining algorithms - that there can be no "meta-theory" of algorithms.

In the end, there can only be eyeballs [accompanied by trial and error].

Comment ALAN TURING: HOW WRONG, INDEED? (Score 1) 345

How wrong can you be? Yes there is. Software is fundamentally the composition of many mathematical functions. Its results can be formally proven if the hardware it is running on is assumed (or preferably also proven) to be error free. Don't get me wrong, it would be incredibly cost, labor and time expensive, and require real computer scientists, but it is certainly possible.

The 1930s just called, and they want their Halting Problem back...
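For anyone who hasn't seen it, the diagonalization at the heart of the Halting Problem fits in a few lines of Python. The `halts()` stub below is hypothetical by construction - the entire point of the argument is that no real implementation of it can exist:

```python
# Turing's diagonalization, sketched in Python. Suppose a total function
# halts(f, x) could decide whether f(x) halts. Then trouble() below
# defeats it: trouble(trouble) halts exactly when halts() says it
# doesn't. Contradiction either way, so halts() cannot exist -- its
# body here is a stand-in, not an implementation.

def halts(f, x):
    raise NotImplementedError("no such decider can exist")

def trouble(f):
    # Do the opposite of whatever the decider predicts about f(f).
    if halts(f, f):
        while True:      # decider said "halts"? then loop forever.
            pass
    return "halted"      # decider said "loops"? then halt.

try:
    trouble(trouble)
except NotImplementedError as e:
    print(e)
```

Note this forecloses a general-purpose decider, not case-by-case verification: proving one particular program correct [by hand, or with a proof assistant] remains perfectly possible, which is why both posts above are partly right.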
