Comment Re:1980s? (Score 1) 180
Moreover: which cycles? Core? FSB? Memory? Eh? Under what test conditions?
Given that we had 80-bit extended precision FPU registers in 8087 chips 30 years ago, I don't think the 64-bit path/register assertion holds any water. I have lots of code that uses 128-bit registers and runs on pretty boring consumer CPUs. The reason to increase data path width is not to address more data, but to increase the throughput. I use 128-bit registers with code that uses no virtual memory and runs with a couple MBytes of RAM.
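The throughput point can be sketched with trivial arithmetic (this is a generic model, not tied to any particular CPU): for a streaming workload, the number of register-width operations needed to touch a fixed buffer scales inversely with the register width, regardless of how much memory you can address.

```python
def ops_needed(total_bytes: int, register_bits: int) -> int:
    """Register-width loads/stores needed to stream total_bytes."""
    register_bytes = register_bits // 8
    # Round up: a partial final chunk still costs one operation.
    return -(-total_bytes // register_bytes)

buffer = 2 * 1024 * 1024  # a couple MBytes, as above

for width in (32, 64, 128):
    print(width, ops_needed(buffer, width))
```

Doubling the register width halves the operation count for the same data, which is why wide registers pay off even in a tiny address space.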
I was running VMs on Z80 hardware - it was slow, since the CPUs (I did 6502 and Z80) were emulated in software, but hey, you don't need any special hardware or CPU features for virtualization. The special features are performance optimizations, nothing else.
Not only do their prices sting, but they also suffer heavily from living on their own little island: they steadfastly refuse to use standard terminology, and they seem to do a lot of stuff differently just because they can - not because it makes sense.
For whatever reason, VirtualPC/XP was always sluggish compared to VMware with an XP VM on the same machine, with the same host OS and otherwise identical settings. And this wasn't on underpowered hardware either.
I'd perhaps add that OS X didn't have any revolutionary changes in its UI, like we got with Windows 8. The dock is still here, 10 major OS X releases after the first one. The stoplights in the title bar are still there, too. Things have changed around multi-screen, virtual desktop and full screen modes, certainly, and Spotlight arrived along the way (10.4). I've been using OS X as my main desktop since 10.5, and it has been a mostly painless experience, with very little re-learning needed to go between versions.
$200 is roughly a monthly lease on a Volt.
The Volt has a 350+ mile range. The Prius has a 500+ mile range. So I don't buy your range rambling.
Sigh. In the U.S., the existence of the Copyright Law is due to the Constitution, and its purpose has nothing whatsoever to do with creators' rights. The Copyright Law exists
... To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries. (U.S. Constitution. Art. I, Sec. 8.)
Again, let's be clear: the purpose is to promote the progress of science and useful arts. The means of achieving it are to give authors and inventors some rights for a limited time.
Oooh, Colargol! Someone should something something Colargol in SpaceX's Dragon
I forgot to add that if, due to a very unlikely circumstance, a domain is partially magnetized during one write of a track, it'll very likely be partially magnetized during another write of the same track as well. So yes, a poorly written inter-track domain may contain noise, not data. That doesn't make it useful for data recovery, and such domains are relatively few.
There are two things in this picture that are very different from what you have, say, in an audio tape recorder. 1. A head producing a very high magnetic field gradient. 2. A high-coercivity material that doesn't give a shit about fields that aren't strong enough to remagnetize a domain. This adds up to a situation where domains at the boundary between tracks work in a binary fashion. Either they flip, and they contain data from the current track, or they don't and they contain data from the neighboring track. There's no situation where they'd have previous data from the same track, because the head positioning errors are smaller than the domain size!
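The binary-flip behavior can be sketched with a toy model (all numbers here are made-up illustrative parameters, not real drive specs): a domain flips only if the write field at its location exceeds the material's coercivity, and head positioning jitter is smaller than a domain, so every domain ends up wholly written by one track or the other - never blended.

```python
import random

COERCIVITY = 1.0   # field needed to remagnetize a domain (arbitrary units)
DOMAIN = 10        # domain size, arbitrary units
JITTER = 3         # head positioning error, smaller than DOMAIN

def write_track(domains, head_center, head_halfwidth, value):
    """Flip every domain whose center lies under the strong field."""
    for i in range(len(domains)):
        center = i * DOMAIN + DOMAIN // 2
        # Strong field under the head, weak fringe field outside it.
        field = 2.0 if abs(center - head_center) <= head_halfwidth else 0.1
        if field >= COERCIVITY:      # high-coercivity material:
            domains[i] = value       # weak fringe fields do nothing

random.seed(0)
domains = ['neighbor'] * 8
for _ in range(100):  # repeated writes of the same track, with jitter
    jitter = random.randint(-JITTER, JITTER)
    write_track(domains, head_center=35 + jitter, head_halfwidth=20,
                value='current')

# Every domain holds 'current' or 'neighbor' data in full; the model
# has no partially magnetized middle state.
print(domains)
```

Boundary domains may end up belonging to either track depending on the jitter, but each one is cleanly written - which is the point being argued above.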
The head positioning error is much smaller than the width of the track. There are no buffer zones.
That's not the case with modern hard drives, at least nothing made in the last decade. You worked on such a machine a long time ago and/or you weren't really told what it was for. The machine you worked on, if it was made for recovery of drives manufactured after 1998 or thereabouts, was simply made to read data that was not overwritten. It was to be used when one wished to read the drive's contents without using the drive's electromechanical system to do so. Such a machine makes life easier iff you have reverse-engineered enough of the drive to know the encoding used on the data, and the formatting of all the housekeeping information. It lets you skip having to do platter transplants, and generally having to use the drive's own firmware for data recovery - where the firmware wasn't designed for that.
most of the data can generally be recovered easily enough
Nope. The drives manufactured in the last two decades, give-or-take, have the size of magnetic domains matched to the size of the field generated by the heads. The "edge" of the track is defined by where data from one track ceases to be, and the data from another track begins, and this is a binary thing. One domain here has data from this track, another domain there has data from that track.
What people constantly fail to realize is that if there were an area of disk, the mythical "inter-track gap", that was any good at storing any data, it'd be stupid for the manufacturer not to put the expensive platter real estate to good use. And they do precisely that: they use all of the platters to store your current data. There is no inter-track gap.
"If it ain't broke, don't fix it." - Bert Lantz