Comment Use mixed Linux + Windows mode. (Score 1) 313

Currently I would not recommend installing an old 32-bit Windows on new hardware. Reasons include (1) a complete lack of drivers for some hardware and (2) hardware (e.g. AHCI) that has to run in slow compatibility modes.

Assuming your application runs on it, I would suggest Windows 2000 in a VM, with the guest given about 2GB of private memory (or just under). This is because (1) Windows 2000 is still very light on modern (or nearly modern) hardware; (2) its compatibility is very good with both the 9x versions and the later XP and W7, so most (non-Microsoft) programs will run; (3) it has a reasonable DOS box and good 16-bit support; and (4) using the "270" hack you are not going to have any problems with license keys or activation servers going offline.

For the VM host I would suggest a 64-bit Linux using KVM as the virtual environment. I would NOT recommend a 32-bit Linux kernel, because the caching will not be able to use all of the memory (despite it being available to applications) and the VM guest will be limited to 2GB with no way around it. This is not the simplest of virtual hosts to work with, but it does have very good performance and very wide hardware support. In addition, with the correct choice of distribution it will be a very light host in terms of disk and memory overhead.
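
For illustration, here's a minimal sketch (Python used only to build the command line) of what booting such a guest under KVM might look like; the qemu binary name, the memory figure and the disk image path are my assumptions, and tools like virt-manager or libvirt will happily do all this for you instead:

    #!/usr/bin/env python3
    """Minimal sketch: boot a Windows 2000 guest under KVM with just
    under 2GB of RAM.  Binary name and disk image are assumptions;
    adjust for your distribution."""
    import subprocess

    QEMU = "qemu-system-i386"     # 32-bit guest; some distros ship a 'kvm' wrapper
    DISK = "win2k.qcow2"          # hypothetical guest disk image

    cmd = [
        QEMU,
        "-enable-kvm",            # hardware acceleration via the KVM module
        "-m", "1920",             # just under 2GB, as suggested above
        "-hda", DISK,
        "-vga", "std",
        "-usb",
    ]
    subprocess.run(cmd, check=True)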

OTOH, if you just want something simple, use Windows 7's XP Mode (or perhaps Win2k in Virtual PC).

Comment Usenet to the planets. (Score 1) 109

Forget all this talk of UUCP, Fido and normal packet protocols; the closest current analogue is sending binaries over Usenet.

The most important part is the delay: when you 'launch' a Usenet message you won't receive anything at all from the remote end for a very long time, probably long enough for you to transmit the entire message and then some.

The medium also has some limitations ...

  • You can't send a 'message' over a few (hundred?) kilobytes; still small, but a lot larger than a single packet.
  • The medium is unreliable; messages will get corrupted or lost.

For Usenet the binary files are packaged up into one archive, then split into messages. Usually something isn't considered received until the entire archive has arrived intact. It used to be that the receiving end would request repeats of messages that didn't get through; that takes a long time and wasn't simple to automate because of the multiple-receiver nature of Usenet. Nowadays extra messages are added using the 'parchive' protocol, the idea being that the extra messages are 'universal substitutes'. Say the transmitter needs to send out an archive of 1000 messages and it's likely that 4%-9% of them will be lost; adding 100 extra PAR messages will (normally) mean the archive gets there intact on the first try. No retransmission request needed.
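
To make the 'universal substitute' idea concrete, here's a toy sketch in Python. Real PAR/PAR2 uses Reed-Solomon coding so that any large-enough subset of blocks is sufficient; the single XOR parity block below can only repair one lost message, but the principle is the same: the receiver never has to tell the sender which message went missing.

    from functools import reduce

    def xor_blocks(blocks):
        """XOR equal-length byte blocks together."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    def encode(archive: bytes, size: int):
        """Split the archive into fixed-size messages (zero padded) plus one parity block."""
        padded = archive.ljust(-(-len(archive) // size) * size, b"\0")
        msgs = [padded[i:i + size] for i in range(0, len(padded), size)]
        return msgs, xor_blocks(msgs)

    def repair(received: dict, parity: bytes, total: int):
        """Rebuild at most one missing message from the survivors plus the parity block."""
        missing = [i for i in range(total) if i not in received]
        if len(missing) > 1:
            raise ValueError("a single parity block can only repair one loss")
        if missing:
            received[missing[0]] = xor_blocks(list(received.values()) + [parity])
        return b"".join(received[i] for i in range(total))

    msgs, parity = encode(b"usenet binary payload " * 100, 256)
    survivors = {i: m for i, m in enumerate(msgs) if i != 3}   # message 3 lost in transit
    assert repair(survivors, parity, len(msgs)).startswith(b"usenet binary payload")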

I expect 'bp' is very similar.

Comment Re:Windows for Linux users, advice (Score 2) 503

Some minor notes here...

1. Windows 7 on a new laptop.

IMO a new laptop is not essential, BUT it must be 'Windows Logo' certified for Vista or later, otherwise Windows 7 will fall back to a rubbish unaccelerated framebuffer video driver.
Also I would make sure you use the 64-bit version of Windows; it's a slightly more hostile environment for malware.

3. Create a regular user account ...

This is a good idea, but treat it as a 'best practice' and give him both passwords. After all, we have here a 12-year-old with some skill at Linux. He has physical access to the machine, so he already has higher access than a Windows Administrator; if all else fails he can take a screwdriver and move the hard disk to another machine.

5. Backup the machine ...

Lots of tools for this. One I like is http://www.drivesnapshot.de/en/index.htm which has a Linux restore option, so you only have to do a PXE Linux boot and restore the image from the network. In addition it does differential disk-image backups, something most image-backup makers claim is impossible. All this using VSS from the running Windows installation, and you can initially store the backup files on the same disk you're backing up. (But don't forget to clone the boot partition too.)

But if I'm only doing a one-off backup (day zero) I'll use the Linux tool "ntfsclone" (from ntfsprogs). For Windows 7 you need to copy both partitions and dd(1) the first megabyte of the hard disk to a file.
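
A day-zero run of that might look something like the sketch below (Python only to drive the commands; the device names are my assumptions, so check yours with fdisk before doing anything like this):

    #!/usr/bin/env python3
    """Day-zero backup sketch: ntfsclone each Windows partition and keep
    the first megabyte of the disk.  Device names are assumptions."""
    import subprocess

    DISK = "/dev/sda"             # whole disk (hypothetical)
    PARTS = ["/dev/sda1",         # Windows 7 "System Reserved" boot partition
             "/dev/sda2"]         # the main Windows partition

    # Save each NTFS partition as a (sparse) ntfsclone image file.
    for part in PARTS:
        out = "backup-" + part.rsplit("/", 1)[-1] + ".img"
        subprocess.run(["ntfsclone", "--save-image", "--output", out, part],
                       check=True)

    # Keep the first megabyte of the disk (MBR plus embedded boot code).
    subprocess.run(["dd", "if=" + DISK, "of=backup-first-1M.bin",
                    "bs=1M", "count=1"], check=True)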

BACKUPS. I really cannot say this often enough: you will have to restore the machine at some point, and you will have to roll the Windows install back to day zero. This is not like Linux, where you can reasonably upgrade the filesystem through 15 years of changes and still have a fast and clean system. There is no package manager. Windows programs depend on install and uninstall scripts, and they are very rarely complete or consistent. They break things, they leave debris behind, and game installers tend to be the worst of the bunch. They not only have "mistakes" in them, they have intentional "anti-piracy measures" and "DRM" which can never leave the system, because that would let you reinstall the game for another 20-day teaser session.

Even that "drive snapshot" program leaves a single registry key behind, insignificant on it's own, but some applications leave hundreds and this machine will have lots of installs and reinstalls. Remember the Microsoft 3 R's ... Retry, Reboot, Reinstall.

Comment Very simple... but long... (Score 1) 867

  • SLS 1.02
  • SLS 1.02 + Manual updates
  • In-place manual upgrade to Debian Bo
  • Debian Hamm
  • Debian Slink
  • Debian Potato
  • Debian Woody
  • Debian Sarge
  • Debian Etch
  • Debian Lenny
  • Debian Squeeze & Ubuntu

All the upgrades have been on a single filesystem that's been upgraded and transplanted from machine to machine. Some secondary machines have had other copies of Debian and an occasional other distribution (but never for long). The Ubuntu (on a little laptop) is just Debian enough that I don't replace it.

Parts of the home directory started life on a SCO Xenix machine with honest timestamps back in 1989. A few files are dated before that but they are generally DOS backups and files that have lost their timestamps for one reason or another.

Comment Re:This has existed for years (Score 1) 139

Yup. After a short web search I have some solid sightings for you from http://www.storagesearch.com/

1997 - in the SSD market
Bridgeworks designed a RAM SSD with hard drive backup. Sales Director - David Trossell told me - "It was a little ahead of its time and the company dropped it after poor sales."

And later, once flash was big enough, your nice little RamSan ...

Houston, Texas - July 22, 2008 - Texas Memory Systems today launched the world's fastest SSD - the RamSan-440,

The RamSan-440 is a 4U rackmount fibre-channel connected RAM SSD with up to 512GB of storage capacity. It can sustain up to 600,000 random IOPS and over 4GB/second of random read or write bandwidth, with latency of less than 15 microseconds.


It's the first RAM SSD to use RAIDed flash memory modules for data backup (instead of hard disk) and the first system to incorporate Texas Memory Systems' patented IO2 (Instant-On Input-Output) technology.

Comment Re:Alright, I'll play. (Score 1) 673

I understand that was a somewhat special case.

It started when the Americans got the allocation for WiFi channels wrong: someone didn't realise that you needed an extra 10MHz at the top end beyond the highest official channel frequency, because a WiFi transmission actually covers four channels. Rather than fix the problem, they just mandated that the top two channels shouldn't be used.

At the time IBM's lawyers read these rules for US WiFi cards and decided that American law was such that they might be sued if some user put a European WiFi card in and got in trouble.

End result: the programmers put a dumb PCI ID check in the BIOS.

Comment Re:Torrent stream? -- No (Score 1) 96

While using data forwarding within the CDN would probably be a win, it doesn't work well for this sort of application, where you need both high quality and fast distribution to the end clients.

The problem is the client's upstream speed. Most end-client systems are ADSL, or configured as if they are, in that the upstream bandwidth is a tenth or less of the downstream. BitTorrent works kind of like a (safe, self-building) "bucket brigade" line, where the seeder passes the data to the first client, which passes it to the second, and so on. This means that the third client can receive data only as fast as the second one can upload it.

BitTorrent is better in that it copes with differences between clients, arrivals and disappearances, but it keeps the important limitation that you can only download a "fair share" of the combined upload of all the clients. (Frequently you can download only at about the same rate you upload.)

So if the BBC were delivering 700Gb/s to a million users, that is 700kb/s per user, and most users could not manage that as an upstream. They might manage half, or even only a quarter, which would mean that half or more of the total bandwidth would still have to come from the BBC (the seed box).
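
A quick back-of-the-envelope sketch of that arithmetic (the stream rate and upload figures are the illustrative numbers above, not real BBC ones):

    def seed_gbps(users, stream_kbps, upload_kbps):
        """Bandwidth the origin (seed) must still supply, in Gb/s, if every
        client uploads at upload_kbps and peer upload feeds peer download."""
        demand = users * stream_kbps        # total download demand, kb/s
        supplied = users * upload_kbps      # what the swarm itself can contribute
        return max(demand - supplied, 0) / 1e6

    USERS = 1_000_000
    STREAM = 700                            # kb/s per viewer, i.e. 700Gb/s in total

    for upload in (700, 350, 175):          # full, half and quarter of the stream rate
        print("upload %3d kb/s -> seed supplies %.0f Gb/s"
              % (upload, seed_gbps(USERS, STREAM, upload)))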

Comment Re:AFSK over lightwave (Score 1) 157

Well, of course they can: within each cycle of your signal there are billions of cycles of the carrier frequency and untold numbers of photons, so of course classical mechanics works perfectly fine in this regime.

What the OP is talking about is trying to push the bit rate beyond the baud rate of the carrier by switching frequencies and phases mid-cycle. This works wonderfully in the kHz range and is probably workable up to somewhere in the GHz range, but if you try to do it to a carrier of half a petahertz you're well into the regime where quantum mechanics reigns and the maths of classical wave theory is just plain wrong.

You wouldn't be able to get enough photons into your signal to get anywhere near classical mechanics; if I've done the maths right, a 2mW laser would have only a handful of photons per bit when modulated at a petabit rate. We already have single-photon sensors, so I don't doubt that it's possible, but it's not gonna be wave mechanics.
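
For what it's worth, the arithmetic goes like the sketch below; the half-petahertz carrier is the assumption from above, and depending on the exact wavelength you land at a handful of photons per bit, which is the point:

    H = 6.626e-34        # Planck constant, J*s
    F_CARRIER = 0.5e15   # assumed carrier frequency, Hz (half a petahertz)
    POWER = 2e-3         # laser power, W (2mW)
    BIT_RATE = 1e15      # modulation rate, bits per second (one petabit)

    energy_per_photon = H * F_CARRIER       # ~3.3e-19 J
    energy_per_bit = POWER / BIT_RATE       # ~2e-18 J
    photons_per_bit = energy_per_bit / energy_per_photon

    print("~%.0f photons per bit" % photons_per_bit)   # single digits, nowhere near classical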

Comment Re:I am still waiting for the day.. (Score 1) 157

Except QAM and QPSK require a medium with a practically pure wave nature; pulse modulation of light is more a particle-nature effect, with the pulses of light consisting of numbers of individual photons, each with its own specific frequency. The higher the data rate, the fewer photons in each pulse.

Optical frequency-division multiplexing is even more of a particulate effect, where the prism or grating effectively sorts the photons into individual streams. Though of course the fact that it works at all is actually a wave effect, which will go away if you try to measure which slit an individual photon goes through.

I still think we'll get petabit streams, but it won't be with QAM/QPSK.


IrDA! I don't think I've ever used that for real; I always had a Laplink cable available. Its main problem IMO was that it was one of those 'bastardised by committee' standards (like the ISO seven-layer cake) where they tried to make it fit every special interest they could. This meant that unless your software was made by a member of that specific group it probably wouldn't work, because the standard was what they thought they would implement, not what they eventually did. Bluetooth is very much the same. OTOH WiFi, which is technically very similar to Bluetooth, interoperates very well, presumably because the standards people limited themselves to one task ... moving ethernet packets ... no packets moving == no WiFi logo.

Comment Re:Logarithmic vs linear scale (Score 1) 404

I think I'd disagree with you there. I know that one to ten and one to a hundred are "the same", it's true, and I would suppose their intervals are perceived the same. Perhaps it's something to do with multiplication tables.

But large numbers are always logarithmic. People talk in 'thousands' and 'millions' and 'billions'; you know intuitively that the difference between one million and one million and one is tiny, even though you've been taught it's the same as the difference between two and three.

The only difference I see here is a matter of scale: one to a hundred feel the same because we can see them; you can get a hundred pennies, put them on a multiplication table and see that they're the same. But ask most people to visualise a million and you don't get a million pennies.

PS: Check out the megapenny project to see how close you are.

Comment Re:I like Firefox, but... (Score 1) 411

Firefox doesn't have to be installed as superuser.

It's quite possible to install it into the home directory of a 'ffox' user and use sudo to switch from your user to the ffox user when you want to run it. Never run anything else as ffox and never run firefox as yourself. In this setup firefox is quite capable of updating itself.
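
As a rough sketch (the user name and install path are just illustrations of the setup described here), the day-to-day wrapper can be as small as:

    #!/usr/bin/env python3
    """Run Firefox as the dedicated 'ffox' user via sudo, then check that
    nothing is left running under that account.  Paths are assumptions."""
    import subprocess

    FFOX_USER = "ffox"
    FFOX_BIN = "/home/ffox/firefox/firefox"   # per-user install that updates itself

    def run_firefox():
        # -H points HOME at the ffox user so the profile stays in its home directory.
        # (On some setups you may also need to sort out X authority, e.g. with xhost.)
        subprocess.run(["sudo", "-u", FFOX_USER, "-H", FFOX_BIN])

    def leftover_processes():
        out = subprocess.run(["pgrep", "-u", FFOX_USER, "-l"],
                             capture_output=True, text=True)
        return out.stdout.strip()

    if __name__ == "__main__":
        run_firefox()
        left = leftover_processes()
        print(left if left else "no processes left running as " + FFOX_USER)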

Of course it's debatable how much extra security this provides (if any; you are still downloading stuff, after all), but it is nice shutting down Firefox and seeing that there are no processes left running under the ffox user.
