Comment Re:Bad RNG will make your crypto predictable (Score 1) 64

What I liked about the original version of PGP was that it would ask you to type some random numbers/letters and use that as a seed.

It depends on the crypto task: for a number of things, say rolling dice for a game, /dev/urandom is good enough. For generating a nonpersistent key (like a session key that is used and tossed during an SSL transaction), /dev/random. For a persistent key, it might be even better to use /dev/random but also ask the user to toss in some random keypresses/mouse movements [1], similar to how TrueCrypt and KeePass request it. This way, if there is something defective with the RNG, it is mitigated by a chunk of random bits from the user.

[1]: Take the timestamp down to as precise a resolution as possible, plus the keystroke itself, hash the two together (using the hash function as a "bit blender"), and toss that in. For mouse movements, take measurements every random fraction of a second, hash those, and toss them in as well.
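A rough sketch of that kind of mixing (illustrative only, not PGP's or TrueCrypt's actual code; the get_keypress() helper is hypothetical and would come from whatever input layer you have):

import hashlib
import os
import time

def mix_user_entropy(samples, pool=b""):
    """Fold (timestamp, value) samples into an entropy pool via SHA-256.

    Each sample is hashed together with the running pool, so even partly
    predictable input only adds to the pool, never replaces it.
    """
    for timestamp, value in samples:
        record = f"{timestamp!r}:{value!r}".encode()
        pool = hashlib.sha256(pool + record).digest()
    return pool

def collect_keystroke_sample(get_keypress):
    """Pair a keystroke with the most precise timestamp available."""
    key = get_keypress()                      # hypothetical blocking read
    return (time.perf_counter_ns(), key)

def seed_material(user_samples, nbytes=32):
    """Combine OS randomness with the user-derived pool for key generation."""
    user_pool = mix_user_entropy(user_samples)
    return hashlib.sha256(os.urandom(nbytes) + user_pool).digest()

Because the OS entropy and the user pool are hashed together, a weak RNG is shored up by the user input, and weak user input is shored up by the RNG.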

Comment Re:Now I WANT ONE! (Score 1) 818

Bingo. Blowback is already happening. Yes, TESCO, WM, and other places have stopped selling the CSA items... but small businesses are being flooded with business. Flagmakers are absolutely slammed with requests.

That is the one thing about the US... it seems that for something to get popular, it first needs to be banned.

Comment Would it be possible to see UMSDOSfs return? (Score 2) 383

During the early days of Linux, UMSDOSfs was quite a useful tool, able to superimpose UNIX file names, ownership, and permissions on top of a vanilla FAT filesystem.

With devices that might need to restrict access but still require FAT32 because of interoperability concerns, would a variant of UMSDOS that works on this filesystem ever be feasible? Take Android, for instance. The only way to keep app "A" and app "B" separated when they are granted access to an external SD card is SELinux rules (and the default policy pretty much denies access). Having the ability to enforce permissions while still preserving the interoperability of SD cards would be very useful.
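To make the idea concrete, here is a toy user-space sketch (not UMSDOS itself, and not enforcing anything; a real implementation would live in the kernel or a FUSE layer, and the metadata filename is made up): a per-directory file records UNIX ownership and modes for the plain FAT entries, roughly the way UMSDOS kept its own special file alongside the DOS names.

import json
import os
import stat

META_NAME = ".linux-meta.json"   # hypothetical overlay file, one per directory

def record_permissions(directory):
    """Snapshot UNIX owner/group/mode for every entry into the overlay file."""
    meta = {}
    for name in os.listdir(directory):
        if name == META_NAME:
            continue
        st = os.lstat(os.path.join(directory, name))
        meta[name] = {"uid": st.st_uid, "gid": st.st_gid,
                      "mode": stat.S_IMODE(st.st_mode)}
    with open(os.path.join(directory, META_NAME), "w") as fh:
        json.dump(meta, fh, indent=2)

def lookup_mode(directory, name):
    """Return the recorded mode for a name, or None if it is not tracked."""
    try:
        with open(os.path.join(directory, META_NAME)) as fh:
            return json.load(fh).get(name, {}).get("mode")
    except FileNotFoundError:
        return None

The enforcement is what the kernel or FUSE layer would add; the sketch only shows where the extra metadata could live on an otherwise vanilla FAT volume.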

Comment Re:Is that English? (Score 1) 108

I'm guessing it can be fitted with a remote control or a guidance device to operate autonomously, as well as carry someone on top of it.

Is it just me, or is this a variant of the hovercraft? Normal hovercraft are useful in swampy terrain, but something this small requires a lot of engine power to maintain the cushion of air underneath. Unlike most hovercraft, which use skirts (curtains) to keep the air from escaping so quickly, this one has none, so it needs to push significantly more air to stay afloat.
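A rough back-of-the-envelope way to see that (idealized, treating the cushion as static pressure that just supports the weight and the leakage as free outflow around the perimeter):

p \approx \frac{W}{A}, \qquad
v_e \approx \sqrt{\frac{2p}{\rho}}, \qquad
Q \approx P_{\mathrm{perim}}\, h\, v_e, \qquad
P_{\mathrm{lift}} \approx p\, Q

where W is the supported weight, A the cushion area, \rho the air density, P_{\mathrm{perim}} the cushion perimeter, and h the effective gap the air escapes through. Without a skirt, h is much larger, so the required airflow Q and the lift power scale up roughly in proportion.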

Comment Re:Memory Safe Languages As Countermeasure (Score 1) 165

Ada has a very good reputation for security. I know of a few websites that use Ada for the backend. They are not as easy to write as in the web language of the month... but they tend to be decently bug-resistant, and from what I've seen, they haven't had any real security issues.

I do wish for a resurgence in Ada's use. Security depends mainly on the programmer (regardless of language), but Ada gives you better tools to do it right than most other languages. This doesn't mean it is a one-size-fits-all language... but for security-critical code, it might be wise to use a language designed with safety from the ground up. SPARK Ada offers provable properties, for example (as per "SPARK - A Safety Related Ada Subset").

Comment Re:well done. (Score 1) 289

If you're really worried, do your work in a VM and have something like AutoProtect in VMware Workstation save a snapshot every few hours. If you find the VM has rebooted, it isn't tough to go back to a snapshot from before the reboot, save your work, and then reboot.

If the host machine reboots, it will just suspend the VM before the reboot, so unless you are running something in real time, the RPO is 0 and the RTO is just the time it takes to turn the VM back on.

Another option is WSUS. I have it configured to auto-approve all patches, but if you want to take the risk of delaying patching, no machines will reboot until you tell them to.

Finally, you can always set Windows Update to notify you about updates, so you don't get any reboots until you push the button.
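For reference, the notify-only behavior comes down to a couple of Group Policy registry values; a minimal sketch of setting them programmatically (this assumes the standard WindowsUpdate\AU policy keys, needs to run elevated, and the exact values are worth double-checking against current Microsoft documentation):

import winreg

AU_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

def set_notify_only():
    """Set Automatic Updates to notify-only and avoid forced reboots
    while someone is logged on."""
    key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, AU_KEY, 0,
                             winreg.KEY_SET_VALUE)
    try:
        # 2 = notify before download and before install
        winreg.SetValueEx(key, "AUOptions", 0, winreg.REG_DWORD, 2)
        winreg.SetValueEx(key, "NoAutoRebootWithLoggedOnUsers", 0,
                          winreg.REG_DWORD, 1)
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    set_notify_only()

The Windows Update client should pick the policy up on its next detection cycle.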

No, rebooting isn't fun, especially when you've been in IT long enough to be proud of system uptimes, but better a low uptime than a hacked box, so patches are a necessary evil.

Comment Re:Which OEM has the best track record on this? (Score 1) 289

Clevo comes to mind. They are a Taiwan-based company and have produced some very good hardware in the past.

They have a wide range of products to choose from. If you want a Xeon-based laptop with three hard disks in a RAID 0/1/5 configuration, they have a model for that, although the battery in something like that acts more as a UPS (lasting ~30 minutes) than something you would run unplugged. If you want an ultralight model, they have those too, and a lot in between.

Comment Re:This is why Microsoft (Score 4, Informative) 289

Look at the Vista fiasco. OEMs had to be dragged, kicking and screaming, to the privilege model (which had been in the UNIX world for decades, and in the Mac world for at least five years) where their stuff doesn't all run with admin rights. Then, when MS added fundamental security features like ASLR, forcing drivers to be rewritten, OEMs shipped alpha-quality code and then blamed the crashes on MS.

Comment Re:Wow ... (Score 4, Interesting) 289

Windows 2003 had a 64-bit version, but Windows 2003 was mainly 32-bit. If you used the /PAE option on the 32-bit edition, you could get past the 4GB barrier on that OS... but the caveat was that it only applied to the Enterprise or Datacenter editions (which got you to 32 GB or 64 GB respectively).

So I do agree with the parent... the ability to get past 4GB did exist, but it required jumping through a bunch of flaming hoops.

As for monitors, I've seen lots of screwy, nonsensical stuff (such as a glitch on a SCSI card causing the monitor to tint green), so I wouldn't be surprised if this were the case.

Comment Re:Wow ... (Score 2) 289

I had a laptop like that. It had drivers which were only in the OEM image, and the only reason I still had the image was that I used Ghost and copied the hard disk contents somewhere safe.

I eventually was able to find the real OEM for the USB 2 drivers by looking up the PCI ID, but the video card maker refused to provide drivers, saying only the OEM had a say in that, so I wound up using a third party's drivers that actually could make the video work. Of course, after the laptop's fan bearings went south and it sounded like a jet plane taking off, I just yanked the hard drive for an external enclosure and put the carcass in a drawer, in case I ever have to use it again.

With Windows post-Vista, drivers should never be an issue. By default, the driver OEM needs to register their software with Windows Update, so on initial install, the machine can go out, fetch the drivers and autoinstall them.

In any case, it is still wise (assuming this is not an enterprise with a large number of machines) to either pull the HDD (again, if possible) and image it off somewhere safe, or boot the machine into Ghost or CloneZilla and save the HDD image. This way, if a driver is needed, it can be found, and the machine can always be returned to its factory state should the need arise.

Comment Re:Good luck ... (Score 1) 107

I find there are multiple ways to skin this cat:

Scenario 1: Archiving. This is where you stick files into some archiving program (ZIP, RAR, etc.) and then upload the archive to some place like Amazon Glacier, where it pretty much remains indefinitely until needed. This takes some thought, since even though uploading and keeping stuff on Glacier is inexpensive... retrieving it isn't cheap. You should figure out an archive size that isn't hard to download, but not so small that documents and other items require multiple downloads to retrieve (a sizing sketch follows after this list). This can be done to mobile devices, but again, it is a balance of useful file size versus download time.

Scenario 2: A random-access, block-based file, for example a TrueCrypt container in a DropBox-synced folder. The changes are propagated automatically. Of course, the downside with this is cross-platform compatibility.

Scenario 3: A subdirectory of files encrypted with something like CFS, EncFS, or another tool. This may work on OS X, Linux, and Android, but won't on iOS or Windows.
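Here is the sizing sketch mentioned under Scenario 1: a minimal example that packs files into ZIPs near a target size and pushes each one to Glacier with boto3 (the vault name and the ~512 MB target are made-up placeholders, AWS credentials are assumed to be configured separately, and a production version would want multipart uploads and checksum verification):

import os
import zipfile

import boto3  # assumes AWS credentials are configured elsewhere

TARGET_ARCHIVE_SIZE = 512 * 1024 * 1024   # illustrative: ~512 MB per archive

def build_archives(paths, prefix="archive"):
    """Pack files into ZIPs, starting a new one near the target size,
    so any single retrieval later stays a manageable download."""
    archives, index, current, size = [], 0, None, 0
    for path in paths:
        if current is None or size >= TARGET_ARCHIVE_SIZE:
            if current:
                current.close()
            index += 1
            name = f"{prefix}-{index:04d}.zip"
            current = zipfile.ZipFile(name, "w", zipfile.ZIP_DEFLATED)
            archives.append(name)
            size = 0
        current.write(path)
        size += os.path.getsize(path)
    if current:
        current.close()
    return archives

def upload_to_glacier(archives, vault_name="my-archive-vault"):
    """Upload each archive; keep the returned archive IDs somewhere safe."""
    glacier = boto3.client("glacier")
    ids = {}
    for name in archives:
        with open(name, "rb") as body:
            resp = glacier.upload_archive(vaultName=vault_name,
                                          archiveDescription=name,
                                          body=body)
        ids[name] = resp["archiveId"]
    return ids

The archive IDs are what you need to request a retrieval later, so stash that mapping (an index file) somewhere outside Glacier as well.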

Comment Re:Wrong question. (Score 1) 297

WHS is still around; it is called Windows Server 2012 R2 Essentials now and comes with the OS as a feature/role to toss on.

It still is pretty much WHS (other than the name change and being part of the OS)... and it stashes backups as .dat files.

Comment Re:Always backup your data to a different machine! (Score 2) 297

At minimum, the guy should have an external hard disk plus Mozy, Carbonite, Backblaze, or another provider. The external HDD is for the backup program, allowing a bare-metal restore of the box, and saving files with a remote provider helps with retrieving them if the computer and its backup drive become inaccessible (destroyed/stolen/etc.).

Comment Re:Wrong question. (Score 1) 297

SSDs just make it worse, since when they fail, they are usually impossible to recover.

I do four layers of backups:

Layer 1 is an external HDD. That covers "oh shit" failures, where I can completely rebuild and bare-metal restore a machine quickly, as well as restore individual files.

Layer 2 is a server that "pulls" backups. It runs Windows Server 2012 R2 with Server Essentials (if I get past ~10 machines, it will be time to move to a "big boy" backup platform like DPM or NetBackup). What this provides is resistance against ransomware, because the client machines cannot access the server; the server does the data pulling (the pull idea is sketched after this list). The only downside is that a bare-metal restore, while easily doable, isn't as fast as recovering from a directly attached drive.

Layer 3 is an encryption layer plus the cloud for documents. With DropBox and encrypted disk images, you can basically use a cloud storage provider as a very limited-bandwidth SAN. Definitely not fast, and not the best way to recover... but your documents are stored securely.

Layer 4 is an archive of documents done every 6-12 months, broken up into manageable pieces with an index file, burned to local optical media, and tossed onto Amazon Glacier.
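The "pull" idea from Layer 2, reduced to its simplest form (this is not the Windows Server Essentials client backup itself, just the concept shown with rsync over SSH; hostnames and paths are placeholders): the backup server holds the only credentials and reaches out to the clients, so a ransomware-infected client has nothing to log into.

import datetime
import os
import subprocess

# Placeholder client list; only the server knows these credentials,
# so a compromised client cannot reach the backup store.
CLIENTS = ["alice@workstation1:/home/alice/", "bob@workstation2:/home/bob/"]

def pull_backups(dest_root="/srv/backups"):
    """Server-initiated ("pull") backup into per-client, per-date folders."""
    stamp = datetime.date.today().isoformat()
    for source in CLIENTS:
        host = source.split("@", 1)[1].split(":", 1)[0]
        dest = os.path.join(dest_root, host, stamp)
        os.makedirs(dest, exist_ok=True)
        # -a preserves metadata; the SSH session is initiated by the server
        subprocess.run(["rsync", "-a", source, dest + "/"], check=True)

if __name__ == "__main__":
    pull_backups()

Run from the server on a schedule; the clients never need, and never get, a path back into the backup store.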

Comment Re:More stupid reporting on SlashDot (Score 2) 192

MS would make money hand over fist by doing that. Look at OS/2: there is still a company, eComStation, cranking out support and updates for Warp.

The problem is that XPe and other embedded versions can't be upgraded. Try that, and millions of dollars' worth of equipment will be rendered scrap. One can treat XPe like a broken SCADA system and firewall/airgap the living hell out of it, but the best of all worlds is to have MS continue supporting it (for a decent fee), which is a win/win for all parties involved.

This problem isn't going away anytime soon even with future releases. Embedded versions of newer operating systems exist, and when Windows 7 loses support, the same thing will happen.

Ideally, MS should look into an RLTS (really long term support) embedded platform intended to be supported for at least 20-50 years. In the past this couldn't really be done, but now that technology has matured to the point where we will still have RAM, storage, CPUs, and the other essentials 20 years from now, supporting something on that time scale is possible.
