Comment Re:Isolating the problem (Score 1) 220

I really cannot think of a reasonable workflow where that would make sense but I'm not trying to judge

The workflow is pretty much anyone who has to wear multiple hats during the day. Think of open tabs in background windows as short-term bookmarks.

One browser window with half a dozen tabs to keep an eye on the internal ticket system. Another window open with a dozen tabs to track stats on jobs in progress across multiple days (so that you can just alt-tab to that window and glance through the tabs, rather than rummage for bookmarks or use the awesome-bar). Then typically one window per task / project with anywhere from 1-20 tabs.

As an example, let's say I need to look into GlusterFS. I can either re-purpose one of my existing browser windows, or better, open a new one and keep all tabs relating to GlusterFS in a single window. I'll start with Google or the GlusterFS home page, then start proliferating tabs as I find things that are interesting enough to read but that I'm not ready to dive into yet, and that I won't want as long-term bookmarks.

As I work through the various tabs, they either get bookmarked after I've read them or just closed.

Not hard to hit 100 tabs. Today is about average: I have 10 windows open, each with 1-15 tabs in it.

Comment Re:Seemed pretty obvious this was the case (Score 1) 311

Of course, you should keep a record of those questions and answers so you can correctly answer them if the need arises.

That's what GPG encrypted text files were invented for.

One text file per account, the contents are a GPG ASCII armored encryption block containing things like the site name, password, account name, answers to security questions, or anything else.

I then store those text files in a version control system, which makes it easy to share across multiple machines.
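
A minimal sketch of what that looks like, assuming you already have a GPG key (the file names, recipient address, and commit message here are just placeholders):

    $ gpg --armor --encrypt --recipient you@example.com -o example-site.txt example-site.plain
    $ shred -u example-site.plain                  # don't leave the plaintext lying around
    $ git add example-site.txt && git commit -m "add example-site credentials"
    $ gpg --decrypt example-site.txt               # view it again later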

(The weak link in all of this is the GPG key - but there are options to strengthen that like smartcards.)

Comment Re: Too late (Score 1) 107

Encrypt the tablet / phone - use a 6-9 digit PIN (which is a lot better than just a 4-digit PIN). Have the device wipe after 10 bad attempts (the default on Android).

Most thieves, when presented with that obstacle, will just reformat the device for sale rather than try to steal information off of it.

As for apps, KeePass / LastPass are frequently mentioned. My personal preference is a strong master password in Firefox, and just letting it remember the hundreds of secondary website account passwords (i.e. not my bank, webmail, or other financial sites). The best choices are those where you set up your own WebDAV cloud storage on your own hardware and use that to keep things synchronized.

Comment Re:Why? Simple bullshit is why. (Score 2) 107

Four words strung together can have a key space as small as 3000^4 (roughly 46 bits of entropy), especially if they are chosen from the top 3000 words in the dictionary. That's nowhere near 6.2 * 10^36.

Misspellings can help a lot and make it considerably stronger (adding maybe 3-4 bits per word). Adding spaces or punctuation between the words adds maybe 1 bit per word. Random capitalization of something other than the first letter adds 2 bits per word.

Basically, if you're using English-language phrases / words without any munging, you're only getting about 2 bits per character. A bit lower if it's a grammatically correct phrase (~1.5 bits/character), a bit higher if it's random words strung together (~2.3 bits/character). That puts a 26-character phrase like the one you provided somewhere between 39-60 bits (and it is always better to assume the lower bound).
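
Rough back-of-the-envelope on those numbers, done with bc (the 3000-word list and the 26-character phrase are just the examples above):

    $ echo 'l(3000^4)/l(2)' | bc -l     # four words from a 3000-word list
    46.20...
    $ echo '26*1.5 ; 26*2.3' | bc -l    # 26-char phrase at 1.5-2.3 bits/char
    39.0
    59.8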

Most attackers will assume 2-6 words strung together, from the top N lists. So just tacking words together is not safe. Or they'll use N-grams (sort of like Markov chains, but more general) and go after the most common phrases.

In comparison, an 8-character password chosen from a field of 64 possible characters per position (6 bits each) is 48 bits strong. If you manage to use one of 90 possible characters per position, that is 52 bits strong (6.5 bits/char * 8 chars).

48-52 bits is just not a lot these days if the attacker gains access to the hashed password and can attack it offline. The minimum complexity really needs to be about 64 bits (10-12 characters, fully random) to deal with offline attacks, and 80 bits of entropy is far better.

Comment Re:Notified and ignored? (Score 1) 107

These days the password on your email account is more important than your bank account password...

Because if they can gain access to your email, they can do password resets to gain access to dozens / hundreds of your accounts.

Some of the web email providers have 2FA (two-factor authentication) - those are probably better choices if you don't run your own email server.

Comment Re:Final nail in the Itanium coffin (Score 2) 161

All of which paints a bleak picture for Itanium. There is no compelling reason to keep Itanium alive other than existing contractual agreements with HP. SGI was the only other major Itanium holdout, and they basically dumped it long ago. And Itaniums are basically just glorified space heaters in terms of power usage.

Itanium was dead on arrival.

It ran existing x86 code much slower. So if you wanted to move up to 64bit (and use Itanium to get there), you had to pay a lot more for your processors, just to run your existing workload.

Okay, you say, but everyone was supposed to stop running x86 and start running Itanium binaries! Please put down the pipe and come back to reality. No company is going to repurchase all of their software to run on a new platform, just because Intel says this is the way forward.

Maybe, maybe! If all of the business software had been open source and easily ported to a different CPU architecture, it might have worked. But only if you'd gained a 3x-5x improvement in wall-clock performance by porting from x86 to the Itanium instruction set. (An advantage that never materialized.)

And once AMD started shipping AMD64 and Opterons that could run your existing x86 workload, on a 64bit CPU, at slightly faster speeds than your old kit for the same price - that buried any chance of Itanium ever succeeding in the market. Any forward-looking IT person, when it came time to upgrade old kit, chose AMD64 - because while they might be running 32bit OS/progs today, the 64bit train was rumbling down the tracks. So picking a chip that could do both, and do both well, was the best move.

Comment Re:Switched double speed half capacity, realistic? (Score 1) 316

One thing I'd LOVE to see, and even think there's a market for, would be a single-platter drive suitable for mounting in the optical bay of mobile workstation laptops

Thinkpad T-series laptops have had that capability since the early 2000s. I'm pretty sure that current models still let you swap out the DVD drive for a 2nd SATA drive slot.

The problem with any solution that attempts to be multi-vendor is that every laptop has a slightly different form factor for their optical bay tray - there is no standard.

Comment Re:Switched double speed half capacity, realistic? (Score 3, Interesting) 316

As you mention, 15k SAS drives are going to be rapidly undercut by SSDs. The price difference is no longer 10x or 20x when looking at cost/gigabyte, the price difference is now only 2-3x.

Pay 2x-3x the amount for an SSD of the same size as the 15k SAS drive, and you gain a 50x improvement in IOPS. For workloads where that matters, it's an easy choice to make now. As soon as you say something like "we'll short-stroke some 15k RPM SAS drives" - you should be considering enterprise-level SSDs instead. Fewer spindles needed, less power needed, and huge performance gains.

The only downside of SSDs is write endurance. A 600GB SSD can only handle about 120TB of writes over its lifespan (give or take 20-50% depending on the controller, technology, etc). The question is: are you really writing more than 60GB/day to the drive (in which case it will wear out within about 5 years)?
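
The arithmetic behind that figure, for the 600GB example above (120TB of endurance at 60GB/day of writes):

    $ echo '120*1024/60' | bc       # days of life at 60GB written per day
    2048
    $ echo '2048/365' | bc -l       # roughly 5.6 years
    5.61...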

And more importantly... will you care if it wears out in 4-5 years? That you could handle the same workload using fewer spindles and less power likely pays for itself, including replacing the drives every 4-5 years.

Comment Re:Seagate failures (Score 3, Informative) 316

External 3.5" drives are generally put in junky enclosures with no cooling and iffy controller chips and 1-year warranties. Since 3.5" hard drives are much more sensitive to heat issues then their 2.5" laptop drive cousins, you need active cooling (at least a minimal amount of airflow 24x7 over the drive).

One external drive enclosure that I've been happy with is a Mediasonic HF2-SU3S2. This is a USB 3.0 unit which can hold up to (4) 3.5" drives in a few different configurations (I use JBOD). Not that expensive, has a fan, and has good performance.

Stick some moderate-quality 3.5" drives in it (WD Red, Seagate Enterprise Capacity drives, Hitachi Ultrastars) and it should run fine for a few years. Most of those drives have 3 or 5 year warranties.

(For the 4-drive unit, we write to a different drive each day. And our backups are based on rdiff-backup, so each backup set has the full 53 weeks of change history for the source data.)

Comment Re:Can we get a tape drive to back this up? (Score 5, Informative) 316

Agreed - tape is a good choice as soon as you:

- need removable backup storage that gets swapped daily and goes offsite (legal reasons)
- have the budget for multiple tape drives, including a spare at your offsite disaster recovery location
- have enough data that you need an auto-loader
- have someone to babysit the tape drive on a daily basis, swapping in tapes in an organized fashion, replacing tapes based on usage history (not when they break), and running periodic cleaning tapes

The tape drives are $2-$5k each; you should always have at least two of the current generation in case one breaks. Individual tapes are $40-$60, and you're going to be buying 50-60 per year if you follow a normal setup (daily backups, one tape per week gets pulled for permanent storage, etc.).
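
Rough yearly cost using the middle of those ranges (all of these figures are just the estimates above):

    $ echo '2*3500 + 55*50' | bc    # two drives plus a year's worth of tapes
    9750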

For smaller companies, hooking up a 1TB or 2TB USB drive to the server and running a backup is about the limit of their technical proficiency (and the limit of their budget). For $800, you could buy 6 or 8 USB drives and rotate them out on a weekly basis.

Sure, it's not a daily backup with permanent retention offsite. But it's generally more foolproof than tape (or at least less fiddly). And it's a lot easier to sell an $800 backup solution than an $8,000 backup solution. Plus you can start with a $400 solution, then slowly add more drives to the pool over time to get better historical backups. Older, smaller USB drives can be repurposed for other uses as you slowly increase the size of individual drives. It's not as easy to repurpose old tape drives or media that is now too small.

Comment Re:Can we get a tape drive to back this up? (Score 1) 316

For smaller offices, I prefer rdiff-backup over rsnapshot (but both work well) combined with USB drives instead of tape drives.

Clients back up to a central server; each client has its own mount point and its own file system (which limits the possible damage if a backup client goes crazy, since this is a push system). Inside that mount point, they create as many rdiff-backup directories as they need.
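
On the client side this is just a scheduled rdiff-backup push; a minimal sketch (hostnames and paths are placeholders):

    # run from cron on each backup client
    $ rdiff-backup /etc backupserver::/srv/backups/clientA/etc
    $ rdiff-backup /home backupserver::/srv/backups/clientA/home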

Once per day the server checks the file system for a particular backup client (iterating through them in random order), snapshots the logical volume (using LVM), then uses the read-only snapshot to rsync all of the content to the USB drive(s).
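
The nightly server-side job looks roughly like this (volume group, snapshot size, and mount points are placeholders):

    $ lvcreate --snapshot --size 10G --name clientA-snap /dev/vg0/clientA
    $ mount -o ro /dev/vg0/clientA-snap /mnt/snap
    $ rsync -aHAX --delete /mnt/snap/ /mnt/usb-backup/clientA/
    $ umount /mnt/snap
    $ lvremove -f /dev/vg0/clientA-snap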

The nice part about this is that it can also easily send those backups offsite using rsync. The other nice part about rdiff-backup is that metadata (ownership, permissions, ACLs) gets stored in regular files, so you can keep rdiff-backup directories on any file system without losing that information.

Once a week, someone at the office swaps the drives attached to the cables and takes the latest set home. I recommend at least (3) sets of drives, with a goal of getting to (10) sets.

The drives are easily encrypted with LUKS; you can use udev to attach/detach a block device under /dev/mapper with a LUKS keyfile stored in /root/something. Combine that with autofs to automatically mount the USB drives at a predictable point on the file system.

The downside is that it does take 20-30 minutes to set up a new USB backup drive. You have to format it with LUKS, set the passphrase, then attach the keyfile, plus add the udev rules and autofs rules. But that time is worth it, because even if someone loses a backup drive, the content is encrypted.
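
The one-time setup for a new drive is roughly the following (device name, mapper name, and keyfile path are placeholders - triple-check the device node before formatting):

    $ cryptsetup luksFormat /dev/sdX1                          # sets the passphrase
    $ cryptsetup luksAddKey /dev/sdX1 /root/backup-usb.key     # attach the keyfile
    $ cryptsetup --key-file /root/backup-usb.key luksOpen /dev/sdX1 backup01
    $ mkfs.ext4 /dev/mapper/backup01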

The udev/autofs tricks made it pretty easy for someone non-technical to swap out the drives every few days or every week.

If you use rdiff-backup, make sure you put /tmp on an SSD or a dedicated 15k RPM spindle. When using the rdiff-backup verify commands, it has to create/read a lot of files in /tmp. We have a 300GB RAID-1 SSD pair on the server dedicated to the /tmp directory, which speeds up rdiff-backup a lot.
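
If you go that route, the dedicated /tmp mount is a one-line /etc/fstab entry (assuming the SSD mirror shows up as /dev/md0):

    /dev/md0   /tmp   ext4   defaults,noatime   0  2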

Comment Re:My opinion on the matter. (Score 1) 826

I do think that the threat of one's skill set rapidly becoming obsolete is a justified reason for change, otherwise you get firmly ejected from the IT business. All major Linux distros are changing to systemd. In the future you either know systemd well, or you don't work with Linux.

If you work in the IT world and are not making it a priority to learn something new every week / month - then yes, you will be unemployed when your current gravy train falls off the tracks. And frankly - that applies to just about any knowledge-based job these days. If you're not figuring out how to do your job better and service your clients better -- your competitors are and will eventually eat your lunch.

My personal goal is to read one IT-related book per month. I don't have to memorize it, but I should pick up enough knowledge to recognize things and ask meaningful questions. And it gives me a reference point: when I do run into an issue, I know the solution probably looks like X or Y, even if I don't know exactly how to solve it.

systemd is just going to be one more thing where I'll pick up a book on it for my monthly quota. Probably around the first time we roll out RHEL7 next year.

Comment Re:How Linux wins the Desktop (Score 1) 727

#1 - There are really only two games in town for Linux. Either you publish an RPM for use on Red Hat derived distros, or you publish a DEB for Debian derived distros. If you service those two markets, you cover maybe 80-90% of the Linux systems in use. The outliers are SUSE and Mandriva, followed by the source-based distros like Slackware or Gentoo.

There's also the Filesystem Hierarchy Standard (FHS) which your installer should adhere to, which smooths away most issues.

On the UI side, you really only have GNOME or KDE, and most apps run as-is on either because they use cross-desktop toolkits like Qt.

#2 - "Chef" or "Puppet" or some other configuration management. Those tools have existed for a few years now and are stable and used.

#3 - Generally a solved problem; some of it is covered by configuration management tools like Chef/Puppet, and the rest can be adapted from the cloud solutions. With a good cloud setup (private, hosted, or whatever) you can create and boot a new server in 10 minutes or less. On the desktop side, install a standard image, then let your configuration management software take over.
