
Comment Re:Final nail in the Itanium coffin (Score 2) 161

All of which paints a bleak picture for Itanium. There is no compelling reason to keep Itanium alive other than existing contractual agreements with HP. SGI was the only other major Itanium holdout, and they dumped it long ago. And Itaniums are basically just glorified space heaters in terms of power usage.

Itanium was dead on arrival.

It ran existing x86 code much slower than contemporary x86 chips did. So if you wanted to move up to 64-bit (and use Itanium to get there), you had to pay a lot more for your processors just to run your existing workload.

Okay, you say, but everyone was supposed to stop running x86 and start running Itanium binaries! Please put down the pipe and come back to reality. No company is going to repurchase all of their software to run on a new platform, just because Intel says this is the way forward.

Maybe, maybe! If all of the business software had been open-source and easily ported to a different CPU architecture, it might have worked. But only if porting from x86 to the Itanium instruction set gained you a 3x-5x improvement in wall-clock performance. (An advantage that never materialized.)

And once AMD started shipping AMD64 and Opterons that could run your existing x86 workload, on a 64-bit CPU, at slightly faster speeds than your old kit for the same price - that buried any chance of Itanium ever succeeding in the market. Any forward-looking IT person, when it came time to upgrade old kit, chose AMD64 - because while they might be running 32-bit OS/progs today, the 64-bit train was rumbling down the tracks. So picking a chip that could do both, and do both well, was the best move.

Comment Re:Switched double speed half capacity, realistic? (Score 1) 316

One thing I'd LOVE to see, and even think there's a market for, would be a single-platter drive suitable for mounting in the optical bay of mobile workstation laptops.

ThinkPad T-series laptops have had that capability since the early 2000s. I'm pretty sure that current models still let you swap out the DVD drive for a second SATA drive bay.

The problem with any solution that attempts to be multi-vendor is that every laptop has a slightly different form factor for its optical bay tray - there is no standard.

Comment Re:Switched double speed half capacity, realistic? (Score 3, Interesting) 316

As you mention, 15k SAS drives are going to be rapidly undercut by SSDs. Looking at cost per gigabyte, the price difference is no longer 10x or 20x - it's now only 2x-3x.

Pay 2x-3x the amount for an SSD of the same size as the 15k SAS drive, and you gain a 50x improvement in IOPS. For workloads where that matters, it's an easy choice to make now. As soon as you say something like "we'll short-stroke some 15k RPM SAS drives," you should be considering enterprise-level SSDs instead. Fewer spindles needed, less power needed, and huge performance gains.

The only downside of SSDs is write endurance. A 600GB SSD can only handle about 120TB of writes over its lifespan (give or take 20-50% depending on the controller, technology, etc.). The question is: are you really writing more than 60GB/day to the drive? (If so, it will wear out in about 5 years.)
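A quick back-of-the-envelope check of that claim, using the 120TB endurance and 60GB/day figures above (both are rough illustrative numbers, not vendor specs):

```shell
# Rough SSD wear-out estimate: rated endurance (TB) vs. daily writes (GB).
endurance_tb=120
daily_writes_gb=60
days=$(( endurance_tb * 1024 / daily_writes_gb ))   # 122880 / 60 = 2048 days
years=$(( days / 365 ))                             # ~5 years
echo "~${days} days (~${years} years) to reach rated endurance"
```

Scale `daily_writes_gb` to your actual write load; most office workloads come in well under 60GB/day.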

And more importantly... will you care if it wears out in 4-5 years? Handling the same workload with fewer spindles and less power likely pays for itself, even including replacing the drives every 4-5 years.

Comment Re:Seagate failures (Score 3, Informative) 316

External 3.5" drives are generally put in junky enclosures with no cooling, iffy controller chips, and 1-year warranties. Since 3.5" hard drives are much more sensitive to heat than their 2.5" laptop drive cousins, you need active cooling (at least a minimal amount of airflow over the drive, 24x7).

One external drive enclosure that I've been happy with is a Mediasonic HF2-SU3S2. This is a USB 3.0 unit which can hold up to (4) 3.5" drives in a few different configurations (I use JBOD). Not that expensive, has a fan, and has good performance.

Stick some moderate-quality 3.5" drives in it (WD Red, Seagate Enterprise Capacity, Hitachi Ultrastar) and it should run fine for a few years. Most of those drives carry 3- or 5-year warranties.

(For the 4-drive unit, we write to a different drive each day. And our backups are based on rdiff-backup, so each backup set has the full 53 weeks of change history for the source data.)
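A minimal sketch of that write-to-a-different-drive-each-day rotation; the mount points and the drive count are hypothetical, not part of the original setup:

```shell
# Pick one of 4 backup drives based on the day of the week.
# /mnt/backup0 .. /mnt/backup3 are hypothetical mount points.
slot=$(( $(date +%u) % 4 ))        # ISO weekday 1-7 -> slot 0-3
target="/mnt/backup${slot}"
echo "Today's backup target: ${target}"
```

A cron job can compute the target the same way, so the rotation needs no manual bookkeeping.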

Comment Re:Can we get a tape drive to back this up? (Score 5, Informative) 316

Agreed - tape is a good choice as soon as you:

- need removable backup storage that gets swapped daily and goes offsite (legal reasons)
- have the budget for multiple tape drives, including a spare at your offsite disaster recovery location
- have enough data that you need an auto-loader
- have someone to babysit the tape drive on a daily basis, swapping in tapes in an organized fashion, replacing tapes based on usage history (not when they break), and running periodic cleaning tapes

The tape drives are $2k-$5k each, and you should always have at least two of the current generation in case one breaks. Individual tapes are $40-$60, and you're going to be buying 50-60 per year if you follow a normal setup (daily backups, one tape per week pulled for permanent storage, etc.).
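Rough annual media cost using the midpoints of the ranges above (illustrative figures, not quotes):

```shell
# Midpoints of the 50-60 tapes/year and $40-$60/tape ranges above.
tapes_per_year=55
cost_per_tape=50
media_cost=$(( tapes_per_year * cost_per_tape ))   # $2750/year
echo "Annual tape media: \$${media_cost}"
```

Add the drive cost amortized over its service life and the offsite courier, and the real recurring cost of a tape setup becomes clearer.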

For smaller companies, hooking up a 1TB or 2TB USB drive to the server and running a backup is about the limit of their technical proficiency (and of their budget). For $800, you could buy 6 or 8 USB drives and rotate them out on a weekly basis.

Sure, it's not a daily backup with permanent offsite retention. But it's generally more foolproof than tape (or at least less fiddly). And it's a lot easier to sell an $800 backup solution than an $8,000 one. Plus you can start with a $400 solution, then slowly add more drives to the pool over time to get better historical backups. Older, smaller USB drives can be repurposed for other uses as you slowly increase the size of individual drives. It's not as easy to repurpose old tape drives or media that is now too small.

Comment Re:Can we get a tape drive to back this up? (Score 1) 316

For smaller offices, I prefer rdiff-backup over rsnapshot (but both work well) combined with USB drives instead of tape drives.

Clients back up to a central server; each client has its own mount point and its own file system (which limits the possible damage if a backup client goes crazy, since this is a push system). Inside that mount point, they create as many rdiff-backup directories as they need.

Once per day the server checks the file system for a particular backup client (iterating through them in random order), snapshots the logical volume (using LVM), then uses the read-only snapshot to rsync all of the content to the USB drive(s).
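The nightly flow described above can be sketched as a shell function; the volume group, snapshot size, and mount points here are hypothetical placeholders, and the real commands need root:

```shell
# Snapshot the client's LV, rsync the read-only snapshot to the USB
# drive, then drop the snapshot. VG/LV names and paths are made up.
snapshot_and_copy() {
    client="$1"                                        # e.g. "client01"
    lvcreate -s -L 5G -n "${client}_snap" "/dev/backupvg/${client}" &&
    mount -o ro "/dev/backupvg/${client}_snap" /mnt/snap &&
    rsync -aH --delete /mnt/snap/ "/mnt/usb/${client}/"
    umount /mnt/snap
    lvremove -f "/dev/backupvg/${client}_snap"
}
```

Because the snapshot is read-only, the rsync sees a consistent point-in-time view even while clients keep pushing new backups.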

The nice part about this is that it can also easily send those backups offsite using rsync. The other nice part about rdiff-backup is that metadata (ownership, permissions, ACLs) is stored in regular files, so you can keep rdiff-backup directories on any file system without losing that information.

Once a week, someone at the office swaps the drives attached to the cables and takes the latest set home. I recommend at least (3) sets of drives, with a goal of getting to (10) sets.

The drives are easily encrypted with LUKS; you can use udev to attach/detach a block device under /dev/mapper with a LUKS keyfile stored in /root/something. Combine that with autofs to automatically mount the USB drives at a predictable point on the file system.

Downside is that it does take 20-30 minutes to set up a new USB backup drive. You have to format it with LUKS, set the passphrase, then attach the keyfile to it. Plus add the udev rules and autofs rules. But that time is worth it because even if someone loses a backup drive, the contents are encrypted.
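The one-time drive preparation can be sketched like this; the device path and keyfile location are hypothetical, the commands need root, and luksFormat destroys whatever is on the device:

```shell
# One-time setup for a new encrypted USB backup drive (sketch).
prepare_backup_drive() {
    dev="$1"                                   # e.g. /dev/sdX1 (made up)
    keyfile=/root/backup-usb.key               # hypothetical keyfile path
    [ -f "$keyfile" ] || dd if=/dev/urandom of="$keyfile" bs=64 count=1
    chmod 0400 "$keyfile"
    cryptsetup luksFormat "$dev"               # prompts for the passphrase
    cryptsetup luksAddKey "$dev" "$keyfile"    # add the keyfile as a 2nd slot
    # A udev rule can then open the drive automatically, e.g. by running:
    #   cryptsetup open --key-file "$keyfile" "$dev" backup_usb
}
```

The passphrase slot is the human fallback; the keyfile slot is what lets udev unlock the drive unattended when it is plugged in.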

The udev/autofs tricks made it pretty easy for someone non-technical to swap out the drives every few days or every week.

If you use rdiff-backup, make sure you put /tmp on an SSD or a dedicated 15k RPM spindle. When using the rdiff-backup verify commands, it has to create/read a lot of files in /tmp. We have a 300GB RAID-1 SSD pair on the server dedicated to the /tmp directory, which speeds up rdiff-backup a lot.

Comment Re:My opinion on the matter. (Score 1) 826

I do think that the threat of one's skill set rapidly becoming obsolete is a justified reason for change; otherwise you get firmly ejected from the IT business. All major Linux distros are changing to systemd. In the future you either know systemd well, or you don't work with Linux.

If you work in the IT world and are not making it a priority to learn something new every week / month - then yes, you will be unemployed when your current gravy train falls off the tracks. And frankly - that applies to just about any knowledge-based job these days. If you're not figuring out how to do your job better and service your clients better -- your competitors are and will eventually eat your lunch.

My personal goal is to read one IT-related book per month. I don't have to memorize it, but I should absorb enough that I can recognize things and ask meaningful questions. And it gives me a reference point: when I do run into an issue, I know the solution probably looks like X or Y - even if I don't know exactly how to solve it.

systemd is just going to be one more thing where I'll pick up a book on it for my monthly quota. Probably around the first time we roll out RHEL7 next year.

Comment Re:How Linux wins the Desktop (Score 1) 727

#1 - There are really only two games in town for Linux. Either you publish an RPM for use on Red Hat-derived distros or you publish a DEB for Debian-derived distros. If you serve those two markets, you cover maybe 80-90% of the Linux systems in use. The outliers are SUSE and Mandriva, plus distros like Slackware and the source-based Gentoo.

There's also the Filesystem Hierarchy Standard (FHS) which your installer should adhere to, which smooths away most issues.

On the UI side, you really only have GNOME or KDE, and most apps run as-is on either because they use cross-platform toolkits like Qt or GTK.

#2 - "Chef" or "Puppet" or some other configuration management tool. Those tools have existed for a few years now, are stable, and are widely used.

#3 - Generally a solved problem; some of it is covered by configuration management tools like Chef/Puppet, and the rest can be adapted from cloud solutions. With a good cloud setup (private, hosted, or whatever) you can create and boot a new server in 10 minutes or less. On the desktop side, install a standard image, then let your configuration management software take over.

Comment Re:Oh, the timing... (Score 1) 727

And a techie's definition of 'working', i.e. drinking coffee and reading slashdot is still the same too.

Which tends to involve reading about technologies that you are not already familiar with, or getting information about finer points explained. In sales-speak, just another form of "continuing education".

It used to be much better. Someone would post an article about new technology X (such as Xen or KVM or HyperV) and you'd get 50-100 modded-up posts detailing what it is good for, why to use it, why not to use it, and anecdotes about how well or poorly it works in reality.

These days, I only read 10-20% of the articles, and only briefly browse the comments (usually at 3+ or 4+ scores).

Comment Re:Duh. (Score 1) 235

IM is strongly suited to information that needs to be conveyed exactly in written form, such as a list of commands that need to be run or a code fragment. In a crowded environment, it's also more private and less obtrusive than a voice conversation. It can also be slightly delayed; you can finish up your thought before dealing with the conversation.

Voice is better for inflection and for topics where exact spellings don't matter. It has a higher rate of back-and-forth (as long as one party does not monopolize the conversation). But trying to convey technical information such as "type XYZ" is frustrating over voice connections (you end up having to use a phonetic alphabet to get the other side to enter the right information). Sometimes you need the high synchronicity of voice communication; sometimes it gets in the way.

Both are synchronous, both have their place.

Email, on the other hand, is asynchronous; replies can be measured in minutes, hours, or days. Very good for long pieces of information that need detailed thought and considered replies. The other person is not sitting there twiddling their thumbs (or should not be) while waiting for you to compose your message.

Comment Re:0.50$ per Gb was already broken (Score 2) 183

Enterprise quality SSDs are still $1.00 to $2.50 per GB.

The Intel DC S3500 is only about $1/GB for a 600GB version. Which is not bad for a drive suitable for use in a server. The S3700 series is closer to $2/GB.

(Both of those drive series include capacitors that let the SSD shut down cleanly if it loses power.)

Comment Re:It's a mental barrier (Score 1) 183

Exactly. The magic price point for business use was when $150 would buy a drive big enough to meet the needs of 90% of your office workers. The cost is small enough that it's worth spending the extra money to get a machine that performs much better than one with a traditional drive. It means less thumb-twiddling by your employees while they wait on a slow hard drive. (That's more common than a lot of people think; they've just grown used to the slowness.)

Personally, I think that happened at the $1.20/GB mark. It made 80-120GB SSDs cheap enough for office machines that you'd recoup the cost in a year or two - either through improved productivity, or by not having to replace the machine for another 2-3 years.

As the price gets lower and lower, unless you need >500GB of raw storage, it makes more and more sense to just go SSD instead of traditional. And maybe by next year, that break point will be 1TB, then 2TB.
