
Comment: Write it! (Score 1) 430

I was really concerned about the sorry state of the Bugzilla documentation a decade ago. So I first wrote an unofficial FAQ, and later a book called "The Bugzilla Guide", which I submitted to the CVS repository under my own copyright. A few years later, when I felt it was in a reasonable state, I released the documentation to the Mozilla Foundation and washed my hands of the project. They've done a pretty good job keeping it updated; SOMEBODY has to do the groundwork to create a framework for the documentation to hang together.

It's either a labor of love or a labor of money. Luckily for me, in writing The Bugzilla Guide I had both: I was first paid to work on it part-time as part of my job, and then continued for several years with no remuneration. Eventually I stopped using the product professionally, and therefore had no need to revisit or update it any further. The tale is the same for many of us, I believe. I parlayed that experience into a series of lucrative contracts that leveraged the fact that I was the guy who "wrote the book on Bugzilla". These days I've largely stopped doing contract work on that basis; my full-time job -- which, not coincidentally, involves writing a lot of documentation! -- is way too interesting for me to spend more time on that :-)

So in short: find a personal or professional reason to use a product, and write the docs. If you don't do it, who will?

Comment: Re:Better ideas anyone? (Score 1) 393

by Doc Hopper (#46128629) Attached to: Confessions Of an Ex-TSA Agent: Secrets Of the I.O. Room
The myth of explosive decompression from a bullet hole has long since been busted; there's no "suddenly" about it. The hole would just make a whistling noise until someone put some duct tape over it, and the pilot would probably ask everyone to put on their oxygen masks as a precaution while he brings the plane below 11,000 feet. Now, if you equipped everyone with plastic explosive to blow holes in the side of the plane, your concern would be legitimate. I'm not saying I think we should equip everyone with guns, but an officer or two on every plane carrying standard police-issue sidearms would be effective, and safer than what we're doing today.

Comment: Re: maybe (Score 1) 267

by Doc Hopper (#45589221) Attached to: How the LHC Is Reviving Magnetic Tape
Several other people in the thread also mentioned you can pick up 4TB SATA drives for around $150. I was referring to 4TB SAS drives which retail for more like $350-$450 as of this writing. It's a more apples-to-apples comparison; even though SAS is several orders of magnitude worse than tape for bit errors, it's an order of magnitude better than SATA.
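To put the "orders of magnitude" in perspective, here's a rough sketch using spec-sheet unrecoverable bit error rates for each class of media. The exact UBER figures below are assumed typical published values, not numbers from this thread:

```python
# Assumed typical spec-sheet unrecoverable bit error rates (UBER);
# these match the one-order (SAS vs. SATA) and several-orders
# (tape vs. SAS) gaps described above.
uber = {
    "SATA": 1e-15,
    "SAS": 1e-16,
    "tape": 1e-19,
}

bits_read = 4e12 * 8  # one full read of a 4TB drive, in bits

# Expected number of unrecoverable errors per full 4TB read:
for media, rate in uber.items():
    print(f"{media}: {bits_read * rate:.2e} expected errors")
```

On those assumed figures, one full read of a 4TB drive expects roughly 0.03 unrecoverable errors on SATA, 0.003 on SAS, and a few in a million on enterprise tape.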

Comment: Re:but what about cheap disk? (Score 1) 267

by Doc Hopper (#45585497) Attached to: How the LHC Is Reviving Magnetic Tape
I'm going to disagree; have you seen the ZFS appliance sales figures, including as part of the Exalogic and SuperCluster bundles? It's kind of phenomenal: from last place to nipping at the heels of NetApp & EMC now. Admittedly, closing new updates to the code was unpopular both internally and externally, but it has proven to be a pretty solid business decision from a sales perspective.

Comment: Re:maybe (Score 1) 267

by Doc Hopper (#45581591) Attached to: How the LHC Is Reviving Magnetic Tape

My name's Matthew Barnson. I'm happy to talk storage and tape technologies any time, and am pretty certain I'm not a pathological liar. But, you know, I could be lying about that. I live in Utah, and work in a pretty large data center nearby. It's my job to know what I'm talking about, and I've lived and breathed this stuff for a number of years. That said, I can always be mistaken.

Nice to meet you, Anonymous Coward. Feel free to send me an email and we can talk use cases where tape is the obvious and better choice, and those where disk is the obvious and better choice. I'm a storage and backup admin who has worked in the industry for nearly twenty years, and I've had discussions similar to this over coffee tables, water coolers, and in board rooms. The discussions end up being about things like performance, ROI, archival needs, reliability, typical use case, auditability, and more. Depending on which angle you look at it from, some technologies win and others lose.

The point of THIS discussion was some writer who assumed tape was dead learned otherwise. I allege tape is not dead, and has never been over the past six decades, for numerous good reasons (and some bad ones). That said, I have no particular attachment to it other than that it is often the right solution for enterprise needs when other solutions -- like finicky, unreliable optical media -- will not do.

Anyway, if you want to argue about raw vs. compressed capacity, that's fine. We compress data on our ZFS storage appliances because it improves performance, not just capacity. Same with tape. I routinely shove more than 10TB of uncompressed data onto my 5TB-native T10K T2 tapes, and seamlessly/transparently pull 10TB of uncompressed data back off of them. The fact that it was compressed in between is relevant, perhaps, but what's also relevant is that we usually fit in excess of 10TB of data per tape. If you're willing to play by real names, I can provide some stats to back up the claim that most modern tape drives easily and typically achieve their rated compressed capacity figures.

We see that with LZJB compression on our storage appliances as well: about 1.7 to 2.4:1 compression, on average. It varies by what you're storing, of course. Our patch repository, for instance, sees pretty terrible compression ratios as it's trying to compress gzipped and zipped data. On the other hand, general-purpose file storage can see considerably better results.

I maintain that tape is a key sell for customers who audit us regularly. The fact that data is stored on tape, shipped to a secure facility for storage in an EM-resistant container and cage, and retained for a specific period is a revenue driver in the post-9/11, Sarbanes-Oxley, HIPAA era. I have to provide evidence on this to auditors regularly. Among other things, customers who care about their data often aren't satisfied with many pure on-disk solutions: they want data guarantees of timeliness, throughput, encryption and the keys for decryption, and timely windows for restoration of data in case of disaster or "oops". Yet these same customers often aren't willing to pay what it costs to have a fully redundant, disaster-tolerant environment that could weather another 9/11 and come up in an alternate location instantly. In that great land of the "in between" is one gigantic area where tape shines at a reasonable cost.

Tape has its share of problems, to be sure. But there are many cases where it is simply the best fit, solving common data transport and archival challenges as it has for the past sixty years.

Comment: Re:No shit Sherlock (Score 1) 267

by Doc Hopper (#45579655) Attached to: How the LHC Is Reviving Magnetic Tape
Yeah, I was comparing enterprise-grade tape storage to enterprise-grade hard disk storage, where the cost per terabyte is much higher. A 4TB 7200RPM SAS drive, compared to your typical home-oriented 5900RPM SATA drive, has several times more spare sectors available for remapping, better bearings, wider temperature alert thresholds, higher quality control, etc. Unless it's Seagate... I don't use their SAS offerings anymore if I can avoid it, due to enduring QC problems in their enterprise storage line from 2009 to present (late 2013). Their SATA stuff is actually more reliable, and that's not saying much. For the home user, tape storage died a decade ago. For the enterprise, it's just kept growing. For good reasons.

Comment: Re:but what about cheap disk? (Score 1) 267

by Doc Hopper (#45579607) Attached to: How the LHC Is Reviving Magnetic Tape
I'm a big believer that SSD vs. rotating disk is not an either/or thing. Check out ZFS Hybrid Storage Pools. We've done a ton of testing on this, and basically beyond a certain point (and that point moves over time!), shoving in more SSDs doesn't buy you very much at all performance-wise over using the SSD as ZFS intent log and L2ARC (read cache, more or less). A 15,000RPM drive (or 10,000 in small-form-factor disk) with fast read SSDs and fast write-optimized SLC NAND SSDs is really the best of both worlds right now: massive capacity and massive speed. You can try this out on your home Linux or BSD box. Split one SSD into two partitions, set one as the intent log and the other as L2ARC. For IOPS-oriented transactions, this kind of setup is really the cat's meow. For throughput? SSD absolutely, positively SUCKS. Spinning disk has better throughput... and tape, better throughput still :)
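On a Linux or BSD box with ZFS installed, the split-one-SSD setup described above looks roughly like this. Pool and device names are placeholders for illustration; this is a sketch, not a tuned config:

```shell
# Capacity tier: spinning disks in mirrored pairs (placeholder devices).
zpool create tank mirror sda sdb mirror sdc sdd

# One SSD, pre-split into two partitions:
# partition 1 becomes the dedicated ZFS intent log (accelerates sync writes),
zpool add tank log sde1
# partition 2 becomes L2ARC, the second-level read cache.
zpool add tank cache sde2

# Verify: status output shows separate "logs" and "cache" sections.
zpool status tank
```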

Comment: Re:I wish that they looked like the old ones (Score 1) 267

by Doc Hopper (#45579327) Attached to: How the LHC Is Reviving Magnetic Tape

The size of storage has continued doubling with surprising regularity. Not quite Moore's Law-ish, but close. For 7200RPM SAS drives:
2009: 600GB drives in common use.
2010: 1TB drives in common use.
2011: 2TB drives in common use.
2012: 3TB drives in common use.
2013: 4TB drives shipped, not quite common.
2014: 6TB drives are shipping Real Soon Now (gotta get the cash out of the new 4TB drives)
2015: 6TB drives will be common.

Today's average single-rack storage appliance runs a little over half a petabyte of raw capacity, and three-quarter-petabyte single racks are shipping today. I think we'll see "a petabyte in 1 rack" by year-end 2014 as 6TB 7200RPM disks start arriving (it looks like we'll be skipping 5TB completely). Where I work, filesystems still tend to be smaller than that, more or less governed by the compressed capacity of the tape that services them. So an average filesystem runs about 2TB-17TB, depending on the tape tech backing it up. Backing up a 17TB filesystem to a single tape still takes about 15-16 hours; transferring it onto another hard drive takes even longer!
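The 15-16 hour figure is consistent with simple arithmetic on an assumed effective drive rate. The ~300 MB/s below is an assumption for a compressed stream to an enterprise drive, not a quoted spec:

```python
fs_bytes = 17e12   # a 17TB filesystem
rate = 300e6       # assumed effective bytes/sec to tape, compressed stream
hours = fs_bytes / rate / 3600
print(f"{hours:.1f} hours")  # about 15.7
```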

Comment: Re: maybe (Score 1) 267

by Doc Hopper (#45578949) Attached to: How the LHC Is Reviving Magnetic Tape
What's missing from your comparison is Scale. For small-scale solutions such as you suggest -- 16TB is TEENY, TINY STORAGE -- I absolutely would advocate disk-to-disk kind of stuff. Cheap, fast, easy. Sync it over the cloud. It's small. 16TB is just statistical noise from an enterprise storage perspective. Tape is pretty much mandatory when you need to figure out how to deal with a few hundred petabytes... not a few dozen terabytes.

Comment: Re:but what about cheap disk? (Score 1) 267

by Doc Hopper (#45578227) Attached to: How the LHC Is Reviving Magnetic Tape
Wait wait wait... we're comparing 4TB "consumer" drives (no short-stroking, NCQ is the only thing saving it from horrible performance, 5900RPM, low max throughput) to 250 MB/sec T10K enterprise tape storage? Apples and Ferraris. I agree that trying to get pricing on enterprise tape drives without an existing relationship with the vendor is incredibly frustrating. I don't agree that a slow consumer SATA drive is an appropriate comparison to a decent LTO6 or T10K tape drive.
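For a feel of the throughput gap, a quick back-of-envelope in Python. The sustained rates are rough assumptions: ~130 MB/s for a consumer SATA drive, and the 250 MB/sec T10K figure above:

```python
def hours_to_stream(terabytes: float, mb_per_sec: float) -> float:
    """Hours to move a given volume sequentially at a sustained rate."""
    return terabytes * 1e12 / (mb_per_sec * 1e6) / 3600

print(f"consumer SATA: {hours_to_stream(4, 130):.1f} h")  # ~8.5 hours for 4TB
print(f"T10K tape:     {hours_to_stream(4, 250):.1f} h")  # ~4.4 hours for 4TB
```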

Comment: Re:No shit Sherlock (Score 1) 267

by Doc Hopper (#45578133) Attached to: How the LHC Is Reviving Magnetic Tape

What's a good brand of tape drive for a home-user?

For most, the answer is "none". Use a cloud service to store your critical data, or a second hard drive with Time Machine or something like that. The cloud service provider will do tape backup of critical data (even Google does!) to cover disastrous situations, which can and have occurred. If you're dead-set on tape backup at home, any recent table-top LTO5 or LTO6 drive (typical cost: $1,500-$3,000) will fit the bill. Media cost is pretty trivial after that initial investment: less than $30 for 3TB. It's this high initial outlay that convinces people "tape is expensive", and it is prohibitive for some home users. But let's say you buy ten 4TB hard drives: you've spent $4,000 for 40TB (late 2013 prices), and you typically have to worry about ongoing power costs and failure rates for those drives (MTBF means you have something like a 1 in 4 chance of one of those drives failing each year). For a thousand bucks, you can buy about 33 LTO5 tapes for something like 100TB of capacity. Different costs depending on your needs.
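The cost math above works out like this. All prices are the late-2013 figures from the comment; the 3% annual failure rate per drive is an assumed typical AFR used to recover the "1 in 4" number:

```python
# Disk: ten 4TB drives at ~$400 each.
disk_cost, disk_tb = 10 * 400, 10 * 4
# Tape: 33 LTO5 cartridges at ~$30 each, ~3TB per tape at 2:1 compression.
tape_cost, tape_tb = 33 * 30, 33 * 3

print(f"disk: ${disk_cost / disk_tb:.0f}/TB")        # $100/TB
print(f"tape media: ${tape_cost / tape_tb:.0f}/TB")  # $10/TB

# Chance that at least one of the ten drives fails in a year,
# assuming a ~3% annual failure rate per drive:
afr = 0.03
p_any_failure = 1 - (1 - afr) ** 10
print(f"{p_any_failure:.0%}")  # roughly 1 in 4
```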

Comment: Re:but what about cheap disk? (Score 1) 267

by Doc Hopper (#45578035) Attached to: How the LHC Is Reviving Magnetic Tape

Where I work now I'm far away from that, and we cannot think of anything but 24x7x365 ops, so restore is a euphemism for failure.

Ditto. The ability to restore data is required for disasters and for an "oops". The latter you can mostly prevent through sound policy; the former you work around after the fact. Even in a 24x7x365 environment, you can't totally prevent disasters -- human-caused and otherwise -- from occurring. Recovery plans that don't include tape typically have vastly higher costs, both initial and ongoing. Tape is a cost-saving measure for large enterprises that provides some unique advantages, but it is not a complete DR solution by itself! StorageTek has some pretty amazing products that provide tiered storage services, leveraging tape for infrequently accessed data. Check into it; it provides VERY strong support for 24x7x365 operation while dramatically reducing storage cost, and is very transparent to users.
