Comment Re:Seeing how most companies won't migrate... (Score 5, Insightful) 675

Hello, nice to meet you. Now you do know somebody who uses the Start menu. I'm typing this on a Windows 7 64-bit system, and I use the "Start" menu all the time. Personally, I keep a list of the top 10 applications I launch (Chrome, Visual Studio, a screen capture utility, etc.) right at the very top level of the "Start" menu so I can get to them quickly, but the shortcuts disappear when I release the mouse button and don't clutter my view all the time.

I work at a company that builds both Mac and Windows apps deployed to customers' desktops. So we *HAVE* to stay current and support all the new Microsoft and Apple OS releases. Windows 8 is the future; it's just that the future really sucks. The only thing keeping my spirits up for now is the hope that Microsoft comes to its senses and makes Windows 8.5 or Windows 9 suck less. Honestly, I don't have much hope left: they are still pushing the ribbon and pretending it is a success. Microsoft doesn't like to admit it made a mistake, even when the evidence is overwhelming.

Comment Re:First job within how far of home? (Score 1) 630

Around the time of my first job (1990-ish), most large employers understood this catch-22 and would loan you the money to relocate; sometimes the loan was "forgiven" if you lasted a year at the new job. I don't know whether this is still commonplace. Seriously, what would relocating cost? First and last months' rent plus a moving van... let's say $5,000. If I had just hired an energetic 22-year-old for $50,000 per year and they explained the situation, I would easily front them the money.

Comment Re:We will get solar when there's a profit. (Score 1) 589

Several posters have mentioned "oil subsidies" in this thread. Can somebody be specific? I'm not denying it or anything, but I thought it was exactly the opposite: I thought every gallon of gas I buy at the pump has a HUGE amount of tax included, way more than on, say, a loaf of bread. Which is it - do we tax gasoline or subsidize it?

Comment Re:Original blog post (Score 1) 239

We don't have any data yet. The oldest Backblaze pods contain hard drives that are not quite 4 years old, so we haven't seen any old-age mortality yet.

Here is another totally random thought: we pay $1,400 / month / cabinet for physical space plus electricity, which comes to about $5 / drive / month. Even if the old (smaller) drives last forever, there will come a point where it is simply a good financial decision to copy all the data off of them onto denser drives, because after some number of months the savings in physical space and electricity (assuming energy use per drive is fixed) pay for the new drives. If a new hard drive is 10 times as dense, it saves Backblaze $4.50 / drive / month in physical space and electricity. In 22 months that pays for the $100 replacement drive. (I did that math super quick, so let me know if I'm off by a factor of 10.)
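A quick back-of-the-envelope sketch of that payback math, using only the numbers in the comment (the $5 / drive / month, the 10x density, and the $100 drive price); the consolidation framing - one new drive absorbing the data of ten old ones - is my own assumption, not Backblaze's actual accounting:

```python
# Back-of-the-envelope payback estimate for consolidating onto denser drives.
# The dollar figures come from the comment above; the framing is assumed.

cost_per_drive_month = 5.00   # space + electricity per drive slot, per month
density_factor = 10           # one new drive holds what ten old drives held
new_drive_price = 100.00      # price of the replacement drive

# Retiring ten old drives frees ten slots but occupies one new slot.
savings_per_new_drive_month = cost_per_drive_month * (density_factor - 1)   # $45.00
savings_per_old_drive_month = savings_per_new_drive_month / density_factor  # $4.50

payback_months = new_drive_price / savings_per_new_drive_month
print(f"Savings per new drive:   ${savings_per_new_drive_month:.2f}/month")
print(f"Savings per old drive:   ${savings_per_old_drive_month:.2f}/month")
print(f"Payback on a $100 drive: {payback_months:.1f} months")
```

Under this framing one $100 drive earns back its cost in roughly two months rather than twenty-two, which may be the factor of ten the comment invites readers to check; counted per old drive retired ($4.50/month against a $10 share of the new drive), the answer comes out the same.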

Comment Re:Anything over 2TB should be ZFS... (Score 2) 239

Using JFS instead of ZFS is the biggest mistake for this build.

(Disclaimer: I work at Backblaze) - We no longer deploy new pods with JFS, but over half our fleet of 200 pods is still running JFS and we are perfectly happy with it. We worked through a couple of bugs related to large volumes, but beyond that our main reason for using EXT4 going forward is that in our application EXT4 is measurably faster than JFS, and it is reassuring to be on a filesystem that is used by more people and so (hopefully) has more of its bugs fixed.

Earlier we were totally interested in ZFS, as it would replace RAID & LVM as well (and ZFS gets great reviews). But (to my understanding) native ZFS is not available on Linux and we're not really looking to switch to OpenSolaris.

ANOTHER option down this line of thinking is switching to btrfs, but we haven't played with it yet.

Comment Re:Meh (Score 1) 239

The drives do not look to be hot swapable

(Disclaimer: I work at Backblaze) All SATA drives are inherently hot swappable, including the ones in the Backblaze pod. We have tried it, and it worked the few times we did. But for normal operations we shut the pod down completely to swap drives, for three reasons.

The first is that because the pods are stacked on top of each other and the drives are replaced from the top, we have to slide the pod halfway out of the rack like a drawer. It feels kinda wrong to slide servers around like that while the drives are spinning, so we avoid it (I have no proof it actually causes significant problems). Another reason is that with the top of the pod open the cooling airflow isn't the same, and some of the drives in the center start rising in temperature. This isn't fatal, but it puts you on a "timer" where you want to get the hot swap done within a reasonable amount of time (say 5 minutes) and get the pod closed back up again. Finally, it just seems safer to let the machine come up cleanly with the drive replaced.

For our application it doesn't matter at all; no customer can possibly know or care if one, two, or ten pods are offline during a reboot.

Comment Re:Original blog post (Score 1) 239

RAID6 uses 2 drives for data parity, so I believe you would need 3 drives out of 45 to fail within a week to actually lose data. I suspect they would shut a pod down if 2 drives in it failed at the same time.

(Disclaimer: I work at Backblaze) We have 3 RAID groups inside each 45-drive pod, and each RAID group is 15 drives. So you need 3 drive failures within one single 15-drive group to lose data. So... when the FIRST drive fails in a 15-drive RAID group, our software automatically stops accepting any more customer data on that particular group, and the management software puts the filesystem sitting on top of that RAID group into read-only mode. This may seem obvious in retrospect, but we found that writing to drives causes them to fail or pop out of RAID arrays more than 100 times more often than just keeping them spinning and reading the information off of them. By doing this the customers can still restore data from that pod, and we're pretty relaxed about replacing that particular drive sometime in the next few days.

When a second drive subsequently fails inside a pod, pagers start going off and a Backblaze employee starts driving towards the datacenter.
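A hypothetical sketch of the escalation policy described in these two paragraphs; the class, names, and messages below are my own illustration of the logic, not Backblaze's actual management code:

```python
# Toy model of the per-RAID-group escalation described above (assumed, illustrative).
from dataclasses import dataclass

@dataclass
class RaidGroup:
    name: str
    failed_drives: int = 0
    read_only: bool = False

    def on_drive_failure(self) -> str:
        self.failed_drives += 1
        if self.failed_drives == 1:
            # First failure: stop taking new customer data, keep serving restores.
            self.read_only = True
            return f"{self.name}: read-only; replace the drive in the next few days"
        if self.failed_drives == 2:
            # Second failure: one more loss in this RAID6 group means lost data.
            return f"{self.name}: PAGE the on-call; drive to the datacenter"
        return f"{self.name}: 3+ failures; data in this group is at risk"

group = RaidGroup("pod-17 / group-2")
print(group.on_drive_failure())  # read-only, relaxed replacement
print(group.on_drive_failure())  # pagers go off
```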

With that said, it is worth noting that multiple simultaneous drive failures in one pod are WAAAY more common than independent-failure statistics would indicate. If a SATA card fails, it has three SATA cables plugged into it leading to three separate port multipliers, and it is ultimately talking to 15 hard drives. So we'll see 15 drives simultaneously drop out of the RAID arrays in one pod, and it's pretty obvious what just happened. No big deal - it doesn't (necessarily) corrupt any data. I'm just mentioning that you can't take the random failure rate of one single drive and do straight multiplication to get to pod failure rates.
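To make the "straight multiplication" point concrete, here is a small Monte Carlo sketch; the failure probabilities are invented purely for illustration and are not Backblaze measurements:

```python
# Invented numbers: 15-drive group, 0.5% chance each drive independently drops
# out in some window, plus a 0.1% chance the shared SATA card / port multiplier
# path takes all 15 drives out at once.
import random

TRIALS = 200_000
DRIVES_PER_GROUP = 15
P_DRIVE = 0.005        # assumed independent per-drive drop-out probability
P_SHARED_PATH = 0.001  # assumed shared SATA-path failure probability

def dropouts(correlated: bool) -> int:
    if correlated and random.random() < P_SHARED_PATH:
        return DRIVES_PER_GROUP  # the whole group disappears together
    return sum(random.random() < P_DRIVE for _ in range(DRIVES_PER_GROUP))

def rate_of_three_plus(correlated: bool) -> float:
    return sum(dropouts(correlated) >= 3 for _ in range(TRIALS)) / TRIALS

print("3+ drives out (independent drives only):", rate_of_three_plus(False))
print("3+ drives out (with shared SATA path)  :", rate_of_three_plus(True))
# The shared-path term dominates even though 0.1% is tiny, which is why
# multiplying single-drive failure rates badly underestimates pod-level events.
```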

Comment Re:Anything over 2TB should be ZFS... (Score 4, Interesting) 239

... if you really care about the data.

(Disclaimer: I work at Backblaze) - If you really care about data, you *MUST* have end-to-end, application-level data integrity checks (it isn't just the hard drives that lose data!).

Let's make this perfectly clear: Backblaze checksums EVERYTHING on an end-to-end basis (mostly we use SHA-1). This is so important I cannot stress it highly enough: each and every file and portion of a file we store has our own checksum on the end, and we use this all over the place. For example, we pass over the data every week or so, reading it and recalculating the checksums, and if a single bit has been flipped we heal it up, either from our own copies of the data or by asking the client to re-transmit that file or part of that file.
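A minimal sketch of that checksum-and-verify loop in Python, using SHA-1 as the comment mentions; the layout (a digest file stored alongside each blob) is my own assumption, not Backblaze's actual storage format:

```python
# Illustrative end-to-end integrity check: store a SHA-1 digest next to each
# blob, then periodically re-read the blob and compare against the digest.
import hashlib
from pathlib import Path

def sha1_of(path: Path) -> str:
    h = hashlib.sha1()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def write_checksum(blob: Path, digest_file: Path) -> None:
    digest_file.write_text(sha1_of(blob))

def verify(blob: Path, digest_file: Path) -> bool:
    """Recompute SHA-1 and compare to the stored digest. A mismatch means some
    layer (disk, RAM, network) flipped a bit and the blob should be healed from
    another copy or re-requested from the client."""
    return sha1_of(blob) == digest_file.read_text().strip()
```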

At the scale of data we store, our checksums catch errors at EVERY level - RAM, hard drive, network transmission, everywhere. My guess is that consumers just do not notice when a single bit in one of their JPEG photos has been flipped and one pixel gets ever so slightly more red or something; only one photo changes out of a collection of thousands. But at our crazy number of files stored, we see it (and fix it) daily.
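As a toy illustration of why a single flipped bit can be invisible to a person (the pixel value below is made up):

```python
# Flipping the lowest bit of an 8-bit red channel shifts it by 1 out of 255,
# far below what the eye can see; flipping a high bit would be obvious, but
# either way only one pixel in one photo changes.
red = 0b10110100             # 180
print(red ^ 0b00000001)      # 181 - imperceptibly "more red"
print(red ^ 0b10000000)      # 52  - a visibly wrong pixel
```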
