
Comment Re:user-friendly? (Score 1) 398

Were you saying that having to right-click is difficult, or that any interface that relies on right-clicking to bring up a different interface is less than ideal?

I'd agree with the second, but I have no trouble right-clicking with my trackpad/Magic Mouse.

Spam

Submission + - Fined for spamming on Dutch social network site (google.com)

An anonymous reader writes: A spammer has been fined €12,000 for promoting his online web game via comments on user profiles on Hyves, the large Dutch social networking site. This is a first in the battle against social networking spam in the Netherlands. The Google translation is quite accurate.

Comment Re:More reason to be a ZFS fanboy (Score 2, Informative) 386

TCP overhead at 1GbE is negligible for a modern processor - you're only talking about processing 120MB/sec or so.
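A back-of-the-envelope check (a minimal sketch; the header sizes are the standard values for a 1500-byte MTU, assumed rather than measured):

```python
# Back-of-the-envelope: usable TCP payload rate on 1 GbE.
# Header sizes are standard Ethernet/IP/TCP values for a 1500-byte
# MTU; assumptions, not measurements.
LINE_RATE_BPS = 1_000_000_000        # 1 GbE line rate, bits/sec
MTU = 1500                           # bytes of IP packet per frame
IP_TCP_HEADERS = 40                  # 20 IP + 20 TCP, no options
ETH_OVERHEAD = 38                    # preamble + header + FCS + interframe gap

payload = MTU - IP_TCP_HEADERS       # 1460 data bytes per frame
wire = MTU + ETH_OVERHEAD            # 1538 bytes on the wire per frame

rate = LINE_RATE_BPS / 8 * payload / wire / 1e6
print(f"~{rate:.0f} MB/sec of TCP payload")   # ~119 MB/sec
```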

Here is a document including a pretty graph: http://media.netapp.com/documents/tr-3628.pdf

"...enabling the TCP Offload Engine (TOE) on the Linux hosts did not noticeably affect performance on the IBM blade side."

Comment Thinking differently (Score 1) 183

If the data is processed and lives in the cloud then bandwidth is no longer a major issue. As an example:

In one world, you could have the Exchange server's backups pushed out to a cloud provider. Getting the data out there takes many hours, and restoring it in the event of a problem is a challenge of its own, as the OP indicated.

Or...

Push the Exchange server and its data into a "cloud" provider instead. Now the clients access the data from the Exchange server in the "cloud", and the "cloud" provider makes DR copies of that data, at its own network bandwidth, to its properly managed data centres. The cloud provider could manage that Exchange server on your behalf or just provide the infrastructure.

Now when disaster strikes, the DR is performed in the "cloud", at local speeds.
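The difference is easy to put numbers on. A minimal sketch, where the 2TB dataset size and the link speeds are purely illustrative assumptions:

```python
# Hypothetical restore-time comparison. The 2 TB dataset and the link
# speeds are illustrative assumptions only.
DATASET_TB = 2.0

links_mbps = {
    "100 Mbit WAN (backups shipped off-site)": 100,
    "10 GbE inside the provider's data centre": 10_000,
}

for name, mbps in links_mbps.items():
    hours = DATASET_TB * 1e12 * 8 / (mbps * 1e6) / 3600
    print(f"{name}: {hours:.1f} hours")
```

A restore that takes nearly two days over the WAN takes under half an hour inside the provider's own network.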

I.e., why have any local services at all? [assuming security is covered elsewhere... a traditional challenge that exists even in internal data centres, where the local admin can access data and backups they shouldn't be able to]

If you're using your applications entirely in the cloud, it suddenly looks like a rather different problem: how do I get my apps into the cloud? How do I move my apps between cloud providers? How do I ensure my cloud provider is delivering an SLA that is appropriate for my business?

Comment Re:Why single out games? (Score 1) 358

We don't have HBO here, but we have Sky with Sky Sports. You pay for the Sports channel and get it, except for premium content (boxing matches, etc.) where you have to pay extra to see the event (PPV).

When I buy a car I'm told, "oh well, if you want bigger alloys and a bad-boy spoiler then you have to pay another £5,000".

Instead of complaining that once you've bought something, everything derived from it must be free (included), why not just be happy with whatever you get for what you've paid? If new content comes along that is compelling to you - buy it, or don't.

Comment Re:simple idea (Score 4, Interesting) 444

You're not likely to see 30k RPM drives any time soon. At 15k RPM the outer edge of a 3.5" platter is already moving at a meaningful fraction of the speed of sound, and the lion's share of the power consumed by 15k drives goes into counteracting the air buffeting the heads. With 2.5" platters we could spin faster, but while drives are filled with air it's not likely we'll see much change in the short term.
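For a rough sense of scale, a quick sketch; the 95mm platter diameter is an assumption (15k drives actually use smaller media inside the 3.5" case):

```python
import math

# Linear speed at the outer edge of a spinning platter. The 95 mm
# diameter is an assumption; real 15k drives use smaller media.
def edge_speed(diameter_m: float, rpm: float) -> float:
    """Circumference times revolutions per second, in m/s."""
    return math.pi * diameter_m * rpm / 60.0

for rpm in (7_200, 15_000, 30_000):
    print(f"{rpm:>6} RPM: {edge_speed(0.095, rpm):6.1f} m/s")
# 15k RPM works out to ~75 m/s at the rim, and aerodynamic drag power
# climbs roughly with the cube of that speed - which is where the
# power budget goes.
```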

It's why CD-ROM speeds haven't gone up much since the old days of 52x.

As areal density improves, drives will be able to push out more raw MB/sec - just as DVD is faster than CD - but IOPS are not likely to improve dramatically.
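To put a number on that (a sketch; the average seek time is an assumed typical figure for a 15k drive): random IOPS are bounded by seek time plus rotational latency, and areal density improves neither.

```python
# Why IOPS barely move while sequential MB/sec grows: random I/O time
# is dominated by mechanics. The seek time is an assumed typical figure.
AVG_SEEK_MS = 3.5                         # assumed 15k-class average seek
RPM = 15_000

rotational_latency_ms = 60_000 / RPM / 2  # half a revolution on average
service_ms = AVG_SEEK_MS + rotational_latency_ms
print(f"~{1000 / service_ms:.0f} random IOPS per spindle")   # ~182
```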

Comment Re:RAID is here to stay (Score 2, Interesting) 444

RAID 1 is much less reliable than RAID 6. Assume a typical case: one disk fails completely. You then start to reconstruct - in a RAID 1 scheme, a single sector error on the surviving disk causes the rebuild to fail. Not great.

In RAID 6 you start the rebuild and get a single sector error from one of the drives you're rebuilding from. At that point you've got another parity scheme available (the second parity that RAID 6 provides) that figures out what that sector should have been, and the rebuild continues. Then you go back and decide what to do about the drive that had the second error.

A lot of drive failures aren't full head crashes or motor failures but single-sector, single-track, dirt-on-the-platter style errors. Other than the affected area, the drive can still be read.

With RAID 6 you can lose two disks completely and still access the data. You're still reading from the same ten 10TB disks in your example, and if the RAID 6 implementation is efficient (RAID-DP, for example) you aren't reading additional data from the same physical disks.
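To see why single-parity schemes get fragile at this scale, here's a minimal sketch of the odds of hitting at least one unrecoverable read error (URE) while reading back the surviving disks during a rebuild; the per-bit error rate is an assumed spec-sheet figure:

```python
import math

# Probability of hitting at least one unrecoverable read error (URE)
# while reading N terabytes during a rebuild. The per-bit error rate
# is an assumed spec-sheet figure, not a measurement.
URE_PER_BIT = 1e-15                 # typical enterprise spec: 1 per 10^15 bits

def p_at_least_one_ure(terabytes: float) -> float:
    bits = terabytes * 1e12 * 8
    # -expm1/log1p keeps the arithmetic stable for tiny per-bit rates
    return -math.expm1(bits * math.log1p(-URE_PER_BIT))

for tb in (1, 10, 90):              # 90 TB ~ nine surviving 10TB disks
    print(f"{tb:>3} TB read: {p_at_least_one_ure(tb):.1%}")
```

At 10TB per disk, a single-parity rebuild across nine survivors is close to a coin flip - exactly the case the second parity covers.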

In the world you describe, with 10TB drives, it sounds like you'd simply not be able to use the disks at all, since any process that reads from them would kill them. A few things could happen:

1. Disks get more reliable. Hasn't happened much yet but...
2. We switch to different packaging. Instead of making disks larger we cram more of them into the same space similar to CPU cores - same MTBF per disk but lots of them presented out by one physical interface.
3. We change technologies completely. SSD (interesting failure modes there too... needs RAID)

I guess we'll find out in only a few years...

Comment Re:simple idea (Score 3, Informative) 444

They do, with varying degrees of success, but just because a disk can't read a particular sector doesn't mean the drive is faulty - it could be a transient error in the onboard controller causing the issue.

FC/SAS drives mostly leave error handling up to the array rather than doing it themselves, because the array can typically make better decisions about how to deal with the problem, and this helps with time-sensitive applications. The array can choose to issue additional retries, reboot the drive while continuing to use RAID to serve the data, etc.

Consumer SATA drives, on the other hand, try really hard to recover from the problem themselves - for example, retrying again and again with different methods to get the sector back. While admirable, that leads to the behaviour we see in consumer land where the PC just "locks up". The assumption is that there is no RAID available, so reporting an error back to the host is "a bad thing". The enterprise SATA drives we're seeing on the market are starting to disable this automatic recovery so that they behave correctly when inserted into RAID arrays.

Usually ;-)
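Purely as a toy illustration of those two firmware behaviours (a sketch; nothing here is a real drive or array API):

```python
import random

def drive_read(timeout_ms: int) -> bytes | None:
    """Simulated read of a marginal sector; may fail.
    (The timeout is decorative in this toy model.)"""
    return b"sector-data" if random.random() < 0.5 else None

def desktop_read() -> bytes:
    """Consumer firmware: retry indefinitely; the host appears to lock up."""
    while True:
        data = drive_read(timeout_ms=1000)
        if data is not None:
            return data

def array_read() -> bytes:
    """Array-friendly firmware: fail fast, let RAID serve the I/O instead."""
    for _ in range(2):                   # bounded, quick retries only
        data = drive_read(timeout_ms=100)
        if data is not None:
            return data
    return b"sector-data"                # reconstructed from the other drives
```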

Comment Re:Hardware RAID is dead (Score 2, Informative) 444

> First of all, "Hardware RAID" is still software, just executed by dedicated circuits. The distinction is kind of moot.

I'm not sure where in my post you saw anything about a comparison between Hardware RAID or Software RAID.

> So my guess is that you're not working for a storage vendor. I haven't seen many people switch to SW RAID recently.

I work for NetApp. I didn't think it mattered much in the post I made, though. To your second point: as all of the NetApp enterprise storage systems use software-based RAID, I can happily confirm that many hundreds of thousands of customers have switched to software RAID.

As you mentioned earlier, though, the point is moot: when you're delivering an enterprise array to a customer, it doesn't matter whether the array uses RAID cards from a 3rd-party vendor, RAID cards built in-house, or software RAID to write the data the customer gives you. The ingress point for the customer is a physical port (IP/FC, typically) and that port provides RAID capabilities. Maybe that's also hardware RAID?

Comment RAID is here to stay (Score 5, Insightful) 444

Disclaimer: I work for a storage vendor.

> FTA: The real fix must be based on new technology such as OSD, where the disk knows what is stored on it and only has to read and write the objects being managed, not the whole device
OSD doesn't change anything. The disk has failed. How has OSD helped?

> FTA: or something like declustered RAID
Just skimming that document, it seems to claim: reconstruct only live data, not free space, and use a parity scheme that limits the damage. Enterprise arrays with native filesystem virtualisation (WAFL, for example) already do this. RAID 6 arrays do this.

Let's recap. Physical devices, SSDs included, will fail. You need to be able to recover from failure, which could be as bad as the entire device failing or as small as a single unreadable sector. In the former case a RAID reconstruct will recover the data, but with that much raw data to read you risk hitting further errors during the rebuild. Enterprise arrays mitigate that risk by using RAID 6, and they could even recover the data from a DR-mirrored system as part of the recovery scheme.

And when RAID 6 carries a high enough risk that it's worth expanding the scheme, everyone will switch from double-parity to triple-parity schemes, since they're much less expensive in terms of spindle count than RAID 6+1.
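A quick spindle count shows why (a sketch; the 17-disk group size is an arbitrary illustrative choice):

```python
# Spindle cost of protecting 17 data disks under each scheme. The
# group size is an arbitrary illustrative choice.
DATA_DISKS = 17

schemes = {
    "RAID 6 (double parity)":     DATA_DISKS + 2,        # 2 parity drives
    "Triple parity":              DATA_DISKS + 3,        # 3 parity drives
    "RAID 6+1 (mirrored RAID 6)": (DATA_DISKS + 2) * 2,  # two full copies
}

for name, total in schemes.items():
    print(f"{name:<28} {total:>2} drives ({total - DATA_DISKS} overhead)")
```

Going from double to triple parity costs one extra spindle; mirroring the whole RAID 6 group costs nineteen.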

One assumption is that, at some point in the future, reconstruction will be a continually running background task, just like any other background task that enterprise arrays handle. As long as there is enough resiliency and performance isn't impacted, it doesn't matter whether a disk is being rebuilt.
