Data Storage

Best Solutions For Massive Home Hard Drive Storage? 609

i_ate_god writes "I download a lot of 720/1080p videos, and I also produce a lot of raw uncompressed video. I have run out of slots to put hard drives in across two computers. I need (read: want) access to my files at all times (over a network is fine), especially since I maintain a library of what I've got on the TV computer. I don't want swappable USB drives; I want all hard drives available all the time on my network. I'm assuming that, since it's on a network, I won't need 15,000 RPM drives, and thus I'm hoping a solution exists that can be moderately quiet and/or hidden away somewhere and still keep somewhat cool. So Slashdot, what have you done?"
Data Storage

Software SSD Cache Implementation For Linux? 297

Annirak writes "With the bottom dropping out of the magnetic disk market and SSD prices still over $3/GB, I want to know if there is a way to get the best of both worlds. Ideally, a caching algorithm would store frequently used sectors, or sectors used during boot or application launches (hot sectors), to the SSD. Adaptec has a firmware implementation of this concept, called MaxIQ, but this is only for use on their RAID controllers and only works with their special, even more expensive, SSD. Silverstone recently released a device which does this for a single disk, but it is limited: it caches the first part of the magnetic disk, up to the size of the SSD, rather than caching frequently used sectors. The FS-Cache implementation in recent Linux kernels seems to be primarily intended for use with NFS and AFS, without much provision for speeding up local filesystems. Is there a way to use an SSD to act as a hot sector cache for a magnetic disk under Linux?"
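The hot-sector idea the submitter describes (promote the most frequently read sectors to the SSD, serve everything else from the slower magnetic disk) can be sketched as a toy Python model. This is purely illustrative and has no relation to the MaxIQ firmware or any kernel implementation; the class and parameter names are invented.

```python
from collections import Counter

class HotSectorCache:
    """Toy model of a frequency-based SSD cache: the N most-read
    sectors live on the (fast) SSD; everything else is served
    from the (slow) magnetic disk."""

    def __init__(self, ssd_sectors):
        self.ssd_sectors = ssd_sectors   # capacity of the SSD, in sectors
        self.read_counts = Counter()     # access frequency per sector
        self.cached = set()              # sectors currently on the SSD

    def read(self, sector):
        self.read_counts[sector] += 1
        hit = sector in self.cached
        # Re-derive the hot set: the most frequently read sectors win.
        hottest = {s for s, _ in self.read_counts.most_common(self.ssd_sectors)}
        self.cached = hottest
        return hit

cache = HotSectorCache(ssd_sectors=2)
workload = [5, 5, 9, 5, 9, 3, 5, 9]   # sectors 5 and 9 are "hot"
hits = sum(cache.read(s) for s in workload)   # repeat reads of 5 and 9 hit the SSD
```

A real implementation would of course track frequency incrementally and migrate data in the background rather than recomputing the hot set on every read; the point is only that hot sectors, not the first N sectors, end up on the SSD.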
Data Storage

Open Source Deduplication For Linux With Opendedup 186

tazzbit writes "The storage vendors have been crowing about data deduplication technology for some time now, but a new open source project, Opendedup, brings it to Linux and its hypervisors — KVM, Xen and VMware. The new deduplication-based file system called SDFS (GPL v2) is scalable to eight petabytes of capacity with 256 storage engines, which can each store up to 32TB of deduplicated data. Each volume can be up to 8 exabytes and the number of files is limited by the underlying file system. Opendedup runs in user space, making it platform independent, easier to scale and cluster, and it can integrate with other user space services like Amazon S3."
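The core mechanism a deduplicating filesystem like SDFS relies on (store each unique chunk exactly once, keyed by a content hash, and describe files as lists of chunk hashes) can be illustrated with a toy Python sketch. The chunk size and class names here are invented for illustration and are not taken from Opendedup.

```python
import hashlib

class DedupStore:
    """Toy block-level deduplication: identical chunks are stored once,
    keyed by their SHA-256 hash; files become lists of chunk hashes."""

    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.chunks = {}   # hash -> chunk bytes (stored only once)
        self.files = {}    # filename -> list of chunk hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)   # dedup: keep only new chunks
            hashes.append(h)
        self.files[name] = hashes

    def read(self, name):
        return b"".join(self.chunks[h] for h in self.files[name])

store = DedupStore()
store.write("a", b"AAAABBBBAAAA")   # chunk "AAAA" appears twice
store.write("b", b"AAAACCCC")       # ...and again in a second file
unique = len(store.chunks)          # only 3 unique chunks stored
```

This also shows why dedup shines for hypervisor images: dozens of near-identical guest disks collapse into one set of shared chunks.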
Linux

Best Backup Server Option For University TV Station? 272

idk07002 writes "I have been tasked with building an offsite backup server for my university's television station to back up our Final Cut Pro Server and our in-office file server (a Drobo), in case the studio spontaneously combusts. Total capacity between these two systems is ~12TB. Not at all full yet, but we would like the system to have the same capacity so that we can get maximum life out of it. It looks like it would be possible to get rack space somewhere on campus with Gigabit Ethernet and possibly fiber coming into our office. Would a Linux box with rsync work? What is the sweet spot between value and longevity? What solution would you use?"
Data Storage

Build Your Own $2.8M Petabyte Disk Array For $117k 487

Chris Pirazzi writes "Online backup startup BackBlaze, disgusted with the outrageously overpriced offerings from EMC, NetApp and the like, has released an open-source hardware design showing you how to build a 4U, RAID-capable, rack-mounted, Linux-based server using commodity parts that contains 67 terabytes of storage at a material cost of $7,867. This works out to roughly $117,000 per petabyte, which would cost you around $2.8 million from Amazon or EMC. They have a full parts list and diagrams showing how they put everything together. Their blog states: 'Our hope is that by sharing, others can benefit and, ultimately, refine this concept and send improvements back to us.'"
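The headline arithmetic is easy to sanity-check (decimal units assumed, i.e. 1 PB = 1,000 TB):

```python
# Reproduce the article's numbers: a 67 TB pod for $7,867 in parts.
pod_capacity_tb = 67
pod_cost_usd = 7_867
tb_per_petabyte = 1_000

pods_per_pb = tb_per_petabyte / pod_capacity_tb   # ~14.9 pods per petabyte
cost_per_pb = pod_cost_usd * pods_per_pb          # ~$117,000 per petabyte
```

That's material cost only; the blog post itself notes it excludes power, rack space, and the labor of assembling and operating the pods.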

Building a 10 TB Array For Around $1,000 227

As storage hardware costs continue to plummet, the folks over at Tom's Hardware have decided to throw together their version of the "Über RAID Array." While the array still doesn't stack up against SSDs for access time, a large array is capable of higher throughput via striping. Unfortunately, the amount of work required to assemble a setup like this seems to make it too much trouble for anything but a fun experiment. "Most people probably don't want to install more than a few hard drives into their PC, as it requires a massive case with sufficient ventilation as well as a solid power supply. We don't consider this project to be something enthusiasts should necessarily reproduce. Instead, we set out to analyze what level of storage performance you'd get if you were to spend the same money as on an enthusiast processor, such as a $1,000 Core i7-975 Extreme. For the same cost, you could assemble twelve 1 TB Samsung Spinpoint F1 hard drives. Of course, you still need a suitable multi-port controller, which is why we selected Areca's ARC-1680iX-20."
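The throughput gain comes from striping: consecutive stripes rotate across the disks, so a large sequential read keeps every spindle busy at once. A minimal sketch of the RAID-0 address mapping, assuming a simple rotating layout (real controllers add parity rotation, chunk-size tuning, and so on):

```python
def stripe_location(lba, n_disks, stripe_sectors):
    """Map a logical block address to (disk, local address) in a
    simple RAID-0 stripe layout."""
    stripe_index = lba // stripe_sectors   # which stripe the block falls in
    offset = lba % stripe_sectors          # position within that stripe
    disk = stripe_index % n_disks          # stripes rotate across disks
    local = (stripe_index // n_disks) * stripe_sectors + offset
    return disk, local

# With 12 disks and 128-sector stripes, reading 12 consecutive stripes
# touches all 12 disks, so sequential throughput scales with spindle count.
```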
Netscape

Netscape Alums Tackle Cloud Storage 62

BobB-nw writes "A new cloud storage vendor is entering the market, promising an enterprise-class file system with snapshots, replication, and other features designed to simplify adoption for existing users and applications. Zetta, founded in 2007 by veterans of Netscape, has $11 million in funding and is coming out of stealth mode Monday with Enterprise Cloud Storage, a Web-based storage platform that will compete against Amazon's Simple Storage Service and a growing number of cloud vendors. Zetta's goal was to build a Web-based storage system that would be accepted by enterprise IT professionals for storing primary data. 'Data growth rates are staggering. In businesses you see growth rates of 40 to 60 percent year over year,' says CEO Jeff Treuhaft, a Zetta cofounder and formerly one of Netscape's first employees. Another Zetta cofounder is Lou Montulli, an early Netscape employee who invented Web cookies."
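A quick compound-growth calculation puts Treuhaft's figures in perspective: at 40 to 60 percent year-over-year growth, stored data doubles roughly every one and a half to two years.

```python
import math

# Doubling time under compound growth: solve (1 + r)^t = 2 for t.
low = math.log(2) / math.log(1.40)    # ~2.1 years at 40% annual growth
high = math.log(2) / math.log(1.60)   # ~1.5 years at 60% annual growth
```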
Data Storage

The Hairy State of Linux Filesystems 187

RazvanM writes "Do OSes really shrink? Perhaps the user space (MySQL, CUPS) is getting slimmer, but what about the internals? Using the number of external calls between the filesystem modules and the rest of the Linux kernel as a metric, I argue that this is not the case. The evidence is a graph that shows the evolution of 15 filesystems from 2.6.11 to 2.6.28, along with the current state (2.6.28) for 24 filesystems. Some filesystems that stand out are: nfs, for leading in both number of calls and speed of growth; ext4 and fuse, for their above-average speed of growth; and 9p, for its roller-coaster path."
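One rough proxy for this metric, assuming compiled modules are at hand: the undefined symbols in a filesystem module are precisely the kernel functions it calls out to. A sketch that counts them from nm-style output (the sample lines below are invented for illustration, not real ext4 output):

```python
# Undefined symbols ("U" lines from `nm -u <module>.ko`) approximate the
# external calls a filesystem module makes into the rest of the kernel.
nm_output = """\
                 U kfree
                 U kmalloc_caches
                 U mutex_lock
                 U mutex_unlock
"""

external_calls = [line.split()[-1]
                  for line in nm_output.splitlines()
                  if line.strip().startswith("U ")]
# len(external_calls) is the module's external-call count for the metric
```

Tracking that count across kernel releases, as the graph does, shows whether a filesystem's coupling to the rest of the kernel is growing or shrinking.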
Data Storage

Why Mirroring Is Not a Backup Solution 711

Craig writes "Journalspace.com has fallen and can't get up. The post on their site describes how their entire database was overwritten through either some inconceivable OS or application bug, or more likely a malicious act. Regardless of how the data was lost, their undoing appears to have been that they treated drive mirroring as a backup and have now paid the ultimate price for not having point-in-time backups of the data that was their business." The site had been in business since 2002 and had an Alexa traffic rank of 106,881. Quantcast said they had 14,000 monthly visitors recently. No word on how many thousands of bloggers' entire output has evaporated.
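The distinction that sank them is easy to demonstrate: a mirror faithfully replicates a destructive write, while a point-in-time snapshot does not. A toy Python illustration (not modeling any particular RAID or backup product):

```python
import copy

# Why a mirror is not a backup: a destructive write propagates to the
# mirror instantly, while point-in-time snapshots keep the old state.
primary = {"posts": ["entry 1", "entry 2"]}
mirror = {}
snapshots = []

def replicate():
    """RAID-1 style: the mirror always tracks the primary, good or bad."""
    global mirror
    mirror = copy.deepcopy(primary)

def snapshot():
    """Backup style: an immutable copy of a past state."""
    snapshots.append(copy.deepcopy(primary))

snapshot()
replicate()
primary["posts"].clear()   # the fatal bug or malicious act
replicate()                # ...is faithfully mirrored

# mirror["posts"] is now empty too; snapshots[0]["posts"] still has the data
```

Mirroring protects against a failed disk; only point-in-time copies protect against a bad write.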
Data Storage

Long-Term Personal Data Storage? 669

BeanBagKing writes "Yesterday I set out in search of a way to store my documents, videos, and pictures for a long time without worrying about them. This is stuff that I may not care about for years; I don't care where it is, or if it's immediately available, so long as when I do decide to get it, it's there. What did I come up with? Nothing. Hard drives can fail or degrade. CDs and DVDs, I've read, have the same problem over long periods of time. I'd rather not pay yearly rent on a server or backup/storage solution. I could start my own server, but that goes back to the issue of hard drives failing, not to mention cost. Tape backups aren't common for personal use, making far-future retrieval potentially difficult, not to mention the low storage capacity of tape drives. I've thought about buying a bunch of 4GB thumb drives; I've had some of those for years and even sent a few through washers and dryers and had the data survive. Do you have any suggestions? My requirements are simple: it must be stable, lasting for decades if possible, and must be as inexpensive as possible. I'm not looking to start my own national archive; I have less than 500 GB and only save things important to me."
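Whichever medium wins, every option listed degrades silently, so a long-term archive also needs a way to detect bit rot. A minimal checksum-manifest sketch in Python (the helper names are invented; any strong hash would do):

```python
import hashlib
import pathlib

def build_manifest(root):
    """Record a SHA-256 digest for every file under `root`, so future
    copies of the archive can be verified against bit rot."""
    manifest = {}
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(root))
            manifest[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def verify(root, manifest):
    """Return the files whose contents no longer match the manifest."""
    return [name for name, digest in manifest.items()
            if hashlib.sha256(
                (pathlib.Path(root) / name).read_bytes()).hexdigest() != digest]
```

Store the manifest alongside (and apart from) the data; re-running `verify` on each copy every year or two tells you when it's time to refresh a failing medium.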
GNOME

OpenSolaris 2008.11 – Year of the Laptop? 223

Ahmed Kamal writes "Is Linux getting too old for you? Are you interested in seeing what other systems, such as OpenSolaris, have to offer? OpenSolaris has some great features, such as ZFS and DTrace, which make it a great server OS — but how do you think it will fare on a laptop? Let's take an initial look at the most recent OpenSolaris 2008.11 pre-release on recentish laptop hardware."
Data Storage

Why RAID 5 Stops Working In 2009 803

Lally Singh recommends a ZDNet piece predicting the imminent demise of RAID 5, noting that increasing storage and non-decreasing probability of disk failure will collide in a year or so. This reader adds, "Apparently, RAID 6 isn't far behind. I'll keep the ZFS plug short. Go ZFS. There, that was it." "Disk drive capacities double every 18-24 months. We have 1 TB drives now, and in 2009 we'll have 2 TB drives. With a 7-drive RAID 5 disk failure, you'll have 6 remaining 2 TB drives. As the RAID controller is busily reading through those 6 disks to reconstruct the data from the failed drive, it is almost certain it will see an [unrecoverable read error]. So the read fails ... The message 'we can't read this RAID volume' travels up the chain of command until an error message is presented on the screen. 12 TB of your carefully protected — you thought! — data is gone. Oh, you didn't back it up to tape? Bummer!"
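The article's argument is straightforward arithmetic: at the common consumer-drive spec of one unrecoverable read error per 10^14 bits, reading six 2 TB survivors end to end during a rebuild expects roughly one URE, so a clean rebuild becomes the less likely outcome.

```python
# The ZDNet argument in numbers: rebuilding a 7-drive RAID 5 after one
# failure means reading the remaining 6 x 2 TB drives flawlessly.
ure_rate = 1e-14              # spec: 1 unrecoverable error per 1e14 bits read
bits_to_read = 6 * 2e12 * 8   # six 2 TB survivors, in bits (decimal TB)

expected_errors = bits_to_read * ure_rate          # 0.96 expected UREs
p_clean_rebuild = (1 - ure_rate) ** bits_to_read   # ~0.38 chance of no URE
```

Enterprise drives rated at one error per 10^15 bits push that failure chance back down, which is part of why vendors charge for them; RAID 6 survives the first URE but, as drives keep growing, only buys time.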
Data Storage

Ext4 Advances As Interim Step To Btrfs 510

Heise.de's Kernel Log has a look at the ext4 filesystem as Linus Torvalds has integrated a large collection of patches for it into the kernel main branch. "This signals that with the next kernel version 2.6.28, the successor to ext3 will finally leave behind its 'hot' development phase." The article notes that ext4 developer Theodore Ts'o (tytso) is in favor of ultimately moving Linux to a modern, "next-generation" file system. His preferred choice is btrfs, and Heise notes an email Ts'o sent to the Linux Kernel Mailing List a week back positioning ext4 as a bridge to btrfs.
Data Storage

What To Do With a Hundred Hard Drives? 487

Makoto916 writes "In five years with my current employer as the IT administrator, I've amassed a sizable cabinet of discarded hard drives; just shy of 100, in fact. All of the drives range in size from 20GB up to 300GB. They've all been stored in anti-static bags, and spot checks of even the oldest ones show that most of them still work. Individually, they're mostly useless for our line of work, which is digital video production. However, the collective storage potential is quite significant. They are of varying size and speed, but the one commonality is they're all IDE. What is the best way to approach connecting all of these devices and realizing their storage potential? On a budget, of course. Now, I'd never use such an array for critical data storage, but it certainly would be useful as a massive backup array to our existing SAN that does store critical data. I have several spare and functioning PCs, but not nearly enough to utilize their internal IDE controllers; even with multiple add-in controllers, it still wouldn't be enough. Not to mention the nightmare of managing a bunch of independent PCs. I've looked into ATA Over Ethernet and there's a lot of potential there, but current 15 to 20 bay AoE cabinets are expensive, and single device enclosures are so rare that they're also expensive. Are there any hardware hackers out there who have crafted their own home-brew AoE systems? Could they scale to 100 drives? Is there a better way?"
