A Look at FreeNAS Server
NewsForge (Also owned by VA) has a quick look at FreeNAS, an open source network attached storage server that can be deployed on pretty much any old PC you have sitting around the house. From the article: "The software, which is based on FreeBSD, Samba, and PHP, includes an operating system that supports various software RAID models and a Web user interface. The server supports access from Windows machines, Apple Macs, FTP, SSH, and Network File System (NFS), and it takes up less than 16MB of disk space on a hard drive or removable media."
I know it costs money.... (Score:5, Informative)
Dedicated solutions are often better. (Score:5, Informative)
Dedicated storage systems are often designed to minimize the amount of power they consume. Some use several ARM or MIPS CPUs, which can offer suitable processing capability without the immense energy consumption of even a single x86 chip. The dedicated hardware itself is also designed to eliminate unnecessary circuitry.
For users who run hundreds of these machines, the energy savings of a dedicated system often far outweigh the initial savings of going with a PC/FreeNAS-style combination. Even smaller-scale users with only a single machine will notice the savings if they keep the system in service for several years.
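As a back-of-envelope sketch of that claim (the 100W and 20W figures below are illustrative assumptions, not numbers from the article):

```shell
# Rough yearly energy savings of a dedicated NAS over an always-on PC.
# The wattages are assumptions for illustration; plug in your own hardware's.
pc_w=100        # hypothetical always-on x86 PC
nas_w=20        # hypothetical ARM/MIPS appliance
hours=8760      # hours in a year
kwh_saved=$(( (pc_w - nas_w) * hours / 1000 ))
echo "${kwh_saved} kWh saved per year"
```

At typical residential electricity rates that is a real, if modest, sum per machine, and it multiplies quickly across hundreds of boxes.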
Re:NAS (Score:3, Informative)
Re:NAS (Score:3, Informative)
Re:NAS (Score:5, Informative)
A lot of the SOHO NAS boxes run on ARM processors, which are power efficient yet still able to handle the basic I/O needs of a NAS box. Granted, SOHO NAS boxes aren't meant for large companies or large workgroups, but they fit in as a departmental file server for testing or near-distance storage.
Higher-end NAS boxes do use more powerful servers to handle 1+ Gigabit Ethernet connections, iSCSI or Fibre Channel, multiple PCI-X buses or multiple 4-8x PCI Express slots, and large amounts of RAM for caching and such. For instance, the latest corporate NAS boxes from Snap/Adaptec use Opteron processors.
I've run a small workgroup file server off a Pentium Pro 200/256K with 256MB of RAM and several 9GB SCSI drives in RAID-5, and the bottleneck was definitely the two 100Mbps Ethernet connections. Of course, YMMV.
Re:I know it costs money.... (Score:4, Informative)
It runs Linux out of the box, but I've flashed mine to run a full Debian system; the only real drawback is its 32MB of RAM. It's attached to three USB drives in a software RAID, a CD storage device, and a thumb drive (which holds the main system, so the disks don't get hit by every cron job). Plus, plugging my digital camera into it downloads all the photos into dated directories on the 'photos' share. It also serves some web pages, mainly a CGI interface for ejecting disks from the CD storage device.
Works well for me, and it's a reasonably cheap and physically small (though very underpowered, CPU- and memory-wise) Linux machine with two USB ports and a network port.
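A minimal sketch of the "dated directories" photo import described above (the function name and paths are my own illustration, not the poster's actual hotplug script):

```shell
# import_photos SRC DEST: copy camera files from SRC into DEST/YYYY-MM-DD/.
# A hypothetical stand-in for the camera-import hook the comment describes.
import_photos() {
    day_dir="$2/$(date +%F)"
    mkdir -p "$day_dir"
    cp -n "$1"/* "$day_dir"/   # -n: don't clobber files already imported today
}
# e.g. hooked to a USB hotplug event:
#   import_photos /mnt/camera /shares/photos
```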
Not necessarily.... (Score:4, Informative)
Arrrg! Samba is not acceptable for macs! (Score:5, Informative)
Look. Just because Mac OS X supports SMB does not mean that SMB is an acceptable solution for file serving to Mac OS X clients.
Netatalk has some crankiness of its own (and if you run Debian/Ubuntu, you need to rebuild the Debian package with SSL support or passwords are transmitted in the clear, thanks to the OpenSSL/GPL licensing idiocy), but it doesn't have nearly the basic functionality problems Samba does for Macs.
Side note: it looks like they "borrowed" the complete user interface from m0n0wall... and they MIGHT use Netatalk; Googling turned up some hints that Netatalk might be built in.
Re:Dedicated solutions are often better. (Score:3, Informative)
If you have a "overclocking" mobo, you can probably quite easily underclock it as well. If, on the other hand, your mobo says "Dell" on it, then you probably don't have access to the BIOS screens necessary to do that. You can find Windows software that might be able to do the job (depending upon chipset), but who runs Windows for a NAS server?
That said, modern hardware is better at this. Taking an Athlon 64, cranking the clock speed down by a factor of 10, and dropping the core voltage is likely to be a lot more efficient than taking an old 400MHz P-II and halving its clock speed. In other words, you throw more expensive hardware at the problem in order to consume less power.
This is just an educated guess, though. I am not an expert.
Re:OpenFiler? (Score:3, Informative)
naslite (Score:2, Informative)
Re:OK, a serious question (Score:4, Informative)
This one [buffalotech.com], for example. I should know; I've written reviews for a dozen or so of these things...
samba doesn't do +2GB (Score:2, Informative)
...and useless because Samba has a maximum file size of just 2GB, whereas AppleShare 3.1 supports a maximum [volume and file] size of 8TB.
Re:samba doesn't do +2GB (Score:5, Informative)
Google is your friend (Score:4, Informative)
Knoppix and OpenAFS [kom.aau.dk].
Tell me how well it works.
Re:Cheap hardware anyone? (Score:1, Informative)
Re:Or the much better NASLite (Score:2, Informative)
Re:Dedicated solutions are often better. (Score:3, Informative)
Who told you that? Maybe for the little tiny junior-grade ones that you can buy from Linksys and whatnot... but if you have more than a couple of hard drives, odds are that the product you're dealing with is made up of a bunch of commodity PC components in a custom rackmount case. That's no less true of a 1U box than a 4U one.
For instance, I've got a 1U Maxtor server that came with maybe a terabyte (it has four drive bays), and it's a 1U Socket 370 Celeron system.
Nothing except the cutesy little baby NAS devices is designed to be especially low power, and if you're using software RAID, you really want plenty of CPU power, especially if you're doing RAID5.
Re:NAS (Score:3, Informative)
First, make sure your rebuild isn't being throttled: cat /proc/sys/dev/raid/speed_limit_max prints the maximum speed (in KB/sec) at which the array will rebuild. Use something like echo 100000 > /proc/sys/dev/raid/speed_limit_max to set it high enough that throttling won't occur.
Second, the limiting factor in your rebuild speed will be (in decreasing order of likelihood): bus bandwidth, individual disk performance, disk controller or driver bugs/quirks/limitations, and CPU speed. If you have PCIe or PCI-X disk controllers (unlikely), you may hit the limits of individual disk performance before you run out of bus bandwidth. With a 2.4GHz P4, I can pretty confidently say your bottleneck will never be the CPU.
In your case, your machine almost certainly has all the drives hanging off a single 33MHz, 32-bit PCI bus, so the absolute upper limit on your array's performance is going to be around 120MB/s, and that assumes the machine isn't doing anything else except rebuilding the array. You're only ever likely to see that sort of performance from long, sequential reads, however (dd if=/dev/md0 of=/dev/null type of thing).
(This is a rough overview.) With four drives, you have roughly 30MB/s per drive maximum, so your best-case RAID rebuild speed will be about 30MB/s, assuming your drives can sustain 30MB/s for both reads and writes across their entire surface. Rebuilding a RAID5 array involves reading data and parity from N-1 drives and writing the reconstructed data to the Nth drive; in other words, you have to completely reconstruct a single disk. At 30MB/s, it should take about 250000/30/60 ~= 138 minutes to copy the 250GB necessary to reconstruct that disk.
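The arithmetic above can be sanity-checked in a couple of lines (the 250GB and 30MB/s figures come straight from this thread):

```shell
# Check the bus ceiling and best-case rebuild time quoted above.
bus_mb=$(( 33 * 4 ))                # 33MHz x 32-bit (4-byte) PCI: ~132MB/s theoretical peak
rebuild_min=$(( 250000 / 30 / 60 )) # 250GB disk at a sustained 30MB/s, in minutes
echo "bus ceiling ~${bus_mb}MB/s, rebuild ~${rebuild_min} min"
```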
(Tech-savvy readers should realise at this point why hardware RAID is theoretically faster than software RAID, particularly for average PCs and/or large numbers of drives: a hardware controller moves each block across the host bus once, while software RAID must push every member disk's traffic over the shared bus. It has nothing to do with the "overhead" of calculating RAID5/6 parity.)
Note that 138 minutes is a best-case scenario, so it taking 36 hours could be explained by you using the system at the same time, automated system maintenance running, less-than-stellar drivers, etc. With such a massive difference between "should be" and "was", however, I'd examine the individual components pretty closely to see where the bottleneck is. What's the maximum performance you can get from each individual drive (use hdparm and dd)? How about from the entire array?
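A sketch of those per-device checks (the device names in the comments are examples; substitute your own, and hdparm -t gives a similar per-drive number):

```shell
# seq_read DEV: time a sequential read from a device (or file) with dd and
# print dd's throughput summary line from stderr.
seq_read() {
    dd if="$1" of=/dev/null bs=1M count=256 2>&1 | tail -n 1
}
# Compare each member drive against the whole array, e.g.:
#   seq_read /dev/sda
#   seq_read /dev/md0
```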
Re:::shakes head:: (Score:3, Informative)
I haven't dealt with software RAID enough to know how accurate that statement is.
Speaking as someone who has moved a single array between about five machines over its lifetime, including from kernel 2.4 machines to kernel 2.6 machines, I'd say that if you've got a Linux software RAID array, it'll probably work on any Linux machine you can find to plug it into (assuming that machine has appropriate drivers for the disk controllers and support for the RAID level).
Re:OK, a serious question (Score:3, Informative)
You missed that one. I have a foster home. I load the MP3s, drivers, and photos on a NAS instead of on some local drive. I also put the My Documents folders on the NAS. The shares are password protected, and I can use any machine handy to access them. Sometimes I use a laptop. Sometimes I use a desktop with a CD burner for a change of tunes for the car. All the special drivers for the various machines are on the NAS, which helps in a rebuild since I don't have to find all the driver disks for everything. Sometimes I'm out in the garage and want some tunes other than what the local DJ wants to dribble all over; a laptop with wireless brings the tunes out. A NAS makes a lot of sense if you are not a bachelor with a single PC. It is a whole lot cheaper than buying a bunch of 160GB drives for all the PCs; I can keep the machines running on the 15-60GB drives they already have.
My NAS draws 15 watts, and the hard drive powers down after 20 minutes of inactivity. It has no fan. Why would I want to leave a PC on 24x7 to share a few files? The NAS is also encrypted. If it is stolen, the removal of power unmounts the encrypted partition, which can only be remounted by providing the encryption key through a password-protected web interface. That is much safer than data on a local drive. The NAS box is also blocked from the Web by my NAT router, so it can't be directly attacked from the Internet; an attacker would have to compromise one of the other machines first. It adds a layer of security to the data.