A Look at FreeNAS Server

NewsForge (Also owned by VA) has a quick look at FreeNAS, an open source network attached storage server that can be deployed on pretty much any old PC you have sitting around the house. From the article: "The software, which is based on FreeBSD, Samba, and PHP, includes an operating system that supports various software RAID models and a Web user interface. The server supports access from Windows machines, Apple Macs, FTP, SSH, and Network File System (NFS), and it takes up less than 16MB of disk space on a hard drive or removable media."
  • by j2crux ( 969051 ) on Tuesday May 30, 2006 @01:46PM (#15429541)
    But I fell in love with something called a Kuro-box. Here's a link: http://kurobox.com/revolution/what.html [kurobox.com] From the site: The KuroBox is a small-footprint Linux-based embedded platform for a personal server. The current incarnation of the KuroBox, the KuroBox/HG, sports a 266MHz PowerPC processor, 128MB of RAM, 2 USB 2.0 ports, and a 10/100/1000Mbit network interface. I got mine off eBay (with a 250GB HDD) for ~$200, and I couldn't be happier!
  • by Anonymous Coward on Tuesday May 30, 2006 @01:46PM (#15429544)
    What most people forget about these kinds of systems is that they have fairly hefty power consumption. Until the past year or so, desktop manufacturers placed very little emphasis on truly minimizing power consumption. They do manage to hold it within reason, but often that's not enough.

    Dedicated storage systems are often designed to minimize the amount of power they consume. Some use several ARM or MIPS CPUs, which can offer suitable processing capability without the immense energy consumption of even a single x86 chip. The dedicated hardware itself is designed to eliminate unnecessary circuitry.

    When it comes to users who have hundreds of these machines, the energy savings of a dedicated system often far outweigh the initial savings of going with a PC/FreeNAS-style combination. Even smaller-scale users, who may only have a single machine, will notice the savings if they choose to use their system for several years.

  • Re:NAS (Score:3, Informative)

    by nharmon ( 97591 ) on Tuesday May 30, 2006 @01:52PM (#15429581)
    Normally, no. The article mentioned setting up a software RAID 5 array. This still probably wouldn't overwhelm a half-decent processor (400MHz+), unless one of the drives had to be replaced. Then the processor will be swamped while it rebuilds.
  • Re:NAS (Score:3, Informative)

    by Orange Crush ( 934731 ) on Tuesday May 30, 2006 @01:54PM (#15429607)
    Why? All it's doing is serving up files via Samba shares. I have 20 clients connected to a Debian/Samba box with a 1GHz P3, 1GB of RAM, and a couple of 80GB IDE drives (no RAID or anything) . . . not under much strain at all, actually. I know intensive IDE transactions need a lot of CPU, but we're talking about shared office docs. I can't imagine drive operations getting all that intensive when the major bottleneck in this case is going to be the 100Mbit Ethernet card.
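
    A setup like the one described needs only a minimal smb.conf. This is a sketch, not the poster's actual config; the share name, path, and group are placeholder values:

    ```ini
    [global]
       workgroup = OFFICE
       server string = Debian file server
       security = user

    [docs]
       ; placeholder path for a shared-documents directory
       path = /srv/share/docs
       read only = no
       valid users = @staff
    ```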
  • Re:NAS (Score:5, Informative)

    by questionlp ( 58365 ) on Tuesday May 30, 2006 @01:56PM (#15429626) Homepage
    I think the bottleneck will first be your network connection (particularly if it's 100Mbps). With Gigabit Ethernet, your hard drives or drive array would be the next bottleneck (especially if your network and storage controllers are on the same PCI bus).

    A lot of the SOHO NAS boxes run off of ARM processors, which are power-efficient yet able to handle the basic I/O needs of a NAS box. Granted, SOHO NAS boxes aren't meant for large companies or large workgroups, but they would fit in as a departmental file server for testing or near-line storage.

    Higher-end NAS boxes do use more powerful servers to handle 1+ Gigabit Ethernet connections, iSCSI or Fibre Channel, multiple PCI-X buses or multiple 4-8x PCI Express drops, and large amounts of RAM for caching and such. For instance, the latest corporate NAS boxes from Snap/Adaptec use Opteron processors.

    I've run a small workgroup file server off of a Pentium Pro 200/256K with 256MB of RAM and several 9GB SCSI drives in RAID-5, and the bottleneck was definitely the two 100Mbps Ethernet connections. Of course, YMMV.
  • by sholden ( 12227 ) on Tuesday May 30, 2006 @02:03PM (#15429678) Homepage
    There are a bunch of consumer-level devices designed to have a USB hard drive plugged into them and export SMB shares from it. They are all around $80 or so. I have this one: http://www1.linksys.com/products/product.asp?prid=640 [linksys.com] but there are a bunch from other companies.

    It runs Linux out of the box, but I've flashed mine to run a full Debian system; the main drawback is that it has only 32MB of RAM. But it's attached to 3 USB drives in a software RAID, and a CD storage device, and a thumb drive (for the main system - so the disks don't get hit by every cron job). Plus plugging my digital camera into it downloads all the photos into dated directories on the 'photos' share. It also serves some web pages, mainly a CGI interface to eject disks from the CD storage device.

    Works well for me, and it's a reasonably cheap and physically small (and very underpowered, CPU/memory-wise) Linux machine with 2 USB ports and a network port.
  • Not necessarily.... (Score:4, Informative)

    by PainBreak ( 794152 ) on Tuesday May 30, 2006 @02:05PM (#15429723)
    I can't comment on FreeNAS, because I have never used it, but Quantum Snap NAS devices (which were later rebranded as Dell PowerVault NAS devices) handle decent loads (100+ users at a time), and utilize a proprietary *nix OS with 32MB of onboard RAM and a MASSIVE Pentium 233 MMX. It's also doing software RAID. I'd say "any old box" is probably a good fit.
  • by SuperBanana ( 662181 ) on Tuesday May 30, 2006 @02:11PM (#15429784)
    The software, which is based on FreeBSD, Samba, and PHP, includes an operating system that supports various software RAID models and a Web user interface. The server supports access from Windows machines, Apple Macs

    Look. Just because MacOS X supports SMB does not mean that SMB is an acceptable solution for file-serving to MacOS X clients.

    • SMB is absolutely glacial at file metadata/folder retrieval compared to Appleshare. Do the following test: back up a large volume via SMB using Retrospect or a similar tool on the Mac. Then repeat using Appleshare. Using SMB, the file/folder scan will progressively slow down and take hours to finish.
    • SMB does not support the character set or file-name lengths Macs REQUIRE. Yes, I said, REQUIRE. You'll discover this when you go to make an emergency backup of a mac to a SMB share and get errors about filenames that are too long, or have characters that aren't valid. A lot of applications contain files in their internal structure that violate SMB naming restrictions.
    • When Samba runs across a file that it can't display the name for...IT IGNORES IT!
    • Samba requires a lot of tweaking to get it to perform decently, and despite the usual recommended config changes, I've never been able to get Samba to perform as well as a "stock" Appleshare client.

    Netatalk has some of its own crankiness (and if you run Debian/Ubuntu, you need to rebuild the Debian package with SSL support or passwords are transmitted in the clear, thanks to the OpenSSL/GNU idiocy), but it doesn't have nearly the basic functionality problems Samba does for Macs.

    Sidenote: looks like they "borrowed" the complete user interface from m0n0wall...and it looks like they MIGHT use netatalk...googling turned up some hints that netatalk might be built-in.

  • It is called "underclocking"...

    If you have an "overclocking" mobo, you can probably quite easily underclock it as well. If, on the other hand, your mobo says "Dell" on it, then you probably don't have access to the BIOS screens necessary to do that. You can find Windows software that might be able to do the job (depending upon chipset), but who runs Windows for a NAS server?
    But, with that being said, modern hardware is better. Taking an Athlon 64 and cranking the clock speed down by a factor of 10 and dropping the core voltage is likely to be a lot more efficient than taking an old 400MHz P-2 and reducing the clock speed by 1/2. So, you throw more expensive hardware at the problem in order to consume less power.

    This is just an educated guess, though. I am not an expert.
  • Re:OpenFiler? (Score:3, Informative)

    by un1xl0ser ( 575642 ) on Tuesday May 30, 2006 @02:19PM (#15429890)
    OpenFiler is based on CentOS 3. It does NOT fit in a small footprint. The point of FreeNAS wasn't to have a different installer and a web interface to polish it up, it was for a small footprint.
  • naslite (Score:2, Informative)

    by coconutstudio ( 446679 ) on Tuesday May 30, 2006 @02:27PM (#15429989) Homepage
    Naslite [serverelements.com] (free version) worked great on my salvaged P-100 32MB system, running quietly and headless with nothing but a floppy drive and a 300GB HD. Luckily, it recognized the large HD (since Linux etc. bypasses the BIOS) and I didn't need an IDE card. Performance was acceptable (good but not great) for a small base of users, but I wouldn't want to stick a RAID in it or have more than 5 nodes. The total system consumed 25-30 watts (a little high compared to the NSLU). FreeNAS looked good except for its higher RAM requirement (96MB), which I didn't have.
  • by Jim Buzbee ( 517 ) on Tuesday May 30, 2006 @02:36PM (#15430076) Homepage
    Plenty of places do sell them. I don't know about Wal-Mart, but they are available. See:
    This one [buffalotech.com] for an example. I should know, I've written reviews for a dozen or so of these things...
  • by SuperBanana ( 662181 ) on Tuesday May 30, 2006 @03:00PM (#15430301)
    If you need to do this, setup a sparse disk image on the SMB share and mount it. Copy files to the disk image. Slow but flawless.

    ...and useless because Samba has a maximum filesize of just 2GB, whereas Appleshare 3.1 supports a maximum [volume and file] size of 8 TB.

  • by pehowell ( 442247 ) on Tuesday May 30, 2006 @03:25PM (#15430506) Homepage
    It's actually the Samba client that limits you to 2GB or less. Use CIFS to mount the Samba volume, if you have files over 2GB in size.
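
    A hedged sketch of that workaround on a Linux client with the CIFS kernel module; //nas/media and /mnt/media are placeholder names, and the mount itself needs root, so the command is shown rather than run:

    ```shell
    # Mounting a Samba share via the CIFS client rather than the old smbfs
    # client avoids the 2GB per-file limit described above.
    SHARE=//nas/media        # placeholder server/share
    MOUNTPOINT=/mnt/media    # placeholder mount point
    CMD="mount -t cifs $SHARE $MOUNTPOINT -o username=guest"
    # Run as root on a real system; here we just display it:
    echo "$CMD"
    ```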
  • by twitter ( 104583 ) on Tuesday May 30, 2006 @04:05PM (#15430817) Homepage Journal
    I can set up Samba, etc. on just about any box. What stumps me is setting up OpenAFS.

    Knoppix and OpenAFS [kom.aau.dk].

    Tell me how well it works.

  • by Anonymous Coward on Tuesday May 30, 2006 @04:33PM (#15430983)
    Yea - this company sells a variety of NAS and SAN solutions but also sells hardware. They are Authorized NetApp Resellers [berkcom.com]. Disk drives, filers, chassis, servers, computer hardware to the max with most brands like EMC, NetApp (of course), Cisco, Seagate, etc. Anyway, these guys offer refurbished hardware as well as new, so you should be able to get a good price.
  • by lostatredrock ( 972881 ) on Tuesday May 30, 2006 @05:09PM (#15431191)
    Unless you ever want to use more than 4 drives, in which case NASLite can't help you at all. I have used NASLite, OpenFiler, FreeNAS, as well as a full-blown Fedora install at various times for file sharing. For me, in the end, FreeNAS was the way to go. NASLite was nice, but I had more than 4 drives I wanted to put in, so that one was a no-go, as it does not support PCI controller cards. OpenFiler was designed to be a much more robust and wide-ranging installation than what I needed. It is based on a full distro, and it shows in the number of features, but also in the complexity. The real killing point for me there was the lack of any built-in user support. Any authentication had to go through an authentication server; since it was a full distro I could have set this up on the same box... but I am running this on an internal network and security was not a big thing. FreeNAS, on the other hand, gave me everything I needed: support for as many drives as I wanted, basic authentication, and a web interface that does literally anything I want it to. Since install it has been up without incident, with the exception of power outages (a UPS a friend of mine got me at a discount should help with this) and upgrades.
  • What most people forget about these kinds of systems is that they have fairly hefty power consumption. Until the past year or so, desktop manufacturers placed very little emphasis on truly minimizing power consumption. They do manage to hold it within reason, but often that's not enough.

    Dedicated storage systems are often designed in such a way so as to minimize the amount of power they consume.

    Who told you that? Maybe for the little tiny junior-grade ones that you can buy from Linksys and whatnot... but if you have more than a couple of hard drives, odds are that the product you're dealing with is made up of a bunch of commodity PC components in a custom rackmount case. This is no less true of a 1U box than a 4U one.

    For instance I've got a 1U Maxtor server that came with maybe a terabyte (it's got four drive bays) and it's a 1U Socket 370 Celeron system.

    Nothing except the cutesy little baby NAS devices is designed to be especially low power - and if you're using software RAID, you really want a grip of CPU power, especially if you're doing RAID5.

  • Re:NAS (Score:3, Informative)

    by drsmithy ( 35869 ) <drsmithy@gmai[ ]om ['l.c' in gap]> on Tuesday May 30, 2006 @11:55PM (#15433007)
    How big of an array, though? I have 4 250GB drives in a RAID 5 config. It seriously took 36+ hours to rebuild on a Pentium 4 2.4GHz with 1GB of RAM.

    Firstly, make sure your rebuild isn't being throttled. cat /proc/sys/dev/raid/speed_limit_max will print out the maximum speed (in kb/sec) the array will rebuild at. Use something like echo 100000 > /proc/sys/dev/raid/speed_limit_max to set it suitably high so that the throttling won't occur.
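
    Those two commands as a small script, with a guard so it degrades gracefully on a machine without md RAID; the 100000 KB/s figure is the one suggested above:

    ```shell
    # Inspect (and optionally raise) the kernel's md rebuild speed ceiling.
    CTL=/proc/sys/dev/raid/speed_limit_max
    if [ -r "$CTL" ]; then
        echo "current ceiling: $(cat "$CTL") KB/s"
        # Needs root; uncomment to actually lift the throttle:
        # echo 100000 > "$CTL"
        STATUS=checked
    else
        STATUS=no-md-raid
    fi
    echo "$STATUS"
    ```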

    Secondly, the limiting factor in your rebuild speed will be (in decreasing order of likelihood) bus bandwidth, individual disk performance, disk controller or driver bugs/quirks/limitations, CPU speed. Although if you have PCIe or PCI-X disk controllers (unlikely) you may hit limits in individual disk performance before you run out of bus bandwidth. With a 2.4GHz P4 I can pretty confidently say your bottleneck will never be the CPU.

    In your case, your machine almost certainly has all the drives hanging off a single 33MHz, 32-bit PCI bus. So the absolute upper limit on your array's performance is going to be around 120M/s, and that's assuming the machine isn't doing anything else except rebuilding the array. You're only ever likely to see this sort of performance off the array from long, sequential reads, however (dd if=/dev/md0 of=/dev/null type of thing).

    (This is a rough overview). With 4 drives, you have roughly 30M/s per drive maximum, so your best-case RAID rebuild speed will be about 30M/s - this is assuming your drives can sustain 30M/s for both reads and writes across their entire surface. Rebuilding a RAID5 array involves reading data and parity from N-1 drives and writing it to the Nth drive - in other words you have to completely reconstruct a single disk. At 30M/s, it should take about 250000/30/60 ~= 138 minutes to copy the 250G necessary to reconstruct that disk.
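
    That arithmetic as a quick shell check, using the figures from the comment (250GB to reconstruct at a sustained 30M/s):

    ```shell
    # Best-case RAID5 rebuild time: write one full 250GB disk at 30MB/s.
    GB=250
    MB_PER_SEC=30
    # 250 * 1000 MB / 30 MB/s = ~8333 s; / 60 = ~138 minutes
    MINUTES=$(( GB * 1000 / MB_PER_SEC / 60 ))
    echo "~$MINUTES minutes"   # prints ~138 minutes
    ```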

    (Tech-savvy readers should realise at this point why hardware RAID is theoretically faster than software RAID - particularly for average PCs and/or large numbers of drives - and why it has nothing to do with the "overhead" of calculating RAID5/6 parity.)

    Note that 138 minutes is a best-case scenario, so it taking 36 hours could be explained by you using the system at the same time, automated system maintenance occurring, less-than-stellar drivers, etc, etc. With such a massive difference between "should be" and "was", however, I'd be examining the individual components pretty closely to see where the bottleneck is. What's the maximum performance you can get from each individual drive (use hdparm and dd)? How about from the entire array?
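
    A sketch of that per-component benchmarking. The dd read test here runs against a scratch file so it's harmless to try anywhere; the commented lines show the real-device versions (the device names are examples, not from the comment):

    ```shell
    # Sequential-read benchmark, stand-in version using a temp file.
    TMP=$(mktemp)
    dd if=/dev/zero of="$TMP" bs=1M count=8 2>/dev/null
    dd if="$TMP" of=/dev/null bs=1M 2>/dev/null && RESULT=ok
    rm -f "$TMP"
    # On real hardware, time each drive and then the whole array:
    #   hdparm -tT /dev/sda                            # per-drive read timing
    #   dd if=/dev/md0 of=/dev/null bs=1M count=1024   # array sequential read
    echo "$RESULT"
    ```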

  • Re:::shakes head:: (Score:3, Informative)

    by drsmithy ( 35869 ) <drsmithy@gmai[ ]om ['l.c' in gap]> on Wednesday May 31, 2006 @12:03AM (#15433025)
    [Software RAID is documented. If it fails, you can plug the drives into another system that understands the RAID format and get at the data.]

    I haven't dealt with software RAID enough to know how accurate that statement is.

    Speaking as someone who has moved a single array between about 5 machines over its lifetime, including from kernel 2.4 machines to kernel 2.6 machines, I'd say that if you've got a Linux software RAID array, it'll probably work on any Linux machine you can find to plug it into (assuming that machine has appropriate drivers for the disk controllers and support for the RAID level).
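
    Reassembling the array on the new machine is usually a one-liner; a sketch, shown rather than executed since it needs the actual disks and root:

    ```shell
    # mdadm identifies member disks by the UUID in their superblocks, so the
    # device names can differ on the new machine. --scan examines all disks
    # and reassembles any arrays it finds.
    CMD="mdadm --assemble --scan"
    echo "$CMD"
    ```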

  • by Technician ( 215283 ) on Wednesday May 31, 2006 @12:27AM (#15433081)
    Frankly the only reason I think that network hard drives are so popular is that people are terrified of cracking open their PCs to install a hard drive, and they don't really understand the difference between the various external types.

    You missed that one. I have a foster home. I load the MP3s, drivers, and photos on a NAS instead of on some local drive. I also put the My Documents on the NAS. The shares are password protected and I can use any machine handy to access it. Sometimes I use a laptop. Sometimes I use a desktop with a CD burner for a change of tunes for the car. All the special drivers for the various machines are on the NAS. It helps in a rebuild as I don't have to find all the driver disks for everything. Sometimes I'm out in the garage and want some tunes other than what the local DJ wants to dribble all over. A laptop with wireless brings the tunes out. A NAS makes a lot of sense if you are not a bachelor with a single PC. It is a whole lot cheaper than buying a bunch of 160 Gig drives for all the PCs. I can keep the machines running on the 15-60 Gig drives they already have.

    My NAS draws 15 watts and the hard drive powers down after 20 minutes of inactivity. It has no fan. Why would I want to leave a PC on 24x7 to share a few files? The NAS is also encrypted. If it is stolen, the removal of power unmounts the encrypted partition. It can only be remounted by providing the encryption key through a password-protected web interface. It is much safer than data on a local drive. The NAS box is blocked from the Web by my NAT router. It can't be directly attacked from the Web. An attacker would have to compromise one of the other machines first. It adds a layer of security to the data.
