I wrote about the latest storage server I built back in 2008, and a lot of my thoughts from that time are written up at http://www.tummy.com/Community/Articles/ultimatestorage2008/
However, to answer a few of your questions...
External disk enclosures? Avoid them like the plague. My initial experience with the 5 bay eSATA enclosures was pretty good -- sometimes the system wouldn't pick up the external drives, but usually I could get it to find them after some tweaking, rebooting, and the like. I ended up getting 3 of them (AMS DS-2350S units, which at the time were well reviewed). I have since pulled all 3 out of active use and have them just sitting around. I don't know exactly how they failed, but after replacing some enclosures with others, I eventually moved the drives into internal SATA enclosures, which have been very reliable (I used the Supermicro CSE-M35T-1).
Also note that eSATA connectors don't really hold on that well. If anything, they're less robust than internal SATA connectors, despite being outside the case where they can get banged around.
If I were to do it over again, I'd probably stick with the case I started with (5 internal 3.5" bays and 3 front 5.25" bays) and put the Supermicro enclosure in there. I'd also probably go with fewer, bigger drives rather than the many smaller drives I used previously (even though at the time those drives were free, left over from another project).
As far as running it in the garage: don't even think about it, unless your garage is not where you store your cars. I have some computers that I've run in the garage for the last 9 months, and they are filthy: I've had a lot of fan failures, plus dust, insects, and random other crap. I put mine in our furnace room instead, which has enough extra space.
As far as using a server case? Hard to see the payback there unless you have a rack cabinet. Most server cases are HUGE, heavy, and expensive. A 3U case with 12 drive bays likely costs $500, and on top of that you usually have to deal with special form-factor power supplies; expect to spend another $200 on one of those. I wouldn't do it, and I have a 3U 12-bay Chenbro case just sitting at my office that I could repurpose.
As far as the filesystem, I selected ZFS (via zfs-fuse under Linux) and I've been VERY happy with it. The primary benefit is that it checksums *ALL* data and can recover from some types of corruption, or at least alert you about corruption it can't correct. If you are storing photos or home videos that you may not access very often, that's good peace of mind to have: I know that in 10 years I won't go to look at some photographs I've taken and find they were silently corrupted. Of course, you could get similar benefits by saving off a database of file checksums and alerting when they no longer verify.

Really the only downside of ZFS that I've seen is that a RAID rebuild is a seek-heavy task rather than a streaming one. I have an 8x2TB drive array that I'm currently rebuilding (drive failure, at work), and it's 33% done after 31 hours, on a system that is otherwise idle. A normal RAID-5 array would have rebuilt that in, what, 10 hours?
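On ZFS, the way to actually exercise those checksums on rarely-read data is a periodic scrub. A minimal sketch (the pool name "tank" here is just a placeholder for whatever you named yours):

    # Read every block in the pool and verify it against its checksum;
    # ZFS repairs what it can from redundancy and flags what it can't.
    zpool scrub tank

    # Check scrub progress and list any files with unrecoverable errors.
    zpool status -v tank

Kick that off from cron every week or two and you get the "alert about corruption" behavior without having to remember to do it.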
If you care about the data going into it, make sure you checksum and verify the files regularly.
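If you're not on ZFS (or want a second opinion on top of it), a plain checksum manifest gets you most of the way there. A rough sketch, with made-up paths:

    # Build a manifest of checksums once, after the data is written.
    find /data/photos -type f -print0 | xargs -0 sha256sum > /root/photos.sha256

    # Later, re-verify everything and show only the failures.
    sha256sum -c /root/photos.sha256 | grep -v ': OK$'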
The 8 port PCI SATA card I got is fantastic; it's a Supermicro with the Marvell chipset and is very well supported (even supported by Nexenta).
Finally, all this data is encrypted, so if someone were to burgle us I only have to worry about them getting the hardware; I don't have to worry about them having scanned bills, other documents, and other personal and private data. This is why I'm running ZFS under Linux: it gave me encryption plus ZFS (a combination not otherwise available in 2008), on an OS I'm very familiar with.
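Since zfs-fuse itself had no encryption, a common way to get this layering is dm-crypt/LUKS underneath each pool member; a sketch, with example device names (repeat the cryptsetup steps per disk):

    # Encrypt a member disk, then open it as a device-mapper mapping.
    cryptsetup luksFormat /dev/sdb
    cryptsetup luksOpen /dev/sdb crypt-sdb

    # Build the ZFS pool on the decrypted mappings, not the raw disks.
    zpool create tank raidz /dev/mapper/crypt-sdb /dev/mapper/crypt-sdc /dev/mapper/crypt-sdd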
As far as the OS, I am personally running CentOS on my system, because that means I can install it, set it up, and then forget about it for quite a few years, except for regularly running "yum update". Debian should be fine too, but you will have to track upstream changes more frequently.
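If you want even the "yum update" part to be hands-off, a cron entry is enough; a sketch, with an arbitrary schedule:

    # /etc/cron.d/yum-update: apply updates automatically, early Sunday morning.
    0 4 * * 0 root /usr/bin/yum -y update

(CentOS also has a yum-cron package that does roughly this with a bit more polish.)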