If you run some form of Linux, or BSD Unix, or Solaris on the hardware, you can readily use ZFS, which can outperform hardware RAID solutions and supports a variety of data protection schemes (mirrors, RAID-Z, end-to-end checksumming). You can add SSDs to an array of spinning rust to act as a read cache (L2ARC) and a log device for synchronous writes (SLOG), and ZFS will also cache aggressively in RAM (the ARC) as long as RAM is available. It also lets you do things like replace all the drives in the main array with larger ones... once the last one is replaced, the pool's size will expand. (This can be set to wait until you issue a command telling it to expand, or to happen automatically when the last disk is upgraded... either way, the pool's size won't change until every disk in it has been replaced with a larger one.)
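Just to give a flavor of how little fuss that is, here's a rough sketch of the commands involved; the pool name "tank" and the /dev names are made-up placeholders for the example:

    # let the pool grow automatically once every disk in the vdev has been upgraded
    zpool set autoexpand=on tank

    # add an SSD as a read cache (L2ARC) and another as a log device (SLOG)
    zpool add tank cache /dev/sdc
    zpool add tank log /dev/sdd

    # swap old disks for bigger ones, one at a time, letting each resilver finish
    zpool replace tank /dev/sda /dev/sde

(If you leave autoexpand off, you'd kick off the growth yourself later instead of it happening on the last replace.)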
It also allows for extremely easy addition of new filesystems, because ZFS manages all of that within the pool, and every filesystem has access to all the space in the pool unless you limit it with a quota. You can also create block devices out of the pool (virtual disks for VMs, etc.) if you want to, and any filesystem can be snapshotted and sent off somewhere else to be imported into another pool later, including incremental snapshots. So if you want the data replicated somewhere, you can periodically send a snapshot from the "source" to the "destination" machine... keep a few snapshots "back" on each machine, and you only need to send the delta between locations. (Great if you REALLY want your data to be multi-site, but probably WAY overkill for what the original poster wants to do.)
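Again, just a sketch to show the shape of it; "tank", "backup/media" and "backuphost" are invented names, and the incremental send assumes the first full snapshot was already sent over once:

    # new filesystem; it shares the pool's free space unless you cap it
    zfs create tank/media
    zfs set quota=500G tank/media

    # carve a 50G block device (zvol) out of the pool for a VM
    zfs create -V 50G tank/vm-disk0

    # snapshot, then ship only the delta between the two snapshots to the other box
    zfs snapshot tank/media@monday
    zfs snapshot tank/media@tuesday
    zfs send -i tank/media@monday tank/media@tuesday | ssh backuphost zfs recv backup/media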
If you can throw memory at it, I highly recommend ZFS, for performance, data protection, and ease of use.
(And... it's easier to pronounce if you use the pronunciation used everywhere but the U.S. for the letter Z: "Zed-eff-ess" is easier to say than "Zee-eff-ess"... go ahead, try it.) ;)