Where I work, we are running EMC's Isilon platform. We have ~4PB of data replicated between two data centers.
The platform supports the traditional CIFS/SMB and NFS for client connectivity.
It also has Hadoop support (HDFS). The great thing about the HDFS support is that you do not have to spin up a separate file system for it: the same files your clients access via CIFS or NFS can be accessed via HDFS. Isilon was built with Hadoop in mind, and the Isilon nodes themselves serve the HDFS layer (answering NameNode and DataNode requests), so your Hadoop compute cluster talks straight to the storage.
The OneFS file system presents a single file system of practically unlimited size. There are some interesting tuning options that can be leveraged depending on your data type and IO patterns. If you need to get REALLY crazy, the system supports tiering data based on a whole slew of factors (last-accessed date, file date, file size... basically any file metadata attribute you can think of can be used for tiering purposes).
This probably does not matter for you, but the system also supports AES-256 at-rest encryption. We deal with a lot of financial and other highly sensitive data for clients that demand at-rest encryption, so that was a must-have for us.
The only downside is that since it is from EMC, you can plan on paying through the nose for it. (But never pay full retail for EMC, ever. Threaten them with NetApp if you have to. ;) )
We still leverage a SpectraLogic tape library to archive data off the system. With a moderately specced NetBackup setup we get a consistent ~35,000 kB/s restore rate off a single drive, which lets us provide reasonable RTOs back to the business.
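To put that restore rate in RTO terms, here is a back-of-the-envelope sketch. It assumes the quoted figure is kilobytes per second sustained on one drive; the 10 TB dataset size is a hypothetical example, not something from my environment.

```python
# Rough RTO estimate from a sustained single-drive restore rate.
# Assumption: the ~35,000 kB/s figure means kilobytes (1024 bytes) per second.

KB = 1024  # bytes per kilobyte

def restore_hours(dataset_bytes: float, rate_kb_per_s: float = 35_000) -> float:
    """Hours needed to restore dataset_bytes at a sustained rate_kb_per_s."""
    seconds = dataset_bytes / (rate_kb_per_s * KB)
    return seconds / 3600

# A hypothetical 10 TB restore at ~35 MB/s:
print(round(restore_hours(10 * 10**12), 1))  # → 77.5 (hours, i.e. just over 3 days)
```

That is per drive; restores striped across multiple drives scale the rate accordingly, which is worth factoring into any RTO you promise the business.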
On the subject of backup, another great thing about Isilon is that you can dedicate certain nodes to specific tasks. In the Isilon architecture, the NL (nearline) nodes are the slowest in the lineup. We leverage those for backup to keep the backup network IO off of the faster X- and S-nodes.