Disclaimer: I work for a storage vendor. I'm also a long-time Slashdot reader, though, so this isn't meant as a sales pitch.
Half a petabyte is not really a lot of data in today's world. I talk to people every day who are trying to find ways to manage many PBs (into the hundreds) and are having trouble doing it with traditional storage. The trend, started by the big Internet companies, is to get rid of the Fibre Channel SANs and instead solve the storage problem with standard x86 servers: Linux acts as the abstraction layer over the hardware, and software running on top of it pools many servers together into one storage system.
One of the challenges is stretching a single namespace that big without hitting filesystem limitations like maximum inode counts. This is generally accomplished with some type of key/value store (object storage) under the hood: a single flat namespace where objects are addressed by key, with no practical size barrier.
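To make that concrete, here's a minimal sketch of what "flat namespace, addressed by key" looks like from the client side. Most of these systems expose an S3-compatible API, so this uses the standard boto3 library against a made-up endpoint; the URL, bucket name, and credentials are placeholders, not anything vendor-specific.

    import boto3

    # Point the standard AWS SDK at any S3-compatible object store.
    # The endpoint and credentials below are placeholders.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://objectstore.example.com:8080",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # No directories, no inodes: "backups/2014/db.dump" is just an
    # opaque key in one flat namespace, however many objects you store.
    s3.put_object(Bucket="archive", Key="backups/2014/db.dump", Body=b"...")

    obj = s3.get_object(Bucket="archive", Key="backups/2014/db.dump")
    print(obj["Body"].read())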
Some options available today, if you want to go the open source route, are Swift from OpenStack and Ceph from Red Hat. These can be good choices if you have the engineering staff on hand to piece it all together and the talent to keep it running (a quick Swift example is below). GPFS is also making a comeback in this area, and there are a ton of startups looking at this space now.
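If you go the Swift route, the client side looks similarly simple. This is a rough sketch using the python-swiftclient library; the auth URL, account, and key are placeholders for whatever your own deployment uses.

    from swiftclient.client import Connection

    # Auth details are placeholders for your own Swift deployment.
    conn = Connection(
        authurl="http://swift.example.com:8080/auth/v1.0",
        user="account:user",
        key="secret",
    )

    # Containers hold objects; under the hood it's the same flat
    # key/value model, so no inode limits apply.
    conn.put_container("archive")
    conn.put_object("archive", "backups/2014/db.dump", contents=b"...")

    headers, body = conn.get_object("archive", "backups/2014/db.dump")
    print(body)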
My company has a commercial solution for this stuff. It's pretty cool: a Linux app that runs on the servers of your choice. I'll spare you the sales pitch; if you want, you can try it for free here:
http://scality.com/trial
Whatever you choose, best of luck to you!