Hotplug is expensive. Cases are expensive. Making room for human access is expensive.
Design for nothing but airflow and drive density, keeping every piece as cheap as absolutely possible. Gigabit instead of 10G.
At exabyte scale, why do you care about losing 4 TB? Using Supermicro boxes with 4 TB drives, you can have over 6 petabytes of raw storage in a 72U rack/cabinet.
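The rack math is easy to sanity-check. A minimal sketch, assuming a hypothetical 4U chassis with 90 drive bays (actual bay counts vary by Supermicro model; these numbers are illustrative, not from the original post):

```python
# Back-of-envelope raw capacity for one rack.
# Assumed figures: 4U chassis, 90 bays each -- illustrative only.
RACK_UNITS = 72
CHASSIS_UNITS = 4
BAYS_PER_CHASSIS = 90
DRIVE_TB = 4

chassis = RACK_UNITS // CHASSIS_UNITS            # 18 chassis per rack
raw_tb = chassis * BAYS_PER_CHASSIS * DRIVE_TB   # 6480 TB raw
print(f"{chassis} chassis, {raw_tb} TB raw ({raw_tb / 1000:.2f} PB)")
```

With those assumptions you land at roughly 6.5 PB raw, which is where the "over 6 petabytes" figure comes from.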
Metadata servers keep track of where the copies of blocks are.
Put copies of the blocks on completely disparate systems. If there is heavy read usage of a block, make more copies.
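The metadata-server idea above can be sketched in a few lines. This is a toy model under stated assumptions, not any real system's API; the class and method names (`MetadataServer`, `put`, `locate`) are hypothetical:

```python
import random
from collections import defaultdict

class MetadataServer:
    """Toy sketch: track which storage nodes hold copies of each block,
    and make extra copies of blocks that see heavy read traffic.
    All names here are hypothetical, invented for illustration."""

    def __init__(self, nodes, base_copies=3, hot_threshold=100):
        self.nodes = list(nodes)
        self.base_copies = base_copies
        self.hot_threshold = hot_threshold
        self.locations = {}             # block_id -> set of node names
        self.reads = defaultdict(int)   # block_id -> read counter

    def put(self, block_id):
        # Place the initial copies on disparate nodes.
        self.locations[block_id] = set(
            random.sample(self.nodes, self.base_copies))

    def locate(self, block_id):
        # Count the read; if the block is hot, spread another copy.
        self.reads[block_id] += 1
        if self.reads[block_id] > self.hot_threshold:
            spare = set(self.nodes) - self.locations[block_id]
            if spare:
                self.locations[block_id].add(spare.pop())
        return self.locations[block_id]
```

Every read goes through the metadata layer, so the same lookup that answers "where are the copies?" is also the natural place to notice a hot block and widen its replication.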
Head servers scale and have some beef to them. They are all about getting info from the commodity stuff and packaging it for (subscribers, clients, whatever).
If a drive dies or has issues, mark it bad and leave it at that. Ignore it.
If a server dies, mark it as bad. Leave it.
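The "mark it bad and leave it" policy only works if the copy count recovers on its own. A minimal sketch of that repair loop, with hypothetical names (nothing here comes from a real product):

```python
class ClusterState:
    """Toy sketch: a dead node is written off permanently -- no ticket,
    no drive swap. Its blocks are re-copied onto live nodes from the
    surviving replicas. Names are hypothetical, for illustration only."""

    def __init__(self, nodes):
        self.alive = set(nodes)
        self.dead = set()

    def mark_bad(self, node, locations, min_copies=3):
        # Retire the node forever.
        self.alive.discard(node)
        self.dead.add(node)
        # Restore the replica count for every block it held.
        for block_id, holders in locations.items():
            holders.discard(node)
            while len(holders) < min_copies:
                candidates = self.alive - holders
                if not candidates:
                    break
                holders.add(candidates.pop())
```

The hardware is never touched again; the software just routes copies around the corpse until the forklift shows up.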
In 4 years you are forklifting the equipment and replacing it with new storage.
There is no "RAID", other than there are multiple copies of blocks throughout the system.
I met with a company in the Bay Area doing this in 2000 (I don't remember which one). It was dealing with filesystems rather than blocks, but whether it's NFS, VMDKs, VHDs, etc., who cares. I don't see anything new here at all.