Facebook uses MySQL for their main data. I can pretty well guarantee it's bigger than any Oracle install: it handles 2.5 billion shares and 2.7 billion likes per day, and back in 2010, with half the current user base and far fewer mobile users constantly online, they were already seeing peaks of 13M queries per second, reading a peak of 450M rows per second and updating a peak of 3.5M rows per second. Reports from early December 2011 show 60M queries per second, and even then the number was dated. Oracle wouldn't scale to that size if only because of license costs (CPU licenses for how many thousands of cores?), and Oracle has way too many bugs that they're too lazy or incompetent to fix. Just for the tech support, Oracle might have to hire more people than Facebook employs for MySQL. At Facebook scale, bugs show up quickly, compounded by Facebook's motto: move fast and break things. If Facebook hits a bug or limitation in MySQL, it gets fixed, not documented. Otherwise, complaints show up in the twitterverse.
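As a rough sanity check on those figures (my arithmetic, not Facebook's), here's what the quoted daily totals work out to as average per-second rates, and how the 2010 peak compares to the late-2011 one:

```python
# Back-of-envelope conversion of the daily totals quoted above into average rates.
# Inputs are the cited figures; the per-second averages are derived, not reported.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

shares_per_day = 2.5e9
likes_per_day = 2.7e9

avg_shares_per_sec = shares_per_day / SECONDS_PER_DAY  # ~29K shares/sec average
avg_likes_per_sec = likes_per_day / SECONDS_PER_DAY    # ~31K likes/sec average

# 13M queries/sec peak in 2010 vs. the 60M/sec reported in late 2011
qps_growth = 60e6 / 13e6  # roughly 4.6x in about a year

print(f"{avg_shares_per_sec:,.0f} shares/sec, "
      f"{avg_likes_per_sec:,.0f} likes/sec, "
      f"{qps_growth:.1f}x query-rate growth")
```

The averages are tiny next to the peak query rates, which is the point: the query load is dominated by reads and fan-out, not by the raw write volume of shares and likes.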
Their messaging system runs on HBase, which stores 6+ billion messages a day and handles a peak of 1.6M ops/sec (compression enabled), of which 45% are write ops that average 16 records across multiple column families (all as of 10/2011). That only looks small when you compare it to their MySQL system. It's still faster than Oracle's SPARC SuperCluster (the fastest result on tpc.org's TPC-C benchmark, at about 30 million tpmC), but HBase does it on 3.5", 7200 RPM disks, with probably an order of magnitude (or two) more storage and potentially lower cost.
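Putting those HBase numbers together (again my arithmetic, derived from the figures quoted above): the peak-to-average ratio and the actual record throughput behind the "45% writes averaging 16 records" claim look like this:

```python
# Derived throughput figures for the HBase messaging cluster (as of 10/2011).
SECONDS_PER_DAY = 86_400

messages_per_day = 6e9
avg_ops_per_sec = messages_per_day / SECONDS_PER_DAY  # ~69K ops/sec average

peak_ops_per_sec = 1.6e6
write_fraction = 0.45        # 45% of peak ops are writes
records_per_write = 16       # average records touched per write op

peak_write_ops = peak_ops_per_sec * write_fraction         # ~720K write ops/sec
peak_records_written = peak_write_ops * records_per_write  # ~11.5M records/sec
peak_to_avg = peak_ops_per_sec / avg_ops_per_sec           # ~23x peak over average

print(f"{peak_records_written:,.0f} records/sec written at peak, "
      f"{peak_to_avg:.0f}x peak-to-average ratio")
```

So at peak the cluster is writing on the order of 11.5M records per second, which is why the comparison to a ~30M tpmC TPC-C result isn't as lopsided as "messaging system vs. benchmark champion" might suggest.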
They also archive 500+ TB/day of web logs into a Hadoop cluster and batch process the data. It's quite a system: it uses only high-capacity, 3.5", 7200 RPM drives and treats each whole computer as the unit of redundancy, instead of building redundancy (RAID, dual power supplies, etc.) into the nodes themselves. Oracle hasn't even attempted to compete with Hadoop, and instead offers instructions on using Hadoop and Oracle together.
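What 500 TB/day means as sustained bandwidth is worth spelling out. A quick sketch, with the per-drive throughput being my own assumption (a typical ~100 MB/s sequential write for a 7200 RPM drive, not a figure from the text):

```python
# Back-of-envelope: sustained ingest bandwidth for 500 TB/day of logs.
SECONDS_PER_DAY = 86_400

tb_per_day = 500
bytes_per_day = tb_per_day * 1e12
ingest_bytes_per_sec = bytes_per_day / SECONDS_PER_DAY
ingest_gb_per_sec = ingest_bytes_per_sec / 1e9  # ~5.8 GB/s sustained

# Assumed sequential write rate for one 3.5", 7200 RPM drive (my estimate).
drive_bytes_per_sec = 100e6
min_drives_for_ingest = ingest_bytes_per_sec / drive_bytes_per_sec  # ~58 drives

print(f"{ingest_gb_per_sec:.1f} GB/s sustained, "
      f"at least {min_drives_for_ingest:.0f} drives for raw ingest bandwidth")
```

That ~58-drive floor covers only one copy of the raw ingest; HDFS's default 3x replication and the headroom needed to run batch jobs over the same disks push the real drive count far higher, which is exactly why the cluster spreads redundancy across cheap whole machines rather than RAID within them.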