Most companies choose (b), as their core business is not designing their own server equipment and they don't have the resources to do (a).
This is somewhat of a false dilemma. Very few companies in (a) are willing to invest the resources in a truly thorough engineering job, designing their own customized applications and servers for basic business needs, and there are plenty of companies in (b) that do not have the resources to design their own applications, let alone server equipment.
There are nevertheless numerous companies in (b) whose IT management and staff would, at various times, like to treat random projects as if the company were in (a): Dell didn't spec the equipment with SATA drives, but now that our X application has new servers, we'll take this old storage chassis and toss some consumer drives into it.
The key message is that THAT will probably be a lot less reliable than the storage chassis outfitted with the disk drives the vendor qualified. What's more, even if the storage chassis doesn't do a firmware check on the drives to reject third-party drives and protect the customer from themselves, the setup is still likely to be completely unsupported by Dell when it eventually fails catastrophically.
And yet some of those companies have published individual drive data showing the exact reliability.
Yes, and they have a specific measurement of reliability and performance that applies to their environment, but not to mine or to that of most enterprises. A hard drive has a reliability issue if it causes the storage system it is used in to fail, even if the drive itself is performing perfectly. Component failure is not the only reliability issue; bugs and unexpected behaviors are too.
In their environment, Backblaze would be concerned only if a hard drive fails completely: it stops reading or writing data with integrity while in service, and a read/write test of the drive surface fails. That is how they define hard drive reliability; they don't consider a drive to have failed as long as the entire disk can still be read or written.
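To make the contrast concrete, here is a minimal sketch of that looser, surface-read style of test, assuming a Linux host and a raw block device path (both are my assumptions for illustration, not anything Backblaze has published): the drive only counts as failed if a full sequential read of the device hits an I/O error.

```python
#!/usr/bin/env python3
# Minimal sketch of a "surface read" check: the drive is only treated as failed
# if some region of the raw device cannot be read at all. Device path and chunk
# size are illustrative assumptions; run read-only, as root.
import sys

def surface_read_ok(dev: str, chunk: int = 4 * 1024 * 1024) -> bool:
    """Return False at the first unreadable region; True if the whole device reads."""
    with open(dev, "rb", buffering=0) as disk:
        while True:
            try:
                block = disk.read(chunk)
            except OSError as err:
                print(f"read error near offset {disk.tell()}: {err}", file=sys.stderr)
                return False
            if not block:          # EOF: reached the end of the device
                return True

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb"   # assumed device path
    print("readable end to end" if surface_read_ok(dev) else "surface read failed")
```

By that yardstick, a drive that resets itself, drops out of an array, or responds slowly still "passes", which is exactly the gap being described.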
In my environment I am concerned if a hard drive does anything, or fails to do anything, that causes it to be ejected from the RAID subsystem, falls below a performance threshold, accumulates bit rot, or trips the firmware-based health monitoring on the drive or in the storage chassis.
If a hard drive power-cycles or resets itself unexpectedly just once and therefore shows up as "Ejected" or "Failed", then I consider the drive unreliable, even though it would not meet Backblaze's or Google's definition of an unreliable or failed component; they would simply keep using it, as long as the drive continued to pass their tests.
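For contrast, here is a minimal sketch of the stricter view, assuming a Linux host with mdraid and smartmontools installed; the device list, and the choice to flag any ejected array member or SMART health failure as "unreliable", are my illustrative assumptions about how one might encode the criteria above, not a vendor-supported tool. Performance-threshold and bit-rot checks would need more machinery and are omitted.

```python
#!/usr/bin/env python3
# Sketch: flag a drive as unreliable if its own SMART health check fails or if
# any mdraid array reports a missing/ejected member. The device list is an
# illustrative assumption.
import re
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]          # assumed RAID member disks

def smart_health_ok(dev: str) -> bool:
    """smartctl -H prints PASSED when the drive's overall health self-assessment passes."""
    out = subprocess.run(["smartctl", "-H", dev], capture_output=True, text=True)
    return "PASSED" in out.stdout

def md_has_ejected_member() -> bool:
    """True if /proc/mdstat shows a degraded array, e.g. [U_] instead of [UU]."""
    with open("/proc/mdstat") as f:
        return bool(re.search(r"\[[U_]*_[U_]*\]", f.read()))

for dev in DEVICES:
    if not smart_health_ok(dev):
        print(f"{dev}: failed SMART health check -> unreliable by this definition")

if md_has_ejected_member():
    print("an array member has dropped out -> that drive is unreliable by this definition")
```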