An enterprise RAID array isn't strictly about redundancy (although it sounds like that was the point Score Whore was trying to make); it's also about performance. Let's say you are trying to build a 100TB SAN. You can do this using the strategy you outlined, by mirroring 3TB drives in RAID 1: 100TB / 3TB = 34 drives, * 2 (RAID 1) = 68 drives. Each spindle on a 7200 RPM SATA drive only delivers about 75 IOPS, so that gives you 5100 IOPS total.
In an enterprise environment, you are probably going to need a lot more than 5100 IOPS from a 100TB SAN. So, let's say you decide to use 300GB 15k SAS drives instead. Those give you about 175 IOPS per spindle. If you use the RAID 6 strategy you outlined (6+2, i.e. 2 parity drives out of every 8, so each set survives 2 failures), which I am fond of myself, that puts you at around 448 disks total: 100TB / 1.8TB usable per 6+2 set ≈ 56 sets * 8 disks per set = 448 disks, and 56 sets * 6 usable drives per set = 336 usable drives * 300GB = 100,800GB. With 448 spindles, 448 * 175 IOPS = 78,400 IOPS. That's a little bit closer to what we're looking for. Throw in hot spares at about 30:1 (15 drives), and you're at 463 drives.
How many SATA drives would it take to match that IOPS figure in a RAID 1 configuration? 78,400 IOPS / 75 IOPS per drive ≈ 1046 drives. Spares at around 30:1 mean another 35 disks, for 1081 total.
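If you want to poke at these numbers yourself, here's a minimal back-of-envelope sketch. The capacities, IOPS-per-spindle figures, and 30:1 spare ratio are the same assumptions used above, not vendor specs:

```python
import math

TARGET_TB = 100    # usable capacity target
SPARE_RATIO = 30   # roughly one hot spare per 30 drives

def raid1_layout(drive_tb, iops_per_spindle):
    """Mirrored pairs: every data drive gets a second copy."""
    data_drives = math.ceil(TARGET_TB / drive_tb)
    total = data_drives * 2
    return total, total * iops_per_spindle

def raid6_layout(drive_tb, iops_per_spindle, data_per_set=6, parity_per_set=2):
    """6+2 RAID 6 sets: 6 data drives + 2 parity drives per set."""
    sets = math.ceil(TARGET_TB / (data_per_set * drive_tb))
    total = sets * (data_per_set + parity_per_set)
    return total, total * iops_per_spindle

# 3 TB 7200 RPM SATA in RAID 1 (~75 IOPS per spindle)
sata_drives, sata_iops = raid1_layout(3.0, 75)    # -> 68 drives, 5100 IOPS

# 300 GB 15k SAS in 6+2 RAID 6 (~175 IOPS per spindle)
sas_drives, sas_iops = raid6_layout(0.3, 175)     # -> 448 drives, 78400 IOPS
sas_total = sas_drives + math.ceil(sas_drives / SPARE_RATIO)   # -> 463 with spares

# SATA spindles needed just to match the SAS array's IOPS
sata_match = math.ceil(sas_iops / 75)                                # -> 1046
sata_match_total = sata_match + math.ceil(sata_match / SPARE_RATIO)  # -> 1081 with spares

print(sata_drives, sata_iops, sas_drives, sas_iops, sas_total, sata_match_total)
```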
Next we factor power into that. With a Google search, I averaged the typical power consumption of 8 different 7200 RPM 3TB SATA drives and got about 8.69 W per drive, or roughly 9391 W for the 1081-drive SATA array. For a 15k 300GB 3.5" SAS drive, the most common Google results came back to the Seagate Cheetah, whose data sheet lists 7.92 W typical, or about 3667 W for the 463-drive SAS array. That means the SATA array would draw roughly 2.5 times the power. More drives and more power mean more cooling (and obviously more space as well).
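And the power comparison, using the same per-drive wattage figures mentioned above (treat them as rough data-sheet typicals, not guarantees):

```python
SATA_WATTS = 8.6875   # averaged typical draw across several 3 TB 7200 RPM SATA models
SAS_WATTS = 7.92      # Seagate Cheetah 15k 300 GB data-sheet typical

sata_power = 1081 * SATA_WATTS   # ~9391 W for the SATA array (including spares)
sas_power = 463 * SAS_WATTS      # ~3667 W for the SAS array (including spares)

print(sata_power, sas_power, sata_power / sas_power)   # ratio is roughly 2.5x
```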
It all depends on what you are trying to accomplish. In an enterprise environment, space, cooling, and power are often big concerns. Depending on your environmental limitations and other factors (e.g. regulations, compliance), money isn't always the primary motivator; that all depends on the nature of the business. If you work in a business that is heavily regulated, you will likely not bet your job on a bunch of SATA disks to store your 5 (or 7, 10, or more) years' worth of data that must be searchable, discoverable, highly available, etc. (OK, you might bet your job on it, but I'm not going to bet mine on it.) Most likely, you are going to tell your company that to protect that data (and potentially your job, depending on your responsibilities), they need to shell out for a costly SAN. Perhaps even two geo-redundant SANs that are replicated. Then you might put a bunch of SATA disks behind that with a backup agent for another layer of protection. Then you might also dump that data to tapes, which you then ship offsite. Because if things get ugly, you don't want to be the decision maker or recommender who proposed the SATA disks because they were the "good enough" solution.
Or maybe you do want to be in that position. But I sure don't want to be there. I'm a big fan of well-developed DR/BC plans and highly available infrastructure. When things are working, there are many solutions that can work well. However, when things stop working, you have to have a well-formed plan in place to recover from the failure. And "we'll just get replacement drives from Costco" isn't a particularly well-formed plan (in fact, where I work, even suggesting that would probably result in termination). If you have to wait more than 4 hours to get replacement drives from HP, you should probably look at another storage vendor. Besides, your array should have enough hot spares to rebuild itself even if you don't get those drives in a timely manner.
TL;DR: Higher-performance disks may be required over cheap ones; it's not always just about redundancy. The same shoe doesn't fit everyone!