A 10 Gb connection is an incredible amount of bandwidth, even when we're discussing storage. Disk IO will run out well before bandwidth becomes a consideration unless we're talking about data that is striped across 100+ disks.
Bandwidth used = IO/s * size of transaction.
Or, rearranged, basic algebra reveals how many IO/s (and from there, how many drives) it takes to fill a given pipe:
IO/s = Bandwidth / Size of Transaction
Most file systems use relatively small blocks, so an average disk transaction tends to range from 4 KB to 16 KB. A 15k SAS drive can realistically sustain about 180 IO/s.
1 Gbit/s ~= 120 MB/s of bandwidth (allowing for some overhead) = 122,880 KB/s
So, dividing our available bandwidth by our transaction size (we'll assume the high end of 16 KB here; an 8 KB average is much more common in the wild) reveals how many IO/s we'd need to fill that pipe. Dividing that number by 180 (the IO/s of our SAS drive) tells us how many drives are needed in a RAID 0 (optimizing purely for performance).
122,880 KB/s / 16 KB = 7,680 IO/s; 7,680 / 180 ~= 43 SAS drives. With no redundancy, we'd need 43 of these drives to saturate even a single gig-e connection.
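If it helps, here is that same arithmetic as a short Python sketch. The figures are the illustrative assumptions used above (a gig-e link, 16 KB transactions, 180 IO/s per 15k SAS drive), not measurements of any particular array:

```python
import math

# Assumptions, matching the example above (not measured values).
link_bandwidth_kb_per_s = 120 * 1024   # ~1 Gbit/s after overhead, in KB/s
transaction_size_kb = 16               # high-end average transaction size
drive_iops = 180                       # sustained IO/s of one 15k SAS drive

# IO/s needed to fill the pipe, then drives needed in a RAID 0 stripe.
iops_to_fill_pipe = link_bandwidth_kb_per_s / transaction_size_kb
drives_needed = math.ceil(iops_to_fill_pipe / drive_iops)

print(f"IO/s to saturate the link: {iops_to_fill_pipe:.0f}")   # 7680
print(f"15k SAS drives in RAID 0:  {drives_needed}")           # 43
```

Swap in a 10 Gb link or an 8 KB transaction size and the drive count scales accordingly, which is why disk IO, not bandwidth, is usually the limit.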
While I disagree with you about where the bottleneck on a SAN is likely to be, the SAN is only one cost to consider. Backups and replication can easily triple the cost of the SAN's storage itself, as reliable bandwidth is an expensive recurring cost to the IT organization.
That said, the IT organization should be able to provide much more affordable storage to you ($1 per GB is reasonable) if it is sitting on a SAN that is built primarily for space rather than for speed.