That USED to be true. The bottleneck isn't the hard drive anymore; it's all the layers that get put in between when you access a disk over the network. Modern hard drives can easily do 60 MB/s sustained.
For instance, I have a couple of RAID 6 arrays that clock in at about 250 MB/s and 150 MB/s natively. If I hook that machine up directly to another machine's ethernet port, I only get about 30 MB/s sharing the device over iSCSI. SMB and NFS yield similar results. This is true even though I can get over 900 Mbps using iperf, and 900 Mbps is roughly 112 MB/s of raw bandwidth, so the wire itself clearly isn't the limit.
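If you want to reproduce the comparison, one quick way is to time sequential reads off the raw device, first on the host that owns the array and then against the iSCSI-attached device node. Below is a minimal Python sketch of that idea; /dev/md0 is just a placeholder for whatever your device actually is, and you'll want to drop the page cache first (echo 3 > /proc/sys/vm/drop_caches) or the numbers will be inflated:

    #!/usr/bin/env python3
    """Rough sequential-read throughput check for a block device.

    Point DEVICE at the local array, then at the iSCSI-attached
    device, and compare. Needs root for raw devices.
    """
    import time

    DEVICE = "/dev/md0"     # placeholder; substitute your device node
    CHUNK = 1024 * 1024     # read in 1 MiB chunks
    TOTAL = 1024 * CHUNK    # sample 1 GiB

    read_bytes = 0
    start = time.monotonic()
    with open(DEVICE, "rb", buffering=0) as dev:  # unbuffered at the Python level
        while read_bytes < TOTAL:
            chunk = dev.read(CHUNK)
            if not chunk:   # hit the end of the device/file early
                break
            read_bytes += len(chunk)
    elapsed = time.monotonic() - start

    print(f"{read_bytes / elapsed / 1e6:.1f} MB/s over {read_bytes} bytes")

A plain dd with a 1M block size gets you the same answer; the point is just to measure the identical sequential workload on both paths so the only variable is the network stack in the middle.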
Sharing disks over gig-E sucks when you actually need throughput. It's fine when you just need to expand a SAN and speed is secondary. FWIW, I've heard that bonding two gig-E cards doesn't yield much of an improvement, so I assume latency, rather than raw bandwidth, is part of the reason it's slower.