It's not always the fault of the controllers; it can also be the way they're connected to the system.
These onboard controllers are connected to the system over a PCI Express x1 link - it's literally just like plugging them into an x1 slot, except they're soldered directly onto the motherboard. The problem is that there are two versions of PCI Express in use: the older PCI Express 1.0 provides 250 MB/s in each direction, while PCI Express 2.0 provides 500 MB/s in each direction.
AMD motherboards only had PCI Express 2.0 lanes, but Intel had a mix of 2.0 and 1.0 lanes - the most common layout was 32 x 2.0 lanes (enough for two x16 slots for graphics cards) and about 6 x 1.0 lanes coming from the southbridge. So motherboard manufacturers had to either use one lane from the southbridge and get only 250 MB/s in each direction, or resort to multiplexing chips that take 2 or more lanes and create an x4 path for the controller. More recently, motherboards detect whether there is a card in the second PCI Express x16 slot, and if it's empty they "borrow" a few of those unused lanes to improve the performance of the various controllers integrated on the motherboard.
See this AnandTech article; it explains it better than I can: http://www.anandtech.com/show/2973/6gbps-sata-performance-amd-890gx-vs-intel-x58-p55/2
But the point is that even when a PCI Express 2.0 lane is used, there's only 500 MB/s in each direction. SATA 6 Gbps is the raw line rate; after the 8b/10b encoding overhead, the maximum usable payload is about 600 MB/s. Very few motherboards connect these controllers through more than a single x1 lane, so even if the controller could push 600 MB/s, you won't get it.
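To make the bottleneck concrete, here's a small sketch (SATA, like PCIe 1.0/2.0, uses 8b/10b encoding, so the 6 Gb/s line rate carries at most 600 MB/s of payload; the 500 MB/s PCIe 2.0 x1 figure is from above):

```python
# SATA 6 Gb/s payload after 8b/10b encoding (10 wire bits per 8 data bits)
sata6_mb_per_s = 6e9 * 8 / 10 / 8 / 1e6   # -> 600.0 MB/s
pcie2_x1_mb_per_s = 500.0                 # PCIe 2.0 x1 link, per direction

# The slower of the two links is the effective ceiling for the controller.
effective_ceiling = min(sata6_mb_per_s, pcie2_x1_mb_per_s)
print(sata6_mb_per_s)     # 600.0
print(effective_ceiling)  # 500.0 - the x1 link bottlenecks first
```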
This is nothing new - remember gigabit network cards on PCI? The entire PCI bus in your computer tops out at 133 MB/s, and a single gigabit link can do about 110 MB/s - would you sue anyone if you plugged 4 PCI gigabit cards into your system and couldn't get more than 133 MB/s of total throughput?
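The PCI analogy works out the same way, since classic PCI is a shared bus (a sketch; the 133 MB/s figure is 32-bit/33 MHz PCI, and ~110 MB/s is typical gigabit Ethernet payload):

```python
# Classic 32-bit/33 MHz PCI is a shared bus: every card on it splits
# the same ~133 MB/s of total bandwidth.
pci_bus_mb_per_s = 133.0
gigabit_nic_mb_per_s = 110.0   # realistic payload of one gigabit link
nics = 4

# Four NICs could in theory move 440 MB/s, but the bus caps the total.
aggregate = min(nics * gigabit_nic_mb_per_s, pci_bus_mb_per_s)
print(aggregate)  # 133.0
```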