I have 90Mbit down / 10Mbit up Lightning service from Brighthouse and it is quite real. I can say for certain it is real because I have a co-located machine at Terramark on a 1Gbit link running SNMP, and I move enough data both ways to do the math and validate. The fact is that they deliver well over rated speeds: I routinely push 11Mbit sustained up and pull over 100Mbit sustained down. Sustained to me means at least an hour down, with some of my sustained uploads running 8 or more hours (a lot of cameras, a lot of data pushed offsite every day).
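The "do the math" part is simple: take two readings of the interface octet counter over SNMP and convert the delta to Mbit/s. A minimal sketch, with hypothetical counter values (the helper name and numbers are mine, not from the post):

```python
# Sustained throughput from two samples of an SNMP interface octet counter
# (e.g. IF-MIB ifHCInOctets). Counter values and interval are hypothetical.

def mbits_per_sec(octets_start: int, octets_end: int, seconds: float) -> float:
    """Average throughput between two counter samples, in Mbit/s."""
    if seconds <= 0:
        raise ValueError("interval must be positive")
    return (octets_end - octets_start) * 8 / seconds / 1_000_000

# One hour at roughly 100 Mbit/s down works out to ~45 GB moved:
octets_moved = 45_000_000_000
print(round(mbits_per_sec(0, octets_moved, 3600), 1))  # → 100.0
```

If the hour-long average computed this way matches or beats the rated speed, the link is real; no speed-test site is involved.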
One thing you really need to understand in this battle for bandwidth is that you are absolutely owned by your network transit path. An interior network (in the context I mean, you are part of your ISP's interior network) may have plenty of capacity while its edges are grossly inadequate (as with Comcast and AT&T the last time I was on their pipes), and that fact can thoroughly skew your test results, because ISPs can (and some definitely do) divert bandwidth-test traffic onto a better path than your real traffic will ever see.
The short answer, IMHO, is that you can only determine true bandwidth against a real, uncongested validation point that you trust. Bandwidth tests are gamed in other ways too. One trick is traffic shaping with a burst: you get the full rated pipe for a minute, then get hacked down step by step until you reach whatever the ISP decides you get sustained. That will show high bandwidth in a test, but the ISP-chosen rate surfaces as soon as you actually move some traffic around.
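A toy model shows why that burst trick fools short tests. The step schedule below is made up for illustration (a hypothetical 90Mbit plan throttled to 30Mbit sustained), but the shape is the point: a 30-second test sees only the burst window, while an hour-long transfer sees the real rate.

```python
# Toy model of burst-then-throttle traffic shaping. The rate schedule is
# hypothetical; real shapers use token buckets, but the effect is the same.

def shaped_rate(t_seconds: float) -> float:
    """Allowed rate in Mbit/s at time t for a hypothetical 90Mbit plan."""
    if t_seconds < 60:
        return 90.0   # full rated pipe during the burst window
    elif t_seconds < 120:
        return 60.0   # first step down
    else:
        return 30.0   # what you actually get sustained

def average_rate(duration_s: int) -> float:
    """Mean throughput over a transfer of the given length, in Mbit/s."""
    return sum(shaped_rate(t) for t in range(duration_s)) / duration_s

print(round(average_rate(30), 1))    # 30 s speed test  → 90.0, looks great
print(round(average_rate(3600), 1))  # 1 h transfer     → 31.5, the real rate
```

This is why "sustained over at least an hour" matters: any measurement shorter than the burst window reports the marketing number.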
Personally (and Larry Ellison may want to kill me for this), I have used various Oracle image downloads (not little Java tarballs, but ISOs for Solaris and other big Oracle stuff) as a basis for occasional tests in the past. My trust in this methodology stems from the fact that I can routinely pull over 300Mbit from Oracle to my co-located host, and I can nearly always saturate my inbound well above spec on Brighthouse.
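The big-ISO method boils down to timing a long chunked download and averaging. A minimal sketch, assuming you substitute a large file URL you trust; here an in-memory blob stands in for the network stream so the example is self-contained:

```python
# Measure sustained throughput by timing a chunked read of a large stream.
# In real use you would wrap something like
# urllib.request.urlopen(<URL of a big ISO>); the BytesIO blob below is a
# stand-in so this sketch runs offline.

import io
import time

def measure_mbits(stream, chunk_size: int = 1 << 16) -> float:
    """Read a file-like object to exhaustion, return average Mbit/s."""
    start = time.monotonic()
    total = 0
    while chunk := stream.read(chunk_size):
        total += len(chunk)
    elapsed = time.monotonic() - start
    return total * 8 / max(elapsed, 1e-9) / 1_000_000

fake_iso = io.BytesIO(b"\0" * 10_000_000)  # 10 MB placeholder payload
print(measure_mbits(fake_iso) > 0)
```

Because the transfer runs long enough to outlast any burst window, and Oracle's side demonstrably has headroom well beyond the access link, the bottleneck you measure is your own pipe.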