A likely example: the horrific abomination known as "little-endian binary". I don't know for sure that it came about for patent reasons, but I can't think of any sane reason why it would have come into existence otherwise.
From a purely machine-theoretical standpoint, putting the low-order byte at the lowest memory location makes at least as much sense as the other way around.
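As a rough illustration of what that layout means in practice (just a sketch, assuming a little-endian host and a C99 compiler):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        uint32_t value = 0x11223344;
        uint8_t bytes[4];
        memcpy(bytes, &value, sizeof value);      /* capture the in-memory layout */

        /* On a little-endian host this prints 44 33 22 11: the low-order
           byte (0x44) sits at the lowest address.  One consequence is that
           re-reading the same address as a narrower type still yields the
           low-order part of the value. */
        for (int i = 0; i < 4; i++)
            printf("byte %d: %02x\n", i, (unsigned)bytes[i]);

        uint16_t low16;
        memcpy(&low16, &value, sizeof low16);     /* low 16 bits live at the same address */
        printf("low 16 bits: %04x\n", (unsigned)low16);
        return 0;
    }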
Streaming transmission is a different matter, and in some instances it can benefit from receiving the MSB first. This is especially true if the data gets acted upon in real time and the MSB is needed early in the calculation. However, in many other cases, LSB-first network byte order can be more advantageous (or at least not a disadvantage). So the decision to use either is really driven by the algorithms chosen for the network traffic itself.
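A minimal sketch of the kind of calculation that benefits from LSB-first arrival: summing multi-byte integers as the bytes come in, since carries only ever propagate from the low end (the field width and values here are made up for illustration):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t total[4]    = {0};                       /* running sum, LSB at index 0 */
        uint8_t incoming[4] = {0xF0, 0xFF, 0x01, 0x00};  /* 0x0001FFF0, transmitted LSB first */

        /* Each byte can be folded into the total the moment it arrives,
           because the carry only moves toward the high-order end.  With
           MSB-first transmission the whole value would have to be
           buffered before the addition could start. */
        unsigned carry = 0;
        for (int i = 0; i < 4; i++) {                    /* i follows arrival order */
            unsigned sum = total[i] + incoming[i] + carry;
            total[i] = (uint8_t)(sum & 0xFF);
            carry = sum >> 8;
        }

        uint32_t result = ((uint32_t)total[3] << 24) | ((uint32_t)total[2] << 16)
                        | ((uint32_t)total[1] << 8)  |  (uint32_t)total[0];
        printf("total = 0x%08x\n", (unsigned)result);
        return 0;
    }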
When writing interface code for opposite-endian systems, it's tempting to avoid translation and keep both sides in the same format. But I've personally never had trouble with this, since I've always used reversed buffers wherever operating directly on reversed multi-byte values was useful.
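For what it's worth, here is a sketch of what the translation step looks like when you do convert: assembling each value from the received buffer byte by byte, so the host's own endianness drops out of the picture entirely (the field layout and values are made up for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* Read a 32-bit big-endian value out of a received buffer.  Building
       it up from individual bytes works the same on any host, so no
       explicit swap step or reversed-buffer bookkeeping is needed. */
    static uint32_t read_be32(const uint8_t *p) {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
             | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }

    int main(void) {
        uint8_t buf[4] = {0x00, 0x01, 0xFF, 0xF0};   /* 0x0001FFF0 in big-endian wire order */
        uint32_t value = read_be32(buf);
        printf("value = 0x%08x\n", (unsigned)value);
        return 0;
    }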
However, it stands to reason that the designers of the first little-endian processors didn't consider this a problem: most byte traffic has to be buffered and validated before it can be used in calculations, which obviates any need for network byte order to match the host's endianness. Since these were all designed in the early days, I see no reason to assume that choosing little-endian was any sort of compromise to the state of the art.