It seems we are talking about two different things.
The limits.h file has always been compiler/vendor-specific, or at least CPU-specific.
You misunderstood me. What I quoted was the wording of the ISO C Standard, which sets the requirements for what the values in limits.h must be; it wasn't taken from any particular header. Of course, any specific implementation can have different values in that header, but the Standard requires them to be at least as large in magnitude as the numbers I've quoted.
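For example, here's a minimal sketch that prints what your compiler's limits.h actually provides; the Standard only guarantees the minimum magnitudes noted in the comments, and the actual output will vary by platform:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* The ISO C Standard only guarantees minimum magnitudes;
           the actual values are implementation-defined. */
        printf("CHAR_BIT  = %d\n",   CHAR_BIT);  /* at least 8 */
        printf("SHRT_MAX  = %d\n",   SHRT_MAX);  /* at least 32767 */
        printf("INT_MAX   = %d\n",   INT_MAX);   /* at least 32767 */
        printf("LONG_MAX  = %ld\n",  LONG_MAX);  /* at least 2147483647 */
        printf("LLONG_MAX = %lld\n", LLONG_MAX); /* at least 9223372036854775807 (C99) */
        return 0;
    }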
Regarding C++ standards: for a very long time they were never followed (at least usually not completely, especially not on Windows), so it was always wise to actually read the .h files or the vendor's manual.
I'm actually not aware of any C++ implementation that doesn't follow the requirements I've quoted. C++ inherited them from C starting with the very first version of the ISO standard back in 1998, and compilers have generally conformed, largely because they have themselves usually evolved from C compilers and had already implemented that part of the spec.
However, it is nice to know that some basic agreements on sizes were finally made (how helpful they might be is still a question, imho). I mean, the latest C code I had to read was still full of ugly #define u_int32 unsigned int and other stuff like that. I wonder why they never simply agreed on types/typenames like int16 / int32, when every serious program is full of either typedef'd or #define'd helpers like that. Of course, it would limit the vendors' ability to create a compiler for very odd word sizes, like 9 or 11 bits, but how likely is it that such architectures will come up again?
This has been dealt with in C99 and in C++11. We now have a new header, stdint.h (wrapped as cstdint in C++), which defines typedefs like int32_t and uint16_t.
They did, in fact, account for odd word sizes, as well. The header actually defines several families of types, like so:
int8_t, int16_t, int32_t, int64_t, ...
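In practice this means the home-grown #define helpers can simply go away. A minimal sketch of typical usage (the struct and its field names are purely illustrative):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    /* A wire-format header built from exact-width types instead of
       ad-hoc "#define u_int32 unsigned int" style helpers. */
    struct packet_header {
        uint16_t version;
        uint32_t payload_length;
        int64_t  timestamp;
    };

    int main(void)
    {
        struct packet_header h = { 1, 512, 1400000000 };
        /* The PRI* macros from inttypes.h supply the correct printf
           format specifiers for these typedefs on any platform. */
        printf("v%" PRIu16 ", %" PRIu32 " bytes, t=%" PRId64 "\n",
               h.version, h.payload_length, h.timestamp);
        return 0;
    }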
Now, types from the intN_t family are optional - an implementation is not required to provide them. However, it must provide a specific typedef if it does have an integer type with the exact matching number of bits (with no padding bits and, for the signed variants, two's complement representation). So if e.g. long is 32-bit, then it must provide int32_t. Or, for some exotic architecture with, say, a 24-bit word, if int corresponds to that word, then it must provide int24_t.
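A minimal sketch of what that exact-width guarantee buys you (using C11's _Static_assert purely for demonstration; these checks hold by definition wherever the types exist at all):

    #include <stdint.h>
    #include <limits.h>

    /* Unlike plain int or long, whose widths vary between platforms,
       the exact-width types have no padding bits, so their size in
       bits is fixed wherever they are provided. */
    _Static_assert(sizeof(int32_t) * CHAR_BIT == 32,
                   "int32_t is exactly 32 bits");
    _Static_assert(sizeof(int64_t) * CHAR_BIT == 64,
                   "int64_t is exactly 64 bits");

    int main(void) { return 0; }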
int_leastN_t and int_fastN_t, in contrast, are required to always be available for 8, 16, 32 and 64 bits (and are optionally available for any other N). As the names imply, those aren't actually guaranteed to have exactly N representation bits - int_leastN_t is the smallest integer type with at least N bits, and int_fastN_t is the fastest integer type with at least N bits (usually the native word, if N is smaller than the word size).
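A minimal sketch contrasting the two families (the reported widths are implementation-defined, so the output will vary by platform):

    #include <stdint.h>
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Guaranteed to exist even where no exact 16-bit type does:
           the smallest type with at least 16 bits... */
        int_least16_t small = 12345;
        /* ...and whatever type the implementation considers fastest
           for at-least-16-bit arithmetic (often the native word). */
        int_fast16_t fast = 12345;

        printf("int_least16_t: %zu bits, int_fast16_t: %zu bits\n",
               sizeof small * CHAR_BIT, sizeof fast * CHAR_BIT);
        return 0;
    }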
There's also [u]intmax_t, which is guaranteed to be able to hold any value of any other integer type provided, and intptr_t (also optional, strictly speaking), which is guaranteed to round-trip a pointer: you can convert a void pointer to intptr_t and back without losing any data. And then there are a bunch of macros to specify literals of those types (e.g. INT64_C), to obtain their max/min values (e.g. INT32_MAX) etc - and in the case of the intN_t types, the limit macros can also be used to query for their existence with an #ifdef. Of course, most people don't bother and just assume that those are always available, which is true on all but the most exotic platforms (like the aforementioned SHARC, where the smallest addressable memory unit is the 32-bit word, and hence it cannot provide int8_t or int16_t at all).
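A minimal sketch tying those pieces together (it assumes a platform where intptr_t is provided, which is nearly universal):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* INT64_C attaches the right suffix, so the literal is valid
           even where a plain long is only 32 bits. */
        int64_t big = INT64_C(9000000000);
        printf("big = %" PRId64 ", range [%" PRId64 ", %" PRId64 "]\n",
               big, INT64_MIN, INT64_MAX);

        /* The limit macro is defined only if the type itself exists,
           so #ifdef doubles as a feature probe. */
    #ifdef INT8_MAX
        puts("exact 8-bit integers are available here");
    #endif

        /* Round-tripping a pointer through intptr_t. */
        int value = 42;
        intptr_t raw = (intptr_t)(void *)&value;
        int *back = (int *)(void *)raw;
        printf("round-trip: %d\n", *back);
        return 0;
    }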
gcc has had all of these for ages. VC++ was lagging quite a bit (since it was only following the C++ spec, and kept its C support at the C89 level) - it only got the header in VS 2010. But for the last 4 years, it has been possible to use all of this to write code that is fully portable across all major platforms and compilers.