A lot of the 1960s big iron had 9-bit bytes, e.g. the venerable DEC-10. Those machines needed 36 bits to represent the range of data they wanted (typically floating point). A 36-bit word can only be divided evenly according to 36 = 2x2x3x3, i.e. into chunks of 1, 2, 3, 4, 6, 9, 12, or 18 bits. Some systems used 6 bits to represent a character in a very limited character set; some used 9 bits per character (not every bit pattern necessarily corresponded to a printable character).

The big change to 8-bit bytes came with the IBM 360, which chose a 32-bit word (32 = 2x2x2x2x2) and its easy division into chunks of 1, 2, 4, 8, or 16 bits. At the time, the 9-bit byte was the standard and the 8-bit byte was the incompatible new kid on the block. Which is why the ITU uses the unambiguous term 'octet' instead of the ambiguous 'byte'.
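If you want to check the divisibility arithmetic yourself, here's a small sketch (plain Python, nothing machine-specific; the `divisors` helper is just for illustration) showing the even character-chunk splits each word size allows:

```python
def divisors(n: int) -> list[int]:
    """Return all positive divisors of n in ascending order."""
    return [d for d in range(1, n + 1) if n % d == 0]

# The character sizes a word can be evenly split into are exactly
# the divisors of the word length.
print(divisors(36))  # [1, 2, 3, 4, 6, 9, 12, 18, 36]
print(divisors(32))  # [1, 2, 4, 8, 16, 32]
```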