As I am sure we all know, a digital computer represents all information by either an on or off state, which is typically represented numerically as 1 or 0, respectively. Because the digital state is usually implemented as an analog voltage or current, there is some firm threshold value, above which the state is said to be on.
Therefore, to represent a piece of information, the information must first be encoded as a number, and then the number encoded into a series of off or on states. This is where binary notation comes from. Using only 0 and 1, we can in principle represent any number as easily as using the digits 0-9. For instance, in base 2, the number written in decimal form as 4 becomes 100. Perhaps a bit verbose, but quite adequate when a machine can complete thousands of operations every second.
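The conversion above can be checked with nothing beyond Python's built-ins; this is a minimal sketch, with the example values chosen only to mirror the text.

```python
# Decimal 4 in base 2 is 100, as claimed above.
# bin() produces the binary string; int(s, 2) parses one back.
assert bin(4) == "0b100"
assert int("100", 2) == 4

# Any number can be written in base 2; it simply takes more digits.
for n in [4, 9, 255]:
    print(n, "->", bin(n))
```

Running this prints each decimal value next to its binary form, making the verbosity of base 2 visible at a glance.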
The verbosity, however, is a problem for humans. For instance, to represent the decimal number 9 we must write 1001. A digital device has no problem with this, and neither do humans working close to the hardware, but as the amount of information to encode grows, humans want more information density.
This is where octal, or base 8, representation emerges. Octal notation groups three states, or bits, into one digit: instead of using only 0 and 1, we use 0-7. This means that to write the decimal number 7, instead of writing 0b111 we write 0o7, in which the 0o prefix means octal.
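The three-bits-per-digit grouping can be demonstrated directly, since Python accepts both the 0b and 0o prefixes used in the text:

```python
# 0b111 and 0o7 are two notations for the same value: one octal
# digit stands for exactly three bits.
assert 0b111 == 0o7 == 7

# oct() produces the 0o-prefixed form directly from any integer.
assert oct(7) == "0o7"
```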
Octal worked well when the bit was the basic unit of the computer, but soon information grew to the point that we began to group bits together. The smallest traditional grouping of bits is the nibble, which contains 4 bits. The largest number a nibble can hold is therefore 0b1111, or the decimal number 15. This led to the idea that we might want a numbering system whose digits can represent values up to decimal 15, and the hexadecimal system was adopted. In this system, digits run from 0 to F. The decimal number 7 is written 0x7; the decimal number 15 is written 0xF, 0o17, or 0b1111. One can see that, even though the computer does not care, hexadecimal is easier for people.
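The nibble equivalences listed above can be verified the same way; this sketch uses only built-in literals and `hex()`:

```python
# A nibble holds 4 bits, so its maximum value is 0b1111,
# which is 0xF in hex, 0o17 in octal, and 15 in decimal.
assert 0b1111 == 0xF == 0o17 == 15

# hex() gives the compact hexadecimal form (lowercase by default).
assert hex(15) == "0xf"

# One hex digit always covers exactly one nibble (four bits).
assert len(bin(0xF)) - len("0b") == 4
```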
Hexadecimal was in heavy use prior to the mid-1980s. While programming tasks were easily handled through the alphanumeric keyboard with minimal special keys, formatted text processing required copious entry of special codes. Even in programming, it was useful to direct many functions through the hardware using hex.
So, with today's huge bit capacities, it is easy to see why we use hexadecimal to represent numerical values. What is not so obvious is why we would represent a value using the longer form hex09f911029d74e35bd84156c5635688c0 rather than 0x09f911029d74e35bd84156c5635688c0.