Folks, we live in an age where programmers declare integers that are only ever going to count from 1...10 as LONG INTEGERS, eating 8 bytes of RAM when only 1 byte is needed.
Well, does it matter? On a modern system, RAM is handed out in 4 KiB pages on most architectures. Your variable is going to live either on the stack or in the BSS section, and unless you're actually filling that page up, using 1 byte or 8 bytes makes zilch of a difference: you're consuming 4096 bytes either way. Loading 1 byte or 8 bytes from RAM into a register still pulls in a whole cache line (64 bytes on most modern architectures), and the value gets fitted into a whole 8-byte-wide register in the end.
Depending on your needs, using a 64-bit variable to hold 4 bits of data may even be more efficient, if 1-byte accesses cause significant slowdowns because of misalignment.
Hell, the most constrained I've ever been was on an ARM microcontroller. It's quite a strange feeling working with 8 KiB of RAM and 16 KiB of flash and yet having full 32-bit pointers and integers.