It's the other way around.
Even though we call it high dynamic range in videos and photographs, the process (tone mapping) actually compresses all the extra information from multiple exposures into a LOWER dynamic range, so we can manipulate and display it on our 8-bit screens.
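To make that concrete, here's a minimal sketch of the compression step. The Reinhard operator used below is just one common tone-mapping curve, chosen for illustration - real photo HDR pipelines are far more elaborate - but it shows the key idea: luminances spanning a huge range all get squeezed into the 0-255 levels an 8-bit display can show.

```python
def reinhard(l):
    """Map an HDR luminance value (0..infinity) into the range 0..1."""
    return l / (1.0 + l)

def to_8bit(l):
    """Quantize the tone-mapped value to an 8-bit display level (0..255)."""
    return round(reinhard(l) * 255)

# Scene luminances spanning four orders of magnitude all land in 0..255:
for lum in [0.05, 1.0, 16.0, 400.0]:
    print(lum, "->", to_8bit(lum))
```

Note how 16.0 and 400.0 end up only a few levels apart: the highlights are compressed hardest, which is exactly the detail-everywhere, flat look described below.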
Games, however - such as those built on the Source engine after it got the HDR update with Half-Life 2: Lost Coast and Day of Defeat: Source - actually do increase the dynamic range of a scene beyond what your monitor can display. They underexpose and overexpose parts of the scene when you transition between light and dark places, just as your eyes would before they adjusted to the new light, or as a video camera would depending on what exposure the videographer chose. This makes it look more realistic - take a look at a bright outdoor scene in Half-Life 2: Episode Two and notice how shiny objects in the sunlight have blown-out highlights that gleam brilliantly, then look at the same scene in the original Half-Life 2, where the same object looks flatly lit and fake. The "non-HDR" version looks more fake because the dynamic range is compressed so you can see all the detail everywhere, which also gives it that flat "game" look.
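The eye-adaptation effect described above can be sketched in a few lines. This is a hypothetical toy model, not Source's actual implementation: the exposure chases the scene's average brightness a fraction of the way each frame, so stepping from a dark interior into sunlight briefly leaves the exposure far too high and the highlights blown out, just like your eyes before they adjust.

```python
def adapt(current_exposure, scene_luminance, rate=0.1):
    """Move exposure a fraction of the way toward the target each frame.
    The target is the exposure that would map the scene's average
    luminance to middle gray (0.5). 'rate' controls adaptation speed."""
    target = 0.5 / scene_luminance
    return current_exposure + (target - current_exposure) * rate

exposure = 0.5 / 0.05    # fully adapted to a dark room (avg luminance 0.05)
for frame in range(30):  # player steps outside (avg luminance 4.0)
    exposure = adapt(exposure, 4.0)
    # On early frames, exposure is still far above the sunlit target of
    # 0.125, so bright surfaces render as blown-out white highlights.
```

Lowering `rate` stretches out the over- and underexposed transition, which is the knob a game would tune to mimic how slowly eyes adapt.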
Of course, that last part is just my opinion - but I believe that in order to look more realistic, CGI needs to simulate the behavior of traditional cameras with a lower dynamic range (or that of your eyes before they've adjusted properly to bright or dim light). The everything-is-exposed-properly, compressed-dynamic-range look just appears fake to me, even though my eyes could probably perceive that full range if I were standing in the actual scene. I'm not sure why.