Can't you just use several cameras?
I have a Panasonic sub-compact camera with a mixed-light setting that takes multiple exposures of the same scene and processes them together into one properly exposed image.
In this brief PDF they do mention the innovative part, though sadly without details:
Typically, HDR images are developed using multiple cameras or multiple exposure sequencing. The game changing approach implemented here is to create high-speed HDR video imagery utilizing a single camera without time sequencing.
Camera exposure will instead be controlled at the chip/pixel level and then integrated into a high-speed video camera. The resulting HDR capability will be easier to install and operate within the SSC test stands because the entire system will be contained within a single camera; this is a completely revolutionary and innovative means to generate HDR capability with high-speed video when compared with the labor-intensive steps associated with the careful alignment required when multiple cameras are used to generate similar imaging results.
So it seems they have per-pixel exposure control, rather than a full-frame exposure control.
I'm not sure how that works. Perhaps, instead of letting the charge build up on each cell and then reading all the cells after time T, they time how long each cell takes to saturate, and after a cutoff time T measure the charge of the remaining cells?
The idea being that you use the time-to-saturation as a measure of brightness for the over-exposed areas, and the traditional charge level for the well-exposed areas.
That's just the first thing that popped into my mind, but I don't really know the area, so yeah... it may be a very stupid idea :)
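To make the guess concrete, here's a toy numerical sketch of that hybrid readout. Everything here is made up for illustration (the full-well capacity, cutoff time, and the assumption of a noise-free, linear pixel); it just shows how time-to-saturation and charge-at-cutoff could both map back to a brightness estimate:

```python
# Toy model of the guessed per-pixel exposure scheme:
# bright pixels are read out via time-to-saturation,
# dim pixels via the traditional accumulated charge.
# All constants are hypothetical.

FULL_WELL = 1000.0  # hypothetical pixel well capacity (electrons)
CUTOFF_T = 1.0      # hypothetical frame exposure time (seconds)

def estimate_brightness(photon_rate):
    """Estimate brightness (electrons/s) for a pixel accumulating
    charge at `photon_rate`, using the hybrid readout."""
    t_saturate = FULL_WELL / photon_rate  # time for the well to fill
    if t_saturate <= CUTOFF_T:
        # Over-exposed pixel: infer brightness from how fast it filled.
        return FULL_WELL / t_saturate
    # Well-exposed pixel: traditional charge readout at the cutoff.
    charge_at_cutoff = photon_rate * CUTOFF_T
    return charge_at_cutoff / CUTOFF_T

# In this idealized model both paths recover the true rate; the point
# is that the bright pixel is not clipped at FULL_WELL.
for rate in (50.0, 5000.0):  # a dim pixel and a bright pixel
    print(rate, estimate_brightness(rate))
```

In a real sensor the saturation-timing path would need a per-pixel comparator and timestamp, which is presumably the hard chip-level part the PDF is alluding to.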