I'd bet that you could use that many megapixels to seriously boost dynamic range by averaging several adjacent pixels into one.
Simply put: no. Software "averaging" may smooth out noise, but it cannot add information that was not captured in the first place. Dynamic range that is missing at the hardware level simply is not there to be recovered in software. In digital camera sensors, dynamic range is limited at the top end by saturation of the sensor's photosites: once a photosite has collected enough photons, it registers maximum charge, and information about any further photons arriving at that photosite during the exposure is lost. In fact, packing more photosites into the same area tends to make things worse -- it increases per-photosite noise and the fraction of the chip given over to non-light-gathering overhead. Noise reduces dynamic range at the low end, and a smaller charge capacity per photosite reduces it at the high end.
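A quick way to convince yourself of this is a toy simulation (my own sketch, not from any camera's actual pipeline): clip a bright signal at an assumed full-well value, then average 2x2 pixel blocks. The averaging smooths noise, but every region that clipped still reads the ceiling value -- the lost highlight detail does not come back.

```python
import numpy as np

# Toy sketch: simulate photosites that saturate at an assumed
# full-well value, then bin 2x2 blocks into one "super-pixel".
rng = np.random.default_rng(0)
full_well = 255

# A bright gradient whose true values exceed the clipping point.
true_signal = np.linspace(200, 400, 16).reshape(4, 4)
noisy = true_signal + rng.normal(0, 5, true_signal.shape)
recorded = np.clip(noisy, 0, full_well)  # saturation: excess photons are lost

# Average each 2x2 block into one super-pixel.
binned = recorded.reshape(2, 2, 2, 2).mean(axis=(1, 3))

# Noise is reduced, but the averaged value can never exceed the
# saturation ceiling, so clipped highlights stay clipped.
print(recorded.max())  # 255.0 -- the bright half of the frame clipped
print(binned.max())    # still capped at 255.0 after averaging
```

The mean of values that are all at or below `full_well` is itself at or below `full_well`, which is the whole point: no arithmetic on saturated samples can reconstruct what the photosite never recorded.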
As another poster notes, you could vary the effective exposure each photosite receives (perhaps with Bayer-array-style neutral-density filtering). Or you can do what Fuji did with the S3 Pro: build a matrix of photosites of different sizes and sensitivities to improve dynamic range. Fuji's sensor, while nice, has hardly taken over the digital imaging world.
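The dual-sensitivity idea can be sketched in a few lines. This is a hypothetical simplification of what Fuji's SR design does (the sensitivity ratio and the hard switchover are my assumptions, not Fuji's actual processing): pair each sensitive photosite with a much less sensitive one, and fall back to the scaled low-sensitivity reading wherever the sensitive one has clipped.

```python
import numpy as np

# Hypothetical sketch of a dual-photosite merge: "s" is the sensitive
# photosite, "r" the low-sensitivity one. Values are illustrative.
full_well = 255
ratio = 16  # assumed sensitivity ratio between the two photosites

true_signal = np.array([50.0, 200.0, 800.0, 3000.0])
s_read = np.clip(true_signal, 0, full_well)           # clips above 255
r_read = np.clip(true_signal / ratio, 0, full_well)   # clips 16x later

# Where the sensitive photosite saturated, trust the scaled-up
# low-sensitivity reading instead.
merged = np.where(s_read >= full_well, r_read * ratio, s_read)
print(merged)  # recovers values well above the single-photosite ceiling
```

The trade-off is visible even in the toy version: the extended range comes from dedicating extra silicon to the second photosite, not from post-hoc math on one set of clipped samples.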
On a more constructive note, Ctein wrote up a nice exposition on The Online Photographer about both near-term sensor technologies entering production and long-term avenues for advancement in digital imaging technology.