Your eyes are far better at matching light across both views to get the depth mapping right. A standard camera only captures 24-bit color (8 bits each for red, green, and blue). At that level you get a depth map of sorts, but not a very good one.
Lasers try to get around that limitation by projecting a wavelength the camera can easily pick out and compare between the two images. If you could use the whole image at any wavelength, you'd be a lot better off.
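The matching step described above is essentially stereo correspondence: for each pixel in one image, find the horizontal shift (disparity) that best lines it up with the other image. Here's a minimal block-matching sketch; the function name and parameters are illustrative, not from any particular library. Note how the matching cost depends entirely on distinguishable pixel values, which is why limited color depth limits the quality of the resulting depth map.

```python
import numpy as np

def disparity_map(left, right, max_disp=16, block=5):
    """Naive block-matching stereo. For each pixel in the left image,
    slide a window along the same row of the right image and pick the
    shift (disparity) with the lowest sum of absolute differences.
    Inputs are 2-D grayscale arrays; depth is proportional to
    1/disparity for a calibrated rig."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(int)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y-half:y+half+1,
                             x-d-half:x-d+half+1].astype(int)
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

With only 8 bits per channel, many patches along a row look identical and the cost function has no unique minimum; that ambiguity is exactly what a projected laser pattern (or finer wavelength discrimination) is meant to break.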
That's ultimately the challenge: cameras that are not only extremely sensitive to differences in light, but also very high resolution. Failing that, the cameras will need to look around the way your eyeballs do.
In a 3D-mapped world, by contrast, all the depth information is exact by construction.
To emulate what might be possible in the real world, they'd need to render 48- to 64-bit color to carry enough information for accurate depth.