I respect geohot, but I don't think a 26-year-old kid completely grasps the ramifications of building and shipping a technology whose goal is, arguably, to "kill fewer people". Even if he does kill fewer people, some people are still going to die using this or any autonomous driving platform. Is he ready to deal with the fallout from that?
Also, we won't know how the technology compares to human drivers in terms of safety until millions or hundreds of millions of miles have been driven. It sounds like he's using low-end electronics, so chances are he's banking on deep learning solving all the driving problems. That's a very, very dangerous approach to take when deep learning has a strong tendency to exhibit pathological failure cases (see "adversarial examples"), and when it's usually impossible to fully understand, explain, justify, or characterize the strengths and weaknesses of a network trained on a given task.
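To make the adversarial-example worry concrete: even for a plain linear score (the usual toy "linear explanation" of the phenomenon), a per-feature perturbation of ±eps, aligned against the weights, shifts the output by eps times the sum of the absolute weights, which grows with input dimensionality. This is just an illustrative sketch with made-up numbers, nothing to do with comma.ai's actual stack:

```python
import numpy as np

# Toy linear classifier: tiny, imperceptible per-feature nudges
# add up to a large change in the score when inputs are high-dimensional.
rng = np.random.default_rng(0)
dim = 1000
w = rng.choice([-1.0, 1.0], size=dim) * 0.05   # made-up weights
x = rng.normal(0.0, 1.0, size=dim)             # made-up input

score = w @ x                    # original score
eps = 0.1                        # small per-feature perturbation budget
x_adv = x - eps * np.sign(w)     # FGSM-style step against the score
score_adv = w @ x_adv            # drops by eps * sum(|w|) = 0.1 * 50 = 5

print(round(score_adv - score, 6))  # -5.0
```

A real network isn't linear, but the same sign-of-gradient trick (FGSM) reliably finds such perturbations for deep models too, which is exactly the kind of pathological failure you don't want between a camera and a steering wheel.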