It's actually not that hard to do. I worked on UT Austin's robot car a while back. Our biggest problem wasn't obstacle avoidance - the LIDAR was very good at differentiating "object" from "ground". It wasn't the driving logic either - that took us all of four weeks. Our hardest problem was figuring out where the edge of the road was. The car used to rely on painted lane markings, which are missing on most rural Texas roads. We eventually figured out a really fast way to process the huge LIDAR point cloud and hand the result to the vision algorithms to overlay onto the camera feed. Marvin (the car) now has no problem finding the road edge and inferring where the lane markings would be.
I should also note that I'm talking about the edges of those rural roads that kind of just gradually fade into the surrounding ground. The LIDAR's vertical sensitivity is ~2", so we'd have no problem with off-road driving. Staying within poorly defined lanes was what we were going for.
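I can't paste our actual pipeline, but here's a toy sketch of one common way to find a road edge in a point cloud when there's no paint to follow: bin the returns into a ground grid and look at height smoothness per cell. Asphalt is very flat; grass, gravel, and ditches are not, so the edge shows up where smooth cells meet rough ones. The cell size and variance threshold below are made-up illustration numbers, not Marvin's, and this is NumPy for clarity rather than the optimized code we ran:

```python
import numpy as np

def road_smoothness_mask(points, cell=0.25, z_var_thresh=0.0025):
    """Classify grid cells as 'road-like' by height smoothness.

    points: (N, 3) array of LIDAR returns (x, y, z) in meters.
    Cells whose z-variance stays below the threshold (~5 cm std here,
    an illustrative number) are treated as smooth road surface; the
    road edge is wherever smooth cells border rough ones.
    """
    # Quantize x, y into grid cells and shift to non-negative indices.
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    shape = ij.max(axis=0) + 1
    flat = np.ravel_multi_index((ij[:, 0], ij[:, 1]), shape)

    # Per-cell z variance computed from running sums (one pass, no loops).
    counts = np.bincount(flat, minlength=shape.prod())
    s = np.bincount(flat, weights=points[:, 2], minlength=shape.prod())
    s2 = np.bincount(flat, weights=points[:, 2] ** 2, minlength=shape.prod())
    with np.errstate(invalid="ignore", divide="ignore"):
        mean = s / counts
        var = s2 / counts - mean ** 2

    # Require a few returns per cell so sparse cells don't fake smoothness.
    smooth = (counts > 3) & (var < z_var_thresh)
    return smooth.reshape(shape)
```

The nice property is that this keys off the 3D shape of the surface, not color or paint, which is exactly what you need when the lane markings aren't there. In practice you'd then trace the smooth/rough boundary and project it into the camera image for the overlay.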