I don't think it's that crazy. I did some robot navigation work for a thesis project years ago, when compute power was abysmal and sensor capability was very limited too. The lesson learned was that navigation and mapping are relatively easy regardless of the data source (we did nav and mapping on an original 128KB Macintosh), but spatial sensor processing is hard and unreliable (nothing we had available could keep up, and the raw sensor data sucked too).
The key is reliability: you can certainly build optical/sonar/laser etc. sensors quite effectively these days, but it takes a lot of processing horsepower to unambiguously convert what you're seeing into a map. Note that the amazing flying-UAV demos frequently posted here are not doing the sensor processing on the UAV platform itself; that's all done by off-board vision equipment tracking the vehicle against a clean white background.
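To give a sense of why the mapping side is cheap, here's a toy log-odds occupancy-grid update of the kind that would have fit on a 128KB machine. Everything here (grid size, cell size, the `integrate_range` helper, the log-odds increments) is invented for illustration, not from any particular robot stack; the point is just that folding one range reading into a map is a handful of integer cell updates.

```python
import math

# Illustrative parameters -- not from any real system.
GRID_W, GRID_H = 20, 20
CELL = 0.1            # meters per grid cell
L_OCC, L_FREE = 0.85, -0.4  # log-odds increments for a hit / a miss

grid = [[0.0] * GRID_W for _ in range(GRID_H)]

def integrate_range(grid, x, y, theta, r):
    """Fold one range beam into the grid: cells along the beam get
    marked free, the cell at the measured range gets marked occupied."""
    steps = int(r / CELL)
    for i in range(steps + 1):
        cx = round((x + i * CELL * math.cos(theta)) / CELL)
        cy = round((y + i * CELL * math.sin(theta)) / CELL)
        if not (0 <= cx < GRID_W and 0 <= cy < GRID_H):
            return
        grid[cy][cx] += L_OCC if i == steps else L_FREE

def occupancy(log_odds):
    """Convert accumulated log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

# Robot at (1.0, 1.0) facing +x; the sonar returns a 0.5 m range.
integrate_range(grid, 1.0, 1.0, 0.0, 0.5)
```

The expensive part this sketch skips entirely is producing a trustworthy `r` in the first place, which is exactly the sensor-processing problem described above.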
Some combination of bouncing around and low-quality sensors is probably quite a decent approach.