There are several big problems with AR in the real world. These are well known in the head-up display (HUD) community and will surface in consumer AR scenarios too. The biggest is cognitive capture, where you attend to the AR imagery and ignore important details in the real world. I've seen this in research studies and it is a nasty piece of work. Thankfully, those were simulator lab studies.
The next problem is more subtle but still serious. AR imagery can mask things in the real world, effectively blinding you even when you are looking right at them. If the pop-up window covers the oncoming car, you're out of luck even if the image is see-through.

A related problem is focal length. In wearable AR, the image is likely to sit at a focal distance of some fraction of a meter. Very few real-world objects are at that distance, so you'll have to refocus constantly when shifting between the image and the world. Refocusing takes time and reduces awareness of what's out there. It isn't an issue for pilots, because an aviation HUD is collimated near optical infinity and nothing should ever get that close when you're flying. In cars, the focal distance is often near the front bumper. That's not ideal, but you can't spend commercial-jet or fighter-plane money on a consumer automotive HUD.
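The refocusing cost can be made concrete in diopters, the standard unit of accommodation (1 divided by the focal distance in meters): the bigger the dioptric gap between the AR image and the object you glance at, the larger and slower the focus change. A minimal sketch, with distances chosen purely for illustration (not measured values from any particular device):

```python
def diopters(distance_m: float) -> float:
    """Optical power needed to focus at a given distance (1/m)."""
    return 1.0 / distance_m

def refocus_demand(image_m: float, object_m: float) -> float:
    """Accommodation change, in diopters, when shifting focus
    between an AR image and a real-world object."""
    return abs(diopters(image_m) - diopters(object_m))

# Hypothetical wearable: image at ~0.5 m, oncoming car 30 m away.
# The demand is large, so every glance costs a noticeable refocus.
print(refocus_demand(0.5, 30.0))

# Aviation-style HUD collimated near optical infinity (modeled here
# as ~100 m) vs. terrain 1 km away: the demand is close to zero,
# which is why pilots don't pay this cost.
print(refocus_demand(100.0, 1000.0))
```

The key point the numbers illustrate: once the image is more than a few meters out optically, everything farther away is nearly the same in diopters, so the refocusing penalty collapses. Close-in focal distances are what make it painful.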
In short, you're going to miss threats, and react to them more slowly when you finally do see them.