This article is about synchronizing time between systems and processes, not about clock accuracy, as in "my watch is five minutes fast."
If a self-driving car sees something in front of it and launches an app to determine what that object is, that app needs to return an answer before the car hits the object, and in time to brake to a stop if necessary. It needs a time signal to understand how much time it has left. The problem, in this situation, is that without some sort of accurate time signal and time synchronization, the object recognition app could take more than the remaining time to produce an answer. Of course, you could launch a second app that acts as an emergency braking program and hits the brakes in time even if the object recognition app hasn't returned a result. The problem here is that you still don't know, within a rigid level of certainty, that the emergency braking app will complete in time.
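The deadline logic above can be sketched in a few lines of Python: a monotonic clock supplies the time signal, the recognition task runs against a hard time budget, and a fallback braking path fires if the budget is exhausted. This is only an illustrative sketch; the function names, simulated latency, and time budget are all hypothetical.

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

# Hypothetical stand-ins for the two apps described above.
def recognize_object():
    time.sleep(0.01)  # simulate variable recognition latency
    return "pedestrian"

def emergency_brake():
    return "BRAKE"

# Assumed time budget before impact, derived from speed and distance.
DEADLINE_S = 0.5

with ThreadPoolExecutor(max_workers=1) as pool:
    start = time.monotonic()  # monotonic clock acts as the "time signal"
    future = pool.submit(recognize_object)
    try:
        # Wait only for whatever time remains in the budget.
        remaining = DEADLINE_S - (time.monotonic() - start)
        result = future.result(timeout=remaining)
        action = f"plan around {result}"
    except TimeoutError:
        # Recognition missed its deadline: fall back to the braking path.
        action = emergency_brake()

print(action)
```

Even this sketch shows the residual problem the paragraph raises: the timeout only bounds how long we *wait*, not how long the fallback itself takes to run.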
In many ways you can see this exact same problem with inexperienced drivers. It takes them longer to process what's in front of them and decide whether to hit the brakes. An experienced driver has an almost automatic awareness ("muscle memory") that gives them an advantage when reacting to situations they have encountered before.
My thought is that as these scenarios become "learned", they can be moved to "muscle memory". For example, most firewall devices rely on application-specific integrated circuits (ASICs) for real-time firewall rule evaluation. It seems to me that self-driving cars will require their own version of ASICs that contain "rules of the road" and evaluation shortcuts to handle real-time events without having to rely on time signaling.
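The "muscle memory" idea can be sketched in software terms: learned scenarios are promoted into a precomputed lookup table, so the real-time path is a constant-time table hit (the ASIC analogue) rather than an open-ended evaluation. The scenarios and reactions below are purely illustrative assumptions.

```python
# Precomputed "rules of the road": learned (scenario, context) pairs
# mapped to immediate reactions. All entries here are hypothetical.
LEARNED_REACTIONS = {
    ("pedestrian", "crosswalk"): "brake",
    ("ball", "roadway"): "brake",  # a ball is often followed by a child
    ("plastic_bag", "roadway"): "continue",
}

def react(obj, location):
    # Fast path: a constant-time lookup, no deliberation needed.
    key = (obj, location)
    if key in LEARNED_REACTIONS:
        return LEARNED_REACTIONS[key]
    # Slow path: the full, time-unbounded evaluation, stubbed here.
    return "run full object-recognition pipeline"

print(react("ball", "roadway"))  # learned scenario: immediate reaction
print(react("deer", "roadway"))  # novel scenario: falls back to slow path
```

The design point is the same one firewall ASICs make: the fast path trades flexibility for a hard, predictable latency bound, and anything not in the table falls back to the slower general-purpose path.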