There's a difference between elapsed time locally and globally. Locally (i.e. on a single processor), you can have some meaningful concept of absolute time (i.e. whenever the timer interrupt fires). The moment you introduce a second processor, you run into problems: the two can tick at different rates, and the non-trivial communication delay between them means you can never synchronize them tightly enough to treat their clocks as the same.
For most applications, you can get away with fairly large tolerances. e.g. wifi is probably fine with 1% variations in your concept of how long a second is. But that only works because you're using local elapsed time. If you need to compare timestamps from multiple nodes (global absolute time), you need approaches like vector clocks, because otherwise there's no meaningful way to reason about whether one event happened before another.
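For anyone who hasn't run into them, here's roughly what a vector clock buys you. This is a minimal sketch, not any particular library's API, and the node names and helper functions are made up for illustration:

```python
# Minimal vector-clock sketch. Each node keeps a counter per node; events
# carry the whole map, so ordering comes from causality, not wall-clock time.

def vc_increment(clock, node):
    """Local event on `node`: bump that node's own counter."""
    clock = dict(clock)
    clock[node] = clock.get(node, 0) + 1
    return clock

def vc_merge(local, received, node):
    """On receiving a message: element-wise max of both clocks, then bump ours."""
    merged = {n: max(local.get(n, 0), received.get(n, 0))
              for n in set(local) | set(received)}
    return vc_increment(merged, node)

def happened_before(a, b):
    """True if a causally precedes b: every counter in a <= b, at least one strictly less."""
    keys = set(a) | set(b)
    return (all(a.get(k, 0) <= b.get(k, 0) for k in keys)
            and any(a.get(k, 0) < b.get(k, 0) for k in keys))

# Two nodes that never communicate produce concurrent events:
a = vc_increment({}, "node_a")   # {"node_a": 1}
b = vc_increment({}, "node_b")   # {"node_b": 1}
print(happened_before(a, b), happened_before(b, a))  # False, False -> concurrent
```

The point is the last line: if neither clock "happened before" the other, the events are concurrent, and no amount of timestamp precision will give you an ordering between them.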
tldr; protocols like wifi just need elapsed time to be roughly correct. Comparing timestamps between systems is much, much more complicated, and in the general case you can't even infer an ordering without tracking causality. (In that sense, there are a lot of parallels with relativity.)