We shouldn't be allowing a language in the kernel that doesn't have a standard.
Even before Rust, the kernel wasn't written to ISO C, but to GNU C (there are plenty of quotes from Linus to that effect).
Please re-read what I wrote: Smearing works perfectly well in an environment where you control both the (smearing) server(s) _and_ the clients!
If you can make sure that all your clients reduce their poll interval before the smearing starts, then you can track any reasonable smearing trajectory without ever getting more than a ms or so away from your reference, and consequently from all your peers in the same environment.
For a global/pool server, the NTP Hackers would prefer all this to just work, with all clocks agreeing what the time is, without any single client dropping out of sync because it suddenly realizes that its local clock is more than 128 ms away from the reference(s).
As soon as you need this to also work in a real-time process control environment it becomes a lot harder: Are you sure that all your processes can handle a temporarily non-stable time reference? Can you even apply smearing to some of those process-internal clocks/frequency references?
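As a back-of-the-envelope check (a simplified model with illustrative numbers, not anything taken from ntpd itself), the worst-case tracking error of a client that re-aligns to the server's smeared time at every poll is roughly smear_rate × poll_interval, which is where the "a ms or so" above comes from:

```python
# Simplified model: the client is assumed to re-align perfectly at
# each poll, so the worst-case offset is what accumulates between
# two polls while the server smears at a constant rate.
def worst_case_offset(smear_rate_ppm: float, poll_interval_s: float) -> float:
    """Offset in seconds accumulated between two consecutive polls."""
    return smear_rate_ppm * 1e-6 * poll_interval_s

# ~14 ppm is roughly what a 1 s smear spread over most of a 24 h
# window works out to (an assumed rate, for illustration only).
SMEAR_PPM = 14.0

for poll in (64, 1024):
    off_ms = worst_case_offset(SMEAR_PPM, poll) * 1e3
    print(f"poll = {poll:4d} s -> worst-case offset ~ {off_ms:.1f} ms")
```

At the 64 s minimum poll interval the client stays under a millisecond of the smeared reference; at 1024 s it lags by ~14 ms between polls.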
Terje
NTP has absolutely nothing to do with local time and/or daylight saving time: It works only in UTC, but with corrections to handle and propagate leap second events.
Terje
Smearing the leap second is a solution Google came up with for their own data centers when they realized that they had too many protocols that didn't know how to handle UTC leaps properly. It really cannot be applied generally unless everyone can agree on exactly how to do it.
In my own test code, I did the smearing over a 24 hour period, centered around the leap event. The main arguments here are on how to determine an optimal smearing function: You want a gradual increase, then a mostly constant slope period, before a gradual decrease near the end.
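One possible realization of that shape (a sketch only, not the test code referred to above; the 4 h ramp and 16 h plateau lengths are arbitrary choices) is a trapezoidal rate profile: the slew rate ramps linearly up, holds constant, then ramps linearly down, and the smeared offset is the integral of that rate:

```python
# Trapezoidal smear profile over a 24 h window centered on the leap:
# the slew rate ramps up over T_RAMP, stays at R_MAX for T_FLAT,
# then ramps back down; the offset is the integral of the rate.
T_RAMP = 4 * 3600                # 4 h linear ramp at each end (assumed)
T_FLAT = 16 * 3600               # 16 h at constant rate (assumed)
T_TOTAL = 2 * T_RAMP + T_FLAT    # 24 h window in total
R_MAX = 1.0 / (T_RAMP + T_FLAT)  # ~13.9 ppm peak rate for a 1 s leap

def smear_offset(t: float) -> float:
    """Accumulated smear offset in seconds, t seconds into the window."""
    if t <= 0:
        return 0.0
    if t < T_RAMP:                          # rate ramping up
        return 0.5 * R_MAX * t * t / T_RAMP
    if t < T_RAMP + T_FLAT:                 # constant rate
        return 0.5 * R_MAX * T_RAMP + R_MAX * (t - T_RAMP)
    if t < T_TOTAL:                         # rate ramping down
        return 1.0 - 0.5 * R_MAX * (T_TOTAL - t) ** 2 / T_RAMP
    return 1.0                              # full second absorbed
```

With these numbers the peak slew rate stays under 14 ppm, comfortably inside the budget discussed below, and the offset function is continuous and monotonic across both ramps.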
There are however several potential problem areas with smearing:
a) ntpd works within a maximum of 500 ppm adjustment rate, of which the majority must be reserved for correcting the local clock, leaving maybe 100 ppm as the maximum smearing rate, so at that point it will take about 10000 seconds (just under 3 hours) to smear a second. Reducing the max smear rate to around 20 ppm is compatible with a 24-hour adjustment.
b) Very stable clients will only poll the server(s) every 1024 seconds, or even less often (every 2K/4K/8K seconds), and to detect a change in the reference clock, a client needs 4 consecutive polls showing a drift from the previous stable value.
c) ntpd considers an offset of 128 ms to be infinity; at that point it will restart the protocol engine, losing sync until everything has stabilized against the current smearing rate. It should be obvious that a smearing setup which drops sync at both ends of the process would be really bad.
d) If you can force the protocol to drop the poll interval from 1024 s down to the standard minimum of 64 s, then it becomes much easier to track/follow a smearing server.
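The numbers in (a)-(d) combine into simple arithmetic (the rates and intervals below are the ones quoted above, nothing more):

```python
# How long a 1 s smear takes at a given slew rate, and how long a
# slow-polling client needs before it even notices the smear started.
def smear_duration_s(rate_ppm: float) -> float:
    """Seconds needed to slew away a full leap second at rate_ppm."""
    return 1.0 / (rate_ppm * 1e-6)

print(smear_duration_s(100) / 3600)   # 100 ppm: ~2.8 h per second
print(smear_duration_s(20) / 3600)    # 20 ppm: ~13.9 h, fits in 24 h

# Point (b): a client wants 4 consecutive polls showing the new drift
# before it reacts, so the reaction delay scales with the poll interval.
for poll in (64, 1024):
    print(f"poll {poll:4d} s -> reaction delay ~ {4 * poll} s")
```

At the 1024 s poll interval a client takes over an hour just to accept that the reference has started moving, which is why forcing the interval down to 64 s, as in (d), makes tracking so much easier.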
Terje
No. You are thinking of Harlan Stenn, who took over the lead role more than 15 years ago. He does not live in Nebraska though; instead he migrates a bit between Oregon and California (Silicon Valley). Otherwise that comic is _exactly_ right!
Terje
An adequate bootstrap is a contradiction in terms.