Well, it depends on how you define "accuracy". A clock can only ever be accurate in its own reference frame. As soon as you reach outside of the local reference frame, though, there's nothing directly tying the ticking of this clock to any other. So while atomic clocks are great for knowing how much time has passed locally, they are (in and of themselves) generally pretty useless at knowing what time it is.
"What time it is" is effectively a fabrication. UTC (the most common version of "what time it is") combines the measurements of several hundred atomic clocks around the world to get an "official" time. Several hundred clocks that are all accurate to parts-per-billion, but all existing in different reference frames, and thus all ticking slightly differently. (And as a bonus, those reference frames change as materials deep in the earth move, underground water tables change, etc, so you can't even just program an offset into each clock so that everything lines up...)
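As a toy illustration of that combining step (nothing like BIPM's actual ALGOS algorithm, which derives weights from each clock's demonstrated long-term stability and applies frequency steering on top), a paper timescale is conceptually just a weighted average of clocks that slightly disagree:

```python
# Toy "paper clock": combine several clocks' readings into one ensemble time.
# The weights here are made up; real timescale algorithms (e.g. BIPM's ALGOS,
# which produces EAL/TAI) weight each clock by its measured stability.

def ensemble_time(readings, weights):
    """Weighted average of clock readings (seconds)."""
    total = sum(weights)
    return sum(r * w for r, w in zip(readings, weights)) / total

# Three clocks that disagree at the nanosecond level:
readings = [100.000000001, 100.000000004, 99.999999998]
weights = [1.0, 0.5, 2.0]  # invented "stability" weights
t = ensemble_time(readings, weights)
```

The ensemble lands somewhere between the best and worst clocks; no single physical clock actually reads the "official" time.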
GPS clocks are actually corrected. There are at least three different corrections and compensations going on (these are the standard ones):

- A deliberate frequency offset dialed into each satellite clock before launch, cancelling the bulk of the relativistic effects (gravitational blueshift minus orbital time dilation, a net of roughly +38 microseconds per day).
- Clock correction parameters estimated by the ground control segment and uploaded to each satellite, which receivers apply to that satellite's broadcast time.
- A small periodic relativistic correction, computed in the receiver, accounting for the eccentricity of each satellite's orbit.
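One of those corrections is relativistic: GPS satellite clocks are deliberately offset in frequency before launch so that, on orbit, they tick at the right rate relative to the ground. A back-of-the-envelope check of that number, using textbook constants and a circular-orbit approximation:

```python
# Rough estimate of the net relativistic rate offset for a GPS satellite
# clock, assuming a circular orbit. Textbook constants; the satellites use
# the officially specified frequency offset, not this calculation.

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
C = 299_792_458.0     # speed of light, m/s
R_EARTH = 6.371e6     # mean Earth radius, m
R_ORBIT = 2.656e7     # GPS orbit radius (~20,200 km altitude), m

v2 = GM / R_ORBIT                            # orbital speed squared
sr = -v2 / (2 * C**2)                        # special relativity: motion slows the clock
gr = (GM / C**2) * (1 / R_EARTH - 1 / R_ORBIT)  # general relativity: weaker gravity speeds it up

net_us_per_day = (sr + gr) * 86400 * 1e6
print(f"net offset: {net_us_per_day:+.1f} microseconds/day")  # roughly +38.5
```

Uncorrected, that ~38 µs/day would translate into kilometers of ranging error per day, which is why it's compensated before the clock ever flies.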
Anyhow, the best way to look at the long term 'accuracy' of an atomic clock is to consider the accuracy to be the amount of uncertainty existing in passage-of-time measurements in the clock's local reference frame. And that, in and of itself, has almost nothing to do with actually knowing what time it is.
"Unusable" is a standard field in the DECOM template, see https://celestrak.com/GPS/NANU...
And doing a little browsing, I see that SVN32 had an earlier notice: http://www.navcen.uscg.gov/?Do...
The earlier notice was of type FCSTUUFN -- Forecast Unusable Until Further Notice: scheduled outage of indefinite duration. And that notice says that the start time of that unusability period was 025/1500. The start time of the unusability period in the DECOM notice you linked was 36 minutes after that: 025/1536. So they said it was going to be unusable around 15:00, and it was actually unusable at 15:36. And the notice itself was posted on Jan 20.
So I'm going to update my response from "I don't read that as a failure" to "definitely not a failure", barring an explicit statement otherwise by someone actually running the GPS constellation.
Not in this case, because the particular error was a configuration error that multiple satellites were broadcasting (and they agreed with each other). RAIM works by noticing that a satellite differs a lot from what is expected based on what the rest of the constellation is doing... when a chunk of the constellation is all saying the same (wrong) thing, RAIM can't really do anything about it.
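A crude sketch of why consensus-based fault detection fails here (this is not real RAIM, which tests pseudorange residuals against a position/time solution, but the failure mode is the same):

```python
# Toy consensus check, loosely in the spirit of RAIM: flag any measurement
# that disagrees with the majority (here, the median). Real RAIM works on
# pseudorange residuals, but shares this weakness to correlated errors.

def flag_outliers(offsets_us, threshold_us=1.0):
    """Return indices of measurements far from the consensus (median)."""
    med = sorted(offsets_us)[len(offsets_us) // 2]  # median for odd counts
    return [i for i, v in enumerate(offsets_us) if abs(v - med) > threshold_us]

# One satellite broadcasting a bad 13 us offset: correctly flagged.
print(flag_outliers([0.0, 0.0, 0.0, 0.0, 13.0]))    # -> [4]

# A majority broadcasting the *same* bad offset: the consensus itself is now
# wrong, and the healthy satellites get flagged instead.
print(flag_outliers([0.0, 0.0, 13.0, 13.0, 13.0]))  # -> [0, 1]
```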
This was not a problem where "one or two satellites" had something bad happen. Even well-designed GPSDOs had a problem with this one, since large chunks of the constellation were broadcasting a bad A0 parameter.
The best-designed, of course, went "uh, something really weird just happened with time, I'm gonna stop tracking GPS and throw an alarm." But that had nothing to do with getting disagreeing data from satellites, and everything to do with good clocks realizing that a ~13-microsecond jump in time meant something somewhere was wrong in ways that are impossible to recover from cleanly.
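A sketch of that kind of sanity check, assuming a hypothetical disciplined-oscillator loop (all names and thresholds here are invented for illustration):

```python
# Hypothetical GPSDO-style sanity check: if GPS-derived time suddenly steps
# by more than the local oscillator could plausibly have drifted between
# fixes, stop steering to GPS and raise an alarm instead of folding garbage
# into the local timescale.

MAX_PLAUSIBLE_STEP_S = 1e-6  # a good OCXO can't drift a microsecond between fixes

def check_step(predicted_utc, gps_utc):
    """Return 'ok' to keep disciplining, or 'holdover' to alarm and coast."""
    if abs(gps_utc - predicted_utc) > MAX_PLAUSIBLE_STEP_S:
        return "holdover"  # freeze on the local oscillator, alert the operator
    return "ok"

print(check_step(1000.0000000, 1000.0000001))  # tiny error -> 'ok'
print(check_step(1000.0000000, 1000.0000137))  # ~13.7 us jump -> 'holdover'
```

The local oscillator becomes the arbiter of plausibility: it can't tell you what time it is, but it can tell you that time did not just jump.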
You *could* assume that SVN23 was removed because of a hardware failure. But since SVN23 was scheduled to be decommissioned right about now (or, at least, before the launch of a new satellite next month), I'm not sure that's the assumption *I* would make.
Really, if it was a satellite failure, I'd expect the official statement to say "there was a satellite failure" rather than "the configs got screwed up when we decommissioned something". There's nothing anywhere that says there was any kind of failure (other than in process), so I'm not sure how "there was a failure" is any kind of valid interpretation of the available information.
It looks like the actual problem was a bad data upload; specifically, some satellites were transmitting incorrect parameters for the UTC offset correction. https://www.febo.com/pipermail... is the posting from a gentleman at Meinberg that has the details. http://www.usno.navy.mil/USNO/... has more information about the time offset parameters (A0 and A1) and how they interact with GPS and UTC time.
According to another message (https://www.febo.com/pipermail/time-nuts/2016-January/095686.html), PRNs 2, 6, 7, 9, and 23 got hit. It is interesting to note that the satellite that was taken out of service this morning (PRN 32) is not in this list. It looks like the decommissioning of PRN32 was quite possibly scheduled (see http://gpsworld.com/last-block...), and even if not, a failure of that specific satellite could not have caused multiple satellites to start broadcasting incorrect offset data.
I'm really looking forward to the postmortem on this.
Fair enough on the specifics, but you did miss my entire point.
Actually, the Ariane 5 loss was caused by an overflow, not a divide by zero (see first paragraph at https://en.wikipedia.org/?titl...).
Even if it were a divide by zero problem, though, the error was in the code handling flight trajectory computations; I dare say an uncaught error in the code computing your trajectory is going to put the rocket somewhere that you don't want it, regardless. So you fail in one way or you fail in a different way. I doubt there's any case where ignoring a divide by zero error (or an overflow error) would actually keep the rocket on a correct trajectory.
This can be the right thing to do *if the entire (sub)system is designed to accommodate it*. But I certainly would not want, say, ICBM targeting code to drop a missile on my house because someone decided that continuing to run in the face of an error was better than aborting...
"Rather than failing when an unexpected condition arises, I want all software on my system to continue running with a possibly invalid or meaningless internal state."
Sure, what could go wrong?
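To make the contrast concrete, here's a toy version of the choice, loosely inspired by the Ariane 5 float-to-16-bit-int conversion (names and numbers invented for illustration):

```python
# Toy illustration of "fail fast" vs. "keep running with garbage state",
# loosely modeled on a 16-bit integer conversion like the one that doomed
# Ariane 5. Everything here is made up for illustration.

def to_int16(x, fail_fast=True):
    """Convert x to a 16-bit signed integer, or abort on overflow."""
    if -32768 <= x <= 32767:
        return int(x)
    if fail_fast:
        raise OverflowError(f"{x} does not fit in 16 bits")
    # "Keep running" variant: wrap around like two's-complement hardware.
    return ((int(x) + 32768) % 65536) - 32768

print(to_int16(40000, fail_fast=False))  # silently becomes -25536: garbage in your state
try:
    to_int16(40000)                      # fail-fast variant raises instead
except OverflowError as e:
    print("aborted:", e)
```

Neither outcome saves the rocket, but the fail-fast variant at least tells you *where* the assumption broke instead of steering on a number that means nothing.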
No negotiation, replace the suite on both ends once per decade.
So, what... the Internet gets together and decides that January 1 of every year ending with '0' we'll upgrade every server, client, and embedded system in the world to the latest security protocol while disabling the previous decade's? And people whose systems are out of support or can't be patched (which would only be, what, 80% of the current internet?) are just SOL?
I think I see some flaws in your plan.
I would *love* to see a summary of the types of problems the video stream has, and the techniques used to recover them. Anyone feel like sorting through the ~70 pages of thread and cataloging them?
So, um, you do realize that there's not actually a technical differentiation between an ISP and anyone else peering with someone on the Internet, yes? None. A peer is a peer is a peer. There are a lot of companies that don't "pay an ISP for their bandwidth" because they're peering directly with all the big (and plenty of small) network players. The idea that a small handful of companies are "internet service providers" and everyone else must buy from them has never been an accurate representation of how the Internet actually works. And *I* most certainly *do* know the details.
Do you also realize that even if Netflix doesn't have "an ISP," they still have to transit their own traffic to whatever peering points they use? That's far from free. The only reason Netflix would pay "their ISP" in the first place would be to move Netflix's traffic from wherever Netflix originates it to one of the peering points where they peer with Comcast. Not having "an ISP" do that for them doesn't negate the need. The data doesn't just magically appear at a peering point somewhere.
Also, do you realize that it's quite possible that Netflix would actually peer with Comcast in places that were actually *good* for Comcast? Netflix, in general, seems to want to offload their data onto end users' ISPs' networks as close to those users as possible, since that's how their users get the best quality of service. Doing so means that transiting Netflix's traffic is actually *cheaper* for Comcast, because they don't have to haul it as far across their network to deliver it.
(This is why Netflix actually offers major ISPs *free* servers that the ISP can put on their network in whatever locations they like, and which will originate a large portion of Netflix's traffic. This means the ISPs can put the sources of that traffic in the places that are cheapest and best for them, at virtually no cost, and save lots of money in the process, since they no longer have to transit the traffic from wherever they peer. Hell, shove one of those in the same buildings that terminate all your customers in a major metro area, and you practically eliminate Netflix as a source of traffic on that ISP's backbone in that area...)
Now, I realize you're just trolling, but I'm posting just in case someone out there doesn't realize that and tries to take you seriously.
Nothing is faster than the speed of light ... To prove this to yourself, try opening the refrigerator door before the light comes on.