Indeed not. Top pay for a scientific/technical position in the Commonwealth public service is about $130K.
And back in the 1990s we had BMRT (a free RenderMan clone), until they came and paid/threatened the guy to stop making the free clone available.
Sorta. Larry Gritz, the author of BMRT, went to work for Pixar and then left to start his own company, Exluna, whose main product was a RenderMan competitor called Entropy. Unfortunately, Pixar's lawyers jumped on Exluna and the company was vaporised; BMRT and Entropy were no longer available after that. Larry Gritz then went to work for Nvidia on a GPU-accelerated renderer, I think.
They are the official timekeepers for the US, along with the U.S. Naval Observatory (which also operates the timescale that GPS satellite clocks are steered to), but they are not timekeepers for the world.
The international standard for time is UTC, a 'paper' clock, which is the average of atomic clocks from all around the world.
Standard OS clocks only tick at about 100 Hz, so you're always out by an average of 5 ms anyway.
Nope. Although the system timer interrupt only runs at between a few hundred Hz and a kHz, other, faster counters are used to interpolate between these ticks. On Linux, for example, the Time Stamp Counter (TSC) in the CPU can be used to improve the timestamp resolution to a microsecond, or even to nanoseconds with the nanokernel patch (which is standard in the BSDs, I think).
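You can see this interpolation from userspace. If timestamps only advanced at the ~100 Hz tick, two back-to-back clock reads would usually differ by 0 or ~10 ms; with counter interpolation the delta is far smaller. A quick sanity check in Python (the advertised resolution comes from whatever clock source the OS reports, so exact values vary by system):

```python
import time

# Resolution the OS advertises for the wall-clock time source
# (typically far finer than the 100 Hz-1 kHz timer interrupt).
info = time.get_clock_info("time")
print("advertised resolution:", info.resolution, "s")

# Two back-to-back reads of the monotonic clock. With tick-only
# timestamps the delta would be 0 or ~10 ms; interpolation via a
# fast counter like the TSC gives sub-microsecond deltas.
t1 = time.monotonic_ns()
t2 = time.monotonic_ns()
print("back-to-back delta:", t2 - t1, "ns")
```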
In my experience of operating a network of geographically dispersed stratum 1 NTP servers, there is frequently asymmetry at the 1 to 2 ms level and occasionally worse. An NTP implementation like ntpd filters out these outliers, but the simple protocol you are suggesting would not.
PTP cannot account for network asymmetries any more than NTP can. It can only guarantee symmetric paths when all the hardware between two endpoints is PTP-capable, meaning that each intermediate device has to implement a PTP boundary (or transparent) clock.
In the end, it seems silly not to synchronise device clocks to a universal reference. There are many local applications which require a timestamp that must be compared with a timestamp on another device at some time down the line.
"You should try to keep up."
I think the poster can be forgiven for not knowing about an alpha-release NTP client that only works on *nix at the moment (and was only released 3 months ago).
"network latency adjustment is automatic" - I don't understand this statement.
If you are only taking two timestamps - client transmit and server receive - then you have no information about the network latency.
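For contrast, NTP's four-timestamp exchange (client send t1, server receive t2, server send t3, client receive t4) lets you estimate both the offset and the round-trip delay, under the assumption that the path is symmetric. A sketch of the standard arithmetic with simulated timestamps (all the numeric values here are made up for illustration, not from a real client):

```python
# Classic NTP offset/delay estimate from four timestamps.
# Values in seconds; in this simulation the client's clock runs
# 0.5 s ahead of the server's.
true_offset = 0.5          # client clock minus server clock
fwd_delay = 0.030          # client -> server network delay
rev_delay = 0.030          # server -> client network delay

t1 = 100.000                          # client transmit (client clock)
t2 = (t1 - true_offset) + fwd_delay   # server receive (server clock)
t3 = t2 + 0.001                       # server transmit (server clock)
t4 = (t3 + true_offset) + rev_delay   # client receive (client clock)

# With only t1 and t2, offset and latency are hopelessly conflated:
naive = t1 - t2   # = true_offset - fwd_delay

# The four-timestamp formulas recover both, if the path is symmetric.
# 'offset' is the server clock relative to the client, i.e. the
# correction the client should apply (here -0.5 s).
offset = ((t2 - t1) + (t3 - t4)) / 2
delay = (t4 - t1) - (t3 - t2)

# If the path is asymmetric, the estimate is biased by half the
# difference: offset error = (fwd_delay - rev_delay) / 2.
print(naive, offset, delay)
```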
>>Nope. I don't know anyone over 50 who knows how to build or repair a steam engine.
Amusingly, I know three, and one of them is under 50. But my sample is probably biased: the people I work with are all physicists or engineers.
I just read an SF story that used an idea like this: analogous to the time dilation experienced as you fell into a 'normal' black hole, you would see spatial dilation as you fell into a black hole 'in time'. The idea that the universe is inside a black hole has been bandied about, but I can't find a reference for the 'temporal black hole' idea (I feel sure that the story I read was based on a scientific paper).
"They may not be able to illiterate why "
We had a high-resolution, full-body scanner at work that was being used to build a database of body shapes. I volunteered, but was rather dismayed when I loaded the 3D model and saw what shape I really am.
Two dollars is a lot - paperbacks I bought in the late 1970s were under a dollar, so your theory about an inflated price does look plausible.
One more thought before I go off to sleep...
The spacecraft is the source of the magnetic field, so the field lines have to terminate on it, which means hot plasma is continuously blasting along them into you.
The paper is a one-pager of introductory plasma physics. It isn't a serious calculation and it wasn't meant to be. Anyway:
Their model is as follows. A plasma will reflect all electromagnetic radiation below a certain frequency, determined by its density. The plasma exerts a pressure like a gas, and they then assume confinement of the plasma with a magnetic field, balancing the plasma pressure against the 'pressure' that a magnetic field exerts on charged particles. They then say that we can make magnetic fields of up to 100 T and, working backwards, estimate the plasma frequency, which turns out to be in the UV. So great: you can reflect lasers at frequencies all the way up to the UV with a modest confining field.
You need to look at some of the other numbers though.
First, what sort of plasma density do you need to reflect UV? The answer is something like 10^28 per cubic metre. This is enormous - fusion plasmas are about a million times less dense. It's getting close to solid-state density: e.g. if a solid has atoms 0.2 nm apart, that is about 10^29 atoms per cubic metre. That is not going to be easy.
The other thing to look at is the required plasma temperature. They assume a temperature of 1000 K. Unfortunately, the density of a plasma at 1000 K in thermal equilibrium is extremely low unless the background pressure is huge. So it has to be a lot hotter - in particular, comparable with the ionization energy, which is roughly 100 000 K. And really, we need a fully ionized plasma, because the magnetic field is not going to confine the neutral gas that we are using to make the plasma. So that means we need a 100 000 K plasma, which means the required magnetic field goes up by a factor of 10.
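The numbers above are easy to check with the textbook formulas: the critical density at which the plasma frequency reaches a given light frequency, and the pressure balance B^2/2mu0 = n k T. A rough sketch (the 300 nm wavelength and the electron-only pressure term are my simplifications):

```python
import math

# Physical constants (SI)
eps0 = 8.854e-12   # vacuum permittivity
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C
c = 2.998e8        # speed of light, m/s
k_B = 1.381e-23    # Boltzmann constant, J/K
mu0 = 4e-7 * math.pi

# Critical density to reflect light of wavelength lam: the plasma
# frequency omega_p = sqrt(n e^2 / (eps0 m_e)) must exceed omega.
lam = 300e-9                      # a near-UV wavelength (assumed)
omega = 2 * math.pi * c / lam
n_crit = eps0 * m_e * omega**2 / e**2
print(f"critical density: {n_crit:.2e} m^-3")   # ~1.2e28, as claimed

# Confining field from pressure balance B^2 / (2 mu0) = n k T
def b_field(n, T):
    return math.sqrt(2 * mu0 * n * k_B * T)

print(f"B at 1000 K:    {b_field(n_crit, 1e3):.1f} T")
print(f"B at 100 000 K: {b_field(n_crit, 1e5):.1f} T")  # 10x larger
```

Since B scales as sqrt(T), raising the temperature by a factor of 100 raises the required field by a factor of 10, taking it from ~20 T to ~200 T: well past the "up to 100 T" assumed in the paper.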
Would somebody else like to estimate how much power you need to dump into the plasma?
Typically, there is no single clock that is "the reference". The international time standard, UTC, is an average of about 400 or so atomic clocks from all around the world. There are a number of caesium fountains currently contributing to UTC. Even in a single laboratory, where there are a number of clocks, these clocks will usually be averaged and one clock then adjusted to keep to this average. There are various practical reasons for this. If you have a number of similar clocks, the average will have a more stable frequency than a single clock. Some clocks are better at very short averaging times, others better for the long term, so you can make an average that takes this into account. State-of-the-art clocks seldom work continuously, so you need something else as a flywheel between operating times.
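The averaging gain is easy to demonstrate: the white-noise scatter of an N-clock ensemble mean goes down roughly as 1/sqrt(N). A toy simulation (idealised clocks with equal, purely white noise; real ensembles use weighted averages and have to handle drift and deterministic frequency offsets):

```python
import random
import statistics

random.seed(42)

N_CLOCKS = 10       # clocks in the ensemble
N_EPOCHS = 2000     # measurement epochs
SIGMA = 1.0         # per-clock white noise, arbitrary units

# Each epoch, every clock reports the true time plus independent
# noise; the ensemble time is the (unweighted) mean of the readings.
single_errors = []
ensemble_errors = []
for _ in range(N_EPOCHS):
    readings = [random.gauss(0.0, SIGMA) for _ in range(N_CLOCKS)]
    single_errors.append(readings[0])
    ensemble_errors.append(statistics.fmean(readings))

print("single clock scatter:", statistics.stdev(single_errors))
print("ensemble scatter:    ", statistics.stdev(ensemble_errors))
# Ensemble scatter is roughly SIGMA / sqrt(N_CLOCKS), i.e. ~0.32
```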
The main use for very good clocks is as frequency references, or to measure the time interval between two events (in which case any offset cancels out), rather than as time-of-day references. Sub-nanosecond synchronisation of distributed systems is only used in some very specialised scientific applications like Very Long Baseline Interferometry telescopes.