I just ask my kids. They and their peers know exactly who the good teachers and bad teachers are. The question is, how do you use that information? In the politically perverse education system it creates unhelpful drama to ask that bad teachers be replaced or that your child be moved to a class with a better teacher.
Although the summary and the article itself seem to take pains not to mention it, a visit to the RADclock homepage (http://www.cubinlab.ee.unimelb.edu.au/radclock/) will tell you that what's actually being offered here is an improved NTP client. No changes to the NTP servers, server software, or NTP protocol are required or proposed. The client improvements are an improved filter topology (feed-forward with quality assessment) and the introduction of separate absolute and difference clocks, optimally supporting the different ways applications use time.
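To illustrate the absolute/difference distinction (this is a conceptual sketch, not RADclock's actual code; the class and numbers are made up): an absolute clock applies a server-synchronized offset that may be stepped on resync, while a difference clock uses only the stable rate estimate, so measured intervals don't jump when the offset is corrected.

```python
# Conceptual sketch (not RADclock's implementation) of a feed-forward
# clock with separate absolute and difference reads. Both read the same
# raw counter; only the absolute read uses the (steppable) offset.

class FeedForwardClock:
    def __init__(self, period, offset):
        self.period = period    # estimated seconds per raw counter tick
        self.offset = offset    # estimated UTC when counter == 0

    def absolute(self, counter):
        """Wall-clock time: subject to offset corrections (steps)."""
        return counter * self.period + self.offset

    def difference(self, c_start, c_end):
        """Elapsed time: uses only the rate estimate, never the offset."""
        return (c_end - c_start) * self.period


clk = FeedForwardClock(period=1e-9, offset=1_000_000.0)
t0, t1 = 5_000, 2_000_005_000       # two raw counter readings

before = clk.difference(t0, t1)     # about 2.0 seconds elapsed
clk.offset += 0.25                  # a server resync steps the offset...
after = clk.difference(t0, t1)      # ...but the measured interval is unchanged
```

An application timing an RPC would use `difference`; one timestamping a log entry would use `absolute`.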
It is wrong to assume that price needs to be tied to cost. What determines the price is what your customer is willing to pay for it. The question is not why the medical device company charges $18,000 for the device. The question is why hospitals are willing to pay that much.
You are right - you need to remove exactly as much heat as the equipment is generating. The energy savings with this scheme come from the fact that a chiller's efficiency is lowest when it is asked to produce its coldest output. Traditional data centers keep the hot parts cold by keeping everything very cold. Efficiency improves if you can run your chiller at a higher output temperature and compensate for the reduced effectiveness of the warmer air by directing it where it is most needed.
You don't have to accept conventional wisdom. Online coverage maps are available - http://www.wireless.att.com/coverageviewer/, http://coverage.t-mobile.com/. Both AT&T and T-Mobile rely on "partners" for rural coverage. From the looks of the maps, they're largely using the same partners. There's no extra charge for these areas on either network.
Another, more common example of this issue is the artifacts potentially introduced when an image is resized (resampled): different resampling algorithms have differing quality.
A potentially intractable aspect of this problem is that there is no reference image supplied - your proposed algorithms have nothing concrete to be scored against so you have no way to objectively pick the best one.
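A toy sketch of the point (plain Python, tiny grayscale "images" as lists of rows; the filters and the 4x4 gradient are invented for illustration): with the original image in hand, a full-reference metric like mean squared error can rank two resampling algorithms, but without a reference there is nothing to compute that error against.

```python
# Sketch: full-reference scoring of two resampling algorithms.
# Without the reference image `ref`, neither score could exist.

def downscale_nearest(img):
    """Halve each dimension by keeping every other pixel."""
    return [row[::2] for row in img[::2]]

def downscale_box(img):
    """Halve each dimension by averaging each 2x2 block."""
    return [[(img[y][x] + img[y][x+1] + img[y+1][x] + img[y+1][x+1]) / 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

def upscale_nearest(img):
    """Double each dimension by pixel replication."""
    out = []
    for row in img:
        wide = [p for p in row for _ in (0, 1)]
        out += [wide, list(wide)]
    return out

def mse(a, b):
    """Full-reference score: mean squared error against the original."""
    flat = [(pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)]
    return sum(flat) / len(flat)

# A 4x4 gradient serves as the reference image.
ref = [[x + 4 * y for x in range(4)] for y in range(4)]

err_nearest = mse(ref, upscale_nearest(downscale_nearest(ref)))
err_box = mse(ref, upscale_nearest(downscale_box(ref)))
# With the reference, the box filter scores better on this image;
# with no reference, all you have is two plausible-looking results.
```

No-reference ("blind") quality metrics exist, but they estimate perceptual quality heuristically rather than measuring error against ground truth, which is exactly the intractability the comment describes.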
A waste of time from your perspective, but historians and archaeologists have invested generously and patiently in understanding dead languages and stone carvings from thousands of years ago. If they found a shiny disc from 1000 years ago, I think they'd be all over it.
Plain Cat5 has been deprecated and is difficult to find these days. Cat5e is what you buy and install.
Both 100 Mb and Gb Ethernet were designed for Cat5. If you have true Cat5 it should work and continue to work.
100 Mb Ethernet does require better cable than 10 Mb Ethernet. Gigabit Ethernet uses the same cable as 100 Mb.
It is a common misconception that Gb Ethernet requires higher-bandwidth cabling. Gb Ethernet gets its speed by using more wires (all 4 pairs are used), using the wires in both directions simultaneously, and using denser encoding. The carrier signal for both Gb and 100 Mb Ethernet is 125 MHz.
Advanced digital signal processing in Gb interfaces actually makes them more tolerant of sub-optimal cabling than less sophisticated 100 Mb interfaces.
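The arithmetic behind this works out neatly (filling in two standard PHY details the comment doesn't spell out: 100BASE-TX uses 4B5B coding on one pair per direction, and each 1000BASE-T PAM-5 symbol carries 2 data bits):

```python
# Back-of-the-envelope check: both PHYs signal at 125 Mbaud, yet one
# delivers 10x the throughput of the other.

SYMBOL_RATE = 125e6  # symbols per second on the wire, same for both

# 100BASE-TX: one pair per direction, 4B5B coding (4 data bits per
# 5 line bits), so each symbol carries 0.8 data bits.
fast_ethernet = SYMBOL_RATE * (4 / 5) * 1

# 1000BASE-T: all 4 pairs at once, each PAM-5 symbol carrying 2 data
# bits, so 8 data bits move per symbol period.
gigabit = SYMBOL_RATE * 2 * 4

print(f"100BASE-TX: {fast_ethernet / 1e6:.0f} Mb/s")
print(f"1000BASE-T: {gigabit / 1e6:.0f} Mb/s")
```

Same symbol rate, so the same cable bandwidth class suffices; the extra capacity comes entirely from more pairs and more bits per symbol.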
Neutrinos are into physicists.