I meant 150 not 50
This is actually how we do it now, except the chalk line is measured by looking at the angular positions of various celestial bodies. This measurement determines the length of a sidereal year. We have been able to make it fairly accurately for the last 50 years or so, and extremely accurately for the last ~60, enough to know that our planet's rotation has slightly slowed during that time. But what we don't know is exactly how long a sidereal year was, say, 100 million years ago. Perhaps the earth used to spin around 366 times during its trip around the sun instead of the current 365.25? Its mass and orbital period also change enough on a geologic timescale to affect this. These are problems we know about, but they are difficult to solve because we just don't have the data.
This is not necessarily true. It largely depends on how the rotation of the Earth might change over the next hundreds of thousands of years. We have only been running with leap seconds for a bit over 30 years, and we have only had the ability to measure the orbital period accurately enough to worry about seconds for about 100-150 years. Just because we have always "leapt forward" in the current system doesn't mean we can't also leap backward. There is simply not enough collected data to know how far "off" our definition of the second is with respect to the history of the earth, nor how much "jitter" we are likely to experience with an unadjusted clock. It's entirely possible that the error would never accumulate enough to be a big societal issue. If we are able to determine the average length of a year over a large time span more accurately, it's quite probable that the easiest fix might actually be simply to redefine the second.
Oh just you wait, it will eventually be subjected to the slow burial. I guess I should not have said 'censored' since
I have been around here a long time.
I can honestly say that I am disappointed to see
In the case of SourceForge, I think it's much worse to sell out and betray the trust of an entire community. But let's not talk about it!
I haven't even read this article and I know the culprit exactly: JBIG2.
The compression algorithm operates on binary (2-color) images and has two modes: a lossless mode, which is sort of like the love child of RLE and JPEG, and a higher-compression mode, which works by running the lossless blocks through a comparison routine and replacing any blocks that are sufficiently similar with references to the first copy. It's actually a good algorithm, but you have to understand how it works to implement it properly. When you have a perfect storm of certain fonts (especially small ones where a glyph can fit entirely inside a block), some noise in the bitonal images, and a compression threshold set too high, you can get some real zingers: 9, 6, 0, 3, and 8 can all easily get muddled up, not to mention what happens to letters like e, o, c, etc. The key to the whole thing is having good algorithms that can produce quality bitonal images from poor originals, and scanning at sufficient resolution (or lowering the compression threshold enough) that a block cannot hold an entire glyph.
Why the copier is using the lossy mode of JBIG2 internally is a mystery, especially in the "copy" pipeline. I can think of no good reason that it should use anything other than the lossless mode or uncompressed data.
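For the curious, here is a toy sketch in Python of the symbol-substitution idea described above. This is not real JBIG2: the block representation, the similarity metric, and the threshold are all invented for illustration. It only shows how a too-loose threshold lets one noisy glyph silently stand in for another.

```python
# Toy sketch of the symbol-substitution idea behind lossy JBIG2.
# Not real JBIG2: block size, similarity metric, and threshold are all
# made up to illustrate how a loose threshold lets one glyph replace another.

def hamming_similarity(a, b):
    """Fraction of pixels that match between two equally-sized bitonal blocks."""
    same = sum(1 for pa, pb in zip(a, b) if pa == pb)
    return same / len(a)

def encode(blocks, threshold):
    """Replace each block with a reference to the first 'similar enough' block."""
    dictionary = []   # unique symbol blocks kept verbatim
    references = []   # per-block index into the dictionary
    for block in blocks:
        for idx, symbol in enumerate(dictionary):
            if hamming_similarity(block, symbol) >= threshold:
                references.append(idx)   # lossy step: reuse an earlier glyph
                break
        else:
            dictionary.append(block)
            references.append(len(dictionary) - 1)
    return dictionary, references

# With a noisy scan, a '6' and an '8' may differ by only a few pixels, so a
# threshold like 0.95 can map both to the same dictionary symbol -- which is
# exactly the "real zingers" failure mode described above.
```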
I'm curious why you like that interview so much; reading that is when I realized that that dude is nothing more than a talking head. Why do you think he needed the Internet to do commentary for Iron Chef America? IIRC he not only got a question about cooking with lava completely wrong, but he insulted the person asking as a way to avoid answering it. When Google failed him, he just bailed.
AT&T is not issuing you an IPv6 on your residential DSL. I know this because they don't do it. Your computer is generating an IPv6 link local address. Depending on your router and a couple of other factors, you may (probably can) access IPv6 sites using a public 6to4 gateway.
The only advantage to you is that you at least have the ability to access Internet resources that are only available via IPv6, but I would imagine that there are currently not any that are particularly relevant to you.
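If you want to see the difference yourself, here is a rough illustration using Python's ipaddress module of how a 6to4 prefix is derived from a public IPv4 address (per RFC 3056), versus what a link-local address looks like. The IPv4 address below is documentation space, not anyone's real address, and this sketch says nothing about actually setting up the tunnel.

```python
# Rough sketch: deriving a 6to4 prefix (2002::/16 + embedded IPv4 address)
# and recognizing a link-local address. Addresses are documentation examples.
import ipaddress

def six_to_four_prefix(ipv4_str):
    v4 = ipaddress.IPv4Address(ipv4_str)
    # 2002::/16 with the 32-bit IPv4 address packed into the next 32 bits
    prefix_int = (0x2002 << 112) | (int(v4) << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

print(six_to_four_prefix("192.0.2.1"))                  # 2002:c000:201::/48
print(ipaddress.IPv6Address("fe80::1").is_link_local)   # True -- never routed off the LAN
```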
I think this is great news. Maybe router manufacturers will now be smart enough to simply include DNS Update (RFC 2136) support instead of the proprietary dyndns garbage. Enter your domain name and a key and you're all set.
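As a sketch of how simple that could be for the router firmware (or for you, from any machine behind it), here is an RFC 2136 update using the dnspython library. The zone, hostname, TSIG key name, secret, and server address are all placeholders, and depending on your dnspython version you may also need to specify the TSIG algorithm explicitly.

```python
# Sketch of an RFC 2136 dynamic update using dnspython. All names, the key,
# and the server IP are placeholders -- the point is that a standards-based
# update is just a signed packet, no vendor-specific dyndns client required.
import dns.update
import dns.query
import dns.tsigkeyring

keyring = dns.tsigkeyring.from_text({
    "ddns-key.example.com.": "b64EncodedSharedSecret==",   # placeholder secret
})

update = dns.update.Update("example.com",
                           keyring=keyring,
                           keyname="ddns-key.example.com.")
# Point home.example.com at the router's current public address (placeholder IP).
update.replace("home", 300, "A", "203.0.113.42")

response = dns.query.tcp(update, "192.0.2.53", timeout=10)
print(response.rcode())   # 0 (NOERROR) on success
```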
$1MM of iPads represents about 2500-3000 users, depending on the discount they received. First, I'm presuming that these users already had mailboxes and it's just the additional load of ActiveSync that is causing the trouble. If that's the case, with the types of discounts that government and education receive from Microsoft and hardware vendors, this is like a $15,000 problem at best. In the scope of a million-dollar project, a 1.5% budget problem represents poor planning, but I've seen much, much worse.
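To put numbers on that (the per-unit price and the remediation cost here are my assumptions, not figures from the article):

```python
# Back-of-the-envelope check of the figures above; unit price and fix cost
# are assumptions for illustration.
ipad_budget = 1_000_000
price_per_ipad = 350                 # assumed discounted education/government price
users = ipad_budget // price_per_ipad
print(users)                         # ~2857 users

fix_cost = 15_000                    # assumed cost to absorb the extra ActiveSync load
print(f"{fix_cost / ipad_budget:.1%}")   # 1.5% of the original project budget
```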
Don't use high-end GTX cards; twice as many lower-end, passively cooled GPU cards will provide more than equivalent performance at far lower cost and with a lower failure rate. If your application really benefits more from additional threads than from single-thread execution speed, this is the way to go. Most GPGPU clusters that aren't built using Tegra use this approach.
You would be surprised; JPEG2000 is used extensively in high-compression PDFs. As a standalone image format it's pretty lousy, but for scanned documents it's actually really great. We have literally millions of pages stored this way where I work.
Wait, you really don't believe this? I have a Kill A Watt and can assure you he speaks the truth. I don't have a ridiculously high-end computer, but I can get its power consumption to vary by more than 300W between sitting idle with the LCDs in DPMS power save and actively pushing the CPU and GPU with something. Putting it into S3 suspend knocks off another 50W or so.
Now, 10-year-old hardware pushing a 40W delta between unloaded and loaded, not including a CRT going to sleep or something? Doubtful. Maybe 15-20W tops. But then again, some of that school district's hardware was much newer, and $0.06/kWh is a pretty decent utility rate too. I'd say $1 million is a pretty good round number here, even though it probably represents a modest 10-20% increase over what the bill would have been had 5000 machines simply been left on and idle. But consider: if this guy, who had enough control to install software on 5000 machines, had simply set them to go to S3 after a couple of hours of not being used, he could have saved the school district millions on power just as easily as he wasted it.
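Back-of-the-envelope, using the rate quoted above and assumed per-machine figures (the idle draw and idle hours are my guesses, not measurements):

```python
# Rough estimate of what an S3 policy could save. Machine count and rate come
# from the discussion above; idle draw and idle hours are assumptions.
machines = 5000
idle_watts = 80            # assumed average idle draw per machine + monitor
idle_hours_per_day = 16    # assumed hours/day a machine sits awake but unused
rate_per_kwh = 0.06        # utility rate quoted above

kwh_per_year = machines * idle_watts * idle_hours_per_day * 365 / 1000
print(f"{kwh_per_year:,.0f} kWh/yr -> ${kwh_per_year * rate_per_kwh:,.0f}/yr")
# ~2,336,000 kWh/yr -> ~$140,000/yr, i.e. well over $1M across a decade of the fleet
```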
The real answer here is that it depends a great deal on the GPS receiver itself, and then on how whatever software is reporting and logging this information post-processes it.
GPS itself is capable of reporting an instantaneous velocity vector calculated by measuring the Doppler shift from each satellite (it comes in as a GPVTG sentence in the NMEA data). So if the receiver is tracking a lot of satellites with a good distribution and there aren't a lot of multipath problems, the accuracy of this vector is ridiculously good. That said, a receiver may not support GPVTG at all.
Now, you can also get velocity data from a GPRMC (i.e., normal position data) sentence. According to the specification, the bearing here is supposed to be calculated from the position track angle (presumably so that you don't have to be moving to have a GPS bearing). The spec seems silent on the origin of the speed reported in this sentence -- it seems like it could be calculated as track speed (average speed over the interval) but could just as easily be reported as instantaneous speed.
Of course I haven't tested any, but I imagine that in practice GPS receivers would normally report track/position-averaged data in GPRMC and instantaneous data in GPVTG. Any software that is supposed to present this data to a user has to determine how to aggregate and filter it for its intended purpose. If you really intend to beat a speeding ticket with GPS, I would suggest that you need data points of either type (instantaneous or averaged) at 1 Hz if not 5 Hz granularity, along with knowledge of what the data represents and how the raw data is filtered and processed. The 30-second interval in this case is just dumb, and it seems nobody ever bothered to determine anything about the nature of the data.
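For reference, here is a minimal Python sketch of pulling the speed field out of the two sentence types discussed above. Real logging software should also validate the NMEA checksum and, more importantly, know from the receiver's documentation whether the RMC speed is instantaneous or averaged over the fix interval; this only extracts the fields.

```python
# Minimal NMEA speed extraction for the two sentence types discussed above.
# No checksum validation; field positions follow the standard RMC/VTG layouts.

KNOTS_TO_KMH = 1.852

def parse_speed(sentence):
    """Return (source, speed_kmh) from an RMC or VTG sentence, or None."""
    body = sentence.split("*")[0]      # drop the trailing checksum
    fields = body.split(",")
    tag = fields[0][-3:]               # works for $GPRMC, $GNRMC, $GPVTG, ...
    if tag == "RMC" and fields[7]:
        return ("RMC", float(fields[7]) * KNOTS_TO_KMH)   # speed over ground, knots
    if tag == "VTG" and fields[5]:
        return ("VTG", float(fields[5]) * KNOTS_TO_KMH)   # speed over ground, knots
    return None

print(parse_speed("$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"))
print(parse_speed("$GPVTG,054.7,T,034.4,M,005.5,N,010.2,K*48"))
```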