Flash on Chrome doesn't use NPAPI; it uses PPAPI. It is distributed and updated with the browser, and so far it isn't going anywhere.
That said, there are things Google might do to discourage Flash content, such as no longer running or displaying it by default.
ASCII is the American Standard Code for Information Interchange, a 7-bit encoding system. The most common strictly 8-bit encoding is ISO-8859-1, slightly extended by Microsoft as Windows-1252 (often loosely called "ANSI").
Of course these days, everyone in their right mind should generally be using UTF-8 for transfer and storage. UCS-2 and UTF-16, though widely used internally, are basically a mistake for that kind of thing.
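A quick Python illustration of the point: UTF-8 is ASCII-compatible and covers everything, while ISO-8859-1 tops out at one byte per character.

```python
# UTF-8 is ASCII-compatible: every 7-bit ASCII character encodes as
# the same single byte, while everything else gets a multi-byte
# sequence. That is why it is the sane default for transfer/storage.
print("héllo".encode("utf-8"))    # b'h\xc3\xa9llo'  (two bytes for the é)
print("héllo".encode("latin-1"))  # b'h\xe9llo'      (ISO-8859-1, one byte)
assert "A".encode("utf-8") == "A".encode("ascii")
```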
How can you tell that any of those pictures are from China? To me they all look like they are from Japan, a country that makes extremely heavy use of Chinese characters (much more than Korea, for example).
> Pretend, nothing. Those minutes do have a 61st second.
Civil minutes may or may not have any correspondence with dictionary minutes. In the measurement of elapsed time intervals with more than one second of accuracy, dictionary minutes rule.
> Huh? The POSIX time specification make it trivial to calculate dates using simple arithmetic
True, but the same property also makes the use of POSIX time for precision timing basically suicidal. POSIX time is a convenient and adequate encoding of civil time, as long as you do not need more than one second of accuracy.
If you want to reliably measure or timestamp anything with more than one second of accuracy you should be using a monotonic clock, or one derived from a stable reference such as TAI, instead. The use of POSIX time for precision timekeeping - even by such rudimentary applications as 'make' - was defective from the beginning. NTP is equally defective as a consequence.
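Python's standard library makes the split explicit, so here is a sketch of the distinction using its stdlib names:

```python
import time

# time.time() is POSIX time: a fine encoding of civil time for
# timestamps, but the difference between two readings can be off by
# a second or more across a leap second or a clock adjustment.
stamp = time.time()

# For measuring elapsed intervals, use a monotonic clock instead:
# it is unaffected by leap seconds, NTP steps, and clock resets.
start = time.monotonic()
time.sleep(0.1)                 # the work being timed
elapsed = time.monotonic() - start
assert elapsed >= 0.1           # monotonic time never runs backwards
```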
Since the IETF saw that there was gonna be an industry-wide overhaul in any case, it did this complete overhaul, tossing in everything learnt in the years of IPv4, so that another IP transition won't be likely in the next 50 years, if ever.
By this point, even the luminaries at the IETF have realized that the design for IPv6 as a replacement for IPv4 is fatally flawed. How flawed? Flawed enough that there is a high probability that a worldwide transition to IPv6 will never actually happen.
Now sure, there are technical advantages to a clean slate design, but a clean slate design is also unfortunately almost useless as a replacement for IPv4 in the real world. There is no incremental advantage, and there are extraordinarily high costs, in adding a separate numbering plan to an existing network, so no cost-conscious organization ever does it unless forced to, and probably never will.
At this point I would lay odds on an IPv7 eventually being developed: a revision of IPv6 that incorporates the IPv4 address space in a routable fashion, assigning each IPv4 address a network prefix behind which an entire subnet of devices may eventually be directly addressed, in addition to the default.
Why? Because doing anything else would be one of the biggest wastes of resources the world has ever seen.
Any downsides? An IPv7 router would have bigger routing tables than an IPv6-only router, but the routing tables could be used to route IPv4 packets, and since IPv4 is not likely to go away anytime soon, the same overhead is there one way or another.
A wide scale deployment of IPv7 would require hardware upgrades in some cases, but for most people it could be deployed silently, without them ever needing to know or care. A simple software update would be all that was necessary, and a few years down the road nearly all IPv4 capable devices would handle the expanded address space in a usable fashion without any renumbering or other configuration changes. That would save billions of dollars a year in unnecessary administration costs worldwide.
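There is no IPv7, of course, but IPv6 already defines a non-routable embedding of the IPv4 space, the ::ffff:0:0/96 mapped range; a scheme like the one sketched above would essentially turn that kind of embedding into routable prefixes, one per IPv4 address. The existing mapping, in Python:

```python
import ipaddress

# IPv6's IPv4-mapped range embeds every IPv4 address at a fixed,
# non-routable location. The hypothetical IPv7 above would instead
# give each IPv4 address its own routable prefix.
v4 = ipaddress.IPv4Address("203.0.113.7")
mapped = ipaddress.IPv6Address(f"::ffff:{v4}")
print(mapped)              # ::ffff:cb00:7107
print(mapped.ipv4_mapped)  # 203.0.113.7
```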
There are only eight pages of new rules. The rest is explanation, history, legal justification, and commentary. More here: http://e-pluribusunum.com/2015...
> since traffic from ISP customers to those edge caches does not get counted against monthly caps
If that is true, the FCC is likely to frown upon that sort of thing.
The FCC doesn't have a problem with prioritization per se. The FCC has a problem with paid prioritization, which on a congested network can starve other traffic, and cause other problems, depending on how it is done.
Under any reasonable interpretation of the law, Internet access providers always have been common carriers. The FCC, in a classic example of regulatory capture, simply decided to interpret the law in a relatively perverse manner, by pretending that broadband Internet access providers were "information services" rather than "telecommunications services", which is flat out ridiculous, and the Supreme Court decided to defer to them.
This is the legal definition of telecommunications for example:
The term "telecommunications" means the transmission, between or among points specified by the user, of information of the user's choosing, without change in the form or content of the information as sent and received. (47 USC 153)
Sound familiar? Sounds just like Internet access. How about this one:
The term "telecommunications service" means the offering of telecommunications for a fee directly to the public, or to such classes of users as to be effectively available directly to the public, regardless of the facilities used.
Justice Scalia pointed this out several years ago, but he was in the minority on this one. The justices in the majority said, well it may not make any sense, but we will let the FCC decide. Now rationality has returned to the FCC and they are revisiting the question.
Federal law states that:
"A utility shall provide a cable television system or any telecommunications carrier with nondiscriminatory access to any pole, duct, conduit, or right-of-way owned or controlled by it." (47 USC 224)
Comcast has this right by virtue of being a "cable television system". The major phone companies have it because they are "telecommunications carriers". But facilities based ISPs like Google Fiber are currently (and incorrectly) classified as neither so they are out of luck until sanity finishes kicking in at the FCC.
XFS is not a copy-on-write filesystem, by the way. ZFS and BTRFS should definitely work better, but they might need some internal tweaks to make the best use of it.
Somebody needs a math lesson. 3000 miles * 5280 feet per mile / 78000 = 203 feet. That is a tad more than 40 cm.
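The arithmetic, spelled out (the 78,000-way split comes from the post being replied to):

```python
# 3000 miles of fiber divided into 78,000 pieces.
miles = 3000
feet_per_mile = 5280
pieces = 78_000
print(miles * feet_per_mile / pieces)  # ~203.08 feet, nowhere near 40 cm
```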
The Apple IIgs was dramatically different from all other Apple II models. It was backward compatible, but came with a 16-bit processor (the 65816), much more RAM (256K or more), greatly improved sound and video, and a GUI shell much like that of the Mac, plus color, which nearly all Macs lacked at the time. It was a little underpowered compared to the 68000-based Mac, Amiga, and Atari ST, but a more than respectable upgrade to the Apple II series nonetheless.
As educational / entertainment devices even the older Apple IIs ran circles around the PC until EGA was widely deployed in the late 1980s. PC games were inevitably designed for CGA graphics, with its fixed palettes of four unimaginative colors at a time. The Apple II had beaten that almost ten years earlier, to say nothing of the much less expensive Commodore 64. The PC was intended primarily for business purposes, and it showed.