Comment: Re:How Will The Naval Observatory Clock Handle Thi (Score 1) 233

> Pretend, nothing. Those minutes do have a 61st second.

Civil minutes may or may not correspond to dictionary minutes. When measuring elapsed time intervals to better than one-second accuracy, dictionary minutes rule.
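A minimal sketch of the distinction: POSIX time is defined by simple calendar arithmetic that assumes every minute has exactly 60 seconds, so a civil minute's 61st second collapses onto the following midnight. `calendar.timegm` uses exactly that arithmetic and does not validate the seconds field, which makes the collapse easy to demonstrate (the leap second shown is the real one inserted at the end of 2016):

```python
import calendar

# POSIX time arithmetic assumes 60-second minutes, so the leap second
# 2016-12-31T23:59:60Z maps to the same timestamp as 2017-01-01T00:00:00Z.
leap = calendar.timegm((2016, 12, 31, 23, 59, 60, 0, 0, 0))
next_midnight = calendar.timegm((2017, 1, 1, 0, 0, 0, 0, 0, 0))

print(leap)           # 1483228800
print(next_midnight)  # 1483228800 -- the same instant, as far as POSIX time is concerned
```

Two distinct civil seconds, one POSIX timestamp: an interval measured across that boundary with POSIX time comes out one second short.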

Comment: Re:choose what standard to violate (Score 1) 233

> Huh? The POSIX time specification make it trivial to calculate dates using simple arithmetic

True, but the same property also makes the use of POSIX time for precision timing basically suicidal. POSIX time is a convenient and adequate encoding of civil time, as long as you do not need better than one-second accuracy.

If you want to reliably measure or timestamp anything to better than one-second accuracy, you should instead be using a clock derived from, or offset from, a reliable reference. The use of POSIX time for precision timekeeping - even by such rudimentary applications as 'make' - was defective from the beginning. NTP is equally defective as a consequence.
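The practical upshot can be sketched in a few lines. Python exposes both kinds of clock: `time.time()` returns POSIX civil time, which can step or repeat around leap seconds and clock adjustments, while `time.monotonic()` is guaranteed never to go backwards and is the right tool for intervals:

```python
import time

# POSIX time (time.time) tracks civil time and may step backwards or
# repeat a second; time.monotonic is guaranteed to only move forward.
start_wall = time.time()
start_mono = time.monotonic()

time.sleep(0.1)  # stand-in for the work being timed

elapsed_wall = time.time() - start_wall       # can be wrong if the clock steps
elapsed_mono = time.monotonic() - start_mono  # reliable interval measurement

print(f"wall: {elapsed_wall:.3f}s  monotonic: {elapsed_mono:.3f}s")
```

Most of the time the two agree; the point is that only the monotonic result is guaranteed to.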

Comment: IPv6 is fatally flawed (Score 1) 595

> Since the IETF saw that there was gonna be an industry-wide overhaul in any case, it did this complete overhaul, tossing in everything learnt in the years of IPv4, so that another IP transition won't be likely in the next 50 years, if ever.

By this point, even the luminaries at the IETF have realized that the design for IPv6 as a replacement for IPv4 is fatally flawed. How flawed? Flawed enough that there is a high probability that a worldwide transition to IPv6 will never actually happen.

Now sure, there are technical advantages to a clean-slate design, but a clean-slate design is unfortunately also almost useless as a replacement for IPv4 in the real world. Adding a separate numbering plan to an existing network offers no incremental advantage and carries extraordinarily high costs, so no cost-conscious organization ever does it unless forced to, and probably never will.

At this point I would lay odds on an IPv7 eventually being developed: a revision of IPv6 that incorporates the IPv4 address space in a routeable fashion, assigning each IPv4 address a network prefix behind which an entire subnet of devices could eventually be addressed directly, in addition to the default.

Why? Because doing anything else would be one of the biggest wastes of resources the world has ever seen.

Any downsides? An IPv7 router would need bigger routing tables than an IPv6-only router, but those tables could also be used to route IPv4 packets, and since IPv4 is not going away anytime soon, the same overhead exists one way or another.

A wide scale deployment of IPv7 would require hardware upgrades in some cases, but for most people it could be deployed silently, without them ever needing to know or care. A simple software update would be all that was necessary, and a few years down the road nearly all IPv4 capable devices would handle the expanded address space in a usable fashion without any renumbering or other configuration changes. That would save billions of dollars a year in unnecessary administration costs worldwide.
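The kind of embedding described above can be sketched with existing machinery. The "IPv7" scheme itself is hypothetical, but IPv6 already has prefixes that carry a full IPv4 address in their low 32 bits; the arithmetic below uses the NAT64 well-known prefix 64:ff9b::/96 (RFC 6052) purely as an illustration of how every IPv4 address can be given a routeable position inside a larger address space:

```python
import ipaddress

# Illustrative only: the NAT64 well-known prefix (RFC 6052) embeds an
# entire IPv4 address in the low 32 bits of an IPv6 address.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def embed_ipv4(v4: str) -> ipaddress.IPv6Address:
    """Map an IPv4 address into the well-known /96 prefix."""
    v4_int = int(ipaddress.IPv4Address(v4))
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | v4_int)

print(embed_ipv4("192.0.2.1"))  # 64:ff9b::c000:201
```

An "IPv7" as imagined above would generalize this: instead of a single host slot per IPv4 address, each address would anchor a whole prefix of its own.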

Comment: Re:They brought it on themselves (Score 1) 379

Under any reasonable interpretation of the law, Internet access providers always have been common carriers. The FCC, in a classic example of regulatory capture, simply chose to interpret the law in a rather perverse manner: it pretended that broadband Internet access providers were "information services" rather than "telecommunications services", which is flatly ridiculous, and the Supreme Court decided to defer to it.

For example, this is the legal definition of "telecommunications":

The term "telecommunications" means the transmission, between or among points specified by the user, of information of the user's choosing, without change in the form or content of the information as sent and received. (47 USC 153)

Sound familiar? Sounds just like Internet access. How about this one:

The term "telecommunications service" means the offering of telecommunications for a fee directly to the public, or to such classes of users as to be effectively available directly to the public, regardless of the facilities used.

Justice Scalia pointed this out several years ago, but he was in the minority on this one. The justices in the majority said, well it may not make any sense, but we will let the FCC decide. Now rationality has returned to the FCC and they are revisiting the question.

Comment: Re:What rules prevent them from doing this already (Score 1) 221

Federal law states that:

"A utility shall provide a cable television system or any telecommunications carrier with nondiscriminatory access to any pole, duct, conduit, or right-of-way owned or controlled by it." (47 USC 224)

Comcast has this right by virtue of being a "cable television system". The major phone companies have it because they are "telecommunications carriers". But facilities-based ISPs like Google Fiber are currently (and incorrectly) classified as neither, so they are out of luck until sanity finishes kicking in at the FCC.

Comment: Re:the more things change... (Score 1) 130

The Apple IIgs was dramatically different from all other Apple II models. It was backward compatible, but came with a 16-bit processor (the 65816), much more RAM (256K or more), greatly improved sound and video, and a GUI shell much like the Mac's, plus color, which nearly all Macs lacked at the time. It was a little underpowered compared to the 68000-based Mac, Amiga, and Atari ST, but a more than respectable upgrade to the Apple II series nonetheless.

As educational / entertainment devices, even the older Apple IIs ran circles around the PC until EGA was widely deployed in the late 1980s. PC games were inevitably designed for CGA graphics, with a fixed palette of four unimaginative colors at a time. The Apple II was better than that almost ten years earlier, to say nothing of the much less expensive Commodore 64. The PC was intended primarily for business purposes, and it showed.

Comment: Re:35 Days to write an OS (Score 1) 130

If you asked the creators, they would probably be embarrassed to call it an operating system at all. Apple DOS didn't handle keyboard, video, sound, or printer support. All of that was handled by the monitor (a BIOS-like ROM that was not part of DOS), by peripheral-card ROMs in some cases, or by direct access to the hardware.

MS-DOS was similar. It handled file I/O, and that was it: a disk operating system, not a computer operating system. The BIOS was separate, in ROM, and controlled by the PC manufacturer rather than Microsoft.

Comment: Re:what's happening with SCTP? (Score 1) 150

A stateful firewall doesn't need to block transport-layer protocols it doesn't understand in order to provide a meaningful level of security. All it needs to do is drop inbound packets from exterior addresses that the corresponding interior address has not recently communicated with, subject to a reasonable timeout. UDP is handled in much the same way today.

If the developers of stateful IPv6 firewalls do not ship devices with such a reasonable default configuration, they will block the deployment of new transport protocols indefinitely - at least all those that do not resort to the awkward expedient of running on top of UDP.
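The policy described above is simple enough to sketch in a few lines. This is an illustrative model, not any real firewall's implementation; the class, method names, and the 120-second timeout are all assumptions chosen to mirror common UDP state timeouts:

```python
import time

FLOW_TIMEOUT = 120.0  # seconds; illustrative, comparable to typical UDP state timeouts

class FlowTable:
    """Protocol-agnostic stateful filter: admit inbound packets of any
    transport protocol if the interior host recently sent traffic to
    the same exterior address."""

    def __init__(self):
        self._last_seen = {}  # (interior_ip, exterior_ip) -> monotonic timestamp

    def note_outbound(self, interior: str, exterior: str) -> None:
        # Record outbound traffic, refreshing the flow's timer.
        self._last_seen[(interior, exterior)] = time.monotonic()

    def allow_inbound(self, exterior: str, interior: str) -> bool:
        # Admit only if a matching outbound flow exists and has not expired.
        ts = self._last_seen.get((interior, exterior))
        return ts is not None and time.monotonic() - ts < FLOW_TIMEOUT

fw = FlowTable()
fw.note_outbound("2001:db8::10", "2001:db8:ffff::1")
print(fw.allow_inbound("2001:db8:ffff::1", "2001:db8::10"))  # True: recent outbound flow
print(fw.allow_inbound("2001:db8:ffff::2", "2001:db8::10"))  # False: unsolicited source
```

Note that nothing in the table cares whether the packets are TCP, UDP, SCTP, or something not yet invented - which is exactly the point.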

Blocking new transport protocols that firewalls could reasonably handle with a standard policy is bad for efficiency, power consumption, latency, user experience, and so on in the long run - TCP is far from ideal as transport protocols go, and in a number of ways it is outright backwards. If you want to impede the long-term development of the Internet, degrading the end-to-end principle unnecessarily is a good place to start.
