There are only eight pages of new rules. The rest is explanation, history, legal justification, and commentary. More here: http://e-pluribusunum.com/2015...
Since traffic from ISP customers to those edge caches does not get counted against monthly caps
If that is true, the FCC is likely to frown upon that sort of thing.
The FCC doesn't have a problem with prioritization per se. The FCC has a problem with paid prioritization, which on a congested network can starve other traffic, and cause other problems, depending on how it is done.
Under any reasonable interpretation of the law, Internet access providers always have been common carriers. The FCC, in a classic example of regulatory capture, simply decided to interpret the law in a relatively perverse manner, by pretending that broadband Internet access providers were "information services" rather than "telecommunications services", which is flat out ridiculous, and the Supreme Court decided to defer to them.
This is the legal definition of telecommunications for example:
The term "telecommunications" means the transmission, between or among points specified by the user, of information of the user's choosing, without change in the form or content of the information as sent and received. (47 USC 153)
Sound familiar? Sounds just like Internet access. How about this one:
The term "telecommunications service" means the offering of telecommunications for a fee directly to the public, or to such classes of users as to be effectively available directly to the public, regardless of the facilities used.
Justice Scalia pointed this out several years ago, but he was in the minority on this one. The justices in the majority said, well it may not make any sense, but we will let the FCC decide. Now rationality has returned to the FCC and they are revisiting the question.
Federal law states that:
"A utility shall provide a cable television system or any telecommunications carrier with nondiscriminatory access to any pole, duct, conduit, or right-of-way owned or controlled by it." (47 USC 224)
Comcast has this right by virtue of being a "cable television system". The major phone companies have it because they are "telecommunications carriers". But facilities based ISPs like Google Fiber are currently (and incorrectly) classified as neither so they are out of luck until sanity finishes kicking in at the FCC.
XFS is not a copy-on-write filesystem, by the way. ZFS and Btrfs should definitely work better, but they might need some internal tweaks to make the best use of it.
Somebody needs a math lesson. 3000 miles * 5280 feet per mile / 78000 = 203 feet. That is a tad more than 40 cm.
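The arithmetic is easy enough to check (the 78,000 divisor is whatever figure the earlier post used):

```python
# Sanity check of the arithmetic above, stdlib only.
miles = 3000
feet_per_mile = 5280
divisor = 78_000  # the figure the earlier post divided by

feet = miles * feet_per_mile / divisor
print(f"{feet:.1f} feet")        # about 203.1 feet
print(f"{feet * 30.48:.0f} cm")  # about 6190 cm, versus the claimed 40 cm
```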
The Apple IIgs was dramatically different from all other Apple II models. It was backward compatible, but came with a 16 bit processor (the 65816), much more RAM (256K or more), greatly improved sound and video, and a GUI shell much like that of the Mac, plus color, which nearly all Macs lacked at the time. It was a little underpowered compared to the 68000 based Mac, Amiga, and Atari ST, but a more than respectable upgrade to the Apple II series nonetheless.
As educational / entertainment devices even the older Apple IIs ran circles around the PC until EGA was widely deployed in the late 1980s. PC games were inevitably designed for CGA graphics, with a fixed palette of four unimaginative colors at a time. The Apple II was better than that almost ten years earlier, to say nothing of the much less expensive Commodore 64. The PC was intended primarily for business purposes, and it showed.
The Apple II wouldn't be more than a footnote in history if those prices didn't go way down, which they did. By the mid 1980s virtually every school in the country had a classroom full of them.
If you asked the creators, they would probably be embarrassed to call it an operating system at all. Apple DOS didn't handle keyboard support, video support, sound support, or printer support. That was all handled using either the monitor (a BIOS in ROM that was not part of DOS), peripheral card ROMs in some cases, or by direct access to the hardware.
MS-DOS was similar. It handled file I/O and that is it. A disk operating system, not a computer operating system. The BIOS was separate, not controlled by Microsoft, but rather by the PC manufacturer, and in ROM.
He didn't write an OS, he wrote a disk operating system, i.e. a system that operated disks. They called it DOS for a reason.
A stateful firewall doesn't need to block transport layer protocols it doesn't understand in order to provide a meaningful level of security. All it needs to do is block inbound packets from exterior IP addresses that the corresponding interior address has not recently communicated with, with a reasonable timeout. UDP is handled much the same way today.
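The policy described above can be sketched in a few lines. This is purely illustrative - the class and method names are made up, not taken from any real firewall - but it shows the address-pair-plus-timeout idea that would let unknown transport protocols through safely:

```python
import time

# Hypothetical sketch: for transport protocols the firewall does not
# understand, allow inbound packets only if the interior host recently
# sent something to that exterior address (a "pinhole" with a timeout).

TIMEOUT = 120  # seconds; the "reasonable time out"

class AddressPairFirewall:
    def __init__(self):
        # (interior_addr, exterior_addr) -> time of last outbound packet
        self._seen = {}

    def outbound(self, interior, exterior):
        # Any outbound packet opens (or refreshes) the pinhole.
        self._seen[(interior, exterior)] = time.monotonic()

    def allow_inbound(self, exterior, interior):
        last = self._seen.get((interior, exterior))
        return last is not None and time.monotonic() - last < TIMEOUT

fw = AddressPairFirewall()
fw.outbound("2001:db8::10", "2001:db8:ffff::1")
print(fw.allow_inbound("2001:db8:ffff::1", "2001:db8::10"))  # True
print(fw.allow_inbound("2001:db8:eeee::9", "2001:db8::10"))  # False
```

Note that nothing here inspects the transport header at all, which is exactly the point: the rule works for SCTP, DCCP, or anything else.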
If the developers of stateful IPv6 firewalls do not ship devices with such a reasonable configuration by default, they will block the deployment of new transport protocols indefinitely - at least all those that do not resort to the awkward expedient of running on top of UDP.
Blocking new transport protocols that firewall developers could reasonably handle with a standard policy is bad for efficiency, power consumption, latency, user experience, and so on in the long run - TCP is far from ideal as transport protocols go. In a number of ways it is outright backwards. If you want to impede the long term development of the Internet, degrading the end-to-end principle unnecessarily is a good place to start.
Work is underway for concurrent multipath transfer for SCTP as well. Also known as CMT-SCTP. There are significant challenges in doing this sort of thing though. SCTP wasn't designed for CMT, and probably needs much more radical changes than the current architects are proposing to do it well.
Changes like subflows with independent sequence numbers and congestion windows, to start with. SCTP is much further ahead in the connection handling and security department, but MPTCP has the odd advantage of being built on independent subflows to begin with, and if it can handle path failure properly, it might well be ahead in the CMT game, if byte stream semantics are all you need.
On the contrary, SCTP is a transport protocol just like TCP, except with a large number of added features. The main problem with SCTP has nothing to do with SCTP at all. It is that NAT devices do not support any transport protocol that they haven't been programmed for in advance. This makes SCTP next to impossible to deploy on a broad scale - NAT, that wart upon router-kind, is ubiquitous.
TCP would have exactly the same problem if it were a new protocol. A NAT device requires relatively deep knowledge of TCP to support it at all. It plays games with both ports and addresses, keeps track of connection state, and so on. Ordinary routers do no such thing. A NAT device is a transport layer proxy by another name.
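To see why a NAT box must be programmed for each transport protocol in advance, here is a hypothetical sketch (not real router code) of a NAT translation table. The NAT has to reach into the transport header to find a port number to rewrite, so a protocol it cannot parse has nothing to map:

```python
import itertools

# Hypothetical sketch of a NAT translation table.  The NAT must look
# inside the transport header for a port field, so a transport protocol
# it has not been programmed for (e.g. SCTP) cannot be translated.

class Nat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self._next_port = itertools.count(40000)
        # (proto, public_port) -> (inside_ip, inside_port)
        self._map = {}

    def translate_outbound(self, proto, inside_ip, inside_port):
        if proto not in ("tcp", "udp"):
            raise ValueError(f"no NAT support for transport protocol {proto!r}")
        public_port = next(self._next_port)
        self._map[(proto, public_port)] = (inside_ip, inside_port)
        return self.public_ip, public_port

    def translate_inbound(self, proto, public_port):
        # Unknown mappings are simply dropped (None).
        return self._map.get((proto, public_port))

nat = Nat("203.0.113.7")
print(nat.translate_outbound("tcp", "192.168.1.5", 51000))  # ('203.0.113.7', 40000)
print(nat.translate_inbound("tcp", 40000))                  # ('192.168.1.5', 51000)
try:
    nat.translate_outbound("sctp", "192.168.1.5", 9)
except ValueError as e:
    print(e)  # no NAT support for transport protocol 'sctp'
```

An ordinary router would just forward the SCTP packet; the NAT has to drop it, which is the deployment problem in a nutshell.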
The late 1950s would be more accurate. Computers in the late 1940s predated compilers. If you were extraordinarily lucky, you might have an assembler.