Don't be facetious. Echoing the traffic back would make neither Verizon, Cogent, nor Netflix's customers happy. Verizon is under no obligation to receive every byte that a paying customer has solicited at the customer's marketed rate, unless that paying customer is a business customer with a service level agreement. The peering agreement between Verizon and Cogent is purely voluntary, and Netflix is free to contract with another ISP that has better interconnects to Verizon. The requests coming from Verizon's network are part of the traffic that Verizon sends to Cogent.
Selling internet access with a particular level of general service, and selling access to a particular destination with a guaranteed or dedicated level of service are two different things. Consumers deal with the former, and get aggregate access with little to no guaranteed quality of service to any particular endpoint. Business consumers deal with the latter and pay tremendous amounts of money to get the level of service that they need.
The Internet is not just some big magic box that traffic goes into and comes out of. Ted Stevens's "series of tubes" comparison was heavily mocked by detractors, but it is a shockingly accurate one. ISPs provide access to the Internet through a large number of backbone networks. Routing traffic from A to B and from B back to A may traverse several different networks, with traffic changing hands at various exchange locations, typically located in big cities. Disconnect one particular peering arrangement and data that normally traverses that edge will simply find its way to its destination through another route, but that route will usually be less efficient for most consumers.
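As a toy illustration of that last point (the topology and hop costs below are entirely made up, not Verizon's or Cogent's actual networks), removing one peering edge doesn't cut the destination off, it just pushes traffic onto a longer path:

```c
/* Hypothetical 5-node topology: 0 = Verizon, 1 = Cogent, 2 = Netflix
   (behind Cogent), 3 and 4 = intermediate transit networks on a longer
   alternate path. */
#include <stdio.h>

#define N 5
#define INF 1000000

static int dist[N][N];

static void add_link(int a, int b, int cost) {
    dist[a][b] = cost;
    dist[b][a] = cost;
}

/* Floyd-Warshall all-pairs shortest paths over a copy of the topology */
static int shortest(int src, int dst) {
    int d[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            d[i][j] = dist[i][j];
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];
    return d[src][dst];
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            dist[i][j] = (i == j) ? 0 : INF;

    add_link(0, 1, 1);  /* direct Verizon-Cogent peering */
    add_link(1, 2, 1);  /* Cogent to Netflix */
    add_link(0, 3, 1);  /* alternate route via two transit networks */
    add_link(3, 4, 1);
    add_link(4, 1, 1);

    printf("cost with direct peering:  %d hops\n", shortest(0, 2));

    /* "disconnect" the Verizon-Cogent peering edge */
    dist[0][1] = dist[1][0] = INF;
    printf("cost with peering removed: %d hops\n", shortest(0, 2));
    return 0;
}
```

In this made-up graph the path goes from 2 hops to 4 once the direct peering edge disappears: the traffic still arrives, just over a worse route.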
Right now Verizon peers directly with Cogent all across the country, and that's how Netflix traffic is delivered to Verizon customers. Peering agreements are usually constructed to be fair to both parties and rarely involve actual cash payments, and that is indeed the case between Verizon and Cogent. It's rare for a peering agreement to be perfectly balanced, but the growth of Netflix means that Verizon is taking far more traffic from Cogent than it sends back. Verizon incurs the cost of accepting that huge, lopsided volume of bandwidth, while Cogent, as the sender, keeps the revenue it gets from Netflix.
Right now, Netflix accounts for almost a third of all downstream Internet traffic, but less than 5% of all upstream traffic. That is a pretty big discrepancy, wider than with typical uses, which, even when they do place a large load on the network, tend to be less time-sensitive and burstier.
No they are not. They are promising a certain link speed between the customer and the point where the customer's traffic leaves the ISP's network; if they don't deliver on that, that's a different matter entirely. They are not promising a certain level of access to services outside of the ISP's network. When traffic to external services is aggregated, the marketed rate should be attainable, but aggregation involves making certain assumptions about traffic, one of which is that traffic is more or less evenly distributed across the interconnects. If thousands of customers all try to draw traffic across the same interconnect, that interconnect gets bogged down. The rest of the network is fine, and the customer can still reach their normal data rate, but only if their traffic is spread across the interconnects the way the aggregation model assumes.
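A back-of-the-envelope sketch shows how fast that assumption falls apart; the capacity, marketed rate, and customer count here are purely hypothetical numbers chosen for illustration:

```c
/* Toy oversubscription arithmetic with made-up numbers: one 10 Gb/s
   interconnect shared by customers who were each sold 25 Mb/s. */
#include <stdio.h>

int main(void) {
    double interconnect_gbps = 10.0;   /* hypothetical capacity of one exchange link */
    double marketed_mbps     = 25.0;   /* hypothetical per-customer marketed rate */
    int    active_customers  = 10000;  /* customers all pulling through the same link */

    double demand_gbps       = active_customers * marketed_mbps / 1000.0;
    double per_customer_mbps = interconnect_gbps * 1000.0 / active_customers;

    printf("aggregate demand: %.0f Gb/s over a %.0f Gb/s link\n",
           demand_gbps, interconnect_gbps);
    printf("fair share per customer: %.1f Mb/s (marketed %.0f Mb/s)\n",
           per_customer_mbps, marketed_mbps);
    return 0;
}
```

With those invented figures, 10,000 customers demanding 250 Gb/s over a 10 Gb/s link get about 1 Mb/s each, even though every other part of the network has capacity to spare.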
The congestion isn't on the customer-facing side of the ISP's network, it's at the peering exchange with Netflix's carriers. They make no promise about that.
Nothing at all. I quite liked Doom 3.
I made no comment about the varying architectures, I made a comment about the build servers themselves.
2.5 racks of ancient equipment is too much equipment. Modernizing those servers would cut the physical and electrical load down by at least 80%. Moving them to a more effective location would go even further. There are places in the States that rent out a full 42U rack with a 20 A supply and an unmetered gigabit link for around $700 a month. Colocation in Canada is more expensive across the board, but there are comparable services in the GTA.
It's purely poor management. From what I understand, the build servers are absolutely archaic beasts that, had they been replaced long ago, would not have led to such astronomically high bills. It also doesn't help that they seem to be located in Theo de Raadt's basement.
fuck that, bring back crucifixions!
Copyright protects software as a particular implementation whereas patents protect software as an innovative or inventive approach (that's the idea anyway). Anyone who copies a piece of software verbatim without authorization is going to fall afoul of copyright law. However, anyone who performs clean-room reverse engineering on that piece of software and reimplements it from an abstract level will not. In the latter case, the party performing the reverse engineering may still infringe on one or more patents in the process even if they successfully avoid any breach of copyright.
It's most likely heat-related.
I'm of the opinion that this is "too bad, so sad" for Google; they had their opportunity to bid for the patents but didn't want to shell out for them. The billions of dollars in proceeds generated from that auction allowed my father to recoup some of the pension that he lost when Nortel collapsed. The buyers didn't buy Nortel as a whole; they just purchased some of the IP that was auctioned off.
Cost-sensitive embedded systems use ARM-based microprocessors, to which this is not applicable.
I used to have pet rats. Rats will eat anything.
That's only when the AVX2 and FMA instructions are used. I don't know of any JS engines that vectorize code to take advantage of those extensions. Most of them just emit basic scalar FP instructions or even old x87 code.
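For anyone who hasn't seen the difference, here is a minimal C sketch of the same multiply-accumulate loop written both ways: plain scalar FP, which is roughly what a JS JIT would emit, and hand-written AVX2/FMA intrinsics. The second path needs an AVX2/FMA-capable CPU and something like -mavx2 -mfma to compile:

```c
#include <immintrin.h>
#include <stdio.h>

#define N 1024

static float a[N], b[N], c[N];

/* scalar: one single-precision multiply-add per iteration */
static void scalar_fma(void) {
    for (int i = 0; i < N; i++)
        c[i] = a[i] * b[i] + c[i];
}

/* AVX2 + FMA: eight single-precision multiply-adds per instruction */
static void avx2_fma(void) {
    for (int i = 0; i < N; i += 8) {
        __m256 va = _mm256_loadu_ps(&a[i]);
        __m256 vb = _mm256_loadu_ps(&b[i]);
        __m256 vc = _mm256_loadu_ps(&c[i]);
        _mm256_storeu_ps(&c[i], _mm256_fmadd_ps(va, vb, vc));
    }
}

int main(void) {
    for (int i = 0; i < N; i++) { a[i] = 1.0f; b[i] = 2.0f; c[i] = 3.0f; }
    scalar_fma();
    avx2_fma();
    printf("c[0] = %f\n", c[0]);
    return 0;
}
```

The point being made above is that unless the engine generates something like the second function, the presence of AVX2/FMA hardware doesn't buy JS code anything.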
Ideal virtual machines are indistinguishable from networked servers. Most x86 VMMs don't quite reach this level of isolation, but the VMMs used on IBM's PowerPC based servers and mainframes do.
From the perspective of system security, a single compromised application risks exposing data used by other applications to an attacker, data that would normally be outside the scope of the compromised application. Most of these issues can be addressed through some simple best practices such as proper use of chroot and user restrictions, but those do not scale well and do not address usability concerns. A good example is the shared hosting that grew dominant in the early 2000s, while x86 virtualization was still in its infancy. It was common to see web servers with dozens if not hundreds of amateur websites running on them at once. For performance reasons the web server would have read access to all of the web data; a vulnerability in one website allowing arbitrary script execution would let an attacker read data belonging to every other website on the same server.
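For anyone unfamiliar with the "chroot and user restrictions" pattern, here is a minimal sketch of how a per-site worker would confine itself before handling requests; the directory and uid/gid are placeholders, and it has to be started as root to work:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <grp.h>

int main(void) {
    const char *jail = "/srv/site42";   /* hypothetical per-site directory */
    uid_t site_uid = 1042;              /* hypothetical unprivileged uid/gid */
    gid_t site_gid = 1042;

    /* confine the filesystem view to the site's own directory */
    if (chdir(jail) != 0 || chroot(jail) != 0) {
        perror("chroot");
        return EXIT_FAILURE;
    }
    /* drop supplementary groups, then gid, then uid -- order matters */
    if (setgroups(0, NULL) != 0 || setgid(site_gid) != 0 || setuid(site_uid) != 0) {
        perror("drop privileges");
        return EXIT_FAILURE;
    }

    /* From here on, even arbitrary code execution in this process can only
       see files under the jail and only as the unprivileged site user. */
    printf("confined; serving requests...\n");
    return EXIT_SUCCESS;
}
```

This works fine for a handful of sites, but as noted above it is exactly the kind of per-application plumbing that turns into an administrative burden at scale.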
From the perspective of users, a system designed to run 100 applications from 20 different working groups does not provide a lot of room for rapid reconfiguration. Shared resource conflicts, version conflicts, permissions, mounts, network access, etc... it gets extremely messy extremely quickly. Addressing this requires a lot of administrative overhead and every additional person that is given root privileges is an additional person that can bring the entire system down.
Virtual machines, on the other hand, give every user their own playground, including full administrative privileges, in which they can screw around to their heart's content without the possibility of screwing up anything else or compromising anything that is not a part of their environment. Everyone gets to be a god in their own little sandbox.
Now, that doesn't mean that the entire operating system needs to be duplicated for every single application. Certain elements such as the kernel and drivers can be factored out and shared by all environments. Solaris provides OS-level virtualization in which a single kernel can manage multiple fully independent "zones", with greatly reduced overhead. Linux Containers is a very similar approach that has garnered some recent attention.
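On the Linux side, the shared-kernel building blocks are visible even in a tiny program. This sketch (Linux-specific, needs root) unshares the UTS and mount namespaces so the process gets its own hostname and mount table while still running on the host's one kernel; real zones and containers layer PID and network namespaces, resource controls, and management tooling on top of the same idea:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* give this process its own hostname and mount namespaces */
    if (unshare(CLONE_NEWUTS | CLONE_NEWNS) != 0) {
        perror("unshare");
        return 1;
    }
    /* only this process (and its children) see the new hostname;
       the host and every other environment keep their own */
    const char *name = "sandbox";
    if (sethostname(name, strlen(name)) != 0) {
        perror("sethostname");
        return 1;
    }
    char buf[64];
    gethostname(buf, sizeof buf);
    printf("inside the namespace, hostname = %s\n", buf);
    return 0;
}
```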