Comment Re:Good idea (Score 4, Insightful) 175

Kernel crashes occur when the kernel enters an inconsistent or invalid state from which it cannot recover.

When a user program fails, the kernel maintains consistency, can cleanly terminate the process, and can accurately report the cause of the failure if need be (an illegal instruction, a deadlock, an access violation, and so on).
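
A toy userspace sketch makes that contrast concrete (nothing kernel-specific, just standard POSIX; the setup is mine): the parent process survives untouched, and the kernel tells it exactly how the child died.

/* Toy illustration: the parent forks a child that dereferences a NULL
 * pointer.  The kernel kills only the child and reports exactly why
 * (SIGSEGV), while the rest of the system carries on unaffected. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        volatile int *p = NULL;
        *p = 42;                 /* access violation: the child dies here */
        _exit(0);                /* never reached */
    }

    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        printf("child killed by signal %d (%s)\n",
               WTERMSIG(status), strsignal(WTERMSIG(status)));
    return 0;
}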

When a kernel fails, the very systems that it relies on to report failures may very well be compromised by whatever caused the kernel to fail in the first place. As such, any kernel fault reporting needs to be incredibly robust and as independent of other kernel mechanisms as possible. Dumping text to a serial terminal is the preferred method because it's incredibly simple and relies on nothing else, meaning that, barring a failure of the system memory, it should always act as a reliable fallback.
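
To make the "incredibly simple" part concrete, here's a rough freestanding sketch of the sort of thing a panic path can do, assuming x86 and a legacy 16550-style UART on COM1 at I/O port 0x3F8 (real kernels add UART initialization and locking; this is just the idea):

/* Minimal sketch: dump a panic message straight to a 16550-style UART
 * (legacy COM1 at I/O port 0x3F8 on x86).  No file system, no driver
 * stack, no interrupts -- just raw port writes, which is why it keeps
 * working when almost everything else in the kernel is suspect. */
#define COM1_DATA   0x3F8
#define COM1_STATUS (COM1_DATA + 5)      /* line status register */

static inline void outb(unsigned short port, unsigned char val)
{
    __asm__ __volatile__("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline unsigned char inb(unsigned short port)
{
    unsigned char val;
    __asm__ __volatile__("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

static void serial_putc(char c)
{
    while (!(inb(COM1_STATUS) & 0x20))   /* wait for transmit buffer empty */
        ;
    outb(COM1_DATA, (unsigned char)c);
}

void panic_print(const char *msg)
{
    while (*msg)
        serial_putc(*msg++);
}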

Dumping kernel memory to a disk might fail if the state of the file system is compromised, if the storage controller is compromised, or if any number of intermediary systems are compromised by the inconsistent state of the kernel. Many operating systems do attempt to dump crash memory to the swap file / swap partition as this is less likely to cause data corruption than writing to a particular file in the file system.

It "can" be done, but that does not necessarily make it a good idea.

Comment Re:Physical Stores (Score 2) 323

You can't pay $1 / movie to stream any movie whenever you want because that's not a sustainable revenue model.

Financially successful movies typically demand big budgets, and they typically make the bulk of their return during the initial cinema run. This is followed by the home-theatre / pay-per-view release, which aims to reach both diehard fans and untapped markets that value the film enough to actually pay for it. Once that's been worn out, it'll head for a second theatre release if there's demand, and finally to broadcast syndication and general availability on services like Netflix. High-budget programs that reach Netflix have already made over 95% of the revenue that they will ever make from viewership; further revenue comes from milking bargain-bin sales, re-releases, and branded merchandise.

Were media to go straight from cinema to general availability, many titles would miss out on very important sources of revenue, and this would render many niche and cult-classic films entirely unprofitable. Many high-budget films would survive (albeit at a much reduced profit) provided they have a long and successful cinema run, but quality titles that don't generate significant consumer awareness, owing to smaller initial market demand, will simply fail miserably.

Comment Re:It's not arrogant, it's correct. (Score 2) 466

>AT&T, and other providers, should have no right to put up walls. If there are issues of peering, those should be working out at the peering level, and not at the application/service or individual business level.

It is strictly a peering issue and it is being worked out at the peering level.

In order to keep their operating costs low, Netflix decided to go with the lowest-cost bandwidth around, Cogent. Cogent is great for getting data between datacenters on the cheap, but not so great for getting huge volumes of data to end consumers. As a result of Netflix's growth and the rollout of higher-bandwidth video streams, Cogent began dumping tons of traffic onto other ISPs with which it has peering agreements. Peering agreements are mostly informal and involve exchanging X quantity of traffic for Y quantity of traffic, where X and Y are reasonably close to each other. As long as the peering agreement is fair, there's typically no money involved, so the sender of the traffic keeps all the revenue. The amount of traffic originating from Cogent has smashed into the limits of the peering agreements that Cogent has with various consumer ISPs. The bulk of this traffic just happens to originate from Netflix.
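
To put rough numbers on the X-for-Y idea, here's a trivial sketch; the traffic figures and the 2:1 "roughly balanced" threshold are invented purely for illustration, since real settlement-free peering policies set their own terms:

/* Toy illustration of a peering ratio check.  The traffic figures and the
 * 2:1 threshold are made up for the example; real peering policies vary. */
#include <stdio.h>

int main(void)
{
    double sent_pb      = 12.0;  /* hypothetical traffic sent, petabytes/month     */
    double received_pb  =  2.5;  /* hypothetical traffic received back             */
    double max_ratio    =  2.0;  /* example "roughly balanced" threshold           */

    double ratio = sent_pb / received_pb;
    printf("out/in ratio: %.1f:1\n", ratio);
    if (ratio > max_ratio)
        printf("outside the agreed ratio -- expect a renegotiation or paid peering\n");
    else
        printf("within the agreed ratio -- settlement-free peering holds\n");
    return 0;
}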

The cost of expanding the interconnects to handle all the added Cogent traffic would fall entirely on the consumer ISPs, but the revenue for doing so would end up entirely in Cogent's hands. The consumer ISPs perceive this as rather one-sided and unfair (a matter of debate and opinion), so they're dragging their feet and refusing to bolster the exchanges or expand the peering agreements with Cogent unless Cogent foots some of the bill.

This is not a net neutrality issue at all. Netflix is not being treated any differently than any other traffic source. The dispute is between Netflix's upstream ISP and the consumer ISPs. Netflix only got involved directly because it is Cogent's biggest customer, and is indeed the largest source of Internet traffic by volume in the world right now.

I'm certain that many consumer ISPs would love to create "Internet + Netflix" packages, but doing so would be a flagrant violation of network neutrality. Instead, they have to bite the bullet and treat it the same as all other traffic, which means spreading all of the costs of delivering it across all of their customers equally, regardless of whether or not they actually use it. The workaround is to shift some of those costs onto the source of the traffic (Cogent), who in turn would shift them onto the original source (Netflix).

Comment Re:Network vs Content providers (Score 1) 289

Don't be facetious. Echoing the traffic back would make neither Verizon, Cogent, nor Netflix's customers happy. Verizon is under no obligation to receive every byte that a paying customer has solicited at the customer's marketed rate, unless that paying customer is a business customer with a service level agreement. The peering agreement between Verizon and Cogent is purely voluntary, and Netflix is free to contract with another ISP that has better interconnects to Verizon. The requests coming from Verizon's network are part of the traffic that Verizon sends to Cogent.

Comment Re:Network vs Content providers (Score 1) 289

Selling internet access with a particular level of general service and selling access to a particular destination with a guaranteed or dedicated level of service are two different things. Consumers deal with the former and get aggregate access with little to no guaranteed quality of service to any particular endpoint. Business customers deal with the latter and pay tremendous amounts of money to get the level of service that they need.

The Internet is not just some big magic box that traffic goes into and comes out of. Ted Stevens's "series of tubes" comparison was heavily mocked by detractors, but it is a shockingly accurate one. ISPs provide access to the Internet through a large number of backbone networks. Routing traffic from A to B and from B back to A may traverse several different networks, with traffic changing hands at various exchange locations, typically located in big cities. Disconnect one particular peering arrangement and data that normally traverses that edge will simply find its way to its destination through another route, but that route will usually be less efficient for most consumers.
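
A toy routing sketch shows the "tubes" behaviour: knock out the direct peering edge and the traffic still gets there, just over a costlier detour. The topology and link costs here are invented for the example:

/* Toy "series of tubes" illustration: four networks with made-up link
 * costs.  Remove the direct peering edge between networks 0 and 3 and
 * the traffic still arrives, just over a longer (costlier) detour. */
#include <stdio.h>

#define N   4
#define INF 1000000

static int shortest(int cost[N][N], int src, int dst)
{
    /* Plain Dijkstra over a small adjacency matrix. */
    int dist[N], done[N] = {0};
    for (int i = 0; i < N; i++)
        dist[i] = (i == src) ? 0 : INF;

    for (int iter = 0; iter < N; iter++) {
        int u = 0;
        for (int i = 1; i < N; i++)
            if (!done[i] && (done[u] || dist[i] < dist[u]))
                u = i;
        done[u] = 1;
        for (int v = 0; v < N; v++)
            if (cost[u][v] < INF && dist[u] + cost[u][v] < dist[v])
                dist[v] = dist[u] + cost[u][v];
    }
    return dist[dst];
}

int main(void)
{
    /* 0 = a transit provider, 3 = a consumer ISP, 1 and 2 = other backbones */
    int cost[N][N] = {
        { 0, 4, 6, 1 },
        { 4, 0, 3, 5 },
        { 6, 3, 0, 2 },
        { 1, 5, 2, 0 },
    };

    printf("with direct peering:    cost %d\n", shortest(cost, 0, 3));

    cost[0][3] = cost[3][0] = INF;   /* the peering edge is disconnected */
    printf("without direct peering: cost %d\n", shortest(cost, 0, 3));
    return 0;
}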

Right now, Verizon peers directly with Cogent all across the country, and that's how Netflix traffic is delivered to Verizon customers. Peering agreements are usually constructed to be fair to both parties and rarely involve actual cash payments, and that is indeed the case between Verizon and Cogent. It's rare for a peering agreement to be perfectly balanced, but the growth of Netflix means that Verizon is taking far more traffic from Cogent than it is sending back, and as such is incurring the cost of accepting that uneven (and huge) amount of bandwidth, while Cogent, as the sender, keeps the revenue that it gets from Netflix.

Right now, Netflix accounts for almost a third of all downstream Internet traffic, but less than 5% of all upstream traffic. That is a pretty big discrepancy, far wider than is typical for other heavy uses, which, even when they do place a large load on the network, tend to be less time-sensitive and burstier.

Comment Re:Network vs Content providers (Score 2) 289

No, they are not. They are promising a certain link speed between the customer and the point where the customer's traffic leaves the ISP's network; if they don't deliver on that, that's a different matter entirely. They are not promising a certain level of access to services outside of the ISP's network. When external services are aggregated, the marketed rate should be attainable, but aggregation involves making certain assumptions about traffic, one of which is that traffic is more or less evenly distributed across the interconnects. If thousands of customers all try to draw traffic across the same interconnect, that interconnect gets bogged down. The rest of the network is fine, and the customer should still be able to obtain whatever their normal data rate is, but only by aggregating their traffic appropriately.
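
A quick back-of-the-envelope sketch of that aggregation assumption, with numbers invented purely for illustration:

/* Toy illustration of the aggregation assumption.  1000 customers each
 * pull 25 Mbps across 5 interconnects of 10 Gbps apiece.  Spread evenly,
 * every link is half idle; pile everyone onto one link (say, because one
 * service dominates demand) and that link needs 2.5x its capacity while
 * the other four sit empty.  All figures are invented. */
#include <stdio.h>

int main(void)
{
    const double customers     = 1000;
    const double per_customer  = 25;     /* Mbps each customer draws         */
    const double links         = 5;
    const double link_capacity = 10000;  /* Mbps (10 Gbps) per interconnect  */

    double total = customers * per_customer;

    double even_load   = total / links;  /* assumption: traffic spreads out  */
    double skewed_load = total;          /* reality: one hot interconnect    */

    printf("even spread : %.0f of %.0f Mbps per link (%.0f%% utilised)\n",
           even_load, link_capacity, 100 * even_load / link_capacity);
    printf("one hot link: %.0f of %.0f Mbps on that link (%.0f%% utilised)\n",
           skewed_load, link_capacity, 100 * skewed_load / link_capacity);
    return 0;
}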

Comment Re:Serious Questions about OpenBSD infrastructure (Score 1) 209

I made no comment about the varying architectures; I made a comment about the build servers themselves.

2.5 racks of ancient equipment is too much equipment. Modernizing those servers would cut the physical and electrical load down by at least 80%. Moving it all to a more effective location would go even further. There are some places in the States that rent out a full 42U rack with a 20-amp supply and an unmetered gigabit link for around $700 a month. Colocation in Canada is more expensive across the board, but there are comparable services in the GTA.

Comment Re:Serious Questions about OpenBSD infrastructure (Score 5, Insightful) 209

It's purely poor management. From what I understand, the build servers are absolutely archaic beasts that, had they been replaced long ago, would not have led to such astronomically high bills. It also doesn't help that they seem to be located in Theo de Raadt's basement.
