Proof by axiom
By defining computation to be X and not Y, I have proved that X is computation and Y is not.
Yes, terribly helpful, that.
Actually, it was 90% at 24Mbps, and 100% at 2Mbps.
Skype/FaceTime over wifi? Having a mobile phone conversation?
Agreed, latency can suck. Does it *have* to suck? Seems like that's an implementation issue.
The promise they will struggle to meet is 2Mbps to 100% of the population (since the 95%-of-the-population promise is largely a 'sort out urban areas' job and can be done by wiring and/or better modems). Bandwidth may not be their issue.
You don't have to go far into the countryside for availability to be a major problem, particularly if there are trees or hills in the landscape (true of just about anywhere apart from the Fens).
And this smacks of a solution to the broadband promise along the lines of 'well, we promised fast internet to 95% of the population, and see, you have it! We never said it wouldn't cost 200 quid a month.'
Apparently we need a nice high-level 3D presentation library, but we don't want to work out how to use libxml2. I shall leave http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags here and let you consider the error of your ways.
(Also, what language did you base that on? It's surprisingly hard to read.)
FTTC would be a damn sight better than what we have. If we actually *mean* cabinet, then most people are a few hundred metres away from their nearest cabinet. However, rural users are typically several km from their local exchange. The wiring is set up for 1960s phone systems, with long runs of many many pairs from the exchange out to neighbouring villages - runs that could be replaced with something as lowly as a single gigabit link for drastically improved connections over an all-IP network.
Because we have a universal service agreement?
My family lives in a small village.
BT had an advertising campaign a couple of years back saying that anyone on BT could vote to get their exchange upgraded to BT Infinity. And yet, because their exchange was so small, it was impossible to reach the necessary 1000 person threshold to be counted.
They're 25 miles away from two large cities, and yet their broadband runs at somewhere between 500 and 1000kbps, despite being well within the 14km ADSL line length limit - and that's when the wind isn't blowing, because it's apparent that the overhead lines are no longer intact.
Their provider is charging them more than you'd pay in a city - not only for a 20Mbps connection, but also extra because they're outside the areas where the provider has (cheaper) local coverage.
BT won't fix the problem. The providers in the area all use BT cables to give broadband, so there's no competition.
And there's no service guarantee - if you complain, nothing happens, and if you complain more, you get told that you have the choice between shitty broadband or no broadband at all.
Until the government appreciates that the network have-nots in the UK are so, so far removed from the other 90% of the population, it's hard to see how anything will happen, or how anyone will actually be able to calculate how much it will cost to fix.
No-one mentions this, and it always annoys me. Aside from the software failings, there's an obvious systemic one caused by internet voting at home.
Elections should be secret to avoid the sale or compulsion of votes. So you go to a secured place and vote in a booth so that no-one can tell how you voted (and try not to think too hard about those tracking numbers on your slips, but hey). You cannot leave an identifying mark on your ballot - sign a ballot, for instance, and it is invalid and not counted.
Vote at home, or postally, or by proxy, and secrecy is lost. You can sell your proxy to someone. You can have someone watch you while you vote. This may not matter to you, but hypothetically (and there have been cases of this) if you live in a less-than-free country your employer or your commanding officer might check your ballot to ensure you voted patriotically.
*This* should be sufficient reason to insist on voting at a controlled location. If you worry about people being simply too idle to vote - or prevented from attending - then you should go the way of Belgium or Australia, where you must turn out and vote on pain of being fined, even if you then choose to spoil your ballot. But you should never neglect the principle of secrecy in the name of expediency.
It all depends on the nature of the loss on the path the packets traverse.
Correlated (i.e. simultaneous) loss will be *worse* for the many-connection case.
I think you'd need to test that to prove it. In my head, simultaneous loss would cause some (but perhaps not all) of the many connections to stall for a retransmit, and they would concurrently wait for the missed packets (1 RTT on some percentage of the data), whereas the single connection would stall for a retransmit (1 RTT on 100% of the data).
You deny yourself possibilities for optimisation by putting data with a low ordering requirement through a channel with a high ordering guarantee. You can't pause only one stream for a lost packet when it's within a TCP multiplex; data is being buffered up in the kernel where you can't access it while it waits for a packet that may represent a chunk of a stream you could live without for the moment. (This is not to say that multiple-stream TCP is a better answer, mind you; in truth, there are disadvantages to using either method and some third method might be more appropriate, for instance some protocol that was reliable but did not attempt to preserve ordering).
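To put rough numbers on that head-of-line effect, here's a minimal toy simulation in Python (my own sketch, not anything from SPDY, HTTP, or the kernel). Packets for ten logical streams arrive interleaved on one connection; one packet is lost and only shows up a round trip later. Under a single ordered byte stream, every later packet waits behind the hole; under per-stream ordering, only the stream that owns the lost packet waits. The RTT, packet spacing, and counts are all made-up assumptions.

# Toy comparison of head-of-line blocking: all streams multiplexed over one
# ordered byte stream vs. per-stream ordering. All numbers are invented.
RTT = 0.1          # seconds until the lost packet's retransmission arrives (assumed)
PKT_GAP = 0.001    # arrival spacing between packets (assumed)
N_STREAMS = 10
N_PACKETS = 100    # packets interleaved round-robin across the streams
LOST = 5           # index of the single lost packet

arrival = [i * PKT_GAP for i in range(N_PACKETS)]
arrival[LOST] += RTT   # the lost packet only arrives after a retransmit

# (a) one ordered connection: packet i is readable only once 0..i have all arrived
released_single = []
ready = 0.0
for i in range(N_PACKETS):
    ready = max(ready, arrival[i])
    released_single.append(ready)

# (b) per-stream ordering: packet i waits only for earlier packets of its own stream
released_per_stream = []
ready_by_stream = [0.0] * N_STREAMS
for i in range(N_PACKETS):
    s = i % N_STREAMS
    ready_by_stream[s] = max(ready_by_stream[s], arrival[i])
    released_per_stream.append(ready_by_stream[s])

def extra_wait(released):
    # total time data sits in the receive buffer beyond its arrival time
    return sum(r - a for r, a in zip(released, arrival))

print("extra waiting, single ordered stream: %.3f s" % extra_wait(released_single))
print("extra waiting, per-stream ordering:   %.3f s" % extra_wait(released_per_stream))

On these made-up numbers the single ordered stream holds back roughly ten times as much data-time as per-stream ordering does - that's the cost of the unnecessary ordering guarantee.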
Ideally, fewer TCP connections should result in fewer dropped packets.
It's not obvious that there's a huge difference there. There will be more packets for more TCP connections and therefore potentially more drops, but perhaps only a tiny percentage more.
Also, with (say) 10 connections, each drop only stalls one of them while the other 9 continue. With one, 100% of the data stalls. So the number of drops may increase but the increase would have to be drastic to have the same magnitude of effect.
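A rough back-of-envelope of that point in Python, with every number invented purely for illustration, under the simple assumption that a drop stalls whatever shares its connection for one RTT:

# Back-of-envelope: a drop stalls whatever shares its connection for one RTT.
# With one connection that is 100% of the data; with 10 connections, ~10%.
rtt = 0.1            # seconds per stall (assumed)
n_conns = 10

drops_one  = 10      # drops seen by the single big connection (assumed)
drops_many = 12      # total drops across 10 connections, assumed ~20% higher

cost_one  = drops_one  * rtt * 1.0              # each drop stalls all the data
cost_many = drops_many * rtt * (1.0 / n_conns)  # each drop stalls ~1/10 of it

print("stall cost, 1 connection:  ", cost_one)   # 1.0
print("stall cost, 10 connections:", cost_many)  # 0.12

Even with 20% more drops, the many-connection case comes out far ahead; the drop count would have to rise nearly tenfold before the two were comparable.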
But packet loss is not the sort of problem SPDY is trying to solve.
Indeed, it makes things worse.
If you miss a packet in HTTP you stall one connection. Other data is still being received on other TCP connections.
If you miss a packet in SPDY you stall all the multiplexed downloads running over that connection until the retransmit.
This may not affect bandwidth, because modern TCP is quite good at recovering from a packet loss without stalling the transmission of packets, but it will stall the browser's receive thread, because it can't be given any incoming data until the missing packet turns up - which takes at least one round trip.
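A toy illustration of that distinction in Python (my own model, not kernel or SPDY code): selective acknowledgement lets TCP keep accepting and buffering segments that arrive after the loss, so throughput barely suffers, but the application can only read the in-order prefix, so nothing past the hole is readable until the retransmission fills it, roughly one round trip later.

# Segments in flight; one is dropped on the first attempt.
segments = set(range(10))
lost = 3
received = segments - {lost}          # everything else lands on time

def readable_prefix(received):
    # the application can only read up to the first missing segment
    n = 0
    while n in received:
        n += 1
    return n

print("segments buffered by TCP before retransmit:", len(received), "of", len(segments))
print("segments readable by the app before retransmit:", readable_prefix(received))

received.add(lost)                    # the retransmit fills the hole ~1 RTT later
print("segments readable after the retransmit:", readable_prefix(received))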
This would be true if a page used a single keepalive connection to its server. But typically it uses several, and achieves multiplexing by having several TCP connections open. You'd make the request for the image on the first available idle connection - if the JS is still being served, there may still be another idle connection available.
If you're purely looking to give more value to new works, then you don't have to change the copyright term for old ones. The creators knew what they signed up to when the work was created: changing that deal now cannot possibly be fair.
"The medium is the message." -- Marshall McLuhan