Strange as it may seem, the parent is right: many modern ad-hoc routing algorithms don't automatically keep their routing tables up to date. Instead, when a node needs to send a message to a host it doesn't already know about, it floods the network with a "where is so-and-so" route request. The request accumulates the addresses of the nodes it passes through, and the reply carries the completed route back along that path, so the forwarding nodes learn the route too, and when the reply finally gets back to the initiator, it knows the full path to the destination. Data packets are never flooded, only route requests and replies. This is how DSR works; AODV works a little differently, but I don't remember the details.

By contrast, OLSR is a proactive link-state protocol rather than an on-demand one: every node tries to keep a current map of the whole network. That's expensive for large networks, but reasonably efficient for small ones, like those you might see popping up in a disaster area to re-establish local communication. The nice thing about OLSR is that it runs at the IP layer, so it doesn't have any weird hardware dependencies -- it's easy to set up on all kinds of computers (Linux, Windows, WRT54Gs...), or at least that was the state of things a couple of years ago when I was using it.
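The route-discovery flooding described above can be sketched as a breadth-first search over the network graph. This is only a toy model under my own assumptions (the topology, node names, and `discover` function are illustrative, not DSR's actual packet formats):

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

type Node = String
type Topology = Map Node [Node]  -- adjacency list: who can hear whom

-- DSR-style route discovery in miniature: the route request floods
-- outward from the source, each forwarding node appends itself to the
-- path it carries, and the first copy to reach the destination yields
-- a complete source route.
discover :: Topology -> Node -> Node -> Maybe [Node]
discover topo src dst = go [[src]] [src]
  where
    go [] _ = Nothing
    go (p@(cur:_) : rest) seen
      | cur == dst = Just (reverse p)          -- full path, source first
      | otherwise  =
          let next = [ n | n <- Map.findWithDefault [] cur topo
                         , n `notElem` seen ]  -- don't re-flood a node
          in go (rest ++ [ n : p | n <- next ]) (seen ++ next)
    go ([] : rest) seen = go rest seen         -- never reached; for totality

-- A five-node example network.
example :: Topology
example = Map.fromList
  [ ("A", ["B","C"]), ("B", ["A","D"]), ("C", ["A","D"])
  , ("D", ["B","C","E"]), ("E", ["D"]) ]

main :: IO ()
main = print (discover example "A" "E")  -- Just ["A","B","D","E"]
```

A real implementation also has to handle duplicate request IDs, link breakage, and route caching, which this sketch ignores.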
I have used OLSR in small networks of wireless routers (running OpenWRT) and laptops, and it seems to work well. I haven't done any large-scale testing, but some people have.
Here are some options that Virgin Mobile isn't pursuing:
Network capacity is a finite resource. It looks like Virgin Mobile is dealing with that in the most reasonable way. Good for them.
Here's what I want -- a netbook with an e-ink-style, non-backlit (or optionally backlit) screen that's readable in direct sunlight and can be swiveled around to function as either a tablet or a laptop, with a non-locked-down OS. I want one device I can use to write code, read books, browse the web, and connect to a projector to display slides, all with a long battery life.
What I've just described is essentially the OLPC XO, which would be quite tempting if the keyboard were easier to use and the screen a bit bigger. I would be happy with a black-and-white screen, especially if it had an external VGA, DVI, or HDMI port for connecting a color display. I am surprised no one is making such a device.
Maybe I ought to just buy a netbook and a Kindle, but it seems like a waste not to combine the best features of both into a single device.
The burden of proof is to prove a negative?
Absolutely. This isn't a matter of publishing a scientific paper; it's a matter of public policy. We consider this normal in other domains -- suppose, for instance, that a pharmaceutical company wants to market a new drug. The burden of proof is on the company to demonstrate to the FDA that the drug is safe; the burden is not on the FDA to prove that the drug is dangerous or harmful.
Furthermore, consider what we know about carbon dioxide in the atmosphere:

- Carbon dioxide is a greenhouse gas: it absorbs infrared radiation and traps heat.
- Burning fossil fuels releases carbon dioxide.
- The concentration of carbon dioxide in the atmosphere has been rising steadily since we started measuring it.
None of the above statements should be controversial. What is controversial is how the increased carbon dioxide will interact with the rest of the environment, and for that we use computer models. These models may be flawed, but even the worst of them is better than sticking your head in the sand and hoping for the best. The best computer models say we can expect temperatures to rise.
Unlike the pharmaceutical example above, we already know from the non-controversial facts of the matter that any result other than an increase in temperatures would be surprising. If, say, a pharmaceutical company wished to market a drug containing cyanide, the FDA would and should certainly want to know whether some mechanism was in place that could be expected to render the cyanide harmless.
If you need an additional reason why we should not demand positive proof, consider the form such proof would take: simply letting things be as they are, and waiting to see whether an environmental catastrophe ensues. If we were to take that course of action, and a catastrophe did ensue (as we have every reason to believe it would), we would have lost much time in implementing a solution, and the harm would be far greater.
"Noticeable impact" is not the bar that was raised. "Cause significant and irreparable harm to life on earth" is the bar.
What constitutes "significant harm"? The IPCC is predicting average surface temperatures to rise by between 2.0 and 11.5 degrees F over the 21st century (according to Wikipedia). If the impact is on the low end, would 2 degrees be significant? If you live in a northern climate, probably not -- the weather might actually be nicer. If you live in Africa or India, it may be very significant, as in widespread famines and droughts. Even if things get bad, we the rich will still be able to buy ourselves food, but we shouldn't think only of ourselves: the poor won't be able to buy food when their crops fail, and can't afford to move to a nicer climate. Just because America isn't going to become one big Death Valley doesn't mean we shouldn't be concerned.
Also, quite aside from global warming, there is also the ocean acidification that goes along with increased carbon dioxide.
It's (mostly) the politicians (both sides!) I blame. They decided to fool people into agreeing with their respective intractable positions instead of engaging them and building consensus on an issue that may or may not be politically tractable. We still don't have good public information that can be digested by laypeople.
You will never get consensus from people who are religiously opposed to the idea that human activity might cause environmental harm, and who believe that global warming is just a socialist plot to institute a world government. (Not that all global warming sceptics are like that, but many are.) There may be many who just need convincing, but for them the information is out there.
The first three examples given are, to me, fairly straightforward errors. Perhaps you'd rather that ghc said, "Hey dummy, Char doesn't belong to the typeclass Num, so don't try to use it as if it were."
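For concreteness, here is the kind of program that triggers that error, along with an explicit conversion that satisfies the type checker (the `nextChar` name is my own, not from the original examples):

```haskell
-- Writing 'a' + 1 is rejected because Char has no Num instance; GHC
-- reports something along the lines of:
--   No instance for (Num Char) arising from a use of '+'
-- Converting through Int with fromEnum/toEnum states the intent:
nextChar :: Char -> Char
nextChar c = toEnum (fromEnum c + 1)

main :: IO ()
main = print (nextChar 'a')  -- 'b'
```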
The last one is questionable. Fortunately, ghc 6.12.1 (which I just tried it on) no longer refers to the monomorphism restriction in its error message. The inferred typeclass is still confusing, but the message tells you that you're using the wrong type for that context, from which it should be straightforward to diagnose the problem, or at least add some type annotations to give the compiler a better shot at providing a better error message.
I will concede that some ghc errors can be confusing, and the example given is certainly not the worst, but overall I'm pretty happy with the errors I get. If you consider Haskell's errors particularly bad, perhaps you could provide us with an example of a language with clear, concise errors to which all languages should aspire?
That does not compute.