The whole reason it's called a "cellular" network is that it's made up of "cells" - one per tower (give or take multiple carriers sharing a tower). These cells overlap, and adjacent cells operate in different parts of the frequency spectrum. The important point about this "cell" nature is that there is no reason a cell in rural Kansas has to be the same size as a cell in downtown NYC - and in general they aren't. That's related to why cell phones were previously prohibited on planes (and how they can now be technologically allowed): a "femtocell" is placed on the plane so that phones transmit at their minimum power. At higher power levels, a flying cell phone can overlap multiple distant cells on the ground (using the same frequency)... and cause a whole chain of confusion for a system effectively designed to be 2-dimensional.
You make it sound as though additional bandwidth (in the form of more parallel downloads) is not an economic problem. Additional bandwidth has a cost: more tower density (or at least more radio antennas, if you can mount them on existing buildings) using smaller cells, and that's absolutely an economic problem. (It's also a permitting problem, and in suburban areas likely a ditch-digging problem, both of which are likely worse than buying the equipment.)
A few other notes:
You need to calculate 3X/Y, not X/Y, as 4G cell towers will likely use tri-sectored directional antennas. They're widely deployed in 3G environments, and are basically a requirement in any dense area (they also facilitate cellular 911 location when more accurate location isn't available).
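The X/Y vs. 3X/Y point can be shown with a toy calculation - all numbers here are made up for illustration, not real carrier figures:

```python
# Toy sketch of per-tower user capacity with tri-sectored antennas.
# X (spectrum available at a tower) and Y (spectrum consumed per active
# user) are assumed, illustrative values.
X = 20_000_000   # 20 MHz of spectrum at the tower (assumed)
Y = 500_000      # 500 kHz consumed per active user (assumed)

omni_users = X // Y          # one omnidirectional antenna: X/Y users
sectored_users = 3 * X // Y  # three directional sectors each reuse the
                             # spectrum, so capacity triples: 3X/Y users

print(omni_users)      # 40
print(sectored_users)  # 120
```

Each sector points at a different third of the surrounding area, so the same frequencies can be used three times at one tower without the sectors interfering with each other.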
There are also additional technologies that could be deployed that change the rules of spectrum usage. MIMO comes to mind. (The major problem of MIMO is much like other things in the cellular world, in that signal reflections (and interference in general) are somewhat of a royal pain and the resulting demand of processing power makes the basestations and phones infeasible to deploy at this time.)
Net is that smaller cell sizes and additional technologies could absolutely reduce congestion, but the expense of doing so is enormous. It doesn't really change why the cap needs to exist at some level (they are up against a wall, even if a financial-loss wall rather than a physics wall), but we're nowhere near the fundamental laws of physics.
I don't know if their algorithm/data takes height into account, but if it does, or if they add it (and it wouldn't be hard at this point), it would be ENORMOUSLY useful in my opinion. It gives the public the resources to get an accurate rendition that isn't limited to the two or three (very carefully chosen) views that the owning company has to provide in the permitting process.
There is no such thing as cheating, only getting creative with your sources. The real world, whatever your career will be, relies on the same behavior that is punished in school that they call "cheating."
The students agreed to certain limits when they enrolled in the school, and committed that they would NOT cheat. That should be sufficient to impose punishment if those limits are unreasonably breached.
The behavior being punished is not "sharing work"; it is unreasonably breaching an agreement. The real world has that type of limit too. Most companies have some form of code of conduct, and part of that is likely an obligation to avoid breaking laws.
As a specific example from the "real world", look at what SAP did with Oracle's code. It would be a breach of section 1 of SAP's Employee Code of Conduct. It seems that "creativity" will result in something between $40 million and $1.6 billion in punishment. That's not creativity; it's illegal.
The fact that the limits may be at different points (one set by a student's contract with the school, the other set by law) doesn't mean they shouldn't be enforced as written.
These rules happen to be one of the main reasons I still have cable. The Stanley Cup playoffs are nationally televised, thus blacked out online.
The thing is that copyright in its current form is a social contract - you are provided a temporary monopoly in exchange for producing that thing... with the underlying principle and power being "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries;". (emphasis added)
If I am prevented from EVER copying an item (and remember that DRM has NO provision to provide an end to its protection), then I ask a more fundamental question: Given the social contract and the power provided to Congress in the Constitution, should an item that is protected with DRM actually be protected by copyright? If they want to take away the return of that item to the public domain, then shouldn't they have to give up some of the protection?
The fact is, the Constitution hasn't been fundamentally broken, but DRM changes the equation, because it impacts the underlying social contract in copyright. Congress and the courts haven't caught up to that tradeoff (which isn't a surprise, as the law generally lags technology by years or decades), and may not until something released solely with DRM enters the public domain (by which point the solution may be to legalize the breaking of such decades old [and computationally irrelevant] DRM)
Kensington will sell you cell phone connectors that allow you to charge a cell phone from a laptop or other USB power source. Kensington also sells a portable battery that can provide an additional charge for your cell phone. Or step up to a fully universal laptop battery if you want to power that netbook.
Some cameras can also be charged from USB, allowing you to use the Kensington portable battery or your netbook. Google to find out if yours can be charged that way.
There are at least half a dozen systems for charging a laptop (or, in your case, a netbook) from solar power, effectively turning the sun into your portable power station.
Wow, over a runtime of 204 years, the DNA copying process has an accuracy of 99.99988%, or an error rate of only 0.00012%.
While I agree that the level of change is reasonably slow, I think you've taken the conclusion a bit too far in inferring that the observed rate of change matches transcription accuracy.
The reason I would be cautious about extending observed mutation rate to infer transcription accuracy is that there is likely to be significant selection bias, similar to how "old furniture" always appears to be great quality (because anything that isn't great quality is in a landfill). Any fatal mutations would never progress and therefore can't be detected by this method. Thus, the 0.00012% is a (very) loose lower bound on the transcription error rate.
To follow your computer analogy, it's like saying a program running for 204 years only produces a wrong answer 0.00012% of the time *that it produces an answer*. What you may be missing is the 50% (making up a number) of the time that it dumped stack because a bounds check failed due to an error.
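To make the survivorship-bias point concrete, here's a minimal deterministic sketch in Python - the counts are invented purely for illustration and bear no relation to real mutation rates:

```python
# Survivorship bias in an observed error rate (all numbers made up).
total_copies = 1_000_000  # copy events that actually happened
fatal = 2_000             # errors that kill the lineage - never observed
nonfatal = 1_000          # errors that survive and can be measured later

survivors = total_copies - fatal

observed_rate = nonfatal / survivors           # what measuring survivors gives
true_rate = (fatal + nonfatal) / total_copies  # what actually happened

# The observed rate (~0.1%) understates the true rate (0.3%), because
# the fatal errors never show up in the surviving sample.
assert observed_rate < true_rate
```

The same logic is behind the "old furniture" effect: the junk is in a landfill, so the surviving sample looks better than the original population ever was.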
What did they invent?
OLE (1990) was an extension of Microsoft's Dynamic Data Exchange, introduced in 1987. CORBA was 1991. CORBA standardized (and made more flexible) the types of transactions Microsoft defined in DDE/OLE.
I seem to recall that tabbed browsing took years to make it into IE.
I have not made - and will never make - the assertion that Microsoft innovates well or consistently. Microsoft frequently is not an innovator, but rather is chasing others. My point was about tabbed spreadsheets. "Microsoft has yet to innovate anything, ever." is a strong (and IMHO incorrect) statement.
On-the-fly spell checking in word processor
Trivial leveraging of improved processor speeds does not equal innovation, but nice try
It is innovation. It provides a benefit to the end user that reduces the effort of checking a document. It may seem trivial in hindsight (many innovations do), but it was innovative when introduced.
People often refer to the inventors of technology and fret that they are not sufficiently recognized for their invention. (e.g. Xerox PARC on the GUI). What matters to users, however, is not the idea or the invention, but the successful application of that idea or invention. (e.g. Apple Macintosh)
This distinction between invention and innovation is why you will see companies refer to "innovation" as a key area where they need to spend effort. Ideas are a dime a dozen. Innovation refers to an invention that is successfully applied.
In that sense, Microsoft was an "innovator" in many areas because it was often the first to successfully apply a technology.
I challenge anyone to cite an innovation from M$
XBox Live (more generally a console w/ services and playability across the Internet)
Pivot Tables in Spreadsheets
On-the-fly spell checking in word processor
All first successful applications by Microsoft.
Actually, it's not even that simple - DerekLyons is correct. Most "high" quality furniture is still only about mid-range IMHO, and subject to weaknesses in design. Ethan Allen and others who "mass produce" furniture - even "high quality" furniture - use jigs that result in shortcomings in the final structure. While they use a dovetail joint on corners, their half-blind dovetails tend to be rounded on the inside, and not completely square. Look at the half-blind illustration, and then look at how it's done with a jig. Note in picture 5 how the insides of the pins are taken out with the jig. That makes it harder for the glue to grip and makes the joint weaker in the long run.
Really good furniture only needs glue to secure it for long periods of time (if at all). Screws are typically used to hold on the top, in order to allow the wood to expand/contract with different moisture levels and avoid cracking. I have a desk & filing cabinet I built by hand and the glue is a formality. I could sit on them before they were glued and they didn't budge.