Reduce those megawatts of power being sucked up by video cards: do high-resolution rendering only on the bit being looked at. Of course, it won't work that way; we'll use the same power to render better graphics.
Opinion: The true long-term way forward is oil-free fuel (all-electric) at the point of use, but this needs a far higher order of infrastructure support than hybrid technology does. A cynical view is that this [article/policy] might, in practice, only contribute to subsidising hybrid cars, which preserves oil-industry interests. That interest could be further safeguarded by spacing the charging stations at intervals greater than is practicable for electric-only vehicles (which have shorter range).
A standard can be openly documented, but heavily patented and licensed. A competing standard can be only partially documented and a work in progress, but free to use. Which is better? H.264 would be a poor choice going forward; not because of openness or technical capability, but because the IP owners are luring implementers in, in the hope that early adopters will be irrevocably committed to a patented technology by the time the usage terms become a cash cow. What we need is good abstraction, so that we can freely switch between implementations behind a standard interface: like having a graphics API that lets you use DirectX or OpenGL just by flipping a switch.
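A minimal sketch of that "flip a switch" abstraction, in Python for brevity. The class and method names are hypothetical, not real DirectX or OpenGL bindings; the point is that application code only ever talks to the abstract interface, so the backend (and any patent exposure it carries) can be swapped without touching the caller.

```python
# Hypothetical sketch: application code depends only on Renderer,
# never on a concrete vendor API. Backend classes are stand-ins.
from abc import ABC, abstractmethod

class Renderer(ABC):
    @abstractmethod
    def draw_triangle(self, vertices):
        ...

class OpenGLRenderer(Renderer):
    def draw_triangle(self, vertices):
        # A real backend would issue GL calls here.
        return f"OpenGL backend drew {len(vertices)} vertices"

class DirectXRenderer(Renderer):
    def draw_triangle(self, vertices):
        # A real backend would issue D3D calls here.
        return f"DirectX backend drew {len(vertices)} vertices"

def make_renderer(backend: str) -> Renderer:
    # The 'switch': one config value selects the implementation.
    return {"opengl": OpenGLRenderer, "directx": DirectXRenderer}[backend]()

r = make_renderer("opengl")
print(r.draw_triangle([(0, 0), (1, 0), (0, 1)]))
```

The same pattern applies to codecs: write to a generic encode/decode interface and the H.264 backend becomes replaceable rather than a lock-in point.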
It's all very well designing the perfect search engine (and the rest of the baggage that sits in the right margin), but interested parties will always try to break it to their own ends.
Yes, it would eliminate some of the 'landscapes', or reduce the possible variables of working landscapes. I'd use the word "disproof" with some reservation, but it's good to see papers that say "it can't be X" against the many that say "it can be Y"; while the innovation of the latter is needed, it's nice to see the former, especially in the recent climate of string theory (and its variants) becoming institutionalized.
If this work checks out, then it's "good science" (yay, a disproof!), and tells us a lot more about current ideas than the typical run-of-the-mill publications that exist today. At the risk of trolling: we have many broken or fudged models at the moment, and we need new ideas!
Cash set aside for lawyers, THEN leftovers to NPOs? 1. How much will the NPOs see? 2. Will the chosen NPOs be specially selected as sympathetic to the Google view on privacy? 3. Was this money already pre-allocated for NPOs before the settlement? [not taking sides; asking questions!]
Recent games are often linear stories, loaded with artistic media, pretending to be free-to-roam games. Given the linear game sequence, it _doesn't pay to make some parts really difficult_, because it would close off the remainder of the game, spoiling the satisfaction. I used to develop small games: usually procedural, without story, where the difficulty just keeps going up, no end! While my approach challenged every player, and offered replayability, it wouldn't result in the type of 'formula' game that gets published nowadays.
I'm late to the party, but would the following work? A new apps API which publishes session-based hashes for user IDs and query results. The app-processed results are then passed back through Facebook API to be published. It won't answer all concerns, but it would allow a class of 'non-identifying' apps to thrive. Slashdotters might find a clever way of finding repeating patterns to identify users and linking through to known clusters, but it should be better than the 'open access' that apps currently enjoy just to function.
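A sketch of the session-hash idea, assuming the platform hands apps an opaque, session-scoped token in place of the real user ID. All function and parameter names here are hypothetical illustrations, not the actual Facebook API.

```python
# Hypothetical: a keyed hash gives apps a stable handle within one
# session, but tokens from different sessions can't be linked without
# the (server-held) session key.
import hashlib
import hmac
import secrets

def new_session_key() -> bytes:
    # Generated fresh per app session, kept server-side.
    return secrets.token_bytes(32)

def opaque_id(real_user_id: str, key: bytes) -> str:
    # HMAC rather than a bare hash, so apps can't brute-force
    # known user IDs against the token.
    return hmac.new(key, real_user_id.encode(), hashlib.sha256).hexdigest()

key = new_session_key()
token = opaque_id("user-12345", key)
# Stable within a session:
assert token == opaque_id("user-12345", key)
# Unlinkable across sessions:
assert token != opaque_id("user-12345", new_session_key())
```

This is exactly the class of scheme that pattern-matching attacks would probe (repeated co-occurrence of the same tokens), but it is still a narrower channel than handing apps real IDs.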
It assumes that all Internet users are pirates: not so! Is this a statistical "taxation at the point of use", which assumes that the population contains some pirates, so we charge the whole population for use? I don't understand how the entertainment companies can justify the many ways they are taking money, other than direct payment for consumption. A solution (perhaps impractical, but ethical) would be to charge users for what they consume, and prosecute pirates.
Any anti-bot/spammer/crook system has to work at a level separate from the regular session. On joining a system, you should be able to set up a separate user/password that acts as admin for your account, with the admin account used only to control access. During regular use, you use your regular account, so your admin credentials are rarely exposed and far less likely to be stolen. If your regular account is hacked, disable it; the admin account can then be used to unlock it.
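A toy model of the two-credential scheme, to make the flow concrete. Everything here is a hypothetical sketch: a real system would store salted password hashes, rate-limit attempts, and so on.

```python
# Sketch: day-to-day logins never touch the admin credential;
# only the admin credential can re-enable a locked account.
class Account:
    def __init__(self, user_pw: str, admin_pw: str):
        self.user_pw = user_pw
        self.admin_pw = admin_pw
        self.locked = False

    def login(self, pw: str) -> bool:
        # Regular sessions use only the regular credential.
        return not self.locked and pw == self.user_pw

    def lock(self) -> None:
        # Triggered when the regular account is suspected hacked.
        self.locked = True

    def admin_unlock(self, admin_pw: str, new_user_pw: str) -> bool:
        # The rarely-used admin credential resets and unlocks.
        if admin_pw != self.admin_pw:
            return False
        self.user_pw = new_user_pw
        self.locked = False
        return True

acct = Account("everyday-pw", "admin-pw")
acct.lock()                                   # regular account compromised
assert not acct.login("everyday-pw")          # attacker is shut out
assert acct.admin_unlock("admin-pw", "fresh-pw")
assert acct.login("fresh-pw")                 # owner regains access
```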
Yep (apologies if that came across as patronising). Makes one wonder: at what slit size would diffraction start to cause problems, and would a larger frame overcome that particular problem (assuming the shutter engineering could be achieved)?
An SLR shutter doesn't expose the whole frame at the same instant: it's like a scanning line running down the frame, so if your gap (between the separately-controlled curtains) is small enough, you can have _any_ shutter speed you want; just don't expect the whole frame to be recording the same instant in time. Also, you don't need to put the shutter immediately in front of the film/sensor plane (but doing so helps give a clean image).
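The arithmetic behind that: for a focal-plane shutter, the effective exposure at any point on the frame is the slit width divided by the curtain speed. A back-of-envelope sketch (the numbers below are illustrative assumptions, not specs of any particular camera):

```python
# Effective exposure at a point = time the moving slit spends over it.
def effective_exposure_s(slit_mm: float, curtain_speed_mm_per_s: float) -> float:
    return slit_mm / curtain_speed_mm_per_s

# Assume curtains that cross a 24 mm frame in ~4 ms, i.e. ~6000 mm/s.
# A 1 mm slit then exposes each point for about 1/6000 s, even though
# the whole frame takes ~1/250 s from top to bottom.
print(effective_exposure_s(1.0, 6000.0))
```

Narrowing the slit raises the effective speed without the curtains moving any faster, which is exactly why very fast "shutter speeds" are possible, and why fast-moving subjects get skewed across the frame.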
I'll hazard a guess that clock speed excels in one particular case: tight-loop iteration. You can't do that with parallelism (ignoring some fancy pipelining to get part-way there). The fastest way to get the millionth result of a no-shortcut iterative sequence is to run the loop at the highest clock frequency possible.
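To illustrate the point, here is a stand-in example of such a sequence (the logistic map; my choice, not from the original post): each step needs the previous result, so extra cores can't help and only per-step speed matters.

```python
# An inherently sequential recurrence: x_{n+1} = r * x_n * (1 - x_n).
# No closed form or shortcut for general r, so the millionth value
# can only be reached by doing a million dependent steps in order.
def logistic_after(n: int, x: float = 0.5, r: float = 3.7) -> float:
    for _ in range(n):
        x = r * x * (1.0 - x)   # each iteration depends on the last
    return x

print(logistic_after(1_000_000))
```

Any parallel scheme still has to wait for step k before computing step k+1, so wall-clock time here scales with per-iteration latency, i.e. with clock speed.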
Try joules (in context, as a total amount of energy), or watts (as a rate: energy per unit time).