Many entities that seem to exceed light speed are in fact multiple entities changing measurable state in sequence, which looks like a single moving object. Take the example of a Mexican wave: we can set up a large one that seems to move along the crowd faster than the speed of light, yet no single person exceeds light speed. Likewise, one may take the interference pattern between two combs and make its highlights and shadows move faster than light. None of these examples breaks causality.
[warning: armchair comment] It looks like a charge-leakage problem degrading the cells, so reads must be retried sub-optimally. The fix would be for the drive to re-write or re-allocate old cells, which could become a maintenance task that noticeably degrades neither live performance nor lifespan. However, this does limit the drive's use as a portable or offline drive, where such maintenance cannot be performed routinely.
This will likely cost a lot to use: a competitive market for 'transactions' and licensing. Imagine each segment or corridor of airway being owned and sublet by someone who sets transit pricing. Imagine the licensing process itself being regulated like domain names. It's likely to be better if regulated exclusively by a central authority, on a not-for-profit basis.
Learning and intelligence are shaped by the experience of having a body, so AI and human perspectives will differ. Lifespans and context might also present difficulties. Some of these issues, and others, appear in "The T3 Report" (short fiction): http://johnvalentine.co.uk/fic...
Reduce those megawatts of power being sucked up by video cards: do high-resolution rendering only on the bit being looked at. Of course, it won't work that way; we'll use the same power to render better graphics.
Opinion: The true long-term way forward is oil-free fuel (all-electric) at the point of use, but this needs a higher order of support than hybrid technology. A cynical view is that this [article/policy] might only practically contribute to the subsidy of hybrid cars, which maintains oil industry interests. This interest could be safeguarded by spacing the charging stations at intervals greater than is practicable for electric-only vehicles (which have shorter range).
A standard can be openly documented, yet heavily patented and licensed. A competing standard can be only partially documented and a work in progress, yet free to use. Which is better? H.264 would be a poor choice going forward; not because of openness or technical capability, but because the IP owners are luring implementers in, hoping that early adopters will be irrevocably committed to a patented technology by the time the usage terms become a cash cow. What we need is good abstraction, so that we can freely switch between implementations of a standard interface: like having a graphics API that lets you use DirectX or OpenGL just by flipping a switch.
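The "flip a switch" abstraction might look like the sketch below: one interface, two swappable backends, selected by a single config value. The names (VideoCodec, H264Backend, VP8Backend) are hypothetical illustrations, not real codec APIs.

```python
# One abstract interface, multiple backends; switching standards is a
# one-line config change rather than a rewrite of every caller.
from abc import ABC, abstractmethod

class VideoCodec(ABC):
    @abstractmethod
    def encode(self, frames: list) -> bytes:
        """Encode a list of frames into a byte stream (placeholder)."""

class H264Backend(VideoCodec):
    def encode(self, frames):
        return b"h264:" + str(len(frames)).encode()  # stand-in for real encoding

class VP8Backend(VideoCodec):
    def encode(self, frames):
        return b"vp8:" + str(len(frames)).encode()   # stand-in for real encoding

def make_codec(name: str) -> VideoCodec:
    # The "switch": callers never name a concrete backend.
    return {"h264": H264Backend, "vp8": VP8Backend}[name]()

codec = make_codec("vp8")
print(codec.encode([0, 1, 2]))  # b'vp8:3'
```

If the patent terms on one backend turn sour, only `make_codec` changes; every consumer of `VideoCodec` is untouched.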
It's all very well designing the perfect search engine (and the rest of the baggage that sits in the right margin), but interested parties will always try to break it to their own ends.
Yes, it would eliminate some of the 'landscapes', or reduce the possible variables of working landscapes. I'd use the word 'disproof' reservedly, but it's good to see papers that say "it can't be X" against the many that say "it can be Y"; while the innovation of the latter is needed, it's nice to see the former, especially in the recent climate of string theory (and its variants) becoming institutionalized.
If this work checks out, then it's "good science" (yay, a disproof!), and tells us a lot more about current ideas than the typical run-of-the-mill publications that exist today. At the risk of trolling: we have many broken or fudged models at the moment, and we need new ideas!
Cash set aside for lawyers, THEN leftovers to NPOs? 1. How much will the NPOs see? 2. Will the chosen NPOs be specially selected as sympathetic to the Google view on privacy? 3. Was this money already pre-allocated for NPOs before the settlement? [not taking sides; asking questions!]
Recent games are often linear stories, loaded with artistic media, pretending to be free-roaming games. Given the linear game sequence, it _doesn't pay to make some parts really difficult_, because that would close off the remainder of the game and spoil the satisfaction. I used to develop small games: usually procedural, without story, where the difficulty just keeps rising, no end! While my approach challenged every player and offered replayability, it wouldn't result in the type of 'formula' game that gets published nowadays.
I'm late to the party, but would the following work? A new apps API that publishes session-based hashes for user IDs and query results; the app-processed results are then passed back through the Facebook API to be published. It wouldn't answer every concern, but it would allow a class of 'non-identifying' apps to thrive. Slashdotters might find a clever way of spotting repeating patterns to identify users and link them to known clusters, but it should still beat the 'open access' that apps currently enjoy just to function.
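A minimal sketch of the session-based hash idea, assuming the platform keys an HMAC per app session (the function names and key handling are illustrative, not any real Facebook API):

```python
# The platform hands apps an HMAC of the real user ID under a per-session
# key: stable within one session (so the app can keep state), but
# uncorrelated across sessions (so apps can't build identifying profiles).
import hashlib
import hmac
import secrets

def session_handle(user_id: str, session_key: bytes) -> str:
    return hmac.new(session_key, user_id.encode(), hashlib.sha256).hexdigest()

key_a = secrets.token_bytes(32)  # fresh key minted for one app session
key_b = secrets.token_bytes(32)  # a later session gets a different key

h1 = session_handle("user:12345", key_a)
h2 = session_handle("user:12345", key_a)
h3 = session_handle("user:12345", key_b)

print(h1 == h2)  # True: consistent within the session
print(h1 == h3)  # False: unlinkable across sessions
```

The pattern-matching attack mentioned above still applies to whatever query results the app sees, but the raw ID never leaves the platform.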
It assumes that all Internet users are pirates: not so! Is this statistical "taxation at the point of use", which assumes the population contains pirates and so charges the whole population? I don't understand how the entertainment companies can justify the many ways they take money, other than direct payment for consumption. A solution (perhaps impractical, but ethical) would be to charge users and prosecute pirates.
Any anti-bot/spammer/crook system has to work at a level separate from the regular session. On joining a system, you should be able to set up a separate user/password that acts as admin for your account, and that admin account is used to control access. During regular use, you use your regular account, so your admin credentials are rarely exposed and are less likely to be stolen. If your regular account is hacked, disable it; the admin account can then be used to unlock it.
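A minimal sketch of the two-tier idea, assuming one everyday credential and one admin credential per account (names are illustrative; a real system would store salted hashes, not plain passwords, as noted in the comments):

```python
# Two-tier account: the everyday login can be locked and unlocked only
# by a separate admin credential that is never used in regular sessions.
class Account:
    def __init__(self, user_pw: str, admin_pw: str):
        self.user_pw = user_pw    # a real system would store salted hashes
        self.admin_pw = admin_pw
        self.locked = False

    def login(self, pw: str) -> bool:
        # Regular session: fails while the account is locked.
        return (not self.locked) and pw == self.user_pw

    def admin_set_lock(self, admin_pw: str, locked: bool) -> bool:
        # Out-of-band control path, gated on the admin credential.
        if admin_pw != self.admin_pw:
            return False
        self.locked = locked
        return True

acct = Account("everyday-pw", "admin-pw")
acct.admin_set_lock("admin-pw", True)   # regular credentials stolen? lock it
print(acct.login("everyday-pw"))        # False: regular access disabled
acct.admin_set_lock("admin-pw", False)
print(acct.login("everyday-pw"))        # True: restored via the admin tier
```

Because the admin credential only ever crosses the wire during lock/unlock, its exposure surface is a tiny fraction of the everyday password's.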