A new abstraction at a well-chosen point in a stack may yield more general functionality, but there's a risk that the calling code then becomes less readable, takes more skill to write, or takes a performance hit. Do you have a good example of how you decided a case for abstraction?
You glance sideways at an organisation, and inwardly sigh. Why? Is there a common pattern? (A very open question; we might get surprising answers.)
Yes, that would be more accurate
This makes sense in the Apple ecosystem. It speeds up web browsing and streamlines the experience, and if ads are blocked at the browser or OS level, it gives Apple a chance to create its own approved ad market. I think it's a step too far to assume that they can arbitrarily insert unintended content into a web page or an existing ad slot.
Many entities that seem to exceed light speed are in fact multiple entities exhibiting a change in measurable state, in sequence, which looks like a single moving object. Take the example of a Mexican wave: we can set up a large one that seems to move along the crowd faster than the speed of light, yet no single person exceeds light speed. Likewise, one can take the interference pattern between two combs and make the highlights and shadows move faster than light. None of these examples breaks causality.
[warning: armchair comment] Looks like a leakage problem degrading the cells, so reads must be retried, sub-optimally. The fix would be for the drive to re-write/re-allocate old cells, which could become a maintenance task that noticeably degrades neither live performance nor lifespan. However, this does limit the drive's use as a portable or offline drive, where such maintenance cannot be performed routinely.
This will likely cost a lot to use: a competitive market for 'transactions' and licensing. Imagine each segment or corridor of airway being owned and sublet by someone who sets transit pricing. Imagine the licensing process itself being regulated like domain names. It's likely to be better if regulated exclusively by a central authority, on a not-for-profit basis.
Learning and intelligence are defined by the experience of having a body, so perspectives (AI/human) will be different. Also, lifespans and context might present difficulties. Some of this, and other issues are in "The T3 Report" (short fiction) http://johnvalentine.co.uk/fic...
Reduce those megawatts of power being sucked up by video cards: do high-resolution rendering only on the bit being looked at. Of course, it won't work that way; we'll use the same power to render better graphics.
Opinion: The true long-term way forward is oil-free fuel (all-electric) at the point of use, but this needs a higher order of support than hybrid technology. A cynical view is that this [article/policy] might only practically contribute to the subsidy of hybrid cars, which maintains oil industry interests. This interest could be safeguarded by spacing the charging stations at intervals greater than is practicable for electric-only vehicles (which have shorter range).
A standard can be openly documented, but heavily patented and licensed. A competing standard can be only partially documented and a work in progress, but free to use. Which is better? H.264 would be a poor choice going forward: not because of openness or technical capability, but because the IP owners are luring implementers in, hoping that early adopters will be irrevocably committed to a patented technology by the time the usage terms become a cash cow. What we need is good abstraction, so that we can freely switch between implementations of a standard interface: like having a graphics API that lets you use DirectX or OpenGL just by flipping a switch.
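The "flip a switch" idea can be sketched as follows; all class and function names here are illustrative inventions, not any real graphics or codec API. Calling code depends only on the abstract interface, and the backend is chosen by a single setting, so an encumbered implementation can be swapped out without touching callers.

```python
# Hypothetical sketch of an abstraction layer: callers program against
# one interface; the backend (think DirectX vs OpenGL, or one codec vs
# a rival) is selected by a configuration switch, not by the call sites.

from abc import ABC, abstractmethod

class Renderer(ABC):
    @abstractmethod
    def draw_triangle(self) -> str:
        """Render one triangle; return a description for the demo."""

class DirectXRenderer(Renderer):
    def draw_triangle(self) -> str:
        return "triangle via DirectX"

class OpenGLRenderer(Renderer):
    def draw_triangle(self) -> str:
        return "triangle via OpenGL"

BACKENDS = {"directx": DirectXRenderer, "opengl": OpenGLRenderer}

def make_renderer(name: str) -> Renderer:
    # The 'switch': only this setting changes, never the calling code.
    return BACKENDS[name]()

print(make_renderer("opengl").draw_triangle())
```

The point is that commitment to a patented technology then costs one line of configuration to undo, rather than a rewrite of every caller.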
It's all very well designing the perfect search engine (and the rest of the baggage that sits in the right margin), but interested parties will always try to subvert it for their own ends.
Yes, it would eliminate some of the 'landscapes', or reduce the possible variables of working landscapes. I'd use the word 'disproof' reservedly, but it's good to see papers that say "it can't be X" against the many that say "it can be Y"; while the innovation of the latter is needed, it's nice to see the former, especially in the recent climate of string theory (and its variants) becoming institutionalized.
If this work checks out, then it's "good science" (yay, a disproof!), and it tells us a lot more about current ideas than the run-of-the-mill publications we see today. At the risk of trolling: we have many broken or fudged models at the moment, and we need new ideas!
Cash set aside for lawyers, THEN leftovers to NPOs? 1. How much will the NPOs see? 2. Will the chosen NPOs be specially selected as sympathetic to the Google view on privacy? 3. Was this money already pre-allocated for NPOs before the settlement? [not taking sides; asking questions!]