Historically the act of publishing (making work available to readers) was tied to the quality control (QC) processes of academia. When you publish on paper this is a necessity, but where I see publishing headed is a separation of these two functions. In an online world there is really no reason to conflate the two. (My main reservation about open-access journals like PLoS One is that they are too much a replica of traditional journals.)
In my ideal world:
1. Everyone publishes their articles for free in an online repository (say Arxiv), starting as early as the preprint stage. If an author needs help with document preparation (typesetting, graphics, proofreading), they can contract with a freelancer through the repository. (I.e., you really don't need an editor at Nature to help you find typos.) An author can revise their paper at any time, but previous versions are kept. Additionally, material that is commonly not published today (code, complete datasets, analysis scripts) could be attached for reference, and publishing these supporting materials would be strongly encouraged.
2. Authors can submit their work to one or more editorial boards, for evaluation and (potential) selection. They pay for this service, likely several thousand dollars since the work is expensive. If an editorial board approves their work, it gets tagged in the repository in a very visible way, which can then be used for filtering/reading. Multiple tags could be attached to an article. Some editorial boards might for example only check a piece of work for accuracy (say in its statistical analysis, or simulation code). Others may focus on importance and potential impact. All of these tags together form one component of the article's "reputation" (see below).
3. All references to works submitted after the introduction of the system are links to those articles in the repository. So the repository can easily track how many citations a given article has, and the reputation of the articles citing it. The number and reputation of citing articles form a second component of the article's "reputation".
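The two reputation components above (editorial-board tags plus citation reputation) could be combined in a simple scoring scheme. Here is a minimal sketch in Python; the class names, tag weights, damping factor, and recursion depth are all invented for illustration, not part of the proposal itself:

```python
# Hypothetical sketch of the two-component article reputation described above.
# All names, weights, and scoring rules are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    tags: set = field(default_factory=set)        # editorial-board approvals (component 1)
    cited_by: list = field(default_factory=list)  # citing Article objects (component 2)

# Assumed weights for editorial-board tags; unknown tags get a default weight.
TAG_WEIGHTS = {"accuracy-checked": 1.0, "high-impact": 2.0}

def reputation(article, depth=2):
    """Tag score plus a damped sum over citing articles, bounded by depth."""
    score = sum(TAG_WEIGHTS.get(t, 0.5) for t in article.tags)
    if depth > 0:
        # Each citation contributes a fraction of the citing article's own reputation,
        # so citations from well-regarded articles count for more.
        score += 0.1 * sum(reputation(a, depth - 1) for a in article.cited_by)
    return score

paper = Article("Preprint A", tags={"accuracy-checked"})
citing = Article("Paper B", tags={"high-impact"}, cited_by=[])
paper.cited_by.append(citing)
print(reputation(paper))  # tag score 1.0 plus 0.1 * citing article's 2.0
```

A real repository would want something less gameable (e.g. a PageRank-style fixed point instead of a depth-bounded recursion), but the sketch shows how tags and citations can feed one composite score.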
Initially the editorial boards would evolve out of the current journal hierarchy, so for example in physics there would be a "Physical Review Letters" editorial board. (Which may continue printing a hardcopy of the PRL journal, at their discretion, if they can make the economics work.) New editorial boards could come into being, for example on specific functions like fact- or accuracy-checking. The reputations of these editorial boards would likely be relatively persistent over time, like the perceived reputations of print journals today.
I would submit this would also be eminently practical for the academic community to move to. It builds on the publishing and QC mechanisms that currently exist.
I'm not saying that gaming led to the ideas behind the GUI; these came from the Alto and elsewhere.
I'm saying that gaming was what drove graphics price/performance to a point where GUI-quality graphics hardware could be present in most PCs. Some market force had to be present to drive the industry toward a $100 graphics card that was GUI-capable. That market force was gaming.
You mention the graphics workstation companies (Apollo, Sun, SGI, NeXT, etc.), but they were not a factor. Yes, they had a lot of R&D and high-performance hardware, but they were targeting niche applications (CAD/CAM, imaging, research) where cost was no object. Perhaps some of their ideas filtered down, but we would never have seen a $100 graphics card come from these companies; the market forces were not present.
For as long as I've been involved with computing (early 1980s), two things have always held true:
1. Gaming has driven the performance envelope in many areas, which then filters down to other applications. For example, GUIs in the late 80s/90s would not have been possible if gaming hadn't pushed graphics technology 5-10 years earlier. More recently, GPUs led the way toward general multi-core processing, and game UIs led to the "tactile" interfaces that are now common on smartphones and tablets. Expect to see more recent gaming innovations like motion controllers and VR technology migrate into non-gaming applications over time.
2. People look down on gaming, and "gaming" machines. The C64 and Amiga were dismissed as "toys" by many, just as a lot of people dismiss the Xbox 360 or PS3 today. This I think is gradually changing, as people (and companies like Intel and AMD) realize that gaming is where the demand for higher performance is coming from. People only need their spreadsheet to go so fast, but gaming can always make use of more resources (for now at least).
Minimum wage does have the net effect of pricing certain jobs out of the market. The manufacturing jobs relocate overseas, and the service jobs just vanish (you mentioned ushering).
We do know that unemployment in the US is fairly low, so the people who would have had those jobs aren't just sitting around idle, by and large. The minimum wage forces lower-skilled workers to acquire skills. Everybody is required to provide at least $7.50/hr worth of value *somehow*. For this reason I think it's a positive thing. Speaking for myself, I don't need an usher or a full-time bathroom attendant; I'd rather see people investing to grow their skills.
The manufacturers must be going bananas trying to create a game for four different platforms.
In the next generation the Xbox and PS will each have standard x86-based PC architectures, with pretty mainstream GPUs. This will make it relatively easy for developers to target PC, Xbox, and PS (no more funky graphics pipelines, Cell processors, etc.). You could really just think of the Xbox 720 and PS4 as locked-down gaming PCs, packaged to be easy to buy and plug in.
In my opinion, they should be working with the Oculus Rift people to develop a box that can be worn as a backpack and ties into the goggles.
From a marketing standpoint this would be really hard for Nintendo to pull off. They are pretty much synonymous with low-performance, casual gaming. The box you envision would appeal to hardcore gamers, and it would be relatively expensive at the outset.
Couldn't agree more. There's no evidence, just accusations without any basis.
I know you're trolling here, but there's a broader point people should understand.
There are strong disincentives for any organization to report hacking attempts on their systems. Factors at play are: (a) nobody likes to admit they have weak security, (b) nobody wants to go public with evidence that would reveal details about their internal systems, and (c) there is usually little or nothing positive to be gained from such an accusation. (What would the WSJ have to gain by giving the Chinese a bad name?) All of this means that these attacks are vastly under-reported, and when companies do report one, the report is usually genuine.
You ask "why is rule of law important?" The answer is predictability. Businesses and individuals can make smarter decisions about their futures (where to invest, how to grow, what partnerships to engage in) if they have some measure of predictability about the future. If rules are arbitrary, or change every year, you lose predictability. And then decision-making is less than optimal.
Note that predictability is a potential outcome of the rule of law, but it is not guaranteed. Consider the US system today of arbitrarily granted and unpredictably upheld technology patents: "predictable" is the last word you'd use to characterize it, and it has become a dangerous minefield, especially for small companies.
One of the best indicators of a bad regulatory environment is uncertainty of outcomes. When companies are uncertain about outcomes of things like patent litigation, corporate tax rules, or future tax incentives, they cannot make intelligent business decisions. Bad decisions across an entire industry become a huge drain on business efficiency.
It's a Prisoner's Dilemma game: Everyone would be better off if nobody engaged in the bad behaviors (patent trolling, patenting trivial "innovations"), but unfortunately it's to everyone's unilateral advantage to engage in those behaviors.
Apple would love to get these injunctions, but really they have never behaved as though market share is their primary motivation. From the earliest days of the company they have opted for high-profit sales to a smaller niche audience. For iOS Apple could have easily (a) gone to multiple carriers much earlier than they did, and (b) produced cheaper models for the global market. But they didn't. Historically the high-margin strategy has worked for them.
What I think worries Apple now is that the high-margin strategy only works so long as there is a perceived quality premium. This time the competitor isn't MS-DOS. The best Android phones and tablets running Jelly Bean are genuinely good products, and in certain areas (LTE adoption, screen size, customizability) Apple is looking like a follower rather than a leader. I think what they get from these injunctions isn't so much a market share boost as a boost to their reputation as innovators.