No, it's the other way around. PCs are currently TERRIBLE compared to consoles. How can I say that? It's easy. There is no objective meaning of "terrible": it depends on what your goals are. Apparently you're one of the gamers who prioritizes eye-candy and/or processing power. I don't, and many others don't either.
Here's what I think is important:
1. I can actually play the f***ing game at all. The PC market has intentionally crippled used games with copy-protection and "activation". If you already activated your old game and try to resell it, good luck to the new owner who can't install it on their computer. But let's say this is *my* old game, not a used one. Five years down the road, if I want to run it on my new PC with my new version of Windows (because it's going to *have* to be Windows), can I still activate it to play? Are the company's activation servers even around? How do we deal with all the breakage from OS updates, malware, driver bugs, etc.?
2. I can actually play the f***ing game at all, without having to take out a bank loan. For under $300, I can buy a console off the shelf, pop in the disc for any game I own, and play it immediately. As long as I have that hardware, I keep the *freedom* to play those games 10 years from now if I wish. Let me see you play Crysis on an off-the-shelf computer for under $300. "Technically feasible" doesn't count. I'm referring to the ability to have a genuinely enjoyable gaming experience.
3. Consoles are dedicated to their job, with standardized hardware and software. PCs are general-purpose machines, and they do not excel at special purposes like gaming (or high-end audio or video) unless you spend a lot of money on *non-standardized* hardware and software. Because the console platform is predictable, quality control is easier: you only have to worry about one hardware platform coupled with one software platform. (Note that I wouldn't advocate the same for PCs. They really are for general purposes, not specifically for gaming.)
1. According to the whistleblower article, the last names of FairPoint's president and former CEO are "Nixon" and "Johnson": Nixon is the company's president, and Johnson was CEO before Hauser took over in June. Well, there's your problem, the company's run by President Nixon!
2. The new CEO, Hauser, responded to the fraud allegations: "We take these allegations seriously and will do a thorough investigation". To paraphrase: "We know we're busted, and we intend to do a very thorough cover-up, considering billions of dollars are on the line."
Clarification: Female breasts.
Saggy man-titties are nowhere near as ergonomic...so I've heard.
I agree. Isn't "duplication of functionality" by a 3rd party just a fancy term for COMPETITION? We can't have that in a free-market economy, now can we?
There are two camps using copyright law as protection:
1. Copyright law keeps source code non-proprietary (e.g. GPL)
2. Copyright law keeps source code proprietary, so you have to pay to use the product (e.g. most commercial software)
Now apply a 5 year expiration of copyright:
Result to 1:
The source code is already visible, and once copyright expires nothing protects it from someone taking it and making it proprietary, despite the authors' intention that it remain non-proprietary.
Result to 2:
The source code is NOT already visible. Lack of copyright protection makes the product free-as-in-beer, but mere expiration of copyright does not force the authors to release the source code. So no one else can take the source code the way they could with FLOSS whose copyright has expired.
So yes, there IS an imbalance of power. In no way does this help authors preserve the freedom to keep software non-proprietary.
And no, it's NOT just a simple case of each side having the right to keep its code open or closed as it sees fit. Expiration helps proprietary software stay proprietary, while removing the protection that keeps free software non-proprietary. Stallman is right: the only way to keep it fair is if both sides must make their source code available.
THINK, PEOPLE.
IANAL, but I thought that *one* essential reason laws waive the expectation of privacy in "public places" is that, by the very *nature* of such a place, it is essentially not private. For example, enforcing privacy when I walk outside would be too much of a practical burden, because that's actually *me* walking outside. There's only so much identity-hiding I can do.
But for a blog, by its very *nature* it works the other way around. Anonymity is the default, because the blog doesn't see who is actually sitting at the keyboard; identity has to be proactively established through some substitute, such as requiring a valid email address for registration. And the enforcement burden runs the other way too: it takes more effort to identify someone than not to (e.g. you could simply allow anonymous posts).
The point:
Although I am sharing *data* that becomes public, *I* am not personally in a public place, so I should reasonably assume I can have anonymity.
I get tired of hearing the same old discussion about whether or not the relational database is going to die. It isn't. But the new breed of *specialized* databases work well for their *specialized* purposes. Big surprise. But all of them inevitably make a trade-off. Anyone who works seriously with database design knows that it's all about trade-offs.
One of the main motivations for the new breed of databases is that a standard SQL database relies on things such as foreign keys and other constraints for data consistency, but that requires the data to be directly managed by the running DBMS process. When you require data to be distributed over a network (i.e. across many separate processes), the only way a *foreign key* can work is if the DBMS process has some sort of link over the network to the other DBMS process and then uses it somewhat as if it were local. (Other strategies involve using external application code for consistency rather than foreign keys, etc.) Of course, the DBMS can't apply its usual local low-level optimizations behind the scenes to handle that query efficiently over the network, so it doesn't scale. Specialized DBMSs for distributed data focus on optimizing for being distributed, while the typical SQL DBMS optimizes storage and retrieval of data as if it were local. The bottom line is that the traditional SQL database scales well vertically, but not horizontally with respect to hardware. Or rather, when you scale it horizontally, you forgo a lot of its advantages. The new breed of databases trades strict consistency and other assurances for "good enough" consistency and really fast retrieval of domain-specific data.
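To make the foreign-key point concrete, here's a minimal sketch using Python's built-in sqlite3 module (the table names and schema are invented for illustration, not taken from any particular system). The constraint is cheap to enforce precisely because both tables live under one engine in one process, which is exactly the check that gets expensive once the data is spread across machines:

    import sqlite3

    # Single local engine: both tables live under one DBMS process,
    # so the engine can enforce the relationship cheaply.
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per connection

    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("""
        CREATE TABLE orders (
            id INTEGER PRIMARY KEY,
            customer_id INTEGER NOT NULL REFERENCES customers(id)
        )
    """)

    conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Alice')")
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")  # fine

    try:
        # Orphan row: there is no customer 999. The engine rejects it
        # immediately because it can check the parent table locally.
        conn.execute("INSERT INTO orders (id, customer_id) VALUES (11, 999)")
    except sqlite3.IntegrityError as e:
        print("rejected:", e)  # FOREIGN KEY constraint failed

If "customers" lived on one server and "orders" on another, that same check would need a network round-trip (or application-side code), which is exactly why distributed stores tend to drop it.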
But not everyone is trying to be Google or Amazon. Financial institutions such as banks can't tolerate "good enough" consistency. The biggest problem with relational databases I see nowadays is that people are ignorant about why "relational" is such a good idea, and about how SQL only gets you part of the way to "relational" (SQL's shortcomings are a separate issue). The second biggest problem is that most people are used to only one or two data usage patterns, and if something "works for them", they assume it should *always* be done that way. For example, the hordes of people who barely know Excel (i.e. not a relational database) or Access and then like to give "expert" advice. Or a web programmer who believes ORMs are the One True Way because they abstract away the choice of DBMS in order to keep favorite language X, even though other people's needs are the opposite: perhaps we want to abstract away the choice of programming language so that we can keep the same database, and so maybe it's a good idea if the database itself ensures data consistency rather than relying on the ORM (see the sketch below).
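As a hedged illustration of that last point (the schema and numbers here are invented, not from any real bank), this is the kind of consistency rule you can push into the database itself, so it holds no matter which language, ORM, or ad-hoc script does the writing:

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # The rule lives in the schema, not in any particular ORM or app:
    # an account balance may never go negative.
    conn.execute("""
        CREATE TABLE accounts (
            id      INTEGER PRIMARY KEY,
            balance INTEGER NOT NULL CHECK (balance >= 0)
        )
    """)
    conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100)")

    try:
        # Any client -- a Java service, a cron job, someone at the
        # sqlite3 prompt -- hits the same wall; no ORM validation needed.
        conn.execute("UPDATE accounts SET balance = balance - 500 WHERE id = 1")
    except sqlite3.IntegrityError as e:
        print("rejected by the database:", e)  # CHECK constraint failed

Swap in a different application language tomorrow and the data is still protected, because the constraint travels with the database rather than with the code that happens to be talking to it.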
Numeric stability is probably not all that important when you're guessing.