Where are you getting those prices? A quick check of Newegg found the cheapest SSD at $160 for 240 GB ($0.67/GB). On the other hand, a 10K RPM 1 TB disk costs $200 ($0.20/GB). Are you comparing the cheapest consumer SSD to the most expensive enterprise hard disk?
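The per-gigabyte figures above are just price divided by capacity; a quick sketch using the Newegg numbers quoted:

```python
# Price-per-GB comparison using the figures quoted above.
ssd_price, ssd_gb = 160.0, 240    # cheapest SSD found
hdd_price, hdd_gb = 200.0, 1000   # 10K RPM 1 TB disk

ssd_per_gb = ssd_price / ssd_gb   # ~ $0.67/GB
hdd_per_gb = hdd_price / hdd_gb   # = $0.20/GB
print(f"SSD: ${ssd_per_gb:.2f}/GB, HDD: ${hdd_per_gb:.2f}/GB")
```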
I think you're overthinking this. Executive, Manager, and Secretary are just names for styles of chairs, not some kind of hierarchy or (current) intended use.
Secretary chairs, I believe, are not named for the person currently known as an administrative assistant, but for the piece of furniture called a secretary. A secretary is a tall cabinet: the lower part is drawers, the upper part has glass doors to display knick-knacks, china, whatever, and in between is a fold-down panel that serves as a desk. This piece of furniture would be prominent in a house. When a person wanted to write a letter, etc., they would drag a small, lightweight stool to the secretary and fold down the desk.
In the days when most people worked in factories, the only person with a desk was the manager. Hence, a 'manager' chair is basically any desk chair.
The executive chair is mostly to show that the person sitting in it is important, hence the leather.
Say what? Median household income in the US is $51K. The poverty line for a family is about $2K/month, and 15% of the people are below that. There is no support at all for your claim that 'most families' in the US live on $1000/month.
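The arithmetic behind that objection is simple enough to check directly (figures are the ones quoted above):

```python
# Sanity check on the '$1000/month for most families' claim.
median_income = 51_000           # quoted US median household income, annual ($)
claimed_annual = 1_000 * 12      # the '$1000/month' figure, annualized ($)

ratio = median_income / claimed_annual
print(ratio)  # the median household earns several times the claimed figure
```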
There are no push/pull instructions. There have been 'program call' and 'program return' instructions since the '80s, but these are complex instructions used for switching between addressing modes, address spaces, etc. Just the description of how they work takes almost 17 pages in the POP (Principles of Operation).
Trust me, I know infinitely more about it than you do.
You said 'because of capacity on demand...'. That is, in fact, false. The thing that lets them control the performance and configuration of the machine is not 'capacity on demand'; it is 'Licensed Internal Code Controlled Configuration.' The use of LIC CC also allows them to offer 'capacity on demand', but they are not the same thing, and LIC CC does not require COD. Also, notice the name of that facility; it should give you a clue as to what is actually licensed.
Having said that, I already explained why multiple performance levels are offered. Why would you pay (for hardware and software) for more performance than you need?
You have no idea what you are talking about. "Capacity on demand" has nothing to do with why a BC would run at 1/100 of its capacity (and there is no such thing as a 'base' model).
In the mainframe world, software is often priced by the capacity of the machine it is running on. Therefore, a customer who does not require speed can save significant money by ordering a machine that has had its capacity reduced. That saves money on both the hardware and software.
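The pricing effect described above can be sketched with a toy model. All the numbers here are invented for illustration (the flat per-capacity rate and the capacity figures are not real IBM or ISV price points):

```python
# Toy illustration of capacity-based software pricing.
# Hypothetical rate: all numbers are invented, not actual prices.
RATE_PER_UNIT = 1000.0  # assumed $ per capacity unit per month

def monthly_software_cost(capacity_units):
    """Software priced by the capacity of the machine it runs on."""
    return capacity_units * RATE_PER_UNIT

full = monthly_software_cost(400)     # full-capacity machine
reduced = monthly_software_cost(100)  # same box, capacity dialed down
print(full - reduced)  # monthly software savings from the reduced model
```

The point is just that the software bill tracks machine capacity, so a capacity-reduced order cuts both the hardware and the recurring software cost.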
One of the reasons IBM does not license z/OS to run on Hercules is because it breaks that pricing method. How would IBM and/or ISVs price their software, when the performance of the machine it is running on is completely unknown and changeable? The other reason they won't license z/OS is because Hercules infringes several of its patents.
What are you talking about? What the heck is 'native mainframe tech'? z/OS? By that logic, x86 is also 'dying' because servers are moving from Windows to Linux. In 2012 IBM sold more mainframes, as measured in units, capacity, and dollars, than at any point in its history. Over half of the capacity was in the form of 'new workload' engines. In other words, the market grew, not shrank.
And what do you mean by 'taking performance seriously again'? There has never been a time when they didn't take performance seriously. Mainframes have been on an 18-24 month release cycle for decades, and every new machine has been significantly faster than the previous generation. The only time this wasn't true was in the mid-90s, when IBM changed the technology from bipolar logic to CMOS microprocessors. That change wasn't because they didn't care about performance, but because customers no longer wanted machines that cost $40M, took up an entire room, and used enough energy to power a small town. CMOS technology finally caught up to the performance of the old bipolar machines around 2000.
What computer do you consider 'more modern' than an IBM EC12? What makes you think the technology in mainframes is 'dying'?
Instead of relying on Wikipedia, why don't you try reading the actual patents (which you obviously have not done)?
The Wikipedia entry says nothing about either Mr Nakamura's or Boston University's work in relation to these two patents (Mr Nakamura US 5290393, Boston University US 5686738). The patents are about how to grow the semiconductors, not simply what material they are made of. And those two methods of growing are, wait for it, different. Mr Nakamura grows the layers 'at a temperature of 900 to 1150.' Boston U grows it 'at a temperature of 600 in a nitrogen plasma'. This allows for a purer lattice without nitrogen vacancies.
Well I am sure that all those companies will be very happy that you were able to find such obvious stuff to invalidate the patent when they couldn't. Or maybe you are just talking out your ass. Yeah, I'll go with that one.
I think you need to look up the definitions of knowledge. Not one of them has the condition that a thing must be usable in practice.
Too bad the patent is not for blue LEDs, but for a method of making them.
Uh, right. Too bad this patent was filed in 1991.
Patents DO NOT 'retard' the advancement of knowledge. Patents are open. Patents can be freely discussed. What you cannot do is create a thing with that knowledge (without the permission of the patent holder). And I don't know of any university that claims its purpose is to let manufacturers make stuff without having to spend money developing it.
Your 'funding' argument makes no sense either. You yourself said that universities (and especially research universities) are non-profit. That means that all of the money they make goes back into the university. So, if some university patents some things, and makes money from that, clearly not ALL of the money provided for funding the discovery came from the public. A university that has both public funding AND recovers some of its costs through patent licensing has more money to spend doing more research. Where, exactly, is the problem with that?
Actually, it is nothing like OSS. First, and most obviously, there is almost zero cost associated with developing OSS. On the other hand, there is a tremendous cost associated with doing semiconductor research. The money to fund the research (including all the things that did not work) has to come from somewhere.
The second thing that needs to be considered is what motivates people. For OSS, there are basically two groups contributing - people who are not paid (hobbyists) and businesses. Hobbyists, of course, don't need a motive - they do it because they enjoy it. A vanishingly small number of people are going to be able to develop blue LEDs as a hobby. However, most OSS contributions are from businesses. So why do they do it? Because it has a good ROI. The money from their increased business more than offsets the cost of donating to OSS. And that is because nobody is making money selling OSS. They are selling some other product (support, hardware, whatever). The OSS itself is not the value they are selling.
Now take the example of blue LEDs. Does it make even the slightest bit of sense to think that Apple, for instance, is going to go to the major expense of developing a blue LED to make their product look cool, then just give that away to everyone else? Why would they (or anyone else)? The only people who are going to have an interest in developing a blue LED are those who either want to sell blue LEDs, or they want to sell the technology to make blue LEDs. And in neither case does it make even the slightest bit of sense that they would give that away.