Very impressive jump in 3 years from 16MB to 2GB. Those were the glory days for Cray.
500MB hard drives were at the top end of mini-computer size (mini drives went up to 768MB). Certainly huge for an imaginary car.
Well it was more than that. The Cray X-MP (1982) was the model from 30 years ago, with 16MB of RAM.
But good comment. Obviously Cray was picturing a machine with 512MB+ of memory by 1983.
GP was arguing that Intel was pocketing huge profits and I was disagreeing.
You are arguing that CPU gains don't have much practical value anymore and so customers are right to focus more on price. I'd agree, though I'd argue that Microsoft, Cisco, Intel... let this happen after 2000 when they started allowing machines to shift down-market. There are plenty of features we don't have on our systems today that we could have if computer prices were back (even not adjusting for inflation) to: $1k for a piece of crap, $2k for a good but limited machine, $4k for the computer that really does what you want.
Apple does something like this with OSX and targets their better machines. So for example Apple can implement OS features that are painful on HDD but offer huge performance gains on SSD. Very soon they can stop supporting resolutions much below 220 PPI. Both of those features are huge quality upgrades.
Now in terms of CPU, let's look at Apple. They've moved towards compressed RAM, which is very expensive in terms of CPU but delivers substantial battery life savings. Another thing they do is timer coalescing, which means the CPUs need to be able to surge, get everything pending done fast, and then shut down again to save battery. Automated workflows are another huge drain on CPU resources. Fast really matters.
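To make the coalescing idea concrete, here's a minimal sketch of a toy scheduler (my own illustration, not Apple's actual implementation): timers that can tolerate a little slack get drained in the same wake-up, so the CPU does one burst of work instead of several small ones.

    import heapq
    import itertools
    import time

    class CoalescingScheduler:
        def __init__(self, slack=0.050):
            self.slack = slack               # seconds a timer may be deferred
            self.pending = []                # min-heap of (deadline, seq, callback)
            self.seq = itertools.count()     # tie-breaker for equal deadlines

        def schedule(self, delay, callback):
            deadline = time.monotonic() + delay
            heapq.heappush(self.pending, (deadline, next(self.seq), callback))

        def run_once(self):
            # Sleep until the earliest deadline, then do one surge: run every
            # timer whose deadline falls inside the slack window, so the CPU
            # wakes once instead of once per timer.
            if not self.pending:
                return
            first_deadline = self.pending[0][0]
            time.sleep(max(0.0, first_deadline - time.monotonic()))
            while self.pending and self.pending[0][0] <= first_deadline + self.slack:
                _, _, callback = heapq.heappop(self.pending)
                callback()

    # Three timers within 50 ms of each other fire in a single wake-up.
    sched = CoalescingScheduler()
    for name, delay in [("sync", 0.10), ("poll", 0.12), ("flush", 0.14)]:
        sched.schedule(delay, lambda n=name: print("ran", n))
    sched.run_once()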
It was actually the opposite: a response to the cost of promotion getting too high. The old model had been that a band gets a local following, then a record contract, and the local experience plus the contract gave them a platform. As the music scene became more national it was getting more difficult to cross over from local to national, so record companies had to do national promotion for the groups they signed. MTV and videos were a comparatively cheap national promotion mechanism.
30 years ago, desktop systems from IBM could support up to 640KB, though very few did and 128-256KB was more common.
30 years ago IBM introduced the System/36 as a mini. That system could use up to 7MB of RAM.
30 years ago IBM was transitioning its mainframes to get beyond 32MB of RAM for the most demanding customers.
512MB of RAM was unthinkably large.
Excluding things that track labor costs, what items are you seeing that have a high inflation rate?
20 years ago you couldn't watch video on a mainstream PC.
20 years ago social media existed only for the technologically elite because it was complex.
I don't see how your dad disproves the point.
As for upgrades and Windows: I agree. Microsoft, in keeping XP a viable system as long as they did, allowed applications to not improve. They've seen the error of their ways and are starting to drive an upgrade cycle. Touchscreen applications are going to drive the next round.
As an OSX user I just did an upgrade and the difference in my experience is massive:
a) Applications are "always on". They mostly load instantly and they preserve their state between runs; most data doesn't need to be explicitly saved.
b) I'm using a 220 PPI display. Fonts are fantastic; regular monitors look blurry now. I'm using all sorts of virtualized sizing and graphical effects to look at the system. So for example I can effectively move windows between 10 virtual desktops because of the clarity.
Intel's profits are published. Net income is down from about $3b a quarter to close to 0.
The problem is skinflint PC buyers who want cheap machines, not Intel being excessively greedy.
30 years ago, stuff we do casually today, like networking, multiuser transactional databases, or resolution independence, was considered really hard and esoteric. The real programmers from back then quite often had far less complexity to deal with.
They have. Functional programming. By explicitly avoiding side effects, huge chunks of code can execute independently and in different orders. Moreover, by organizing the code using functional looping constructs, parallelizing compilers can tell how to break the work up.
Functional programming makes parallelism much easier.
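A minimal sketch of the idea (Python here purely for illustration): because a pure function has no side effects, mapping it over its inputs sequentially or in parallel must give the same result, so the runtime is free to split the work however it likes.

    from concurrent.futures import ProcessPoolExecutor

    # Pure function: the result depends only on the argument and nothing is
    # mutated, so individual calls can run in any order, or all at once.
    def count_primes_below(n):
        count = 0
        for candidate in range(2, n):
            if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
                count += 1
        return count

    if __name__ == "__main__":
        inputs = [10_000, 20_000, 30_000, 40_000]
        # Because count_primes_below has no side effects, swapping the
        # sequential map for a parallel one cannot change the answer,
        # only the wall-clock time.
        sequential = list(map(count_primes_below, inputs))
        with ProcessPoolExecutor() as pool:
            parallel = list(pool.map(count_primes_below, inputs))
        assert sequential == parallel
        print(parallel)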
Copyright law comes from books and music; for software it is mostly just a collection of analogies. The analogies are what matter. There isn't law governing an API, which is why I said above that what we need is black-letter law.
The reason it wouldn't work is that they haven't yet gotten a compatible library set working. The hypothetical is that they get ReactOS finished by 2016 and then turn their attention fully to
Microsoft did a lot of work backporting features to XP and thus allowing people to remain on XP. Those backports aren't part of current ReactOS.
"Footnote in history, IBM OS/2 would have dominated"? Sounds to me like he was saying a monopoly would still exist; it would just have been IBM's.
A more heterogeneous environment is an entirely different situation. Microsoft was dominant even in the age when IBM compatibles weren't fully compatible; DOS offered a common platform that applications could target. I'd suspect that if hardware unification had never happened, Microsoft would quickly have had to abstract the hardware details through the OS, and applications would have tied themselves even more tightly to DOS / Windows than they are today. More or less what Android is doing for various phone systems. So yes, I think they would potentially have had an even bigger monopoly, since such an abstraction layer would have worked well for embedded starting in the 1980s, the same way Android is working so well for embedded today.
You have to make a more substantial change to the structure to avoid a copyright violation. Think of it this way: if you were copying over encyclopedia articles, re-indenting them wouldn't solve it.