CPU speed hasn't improved much since the 3 GHz wall
Clock speeds haven't improved much, but instructions per clock (as well as work per instruction) have increased quite a lot. Compare benchmarks of a 3 GHz P4 and a 3 GHz turbo-boosted i7 on a single-threaded workload and you'll see a huge difference, and that's before you consider that core counts have been going up.
PC monitor resolutions have flattened out with the economies of scale of 1366x768 and 1920x1080 panels
The 4K display on my desk, which cost about £200, says otherwise, as does the 15" 2880x1800 panel in my laptop and the 10" 1920x1080 in my oldish tablet (the manufacturer's newer model has a higher resolution).
And the form factor for a PC with a preinstalled multi-window OS hasn't changed much because adult human hands haven't changed much
My current laptop is about half the thickness and a lot lighter than the one I bought from the same manufacturer, in the same market segment, about 6 years ago.
I'm sure there were a bunch of students genuinely doing research from the profiles' info
Unless you mean that in the sense of 'research, heh, nudge-nudge-wink-wink, say no more', I'd be very surprised. Any ethics committee that approves such an experiment would be seriously derelict in their duty (anonymity and informed consent? What are they?).
Well, that just goes contrary to my understanding of what the main CPU is supposed to do: crunch data, as much of it as efficiently as possible, in the smallest package available
Why? Especially in a desktop package, space isn't a constraint. Die area is cheap, heat dissipation is expensive. Your choices are either add some rarely-used coprocessors in the available space, or don't use the space. The cost is the same in both cases.
Specialized hardware that's rarely used (relatively speaking) should reside outside of it, on the PCIe bus, assuming latency and bandwidth considerations are met
Latency is one big issue. Another is power. Off-chip communication is slow and very power intensive. The ARM GPUs, for example, compute a hash of each tile before writing it out to the frame buffer, and if the hash is the same as last time then they don't write it. That extra computation, which only saves a fraction of the total writes, is still a net win for power.
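The tile-hashing trick above can be sketched in a few lines. This is a minimal illustration of the idea (hash each tile, skip the expensive off-chip write when it matches the previous frame), not Arm's actual hardware scheme: the function names and tile size here are made up for the example, and real hardware uses a cheap CRC-like signature rather than a stdlib hash.

```python
import hashlib

TILE_SIZE = 16  # illustrative tile dimension in pixels


def tile_hash(tile_pixels: bytes) -> bytes:
    # Stand-in for the cheap hardware signature computed per tile.
    return hashlib.md5(tile_pixels).digest()


def write_frame(frame_tiles, prev_hashes, framebuffer):
    """Write only the tiles that changed since the last frame.

    frame_tiles: dict mapping (tx, ty) -> bytes of rendered pixel data
    prev_hashes: dict mapping (tx, ty) -> that tile's hash from last frame
    framebuffer: dict standing in for external memory
    Returns the number of tile writes actually performed.
    """
    writes = 0
    for coord, pixels in frame_tiles.items():
        h = tile_hash(pixels)
        if prev_hashes.get(coord) != h:
            framebuffer[coord] = pixels  # the expensive off-chip write
            prev_hashes[coord] = h
            writes += 1
    return writes
```

For a mostly static screen (a desktop UI, a paused video) nearly every tile hashes the same from frame to frame, so the small per-tile hashing cost eliminates the bulk of the memory traffic, which is where the power goes.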
I find it slightly ironic that you use SIMD as a counterexample. SIMD is precisely the kind of thing I'm talking about: something that's a big win for some workloads and can be powered off most of the time.
Right now I can get a predictable ride with a car in front of my house in 10 minutes. When you're on a 7:30 AM flight, how much extra time do you have to shave $5 off a ride that cost $20 to begin with and is at least a third less than a cab ever was?
I don't know, how long in advance do you know that you'll be taking the flight? Since the flight is itself travel, did you decide to fly only at 6:30 AM that same day? If not, and you decided to fly at least a few days in advance, why not book both at once?
This isn't solar PV, this is solar thermal
Well, maybe that's the first mistake right there. PV surpassed CSP in cost-effectiveness a long time ago; that's why so many long-planned CSP projects have failed in recent years.
Real Programs don't use shared text. Otherwise, how can they use functions for scratch space after they are finished calling them?