Either you're trolling or you have no frigging idea what you're talking about.
It is true that the low-end cards are often just crippled versions of the high-end cards, something which, as despicable as it might be, is nothing new in the world of technology. But going from this to saying that there is no competition and no (or slow) progress is a step into ignorance (or trolling).
I've been dealing with GPUs (for computing, not gaming) for over five years, that is to say, almost since the beginning of proper hardware support for GPU computing. And there has been a lot of progress, even with the very little competition there has been so far.
NVIDIA alone has produced three major architectures, with very significant differences between them. Compare the capabilities of a Tesla (1st gen) with those of a Fermi (2nd gen) or a Kepler (3rd gen), for example: Fermi introduced an L2 and an L1 cache, which were not present in the Tesla arch, lifting some of the very strict algorithmic restrictions imposed on memory-bound kernels; it also introduced hardware-level support for double precision. Kepler is not as big a change over Fermi, but it has introduced things such as the ability for stream processors to swizzle private variables among themselves (see the sketch below), which is a rather revolutionary idea in the GPGPU paradigm. And six times more stream processors per compute unit over the previous generation is not exactly something I'd call "not that much different".
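To make that swizzling concrete, here's a minimal CUDA sketch of my own (warp_sum and warp_sums are made-up names, not from any real codebase) of a warp-level sum reduction built on Kepler's __shfl_down intrinsic, which lets the lanes of a warp read each other's registers directly:

    // Warp-wide sum reduction on Kepler (sm_30+): each of the 32 lanes holds
    // one float; after the loop, lane 0 holds the warp's total. No shared
    // memory and no __syncthreads() involved; the data moves register to
    // register. (On CUDA 9+ you'd use __shfl_down_sync instead.)
    __device__ float warp_sum(float val)
    {
        for (int offset = 16; offset > 0; offset >>= 1)
            val += __shfl_down(val, offset); // fetch val from the lane `offset` above
        return val;
    }

    __global__ void warp_sums(const float *in, float *out)
    {
        const int tid = blockIdx.x * blockDim.x + threadIdx.x;
        const float s = warp_sum(in[tid]);
        if ((tid & 31) == 0)  // one result per warp
            out[tid >> 5] = s;
    }

Before Kepler, the only way to pull this off was through shared memory, with the associated synchronization overhead.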
AMD has only had one major overhaul (the introduction of GCN) instead of two, but I hardly need to spend more words on how much of a change it was compared to their previous VLIW architectures. It's a completely different beast, and the most important benefit is that its huge computing power can be harnessed much more straightforwardly. If you ever had to hand-vectorize your code hunting for the pre-GCN sweet spot of workload per wavefront, you know what a PITN that was; see the sketch below.
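For the curious, a hand-wavy OpenCL sketch (hypothetical saxpy kernels of mine, with n assumed to be a multiple of 4 for the vectorized one) of what that difference means in practice:

    /* Pre-GCN (VLIW) style: pack the work into float4 by hand so the
       4/5-wide VLIW slots stay filled. */
    __kernel void saxpy_vec4(__global float4 *y,
                             __global const float4 *x,
                             const float a)
    {
        const size_t i = get_global_id(0);
        y[i] = a * x[i] + y[i];
    }

    /* GCN style: the plain scalar version already runs at full throughput,
       since each work-item maps to a scalar SIMD lane. */
    __kernel void saxpy(__global float *y,
                        __global const float *x,
                        const float a)
    {
        const size_t i = get_global_id(0);
        y[i] = a * x[i] + y[i];
    }

And that's the trivial case; for anything with branches or irregular access patterns, keeping the VLIW slots filled got ugly fast.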
I would actually hope they stop coming up with new archs for a while and spend some more time refining their software side. AMD has some of the worst drivers ever shipped by a major hardware manufacturer (in fact, considering they've consistently had better, cheaper hardware, there isn't really any other explanation for their inability to gain dominance in the GPU market), but NVIDIA isn't exactly problem-free either: their OpenCL support, for example, is ancient and crappy (obviously so, since they'd rather have people use CUDA to do compute on their GPUs).
And hardware-wise, Intel is finally stepping up its game. With the HD 4000 they've finally managed to produce an IGP with decent performance (it even supports compute), although AMD's APUs are still top dog. On the HPC side, their Xeon Phi offerings are very interesting competitors to NVIDIA's Tesla cards (the brand name for the HPC-dedicated devices, not the arch).