Christ. Whilst it looks like you're trolling, since you're one of Slashdot's premier PC master race guys and you're displaying a certain arrogance towards the guys who designed these consoles, assuming they must just be less competent because they only produced a "primitive" OS (when the reality is they're undoubtedly smart guys making smart choices), I'll give you the benefit of the doubt and answer.
It's got nothing to do with one being more primitive than the other. On the contrary: because consoles can do away with all the legacy and general-purpose cruft that PCs have to cater to, and because you're looking at dedicated hardware fulfilling a specific goal, if anything the opposite is true. Console OSes are arguably less primitive, because they don't have to support decades of previous software the way Windows does. It doesn't matter much anyway, because "primitive" is an entirely subjective term. A lightweight OS written yesterday might be called primitive because it lacks features, or cutting edge because it was written yesterday.
The reason is simply that the two systems are different. Neither is superior to the other; they have different purposes and goals, and that inevitably results in different design decisions.
Consoles perform a lot of background services. Whilst talk of console price-performance ratios inevitably involves some smartass pricing up PC components they claim are superior for the same money, the reality is that they never are, because they typically exclude everything else in the box: the game controllers, through to the very bespoke hardware that handles certain types of processing more efficiently than an equivalently priced PC ever could. The Xbox One, for example, constantly handles background processing for gesture recognition, voice recognition, multi-user voice chat, friends lists, and continuous recording of gameplay with the ability to save the last 5 minutes of that recording to disk, plus streaming that video across the internet and background apps like live TV display. The reality is that you cannot build a £250 PC that does all of that whilst still pulling off 1080p at 60fps, or an equivalent with other areas of high graphical fidelity.
Which isn't to say that you can't pay more and build a PC that does all that and then some. That's not my point; I'm not saying one is better than the other, merely that there is nothing inherently deficient about console hardware for the price. It's good at what it does and gives you the most gaming capability for that amount of money, but certainly, if you have money to spare, then yes, absolutely, a PC can do all that and more, without a doubt.
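To make that "constantly recording the last 5 minutes" point concrete, here's a minimal sketch of the ring-buffer technique such a game DVR is generally built on. To be clear, none of this is Microsoft's actual code, and the chunk size and rate are hypothetical numbers I've picked for illustration; it just shows the general idea of continuously overwriting a fixed window of encoded video:

    /* Illustrative sketch only: a fixed-window "game DVR" ring buffer.
     * Chunk size and rate are made-up; allocate the struct on the heap
     * in practice, since it's far too large for the stack. */
    #include <stdlib.h>
    #include <string.h>

    #define SECONDS_KEPT   300           /* "last 5 minutes" */
    #define CHUNK_BYTES    (512 * 1024)  /* hypothetical encoded chunk, 1/sec */
    #define SLOTS          SECONDS_KEPT

    struct dvr_ring {
        unsigned char data[SLOTS][CHUNK_BYTES];
        size_t len[SLOTS];   /* bytes actually used in each slot */
        size_t head;         /* next slot to overwrite */
        size_t count;        /* slots filled so far, up to SLOTS */
    };

    /* Called once per encoded chunk; older footage is silently overwritten. */
    void dvr_push(struct dvr_ring *r, const unsigned char *chunk, size_t n)
    {
        if (n > CHUNK_BYTES)
            n = CHUNK_BYTES;
        memcpy(r->data[r->head], chunk, n);
        r->len[r->head] = n;
        r->head = (r->head + 1) % SLOTS;
        if (r->count < SLOTS)
            r->count++;
    }

    /* "Save clip": walk the ring oldest-to-newest, handing each chunk
     * to a writer callback (e.g. a file writer). */
    void dvr_save(const struct dvr_ring *r,
                  void (*write_chunk)(const unsigned char *, size_t))
    {
        size_t start = (r->head + SLOTS - r->count) % SLOTS;
        for (size_t i = 0; i < r->count; i++) {
            size_t slot = (start + i) % SLOTS;
            write_chunk(r->data[slot], r->len[slot]);
        }
    }

The specific code isn't the point; the point is that this sort of thing has to run constantly, at a guaranteed rate, no matter what the game is doing. Which brings us to the cores.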
So with that out of the way, to answer why they reserved cores: it's about user experience. Yes, I know that's a wishy-washy term, but bear with me. On a PC you're in charge of the system. You're in control, which means you have the flexibility to do whatever you want, but you also have to take responsibility when you fuck things up. If you decide to mine bitcoins whilst trying to play a game and one or both end up crawling to a halt and being useless, that's your problem; you have to make a conscious choice to do one or the other, or to restrict one or the other so you can do both. Consoles don't give you that choice, nor are they meant to. They're meant to be easy to use and hassle-free, and as such the Xbox team has to make different design choices to the Windows team: whilst the Windows team gives you full control of your system, the Xbox team needs to make sure the system can always perform its minimum baseline without fail, and that nothing the developers do or don't do can break that.
And this is where it comes to a head: if you fuck something up that impacts the performance of a game or a background task on a PC, that's your problem, but if a game on a console stutters to a halt, or a background task like voice chat or streaming starts to fail because something has hogged too many resources, that becomes THEIR problem. Given that, I hope you can see why the console designers prefer to reserve resources, making sure the experience is consistent and that developers know the bounds they must design their game within.
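For a feel of what "reserving cores" means mechanically, here's the same idea expressed with the standard Linux affinity API. Microsoft's actual mechanism isn't public in this kind of detail (it's reportedly enforced by the system software/hypervisor rather than opted into by the game), so treat this purely as an analogue of the concept:

    /* Linux analogue of core reservation: confine the "game" process to
     * cores 0-5, leaving cores 6-7 free for system services. A sketch of
     * the concept only; not how the Xbox One actually does it. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        cpu_set_t game_cores;
        CPU_ZERO(&game_cores);

        /* Cores 0-5 for the game; 6 and 7 stay reserved for the OS. */
        for (int cpu = 0; cpu <= 5; cpu++)
            CPU_SET(cpu, &game_cores);

        if (sched_setaffinity(0 /* calling process */, sizeof(game_cores),
                              &game_cores) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        /* Affinity is inherited across exec, so from here on the scheduler
         * will never run the game's threads on cores 6-7, and a misbehaving
         * game can't starve voice chat, the DVR, streaming, etc. */
        execlp("./the_game", "./the_game", (char *)NULL); /* hypothetical binary */
        perror("execlp");
        return 1;
    }

The crucial difference on a console is that the game has no say in this: the reservation sits below the game, which is exactly the point of the design.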
As for why they're freeing up cores now, I'd wager there are two reasons:
1) We're 2 years into a console cycle that may only be 5 or 6 years long. Initially they may have judged that they'd need 2 cores to implement future, as-yet-unknown functionality, but they're now far enough into the lifecycle to say confidently that if they desperately need more power in 3 years, it'll be time for the next-generation console release, or at least time to start planning it.
2) At release they had no idea what curveballs the competition were going to throw, or what resources they might need to dig out to respond. If Microsoft had done something really innovative with Kinect and it had become a killer feature, Sony would have wanted to be sure they could respond without having to say "Oh shit, our console is all out of capacity, we can't". Now that the competition is known and the roadmaps are pretty settled, they're probably confident they don't need a whole core of extra capacity held in reserve, so they're happy to free it up.
Probably the last big question mark on the horizon that fed into these decisions was VR: how much guaranteed processing capacity needs to be kept aside for it? As those technologies have begun to mature, they've probably concluded there's now nothing known on the horizon that would ever need this capacity, so they might as well release it to the devs.
Is that a sufficient answer, or is this going to be another one of those "but my PC is magic and only cost £20 but beats a cluster of PS6s" type discussions that are rife but pointless on Slashdot?