I don't own a Rift, and I likely won't.
What I remember from the Quake 1 days was this: The rendering was fisheyed. If I looked at a pillar, and then turned a few degrees to the right, the pillar got -bigger-: it consumed more pixels at the side of the screen than it did in the middle of the screen.
This really bothered me at the time. I complained about it, and folks said "Well how ELSE would it be shown?"
And I'm all like "I don't care. That's not how I see things. And I hate it."
And then they're all "Whatever."
Meanwhile, I'm sitting here, looking at a pillar in real life, and I turn my head (or my gaze) to the side, and it doesn't get bigger. It just moves over a bit in my field of vision, and doesn't look at all like looking through a magnified peephole in a door.
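To put a number on it, here's a toy calculation in C showing how a flat (rectilinear) perspective projection stretches things toward the edge of the screen. All the numbers are made up for illustration: a 90-degree horizontal FOV, a 640-pixel-wide screen, and a pillar that subtends 5 degrees of view.

```c
/* Toy calculation: under a flat (rectilinear) perspective projection, an
 * object of fixed angular size covers more screen pixels the further it
 * sits from the center of view.  Numbers are made up for illustration. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI         = 3.14159265358979323846;
    const double fov_deg    = 90.0;   /* horizontal field of view        */
    const double screen_px  = 640.0;  /* screen width in pixels          */
    const double pillar_deg = 5.0;    /* angular width of the pillar     */

    /* Distance from eye to the projection plane, in pixels, for this FOV. */
    const double focal = (screen_px / 2.0) / tan((fov_deg / 2.0) * PI / 180.0);

    for (double center_deg = 0.0; center_deg <= 40.0; center_deg += 10.0) {
        double a = (center_deg - pillar_deg / 2.0) * PI / 180.0;
        double b = (center_deg + pillar_deg / 2.0) * PI / 180.0;
        /* Projected positions of the pillar's two edges on the screen. */
        double width_px = focal * (tan(b) - tan(a));
        printf("pillar centered %2.0f deg off-axis -> %5.1f px wide\n",
               center_deg, width_px);
    }
    return 0;
}
```

With those numbers, the same 5-degree pillar goes from about 28 pixels wide at the center of the screen to about 48 pixels wide when it's 40 degrees off to the side, which is exactly the "it got bigger when I turned" effect.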
In retrospect, it is clear that this rendering method was done so that CPUs of the day could keep up at a reasonable framerate, as the periphery would require fewer polygons and thus render faster.
It also seems clear to me that with something like the Rift, which is intended to encompass the entire field of view, the system would need to be particularly careful about how it handles such things.
But then, output devices don't control the fisheye effect; game programmers do. Perhaps it can improve simply through better software design.
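For what it's worth, here's a rough sketch of the kind of per-pixel warp a game could do in software: render with the usual flat projection, then re-sample the image so that equal view angles get equal numbers of pixels (an equidistant, fisheye-style mapping). This is just one possible mapping I'm assuming for illustration, not how the Rift or any particular engine actually handles it, and the function name and numbers are made up.

```c
/* Sketch of one possible software-side fix (assumed, not any engine's or
 * the Rift's actual method): render with a flat projection, then warp so
 * equal view angles get equal pixel counts. */
#include <stdio.h>
#include <math.h>

/* For an output pixel at offset out_x from the screen center, return the
 * offset in the flat rectilinear render it should sample from.
 * half_w: half the screen width in pixels; half_fov: half the horizontal
 * field of view in radians. */
static double warp_sample_x(double out_x, double half_w, double half_fov)
{
    double angle = (out_x / half_w) * half_fov; /* output: angle is linear in pixels    */
    double focal = half_w / tan(half_fov);      /* flat render's focal length in pixels */
    return focal * tan(angle);                  /* flat render: offset grows as tan()   */
}

int main(void)
{
    const double PI       = 3.14159265358979323846;
    const double half_w   = 320.0;                 /* 640 px wide screen */
    const double half_fov = 45.0 * PI / 180.0;     /* 90 degree FOV      */

    for (double x = 0.0; x <= 320.0; x += 80.0)
        printf("output pixel %5.1f samples flat pixel %6.1f\n",
               x, warp_sample_x(x, half_w, half_fov));
    return 0;
}
```

The effect is that the warped image compresses the over-stretched edges of the flat render back down, so a pillar off to the side covers roughly the same angular width on screen as one dead ahead.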