I don't see anywhere in the article where they claim that their results are any better than simply asking the player "Was this game fun to play?"
Said question is already asked of focus groups extensively during development of games.
The methodology the article provides isn't going to give developers any better feedback than the way we already do it -- it just lets them put up nice graphs and numbers that tell us what we already know.
Yes, it is interesting to know that the psychological reactions to playing computer games are similar to those from playing real-world sports, but that doesn't give us a better process for making computer games than the one we have now.
Add to that the fact that often 75-90% of the game's development has to be finished before you really have something playable that could be used for this kind of testing. It is only after the majority of the game is done that user feedback actually becomes useful -- before that, what you have is a pile of code that compiles but only superficially resembles what the final product will be. Come up with a system we can use on a game design document BEFORE we spend a year programming our way to the alpha stage, and you will have something useful.
Basically, I get the impression that the people behind the study don't really understand how computer games are actually made.