All upscaling algorithms make up data based on assumptions about what "typical" hi-res images should look like given their low-res counterparts. That doesn't mean they are lying or misrepresenting anything. Some assumptions are more statistically valid than others, and some produce more aesthetically pleasing results than others, actually yielding images that are genuinely more likely to be close to the true image than nearest-neighbor output.
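To make the point concrete, here's a minimal sketch (not from the paper, just an illustration) of two classic upscalers in plain NumPy. Both invent the missing pixels; they just encode different assumptions: nearest neighbor assumes the image is piecewise constant, bilinear assumes it varies smoothly. For natural images, the smoothness assumption is statistically better on average, which is why bilinear usually looks closer to the true hi-res image.

```python
import numpy as np

def upscale_nearest(img, factor):
    # Assumption: the hi-res image is piecewise constant,
    # so each low-res pixel is simply repeated factor x factor times.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def upscale_bilinear(img, factor):
    # Assumption: the hi-res image varies smoothly, so each new
    # pixel is a weighted average of its four low-res neighbors.
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

low = np.array([[0.0, 1.0],
                [1.0, 0.0]])
near = upscale_nearest(low, 2)    # hard 2x2 blocks of 0s and 1s
soft = upscale_bilinear(low, 2)   # smooth ramp between the corners
```

Learned methods like Google's go one step further: instead of a hand-picked smoothness prior, the "assumption" is a statistical model fit to millions of real hi-res/low-res image pairs.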
Nowhere in Google's paper do they suggest that these images be used for forensic purposes, nor do they claim to find a "deeper truth" or information beyond what actually exists in the images. They developed an approach that produces better results for common classes of images than previous algorithms, which is useful for a large number of applications that don't require the level of rigor that forensics does.