It is very worrying!
In TFA, there's an example of an up-scaled tram. There's a blue ... something, perhaps a warning notice, near the front of the tram. But imagine this were the tram's registration number. It isn't legible in the original, and I don't believe there are enough pixels to even take a guess. However, I suspect the "AI magic" would draw on detail from similar images it has been trained on... thus, it _could_ show a plausible but incorrect registration number.
*We* know this information isn't reliable, but it could be difficult for a lay-person to defend themselves when presented with an apparent photograph showing what appears to be absolute, incontrovertible truth!
Obviously, right now, an "expert" is required to enhance a photo, and they should know what may or may not be trustworthy. But imagine a future where this tech is embedded in cheap consumer cameras. At that point, I worry that people will simply believe what they see in a photo they think they took themselves.
Imagine a phone-camera with (effectively) infinite pinch/zoom!
Still cool though! ;-)