They compared how well it compresses relative to lossless, but is that baseline losslessly compressed data or raw uncompressed data? And how does it affect 'doing science' with the data?
Most non-earth-science imaging data (astronomy and solar physics, but also medical and even some archival document scans) uses FITS (Flexible Image Transport System) or variations of it. You can then compress the data portion of the file, with most groups using Rice compression (a special case of Golomb coding).
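For a concrete sense of what that looks like in practice, here is a minimal sketch using astropy (the filenames are made up): read an image HDU and rewrite it with Rice tile compression. One caveat: for floating-point data astropy quantizes before Rice-coding by default, which is itself lossy; integer data round-trips losslessly.

    # Minimal sketch, assuming astropy is installed and 'image.fits' exists.
    from astropy.io import fits

    with fits.open('image.fits') as hdul:        # hypothetical input file
        data = hdul[0].data
        header = hdul[0].header

    # CompImageHDU stores the image as a tile-compressed binary table inside
    # the FITS file; RICE_1 is the usual (and default) compression type.
    comp = fits.CompImageHDU(data=data, header=header, compression_type='RICE_1')
    fits.HDUList([fits.PrimaryHDU(), comp]).writeto('image_rice.fits', overwrite=True)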
It supports data cubes and higher-dimensional data: you can either stack multiple data segments (HDUs) into a file, or declare multiple dimensions and how they're organized within a single data segment, then supply the bytestream for the data.
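A quick sketch of the single-segment, multi-dimension case (again astropy, made-up filename): the cube's layout is carried entirely by the NAXIS* keywords, and the data segment is just the flattened bytestream.

    import numpy as np
    from astropy.io import fits

    # e.g. 5 wavelengths x 256 x 256 pixels in one data segment
    cube = np.zeros((5, 256, 256), dtype=np.int16)
    fits.PrimaryHDU(cube).writeto('cube.fits', overwrite=True)

    with fits.open('cube.fits') as hdul:
        hdr = hdul[0].header
        # FITS lists axes fastest-varying first, so this prints: 3 256 256 5
        print(hdr['NAXIS'], hdr['NAXIS1'], hdr['NAXIS2'], hdr['NAXIS3'])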
VOTable is derived from FITS, but uses an XML header to get around some of the quirks of the FITS header, which was designed around punch cards and before bytes were standardized at 8 bits: keyword names are capped at 8 characters; long strings have to be spread across CONTINUE cards, which older libraries will truncate; and headers are always padded to a multiple of 2880 bytes (with some groups padding them out further so they can splice in new keywords without needing to re-write the whole file). But VOTable is mostly used for data tables, not images or other binary data.
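Those header quirks are easy to poke at with astropy; the keyword and value below are just placeholders.

    from astropy.io import fits

    h = fits.Header()
    h['LONGDESC'] = 'x' * 200     # too long for one 80-character card...
    print(repr(h))                # ...so astropy spreads it across CONTINUE cards
    print(len(h.tostring()))      # header is padded to a multiple of 2880 bytes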
I know that DKIST (solar telescope) decided to use ASDF (Advanced Scientific Data Format), which I think is partly derived from FITS but uses YAML for its headers. ... but I got out of the field before DKIST went live.
(They were originally planning on using HDF5, which they might still be using for their level 0 data) ...
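If you haven't seen ASDF, the layout is roughly a human-readable YAML tree up front with binary blocks appended after it. A minimal sketch, assuming the asdf Python package and made-up metadata:

    import numpy as np
    import asdf

    tree = {
        'telescope': 'DKIST',                              # hypothetical metadata
        'data': np.zeros((4096, 4096), dtype=np.float32),  # stored as a binary block
    }
    asdf.AsdfFile(tree).write_to('frame.asdf')

    with asdf.open('frame.asdf') as af:
        print(af.tree['telescope'])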
The big issue with lossy compression is that what's considered noise for one researcher might be exactly what another researcher is looking for. There have been compression schemes with variable loss... where they keep the features of interest at high fidelity but compress the other parts of the image more aggressively... so you still have the context around the features of interest, without the extra storage cost and without having to split it into multiple files.
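Here's a conceptual numpy sketch of the variable-loss idea (not any particular published scheme): keep the masked features at full precision, coarsely quantize the background, then hand the result to a normal lossless coder.

    import numpy as np

    def variable_loss(image, roi_mask, step=16.0):
        """Quantize pixels outside roi_mask to multiples of step; keep the ROI exact."""
        out = image.astype(np.float32).copy()
        background = ~roi_mask
        out[background] = np.round(out[background] / step) * step
        return out

    # Hypothetical usage: a bright-feature mask stands in for a real detector.
    img = np.random.default_rng(0).normal(1000, 50, (512, 512)).astype(np.float32)
    mask = img > 1100
    prepared = variable_loss(img, mask)   # then Rice/gzip the result losslessly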
Years ago, when we started seeing the shift towards 'computer vision', I proposed that we needed to make an archive of test images and detection algorithms... so that when yet another compression scheme came along, we could compress the images, decompress them, and re-run the detection routines... that would tell us whether the compression scheme screwed some groups over by creating an unacceptable level of false positives or false negatives. ... Unfortunately, I never managed to find the right ear for it, and AISRP had just shut down due to sequestration issues.
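For what it's worth, the harness I had in mind was roughly this shape (the codec, the detectors, and the set-of-positions output are all placeholders):

    def evaluate(images, detectors, compress, decompress):
        """For each image and detector, compare detections before and after a
        compress/decompress round trip. Detectors are assumed to return a set
        of feature positions; images and detectors are dicts keyed by name."""
        results = []
        for img_name, img in images.items():
            roundtrip = decompress(compress(img))
            for det_name, detect in detectors.items():
                truth = detect(img)        # detections on the original image
                test = detect(roundtrip)   # detections after the round trip
                false_pos = len(test - truth)
                false_neg = len(truth - test)
                results.append((img_name, det_name, false_pos, false_neg))
        return results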