Of course it's possible. JPEG encoding has three steps: a discrete cosine transform (DCT) of each 8×8 block, then quantization (where the loss happens), then entropy coding. In JPEG, the coding step uses a zig-zag scan order and a Huffman/RLE structure, and this isn't necessarily optimal. A lossless compressor specially tuned for JPEG files could decode the quantized coefficients and re-encode them losslessly in a more efficient manner, producing a file a few percent smaller than the equivalent JPEG bitstream. On decompression, it would decode those coefficients and re-encode them back into a JPEG file.
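The coding step described above can be sketched in a few lines. This is a toy illustration, not a real JPEG codec: the helper names (`zigzag_order`, `rle_zeros`) and the coefficient values in `block` are invented for the example.

```python
# Sketch of JPEG's "coding" step: quantized DCT coefficients are read
# in zig-zag order so the (typically zero) high-frequency coefficients
# cluster at the end, where run-length coding collapses them.

def zigzag_order(n=8):
    """(row, col) pairs in JPEG's zig-zag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def rle_zeros(seq):
    """Collapse runs of zeros into (run_length, value) pairs,
    loosely mimicking JPEG's AC run-length symbols."""
    out, run = [], 0
    for v in seq:
        if v == 0:
            run += 1
        else:
            out.append((run, v))
            run = 0
    if run:
        out.append((run, 0))  # stand-in for JPEG's end-of-block marker
    return out

# A toy quantized block: after quantization, most coefficients are zero.
block = [[12, 6, 0, 0, 0, 0, 0, 0],
         [-4, 3, 0, 0, 0, 0, 0, 0]] + [[0] * 8 for _ in range(6)]

scan = [block[r][c] for r, c in zigzag_order()]
print(rle_zeros(scan))  # [(0, 12), (0, 6), (0, -4), (1, 3), (59, 0)]
```

A recompressor of the kind described above would decode back to these coefficient pairs and then apply a stronger entropy coder than baseline Huffman; since the coefficients themselves are untouched, the round trip is lossless.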
I believe what they meant was that you would not be able to apply a lossless algorithm to the original data stream and achieve greater compression than applying a lossy algorithm. Your composite algorithm is just a more efficient lossy algorithm.
If we look at the original statement from an information-theoretic point of view, the GP's point is easy to understand. With a lossless algorithm, you have to encode all of the original information and restore it exactly. Even assuming an optimal encoding, it still takes some minimum number of bits to fully reconstruct all of the original data on decompression. With a lossy scheme, you can reduce the number of bits in the original stream before applying that same optimal encoding. With fewer bits to represent, the encoded bitstream will always be smaller.
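A toy way to see this: quantization is a many-to-one mapping, so the empirical entropy of the stream (the lower bound on the average code length of any lossless encoding) can only go down. The signal and the quantizer below are arbitrary stand-ins chosen for illustration.

```python
import math
from collections import Counter

def entropy_bits(seq):
    """Empirical Shannon entropy in bits/symbol -- the lower bound
    on the average code length of any lossless code for seq."""
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

signal = [(i * i) % 23 for i in range(1000)]  # arbitrary toy "signal"
lossy = [v // 4 for v in signal]              # coarse quantization drops detail

# Merging symbols can never increase entropy, so the lossy stream
# needs no more bits per symbol under the same optimal coder:
print(entropy_bits(signal), ">=", entropy_bits(lossy))
assert entropy_bits(lossy) <= entropy_bits(signal)
```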