Precisely. Even if one session is explicitly fed the code and documents it, and a second session then generates code based solely on that documentation, never having been fed the original code itself, the AI underlying both sessions was trained on the original code, or at least on an earlier version of it. It holds large chunks of that code lossy-compressed in its weights, to the point that, with the right prompting in an entirely unrelated third session, you can get it to reproduce parts of the original code, if not all of it. The end result is thus a two-step derivative of the original: original -> weights-compressed version of the original (first derivation) -> reimplementation based on that weights-compressed derivative (second derivation).
For this to be a true clean room, one would need to train a coding AI from scratch on everything it needs to become good enough, except for the packages one wants a clean-room implementation of, thereby guaranteeing there is absolutely nothing of that code anywhere in its weights. That AI would then generate the documentation after being fed the (to it) completely novel code. Once the documentation was done, a completely clean-slate instance of that AI, with no trace of the original code anywhere near it or in its weights, would be fed the documentation and asked to code from it. Then, and only then, would that code be a true clean-room implementation.
Right now, that full from-scratch training for every set of packages one wanted to clean-room would be exorbitantly expensive, far more than paying two human teams to do the clean-room implementation the old-fashioned way, so no one would realistically want to do it.