"That would be #2: 'Ignore the premise that the CR supports and pick on the illustration.'"
Those who use analogies should be prepared to defend them. I admire the craft displayed in the construction of the Chinese Room scenario. On the surface it seems a well-intentioned thought experiment meant to shed light on whether Artificial Intelligence is possible. Upon closer examination, however, its conclusion is foregone, and it serves primarily to insult the author's opposition.
A better thought experiment would replace the human with a black box that behaves in exactly the same way. For some reason, the presence of a human in the room incites an emotional response. Stripped of the author's semantic legerdemain, it is no longer so certain that the room does not "understand Chinese".
In any case, I target the premise directly. The premise, as I understand it, is that the "Chinese Room" does not understand Chinese, and that it's absurd to suggest that a room could do so.
However, that fails to take into account that, in the Chinese Room scenario, the human occupant is part of the room, and the room, by every outward measure, understands Chinese. Ignoring this is like removing the hardware from a workstation's enclosure before benchmarking it.
Ultimately, the measure of "understanding Chinese" is the ability to carry on a meaningful conversation in Chinese. The alternative is to define thinking as "something humans do". If that's your perspective, I'll grant that nothing but a human will ever be able to perform actions that by definition can be performed only by a human. (I suspect that even if our intellect is surpassed by machinery, Homo sapiens will remain unsurpassed in its tautological vanity.)
Would you say that a Chinese speaker's brain does not understand Chinese, since there is no smaller Chinese speaker inside the skull to understand Chinese for them?
Wikipedia lists five categories of replies to the Chinese Room scenario. You listed only four. The omission is left as an exercise for any interested parties.