The problem with acupuncture studies is that they can't be done double-blind: the acupuncturist always knows whether he or she is performing "real" or "sham" acupuncture*. This opens the door to a bias effect, in which the patient is unconsciously cued as to whether the treatment "should" work, and such expectation effects are stronger than any purported benefit of acupuncture itself (e.g., Bausell et al. 2005, Eval Health Prof). I remember a study, which I cannot dig up at the moment, in which the researchers gave the acupuncturists acting lessons to ensure that they behaved identically toward patients during real and sham treatments; once they did so, acupuncture no longer outperformed the placebo.
* You can, in theory, achieve double-blinding by randomly assigning patients to one of two technicians, both of whom were naive to acupuncture before the study began. The two are then trained equally on two different sets of acupuncture points, one valid and one invalid, with neither knowing which set is which. However, this isn't really a fair test of acupuncture: because the technicians are novices, a null result can't distinguish "acupuncture doesn't work" from "these practitioners lack the skill to make it work." Consider trying to evaluate the effectiveness of heart surgery with the same design.