This is indeed somewhat of a problem in our profession. It's in general hard to find good metrics that quantify the performance of a programmer. Lines of code, number of closed tickets, and years of experience are all used at times, and though each might be indicative of performance, none of them necessarily means much on its own.
Lines of code has been discussed quite often over the years, but it's typically not seen as a good indicator. People may use a lot of white space, or produce a bunch of spaghetti code by blindly copy-pasting stuff around, which results in extremely bad code that's often impossible to maintain. A better-performing developer may actually refactor all this duplicate code and abstract it into some common class or method, in which case the LOC produced by said developer may actually be negative! Worse yet, people may check in stuff like .dia files to their source code repository, which might boost their supposed LOC productivity by thousands of lines, while all they did was draw a box with an arrow pointing to it.
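To make the negative-LOC point concrete, here's a minimal sketch (all names and the validation logic are hypothetical, not from any real code base): a check that was copy-pasted at several call sites gets extracted into one shared helper, so the net line count goes *down* even though the code got better.

```java
public class RefactorDemo {
    // Before the refactoring, imagine this null/empty check pasted verbatim
    // at a dozen call sites (~5 duplicated lines each). After extracting it,
    // every call site shrinks to one line: net LOC for the change is negative.
    static String requireNonBlank(String value, String fieldName) {
        if (value == null || value.trim().isEmpty()) {
            throw new IllegalArgumentException(fieldName + " must not be blank");
        }
        return value.trim();
    }

    public static void main(String[] args) {
        // Each of these calls replaces a formerly pasted if/throw block.
        String user = requireNonBlank("  alice ", "user");
        String host = requireNonBlank("example.org", "host");
        System.out.println(user + "@" + host); // alice@example.org
    }
}
```

A naive LOC counter would score this refactoring as destroying productivity, while it's exactly the kind of work you want from a strong developer.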
On the other hand, LOC doesn't mean nothing either. I've seen developers reading slashdot all day instead of coding, and as a result their daily, monthly and even yearly LOC count was extremely low. We use, among other tools, statsvn (http://www.statsvn.org/), and though it's not perfect it does give a very crude indication of who's very active and who's basically doing nothing all day long.
Number of closed tickets is an indicator too, but just as with lines of code it's hard to really use it for measuring someone's performance. Tickets (issues/bugs) can vary wildly in complexity, and the "estimated amount of hours" and "impact" are hardly ever accurate. Given two bugs, one can be as simple as adding a forgotten quote somewhere, while the other can amount to weeks of digging through the lowest levels of some code base. Yet if tickets are assigned to developers without really taking their abilities into account, then over a longer period of time all developers should on average get an equal share of quick & easy and hard tickets. In that case, the number of closed tickets might be indicative after all: someone who barely ever closes a ticket is probably not a top performer, despite the inaccuracy of the ticket measurement.
Years of experience, which I think is used the most, is maybe also the most debatable of them all. It's a very natural measurement that takes nothing personal into account: a very basic and easy-to-measure number. But here too, it can be deceiving. I've seen programmers with some 8 years of Java experience who appeared totally unable to pass a basic Java test and produced nothing but WTFs in their code, like concatenating strings to each other with commas in between instead of storing them in a list, simply because they didn't grasp how a simple list actually worked! (I kid you not, I actually encountered this.) In contrast, there's the guy (or gal) taking up some part-time job while still studying, who understands even complex stuff in the blink of an eye and produces nothing but exemplary code. Still, given a group of all reasonably knowledgeable programmers, the ones with the most experience typically win out. When I look at the code I produced 10 years ago and compare it with what I produce now, I most definitely see a vast improvement.
Even though management might often have difficulties measuring the performance of a programmer, there is one group of people who are true experts here: the teammates of said programmer, his or her fellow programmers! If you have worked in a team for some time, everybody knows who's the ace, who's simply capable, and who is obviously trailing behind. As a programmer you actually work with the code of that other programmer. You are either able to extend that code with the greatest ease because of the elegant design and clear names being used, or you curse every minute that you have to spend in that code. As a programmer, you also know whether the answer you get from that other programmer actually makes sense. If he or she answers your every question with a "yeah, well, uhm, it's not supposed to do that, but sometimes it happens anyway. I have no idea why, must be a strange VM error", then you simply *know* that person is subpar. On the other hand, if you almost always get an answer detailing the exact location of some occurrence and a concise explanation of under which conditions something happens and why, you *know* this person is ace ;)