I work for a software development company, and we have around 100 people working on a product; about a third of them are QA. Lately management wants a better way to rate individual programmers' performance, and the suggestion was to use bug reports as the measurement: the more bug reports filed against a developer, the worse that developer is. This seems ill-advised for more reasons than I can list, e.g. it is a subjective way of measuring, and developers work on different projects of differing complexity. In addition, if QA is measured by the number of bug reports they generate, there will be a lot of arguments about the validity of individual bug reports.
What would be a better way to measure developers' performance in such a setting?
One suggestion would be to not use bug reports from QA as a measure and instead use bug reports from outside, e.g. from beta testers; when such public bug reports are filed, QA would be measured by them as well.
EDIT #1: After reading some of your excellent responses, I think the general problem with the metric described above is that it is negative: it counts reported bugs rather than encouraging the production of good-quality code.
EDIT #2: I think the problem is that there are two worlds here. On one side are the non-programmers, who basically treat programmers as workers and want metrics, preferably LOC/minute. Then we have the programmers, who want to see themselves as artists or craftsmen: "please don't disturb me, I am c-o-d-i-n-g" :) I don't think quality can be measured by metrics without the measurement becoming counterproductive. Instead, things like how a person reacts to bugs, willingness to change, creativity, and above all quality of work are what matter, but those are mostly not measurable.
Trying to measure programmers' performance with bug reports is a bad idea. However, so is trying to measure performance with virtually any other metric. No matter what you do, people will figure out how to game it and give you what you're measuring without giving you what you really want.
From one of Joel's other articles:
Robert Austin, in his book Measuring and Managing Performance in Organizations, says there are two phases when you introduce new performance metrics. At first, you actually get what you wanted, because nobody has figured out how to cheat. In the second phase, you actually get something worse, as everyone figures out the trick to maximizing the thing that you’re measuring, even at the cost of ruining the company.