
What is the fascination with code metrics?


I've seen a number of 'code metrics' related questions on SO lately, and have to wonder what the fascination is?

In my mind, no metric can substitute for a code review, though:

  • some metrics may indicate places that need to be reviewed, and
  • radical changes in a metric over a short time frame may indicate places that need to be reviewed.

But I cannot think of a single metric that by itself always indicates 'good' or 'bad' code - there are always exceptions and reasons for things that the measurements cannot see.

Is there some magical insight to be gained from code metrics that I've overlooked? Are lazy programmers/managers looking for excuses not to read code? Are people presented with giant legacy code bases and looking for a place to start? What's going on?

Note: I have asked some of these questions on the specific threads both in answers and comments and got no replies, so I thought I should ask the community in general as perhaps I am missing something. It would be nice to run a metrics batch job and not actually have to read other people's code (or my own) ever again, I just don't think it is practical!

EDIT: I am familiar with most if not all of the metrics being discussed, I just don't see the point of them in isolation or as arbitrary standards of quality.


Solution

  • The answers in this thread are kind of odd as they speak of:

    • "the team", like "the one and only beneficiary" of those said metrics;
    • "the metrics", like they mean anything in themselves.

    1/ Metrics are not for one population, but for three:

    • developers: they are concerned with instantaneous static code metrics from static analysis of their code (cyclomatic complexity, comment quality, number of lines, ...)
    • project leaders: they are concerned with daily live code metrics coming from unit tests, code coverage, and continuous integration testing
    • business sponsors (they are always forgotten, but they are the stakeholders, the ones paying for the development): they are concerned with weekly global code metrics regarding architectural design, security, dependencies, ...

    All those metrics can be watched and analyzed by all three populations of course, but each kind is designed to be better used by each specific group.

    2/ Metrics, by themselves, represent a snapshot of the code, and that means... nothing!

    It is the combination of those metrics, and the combinations of those different levels of analysis that may indicate a "good" or "bad" code, but more importantly, it is the trend of those metrics that is significant.

    It is the repetition of those metrics that gives the real added value, as it helps business managers, project leaders, and developers prioritize among the different possible code fixes.
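To make the trend idea concrete, here is a minimal sketch of ranking review candidates by how a metric *changes* over repeated measurements rather than by any single snapshot. The function names and weekly numbers are purely hypothetical illustrations, not real project data.

```python
# Sketch: prioritize code fixes by metric *trend*, not by one snapshot.
# All names and numbers below are hypothetical, for illustration only.

def trend(snapshots):
    """Difference between the latest and the earliest measurement."""
    return snapshots[-1] - snapshots[0]

# Weekly cyclomatic-complexity measurements per function (illustrative data)
complexity_history = {
    "parse_order":   [9, 12, 15, 19],   # steadily climbing -> review soon
    "render_report": [42, 42, 42, 42],  # high but stable -> lower priority
    "load_config":   [7, 7, 8, 7],      # flat and low -> fine
}

# Rank review candidates by how fast their complexity is growing,
# not by its absolute value.
priorities = sorted(complexity_history,
                    key=lambda name: trend(complexity_history[name]),
                    reverse=True)
print(priorities)  # "parse_order" comes first despite its lower complexity
```

Note how `render_report`, the "worst" function by snapshot, is not the top priority: it is stable, which is exactly the point being made above.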


    In other words, your question about the "fascination of metrics" could refer to the difference between:

    • "beautiful" code (although that is always in the eye of the beholder-coder)
    • "good" code (which works, and can prove it works)

    So, for instance, a function with a cyclomatic complexity of 9 could be defined as "beautiful", as opposed to one long convoluted function with a cyclomatic complexity of 42.

    BUT, if:

    • the latter function has a steady complexity, combined with a code coverage of 95%,
    • whereas the former has an increasing complexity, combined with a coverage of... 0%,

    one could argue:

    • the latter represents "good" code (it works, it is stable, and if it needs to change, one can check that it still works after modifications),
    • the former is "bad" code (it still needs cases and conditions added to cover everything it has to do, and there is no easy way to run regression tests).
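As a rough illustration of the two metrics being combined above, here is a sketch that approximates cyclomatic complexity (1 + number of decision points, a common approximation of McCabe's metric) using Python's standard-library `ast` module, and then pairs it with a coverage figure. The thresholds and the `assessment` labels are hypothetical, chosen only to mirror the "beautiful" vs "good" distinction.

```python
# Sketch: approximate cyclomatic complexity with the stdlib `ast` module,
# then combine it with a (hypothetical) coverage figure.
import ast

# Node types that introduce a branch in the control flow (approximation).
_DECISION_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                   ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Rough McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, _DECISION_NODES)
                   for node in ast.walk(tree))

def assessment(complexity: int, coverage: float) -> str:
    # Hypothetical thresholds, purely for illustration.
    if coverage >= 0.9:
        return "good (proven to work, safe to change)"
    if complexity <= 10:
        return "beautiful but unproven"
    return "needs attention"

src = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
cc = cyclomatic_complexity(src)  # two `if` branches -> complexity 3
print(cc, assessment(cc, coverage=0.0))
```

A low-complexity function with zero coverage lands in "beautiful but unproven", while a complexity-42 function with 95% coverage would land in "good", which is exactly the reversal argued for above.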

    So, to summarize:

    a single metric that by itself always indicates [...]

    Not much, except that the code may be more "beautiful", which in itself does not mean a lot...

    Is there some magical insight to be gained from code metrics that I've overlooked?

    Only the combination and trend of metrics give the real "magical insight" you are after.