There is a common saying amongst management types: if you can’t measure it, you can’t manage it. Despite the alluring nature of the statement, I would contend that it is actually a very dangerous idea, because many of the most important things can’t be objectively measured. This doesn’t stop management types from trying, though.

One thing that ends up happening is a kind of quest, seeking to find the right Key Performance Indicators, or KPIs, to use as measures.

Very often, though, the presence of the KPIs acts to distort - and corrupt - the very business that they are measuring and trying to improve.

Here are some examples that I’ve seen in my own career …

Bugs fixed per day

Maximize the number of bugs fixed every day, with the goal of shipping faster.

Every bug, no matter how minor or trivial, was formally logged (with all the effort and overhead that this entailed). Lots of bugs were logged and fixed, but actual progress towards shipping slowed.

Average age of open issues

Minimize the average age of open issues, with the goal of encouraging timely resolution and ensuring issues weren’t left sitting around indefinitely.

All the old, trivial issues that nobody cared about were closed, while newer, urgent issues that actually impacted real customers were ignored. Customer satisfaction fell, because all the effort was being expended on issues that affected nobody.
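To see why the KPI rewards this, here’s a minimal sketch (the issue ages and urgency flags are invented for illustration) showing that closing the old, trivial issues is the quickest way to improve the average, without touching anything a customer cares about:

```python
# Hypothetical issue tracker snapshot: (age in days, is_urgent)
issues = [(900, False), (700, False), (500, False), (30, True), (5, True)]

def average_age(open_issues):
    """The KPI: mean age of all currently open issues."""
    return sum(age for age, _ in open_issues) / len(open_issues)

# Gaming the KPI: close the three old, trivial issues nobody cares about,
# leaving only the recent ones open.
gamed = [issue for issue in issues if issue[0] < 100]

print(average_age(issues))  # 427.0 - the KPI before gaming
print(average_age(gamed))   # 17.5  - a dramatic "improvement"

# ...yet every issue still open is an urgent, customer-impacting one,
# and none of them received any attention.
print(all(urgent for _, urgent in gamed))  # True
```

The arithmetic makes the incentive obvious: the oldest issues drag the average up the most, so they attract all the attention regardless of how little they matter.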

Lines of new code per day

Encourage developers to be more productive by committing more code.

Developers would copy & paste when a similar situation arose, instead of extracting a common routine for reuse. Everyone’s effort was expended on writing new functionality, so the existing codebase wasn’t routinely tested, and any bugs found were left for later. The quality of the codebase fell dramatically, and maintenance became a real nightmare.
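A rough sketch of why this KPI rewards duplication. The function names and validation rules below are hypothetical; the line-count arithmetic is the point:

```python
# The same validation logic, written three times by copy & paste.
copy_paste_version = '''
def validate_order(order):
    if order.quantity <= 0:
        raise ValueError("quantity must be positive")
    if order.price < 0:
        raise ValueError("price must be non-negative")

def validate_refund(refund):
    if refund.quantity <= 0:
        raise ValueError("quantity must be positive")
    if refund.price < 0:
        raise ValueError("price must be non-negative")

def validate_exchange(exchange):
    if exchange.quantity <= 0:
        raise ValueError("quantity must be positive")
    if exchange.price < 0:
        raise ValueError("price must be non-negative")
'''

# The same logic, refactored into one shared routine.
refactored_version = '''
def check_amounts(quantity, price):
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    if price < 0:
        raise ValueError("price must be non-negative")

def validate_order(order):
    check_amounts(order.quantity, order.price)

def validate_refund(refund):
    check_amounts(refund.quantity, refund.price)

def validate_exchange(exchange):
    check_amounts(exchange.quantity, exchange.price)
'''

def line_count(source):
    """What the KPI actually measures: non-blank lines committed."""
    return len([line for line in source.splitlines() if line.strip()])

# The duplicated version "wins" on the KPI, even though every future
# change to the validation rules now has to be made in three places.
print(line_count(copy_paste_version))   # 15
print(line_count(refactored_version))   # 11
```

Measured by lines committed, the worse code scores higher, and the gap grows with every new place the logic is pasted.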

Average length of phone call

Improve customer satisfaction by providing prompt, rapid service.

Instead of working with the customer to solve the problem, walking through various possibilities to troubleshoot the issue, a quick suggestion would be made and the customer asked to phone back if that suggestion didn’t work. Call volumes went through the roof as customers were forced to call back five or seven or a dozen times to get a solution to a simple problem.

Conclusions

In each of these cases, the poorly chosen KPI encouraged people into behaviour that was actively detrimental to the business instead of the reverse. The behaviours made the KPI look better, but at the expense of the factors that really matter.

Gaming of a KPI usually becomes apparent, and there are often ways to work around the problem and measure the right things for the right reasons.

The danger comes when gaming of the KPI isn’t apparent - or when management becomes convinced that the KPI itself matters more than the behaviour it is meant to promote. This is a particular risk with non-technical managers of technical teams, as their understanding of the work is necessarily limited.

Eric Gunnerson has a good discussion of poor KPI selection and the effects it can have.
