stephenmann

I've written on IT service management (ITSM) metrics before, but a recent webinar (the on-demand recording is available here) with colleagues Paul van Nistelrooij and David van Heusden was a good opportunity to return to the subject. My previous blog covered 14 Common ITSM Metrics Pitfalls; this one looks at the right things to do (good or best practices).

The right things to do

Now much of the good or best practice can be spun out of what I suggested people avoid in 14 Common ITSM Metrics Pitfalls, such as:

  • Don't just "do" metrics — fully understand what you are trying to achieve through their use.
  • Take a business-driven approach — such that metrics look beyond the delivery of IT or IT services and don't just measure the easy stuff.
  • Ensure that you understand what your metrics really mean.
  • Be wary of the potential to misuse IT benchmarks — either deliberately or accidentally.
  • Create a robust metrics hierarchy and understand how the different metrics relate to each other.
  • Cater for potential human behavior issues.

But there are also two biggies that require the proverbial "stopping to take stock":

  • Getting reporting right — there's no point in having the best metrics in the world if you fail to get reporting right. And who gauges whether reporting is optimal? It's not the people who create the reports — it's the people who read them, or possibly don't read them if they're not up to scratch.
  • Realizing that metrics, not just metric targets, can and should change over time. Ask yourself how many of the metrics you currently use are the same metrics you started with 5, 10, 20, or even 25 years ago.

So both of these will require you to take the time to analyze the status quo and to seek input from outside the IT organization.

Differentiating between types of metric

Then there is taking a balanced scorecard approach — ensuring that you have a portfolio of metrics that doesn't just focus on what you do (that's the operational perspective). Instead, you also need to look at the value perspective, the customer perspective, and the future-orientation perspective.

[Figure: the four balanced scorecard perspectives (operational, value, customer, and future orientation)]

The service desk is full of examples of where having a singular, operational view can give the wrong impression of IT's performance and cause us to make sub-optimal change or improvement decisions. As this is a short blog, please Google "balanced scorecard" or go to Wikipedia if you would like to find out more.
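
To make the portfolio idea concrete, here is a minimal sketch in Python, using hypothetical service desk metric names, that tags each metric with one of the four scorecard perspectives and flags a portfolio that skews operational:

```python
from collections import Counter

# Hypothetical service desk metrics, each tagged with one of the four
# balanced scorecard perspectives described above.
metrics = [
    {"name": "Average incident resolution time", "perspective": "operational"},
    {"name": "First-contact resolution rate", "perspective": "operational"},
    {"name": "Cost per ticket", "perspective": "value"},
    {"name": "Customer satisfaction (CSAT)", "perspective": "customer"},
    {"name": "Staff trained on upcoming services", "perspective": "future"},
]

# Count how the portfolio spreads across the perspectives.
coverage = Counter(m["perspective"] for m in metrics)
for perspective in ("operational", "value", "customer", "future"):
    print(f"{perspective:12}: {coverage[perspective]} metric(s)")

# A portfolio dominated by a single perspective is the warning sign
# described above: a singular, operational view.
if coverage["operational"] > len(metrics) / 2:
    print("Warning: the portfolio skews operational; consider rebalancing.")
```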

Then there is the use of leading and lagging indicators, where leading indicators focus on the input required to achieve an objective, and lagging indicators measure the output of your activities. The problem with leading indicators is that they're harder to measure than lagging indicators, which is why most reports only show lagging information. The strength of leading indicators, however, is that they're easier to influence than lagging indicators.

Hopefully this example from Paul makes the difference clearer:

[Figure: a weight-loss example of leading versus lagging indicators]

"A person wants to lose 5 percent of their weight in the next 60 days. It's a very tangible objective, one that allows for exact measurement.

There are two things this imaginary person can do. One is to measure their weight by standing on the scales every morning. This will clearly indicate any progress made towards reaching their objective. It's easy to measure, but it doesn't provide guidance as to what needs to be done in order to get closer to their goal. Weighing oneself is a perfect example of a lagging indicator.

The other thing to do is to focus on leading indicators. In this example, two leading indicators stand out: calories taken in and calories burned. The person can start running or exercising today, and can start eating healthier, or less.

The problem with these indicators is that they're harder to measure. If the person runs 5k each day, nothing will tell them exactly how many calories they burned, or how much weight they lost as a result. The same problem exists with going on a diet: it's very hard to measure its immediate result.

The good thing is that the person can influence both of their leading indicators easily, which will eventually result in achieving their objective of losing weight."

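To show how the two kinds of indicator relate, here is a minimal sketch in Python of Paul's example. The starting weight and calorie figures are illustrative, and it assumes the commonly quoted rule of thumb that a cumulative deficit of roughly 7,700 kcal corresponds to about 1 kg of body weight:

```python
# Objective: lose 5 percent of body weight in 60 days.
start_weight_kg = 90.0
target_loss_kg = start_weight_kg * 0.05
days = 60

# Leading indicators: the inputs the person can influence today.
calories_in_per_day = 2000      # eating less, or healthier
calories_burned_per_day = 2500  # daily exercise plus normal activity

# Assumption (a rule of thumb, not exact science): ~7,700 kcal of
# cumulative deficit corresponds to ~1 kg of body weight.
KCAL_PER_KG = 7700.0

# Project the lagging indicator (weight lost) from the leading ones.
daily_deficit = calories_burned_per_day - calories_in_per_day
projected_loss_kg = daily_deficit * days / KCAL_PER_KG

print(f"Target loss:    {target_loss_kg:.1f} kg in {days} days")
print(f"Projected loss: {projected_loss_kg:.1f} kg at the current deficit")
if projected_loss_kg < target_loss_kg:
    print("Off track: adjust the leading indicators (eat less or burn more).")
```

The scales still give the definitive, lagging answer; the projection only indicates whether today's leading indicators point in the right direction.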

So that's 10 good or best practices but, as this is the blog that keeps on giving, here are three more:

  • Ensure that new technology can capture the required data. Sounds obvious, but I remember auditing a new revenue management information system over 20 years ago. It only had two reports: no one knew what one of them was for, and the other didn't do what the business needed it to do.

  • Collect everything that could be useful (if in doubt collect the data, with the obvious caveat around the cost of collection) but only report what needs reporting (see the sketch after this list). And stop collecting and reporting on data where the cost involved far outweighs the benefit.

  • Make sure everyone knows their role in the performance management process or ecosystem — ranging from performing against targets and collecting data to creating reports and making decisions on the produced information. Having the best metric reporting framework means little if people aren't using the reports to improve operations or services.
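
As a rough illustration of the collect-versus-report split above, here is a minimal sketch in Python (with hypothetical metric names) that keeps what is collected and what is reported as separate, explicit sets, so that trimming a report never silently stops data collection:

```python
# Hypothetical metric names: collect broadly (cost permitting),
# report only what needs reporting.
COLLECTED = {
    "tickets_opened", "tickets_resolved", "resolution_time",
    "reopen_rate", "csat_score", "agent_utilization",
}
REPORTED = {"tickets_resolved", "resolution_time", "csat_score"}

# Reports must draw only on collected data; the reverse gap is deliberate.
missing = REPORTED - COLLECTED
assert not missing, f"Reported but never collected: {missing}"

# The collected-but-unreported set is the periodic review list: if a
# metric's collection cost outweighs its likely benefit, stop collecting it.
for metric in sorted(COLLECTED - REPORTED):
    print("Collected, not currently reported:", metric)
```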

And to finish, consider these wise words from ITSM-industry legend Ivor McFarlane:

"If we use the wrong metrics, do we not get better at the wrong things?"


So how are your metrics looking?
