It’s not unusual for me to be asked what I view as good practice when assessing the risk of a change, to which I often start my response with, “We have a number of capabilities which enable change teams to assess risk.” To spare some of you from suffering through another of my classic, long-winded soapbox responses, I figured why not capture some of those ideas here?


Before talking about the obvious candidates, let’s touch on a related aspect: conflict detection. Conflict detection identifies scheduling issues with a change based on the CI and the change request’s planned start/end dates. The most common examples of conflict detection involve blackout and maintenance schedules (previously discussed here), but it extends beyond that. Conflict detection behavior is controlled by a series of system properties defined in the Conflict properties module. This gives the Change administrator complete control over when conflicts are checked, what’s checked, and whether conflicts are checked against only the CI defined in the Configuration Item field, against parents or children of that CI, or against the entire Affected CIs related list.


So what does this have to do with risk? Conflicts aren’t always the result of a Normal change being scheduled in a blackout window. Perhaps there are simply other changes scheduled at or around the same time against the CI, increasing risk and reducing your ability to easily distinguish the source of an issue should, heaven forbid, the change lead to incidents.
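At its core, that kind of scheduling conflict is just a window-overlap check: two changes against the same CI conflict when their planned windows intersect. Here’s a minimal plain-JavaScript sketch of the idea (illustrative only, not the actual conflict detection engine, which also walks parent/child CIs and blackout/maintenance schedules per the configured properties; the field names are assumptions):

```javascript
// Two planned windows overlap when each starts before the other ends.
function windowsOverlap(a, b) {
  return a.plannedStart < b.plannedEnd && b.plannedStart < a.plannedEnd;
}

// Return every other change that targets the same CI with an
// overlapping planned window.
function detectConflicts(change, otherChanges) {
  return otherChanges.filter(
    (other) => other.ci === change.ci && windowsOverlap(change, other)
  );
}

// Timestamps simplified to plain numbers for illustration.
const mine = { ci: "app-cluster-01", plannedStart: 100, plannedEnd: 200 };
const others = [
  { ci: "app-cluster-01", plannedStart: 150, plannedEnd: 250 }, // overlaps
  { ci: "app-cluster-01", plannedStart: 300, plannedEnd: 400 }, // later window
  { ci: "db-server-02",   plannedStart: 100, plannedEnd: 200 }, // different CI
];
console.log(detectConflicts(mine, others).length); // 1
```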


For those wise enough to be using CAB Workbench, you already know the Contextual Change Calendar highlights blackout/maintenance scheduling conflicts as well as potential scheduling conflicts by assignment group or the assignee.


The more traditional risk tools in Change Management are called Risk Calculation and Risk Assessment. Let’s start with the latter.


Risk Assessment leverages the legacy survey engine and allows the change administrator to define questionnaires a change user fills out to assess the risk of a change. Out of the box, we offer two different questionnaires, each triggered by the category of the change (hardware or software).


I generally describe Risk Assessment as a subjective tool. By subjective, I mean it relies on the user filling out the survey to answer each question honestly, selecting the most appropriate response. Responses can be weighted and risk defined based on the total score received. Fortunately, it’s not like anyone would ever respond in a manner which benefits their view of the change, right? Yes, we should hold people accountable, and yes, almost everyone will answer in a responsible manner, but don’t outright exclude the human factor.
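To make the weighting idea concrete, here’s a minimal plain-JavaScript sketch of how weighted responses could roll up into a risk level. This is not the ServiceNow survey engine itself; the weights and thresholds are made up for illustration:

```javascript
// Illustrative only: weights and thresholds are hypothetical, not the
// out-of-box values. Each response carries the numeric weight of the
// answer the user selected; the total maps to a risk level.
function scoreAssessment(responses) {
  const total = responses.reduce((sum, r) => sum + r.weight, 0);
  if (total >= 12) return "High";
  if (total >= 6) return "Moderate";
  return "Low";
}

const answers = [
  { question: "Affects a critical CI or business service?", weight: 5 },
  { question: "How complex is the change?",                 weight: 3 },
  { question: "How difficult is it to revert?",             weight: 2 },
];
console.log(scoreAssessment(answers)); // "Moderate" (total = 10)
```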


What do I think a good survey or questionnaire looks like? I think it’s reasonably short. It’s not twenty questions; it’s five, maybe eight. If it’s longer than that, ask whether it needs to be. Often, it’s that long because it’s repetitive.


What kind of questions may you ask?


  • Does the change affect a critical CI or business service?
  • How complex is the change?
    • Complexity being a factor of the number of affected or impacted CIs, number of tasks involved, number of teams involved, and the difficulty of the tasks involved
  • How difficult is it to revert the change if issues arise?
  • Is there a redundancy plan in place?
  • How difficult is it to verify the change was successful?


Of these questions, we can probably argue over priority or weighting of responses, but I don’t view any as a wasted question. They all relate to an important aspect of risk and every one counts. If you have a question which doesn’t count, why would you ask it?


If I’m an approver of the change, in addition to reviewing risk assessment responses, I’m probably asking myself another question. What’s the track record of the team requesting or implementing the change? Is it that Occhialini guy who bungles every fifth change or that Cassidy gal who has never failed me? Risk assessments are valuable, but especially so in conjunction with Risk Calculation.


Risk Calculation, at least in my mind, is a more objective measure of risk. It allows simple rules to be defined via condition, or more elaborate rules via script, which are used to calculate risk. Why objective vs. subjective? You’re not asking for my potentially biased opinion. You’re looking at objective data which exists in your system today.


What kind of things might you look at?


  • Does the change affect a critical CI or business service?
  • How complex is the change?
  • How many CIs are affected?
  • How many tasks and teams does it involve?


I could go on, but you’re change people, so you’re the sharpest bunch around. Whether using Risk Assessment, Risk Calculation, or both, you should be considering the same factors.
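As an illustration of what condition-style rules might look like, here’s a plain-JavaScript sketch (not the actual risk conditions API; the field names, thresholds, and rules are all assumptions) that evaluates the factors above, with the highest-severity matching rule winning:

```javascript
// Hypothetical rule set: each rule pairs a condition with the risk it
// implies. The most severe matching rule determines the result.
const RISK_ORDER = ["Low", "Moderate", "High"];

const rules = [
  { risk: "High",     test: (chg) => chg.affectsCriticalCI },
  { risk: "High",     test: (chg) => chg.affectedCIs > 20 },
  { risk: "Moderate", test: (chg) => chg.tasks > 5 || chg.teams > 2 },
];

function calculateRisk(chg) {
  let risk = "Low"; // default when no rule matches
  for (const rule of rules) {
    if (rule.test(chg) &&
        RISK_ORDER.indexOf(rule.risk) > RISK_ORDER.indexOf(risk)) {
      risk = rule.risk;
    }
  }
  return risk;
}

console.log(calculateRisk({
  affectsCriticalCI: false,
  affectedCIs: 3,
  tasks: 7,  // > 5 tasks trips the Moderate rule
  teams: 1,
})); // "Moderate"
```

The appeal of this style is that every input is a field you can query, not an answer someone typed into a survey.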


What other tools may be at your disposal? If you’ve invested appropriately in your CMDB, it should contain a great deal of the information you need to answer the right questions. Open that nice dependency map showing everything affected and how they relate. I love the phrase “Humans are visual beasts.” Open that map and verify with your own two eyes.


What other tasks are open against the CI or have occurred recently? A lack of recent stability may indicate additional risk for the change, or may just explain why we’re here assessing risk.
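One way to turn that hunch into data is to count recent records touching the CI. A plain-JavaScript sketch (in practice you’d query the task tables on the platform instead; the record shape and the 30-day window here are assumptions):

```javascript
// Illustrative "recent instability" signal: count tasks against a CI
// that were opened within the last N days. Record shape is assumed.
function recentActivity(records, ciId, days, now = Date.now()) {
  const cutoff = now - days * 24 * 60 * 60 * 1000;
  return records.filter(
    (r) => r.ci === ciId && r.openedAt >= cutoff
  ).length;
}

const DAY = 24 * 60 * 60 * 1000;
const now = Date.now();
const tasks = [
  { ci: "web-server-01", openedAt: now - 2 * DAY },  // recent incident
  { ci: "web-server-01", openedAt: now - 45 * DAY }, // outside the window
  { ci: "db-server-02",  openedAt: now - 1 * DAY },  // different CI
];
console.log(recentActivity(tasks, "web-server-01", 30)); // 1
```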


The rest of the information you need better be in the change request itself.


What other factors might I consider? Yep, you guessed it. What’s the past track record of the team requesting or implementing the change? What’s the track record of success for changes of a similar nature? Does the team performing the change have prior experience implementing this type of change?


Using past performance or experience isn’t at all controversial, at least in my humble opinion. It’s controlling the lifecycle of all changes, enabling beneficial changes to be made in a manner which minimizes disruption to services. While you consider all of that, I’m going to try to recall where I heard that phrase.