
Every work environment is different, and people experience the same environment in diverse ways. Reflecting on the past 2 years, there are a few things that have made my experience at ServiceNow exceptional.

 

 

The Product

 

It's fantastic. After almost 20 years in technology, it is a breath of fresh air to use a product that works, and that does what it is supposed to do extremely well.

 

When I was first invited to interview at ServiceNow, I scheduled 15 minutes on my calendar to review the company and demo the product. Those 15 minutes stretched into 2 hours as every customization I tried, button I clicked, and feature I used worked as if I had designed them myself. It was a night-and-day difference compared with the tools I had been using for the prior two decades.

 

 

Goal Achievement

 

As organizations grow, they commonly add process and bureaucracy that stifles innovation, slows execution, and muddies priorities. Staff and leaders commonly lose sight of the forest for the trees. Prior to ServiceNow I worked with several services attempting to sell to the federal government, but no one was able to determine which customers were valuable enough to justify whichever certification applied. Half a decade later, that organization still loses deals because of this logjam.

 

At ServiceNow, that approach was flipped on its head. We targeted the broadest, deepest, most complex certifications and controls and went after them all. As there is a significant amount of overlap, the highest ROI occurs when the superset of controls is met rather than a subset. Today ServiceNow has FISMA Moderate Authority to Operate (ATO), Section 508 compliance, and ISO 27001 certification (among others), and we actively sell into a broad set of government groups and agencies.

 

From compliance to international expansion to flash adoption, we set and achieve ambitious goals on aggressive timelines.

 

 

Culture of Transparency

 

Frank Slootman's public interviews display a refreshing amount of candor while effectively outlining our objectives and approaches (e.g. New Logos + Upsells + Renewals). Watch them if you can. Internal messaging is consistent and teams are rewarded accordingly. This freedom from duplicity results in a more trusting internal culture.

 

Externally we provide a great deal of detail about our service, from Allan Leinwand's infrastructure overview to Upgrade and Release Cycle guidance on the Wiki. We are honest with our customers. It sounds like a small thing, but it really isn’t.

 

 

Culture of Accountability

 

The first time I heard our founder Fred Luddy speak he was sharing his experience learning to work with an executive team that was laser-focused on identifying and resolving problem areas rather than simply celebrating success. His initial trepidation was a natural human response; we don't enjoy confronting or reliving our mistakes.

 

But the value of this approach lies in the resulting foundation for future growth. Addressing weaknesses before they become compromises built into the structure of a business or product has been a cornerstone of ServiceNow's success. Taking accountability allows staff to grow as managers of their own destiny. And that is precisely what most technical staff want in their careers.

 

 

 

ServiceNow isn't perfect; it has challenges and growth areas like any other company or online service. There are still a few areas of technical debt to pay down. Executing changes or introducing new technology without jeopardizing the high SLAs our customers have come to expect can be challenging. Regardless, I am incredibly proud of ServiceNow and am still super excited to be a part of this team today.

 

Alright, maybe this is a little bit of a love letter.


Tag! You're it!

Posted by bsweetser Apr 22, 2014

How do you create your asset tags? Do you get them from an asset tag vendor? Do you create your own? How do you create your own? One question I have received on occasion is "Can't ServiceNow automatically create an asset tag for me?" This is possible, of course, with Number Maintenance. You could then purchase asset tag printers to create your own from this generated value.
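For illustration, the pattern behind such generated tags is simply a prefix plus a zero-padded counter. Here is a minimal sketch of that idea in Python; the prefix, padding width, and function name are my own illustrative choices, not ServiceNow's actual implementation:

```python
# Hypothetical sketch of the prefix + zero-padded counter pattern that
# generated asset tags follow; names and defaults here are illustrative.
def next_asset_tag(prefix="AST", counter=0, digits=7):
    """Return the next tag string and the updated counter value."""
    counter += 1
    return f"{prefix}{counter:0{digits}d}", counter

tag, counter = next_asset_tag()
print(tag)  # AST0000001
```

In Number Maintenance you configure the equivalent of the prefix and number of digits on the record instead of in code.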

 

This video shows how to have ServiceNow generate a default asset tag in your asset records. For the best viewing, open the video in YouTube and view full screen.

 

 

In addition, you may want to capture other details in an update set, such as who can update the asset tag and what to do when you want to automatically add an asset tag to an existing asset.

 

How do you handle asset tags in your environment? Let me know in the comments section below.


Road to Asset Knowledge

Posted by bsweetser Apr 16, 2014

The more I work with customers and teach our Asset Management course, the more I come across useful little tidbits related to asset management that I believe many organizations can use. Because most of these tasks need to be performed by System Administrators, they are not explicitly covered in the Asset Management course.

 

With Knowledge 14 coming up at the end of the month, I thought it would be a good opportunity to share these items. In the week leading up to K14, I will post some videos and blog posts with these different tips. The fun starts on Monday, April 21 with some discussion about asset tags and moves through other tips related to asset management. Join me then on the Community site and watch for notifications of the posts on Twitter and the ServiceNow Community on Google+ with the hashtag #RoadToKnowledge.


In the meantime, have you put together your K14 schedule yet? With so many great sessions and labs to select from, it can be difficult to determine what sessions are going to be the most beneficial. Alas, it is impossible to attend all sessions.

[Image: "One does not simply" K14 meme]

(Thanks to rfedoruk for the meme)


I have gone through the possible sessions and labs with a focus on asset management and put together my recommendations to help you plan your schedule. These are my thoughts on the topics and why I think they matter; I hope you find this useful. I listed the labs and sessions in an order that would allow you to attend many of the ones I recommend. Some are offered multiple times or as self-paced and are indicated as such.


Top 5 Asset-related Labs

All labs this year use the upcoming Eureka release, so you not only learn, but also get a preview of some great upcoming capabilities. My recommendations here are based on a review of these labs.

  • Asset 101: Cover Your Assets (and Configuration Items!) - Tuesday, April 29, 1:20 - 3:20 pm or Thursday, May 1, 10:10 am - 12:10 pm
    • I will admit to bias on this suggestion. I am leading this lab with Bryan Boyle, the director of Asset Management Development. We cover how assets and configuration items (and not just IT assets and CIs) relate to one another and work in conjunction in ServiceNow, and how contract management fits into asset management.
  • Field Normalization: Provide Better Data Consistency - Tuesday, April 29, 3:40 - 5:40 pm or as a self-paced lab
    • Do you trust your data? Did you know the ServiceNow platform provides the ability to normalize and transform data to ensure better consistency, which gets you better reports? Learn how to leverage it in this session with Steve Bandow and Aleck Lin.
  • Vendor Performance Management: Assess Vendors Objectively - Wednesday, April 30, 10:10 am - 12:10 pm
    • At the most basic level, vendor management might mean simply identifying what items you purchase from the vendors you deal with, but true vendor management goes well beyond that, from contracts to satisfaction with the vendor. Mike Zachary and Lisa Henderson take you through the new Vendor Performance Management capabilities originally made available in the Dublin release in this hands-on lab.
  • IT Cost Management: Measure, Allocate, and Report IT Spend - Wednesday, April 30, 4:00 - 6:00 pm
    • Asset management is invariably linked with financial management. Dave Knight and Giora "GT" Tamir take you through how to roll expenses up for your business services and tie them into a budget.
  • Asset 102: Track Licenses with Software Asset Management - Thursday, May 1, 12:40 - 2:40 pm or Tuesday, April 29, 3:40 - 5:40 pm
    • Another biased suggestion, but if you are a software asset manager, this is a must attend session. Not only do you learn how to manage software licenses in ServiceNow, but Bryan Boyle and I introduce you to some of the new capabilities in Eureka that help you better manage your licenses and compliance.


Honorable mention: Work Management: Power Mobile Worker Productivity with Geolocation - Wednesday, April 30, 10:10 am - 12:10 pm - If you have a field services team, you may want to skip the Vendor Performance Management session and instead attend this. Ben Hollifield and Melissa Moghaddam take you through how to effectively manage field services with geolocation and the ITSM and asset capabilities in the ServiceNow platform.


Sessions

You can learn a lot in the labs, but customers are doing great things, and Knowledge is about sharing. Customers deliver 90% of the sessions at Knowledge, and there is a lot to learn from what others are doing. I have collected the sessions below with my thoughts on why they are important. Some of these do overlap with labs, so you will still need to make some tough decisions.

  • Innovating Healthcare Technology Management - Tuesday, April 29, 9:50 am - 10:40 am
    • I saw Chris Bailey from ProHealth Care and Tommy Lee from ServiceNow present this solution at the Wisconsin User Group meeting earlier this year. I love this presentation because it shows how to manage non-IT devices in ServiceNow, integrates with an external source for healthcare technology to get not only model information but also recall and support information, and leverages a lot of the ServiceNow platform in the process. This is a must-attend for healthcare providers, and I even recommend it for non-healthcare organizations that manage assets and configuration items in their environments.
  • The ITAM Journey - How to Navigate Your Path to ITAM Success - Tuesday, April 29, 1:20 - 2:10 pm (overlaps with the first offering of Asset 101 lab)
    • Shabaab Mazhar and Matthew Stroop from Cardinal Health share how they implemented their asset management lifecycle in ServiceNow. This is a great opportunity to see how they applied their processes in the platform.
  • Data Driven Asset Management - Best Practice Approach - Tuesday, April 29, 3:40 - 4:30 pm (overlaps with Field Normalization lab)
    • Rick Piatak from Cardinal Health and Jim Uomini from the University of San Francisco share their experiences and insights with ServiceNow and the importance of normalized data, particularly as it pertains to software asset management. In Cardinal's case, Rick also covers the work they did with BDNA to get normalized software data.
  • IT Asset Management Best Practices - Tuesday, April 29, 4:50 - 5:40 pm (overlaps with Field Normalization lab)
    • While this is not a customer session, Martin Thompson, founder of The ITAM Review, shares his insights from work with organizations around the world.
  • Eliminating Blind Spots with Asset Management - Wednesday, April 30, 2:50 - 3:40 pm
    • Abercrombie & Fitch has over 1000 stores worldwide with assets and consumables to manage in each. Dennis Kelly shares how A&F integrated their existing systems and BDNA to provide a robust asset management solution.
  • Using Mobile Scanners to Improve IT Asset Data Accuracy - Wednesday, April 30, 4:00 - 4:50 pm (overlaps with IT Cost Management lab)
    • One of the most common questions I have seen lately is "How do I effectively use barcode scanners with Asset Management in ServiceNow?" Robert Little from Santander Consumer USA shares how they did this.
  • Top 5 Tips to Solve Your Software Asset Management Challenge - Thursday, May 1, 10:10 - 11:00 am (overlaps with second offering of Asset 101)
    • Software asset management is a key aspect of an overall asset management practice. Ruben Ricalde from Delphi Corporation shares how they took control of over 14 million software installs with the proper people and processes.
  • Automating and Integrating IT Procurement - Thursday, May 1, 1:50 - 2:40 pm (overlaps with second offering of Asset 102)
    • Integration of asset management to request and procurement processes provides your organization with significant benefits, including better automation and simplified reporting and receiving. United Surgical Partners International moved from a SharePoint-based process for this to ServiceNow. Rodney Betts and Dustin Spain share how they did this in one of the final sessions of the conference.


Honorable mention: If you work with Ariba, you may be interested in the KPMG session Bridging the Gap Between Procurement and Asset Management. It runs on Tuesday, April 29 from 9:50 - 10:40 am, the same time as another session listed above.


I hope these suggestions help you plan a successful Knowledge14 adventure! See you there!





April is a busy month for IT service management (ITSM) industry events. We’ve already had the annual HDI conference in the US, and the end of April sees two more: the Service Desk and IT Support Show (SITS) in the UK and Knowledge14 in the US, with a combined attendance of over 10,000 IT professionals, an audience that is amplified globally through streaming and social media. I for one will be watching the SITS Twitter stream, #SITS14, while tweeting from #Know14.

 

So if you attend either event, please tweet any ITSM nuggets you hear; people want to read about more than just vendor booth numbers and giveaway prizes. Remember that for every person who reads a tweet at the event, there are potentially thousands more not there who would love to read, and even retweet, your nuggets.

 

I’m oddly reminded of some advice from Prince – please tweet your nugget not your booth number – it’s childish but oh-so relevant. But this blog is actually about the value of attending SITS …

 

The ITSM world continues to shrink

 

And by this I don’t mean that there are fewer ITSM professionals or fewer vendors offering ITSM solutions. In fact it’s quite the opposite, and the latter is where SITS really excels – it’s a great event for seeing where the ITSM tool marketplace is and is going.

 

What I actually mean is that we are all so much closer to each other now thanks to technology. And that technology includes air transport – while I stated that SITS is a UK event, in reality it is anything but. There will be attendees from outside the UK (mainland Europe in particular), obviously ITSM tool vendors from all around the world, and internationally renowned speakers such as Gartner analysts, Barclay Rae, Noel Bruton, and Kaimar Karu.

 

Take a look at the SITS exhibitor list and seminar program:

 

 

The seminars range from safely selecting a new ITSM tool through to improving knowledge management or, as I prefer to say, improving knowledge exploitation.

 

But SITS also offers a wealth of networking opportunities

 

With so many like-minded people at SITS (over 4,000 across the two days last time, I was told), it’s a great place to network. Whether it be on the expo floor, after the presentations, or while having a swift after-hours pint in one of the many local drinking establishments.

 

At SITS the world is your oyster or, as it’s in London, the world is your cockle or jellied eel. There’s so much opportunity to talk to your peers about industry issues, trends, and best practices. In particular what others are doing to improve their IT service delivery and service experience.

 

SITS majors on the available ITSM offerings – so get a demo or two

 

Yes, there will be great presentations from those internationally renowned presenters and great networking opportunities, but SITS is somewhat unique from an expo perspective: the expo is open from the start to the end of the day. Unlike many events, where booth staff such as myself get to sit down between the time-boxed expo slots, SITS is relentless for those working on the booths (and wearing high heels is a booth-rookie mistake).

 

But it’s great for the attendees, who can talk to as many, or as few, ITSM tool vendors as they want, and when they want. From memory that’s any time between 9am and 5pm. But be warned: the Earl’s Court security guards are a little precious; they will be actively chasing you out of the building at 5pm sharp.

 

So get a demo as well as a chat from the vendor or vendors of your choice; there will be ample time over the two days. SITS should be treated as so much more than a vendor-swag-grabbing opportunity, even if you aren’t currently looking for a new ITSM or service desk tool.

 

Oh, and did I mention that SITS is free to attend?

 

Vendor sponsorship of everything from the exhibition booths through to the lanyards that support your name badge contributes to making SITS a free event for enterprise attendees. So what’s stopping you attending? Other than Knowledge14, of course. SITS is a great event for ITSM professionals and I for one will be sad to miss it this year.

 

The good news is that my ServiceNow colleagues will be there, fielding questions about what we offer and, more importantly, about what our customers are doing to improve their IT and other corporate service operations (think HR, facilities, and more) and how they deliver an improved service experience in light of consumer-driven expectations.

 

[Image: the ServiceNow stand at SITS]

 

So come see ServiceNow at SITS. I’ll not give you the booth number given my cheekiness above, just look out for a stand similar to the one above.

This is Part 3 of the Cost Modeling Your Cloud series.

 

So how does ServiceNow make rational decisions about what to buy when building our cloud? And how can you?

 

** Disclaimer: Math ahead. Actual costs and data points obfuscated for confidentiality. **

 

Be consistent

 

As a lean organization, we don’t have the luxury of a 50-man team just for testing hardware. We had to hack the evaluation process down to the essentials.

 

First, generate a baseline. Simply run a test against whatever you are currently using. Do your best to use a benchmark that is similar to your production workload. A 3D rendering benchmark (e.g. Frames Per Second) would be largely irrelevant to DB performance measured in Transactions Per Second (TPS).

 

 

Second, create evaluation sets. Let’s face it: one could test and compare an infinite number of configurations, but that would require infinite resources.

 

In 2012 ServiceNow was evaluating more than a thousand possible configs (multiple CPUs, RAM configs, disk form factors, RAID configs, NUMA settings, and file systems) for the 2013 hardware platform. Some of these tests required additional test passes at various load levels (e.g. 1, 2, 4, 8, 16, 32, 64, 128 threads), resulting in tens of thousands of possible test results.

 

However, by creating evaluation sets to cost model similar changes, we quickly got a sense of which individual changes were most valuable.
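To make the arithmetic concrete, here is a back-of-the-envelope sketch of how a full test matrix explodes while one-dimension-at-a-time evaluation sets stay linear. The per-dimension option counts below are illustrative guesses, not our actual 2012 matrix:

```python
# Illustrative only: the option counts per dimension are guesses.
from math import prod

dimensions = {
    "cpu": 4,
    "ram_config": 4,
    "disk_form_factor": 3,
    "raid": 4,
    "numa": 2,
    "filesystem": 3,
}
load_levels = 8  # e.g. 1, 2, 4 ... 128 threads

# Full factorial: every combination at every load level.
full_factorial = prod(dimensions.values()) * load_levels

# Evaluation sets: vary one dimension at a time against a fixed baseline.
evaluation_sets = sum(dimensions.values()) * load_levels

print(full_factorial)   # 9216 test passes
print(evaluation_sets)  # 160 test passes
```

Even with these modest guesses, the full matrix is roughly fifty times larger than the evaluation-set approach, and the gap widens with every dimension you add.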

 

If you were testing memory configurations, a change set might include:

  • 4x8GB LV 1333 DIMMs
  • 4x8GB 1600 DIMMs
  • 2x16GB LV 1333 DIMMs
  • 2x16GB 1600 DIMMs

 

Here is a CPU and memory evaluation set built into the cost model from Part 2. This allows you to get a general sense of which config most effectively aligns with your business priorities (cost vs. performance, etc.). You can tell that our workload sees minimal performance gains from 1866 MHz memory (SKUs B and D), and that the increased CAPEX and OPEX make it more expensive per performance unit. This low ROI actually led us to continue running 1600 MHz for the 2014 hardware platform.

[Image: CPU and memory evaluation sets in the cost model]

 

 

Third and finally, build a pilot from the best options in the evaluation sets, and test again.

 


Bottlenecks are like whack-a-mole, and improvements almost never combine linearly. Rather, adding 2% and 2% improvements tends to equal 1.5% or 6% more often than 4%. You won’t know how much difference is actually made until you test the final combination. On paper, we expected to improve performance per dollar by 37% this year but it ended up being closer to 44% when changes were tested in harmony.

 

 

By applying these methods, we are able to run fewer than 100 tests to identify the best configurations out of tens of thousands of possibilities.

  1. Define a baseline
  2. Create and test evaluation sets against the baseline
  3. Build a pilot from the best eval set results and test against the baseline

 

Be smart: don’t test everything. But this only works if you can be consistent, test after test, month after month, year after year.

I drafted a text-based post to discuss some tips on preparing for a presentation, but then one of my developers, Pamela Olomola, decided to turn it into an infographic. Click the graphic to check out the details!

[Infographic: ten-minute presentation countdown]

For those of you who are more visual, we created an infographic that displays our course catalog:

[Infographic: course catalog]

I recently did a presentation to some colleagues on what I perceived to be the current CIO challenges and trends and, in doing so, I created a list of things based on what I was hearing, seeing, and reading about. But a list of CIO challenges seemed too one-dimensional, particularly in light of my view that CIO, or IT, priorities have changed little over the last 5 years – what I call the “aspiration to action gap” – you can read about it here.

 

But the CIO challenges, IMO, have changed dramatically – which is odd if the IT priorities haven’t. And they still fit into three high-level “challenge groupings” I imagined-up whilst working at Forrester (although I appreciate that we can all be limited by our own imaginations):

 

  1. Increased business scrutiny.
  2. Increased business and customer expectations.
  3. Increased business and IT complexity.

 

The Evolution Of CIO and IT Challenges

 

In the end I decided it would be fun to not only show the current challenges but also how they have evolved over time. And please remember that this is purely my opinion – no statistics were harmed in writing this blog.

 

The * denotes that the previous challenge is still relevant.

 

Increased Business Scrutiny

 

  1. Do more with less > Deliver more with less* > Reduce costs AND improve service (although see #2)
  2. Reduce costs* > Improve efficiency* > Demonstrate business value
  3. We need governance > We need better governance* > Help, we need governance

 

Increased Business & IT Complexity

 

  1. Achieve IT to Business alignment > Understand more about the business* > Be part of business operations
  2. Manage technology domains > Manage IT services* > Manage multi-supplier sourcing scenarios (which includes outsourcing, SaaS, and cloud)
  3. “Command and control” IT > Business function IT development > “Shadow IT” (especially “unsanctioned” cloud adoption)
  4. “Keep the lights on” > Innovation* > Support growth and competitive advantage (increase the 20 in the 80:20 spend profile)
  5. Technology complexity and opportunity: Mainframe > Client server > Web, mobile, social, “Big Data,” cloud, and BYOD
  6. Build IT infrastructure > Maintain legacy infrastructure* > Source third-party infrastructure services
  7. Build applications* > Application rationalization* > Build mobile apps
  8. Skill shortages > People shortages* > New skill shortages (to manage services) and embracing the rise of automation
  9. One song remains the same: Security > Security > Security

 

And I deliberately moved the first four bullets to the top of the list to highlight the importance of IT management challenges over purely technological ones.

 

Increased Business & Customer Expectations

 

  1. Customer satisfaction > User experience > Service experience (consumerization)
  2. CIO role: IT visionary > Infrastructure custodian > IT/CIO relevancy/irrelevancy based on what they do now and plan for later
  3. Need for IT > Need for more IT* > Need for speed (agility)
  4. Support multi-site operations > Support global operations > Support anytime, anyplace, anywhere operations (mobility)
  5. Knowledge retention > Knowledge management* > Knowledge exploitation, community, and collaboration

 

So that’s my list and, like Marmite, you might love it or loathe it. But I believe it’s healthy to put your opinions “out there” whether they are fully formed (that’s my polite way of saying that we are not always correct) or not.

 

Finally, if you are at the 2014 HDI Conference in Orlando this week and fancy a chat, look out for a chubby Englishman most likely holding a treasure chest by the ServiceNow booth (BTW it’s not me in the photo).

 

[Photo: the ServiceNow booth]

This is Part 2 of the Cost Modeling Your Cloud series. Part 1 is here.

 

One of my least favorite questions from potential suppliers is whether ServiceNow is CAPEX or OPEX driven, as if excelling in one forgives sins in another. My answer is always the same: we account for both.

 

Most of us know the old joke: lies, **** lies, and statistics. I would happily add case studies and TCO models to that list. The assumptions and data elements that feed them are contextual, and the vast majority of sales and marketing content completely ignores this context.

 

So how then does ServiceNow make rational decisions about what to buy when building our cloud? And how can you?

  • Know your numbers
  • Develop the model
  • Be consistent

 

** Disclaimer: Math ahead. Actual costs and data points obfuscated for confidentiality. **

 

Develop the model

 

Each configuration you compare should have all of these elements. For example, if an App server has a CAPEX of $5,000 and will draw 150W, we would estimate OPEX at $4,320 (150W × $28.80 per watt). The TCO of $9,320 now allows you to compare the overall costs of devices that may be more power efficient but initially more costly, or vice versa.

 

Then you simply divide to get the cost per performance unit or cost per customer. The $/perf for a $9,320 Server A with 5,555 TPS supporting 32 customers would be $1.68, and cost per customer would be $291.25.

 

Alone, these numbers are meaningless, but it becomes simple to evaluate new scenarios with this data.

  • $12,000 server B with 8,888 TPS? $1.35/perf unit.
  • Dropping Server A memory and power draw by 50% from 150W to 75W for Server C? TCO of $7,160 and $1.29/perf unit.

 

These all sound great, don’t they? But now the fun begins and the model becomes more nuanced: Divide by customer.

  • Server A & B both support 32 customers, so performance per customer is 174 TPS and 278 TPS (60% higher) respectively. However, cost per customer rises from $291 to $375 (29% higher), which may not be palatable for your business model.
  • Server C can only support 16 customers, so performance per customer ends up at 347 TPS (roughly 2X) and $448 (54% higher).
    • Server C also has the added benefit of reducing failure domain by half, which is not covered in this post.

 

You can then make informed selections depending on business priorities.

  • Need to be cost optimized? Server A looks good.
  • Want the best customer performance at any cost? Server C is your choice, even though each server is cheaper than Server B and has less overall capability.
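The whole comparison fits in a few lines of Python. Two assumptions are baked into this sketch: the $28.80 lifetime cost per watt implied by the Server A example, and that Server B's $12,000 is its full TCO (which is what makes the stated $1.35/perf work out):

```python
# Sketch of the TCO and cost-per-performance arithmetic above.
OPEX_PER_WATT = 28.80  # assumed lifetime power cost per watt of draw

def tco(capex_dollars, watts):
    return capex_dollars + watts * OPEX_PER_WATT

servers = {
    # name: (TCO in dollars, TPS, customers supported)
    "A": (tco(5000, 150), 5555, 32),
    "B": (12000.0,        8888, 32),  # assumed: $12,000 is full TCO
    "C": (tco(5000, 75),  5555, 16),
}

for name, (total, tps, customers) in servers.items():
    print(f"Server {name}: TCO ${total:,.0f}, "
          f"${total / tps:.2f}/perf unit, "
          f"${total / customers:,.2f}/customer")
# Server A: TCO $9,320, $1.68/perf unit, $291.25/customer
# Server B: TCO $12,000, $1.35/perf unit, $375.00/customer
# Server C: TCO $7,160, $1.29/perf unit, $447.50/customer
```

If your hardware lifecycle or power pricing differs, only the OPEX_PER_WATT constant needs to change.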

 

Here they are built into a simplified model:

 

[Image: servers A, B, and C in a simplified cost model]

 

Once you use your model a few times, you will quickly develop a sense for how a change will impact the model. Minor CAPEX increase with moderate OPEX decrease? Probably not much change. Newer parts at the same cost as the old ones? Flat costs with nice performance improvements.

 

At ServiceNow, we optimize for the highest combined and individual performance per dollar possible, so we would probably go with Server B.

 

Be consistent

 

Next time we cover how to use this data to better manage your overall evaluation decisions.

The annual Pink Elephant conference was a busy time for AXELOS. Not only did it announce new staff and a new cybersecurity offering, AXELOS also ran two sessions: one on the ITIL* roadmap and another about how they have been engaging with the IT service management (ITSM) community on the future of ITIL.

 

This blog concentrates on the latter. Or, more specifically, the results of a community-based survey undertaken late last year. I offer a selection of the results below; the full survey will be issued by AXELOS in due course. **

 

Awareness of ITIL

 

Firstly, the similarities and differences across geographies are interesting. The circa 10% of respondents who have heard of ITIL but not studied it ** is nigh on uniform across the Americas, EMEA, and Asia Pacific. As is the circa 40% that, while they have studied ITIL, have yet to gain any certifications (interestingly, bar North America, which is at a lowly 20%). A reflection of the importance of certification in North America?

[Chart: ITIL awareness and certification by geography]

AXELOS considers North America to be its largest market for ITIL, which is reflected in the percentage of respondents that have achieved ITIL certifications, albeit with the smallest number of post-Foundation certifications. We could speculate as to why this is (for instance, that Foundation level is seen as good enough), but it requires further analysis as to why people in North America aren’t moving forward with harder-to-get certifications (other than the fact that they are harder to get, of course).

 

The value of ITIL by role

 

The survey also looked at the perceived value of ITIL, and the view is pretty consistent across roles: ITIL is valuable to IT organizations. What is most interesting, though, is the C-level enthusiasm relative to other IT roles.

[Chart: perceived value of ITIL by role]

Again we could speculate as to the reason(s): that the most senior of IT management have inflated expectations as to the potential of ITIL, or that lower-level staff have been worn down by the trials and tribulations of adopting ITIL beyond incident, problem, and change. Or maybe even the upheaval from a decade or more of changing ITSM tools in pursuit of improved operational performance and greater ITIL adoption has caused discontent with what is ultimately a best practice framework, not a miracle cure.

 

Some might consider it evidence that C-levels are somewhat divorced from operational reality or that they are blissfully unaware of the difficulties of adopting the less common ITIL processes (beyond incident, problem, and change). But again it requires further analysis for us to be certain.

 

ITIL’s continued importance in light of cloud and agile

 

As with the previous data set, there is a consensus about the value of ITIL going forward. And again the C-level and similar roles have the greatest expectations of, and faith in, ITIL, with other IT roles twice as pessimistic in choosing the “ITIL is not valuable or is marginally valuable” and “ITIL is somewhat valuable” options. But 62%, so nearly two-thirds, of lower-level roles still see ITIL as very or exceedingly valuable for future IT operations.

[Chart: ITIL’s perceived future value by role]

The reason(s)? I can only give you my personal opinion. It’s similar to the previous speculation, but this time I think we need to look at the multi-dimensionality of this: there is the suitability of a best practice framework to help, and then there is the ability of people to use that best practice in anger. Having the fastest car in Formula 1 doesn’t mean that you will be world champion, but it has to help.

 

But there are other dimensions too: how ITIL has been sold to organizations and individuals (as the proverbial silver bullet) and whether people understand its power and limitations (and possibly the time and money that needs to be invested beyond the certifications). Successful ITIL adoption is about so much more than exam certificates and, dare I say it, having the best ITSM tool(s). Sadly we come back to that age-old chestnut of people, culture, capabilities, mindsets, performance measurement, and the like.

 

In summary

 

Firstly, good on AXELOS for living up to its promise of taking a global approach to the future of ITIL. You might have also seen them on the road as they appear at the majority, if not all, of the annual itSMF chapter conferences.

 

Secondly, it's great that ITIL is still seen as valuable by IT professionals, albeit with a disconnect between the opinions of top management and other IT roles. That disconnect needs to be addressed by reselling ITIL with a greater foothold in reality, rather than as the previously too-prevalent miracle cure for all of IT's ITSM woes.

 

Well that’s as far as this blog goes. I’d love to hear your opinions on the stats I have shared here.

 

 

* The IT service management best practice framework formerly known as the IT Infrastructure Library

** The sample size is 380 and it should be noted that the results relate only to respondents who had heard of ITIL.

 

All diagrams are provided courtesy, and remain the property, of AXELOS

One of my least favorite questions from potential suppliers is whether ServiceNow is CAPEX or OPEX driven, as if excelling in one forgives sins in another. My answer is always the same: we account for both.

 

Most of us know the old joke: lies, **** lies, and statistics. I would happily add case studies and TCO models to that list. The assumptions and data elements that feed them are contextual, and the vast majority of sales and marketing content completely ignores this context.

 

So how then does ServiceNow make rational decisions about what to buy when building our cloud? And how can you?

  • Know your numbers
  • Develop the model
  • Be consistent

 

** Disclaimer: Math ahead. Actual costs and data points obfuscated for confidentiality. **

 

 

Know your numbers


ServiceNow has 2-3 primary server roles, depending on the generation:

  • front-end App servers
  • back-end Database (DB) servers
  • Backup servers


Each server role has, or can support, these elements:

  • capital acquisition cost (CAPEX)
  • operational cost (OPEX)
  • customer capacity
  • performance


All four elements must be quantified.

 

CAPEX is usually easy: it’s what the quote says. Most vendors come back at a later date with better pricing, so it is helpful to be transparent up front about the size of the potential business so that pricing doesn’t shift much later in the process.

 

OPEX gets dicey the moment people costs are included. Since people costs can be calculated many different ways, we generally exclude them from evaluation cost models and only account for them at the end of a decision cycle that recommends a significant shift in operational practice. If we are simply refreshing a hardware platform, modeling people impact is overkill.

 

For most online services, data center space, power, and cooling are the dominant server operational costs. ServiceNow uses a weighted cost per watt (W) that includes all three to account for OPEX over a 3-year server lifecycle. For example:

  • Weight and average the cost per kW per month across our providers (e.g. $800/kW-month)
  • Divide by 1,000 to convert kW to watts (e.g. $0.80 per watt-month)
  • Multiply by an estimated 36-month lifecycle (e.g. $28.80/W)


Your data center contracts may be written differently, but the goal is to normalize these and arrive at a cost per watt over a defined lifecycle. A longer term obviously increases Total Cost of Ownership (TCO), but weights it more heavily toward OPEX.
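The normalization above fits in a few lines of code. A minimal sketch, assuming the illustrative $800/kW-month figure and 36-month lifecycle from the example (not real ServiceNow costs):

```python
# Normalize a weighted data center cost (quoted per kW per month)
# to a per-watt cost over the server lifecycle.

def lifecycle_cost_per_watt(cost_per_kw_month: float, lifecycle_months: int) -> float:
    cost_per_watt_month = cost_per_kw_month / 1000.0  # kW -> W
    return cost_per_watt_month * lifecycle_months

# Example figures from the text: $800/kW-month over 36 months.
print(lifecycle_cost_per_watt(800.0, 36))  # -> 28.8 ($/W over 3 years)
```

Extending the lifecycle simply scales this number up, which is why a longer term shifts TCO toward OPEX.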

 

Here is a sample weighted and averaged kW-month table based on a random set of rack draws and quantities that results in an $800/kW-month weighted average.

KW Cost.png

If space is accounted for separately from power and/or cooling, amortize it across the planned draw of the cage/contract.

 

Now you need to know how many watts your devices use, and how heavily the devices will be used. Most manufacturers provide “nameplate” guidance and power calculators (e.g. dell.com/essa, shown below). Simply enter the configuration you are testing into the calculator, and it should provide the max potential draw at 100% utilization.


A few optimized services can achieve 100%, but most can’t. ServiceNow intentionally uses a 60%** utilization target for power and over-provisions performance to handle transient customer load demands. For example:


A sample App server draws 200W at 100% and 150W at 60%.

 

power calc.png

 

Customer capacity will be application-dependent and will vary with the roles in your environment. In our case, App sizing is memory-based (e.g. 1GB RAM/customer, so an App server with 32GB could support 32 customers) and DB sizing is based on a combination of size and transaction rate. The important thing is to have this defined before building your cost model.


Performance testing will depend on your application, but each configuration under review should have the same test run against it. For example, let’s assume our App server can process 5,555 transactions per second (TPS). We make sure to use benchmarking tools that represent customer load as accurately as possible. We tune public tools like SysBench MySQL and SpecJVM to match our application and also use a variety of in-house benchmarking tools to gauge device capability.
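Pulling the sample figures above together – the $28.80/W lifecycle cost, the 150W draw at the 60% utilization target, and 32 customers per App server – a rough per-customer power OPEX falls out directly. This is only an illustrative sketch using the made-up numbers from the text, not real ServiceNow data:

```python
# Illustrative per-customer power OPEX for the sample App server,
# using the example figures from the text (not real ServiceNow data).

LIFECYCLE_COST_PER_WATT = 28.8  # $/W over a 36-month lifecycle
DRAW_AT_TARGET_UTIL_W = 150     # measured watts at the 60% utilization target
CUSTOMERS_PER_SERVER = 32       # 32GB RAM at 1GB/customer

power_opex_per_server = DRAW_AT_TARGET_UTIL_W * LIFECYCLE_COST_PER_WATT
power_opex_per_customer = power_opex_per_server / CUSTOMERS_PER_SERVER

print(round(power_opex_per_server))    # -> 4320 ($ per server over 3 years)
print(round(power_opex_per_customer))  # -> 135 ($ per customer over 3 years)
```

Note that the 150W figure comes from measurement at the utilization target, not from scaling the 200W nameplate number linearly (see the footnote on utilization below the post).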


Develop the model


Next time we will plug these elements into a cost model. It may be difficult to assemble these numbers, but without them it will be extremely difficult to develop any useful conclusions.

 

** Utilization targets are not linear. If a device uses 100W at 100% utilization, 60% utilization could very well be in the 70-85W range, not 60W. A mistake at this point could eventually lead to significantly under- or over-provisioning capacity.

May I have your attention please? *

 

Ah “Shadow IT,” it sounds so seedy and a little bit scary. And sadly the phrase has nothing to do with Blade Runner’s rogue replicants. But it is in vogue, and even has its own Wikipedia entry. Interestingly, the Wikipedia definition seems to be a little tighter than what I see elsewhere:

 

“Shadow IT is a term often used to describe IT systems and IT solutions built and used inside organizations without explicit organizational approval”

 

Did you spot it? This definition talks of “without organizational approval” rather than of not being provided or sourced by the IT organization. Although I guess you could argue that the IT organization is usually the corporate mechanism for organizational approval.

 

But I digress. The point that I intended to write about is the old-skool “not on my network” IT mentality that crafted, and continues to use, the term “Shadow IT.” Or “Stealth IT.” Or even “Rogue IT.” It’s like we have built a fort on a hill, surrounded it with a moat, and pour boiling oil on anything that is not of our own creation. (Warning: similar gross generalizations will follow).

 

So, I nearly called this blog: “What IT Can Learn From Dallas Buyers Club”

 

I loved Dallas Buyers Club. It’s an inspirational movie, if you haven’t already seen it, and I’d be surprised if it doesn’t win one, if not both, of the male actor Oscars this year.

 

Based on a true story, the lead character, Ron Woodroof, felt failed by the US health system and used his own initiative and resources to import unofficial “medications” to help AIDS patients (including himself) live as long, and as comfortably, as possible.

 

Does it sound familiar from an IT perspective too? The system being circumvented by “unhappy” users of IT services? Such opportunities are there to be taken.

 

But “Shadow IT”? Really?

 

To me it evokes mental images of Nosferatu-like employees hiding in dark alleyways as they secretly add new business-enhancing IT capabilities out of sight of their corporate IT overlords. With the garlic-laden IT professionals doing what they can to expunge the filthy practice that encroaches on their domain.

 

Nosferatu.jpg

 

Sadly, they see all the dangers of “unofficial IT” but none of the upside. They also conveniently ignore the drivers of such behavior – the inability, perceived or otherwise, of the corporate IT organization to deliver, in a timely manner, against business function needs for new technology enablement.

 

So consider Shadow IT from the customer or consumer perspective (and, again, please forgive my gross generalizations and evilness): we, in IT, are the Shadow IT. We are the mysterious group that operates in the shadows, where people hide behind a bucket email address and process-enabled barriers. Left to our own devices (that’s at least the third Pet Shop Boys link in this blog BTW) we’ve created an island far from the corporate mainland, where the inhabitants have no comprehension of the need for business and IT agility. Who knows, what we in IT call Shadow IT might be called “Easy IT” on the corporate mainland.

 

Given the lead time and barriers to new IT, who could blame employees for thinking of the corporate IT organization as the place where their IT requests go to die?

 

Does IT live in a glass house (and it’s a house with no mirrors)?

 

I still get annoyed when I see articles or blogs that talk of corporate IT organizations defending their territory from cloud, BYOD, and Shadow IT. It’s sub-optimal behavior at best.

 

Why do we keep reinforcing the mentality and behavior that has IT at the center of the corporate universe? Why do we continue to think that the corporate IT organization has a right to exist? It doesn’t. Like any other part of the enterprise it has to earn that right by fulfilling needs and demonstrating business value. Yes it might sound fluffy but it’s very real (if not well defined).

 

So let’s lower our defenses and look at Shadow IT through a different lens. Yes, there might be maverick colleagues procuring cloud services because they think they know better than IT, or because they don’t like IT. But telling them that they can’t do it is not the answer – think back to limiting employee access to the internet; that didn’t really work, did it? Personally, I’ve been there, done that, and bought the t-shirt.

 

Instead let’s use some of those ITIL-espoused capabilities that we probably don’t use as much as we should: business relationship management, continual service improvement, service portfolio management, empowerment with governance, or even root cause analysis to understand the reasons for Shadow IT. Let’s take IT out of the shadows wherever that Shadow IT might sit.

 

I often describe IT as riding a bicycle with big pedals and big wheels but no brakes. And there is no benefit in pedaling so fast if we can’t occasionally slow down to see whether we are in fact pedaling in the right direction.

 

So, if you only do one thing after reading this blog, please look at Shadow IT from the employee perspective – you might just find that it is you.

 

As always, your thoughts and opinions are appreciated and actively encouraged. Feel free to call me out, and bonus points for any PSB references.

 

Image source = Flickr: plong's Photostream

* Title and first line courtesy of Eminem

In a recent Computing.co.uk article, journalist Danny Palmer wrote about the results of a cloud research project conducted by EasyInsites on behalf of managed service provider Adapt. The headline was: “Businesses can't get all they want from one cloud provider – Adapt.” And the key statistic was:

 

“Of those businesses surveyed, 53 per cent don't believe that a single cloud provider is capable of meeting all of their requirements.”*

 

But there was another statistic in the article that resonated more with me, that …

 

“… Just a quarter feel that cloud providers really understand their businesses”

 

For me it was an interesting question to ask – “do your cloud providers understand your business?” It is also an important question, if you recognize that cloud providers (or cloud service providers) provide a service, or services, rather than technology.

 

IMO, however, not only is this question (about whether the providers understand their customers’ businesses) important; so is whether the corporate IT organization thinks it is important for the provider to know. Surely, if the corporate IT organization has never felt it wise to invest time in understanding the business, why would it expect third-party suppliers to do so? Also, consider the following question:

 

“If a cloud, or any other external, service provider asked your corporate IT organization questions about the business, would they be able to respond correctly?”

 

How would you reply? Yes? No? Maybe? Or “eek”?

 

And could they do more harm than good in answering the question? Of course the cloud service provider could be directly engaging with a line of business (or other business function) but please bear with me on the importance of business knowledge …

 

IT to business alignment? At least 10+ years of aspiration

 

Notwithstanding the fact that IT is actually part of the business, not something to be aligned with it, the connectivity of IT strategies, plans, policies, and operations with the needs of other business functions is often questioned. A Forrester blog from November 2012 contains a great graphic which shows this disconnect between corporate IT organizations and the people they serve – in this instance that “the business doesn’t rate IT very well (and sometimes IT doesn’t rate itself well).”

 

This is nothing new, and hence the section title. But the issue continues to be important, if not more important than ever.

 

Consider the findings of the Pink Elephant Think Tank

 

At the recent Pink Elephant conference, a pre-selected group of IT service management (ITSM) industry notables came together in a Think Tank. And, whilst the real fruits of their labors will be outlined in another blog, there was a key point from Think Tank member Charles Araujo. That:

 

“IT really needs to start understanding the business.”

 

Which was an interesting point for Charlie to make after 30+ years of corporate IT functions. And after 10+ years of talking about aligning IT to the business. But I can’t disagree with him.

 

Services require consumption; consumption requires need …

 

… and those needs need to be understood.

 

I’m a mathematician at heart so I can’t help trying to be logical. For me, services must meet a need to be consumed. It’s all a QED thing.

 

Unlike traditional IT where infrastructure and applications are built or bought and maintained, third-party delivered services are defined, sourced, and paid for (and paid for again and again and again). Hopefully with some form of service management activity too. Where the associated service mentality and service delivery model is no longer about technology itself but meeting business needs.

 

James Finister, another Pink Think Tank member, summed it up nicely when he stated that the SIAM contracts that his team works with (at TCS) are not written in terms of servers and storage. Instead they are, for example in the case of a large automobile manufacturer, written in terms of automobiles being produced. It makes sense, the supplier needs to understand the customer’s business. But can they?

 

Yes of course they can. But how easy will it be for them to do so if they are working through a corporate IT organization that doesn’t know enough about their business, from corporate purpose through to operations? It’s a worrying thought, that the corporate IT organization is potentially an inhibitor to third-party services.

 

So where is your IT organization in terms of business understanding (I don’t want to hear talk of alignment)? What have you done to improve that understanding? And what’s your view of cloud, or any third-party, service provider needing to know more about their customers?

 

I hope you will take the time to comment.

 

* This was a survey of UK businesses

We have a curse jar in the office. Every time someone says "When I was at X, we Y", referencing prior employer X, 25 cents goes into the jar.

 

ServiceNow is neither search engine nor online retailer. The goals and constraints we have are unique in their combination, and this uniqueness requires us to make decisions with fresh eyes instead of simply replicating past experience. The curse jar requires us to pause and consider to what degree prior experience applies.

 

These are some of the differences my team accounts for when evaluating or designing the physical environment and hardware platform that runs ServiceNow, and each of these could be a subject of numerous posts.

 

Business vs Consumer

Businesses pay for a higher level of service than consumers generally receive. A Netflix outage simply forces the consumer to watch a movie via Vudu or Amazon. There aren't analogous "backup" services that our customers can effectively fall back on if we go down. Their professional lives are in our hands, and so we architect heavily for resilience and keep extensive backups. We have also made a number of changes for our 2014 server platform to reduce restore times.

 

Business Critical vs Productivity Tool

If Workday goes down, you can still use your insurance to see a doctor. If SalesForce is down, you can still sign a contract or close a deal with a customer. If ServiceNow is down, many of our customers’ businesses stop functioning. Parts stop flowing from suppliers, halting assembly lines; home loans aren’t issued; and doctors can’t request prescriptions or radiological scans. Check out how CERN relies on ServiceNow. As a result, many of our customers want or need 100% uptime. Serviceability and potential impact on SLA become key evaluation criteria at each level of the technology stack.

 

Binary vs Qualitative

Complicating the desire for dial-tone service level or 100% uptime is that ServiceNow usually appears On or Off to the customer. Website availability doesn't step down from Super HD to Regular HD. The application has relatively low bandwidth requirements and isn't very latency sensitive so outages appear "complete" to a customer, even if the customer beside them is operating perfectly. We overbuild certain aspects of our server platform (like IO) so that we degrade gracefully under stress (like MySQL working set swaps).

 

Customization vs Guard Rails

The ServiceNow product and platform are significantly more customizable and extensible than SalesForce, for example. We don't have the same guard rails, which means we have to engineer for broader use cases. Nothing prevents a customer from creating an inefficient query or report that monopolizes the system at the expense of other users, so we invest heavily in customer sizing and isolation to ensure this doesn't happen. (Allan and Tim discuss this in more detail here.) We intentionally invest more to gain better performance from a server, processor, DIMM or storage device instead of trying to save a few pennies. This better prepares us to handle the load that comes from inadvertent customer mistakes, unpredicted customer load and an intentional lack of guard rails.

 

High Revenue/Server vs Low Revenue/Server

These numbers can be difficult to quantify as core data isn't often shared. Some of these are best-guess estimates based on public data.

 

eBay recognized roughly $38k in quarterly revenue per server (roughly $2B over 52.5k servers) in Q4 2013 (http://tech.ebay.com/dashboard). With an install base of 4.5k servers, ServiceNow was closer to $30k. This is roughly 2x Google’s number and 3x Facebook’s. Microsoft and Amazon are even more difficult to quantify given the use and profitability of subsidized business lines, but generally speaking we do a much better job of extracting value from servers than most online service providers. A colleague of mine mentioned that we are 53X Azure.
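The eBay figure is straightforward to reproduce from the public numbers cited. A quick sanity check on the arithmetic (the inputs are the rough estimates above, not audited financials):

```python
# Quarterly revenue per server, from the rough public figures cited above.

def revenue_per_server(quarterly_revenue: float, server_count: int) -> float:
    return quarterly_revenue / server_count

ebay = revenue_per_server(2e9, 52_500)  # ~$2B over ~52.5k servers, Q4 2013
print(round(ebay))  # -> 38095, i.e. roughly $38k per server per quarter
```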

 

But more interesting than raw revenue is what it means for our footprint. Operating with this degree of efficiency eliminates the need for us to build our own data centers. A smaller footprint also means that the minor efficiency gains possible at 300,000+ servers would not be realized until well after the end of our hardware lifecycles. The engineering effort to build or ODM our own servers has negative ROI in the short term.

 

We optimize for scale as much as possible, with the knowledge that we simply don't need the volume. We focus on cost-effective improvements that result in immediate value for our customers.

 

Blast Radius

Monolithic solutions like SANs or blade servers commonly make financial sense at the project level, or for IT consolidation projects, but failures can take out large swaths of customers. We tend to select technology that scales alongside customer growth (e.g. 10 customers = 1 server) at the smallest inflection point that makes financial sense. For example, we would not consider doubling customer density on a server just to save 2%. We would rather spend the 2% and have only 50% of that customer set affected in the case of an outage.

 

Data Volume

We move a relatively small amount of data across our network and across the Internet to our customers, but the value of that traffic is exceptionally high. A Netflix user may pay $7.99/month and consume 250GB of data, whereas a ServiceNow customer may pay hundreds of thousands of dollars for the same quantity of bits. The integrity of these bits is of exceedingly high priority, so we maintain multi-site replication with backups at both sites.

 

Bleeding Edge vs Tried and True

While we evaluate new technologies every day, the hard truth is that we tend to remain conservative to introduce the smallest amount of risk into the production environment. Even staid technologies are fully tested, evaluated, and onboarded as if they were bleeding edge to ensure there is no disruption when they are introduced en masse.

 

 

Bringing all of these into balance seems daunting and occasionally impossible (e.g. traditional HA solutions tend to be more monolithic and complex vs distributed and simple), but we have a rigorous evaluation and design process to ensure each server platform generation is the best possible combination. More on that later.

When I was a callow youth living in MIT's Baker House, I was a big fan of wall posters. One of my favorites looked different from this one, but carried the exact same message.


The Truth Will Set You Free Poster.jpg


Another favorite, so old that I can find no online image of it, asked "How can you fly a starship if you can't clean your room?" A worthy question, indeed.


Both of these images have been recurring in my mind's eye recently, as I've been thinking a lot about enterprise IT, innovation and transformation. Here's why.


In this New Age of Service, when every business service relies upon IT, IT must be transformed into an excellent provider and manager of services. IT must then extend that transformation to other shared services groups across the enterprise. But here's the misery-inducing (or -increasing) truth. Legacy solutions and processes are simply unable to provide the flexible, agile, powerful, easy-to-use "service fabric" the modern, service-centric enterprise requires.


Meanwhile, too many IT teams are still mired in a morass of outdated, reactive, break-fix-focused tasks, processes and tools. Or to paraphrase that second poster, how can IT innovate or transform itself if it can't clean up its incident or trouble ticket backlog?


Fortunately, there is a way out. And it's being demonstrated and documented by ServiceNow customers, as they use the power and flexibility of ServiceNow to begin and accelerate innovation and transformation, within and beyond IT. How? By using ServiceNow to:

  • Consolidate, standardize and globalize IT services, systems and processes onto a single system of record.
  • Deliver a consumerized service experience to customers, internal users and external business partners.
  • Implement "lights out," "zero-touch" automation to replace manual, redundant activities.
  • Extend successful solutions first proven in IT to manage service relationships effectively throughout and even outside of the enterprise.

 

Each of the first three of these steps makes IT less reactive and more agile, efficient and forward-looking. Which in turn improves the quality and availability of all the services enabled by IT. And each creates opportunities for innovation and transformation within IT. But it's that fourth one that marks the beginning of IT-led, IT-enabled, enterprise-wide transformation via modern custom applications and true service relationship management.


If you (or your colleagues, or your boss) want to see ample, credible evidence of the above, check out the finalists from ServiceNow's 2013 Innovation of the Year competition. Then watch the winners from Target discuss and display their winning custom application in this on-demand Webinar. (Free; minimal registration required.) To see much more such evidence, in real life and real time, you and they must come to Knowledge14 in San Francisco. And if you and your team are already on the path towards innovative transformation, within or beyond IT, get your application for this year's Innovation of the Year competition in by March 31.


Once IT begins to innovate with ServiceNow, transformation is within reach, for IT and beyond. Transform IT. Transform the enterprise.
