
Long promised: I have finally found the time to write about loading spinners in Service Portal.

Probably everybody has (or has not) seen the three small dots in the Service Portal header when there is work happening in the background.



This is a great way to indicate to the user that the browser still has work to do.

However, I personally find that those dots might not draw the attention you would want, so let's have a look at how you can override them with your own loading spinner.


The good news is: this is really, really simple and quick.


In the next few steps, I will describe everything that is needed to get a new loading spinner into your portal.

Attached to this blog, there is also an update set containing the exact result of the steps described below.


1. Picking a new loading spinner & uploading the SVG file

There are plenty of great resources out there where you can create your own loading spinners. A lot of them will be CSS based, but I personally prefer the .svg (Scalable Vector Graphics) spinners, since those allow you to simply upload the new spinner into the Images [db_image] table and then use it in your Header Menu widget with the <img> tag, rather than re-creating all the CSS.


Here are a few resources (in no particular order, though one of my favourites is

For the following example, I will use the spin loading image from


Click Get SVG and upload the SVG file to the Images [db_image] table with the name portal-loading-spinner.svg, or pick your own name; just remember that you will have to use it in Step 3.


2. Explaining the out-of-the-box loading spinner

The code snippet responsible for rendering the loading spinner is included in the HTML part of the Header Menu widget.

<div class="header-loader" ng-show="loadingIndicator">
    <div class="hidden-xs sp-loading-indicator la-sm">
    </div>
</div>

The sp-loading-indicator class on the inner <div> element refers to the sp-loader.css file, which ships with Service Portal. This is the CSS class that contains all the styles for rendering the three dots. We will get rid of this <div> in a second, but let me talk about the ng-show on the loadingIndicator variable first.

The whole loading spinner <div> is only shown when the loadingIndicator variable is set to true. But where does this happen?


Have a look at the Client Script part of the widget and you will find the following three lines:

$scope.$on('sp_loading_indicator', function(e, value) {
  $scope.loadingIndicator = value;
});


By listening to the sp_loading_indicator event, which is again provided by Service Portal, the widget determines the value of the variable.

In line 2 of that widget's Client Script, the variable is initially populated with the value of the same variable on the $rootScope object.

$scope.loadingIndicator = $rootScope.loadingIndicator;


Essentially, everything is already prepared for us; we won't have any work to do on the client side. We just have to make sure that our new loading spinner (the .svg image) is rendered in the HTML part.
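Conceptually, this broadcast/listen pattern is easy to reason about in isolation. Here is a minimal plain-JavaScript sketch of it (a stand-in for Angular's $broadcast/$on machinery, not Service Portal's actual implementation; the EventBus name and code are our own):

```javascript
// Simplified stand-in for Angular's $broadcast/$on event bus,
// illustrating how the 'sp_loading_indicator' event drives the spinner.
// The $on/$broadcast names mirror Angular's API; the implementation is ours.
function EventBus() {
  this.listeners = {};
}
EventBus.prototype.$on = function (name, fn) {
  (this.listeners[name] = this.listeners[name] || []).push(fn);
};
EventBus.prototype.$broadcast = function (name, value) {
  (this.listeners[name] || []).forEach(function (fn) { fn({}, value); });
};

var scope = { loadingIndicator: false };
var rootScope = new EventBus();

// Same shape as the listener in the widget's Client Script
rootScope.$on('sp_loading_indicator', function (e, value) {
  scope.loadingIndicator = value;
});

rootScope.$broadcast('sp_loading_indicator', true);   // spinner shows
console.log(scope.loadingIndicator);                  // true
rootScope.$broadcast('sp_loading_indicator', false);  // spinner hides
console.log(scope.loadingIndicator);                  // false
```

Service Portal itself broadcasts sp_loading_indicator while work is happening in the background; the widget merely mirrors that value into its own scope.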


As the Header Menu is an OOTB widget, we first have to clone it so we are able to edit it. As a reminder: this is common practice for all OOTB widgets and ensures that ServiceNow does not override your changes during the next upgrade.


3. Cloning the Header Menu Widget and overriding the loading spinner

Open the Widget Editor (Service Portal -> Service Portal Configuration -> Widget Editor) and select the Header Menu Widget.

Click the Hamburger Menu beside the Save button and Clone the Widget.


You will not have to work in the Client or Server Script, so you can deselect those parts.

In the HTML part, replace the part described in Step 2 (the whole header-loader <div>) with the following snippet:

<div class="header-loader" ng-show="loadingIndicator">
    <div class="hidden-xs la-sm">
      <img src="portal-loading-spinner.svg" class="loading-spinner"/>
    </div>
</div>

Make sure the src of the <img> tag matches the name of the image you uploaded in Step 1.


Hint: to test your loading spinner, remove the ng-show directive so that the spinner is always displayed (or change it to ng-hide). You can then right-click + Inspect in your browser and modify padding, margin, sizing, etc. to temporarily adjust and test your styling. Once you have the styling figured out, revert the change and adjust the CSS class.


Now we only need a few minor CSS changes. Within the CSS part, add the definition for the class loading-spinner and make some small adjustments to the header-loader class.


Here is the OOTB CSS for the header-loader class:

.header-loader {
  float: left;
  width: 24px;
  position: relative;
  top: 24px;
}


Change it to the following and add the loading-spinner class:

.header-loader {
  padding: 5px;
}

.loading-spinner {
  width: 50px;
  height: 50px;
}



4. Adding the Header Menu to your Portal

The last step is to use the new Header Menu in your Portal.

Navigate to your Portal record in the platform view, open the Main Menu that is assigned to your Portal, and simply change the widget reference (or keep it if you already had your own widget to which you added the changes described above).


And that's already it! With a few minor changes, we created our own loading spinner.

That's how it will look (I did not add any menu items to my new header menu):




Obviously, you could also pick a CSS-based spinner. In that case, you would have to add the corresponding CSS to the CSS part of the widget (or to a CSS include related to your portal). In my opinion, SVG files are a great alternative to save yourself some work while still keeping things lightweight (the spinner used here is only 3 KB).

Keep browser support in mind when using SVG files. In general, all major browsers support .svg files; only IE8 and below will not be able to render them.


If the new spinner is still not obvious enough, you could certainly take all of this and render it, e.g., in a Bootstrap modal right in the center of the screen. That will hopefully be sufficient.


Credits go out to daniel.conroy and napike, who sent me down the right path.


Next time, read about integrating a Google Custom Search Engine into your Service Portal!



The Service Portal has an extremely useful feature called Record Watch. Record Watch allows you to configure a listener function that notifies your widget when certain database actions take place. When you have a Record Watch function configured, your widget can automatically adjust itself accordingly.


In this example, I am going to explain how I added a Record Watch listener function that automatically increases the size of a bar in a bar chart when a matching record is added. This will build on a previous post of mine which can be found here, so this post will strictly focus on the Record Watch portion.


Record Watch Function


First, you'll want to inject spUtil into your client script function parameters. I'll post my full client script at the end in case you aren't sure where to put this.


Here is my Record Watch function which I will walk through:


spUtil.recordWatch($scope, "incident", "active=true", function(name, d) {
  if (d.action == 'entry') {
    for (var i = 0; i < $scope.activeData.length; i++) {
      if (d.record.category.display_value == $scope.activeData[i].category) {
        $scope.activeData[i].value++;
        $scope.updateBars($scope.activeData);
      }
    }
  }
});

In the first line, we call the recordWatch function from spUtil. After the scope, the next parameter we pass is the table we want to listen to, and the one after that is the filter, so we only get notifications for the specific types of records we want. Lastly, we pass an anonymous function that allows us to make sense of the notifications we receive from Record Watch.


We are passing the parameters name and d to our anonymous function. The name parameter provides information about the update; the d parameter contains the action type as well as the data from the record that was changed. I encourage you to log these two objects to your console so you can explore them and get a better feel for what we get back from Record Watch.


You can see that inside my anonymous function I am only looking for inserted records, by using if (d.action == 'entry'). When I get a matching notification, I check the newly created incident's category and increment the bar that has a matching category.
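To make that logic easy to reason about outside a widget, the matching-and-increment step can be factored into a small pure function. This is a hypothetical helper of my own (applyRecordWatch is not part of spUtil); it only assumes the payload shape described above (d.action, d.record.category.display_value):

```javascript
// Hypothetical helper: given a Record Watch payload (d) and the chart data,
// increment the bar whose category matches the inserted incident.
// Returns true when a bar was updated, false otherwise.
function applyRecordWatch(d, activeData) {
  if (d.action !== 'entry') return false;          // only react to inserts
  for (var i = 0; i < activeData.length; i++) {
    if (d.record.category.display_value === activeData[i].category) {
      activeData[i].value++;                       // grow the matching bar
      return true;
    }
  }
  return false;
}

// Example payload shaped like a Record Watch notification
var bars = [{ category: 'Hardware', value: 3 }, { category: 'Software', value: 5 }];
var payload = { action: 'entry', record: { category: { display_value: 'Software' } } };
applyRecordWatch(payload, bars);
console.log(bars[1].value); // 6
```

In the widget, you would call something like this from the recordWatch callback and redraw the chart whenever it returns true.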


This is just one example out of infinite possibilities of how you can use the Record Watch functionality. My specific thought behind this example is that you could create a dashboard that doesn't need to be refreshed because the widgets automatically adjust according to the Record Watch notifications.


Listening Bar Chart.png

Client Script


function(spUtil, $scope) {
  /* widget controller */
  var c = this;

  // Grab our category counts from our Server Script
  $scope.activeData = c.data.activeData;
  $scope.inactiveData = c.data.inactiveData;
  $scope.allData = c.data.allData;

  // Set the width of the chart along with the height of each bar
  var width = c.options.width,
      barHeight = c.options.bar_height,
      leftMargin = c.options.left_margin;

  $scope.updateBars = function(data) {
    // Set the dimensions of our chart
    var chart = d3.select(".chart")
        .attr("width", width)
        .attr("height", barHeight * data.length + 50);

    // Remove existing axis and tooltip
    chart.select(".x.axis").remove();
    chart.select(".counter").remove();

    // Add a placeholder text element for our tooltip
    var counter = chart.append("text").attr("class", "counter")
        .attr("y", 10)
        .attr("x", width - 20);

    // Set the domain and range of the chart
    var x = d3.scaleLinear()
        .range([leftMargin, width])
        .domain([0, d3.max(data, function(d) { return d.value * 1; }) + 10]);

    // Bind our new data to our g elements
    var bar = chart.selectAll("g").data(data, function(d) { return d.category; });

    // Remove existing bars that aren't in the new data
    bar.exit().remove();

    // Create new g elements for new categories in our new data
    var barEnter = bar.enter().append("g")
        .attr("transform", function(d, i) { return "translate(0," + i * barHeight + ")"; });

    // Enter new rect elements
    barEnter.append("rect")
        .on("mouseover", highlightBar)
        .on("mouseout", unhighlightBar)
        .attr("class", "chart-bar")
        .attr("height", barHeight - 1)
        .attr("x", leftMargin)
        .attr("width", function(d) { return x(d.value) - leftMargin; });

    // Enter new text labels
    barEnter.append("text")
        .attr("x", leftMargin - 5)
        .attr("y", barHeight / 2)
        .attr("width", leftMargin)
        .attr("dy", ".35em")
        .style("fill", "black")
        .style("text-anchor", "end")
        .text(function(d) { return d.category; });

    // Update existing bars
    bar.attr("transform", function(d, i) { return "translate(0," + i * barHeight + ")"; });

    bar.select("rect")
        .on("mouseover", highlightBar)
        .on("mouseout", unhighlightBar)
        .attr("width", function(d) { return x(d.value) - leftMargin; });

    // Create the x-axis and append it to the bottom of the chart
    var xAxis = d3.axisBottom().scale(x);
    chart.append("g")
        .attr("class", "x axis")
        .attr("transform", "translate(0," + (barHeight * data.length) + ")")
        .call(xAxis);

    // Define functions for our hover functionality
    function highlightBar(d, i) {
      d3.select(this).style("fill", "#b0c4de");
      counter.text(d.category + ' ' + d.value);
    }

    function unhighlightBar(d, i) {
      d3.select(this).style("fill", "#4682b4");
    }
  };

  // Draw the chart initially with the active incident data
  $scope.updateBars($scope.activeData);

  spUtil.recordWatch($scope, "incident", "active=true", function(name, d) {
    if (d.action == 'entry') {
      for (var i = 0; i < $scope.activeData.length; i++) {
        if (d.record.category.display_value == $scope.activeData[i].category) {
          $scope.activeData[i].value++;
          $scope.updateBars($scope.activeData);
        }
      }
    }
  });
}









Have you ever coded a complex function or customization, only to look at it later and realize you forgot to annotate it with comments, or skipped this step to save time? In this first installment of our best practices series, we look at the importance of accurately commenting your scripts and customizations.


Why is commenting on your scripts and customizations so important?

The script or customization details may be obvious to you today but may not be clear to you or others who must use or update the item in the future. Providing helpful comments as part of the development and upgrade process is well worth the effort and can save you and others a lot of time and trouble later. Most code is read many more times than it is written. Give your future self (and colleagues) insight into your thoughts! Here's what we recommend.

code sample 1.jpg


Annotating scripts and customizations best practices:

When writing scripts or customizing records, follow these best practices to avoid confusion.

  • Add clear and accurate comments that provide relevant information. Comments can include details such as what the script or record does, its inputs and outputs, the business justification, and configuration requirements.
  • For scripts, use the proper style and tags required to start and end comments in the specific scripting language. It’s best practice to comment every substantial section of code, describing what the intent is behind it so that others looking at it later will understand how it works.
  • For other records, add descriptions to help users and developers understand their content and functionality. Important records to describe include business rules, UI actions, and access control list (ACL) rules. Most ServiceNow records have at least one field for descriptions or comments, such as the Description field. This field is not always visible by default and may need to be added by configuring the form.
  • Where applicable, include cross-references to related records or business requirements to provide additional information and context.
  • When you update a script or record, also update the comments, as needed.

code sample 2.jpg
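As a minimal illustration of the practices above (an example of our own, not taken from ServiceNow's code samples), here is a small function whose comments capture purpose, inputs, outputs, and business justification:

```javascript
/**
 * Returns the number of business days (Mon-Fri) between two dates, inclusive.
 * Inputs:  start, end - JavaScript Date objects, with start <= end
 * Output:  integer count of weekdays in the range
 * Business justification: SLA duration reporting should exclude weekends.
 */
function businessDaysBetween(start, end) {
  var count = 0;
  var d = new Date(start.getTime()); // copy so we don't mutate the caller's Date
  while (d <= end) {
    var day = d.getDay();
    // getDay(): 0 = Sunday, 6 = Saturday - skip both
    if (day !== 0 && day !== 6) count++;
    d.setDate(d.getDate() + 1);
  }
  return count;
}

// Mon 2020-01-06 through Fri 2020-01-10 is a full working week
console.log(businessDaysBetween(new Date(2020, 0, 6), new Date(2020, 0, 10))); // 5
```

Note how the header comment explains the why (SLA reporting) as well as the what, so a future maintainer doesn't have to reverse-engineer the intent.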


Behind the scenes here at ServiceNow, the Knowledge Management team works closely with subject matter experts to disseminate critical information to our customers. We’ve found that certain topics come up frequently, in the form of best practices that can help you keep your ServiceNow instances running smoothly. This series aims to target those topics so that you and your organization can benefit from our collective expertise.


To access all of the blog posts in this series, search for "nowsupport best practices series."

In this post, I will outline how I was able to create a treemap in a Service Portal widget using D3.js. Treemaps convert hierarchical data into a conglomeration of nested rectangles that represent the data values.


This particular widget example will query the ServiceNow catalog categories and catalog items that are in my personal developer instance to generate the data object that will be visually expressed in my D3 treemap. As a business use case, you would probably want to query the Requested Item table to display which items and categories are the most frequently ordered. My developer instance doesn't have much Requested Item data, so I generated random numbers to better display the treemap functionality.


Below is a screenshot of my treemap widget in action:

D3 Treemap.png

Each color represents a single catalog category and each rectangle represents a catalog item. The size of the catalog item rectangle is scaled according to how many times that item has been ordered. The bigger the rectangle, the more that item has been ordered. I also added the ability to resize the rectangles to equal sizes to display category sizes based on how many items live under it. To change between these two views, I set up radio buttons to trigger the transition. Below is a screenshot of the second view:


D3 Treemap Categories.png
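The difference between the two radio-button views comes down to which accessor function is summed over the hierarchy. Here is a simplified, self-contained sketch of that idea (a miniature of what d3.hierarchy().sum(accessor) computes; sumTree and the sample data are our own, not D3 code):

```javascript
// Miniature version of what d3.hierarchy().sum(accessor) computes:
// a node's total is its own accessor value plus the totals of its children.
function sumTree(node, accessor) {
  var total = accessor(node) || 0;
  (node.children || []).forEach(function (child) {
    total += sumTree(child, accessor);
  });
  return total;
}

// "Ordered Count" mode: each leaf's order count drives its rectangle area
function sumBySize(d)  { return d.children ? 0 : d.size; }
// "Category Size" mode: every leaf counts as 1, so a category's area
// reflects how many items live under it
function sumByCount(d) { return d.children ? 0 : 1; }

var catalog = {
  name: 'Service Catalog',
  children: [
    { name: 'Hardware', children: [{ name: 'Laptop', size: 40 }, { name: 'Mouse', size: 10 }] },
    { name: 'Software', children: [{ name: 'IDE', size: 25 }] }
  ]
};

console.log(sumTree(catalog, sumBySize));  // 75
console.log(sumTree(catalog, sumByCount)); // 3
```

In the widget, D3 performs this summation itself; switching the accessor is what triggers the resize transition between the two views.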


Since there are previous posts giving a more in-depth introduction to using D3 and Service Portal together, I won't go into much detail on the code. Here is the code for my HTML, CSS, Client Script, and Server Script:




HTML


<div class="centered-chart">
  <h1>D3 Treemap</h1>
  <svg width="960" height="570"></svg>
  <form>
    <label><input type="radio" name="mode" value="sumBySize" checked> Ordered Count</label>
    <label><input type="radio" name="mode" value="sumByCount"> Category Size</label>
  </form>
</div>





CSS


form {
   padding-left: 150px;
   font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
}

.centered-chart {
   text-align: center;
   font: 10px sans-serif;
}

Client Script


function() {
  /* widget controller */
  var c = this;

  // Grab our category object from the data object
  var categories = c.data.categories;

  var svg = d3.select("svg"),
      width = +svg.attr("width"),
      height = +svg.attr("height");

  var fader = function(color) { return d3.interpolateRgb(color, "#fff")(0.2); },
      color = d3.scaleOrdinal(d3.schemeCategory20.map(fader)),
      format = d3.format(",d");

  // Define our D3 treemap object
  var treemap = d3.treemap()
      .tile(d3.treemapResquarify)
      .size([width, height])
      .round(true)
      .paddingInner(1);

  // Define function that draws treemap based on the data parameter
  function loadData(data) {
    var root = d3.hierarchy(data)
        .eachBefore(function(d) {
 = (d.parent ? + "." : "") +;
 = (d.parent ? + " > " : "") +;
        })
        .sum(sumBySize)
        .sort(function(a, b) { return b.height - a.height || b.value - a.value; });

    treemap(root);

    var cell = svg.selectAll("g")
        .data(root.leaves())
        .enter().append("g")
        .attr("transform", function(d) { return "translate(" + d.x0 + "," + d.y0 + ")"; });

    cell.append("rect")
        .attr("id", function(d) { return; })
        .attr("width", function(d) { return parseInt(d.x1 - d.x0); })
        .attr("height", function(d) { return parseInt(d.y1 - d.y0); })
        .attr("fill", function(d) { return color(; });

    cell.append("clipPath")
        .attr("id", function(d) { return "clip-" +; })
        .append("use")
        .attr("xlink:href", function(d) { return "#" +; });

    cell.append("text")
        .attr("clip-path", function(d) { return "url(#clip-" + + ")"; })
        .selectAll("tspan")
        .data(function(d) { return\s(?=[A-Z][^A-Z])/g); })
        .enter().append("tspan")
        .attr("x", 4)
        .attr("y", function(d, i) { return 13 + i * 10; })
        .text(function(d) { return d; });

    cell.append("title")
        .text(function(d) { return + "\n" + format(d.value) + " Requested"; });

    d3.selectAll("input")
        .data([sumBySize, sumByCount], function(d) { return d ? : this.value; })
        .on("change", changed);

    // Set timeout that will automatically change our treemap to demonstrate transitions
    var timeout = d3.timeout(function() {"input[value=\"sumByCount\"]")
          .property("checked", true)
          .dispatch("change");
    }, 2000);

    function changed(sum) {
      timeout.stop();

      treemap(root.sum(sum));

      cell.transition()
          .duration(750)
          .attr("transform", function(d) { return "translate(" + d.x0 + "," + d.y0 + ")"; })
          .select("rect")
          .attr("width", function(d) { return parseInt(d.x1 - d.x0); })
          .attr("height", function(d) { return parseInt(d.y1 - d.y0); });
    }
  }

  // Call our data load function to initially draw the treemap with our data object
  loadData(categories);

  function sumByCount(d) {
    return d.children ? 0 : 1;
  }

  function sumBySize(d) {
    return d.size;
  }
}


Server Script


(function() {
  /* populate the 'data' object */

  // Query catalog items in the Service Catalog
  var catGR = new GlideRecord('sc_cat_item');

  // I hardcoded the sys_id of the Service Catalog here. This could definitely
  // be dynamically set up as a widget option.
  catGR.addQuery('sc_catalogs', 'e0d08b13c3330100c8b837659bba8fb4');
  catGR.orderBy('category.title');

  // Declare our object that will contain the item data
  var cats = {
    "name": "Service Catalog",
    "children": []

  var previousCat = '';
  var tempArray = [];

  // Loop through items and populate our cats object according to the
  // category structure. I don't have great RITM data in my personal
  // dev instance, so I used random numbers to imitate the counts.
  while ( {
    var category = catGR.category.title + '';

    if (previousCat == category) {
      tempArray.push({"name": catGR.getValue('name'), "size": Math.floor((Math.random() * 100) + 1)});
    } else {
      if (previousCat != '')
        cats.children.push({"name": previousCat, "children": tempArray});
      tempArray = [{"name": catGR.getValue('name'), "size": Math.floor((Math.random() * 100) + 1)}];
    }
    previousCat = category;

  // Push the final category after the loop ends
  if (previousCat != '')
    cats.children.push({"name": previousCat, "children": tempArray});

  // Pass our category object to the data object to be used client side
  data.categories = cats;





This blog is not the definitive guide to application development on the ServiceNow platform. Rather, it is one developer's notes, thoughts, enlightenments, and miscues as he moves from .NET and development to service-based application development on our platform.


Before joining ServiceNow I wrote COBOL on the mainframe, Visual Basic apps for client-server, and cloud-based apps using In the course of this evolution I spent two years working on a software development application called Telon. I bring this up because there are similarities between that Rapid Application Development product and custom application development on ServiceNow, but for now I'll start from the beginning.


Like almost every developer I know, all I wanted was an "instance" of the platform ... then get out of my way while I play around and find the cool stuff. My problem was I expected the interface to be much more like than it actually is. (Note: my next post will show my Force/SN apples-to-apples matrix.) makes it pretty easy to know where things go. Your HTML is built via a meta-data front end called VisualForce. Your database queries are written against "Objects" using SOQL (Salesforce Object Query Language). The syntax is similar to native SQL; however, it is NOT SQL. Most developers speak SQL as a second language, but you soon find out that aggregates and other functions are not supported. Finally, server logic is written in Apex. Apex is very similar to Java: if you know Java, you know Apex.


The training sections in the developer community are the best way to get a feel for ServiceNow development. Scripted development is not the only option, however. Catalog Item Designer (no-code), Service Creator (low-code), and Service Portal (pro-code) are other flavors of application development on the platform. We will cover each in this series, but today's entry is focused on scoped service management applications via ServiceNow Studio.


I was making my way through the training when I followed a link for UI Pages that took me to another reference for Jelly and Glide. Both were new to me, so I followed the links. I ended up spending two days in Jelly (Jelly: Executable XML and Jelly Tags on the ServiceNow Wiki). Do NOT do this. When I came up for air I realized that this is the equivalent of's VisualForce for custom page development. I made a note of it and put it on hold until I had a reason to return.


Back in the training environment I walked through every section. Each module made sense, but there was a new "language" emerging. Things like Business Rules, Script Actions, and Transform Maps were new names for processes I was familiar with from other frameworks, so I started keeping a spreadsheet to correlate these new names with my old ones. Then I hit the UIs!! Catalog UI Policies, UI Actions, UI Context Menus, UI Macros, UI Pages, UI Policies, and UI Scripts. Now, I'm not the brightest bulb, but I have been around for a while, so naturally I figured that if a code block starts with UI it must mean "User Interface" and it would execute on the client. Well, as it turns out, this is not always the case (see below).

Components of a ServiceNow application - UI x


Code Types

  • UI Page [sys_ui_page] - Custom Jelly-based HTML pages available to the system. (Pro Code)
  • UI Macro [sys_ui_macro] - Custom Jelly-based user interface elements available for lists, forms, and pages. (Library - Pro Code)
  • UI Action [sys_ui_action] - Controls to create a custom button or link on lists or forms to perform a particular action. (Client and Server)
  • UI Policy [sys_ui_policy] - Controls to specify what fields are visible and editable on a form based on its current content.
  • UI Formatter [sys_ui_formatter] - Controls to specify what custom user interface to display on a particular form.
  • UI Script [sys_ui_script] * - UI scripts provide a way to package client-side JavaScript into a reusable form, similar to how script includes store server-side JavaScript. (Library - Pro Code)
  • Catalog UI Policy [catalog_ui_policy] - Fields to display when viewing Catalog Tasks, Catalog Items, and Requested Item Tasks.


I didn't know it then, but as I explored each logic block it hit me: developing apps on the ServiceNow platform via Studio is very similar to Telon! Telon helped mainframe programmers build CICS/DB2 applications quickly by standardizing a base code flow and opening up "code blocks" where custom code could be inserted. Standard edits like 'numeric', 'date', etc. were automatically generated. Only the real logic of the program needed to be added. A Studio-built custom application works the same way! Each program block in ServiceNow has a function. If the logic of your application calls for its type, then you know where to put the code! It's efficient, and it's the reason why applications on SN can be built so quickly. Another component that immediately stood out is the power of the Workflow engine. The ability to trade custom code for a workflow is a game changer. Once you build one or two apps, it's like a light bulb goes off.

 has a different philosophy. VisualForce, SOQL, and Apex are blank palettes for a developer to create an application. This is both good and bad. Good because it's easy to relate after using other IDEs, but bad because members of a team could write their portions of an application in completely different manners. As time goes on, another person or persons will be responsible for maintaining and updating the application. They will have to understand each programmer's style before making any changes. With ServiceNow, once you get the hang of the naming conventions, you know right where to look!


For pure business value, application time to market, and maintainability, the ServiceNow approach is a proven winner.



General Platform Block1.png



Advanced Platform Block1.png


** Dave Knight is the author of these Platform Block diagrams.  They helped me and I hope they will help you as well.

Last time, we ran a demo test called "Basic UI Test" and created a Service Catalog Task record. We also saw that, by design, the record was permanently deleted after the test completed, with the Automated Test Framework (ATF) "automatically taking care of rolling back changes after testing". Before going further with more tests, I'd like to take a deeper look at the various building blocks of ATF, both on the surface and behind the scenes; this will give us the lay of the land and help us know what to look for later as we create and run more tests.




As noted in Part 1, the Navigator provides the following modules for the atf_test_admin role under the Automated Test Framework application menu:




Below is a quick rundown of what you get for each module:




Tests


This module shows a list of Tests. Tests include both UI and server tests.




Test Suites


This module shows a list of Test Suites. Test Suites are made up of one or more Tests and/or other Test Suites.


Test Results


This module shows a list of Test Results. There may be multiple Test Results for a Test.


Suite Results


This module shows a list of Test Suite Results. There may be multiple Test Suite Results for a Test Suite.


Run > Client Test Runner


This module opens the Client Test Runner window. This window, labeled UI Test Runner, may also be opened from the Run Test dialog box. Without this window open, UI tests won't run.


Run > Test Run Queue


This module shows a list of Tests that are Waiting or Running. When running a Test Suite, this lists all Tests that are part of the Test Suite and its child Test Suites, if any. This list doesn't show the execution order, which limits its usefulness. NOTE: Tests can't be scheduled to run at a later time.


Occasionally, when I click on this module while Tests are running, I get the message "Security constraints prevent access to requested page" as shown in the screenshot below:


Run > Suite Run Queue


This module shows a list of Test Suites that have Started or are Running. As in the Test Run Queue, this list doesn't show the execution order. NOTE: Test Suites can't be scheduled to run at a later time.


Administration > Properties


This module shows property settings. As noted in Part 1, these settings are unchecked by default and the first checkbox must be checked to be able to run tests.


Administration > Step Configurations


This module shows a list of Test Step Configurations. These are used to build Test Steps. ATF comes with several predefined Test Step Configs, and new ones can also be created. Test Steps can only be created from an existing Test Step Config.


Administration > Step Environments


This module shows a list of Test Step Environments. These are used in Test Step Configurations, and there are two predefined environments: UI and Server. NOTE: this doesn't allow selecting a server instance, for example QA, DEV, etc.


Administration > Test Templates


This module shows a list of Test Templates. These are used to build Test Steps in a Test. One Test Template comes with the demo data. A Test Template contains a list of Test Step Configurations in a Glide List.


Administration > Step Configuration Categories


This module shows a list of Test Step Configuration Categories. These are used in the Add Test Step dialog box to filter Test Step Configurations when building Test Steps. There are two predefined categories: Form and Server.




Using Test Suites, multiple Tests can be bundled. A Test Suite may contain Tests and/or other Test Suites. A Test may belong to more than one Test Suite, as shown in the below hierarchy, whereas a Test Suite may belong to only one parent Test Suite:



When Test Suite A is executed, here's what happens (within the same level in the hierarchy diagram, assume Tests on the left have lower Execution Order, so executed first):


  1. Test 1 runs and finishes
  2. Test 2 runs and finishes
  3. Test Suite B starts
  4. Test 3 runs and finishes
  5. Test 2 doesn't run again, since it already ran in Test Suite A
  6. Test Suite B finishes
  7. Test Suite A finishes


The test sequence is shown in the Run Test Suite dialog box while the Test Suite is running. When a Test Suite has both Tests and Test Suites, like Test Suite A above, the Tests are always executed first before Test Suites.
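To make the ordering rules concrete, here's a toy sketch in JavaScript — my own illustration, not ATF's actual code — of "tests first, then child suites, and never re-run a test within the same run":

```javascript
// Hypothetical sketch of the suite execution rules described above:
// tests run first (by Execution Order), then child suites, and a test
// that already ran anywhere in this run is skipped.
function runSuite(suite, ran) {
  ran = ran || [];                      // tests already executed in this run
  var order = [];                       // names in execution order
  suite.tests.forEach(function (t) {    // tests run before child suites
    if (ran.indexOf(t) < 0) { ran.push(t); order.push(t); }
  });
  (suite.suites || []).forEach(function (child) {
    order = order.concat(runSuite(child, ran));  // recurse into child suites
  });
  return order;
}

var suiteB = { tests: ['Test 3', 'Test 2'], suites: [] };
var suiteA = { tests: ['Test 1', 'Test 2'], suites: [suiteB] };
console.log(runSuite(suiteA));  // → ['Test 1', 'Test 2', 'Test 3']
```

Note that Test 2 appears only once even though both suites list it, matching step 5 above.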


While ATF allows a Test to be used in multiple Test Suites, care must be exercised when there are dependencies. In the above example, Test 2 was used twice, first under Test Suite A and a second time under Test Suite B. We noticed that Test 2 didn't run again under Test Suite B because it already ran under Test Suite A. If Test 2 had a dependency on Test 3 in Test Suite B (e.g., using an output value from Test 3), it may not run correctly.




I used GQL Pad to inspect the database and put together the below ERD (Entity Relationship Diagram) showing the various tables used by ATF and their relationships. This, in conjunction with the module descriptions and test hierarchy above, provides an insight into the inner connections in ATF. For clarity, instead of showing all fields, only the reference fields are shown here to highlight the various relationships. We'll reference this later when we discuss how test records are connected and investigate any issues.



I noticed that not all tables had data after loading the demo data; we'll see how and if they're being used as we run more tests later. The Test Suite Test [sys_atf_test_suite_test] table is a many-to-many (m2m) join table that connects between Test Suites and Tests, as shown earlier in Test Hierarchy. Test Template [sys_atf_test_template] has a Glide List for Test Step Configs and is not explicitly related to any tables.


Next time, we'll resume running more Tests as well as creating new ones.


Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

ServiceNow Advocate

Winner of November 2016 Members' Choice Award

Last time, we ran a simple read-only demo test named "Verify That Form Defaults Are As Expected" that had only three test steps. This time, we'll continue running more demo tests and see how the Automated Test Framework (ATF) works.




After logging in with the atf_test_designer role (not impersonating, since the test runs impersonating another user) using Internet Explorer 11, I chose to run the simplest write test among the demo tests, called "Basic UI Test". This test has only four steps: opening a new Catalog Task form, setting some fields, and submitting. Below is a screenshot of the test:





I clicked Run Test and switched over to the Client Test Runner window (labeled as UI Test Runner in the window header) to watch the test in action as shown below. The three field values set are highlighted in red rectangles:



When the test was completed, the status dialog box was updated indicating successful completion as below:





When Go to Result was clicked, the results were shown as below:



In the Step Results tab, all four steps showed Success with summary output for each step. In the Test Log tab, it showed more detailed output with 92 entries.




The results included three screenshots attached:


  1. When the form first opened
  2. After the field values were set
  3. After the form was submitted

This time, I'm not only seeing the form header obscuring the top portion of the form and some missing elements (buttons and icons) from the header, but also the third screenshot taken after the submission is malformed, with the misaligned field labels; you can compare this with an earlier screenshot from the Client Test Runner window above. As I noted in Part 1, I believe this is a side effect of screenshots not being taken directly from the screen. Screenshots provide objective evidence for test results, thus the fidelity is an important prerequisite for regulated testing. I hope the ServiceNow team can address this issue.




Next, I impersonated "ATF User" and looked for the Service Catalog Task [sc_task] record SCTASK0010004 that had just been created and assigned to "ATF User". I navigated to Service Desk > My Work, but couldn't find it. I removed the filter to see all tasks for everyone, but still couldn't find it. I tried the same by navigating to Service Catalog > Open Records > Tasks, but no luck, even after removing the filter conditions. Then I logged in with the admin role and looked inside Sys Audits [sys_audit] and Audit Deleted Records [sys_audit_delete] but no trace. I then ran this Background Script:


var gr = new GlideRecord('sc_task');
gr.addQuery('number', 'SCTASK0010004');
gr.query();
while ( gs.print(gr.number);

var gr2 = new GlideRecord('task');
if (gr2.get('number', 'SCTASK0010004')) gs.print(gr2.number);


but still no luck. The only trace of it was that the Number Counter for SCTASK was showing the next number as 10,005.


ATF provides data cleanup via Automated Test Framework > Administration > Table Cleanup. But it only applied to the Test Results [sys_atf_test_result] table and it was set to run after 2,592,000 seconds (30 days) since sys_created_on, so this would have nothing to do with the missing Service Catalog Task [sc_task] record.


I also inspected the test steps, especially the final step of "Submit a Form" and its Step Configuration under Administration to see if anything would delete the test record, but didn't see anything obvious. Looking through other settings under Administration didn't yield a clue either.


The ATF wiki, Automated Test Framework, does mention:

The test framework automatically tracks and deletes any data created by running tests, automatically taking care of rolling back changes after testing.

So I believe this feature deleted the record without a trace, and I confirmed it. I do think it's a nice feature, but I can foresee cases where you'd want to inspect your test data, especially if tests fail, and possibly take additional screenshots; it would be nice to give the user an option to delete test data later. I also like creating a large number of tickets for load testing, for which an automated tool is ideal (web services would be faster at creating a large number of records, but they're not the same as UI tests). I didn't check, but my thinking is this auto-deletion feature would also take care of cascading deletes.


Further review revealed some log entries for rollback in Rollback Logs [sys_rollback_log] as shown below:



Next time, I'd like to take a look under the hood to see how ATF works.




2016-12-07 added rollback log and screenshot


Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

ServiceNow Advocate

Winner of November 2016 Members' Choice Award

I've been running a blog series on Data Sampling Techniques where the latest topic was on Statistical Sampling Using Scripted Filter. While the series is targeted towards those interested in data analysis and data quality, I felt a variation of the technique might be of general interest to a wider audience and use cases. So here's a technique on random sampling, that is, randomly selecting GlideRecords using a Script Include and a Scripted Filter. Some use cases might include:


  1. Randomly picking top 3 prize winners for those who responded to Service Desk Satisfaction Survey in the last month, the grand prize being an iPad! This would be a good way to increase response rates to any survey.
  2. As you're launching the new Service Portal, you want to promote the portal and self service by giving away prizes; the more the customers use the portal, the better chances they have at winning the prizes.
  3. You've noticed your Knowledge Base is being underutilized, so you'd like to promote the use by giving away prizes.
  4. With the year-end holidays approaching, you want to give out prizes to your customers as part of a marketing campaign.
  5. An auditor is asking for a random sample of 10 change records for the Accounts Payable system from the last 12 months.
  6. You as Process Manager would like to review 30 incident records from the last month as part of Continual Service Improvement program.


There may be numerous other use cases not listed here and I'd like to hear about yours. For more analytical data sampling techniques, please see my blog series.


Let's add some fun and excitement!




Here's a quick overview of what we'll do; more technical details can be found in my other blog. Here, we'll focus more on various use cases.


  1. Create a Script Include with a function we'll call randomSample().
  2. Call randomSample() from Condition Builders using a Scripted Filter.
  3. Retrieve and review the records.


SCRIPT INCLUDE randomSample()


Let's first create a Script Include with the randomSample() function; this is similar to the statSample() function from my other blog, without the statistical part. Here's how the function works:


  1. Takes the table name, the encoded query string needed for querying, and the sample size; if a field other than sys_id is to be returned, specify it.
  2. Query the table and get the row count using .getRowCount().
  3. Pick a random row from the record set and save the specified field value or sys_id; repeat until the sample size number of unique values are collected.
  4. Return the saved field values in an array.
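Stripped of the GlideRecord plumbing, step 3 is just sampling offsets without replacement. Here's the core idea as a standalone sketch (illustrative only; the full Script Include below also reads each record through a query window):

```javascript
// Core of step 3: keep throwing the dice until we've collected
// sampleSize unique offsets, guarding against an infinite loop when
// the population is smaller than the requested sample.
function sampleOffsets(population, sampleSize) {
  var offsets = [];
  while (offsets.length < sampleSize) {
    if (offsets.length >= population) break;              // entire population tried
    var offset = Math.floor(Math.random() * population);  // 0 <= offset < population
    if (offsets.indexOf(offset) >= 0) continue;           // dupe: rethrow the dice
    offsets.push(offset);
  }
  return offsets;
}
```

For example, `sampleOffsets(54939, 30)` returns 30 unique row offsets into a 54,939-record population; `sampleOffsets(3, 10)` safely returns only 3.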


To create a new Script Include,


  1. Log in with admin role.
  2. Navigate to System Definition > Script Includes.
  3. Click on New button to create a new Script Include.
  4. Fill the form as in the screenshot below:
  5. This can be either Global or Scoped; if Scoped, make sure to jot down the API Name to be used later in Scripted Filter.
  6. Since we want to use this from other applications, set Accessible from to All application scopes.
  7. For this to be used as a Scripted Filter, Client callable must be checked.
  8. In the Script field, paste the below script (also attached below as a file).
  9. Finally, click on Submit.


NOTE: I noticed an unexplained behavior that the function is called twice in a row when it's used in a Scripted Filter; the first call generates the list view and the results from the second call are displayed in the Condition Builder's breadcrumb, resulting in different sets of data between the breadcrumb and the list view. This would go unnoticed in most cases because the repeated calls bring back the same results. However, due to the random nature of randomSample(), the return values are different each time the function is called. I added some special handling to the script to ensure the results are identical for all calls by sampling only during the first call. I also ensured the function can be called by other scripts as a Script Include without an issue in case it's used outside of Scripted Filter.
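The special handling boils down to caching the first call's result so the duplicate call returns the identical array. Reduced to a standalone illustration (not the exact Script Include code):

```javascript
// Illustrative memoization pattern: the first call samples for real and
// caches the result; the second (duplicate) call returns the same cached
// array, so the breadcrumb and the list view agree.
var cachedSample = [];
function sampleOnce(compute) {
  if (cachedSample.length) return cachedSample;  // subsequent calls: reuse
  cachedSample = compute();                      // first call: sample for real
  return cachedSample;
}

var calls = 0;
var draw = function () { calls++; return [Math.random(), Math.random()]; };
var first = sampleOnce(draw);
var second = sampleOnce(draw);  // same array object, no second draw
```

The trade-off is that within one filter evaluation the result is frozen; a fresh sample requires a fresh script context (i.e., refreshing the list).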


/**
 * Performs random sampling against a filtered list of GlideRecords.
 * Takes a table name and encoded query string, then returns an array of the specified field (or sys_id) of sample records.
 * [Sys ID] [is] [javascript:randomSample('incident', 'active=1', 30)]
 * returns 30 sample records from the population size of 54,939
 * NOTE: Scripted Filter runs in rhino.sandbox context so not all classes/objects are available for scripting.
 * NOTE: The function is run twice in Scripted Filter somehow, so use randomSampleRecords to run only once.
 * MIT License
 * Copyright (c) 2016 John.Chun
 * @param {string} tableName - name of table for GlideRecord
 * @param {string} encodedQuery - encoded query string
 * @param {int} sampleSize - number of records to have in sample
 * @param {string} fieldName - name of field whose unique values are to be returned
 * @return {string[]} array of sys_id of random sample records
 */

var randomSampleRecords = [];  // this is in rhino.sandbox context in Scripted Filter; otherwise in global

function randomSample(tableName, encodedQuery, sampleSize, fieldName) {

  if (randomSampleRecords.length) return randomSampleRecords;  // in Scripted Filter, force to run only once
  try {
    //var gr = new GlideRecordSecure(tableName);  // enforce ACL; GlideRecordSecure undefined in Scripted Filter in Helsinki
    var isScriptedFilter = !this.GlideRecordSecure;  // use the fact that GlideRecordSecure is undefined in Scripted Filter
    var gr = new GlideRecord(tableName);
    if (!gr.isValid()) throw 'Invalid table name "' + tableName + '".';
    if (!gr.canRead()) throw 'No permission to read from "' + tableName + '".';  // test ACL for table
    fieldName = fieldName || 'sys_id';  // default to sys_id
    if (gr.getElement(fieldName) == null) throw 'Field "' + fieldName + '" not found.';
    if (!(sampleSize > 0)) throw 'Sample size must be a positive integer.';
    // get population
    if (encodedQuery) gr.addQuery(encodedQuery);
    gr.query();  // to getRowCount()
    var population = gr.getRowCount();
    if (!population || population < sampleSize) throw 'Total number of rows ' + population + ' is less than sample size ' + sampleSize + '.';

    // throw dice and get a random sample
    var offsets = [], records = [];
    while (records.length < sampleSize) {
      if (offsets.length >= population) break;  // tried entire population
      var offset = Math.floor(Math.random() * population);  // 0 <= offset < population
      if (indexOf(offsets, offset) >= 0) continue;  // dupe offset, so rethrow dice
      offsets.push(offset);  // remember this offset as tried
      gr.chooseWindow(offset, offset + 1);  // works in global & scoped
      if ( {
        var value = gr.getElement(fieldName).toString();
        if (indexOf(records, value) < 0) records.push(value);  // keep unique values only
      }
    }

    if (isScriptedFilter) randomSampleRecords = records;  // in Scripted Filter, save randomSampleRecords
    return records;
  } catch (e) {
    return 'ERROR: ' + e;  // return error message
  }

  // emulates Array.prototype.indexOf() in older JavaScript
  function indexOf(arr, val) { for (var i = 0; i < arr.length; i++) if (arr[i] == val) return i; return -1; }
}




Let's pick three lucky winners among those who responded to Service Desk Satisfaction Survey last month. If someone responded to more than one survey, it increases the odds of winning (if not, they may not be motivated to respond to subsequent surveys). Sent-out surveys are stored in the Survey Instances [asmt_assessment_instance] table (the Survey Responses [asmt_assessment_instance_question] table contains a row for each question answered; unless you want to increase the odds based on the number of questions answered, the Instance table is a better choice). We'll look at only Service Desk Satisfaction Survey and whom they were sent out to, in the user field. We'll also filter the taken_on field to last month only. Since we're selecting people, we'll do all this in a list view for Users. Below is the summary of the parameter values:


List View:       Organization > Users
tableName:       asmt_assessment_instance
encodedQuery:     Desk Satisfaction Survey^taken_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()
Scripted Filter: javascript:randomSample('asmt_assessment_instance', ' Desk Satisfaction Survey^taken_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, '')


By default, the return value is an array of sys_id. However, you can pick any other field. For example, we can pick, dot-walking to the user record's name field. We need to set the Condition Builder to


[Name] [is] [javascript:randomSample('asmt_assessment_instance', ' Desk Satisfaction Survey^taken_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, '')]


Make sure this is the only filter condition. When you run this, the result is



The breadcrumb shows the three winners, in the order they were picked at random. The list view under it shows the three user records, in the sort order you defined, which, in this case, is by Name in descending order. If you have first, second, and third prizes, you'll want to use the breadcrumb. Depending on your rules, you may want to add a few backup winners so if the winners don't claim their prizes within a certain time, the prizes are given to backup winners. You may also want to use backup winners in case Service Desk staff members are picked but disqualified.


Every time you refresh this, you'll get different winners; you may want to make sure to have that one official drawing (refresh) for the prizes.


If your organization has people with the same name, you may want to use User ID instead since it should be unique:


[User ID] [is] [javascript:randomSample('asmt_assessment_instance', ' Desk Satisfaction Survey^taken_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, 'user.user_name')]



If you really insist on using sys_id, here's what it looks like (user is a reference field that returns sys_id from the sys_user table):


[Sys ID] [is] [javascript:randomSample('asmt_assessment_instance', ' Desk Satisfaction Survey^taken_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, 'user')]



Notice the breadcrumb is not as useful as before unless you can tell who's who from the sys_ids (you can hover over the User ID column and look at the link displayed at the bottom of the browser, if needed).


If you don't have permission to the Users [sys_user] table, you can run the Scripted Filter from other list views, such as Incident. Simply navigate to Incident > Open and set the Condition Builder as below:


[Number] [is] [javascript:randomSample('asmt_assessment_instance', ' Desk Satisfaction Survey^taken_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, '')]


You can use Number or any other string field. This will show



This obviously doesn't show the names in the list view since it's not a Users list view but the names appear in the breadcrumb, as shown in the screenshot above.




You've noticed your Knowledge Base is being underutilized, so you'd like to promote the use by giving away prizes. You'll pick 3 winners from those who viewed KB articles last month; the more articles they viewed, the higher odds of winning. The data on who viewed which knowledge base article is stored in the Knowledge Use [kb_use] table. We'll use sys_updated_on and user fields to run similar conditions as before:


List View:       Organization > Users
encodedQuery:    sys_updated_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()
Scripted Filter: javascript:randomSample('kb_use', 'sys_updated_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, '')


We need to set the Condition Builder to


[Name] [is] [javascript:randomSample('kb_use', 'sys_updated_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, '')]


Make sure to run this on the first day of the month; if a user viewed the same article last month as well as this month, the sys_updated_on field will only show this month's date, removing the record from the pool.




You as Process Manager would like to review 30 incident records from the last month as part of Continual Service Improvement program. Let's look at only the closed incident tickets and randomly select 30 records:


List View:       Incident > Open (or any Incident list view)
encodedQuery:    closed_atONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()
Scripted Filter: javascript:randomSample('incident', 'closed_atONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 30)


We need to set the Condition Builder to


[Sys ID] [is] [javascript:randomSample('incident', 'closed_atONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 30)]


This returns an array of sys_ids; another option is to return an array of Numbers and match on Number:


[Number] [is] [javascript:randomSample('incident', 'closed_atONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 30, 'number')]


We've looked at a few practical use cases. This should give you ideas on how to use random sampling so you can use it for other cases. Enjoy and have fun!


Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

ServiceNow Advocate

If you are planning to import thousands of records into your instance and you have a complex coalesce key to update data, this post is for you. Easy import, data load, and import sets are wonderfully designed to import data into your instance.


ServiceNow uses two steps to import data:

  1. Loading
  2. Transforming


Data import cleanly separates the two phases: loading happens in Data Sources while transforming happens in Transform Maps. Each execution is tracked by an Import Set that records the history of the data imported. Transform maps can have coalesce fields (keys to avoid duplicates) that allow them to update existing records.
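The load-then-transform flow with a coalesce decision can be sketched as a toy model in plain JavaScript (the `transformAll` helper, staging rows, and field names here are hypothetical illustrations, not ServiceNow internals):

```javascript
// Simplified model of the transform phase: for each staged row, look up
// the target by the coalesce key; a match means update, no match means insert.
function transformAll(stagingRows, target, coalesceField) {
  stagingRows.forEach(function (row) {
    var match = target.find(function (rec) {       // coalesce lookup
      return rec[coalesceField] === row[coalesceField];
    });
    if (match) Object.assign(match, row);          // coalesce hit: update
    else target.push(Object.assign({}, row));      // no match: insert
  });
  return target;
}

var target = [{ number: 'STK001', qty: 5 }];
var staging = [{ number: 'STK001', qty: 9 }, { number: 'STK002', qty: 2 }];
transformAll(staging, target, 'number');
// STK001 is updated to qty 9; STK002 is inserted as a new record
```

The rest of this post is about what happens when that lookup is more complicated than a single unique field.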


I will focus on showing an example of a transformation map with a "complex key" (more than one field as coalesce) to update the target records, which also avoids duplicates from reference fields (see below) and makes just one query (instead of the multiple internal queries triggered when selecting multiple coalesce fields).




On a transformation map, one or several 'coalesce' fields define when a record is updated. While transformation maps are flexible and configurable, when using complex "keys" some transformations are better served by a "field map script" as coalesce (aka "conditional coalesce").


A few notes on coalesce fields:

  • Coalesce field searches benefit from indexes on the target data field they are mapped to.
  • sys_id is indexed on all tables, making searches faster if they are used for mappings.
  • Using a reference field as coalesce can cause duplicates if the referenced data has duplicates (see below).
  • Setting multiple fields as coalesce could trigger a separate query against the target data for each coalesce field, increasing import times.


For my example, I will use the alm_stock_rule table. To the untrained eye, it appears to contain only strings and integers.


A closer look at the alm_stock_rule table shows that the fields Model, Parent stockroom, and Stockroom are references to data in other tables (reference fields). A reference field stores the sys_id of the referenced record in the database, but the sys_id is not shown; the field displays the referenced record's display value instead.

(empty) or blank does not mean the reference field is empty. It could be that the reference field's display value is (empty) or blank. Always validate this by checking whether the record itself contains a sys_id value, e.g. by reviewing the XML data of the record.

stock rule table.png


Coalesce using one-to-one field mapping on the transformation map

For reference fields, you can import data using the sys_id of the target 'referenced' data. However, most of the time you would like to import data into "alm_stock_rule" using the display value instead to match the existing records.

import data.jpg


For this example, we would use "Stockroom", "Model", "Parent stockroom", and "restocking option" as the key for updates.


On this transformation map, we would define the "Stockroom", "Model", "Parent stockroom", and "restocking option" fields with coalesce set to "true".

transform map.png

duplicated reference.jpg

Here is a list of the pros and cons of using one-to-one field mapping on the transformation map for the coalesce fields:





Pros:

  • It is very configurable per field.
  • It is easy to understand.
  • No scripting is required.
  • You can map more than the display value of the reference field by using "referenced value field name".
  • It is easier to see which fields on the target table require indexes (if the data is unique enough).

Cons:

  • You have no control over the final searches performed to match the coalesce field values to the target data. This means more than one search could be triggered; in the worst case, more than one per coalesce field.
  • If the source data for some of the coalesce fields is empty, it can trigger a query for (field=NULL) plus the remaining coalesce fields, which is unlikely to use the indexes.
  • It depends on the field mapping options available.
  • If some of the coalesce fields hold very limited values (e.g. a choice field) and the target table is very large, the query could be slow. For example, if you add impact as one of your coalesce fields and your target table is incident, the query could be "select ... from incident where impact = 1", which could be expensive on a large incident table.
  • It could cause duplicates if reference fields are used as coalesce (see below).


Duplicate records could appear if reference fields are used as coalesce.


Notes on coalesce on reference fields

In this example, the model we are importing is "APC 42U 3100 SP2 NetShelter." I have created two records with this name in the referenced table (not the target table itself, but the 'Product Model' table referenced by 'model'). When this happens, the coalesce fields match two records, and the import creates a new, unwanted record instead of updating one. This is a common problem, as not all tables hold unique values.

import data coalesce.jpg

On the import set, those records will show as State = Inserted when they should show Ignored or Updated.

duplicate model.jpg

Using a reference field as coalesce can cause duplicates if the referenced data has duplicates

reference coalesce.jpg


Coalesce on field map scripts

An alternative coalesce would be a "Script" mapping to the target "sys_id".

For this example, I will explain a technique of creating a single coalesce field via a field map script mapped to the sys_id of the target. Since sys_id is already indexed, the cost of the final search with the script result as coalesce is minimal. You would do this to gain more control over the final search generated to update your data.


When using a field map script, the previous example transformation map would look as follow:

field map script.jpg

Then set the field map script to match the sys_id on the target and make it the ONLY field with coalesce = true.

coalesce true.png

On the field map script, add the script to find the correct target record:

target record.png


Here is the script I used to find the target record:


answer = function(a) {
    // NOTE: the original listing was truncated here; the remaining
    // [source_field, target_field] pairs were listed the same way.
    var list_to_compare = [["u_stockroom", "stockroom.display_name"] /* , ... */];
    return findmatch(list_to_compare, source, map.target_table, false, true);
};

/* Function findmatch is used on transformation maps to find a match with multiple coalesce fields

vlist: list of fields to compare, Array = [["source_field","target_field"],...]  Target field allows dot walk.
vsource: source record
vtarget: target table name
nomatchcreate: true will create a record if there is no match
debugon: true will log the information about the matching results
Returns sys_id of the target record, or null on error or if nomatchcreate = false and no match is found.
"Coalesce empty fields" needs to be OFF, so on a null answer (e.g. on error) the insert is cancelled. */
function findmatch(vlist, vsource, vtarget, nomatchcreate, debugon) {
try {
    vtarget = new GlideRecord(vtarget + "");
    // Check each source coalesce field has a value before adding it to the query
    for (var h = vlist.length, c = 0; c < h; c++)
        vsource[vlist[c][0]].hasValue() &&
        vsource.isValidField(vlist[c][0]) &&
        vtarget.addQuery(vlist[c][1], "=", vsource[vlist[c][0]].getDisplayValue());

    vtarget.query();
    var d;
    if ( {
        // if we find a match, we return the sys_id
        d = vtarget.sys_id;
        if (debugon) log.info("source: " + vsource.sys_id + " - record match: " + d);
        vsource.sys_import_state_comment = "record match: " + d;
    } else if (nomatchcreate) {
        // if no match is found, generate a new sys_id when an insert is wanted
        d = gs.generateGUID();
    } else {
        d = null;
        if (debugon) log.info("source: " + vsource.sys_id + " - record match: None");
        vsource.sys_import_state_comment = "record match: None";
    }
    return d;
} catch (f) {
    log.error("script error: " + f);
    vsource.sys_import_state_comment = "ERROR: " + f;
    return null;
}
}

The script gives you flexibility to set the search that better meet your business requirements.

Ensure "coalesce empty fields" is unchecked (OFF): if an error happens in the query or the field script, the script returns null, and with this setting off the record is ignored rather than coalesced against a null value.


You can see this example centers the updates on only one query, built from the values available.

After opening the data source, clicking "Load All Records", and then transforming them, the import set data will show as follows:

load all records.png

On the import set, the Import Set Rows tab shows that the records matched the correct values this time.

import set records.jpg

The import will insert the new record and update the existing one; even when the referenced model has duplicated data, the field map script will match the right record.

duplicated data.png

Using the field map script, we know it will execute only ONE search on the target table, and it allows you to define any query that uniquely identifies your target record, giving you flexibility and improving update performance.


I've tested this on Helsinki, using Google Chrome as the browser.



For more information on transforming your data see:

Video demos:


Importing and Exporting data:


Transforming your data:

Validating the order of execution for transform map scripts

The new Automated Test Framework (ATF) in the Istanbul release is a long-awaited feature that I'm sure many people are excited about. Test automation has been a focus area of mine for some time, especially in highly regulated industries under SOX (financial) and GxP (life sciences) regulations where testing is a critical part of compliance. So I decided to take it for a test drive when it first became available (using glide-istanbul-09-23-2016__patch0-10-05-2016); my understanding may be lacking at this point, so any feedback/clarification/correction would be greatly appreciated. Congratulations to the team that delivered this, whom I had the pleasure of meeting during K16!




The Test Management application has been available since the Fuji release. It has test cases and suites for tracking manual test activities for ServiceNow or any other applications. First thing I noticed was ATF and Test Management are two separate applications. This means those who've been using Test Management for ServiceNow testing will need to keep track of testing in two separate places. The same applies to those who have a mix of manual and automated tests, which is a common scenario. I wish that ATF was an extension of Test Management, so all tests can be managed in one place; manual tests then can be progressively converted to automated tests without losing continuity and a single dashboard can provide progress for all tests.




ATF comes with two predefined roles: atf_test_admin with all permissions and atf_test_designer who can create tests in addition to other things. I think two additional roles might be useful, similar to those predefined in Test Management: atf_test_manager and atf_test_tester. The atf_test_manager role would manage creation and execution of tests whereas the atf_test_tester role can only run tests.


I created two users with each of the predefined roles and here's what they see in the Navigator:






ATF was already activated in the Istanbul instance, but it didn't have any demo data loaded, except one template. So I logged in with the admin role and navigated to System Definition > Plugins to open Automated Test Framework, then clicked Related Links > Load Demo Data Only as shown in the screenshot below:


The demo data adds 14 Tests and 14 Suites.




Before executing tests, the feature must be explicitly enabled. Log in with the atf_test_admin role and navigate to Automated Test Framework > Administration > Properties. Here you'll find two checkboxes as shown in the screenshot below. Make sure to check at least the first checkbox to be able to execute tests.
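If you prefer to script the same switches, they appear to map to system properties that can be set from a background script. The property names below are my assumption of what those two checkboxes correspond to, so verify them in your own instance before relying on this:

```javascript
// Background-script sketch (assumed property names; verify on your instance):
// the two ATF Administration checkboxes appear to map to these properties.
gs.setProperty('sn_atf.runner.enabled', 'true');    // enable test execution
gs.setProperty('sn_atf.schedule.enabled', 'true');  // enable scheduled execution
```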





Using Internet Explorer 11, I logged in as a user with the atf_test_designer role and opened one of the simplest demo tests. This read-only test, named "Verify That Form Defaults Are As Expected", has only three steps that check for default values on a Catalog Task form while impersonating "ATF.User"; this user has neither the atf_test_admin nor the atf_test_designer role.
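The essence of what this kind of step asserts can be sketched in plain JavaScript: compare the expected default values against the values actually present on the form. This is a hypothetical illustration (the real step runs inside ATF's Client Test Runner against the rendered form), with made-up field names:

```javascript
// Hypothetical sketch of a "check defaults" assertion: return the list of
// fields whose actual value differs from the expected default.
function checkDefaults(actualValues, expectedDefaults) {
  var mismatches = [];
  Object.keys(expectedDefaults).forEach(function (field) {
    if (actualValues[field] !== expectedDefaults[field]) {
      mismatches.push(field);
    }
  });
  return mismatches; // an empty array means the step passes
}
```

For example, a form that opens with priority "4" when the test expects "3" would report `priority` as a mismatch.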



When Run Test was clicked, a dialog box showed up with status as in the screenshot below (I noticed the time displayed here is in PST, the system time zone, although I set the user's time zone to EST):



When "Click here" in "Click here to open a Client Test Runner" was clicked (see the screenshot above), a new Client Test Runner window opened (if this window isn't opened, the test won't proceed). At the top of the window, it showed progress with a blue progress bar for each step. In the Execution Frame tab, the form was displayed inside an iframe (more on this later), as shown in the screenshot below:



When the execution was completed, the status dialog box was updated with the results, as shown in the screenshot below:





When the Close button was clicked, the Test form showed the test result in the Test Results tab as shown below:



When the test result link was clicked, the Test Result page opened, showing the details of the test as below:



The test result page showed step-by-step results in the Step Results tab and included two screenshots as attachments. Below is one of those screenshots; notice it doesn't quite match what we saw in the Client Test Runner window earlier: the top is obscured by the tall gray header section, and buttons and icons are missing from the header, indicating the screenshots were not captured directly from the screen (more on this later):



The Test Log tab contains quite a bit of information; for a three-step test, it produced 43 test log entries as shown below:



When a log entry was clicked, the Test Result Item form showed more details, as below:



The simple read-only test passed with flying colors, with no major issues.


Next time, let's take a look at some more complex tests.




2016-12-07 added ENABLE TEST EXECUTION section.


Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

ServiceNow Advocate

We finally got Istanbul onto our personal developer instance, and it was time to dive in.


This post will be a quick overview of the 5 big things that I think will rock your world in the Istanbul release. My main hope is that I can follow up this post with a deeper-dive post about each function/application.


So let’s skip the small talk and get to business.


Automated Test Framework:

We've all been struggling with testing, and now ServiceNow has given us something to start with. There are great companies out there whose main focus is this subject, and I still think their products are more advanced than this, but it's a great start for those who have nothing and want to start looking into what they can do, and to at least have some tests automated to get a feeling for how the road ahead looks.


test done.JPG


CAB Workbench:

You need to activate the Change Management - CAB Workbench plugin to get the workbench going, at least if you upgrade from an older release.

This is a workbench that gives the Change Manager a much better way to have it all in one place. As you'll notice when you go in here, this UI is built with Service Portal, which should give you a good idea of what you can do with the power of Service Portal. Here the Change Manager can schedule meetings, send out invitations, etc. When the meeting is live, you have all the records you need in one place, and you can even build functions like having ServiceNow send out a Connect message to the people whose change is coming up next.


CAB workbench.JPG


ServiceNow Benchmarks:

Now, if your company wants it, you can join ServiceNow Benchmarks and compare yourself to other companies that are using ServiceNow. You can choose to drill down and compare against companies in the same industry, of the same size, etc. And of course this is voluntary: if you don't want to do it, you don't share your data with others either. But if you want to compare, you need to let others compare against you. This is going to be part of the HI portal, so it isn't something you do from your instance; you need to log in to HI to access these numbers. But it will be nice to see how, for example, your MTTR compares to others. Is it as bad as you think, or perhaps a lot better than at most of your fellow companies?


Since this is in HI and I'm no longer a customer, this is sadly something I can't dig deeper into at the moment. But I bet there will be other posts about this from people who can give you more insight than me.


Anomaly metrics:

Now in ITOM and Event Management we have something called Anomaly Metrics. You can start looking at your CIs, identify anomalous behavior, and stop bad things from happening before they get really messy. Of course you get a nice graphical map of the CIs that have received the highest anomaly scores over a time span. And the hero behind this data is Operational Metrics, which churns through all that historical metrics data from, for example, SCOM.


B2C in Customer Service Management:

They have now gone beyond B2B (Business to Business) and added B2C (Business to Consumer), focusing on helping companies that handle the bigger crowd of anonymous consumers who may not always be registered users in your company. You get the ability to offer an anonymous chat and a portal made more suitable for B2C needs. To get the portal, you need to activate the plugin "Consumer Service Portal".

consumer portal.PNG


Now, these are just a few of the new features in Istanbul. It wasn't easy to pick out just 5, and I think a lot of you have other favorites, which I'd love to hear about.


Take Care,





ServiceNow Witch Doctor and MVP
For all my blog posts:

This time I will show you an easy way to build some interactive filters on normal reports, which will also work on homepages.


Now, if you have Performance Analytics, I would go with that instead and look at the "Interactive Filters" it gives you; there is a great post about that which suzanne.smith wrote. You can find it here: Adding interactive filters to homepages and Performance Analytics dashboards


Anyway, I don't think I'm alone when I say that there are a lot of different report requests hitting the sysadmin team. And many of them are similar. For example, open incidents...

Some want a bar chart that groups the incidents by assignment group. Some want it by assignee. Then of course some want it by group, but stacked by assignee. So if you want to put these on a homepage, it would take a lot of reports. And you may not want to give users access to the report itself, where they can configure and change things without really knowing what they are doing...


Here is a small step on the way, making your life a little easier by giving the users the option to change the group-by and stacked-by on the homepage instead.
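The idea can be sketched in plain JavaScript: instead of building one report per combination, let the user pick "group by" and "stacked by" from dropdowns and rebuild the report parameters from those choices. The parameter names and format below are illustrative only, not an actual ServiceNow report URL schema:

```javascript
// Hypothetical sketch: build report parameters from the user's dropdown
// choices so one widget can replace many near-identical reports.
function buildReportParams(groupBy, stackBy) {
  var params = { table: 'incident', filter: 'active=true' };
  params.group_by = groupBy;           // e.g. 'assignment_group'
  if (stackBy) {
    params.stacked_by = stackBy;       // e.g. 'assigned_to' (optional)
  }
  return Object.keys(params).map(function (k) {
    return k + '=' + encodeURIComponent(params[k]);
  }).join('&');
}
```

A "group by assignment group, stacked by assignee" request then becomes `buildReportParams('assignment_group', 'assigned_to')`, while leaving the second argument off gives the plain grouped chart.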


Since I started to like making videos, I made one for this as well.


Take care,





ServiceNow Witch Doctor and MVP
For all my blog posts:
