
Developer Community


Other posts related to Themes

"Helsinki Gray" UI16 Theme

Istanbul Theme Properties Visual Guide

Jakarta Theme Properties Visual Guide


We are talking about Themes once again in this post, this time for the Kingston release.


Test Theme Record

Here is everyone's favorite Theme record, updated for Kingston:


/* Kingston Test
    Created by Jim Coyne -

    This is to help show what elements are affected by the CSS colors - PLEASE, PLEASE, PLEASE DO NOT ACTUALLY USE THIS THEME FOR REAL  :-)
    It uses HTML color names to hopefully make it a little easier to understand and find the color

    The comments include a copy of the UI16 default value for Kingston.

    Refer to this post for more information:
*/

/* Mostly Banner */
$navpage-header-bg: DodgerBlue;  /* #303a46 - banner background  */
$navpage-header-color: Aqua;  /*  #ffffff - banner title text  */
$navpage-header-button-color: Coral;  /*  no default, not documented - logged-in user name + Connect, Help and Settings icons  */
$navpage-header-divider-color: FireBrick;  /*  #455464 - banner separator line  */
$navpage-button-color: BlueViolet;  /*  #fff - Update Set and Application icons + Navigator icons + Connect icons  */
$navpage-button-color-hover: Yellow;  /*  #7EC24F - banner icons + clear search text icons + Navigator buttons when clicked  */

/* Mostly Navigator */
$navpage-nav-bg: BurlyWood;  /*  #303a46 - Navigator and Sidebar header and footers + unselected Navigator and Connect tabs background + History time separator background  */
$navpage-nav-bg-sub: Pink;  /*  #455464 - Navigator and Sidebar backgrounds + background for Applications, Favorites and History entries  */
$nav-highlight-main: LightSkyBlue;  /*  #3D4853 - Module, Favorite, History, Connect and Help item when clicked  */
$subnav-background-color: SlateGray;  /*  #455464 - Module background  */
$navpage-nav-app-text: Black;  /*  #cfd4d8 - Application, Favorite and History text + Connect and Help text  */
$navpage-nav-color-sub: Tomato;  /*  #bec1c6 - Module text  */
$navpage-nav-app-text-hover: DarkTurquoise;  /*  no default, not documented - selected Module, Favorite, History, Connect and Help item text  */
$navpage-nav-selected-bg: Olive;  /*  #4B545F - selected Navigator and Connect tab background  */
$navpage-nav-selected-color: OrangeRed;  /*  #ffffff - active Navigator and Connect tab icons  */
$navpage-nav-unselected-color: Orange;  /*  #bec1c6 - inactive Navigator and Connect tab icons  */
$navpage-nav-border: Magenta;  /*  #ddd - Global Search, Navigator and Connect search box outlines + search box filter icons  */
$nav-hr-color: YellowGreen;  /*  #303a46 - separator modules without a label + vertical separator line between main frame and Navigator/Sidebars  */
$nav-highlight-bar-active: Red;  /*  #278efc - highlight line under active Navigator/Connect tabs + selected Connect, Help or Settings icon + number of Connect messages dot  */
$nav-highlight-bar-inactive: PaleGoldenRod;  /*  #828890 - highlight line under inactive Navigator/Connect tabs  */

/* Search text */
$search-text-color: LightGreen;  /*  #e7e9eb - search text + clear search text icons + Navigator bar filter icon when minimized  */

/* Unknown properties, listed as a default on the docs site  */
$connect-latest-message: Violet;  /*  #cfd4d8  */


Just like the previous test Themes, it is not meant for actual use, but to help point out which properties affect which controls.


I used the default values from the "Default CSS styles for UI16" section of the Default CSS styles article on the docs site to build the Theme.  Below is a list of the individual properties with screenshots to show the affected controls/areas, which appear in yellow.


I've also listed any changes from Jakarta, as well as some notes/comments/opinions.


$navpage-header-bg - #303a46

  • Banner frame background



$navpage-header-color - #ffffff

  • Banner frame title text
  • Domain picker icon

Change from Jakarta:

  • Global Search icon no longer uses the property



  • Domain picker icon should be using "$navpage-header-button-color" instead for consistency



$navpage-header-button-color (#ffffff, not listed in the Default CSS styles article)

  • Logged-in user name text
  • OpenFrame (phone), Global Search (magnifying glass), Connect (text bubbles), Help (question mark) and Settings (gear) icons
  • Xplore: Developer Toolkit icon (eyeglasses) (see note below)

Changes from Jakarta:

  • Global Search icon uses this property now



  • almost there - only need the Update Set, Application and Domain picker icons to use it now for consistency
  • The Xplore: Developer Toolkit is an excellent third-party tool from James.Neale and company, available on Share



$navpage-header-divider-color - #455464

  • Banner frame separator line



$navpage-button-color - #ffffff

  • Update Set and Application icons
  • Minimize Navigator and Edit Favorites icons
  • Create a New Conversation, Open Connect standalone interface and Close Connect Sidebar icons


  • Update Set and Application icons should be using "$navpage-header-button-color" instead for consistency



$navpage-button-color-hover - #7ec24F

  • Update Set, Application, Global Search, Connect, Help and Settings icons when cursor is over the controls (only the Global Search icon is highlighted in the first screenshot below but the others will highlight when the cursor is over them)
  • Clear search text icon when cursor is over the control in Navigator and Connect sidebar
  • Navigator bar icons when clicked (some browsers [e.g. Chrome] only remove the highlight after cursor is clicked elsewhere)


  • the Domain picker icon does not actually do anything so there's no change when hovering over it



$navpage-nav-bg - #303a46

  • Navigator, Connect and Help Sidebar header and footers
  • Unselected Navigator and Connect Sidebar tab backgrounds
  • Connect Sidebar section headers
  • History time separator background



$navpage-nav-bg-sub - #455464

  • Navigator, Connect and Help Sidebar backgrounds
  • Background for Applications, Favorites and History entries
  • Selected icon when editing a Favorite



$nav-highlight-main - #3D4853

  • Module/Favorite (not in Safari)/History item when clicked (each browser has its own quirks with this one - Safari only shows while clicking the item, others will keep the highlight a second or so)
  • Selected Connect item (remains highlighted until another is selected or another record's chat window is selected or the record's chat window is closed)



  • Help item is no longer highlighted (shown differently now in Kingston)



$subnav-background-color - #455464

  • Module background



$navpage-nav-app-text - #cfd4d8

  • Application title text
  • Separator Module icon
  • "Loading..." Navigator message
  • Favorites text
  • Delete Favorite icon
  • History time separator text
  • History items text
  • Connect message text
  • Connect informational text
  • Help Sidebar title and context menu icon


  • Edit Application and Add to Favorites icons no longer use this property



$navpage-nav-color-sub - #bec1c6

  • Module text
  • Favorite icon
  • Edit Module and Add To Favorites icons when hovering over them



$navpage-nav-app-text-hover (unknown default, not listed in the Default CSS styles article)

  • Module text, Edit Module and Add To Favorites icons when Module is selected/clicked
  • Favorite text and Delete Favorite icon when Favorite is selected/clicked
  • History text, Connect message
  • First Module that matches a Navigator search


  • Help item text when selected/clicked no longer uses this property (Help works differently in Kingston)



$navpage-nav-selected-bg - #4b545F

  • Active Navigator tab background (Apps, Favorites or History)
  • Active Connect tab background (Chat or Support)



$navpage-nav-selected-color - #ffffff

  • Active Navigator tab icon (Apps, Favorites or History)
  • Active Connect tab icon (Chat or Support)



$navpage-nav-unselected-color - #bec1c6

  • Inactive Navigator tab icons (Apps, Favorites or History)
  • Inactive Connect tab icons (Chat or Support)



$nav-highlight-bar-active - #278efc

  • Highlight line under active Navigator tab (Apps, Favorites or History)
  • Highlight line under active Connect tab (Chat or Support)
  • Navigator and Connect search box outlines when selected
  • Selected Connect, Help or Settings icon (only the Connect icon is highlighted in the screenshot below but the others will highlight when clicked/selected)
  • Number of Connect messages dot
  • Outline of logged-in user control when selected



$nav-highlight-bar-inactive - #828890

  • Line under inactive Navigator tabs
  • Line under inactive Connect tabs



$nav-hr-color - #303a46

  • Separator modules without a label
  • Vertical separator line between main frame and Navigator/Sidebars



$navpage-nav-border - #dddddd

  • Global Search, Navigator and Connect search box outlines
  • Navigator and Connect search box filter icons
  • Outline of logged-in user control when selecting a drop-down menu item



$search-text-color - #e7e9eb

  • Global Search, Navigator and Connect search text
  • Clear search text icon in Navigator and Connect search boxes
  • Navigator bar filter icon when minimized




I'll try to keep this post updated with anything new that I find.  Please let me know if I've missed anything, or if something is incorrect.  I started this article based on the Jakarta one, so please forgive any copy/paste errors (but let me know about any).  Thanks in advance.

*** Please Like and/or tag responses as being Correct.
And don't be shy about tagging responses as Helpful if they were, even if it was not a response to one of your own questions ***

As companies move existing legacy applications to the cloud and adopt a mobile- and social-first approach, users are becoming less tolerant of antiquated interfaces, standalone applications that lack proper integration, and data scattered across silos. Today’s companies need modern, easy-to-use apps that are accessible from any device, consolidated data, easy integration with back-office systems, and the ability to innovate and adapt.


As part of the journey to modernize the application portfolio, many companies are looking to replace Domino Lotus Notes (DLN) applications.  These applications range in variety, complexity, and functionality.  Many DLN applications are task and workflow based, which makes them ideal candidates to migrate onto the ServiceNow Platform.  In this paper, we describe an approach to migrate your DLN applications onto the ServiceNow Platform.


Based on our experience from a large number of DLN to ServiceNow Platform migrations, we typically see four phases in migrating a DLN application:


Phase 1: Determine Complexity

In this phase you will be working with the process owner of the app.  They should be able to take you through the workflow and use cases for the app.  Don’t worry about solving the technical details at this point; get the requirements fleshed out, and leverage the expertise of the process owner to take notes about where the app can improve when migrated to ServiceNow.


I often get asked at this phase: “Does ServiceNow have a tool that lets me migrate my DLN app into ServiceNow automatically?”  There is no migration tool, but what I have found is that most people want to use this opportunity to revisit the implementation of the previously written DLN app and fix issues with it (rather than migrating the same problems over).


It is at this phase that we break down an existing DLN application and really understand what needs to be migrated.  Use this opportunity to rethink how things should be implemented.  Avoid carrying over-engineering and unwanted functionality (process, data, integrations, etc.) into the new platform.  Take note of fields on your forms that are unused, overloaded, or the wrong field type, making your workflow more difficult than it should be.


I normally like to collect answers to the following questions (thank you to my colleague Frank Schuster for sharing these questions):

  • What is the functional app description?
  • Who is the business owner of the app?
  • What is the business criticality of the app (on a scale from 1-5, 5 being very critical)?
  • What was the usage of the app in the past 3 months?
  • How many Notes databases are required for this app?
  • Are there integrations or messaging involved with this app?  Does the app use Sametime for messaging?
  • What is the current size of the database?
  • What is the number of documents in the database?
  • Does the app use ACL and what does that security structure look like? Who has what roles?
  • Does the app need to be optimized for mobile?
  • Does the app generate Outlook calendar invites?
  • Do we need to migrate the existing data into ServiceNow?


Phase 2: Plan for Success

You have a few options to solve your app requirements on the Now Platform.

  1. If things are pretty simple, in terms of data model and workflow, then a catalog item may be the best approach. Catalog items are services that are available to order from a service catalog, and use the out-of-box service catalog request data model. Administrators and catalog administrators can define catalog items, with details such as formatted descriptions, workflow, etc. There is no hard and fast rule here, but it is generally a good idea to keep it simple and clean, and included in the appropriate ServiceNow scope. 
  2. You can also define your own data model, not using the service catalog request item table, and keep things nicely contained within one functional ServiceNow application.  These simple data models are easy DLN migrations, and can be represented similarly to the catalog item (within a service catalog and with their own custom workflow).  The main difference between this type of implementation and a catalog item implementation is usually around licensing.
  3. Ultimately, if what you need to convert has a pretty complex data model, workflow, security, integrations, or other business logic, then you should probably do the implementation as a separate application on the Now Platform.


As a best practice you should not create a one-to-one application in ServiceNow for every DLN application.  In ServiceNow, you have the ability to create application scopes that represent functional groups.  A functional group is defined in terms of the service you are offering, meaning that a single application may contain several DLN applications in one ServiceNow application scope.  For example, if you have several DLN apps that are used for invoicing, you may want to create one ServiceNow application for invoicing and combine the functionality of those DLN apps into one ServiceNow application scope.


Extending Task tables

Another big question to ask at this point is how to extend your data model from the ServiceNow Task table.  Task is one of the core tables provided with the ServiceNow Platform; it provides a series of standard fields used on each of the tables that extend it, such as the Incident and Problem tables. In addition, any table that extends Task can take advantage of task-specific functionality for driving tasks.


Once again, we do not have to talk about the implementation details here, but we do need to decide whether to use the functionality ServiceNow gives us around extending the Task table.  There is no changing your mind once the table is created, short of rebuilding the entire table.  I generally like to introduce some of the features of extending Task in order to determine whether we want to use it in our DLN app migration.  There are several out-of-box features you get if your data model extends Task, but it also brings a lot of extra metadata you may not really need. A previous blog post talks about extending the Task table here:  What you get by extending the Task table.  If your DLN app requires SLAs or Visual Task Boards, then the decision is easy: extend Task.  Look at the provided link and make that determination at this stage in the process.


I also like to start mapping parts of DLN to the appropriate ServiceNow application file as we will migrate this DLN app to ServiceNow.  It is much easier to break down functional requirements if we have gone through this exercise up front. We can also identify any gaps in product features at this point, and make the decision if ServiceNow is the right tool for this particular DLN application.  I usually like to categorize my application components into three main categories:

  1. Data Model
  2. Display
  3. Code


Data model

The data model from DLN will include tables, fields, files, and data relationships.  This maps pretty nicely into ServiceNow as a table, with the appropriate fields and security.



Display

The User Interface from DLN will include forms, views, navigators, and web pages.  These will map into UI Pages, Catalog Items, Portals, Process State Flows, Dashboards, Reports, and Related Lists in ServiceNow.



Code

The code from DLN will include formulas, LotusScript, Java, JavaScript, and other API calls. This will map into business rules, workflows, script includes, and events within ServiceNow.  With the exception of web pages (which use HTML, CSS, etc.), all of the coding for converting these DLN files will be done in JavaScript on the ServiceNow Platform.
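As a hedged illustration of that last point, here is how a simple Notes-style @If formula might be reimplemented as plain JavaScript. The formula, field name, and thresholds are all invented for this sketch; on the platform the function would live in a business rule or script include rather than stand alone.

```javascript
// Hypothetical example: a DLN formula along the lines of
//   @If(DaysOverdue > 30; "Escalate"; DaysOverdue > 7; "Warn"; "OK")
// reimplemented as a plain JavaScript function. The field name
// "daysOverdue" and the thresholds are invented for illustration.
function overdueStatus(daysOverdue) {
    if (daysOverdue > 30) {
        return 'Escalate';
    }
    if (daysOverdue > 7) {
        return 'Warn';
    }
    return 'OK';
}

console.log(overdueStatus(45)); // Escalate
console.log(overdueStatus(10)); // Warn
console.log(overdueStatus(2));  // OK
```

In a business rule, the return value would typically be written to a field on `current`; the function itself stays testable on its own.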


Phase 3: Build and Test for Quality

Once you know what needs to be built and have a migration plan, the implementation isn’t too hard to do. I usually start by building the data model.  Most of the functionality in ServiceNow is data driven.  What I mean is that once we create the right table structure, ServiceNow will autogenerate APIs, list views, and forms based on the data model.  This can all be done with no code on the ServiceNow Platform, saving quite a bit of development time.


After all of the tables are created, I start to tweak the out-of-box List and Form views of our newly created table.  Remember these forms are autogenerated by ServiceNow.  Once again we can do this without code.  I usually like to insert some sample data and create some reports and dashboards in this phase as well as use our out-of-box Service Portal if a more modern single page application is necessary.


Lastly I tackle the business logic.  This is where I build out the workflow and reimplement the coding details given from the DLN side of things. Understanding the process at this point is imperative.  Do you need approvals for a particular request? Do you calculate data when a particular database trigger is fired?  Do we need to implement a particular integration with an external system?  All those requirements are addressed in this phase.


Ultimately, you do not have to follow these steps exactly. You may find yourself bouncing around them, but as a best practice it is important to test frequently along the way.  That is, implement a functional requirement and test, then implement another requirement and test again.  These are not full tests, but small functional tests of the feature you are currently implementing.  Once you feel your application is in a good state, you can publish to a test instance of ServiceNow and do a full end-to-end test with real production data.  Depending on the complexity of the app, you may have multiple deployments (resulting in more than one version before it is pushed to production).  It is recommended that you use Git as a repository for the ServiceNow application, as it facilitates managing these versions.  It is also a best practice to do all code changes and development on your ServiceNow development instance, not on your test/QA or production instances.


Phase 4: Deploy into Production

At this point in the migration we are ready to get our application into the hands of our customers (internal or external).  As a best practice I like to deploy frequently.  Do not be afraid to get your application into production and start getting feedback from your user base.  ServiceNow has a feature for publishing applications to a private repository for your company domain; it is through this publishing that your other instances can install or update your newly created app.  The length of time to convert a DLN app depends on the complexity of what you are converting: you may find yourself publishing multiple applications a day, while some may take a week.


This paper is only the beginning of what you can do once you convert an app from DLN into ServiceNow.  There are many features within the ServiceNow Platform we did not get into, but can be utilized to enhance your DLN application from then to NOW!


Feel free to contact me at:

The treasure hunt is back on. Earlier this year we had the hidden diamonds of Jakarta (LINK) and now it's time for the lost treasure of Kingston.


Just like with the hidden diamonds of Jakarta, my main aim here is to highlight the new features in Kingston that might not have gotten the same amount of star power as, for example, Flow Designer or the entrance of AI.

Before I start, I must say though that my personal favorite is Flow Designer. I really like it and I think it can be a real game changer when it comes to slimming down your processes, getting better performance with less coding; and the potential for re-usability within the designer is huge.


AI is of course also really cool, but in Kingston it's still in its cradle. I think it will unleash its true self (hopefully we don't all die) in London.


So, back to the lost treasures. First I want to mention two things that ServiceNow has buried so deep in Kingston that the functionality is long gone, and I don't have any treasure maps to find it:


  • UI11: I'm guessing many of you don't even know about this. I myself started my journey on Eureka, where UI14 came, so I haven't even used it. Until Kingston, you could have lived in the past and kept using UI11, but in Kingston, it's gone.
  • The next thing that has vanished is the ability to hold "Shift" and hover over the "i" in the list menu to get a popup window with the record in edit mode.


Now for the fun part:


Service Catalog in Service Portal:

There is a whole new bunch of widgets and other fun stuff coming. The Service Catalog gets a whole lot of love; for example, our beloved Order Guide now gets support for attachments, even per item. And as you can see with the order guide, it reminds you a lot of the view we got in the native UI.


You can read more about it here:


Automated Test Framework (ATF):

I haven't had as much time to look into ATF as I would want, but I can see that one thing I would really love, if I used ATF, is the new ability to rerun specific failed tests in a suite without needing to rerun the whole suite.

You can read more about it here:



CMDB SDK:

The CMDB SDK provides a set of Representational State Transfer (REST) application programming interfaces (APIs) that enable third-party applications to use the identification and reconciliation engine to create, read, and update configuration item (CI) records, eliminating duplicate CIs and improving overall CMDB data quality.
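As a sketch of what a client of this API sends, here is the general shape of an identification payload. The class name, attribute values, and the exact endpoint path are assumptions to verify against your instance's REST API Explorer before use.

```javascript
// Sketch of a request body for the identification and reconciliation REST API
// (typically POSTed to /api/now/identifyreconcile on the instance).
// The CI class and attribute values below are illustrative only.
var payload = {
    items: [{
        className: 'cmdb_ci_linux_server',
        values: {
            name: 'lnux100',
            serial_number: '3a323f4c-45f6-4b08-9e87',
            ip_address: '10.10.10.1'
        }
    }]
};

// The engine matches on identification rules for the class, so the same
// payload sent twice updates one CI instead of creating a duplicate.
var body = JSON.stringify(payload);
console.log(body.indexOf('cmdb_ci_linux_server') > -1); // true
```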


So hopefully this will lighten the load for the CMDB people when it comes to making sure there aren't duplicate CIs, etc.


You can read more about that and more CMDB news here:



Workspaces:

In Kingston we see a few of these workspaces (not sure of the official label). In my eyes they are the future, and I bet that in London they will have grown and taken over a lot more applications. If you want to see how it works, take a look at the Agile Board, for example. (You need to activate the Agile Development 2.0 plugin to get it.)


You can read more about the agile board here:


Software Asset Management (SAM):

In Kingston there is OOB integration to handle your Office 365 subscriptions. I think there is a lot of money for companies to be saved in all kinds of licenses, and Office 365 sure is a place with saving potential as well.


You can read more about it here:


Performance Analytics:

Now you have the ability to use external data sources to pull information into your PA reports. Really simple, and no source data is saved in ServiceNow; only the scores are saved in ServiceNow.


You can read more about it here:


Function fields:

A new field type has become reality in Kingston. With this field you can show the result of a database function, for example add, concat, or datediff. And by doing this here instead of in a business rule, you should get a lot better performance. So together with Flow Designer, I think we will see a lot less use of business rules in the future than we have seen in the past.
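To make the idea concrete, here is a hedged sketch, in plain JavaScript outside the platform, of the kind of per-record computation a function field pushes down into the database query itself. The field names are invented for illustration.

```javascript
// What a "datediff"-style function field computes: the difference between two
// date fields, expressed in days. On the platform this runs in the database,
// so no business rule executes per record. Field names are invented.
function daysBetween(openedAt, closedAt) {
    var ms = closedAt.getTime() - openedAt.getTime();
    return Math.floor(ms / (1000 * 60 * 60 * 24));
}

// What a "concat"-style function field computes: two fields joined into one
// display value.
function displayLabel(number, shortDescription) {
    return number + ' - ' + shortDescription;
}

console.log(daysBetween(new Date('2018-01-01'), new Date('2018-01-15'))); // 14
console.log(displayLabel('INC0010001', 'Printer is on fire'));
```

The point is not the arithmetic itself but where it runs: computed in the query, the result is also sortable and filterable like any other column.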


You can read more about it here:


And for last,


Mobile App

The mobile app is really starting to evolve, and now there is functionality for users to still be able to use the app even if they don't have an internet connection. I think this is a really nice feature and probably wanted by many.

You can read more about it here:

Good luck on the treasure hunt!






ServiceNow Witch Doctor and MVP
For all my blog posts:

So, this time my journey took me into the Service Portal and its widgets. I was looking at a knowledge article within the portal and wondered how I could make a picture bigger by clicking on it. To achieve that, I used the widget functionality called the Link function, which you can see in the widget editor. I can honestly say I had never used it before, so I wasn't really sure how to use it or what it was for.

I took a look at the documentation and what I found was that the Link function is used to directly manipulate the DOM (Link Docs). And this was pretty much what I was after: to manipulate the pictures in the articles so that if I click on them, I get an enlarged version.
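In a widget, the Link function is a client-side function with the AngularJS signature function(scope, element, attrs), where element is the widget's root element. A minimal sketch of the idea, with an invented class name, might look like this; my actual code is in the video below. The stand-in objects after the function exist only so the behavior can be run outside the portal.

```javascript
// Hedged sketch of a Service Portal widget Link function that toggles an
// "enlarged" class on images inside the widget when they are clicked.
// The class name "article-img-enlarged" is invented for illustration.
function link(scope, element /*, attrs */) {
    // Delegate clicks on any <img> inside the widget to a class toggle.
    element.on('click', 'img', function () {
        this.classList.toggle('article-img-enlarged');
    });
}

// --- tiny stand-ins, just to exercise the handler outside the portal ---
var img = { classList: (function () {
    var classes = {};
    return {
        toggle: function (c) { classes[c] = !classes[c]; return classes[c]; },
        contains: function (c) { return !!classes[c]; }
    };
})() };
var fakeElement = {
    on: function (event, selector, handler) { this._handler = handler; },
    click: function (target) { this._handler.call(target); }
};

link(null, fakeElement);
fakeElement.click(img);
console.log(img.classList.contains('article-img-enlarged')); // true
```

The actual enlargement is then just CSS on the toggled class (for example, a larger max-width and a box shadow).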


I made a video here showing what I did and how I did it.



So grab a cup of coffee and enjoy.





ServiceNow Witch Doctor and MVP
For all my blog posts:

LDAP integration allows ServiceNow instances to communicate with Active Directory (AD). This integration enables customers to do the following:


  1. Import users/groups/roles from AD into the ServiceNow instance
  2. Schedule this import so as to keep the data in sync with AD
  3. Authenticate users against AD when logging in to the ServiceNow instance


Hence, ServiceNow does not store LDAP users’ passwords, as they are authenticated against AD. The ServiceNow instance resides in the cloud or on premises, and AD is installed on a different server.


At times, the LDAP connection to AD fails for whatever reason and no LDAP user is able to log in. This has a big impact on the business and causes a P1 incident. The root cause of this connection failure can be anything: a local network outage on the customer side, incorrect LDAP connection attempts after cloning, an LDAP credentials change on AD, etc.


The ServiceNow datacenter hosts excellent monitoring tools which poll LDAP test connections in customer instances; if a test connection fails, an alert is generated, which in turn generates a high-priority incident. Be it an issue on the ServiceNow instance side or in a local network on the customer side, the major impact is that LDAP users cannot log in and cannot work until the issue is fixed.


In order to mitigate the impact, ServiceNow has introduced LDAP One Time Password feature from Istanbul release onwards.


What is LDAP One Time Password?


This is a new feature introduced with the ServiceNow Istanbul release that helps LDAP users generate a temporary local password to log in to ServiceNow when the LDAP server is down. It is available and enabled out of the box and requires no plugin activation. It is controlled using the system properties below:


  1. glide.ldap.onetime.password.enabled - a boolean property used to enable/disable this feature
  2. glide.authenticate.onetime.password.validity - an integer property indicating the temporary password's validity in minutes
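The validity window controlled by the second property amounts to a simple time comparison. The sketch below is a hedged illustration of that logic, not ServiceNow's actual implementation.

```javascript
// Illustration of the validity check implied by
// glide.authenticate.onetime.password.validity (default: 10 minutes).
// This is a sketch of the logic only, not the platform's real code.
function isOtpValid(generatedAtMs, nowMs, validityMinutes) {
    var ageMinutes = (nowMs - generatedAtMs) / (1000 * 60);
    return ageMinutes <= validityMinutes;
}

var generated = Date.now();
console.log(isOtpValid(generated, generated + 5 * 60 * 1000, 10));  // true
console.log(isOtpValid(generated, generated + 11 * 60 * 1000, 10)); // false
```

Raising the property value widens that window, which trades convenience against how long a leaked one-time password stays usable.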


How Does This Feature work?


OLD Situation: Login error message when LDAP is unreachable:


Error message on screen: Your account is configured to use LDAP authentication, and we cannot currently connect to the LDAP server. Please contact your ServiceDesk to resolve this issue.


New Situation: Login error message when LDAP is unreachable


Here is the difference in error message: Your account is configured to use LDAP authentication, and we cannot currently connect to the LDAP server. Please contact your ServiceDesk to resolve this issue. To obtain a password for one-time login, click here. An email message containing the password will be sent to you.


The user clicks the hyperlink click here, and the platform sends a one-time password to the user’s email address for the next login, as shown in the screen below:


Behind The Interface In Platform:

  1. When the user clicks the link click here, the platform generates a one-time password in the security_nonce table, which can be used once and expires after use.
  2. By default this password is valid for 10 minutes, but this can be configured with the system property glide.authenticate.onetime.password.validity.
  3. After the one-time password is generated, the platform generates an event.
  4. This event in turn triggers the email notification OneTimePasswordEmailNotification.


Troubleshooting tips when a user does not receive the one-time password:

  1. Log in as an admin and check the event logs.
  2. If the event is there, make sure the notification OneTimePasswordEmailNotification is enabled in the user profile.
  3. Make sure the user profile has a valid email address.
  4. Open a HI incident if steps 1 to 3 are OK and the user is still missing the one-time password email.


This is a small feature, but I find it a great enhancement as it tremendously reduces the impact of LDAP login issues when the LDAP server is down, and it generates big value for ServiceNow customers in terms of business continuity.



LDAP Integration Setup

LDAP Integration FAQs

LDAP Integration Troubleshooting

Brgds, LK

PS: Hit Like, Helpful or Correct if I was able to assist you

While developing a scoped application, you may want to secure it against other applications. Securing an application helps the author keep control of it and prevents customers from interacting with its artifacts without the author's knowledge. Below are the ways in which design-time permissions on a scoped application can be granted or restricted.


3 ways to secure your application

  1. Application Access Setting
  2. Cross Scope Access
  3. Restrict Table Choices


In this post, I will show you how to utilize the Application Access Setting, Cross Scope Access, and Restrict Table Choices to secure your scoped apps against other applications, with examples of how to use them to enable access, disable access, and track your scoped apps across the platform.


Application Access Setting

The Application Access Setting specifies which application artifacts are available to custom applications in other application scopes. These permissions are in addition to the standard access controls (ACLs) that determine whether users can access data in the custom application.


  1. On your instance, navigate to System Applications > Studio.
  2. Open the Studio tab.
  3. Click on the application and then select the table whose application access settings need to be modified. You will see the image below once you click on any table.

scoped application access.jpg

The Can read, Can write, Can update, and Can delete Application Access options grant scripts from other application scopes the ability to perform database operations against the table's records. By default, scripts in all application scopes can read the table’s records but cannot perform any other database operations.



"Allow configuration" controls whether out-of-scope applications can create application files such as business rules, new fields, client scripts, and UI actions.


Restrict application on a scoped app

I can prevent other applications from performing any operation (create, read, update, delete, web service interaction) on this scoped app by setting the "Accessible from" value to "This application scope only." Other applications can interact with it only when the value is set to "All application scopes." Depending on your requirements and use case, select the appropriate checkboxes to grant permissions to other applications.


For other artifacts, such as Script Includes, access is granted or restricted by the value of the "Accessible from" field. This field defines whether a script is public or private.


Making a script public

A public script is available to all applications. To make a script public, set the "Accessible from" field on the Script Include to "All application scopes." Any change to a public Script Include must be made carefully to avoid breaking applications that depend on it.


Making a script private

A private script is specific to the application in which it is defined. To make a script private and inaccessible from other applications, set the "Accessible from" field to "This application scope only." The script can then be called only by code within its own application scope; code in other application scopes cannot access it. By keeping scripts private, application developers can easily modify or refactor them in future versions, since no other applications depend on them.


Example of creating a Record in a Group Table:


Application Access settings differ for each out-of-box table. For example, the default Group table allows other application scopes web service access and read access, but restricts the other permissions. If a script attempts an operation that is not allowed, admin users see a message:

Execute operation on API 'GlideRecord.insert' from scope 'CSA' was denied. The application 'CSA' must declare a cross scope access privilege. Please contact the application author to update their privilege requests.

Evaluator: com.glide.script.fencing.CrossScopeAccessNotAllowedException: Access to GlideRecord.insert from scope x_13241_csa not allowed

In the above case, the "Can create" checkbox has to be set to true before records can be created in the Group table.



Example of making a call to JSUtil from a scoped application:


In the screenshot below, we can see that JSUtil cannot be called from any other application because its "Accessible from" value is set to "This application scope only" out of the box.

jsutill scoped app.jpg

In the above case, "Accessible from" has to be changed to "All application scopes" for JSUtil to work across applications.


Please be aware that any changes made to global artifacts are captured in the global update set.


Please work with the ServiceNow Certification Team if you have modified Application Access settings (on global or base system tables) and intend to submit the app to the Store. We approve or reject apps on the Store based on the application use case. The same applies to any modification of the "Accessible from" value on base system artifacts such as Script Includes.

Cross Scope Access

Cross scope access allows administrators to manage out-of-scope access to application resources by maintaining a list of operations and runtime privileges that the application is authorized to run on the target instance. Cross scope access applies only if the author of the app sets the Runtime Access value to Tracking or Enforcing. This gives the author control over their application and prevents customers from interacting with its artifacts without the author's knowledge once the app is installed on a target instance.


Cross-scope privileges can be granted for:

  • Table: Read, write, create, delete records
  • Script Include: Execute API
  • Scriptable (script objects): Execute API


Enabling Cross Scope Access

  1. On your instance, navigate to System Applications > Studio.
  2. Click the application, for example CSA (the custom application I created on my instance).

    create scope access.jpg

  3. Open the File menu and select the Settings menu item. The default value for the Runtime Access Tracking field is set to Tracking.

    scoped app tracking.jpg


  • None: All cross scope privileges are granted automatically at runtime.
  • Tracking: Allows application scripts to access resources from other applications. A record for the access is automatically inserted in the cross-scope access table with a Status value of Allowed. This is the default setting.
  • Enforcing: Allows application scripts to access resources from other applications only after an admin authorizes the access. A record is automatically added to the cross-scope access table with a Status value of Requested.


A custom application with Runtime Access Tracking set to Tracking is changed to Enforcing automatically when the app is installed on the target instance.


During testing, application developers should run all of their application scripting logic to ensure the system creates any necessary cross-scope privilege records. After application publication, the system only allows runtime requests to run that have a valid cross-scope privilege record.


Example of across access scoped application:


Let's assume I have a business rule on the custom table that creates a record in the Incident table. For this app to work on another instance, I, as the author, should ensure the script is executed at least once on my dev instance before the app is published. An entry appears in the cross-scope access table as soon as the script is executed on the dev instance.


Here we are assuming Runtime Access Tracking is set to Enforcing.

var gr = new GlideRecord('incident');
gr.initialize();
gr.short_description = 'This is a test for CSA';
gr.insert();


When the script executes, ServiceNow checks to see if the cross-scope access between the CSA scope and the Global scope table is allowed. In this case, it is not because the Enforcing setting requires an admin to authorize the access. This is a snippet of the error from the System Log:

Security restricted: Execute operation on API 'GlideRecord.insert' from scope 'CSA' was denied. The application 'CSA' must declare a cross scope access privilege. Please contact the application author to update their privilege requests.

Evaluator: com.glide.script.fencing.CrossScopeAccessNotAllowedException: Access to GlideRecord.insert from scope x_13241_csa not allowed


  1. Open the Application Cross-Scope Access module by navigating to System Applications > Application Cross-Scope Access.
  2. Search for all records with a Status field value of Requested.
  3. To grant access, an admin user must click the Open record icon to open the record for editing.
  4. Change the Status to Allowed, then click the Update button.

access cross scope app.jpg

The steps above apply only when the author has set Runtime Access Tracking to Enforcing. By default, it is set to Tracking, and cross-scope privilege records are automatically granted access at runtime.



Restrict Table Choices in a scoped application

To get to Restrict Table Choices, follow steps 1-3 in the Cross Scope Access section above. The Restrict Table Choices setting limits the tables an application file can be configured against to tables from the current application. It can be set at each application level. By default the checkbox is false; set it to true depending on your app requirements.


Example of restricting table choices:


Let's assume I have set the Restrict Table Choices checkbox to true. In that case, only tables in the same scope can be selected. This applies to any artifact created in the scoped app, for example client scripts, UI policies, etc.

restrict table choices.jpg


We have covered how Cross Scope Access manages out-of-scope access to application resources, how Application Access settings restrict database operations, and how Restrict Table Choices limits application file configuration. Cross Scope Access and Application Access settings are key components of a scoped application and its success.


For additional help on this topic, see what other customers have asked:

cross scope privileges denied by table cross scope

Cross scope privilege issue in Scoped Application

access to api refused

Scoped apps - can I allow scripted read access to global scope without allowing creation of business rules, too?

- Pradeep Sharma (@sharma_pradeep)

PS: Hit like, Helpful or Correct depending on the impact of the response

While I was working at a pharmaceutical company, I had to generate lots of documents for compliance reasons. In some cases, I had to pull data from ServiceNow to embed in Word documents, such as requirements, test scripts, traceability matrices, etc. It soon became very labor intensive, so I decided to automate the process using the ServiceNow ODBC driver and Microsoft Office VBA (Visual Basic for Applications, a macro language). This allowed me to extract data from ServiceNow and use it in documents; the time savings were huge.


I originally published this on Share shortly after presenting it at Knowledge 14 in San Francisco. Since then, I lost access to Share and, unfortunately, wasn't able to restore my access. So I'm republishing it here after making some revisions.


Attached are


  1. Slide deck that was presented at K14 with instructions on how to run the samples in the Word file.
  2. Word macro file containing VBA samples for Word, Excel, PowerPoint, Outlook; pdf files are created by exporting Office files as pdf.


The Word demo file was created using Word 2010 but should work with later versions. Below is the abstract from the deck:

Beyond ITSM, ServiceNow is a powerful platform for any business service request management. Enhance the power by fully integrating with Microsoft Word, Excel, Outlook and PDF, from simple form-based report generation to live, interactive documents. Automatically pull contents from ServiceNow and apply formats to meet business needs. Add password protection or digital signature for enhanced security. Simple, practical do-it-yourself solutions will be demonstrated and best practices discussed. If you ever wanted more than the standard Excel and PDF export from ServiceNow, this is a must-attend session!


The slide deck includes:


  1. Solution – Requirements Tracker
  2. Solution – Overview of exporting to Word templates
  3. Solution – Prepare Word VBA
  4. Solution – Open ServiceNow Database
  5. Solution – Query ServiceNow Database
  6. Solution – Create Word Documents
  7. Solution – Export to PDF
  8. Solution – Protect Document (using Digital Signature)
  9. Solution – Create Excel Spreadsheets
  10. Solution – Create PowerPoint from Excel
  11. Solution – Create Outlook Email
  12. Alternate Solutions
  13. Tips – ODBC
  14. Tips – VBA


Hope you find it useful and please post questions if you run into anything.


Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

Apart from the basic concepts of HTML, CSS, and JavaScript, one of the most powerful things you can learn to really feel comfortable in Service Portal is AngularJS scopes.


There are many resources online which do a great job of explaining AngularJS scopes; there's really no better than what can be found in the official AngularJS documentation. However, there's nothing I'm aware of which is targeted at someone who's getting started in Service Portal, has the basic knowledge of creating a widget, but still feels confused about what's actually going on behind the scenes. Hopefully after reading this article some of that fog of confusion will have lifted.


Once you understand scopes, the amount of road blocks you'll hit when developing widgets will dramatically lower, and you'll be well on your way to becoming a Service Portal master!


This is my first post in what will hopefully become a series of articles helping to explain the basic concepts of Service Portal. If you have any suggestions for topics, feel free to leave a comment below, or reach out to me on twitter with my handle @dylanlindgren.




Given that you're reading this article, it's likely that you've ventured into the developer tools of your web browser on at least a few occasions. Perhaps after a sudden fit of anxiety at the overwhelming amount of information in there, the first thing you'll have noticed is a tree-like structure of elements, within elements, within elements, and so on. This structure is a live, visual representation of the current state of something called the Document Object Model of the page you are on; for short it's called the DOM.




The description of the DOM from the Mozilla Developer Network is as follows:

The Document Object Model (DOM) [...] represents the page so that programs can change the document structure, style and content. The DOM represents the document as nodes and objects.

Shown in another way, you can really see how this parent & child relationship that each DOM element has goes in to building what you see in your browser window. Note that the "stacking" of elements shows the parent & child relationships of the elements on the page.




The key point to understand about the DOM is that it is a "tree" of elements, like in this hypothetical DOM tree:





A comprehensive definition of a directive can be found in the AngularJS documentation:

At a high level, directives are markers on a DOM element [...] that tell AngularJS [...] to attach a specified behavior to that DOM element [...], or even to transform the DOM element and its children.

To rephrase that, any element within the DOM can have a directive attached to it, and you can use the directive to give the element custom behaviour.

This element you're targeting could be anything, such as an image, a header, a list, or even a container div element (which may have many elements within it). And the custom behaviour you give to it could be an animation, to load some text into it from an external website, or even hide it from the page completely; it's up to the creator of the directive. This allows developers to build their own reusable components which do not exist natively in HTML, such as a login button that has a particular style/functionality, or a list of records.
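To make this concrete outside of a browser, here is a plain-JavaScript sketch of the idea (not AngularJS itself): a "compiler" walks a tree of element-like objects and attaches behaviour wherever a registered marker appears. All names here (`registry`, `compile`, `myHighlight`) are illustrative, not Angular APIs.

```javascript
// Registered "directives": a marker name mapped to the behaviour it attaches
var registry = {
    // A hypothetical directive that gives any marked element a custom style
    myHighlight: function(element) {
        element.style = 'background: yellow';
    }
};

function compile(element) {
    // Attach every registered directive found among the element's markers
    (element.attributes || []).forEach(function(attr) {
        if (registry[attr]) registry[attr](element);
    });
    // Recurse into children, like Angular's compiler walking the DOM tree
    (element.children || []).forEach(compile);
}

// A tiny stand-in for a DOM tree
var tree = {
    tag: 'div',
    children: [
        { tag: 'h1', attributes: ['myHighlight'] },
        { tag: 'p', attributes: [] }
    ]
};
compile(tree);
// Only the marked <h1> has had the custom behaviour attached
```

The point is simply that the markup declares *where* behaviour goes, and the framework's tree walk decides *when* it is attached.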


So in the case of the hypothetical DOM tree I showed earlier, perhaps the elements with a green border below might have directives associated with them.




Note that the element with the blue border is the element on the page with the attribute ng-app. In AngularJS the element with this attribute is considered the root element of the application. It's only the elements below this which Angular treats as part of the application.


Let's just show the AngularJS-related elements of the DOM, and rotate it so it looks more like a "tree" (albeit, one that's upside down!)




Directives, a.k.a Widgets


You might now be asking yourself...

Why am I reading about directives? How does this relate to Service Portal?


Well, Service Portal widgets are directives. When you load a Service Portal page, a new AngularJS directive is created for each different widget on the page. This directive is then placed in the DOM wherever the widget was put on the page via Service Portal Designer or Page Editor. As you know from the previous section, a directive adds custom behaviour, and in the case of a widget this behaviour is to insert the HTML from the widget, and perform the actions you define within the client script and/or link function of the widget.


So why are there two names for the same thing? Well, lots of functionality has been added to widgets to make them easier to work with, such as options, the server-side behaviour that happens in a widget's server script, and the passing of the data variable it generates to the client script.


So in the case of widgets, the hypothetical structure of the directives from the previous section could be like so:




You'll notice above there's an "embedded" Widget B, which is not added via Page Designer. This widget is embedded in Widget A as described in the official ServiceNow documentation. This is actually the only way you'd end up with a widget sitting below another widget in the DOM.




One characteristic that's just as important to a widget as it is to a directive is its scope. A scope can be thought of as a space where application functionality is contained. By default a directive shares the scope of its parent directive, but it can also be set to have a new, isolated scope.

In Service Portal a widget always has an isolated scope. This was set by the developers of Service Portal to ensure widgets play nicely together, and so that you can have multiple instances of the same widget on a single page, each with its own isolated space to play in.


The scope tree


Not only is the DOM a tree; scopes sit in a tree as well. You can think of the scope tree as a cut-down version of the DOM tree, with everything that didn't create a new scope removed. As all widgets create a new scope, the scope tree in the example above would look like this:




Local scope vs Root scope


You'll have noticed in the above diagram that at the top of the tree is a thing called $rootScope. You can think of this as an overarching scope that is available to your whole application, from any widget/directive. It also allows you to listen for/broadcast events, but more on that in the next section.


Cross-scope communication


All widgets have isolated scopes, so how can we access data from one widget in another widget? There are two ways:




Similar to the concept of events in ServiceNow, AngularJS events can be announced from any scope in your application. They can also be listened for from any scope; however, the method you use to announce the event affects which scopes will hear it.


There are two methods you can use to announce events:


  • With $broadcast(), the event will travel down in the scope tree from wherever it was announced, until it hits the bottom. The event will not be heard by sibling scopes. See more on the official AngularJS documentation.
  • Using $emit(), the event will travel up in the scope tree from wherever it was announced, until it hits the $rootScope. This event will not be heard by sibling scopes. See more on the official AngularJS documentation.




So now that we know how to announce an event, how do we listen for it? For that, we use the $on() method: you supply it with a function, and whenever the event is heard by the scope, that function is called. See more about this on the official AngularJS documentation.


The $broadcast() and $on() functions are also available on $rootScope, which gives you another place to listen for/announce an event to ensure the right scopes will hear it.
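The travel directions described above can be sketched in plain JavaScript. This is a minimal model of the semantics, not Angular's real implementation, using a toy `Scope` class:

```javascript
// A toy scope tree modelling Angular's $on/$emit/$broadcast semantics
function Scope(parent) {
    this.parent = parent || null;
    this.children = [];
    this.listeners = {};
    if (parent) parent.children.push(this);
}
// Register a listener for a named event on this scope
Scope.prototype.$on = function(name, fn) {
    (this.listeners[name] = this.listeners[name] || []).push(fn);
};
// $emit travels UP: this scope, then each ancestor until the root
Scope.prototype.$emit = function(name, data) {
    for (var s = this; s; s = s.parent)
        (s.listeners[name] || []).forEach(function(fn) { fn(data); });
};
// $broadcast travels DOWN: this scope, then all descendants (siblings never hear it)
Scope.prototype.$broadcast = function(name, data) {
    (this.listeners[name] || []).forEach(function(fn) { fn(data); });
    this.children.forEach(function(c) { c.$broadcast(name, data); });
};

// Usage: two sibling widget scopes under $rootScope
var $rootScope = new Scope();
var widgetA = new Scope($rootScope);
var widgetB = new Scope($rootScope);

var heard = [];
$rootScope.$on('fieldChanged', function(d) { heard.push('root:' + d); });
widgetB.$on('fieldChanged', function(d) { heard.push('B:' + d); });

widgetA.$emit('fieldChanged', 'priority');      // up: root hears it, sibling B does not
$rootScope.$broadcast('fieldChanged', 'state'); // down: root and B both hear it
// heard is now ['root:priority', 'root:state', 'B:state']
```

Notice that the $emit from widgetA never reaches widgetB; broadcasting from $rootScope is what lets sibling widgets communicate.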


Angular Providers


Another means of communication between widgets is a service or a factory, which in Service Portal are both types of Angular Providers. Services and factories in AngularJS basically offer a place where you can define functions and variables that any scope with access to them can use.


The diagram below visualises where these sit in relation to the scopes, if the Angular Provider were added to the related lists of all three widgets in our example. Given the amount of information in this article already, I'll cover how to use an Angular Provider in detail in a future installment.
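The essence of why a factory works for cross-widget communication can be sketched in plain JavaScript: the injector creates the factory's product once and caches it, so every consumer shares the same instance. The names here (`cartFactory`, `injector`) are illustrative, not Service Portal APIs.

```javascript
// A factory: a function returning an object whose state lives in a closure
function cartFactory() {
    var items = []; // shared state, invisible from outside
    return {
        add: function(item) { items.push(item); },
        count: function() { return items.length; }
    };
}

// A toy injector: builds each named provider once and caches the result,
// which is what makes the same instance visible to every widget
var injector = (function() {
    var cache = {};
    return function(name, factory) {
        if (!cache[name]) cache[name] = factory();
        return cache[name];
    };
})();

// Two "widgets" asking for the same provider receive the same object
var inWidgetA = injector('cart', cartFactory);
var inWidgetB = injector('cart', cartFactory);
inWidgetA.add('laptop');
// inWidgetB.count() is now 1: both widgets see the same shared data
```

Because both widgets hold a reference to the one cached object, data written by one is immediately readable by the other, with no events required.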






In the next installment in this series, we will look into how to put this theory into practice by making a series of widgets which communicate with each other. Specifically, how to create a widget showing a list, that reacts and updates with new information when you change a field on the out-of-box form widget.


If you'd like to read more about AngularJS scopes, I've put together some further reading links below:


Well, that's it for scopes! Feel free to leave feedback in the comments below, or reach out to me on twitter with my handle @dylanlindgren if you'd like to buy me a beer! 

user interface.jpg

The Knowledge Base is kept current with frequent edits and additions. Find out what is new and stay up-to-date on the latest ServiceNow Knowledge Base articles by reviewing the weekly KB digest.



The ServiceNow user interface is the main way to interact with the information and applications in your instance. Organizations can customize their user interface to incorporate their brand logos and colors. The ServiceNow UI extends to the mobile, tablet, and desktop.


Recently added and updated articles on UI:


User Interface

Customize, personalize and manage the way users view and interact with your organization or company’s interface using UI.

creatorcon challenge.PNG


It's on again. Yes, that's right - the CreatorCon Challenge is back for your app creation and entrepreneurial pleasure.


No amount of shouting about how great it all was and how "overwhelming" the response was last year is going to make any difference - the proof that last year's inaugural edition was well received by the ServiceNow developer and partner community, the judges, the attendees at the finale at Knowledge17, and most importantly the customers who have already purchased the winning apps, is that ServiceNow decided to fund it and invest in it again for the 2nd year in a row. The proof's in the pudding, as they say. Well, the pudding is here, and it's mighty tasty.



This year, we're delighted to up our challenge game in four ways (not necessarily a complete list):


1) ServiceNow founder Fred Luddy is a judge. Who better to pitch your company and app to on the big stage at Knowledge18 than Fred Luddy? Answer: nobody.


2) Your personal VC mentor - BJ Lackland, CEO of Lighter Capital , a leader in the non-dilutive VC space, is back (he was a judge last year), this time as a personal mentor to the three finalists. That is some pretty serious value for startups to get access to someone of BJ's expertise to coach and mentor you as you fine-tune your investor pitches and then to be on-stage with you at Knowledge18. Sort of like your own Angelo Dundee.


3) $1M in total prizes - $500K in cash investments from ServiceNow Ventures plus $500K worth of sales and marketing prizes. See the FAQ for a detailed breakdown of how the $500K in sales and marketing is calculated, if you have any doubts about what it's really worth. And that doesn't even include what it would cost you for access to a mentor such as BJ Lackland (because BJ agreed to be a mentor after the FAQ was published, natch).


4) Expanded distribution channels - reach new customers in new markets all by yourself with the OEM Program. The perfect complement to the ServiceNow Store. Winners can (and are required to, actually) distribute and monetize apps on the Store, OEM Program, or both!


We are once again psyched to see what our developer and partner community will create on the Now Platform with all of the new platform services  in Jakarta (such as MetricBase) and what's coming soon in Kingston.


Good luck -  create something amazing and pitch it to Fred. We look forward to reviewing your entry.

Martin Barclay
Director, Product Marketing
App Store and ISVs
Santa Clara, CA

At a time when Twitter, Facebook, Google, and Yahoo give you the flexibility to select what you do and don't want to read, email is still one of the places where "unsubscribe" is not always an option. As users go on holiday, change jobs (e.g. the account gets disabled), mailboxes fill up, emails get marked as spam, etc., some email relays start bouncing those emails back, and those users cannot unsubscribe from the senders. On your instance, it is good practice to review bounced emails and identify the users who should no longer receive notifications. This requires efficient and regular intervention, as bounced emails slowly increase if no action is taken. From an administration point of view, if users are empowered to unsubscribe themselves, you will have less work to do, and wanted emails will go out faster.



Following up on my previous blog, Speed up your email delivery by validating recipients, here is my review of the matter.


3 ways to minimize bounced emails

  1. Educate your users to unsubscribe from the unwanted notifications instead of just ignoring them
  2. Regularly execute a validation of the emails bounced back to the instance
  3. Archive junk emails to improve the email table performance


Educate your users to unsubscribe from unwanted notifications

To educate your users to unsubscribe, make sure they know this functionality is available on your instance. Users can create filters in their preferences to receive a notification only if the conditions of their personalised filter are met. They can also disable individual notifications, or disable all notifications they receive. This does not affect the user's active status - just the notifications targeted at them.


Here is an example of a notification received by one user: angelo.ferentz.



For this, users can go to their notification preferences, select the notification they received, and set the dial to "off." This prevents further notifications from being sent to this user only.



They can also decide to leave notifications on, but set a personalised filter that suppresses them under certain conditions.




Finally, if users decide to stop any notification from being sent to their devices (e.g. before going on holiday), they can disable them altogether.



From a performance point of view, if users enable only the notifications they require, there are fewer emails to send. This is a much better option than generating tons of emails that go out and bounce back because these settings were ignored.

If you are reading this, unsubscribe from all unwanted notifications to help improve the performance of the services you receive and provide.


Regularly execute a validation of the emails bounced back to the instance

There is no standard practice for parsing bounced emails, and in the cloud, instance administrators have limited options for avoiding them.


Administrators can see the emails received and marked as Junk. If you recognise these as bounced emails, the safest option is to set the user's 'notification' field to Disabled, then contact the user to inform them of the change. Doing this regularly improves your email reliability because it reduces the number of emails bouncing back. If your instance has a larger portion of bounced emails, I have created some tools to help.


retrieveBouncedBackEmail is a script that, when executed, queries the matching emails and parses each body_text, which usually contains the bounced addresses. It ignores everything after the "Received:" text, as that usually contains the original senders.


setUserNotificationbyEmail is a script that enables or disables notifications for the users matching the given email addresses.
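Before the full scripts, the core parsing idea can be shown standalone: cut the body at "Received:", extract anything shaped like an email address with a regex, then sort and de-duplicate. The sample bounce text below is made up for illustration.

```javascript
// A made-up bounce message body, like what sys_email.body_text might contain
var bounceBody =
    'Delivery has failed to these recipients:\n' +
    'jane.doe@example.com\n' +
    'john.smith@example.com\n' +
    'jane.doe@example.com\n' +
    'Received: from mail.example.com\n' +
    'From: original.sender@example.com\n';

// 1. Ignore everything after "Received:" - it holds the original sender addresses
var cutAt = bounceBody.indexOf('Received:');
var relevant = cutAt > 0 ? bounceBody.substr(0, cutAt) : bounceBody;

// 2. Extract anything shaped like an email address
var found = relevant.match(/([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9._-]+)/gi) || [];

// 3. Sort, then drop adjacent duplicates
var unique = found.sort().filter(function(item, pos, ary) {
    return !pos || item != ary[pos - 1];
});
// unique is ['jane.doe@example.com', 'john.smith@example.com'];
// original.sender@example.com was correctly ignored
```

The scripts below wrap these same steps with a GlideRecord query over sys_email and some reporting.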


Here is the script for retrieveBouncedBackEmail:


// Retrieve the list of email addresses found in bounced emails
// Note: the normal practice here isn't to attempt to parse the bounce message at all.
// Returns an array with [0] an array of messages and [1] an array of bounced addresses
function retrieveBouncedBackEmail(queryToMatch, maxToFindPerEmail, limitquery) {

    // Max addresses retrieved from each bounced email parsed - default 10
    maxToFindPerEmail = maxToFindPerEmail || 10;
    // Do not parse more than limitquery emails - default 100
    limitquery = limitquery ? limitquery : 100;

    // Set the query that matches the bounced emails - default: ignored emails received this week
    queryToMatch = queryToMatch || "type=received-ignored^sys_created_onONThis week@javascript:gs.beginningOfThisWeek()@javascript:gs.endOfThisWeek()";

    // Set the response messages and perform the query
    var vmessages = ["Query limit set " + limitquery, "Searching on sys_email", "Query: " + queryToMatch, "Threshold: " + maxToFindPerEmail];
    var b = new GlideRecord("sys_email");
    b.addEncodedQuery(queryToMatch);
    b.setLimit(limitquery);
    b.query();
    vmessages.push(" sys_email records returned: " + b.getRowCount());

    // Loop over the matched emails; assume the body contains the list of bounced addresses
    var lemails = [];
    while (b.next()) {
        var a = b.body_text + '';
        // Ignore the text after "Received:" in body_text, as it contains the addresses from
        // the original message that you do not want to parse; keep only the bounce-related ones
        var g = a.indexOf("Received:");
        if (g > 0)
            a = a.substr(0, g);

        a = extractEmails(a);
        a = a ? uniq(a) : [];

        // Report bounced emails from which no address could be extracted,
        // so you can inspect them and validate why nothing was retrieved
        if (a.length == 0)
            vmessages.push("Email we could not extract an email address from: " + b.sys_id + " - count: " + a.length);

        // If the email contains too many addresses, add a message
        // so you can inspect it and validate why there were so many
        if (a.length > maxToFindPerEmail)
            vmessages.push("Email ignored by threshold: " + b.sys_id + " - count: " + a.length);
        if (a.length > 0 && a.length <= maxToFindPerEmail)
            lemails = uniq(lemails.concat(a));
    }

    // Returns an array with two arrays:
    // [0] array of messages, [1] array of email addresses
    return [vmessages, lemails];

    // Generate an array of the email addresses found in the text provided
    function extractEmails(text) {
        return text.match(/([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9._-]+)/gi);
    }

    // Sort and remove duplicate strings in the array
    function uniq(a) {
        return a.sort().filter(function(item, pos, ary) {
            return !pos || item != ary[pos - 1];
        });
    }
}

Here is the script for setUserNotificationbyEmail:


// Set the User 'notification' field to Enabled/Disabled for the matching email addresses
// vemailarray is the array of email addresses to set
// setEnable is either "enabled" or "disabled"; disabled is the default
// limitquery is the max number of users to set - default 100
function setUserNotificationbyEmail(vemailarray, setEnable, limitquery) {

    setEnable = /^(enable|enabled|true|1|yes|on)$/i.test(setEnable);
    // Only query the users that need to be enabled or disabled, ignoring the ones
    // already set (notification: 1 = Disabled, 2 = Enabled)
    var q = setEnable ? "notification=1^emailIN" + vemailarray.join(",") : "notification=2^emailIN" + vemailarray.join(",");
    limitquery = limitquery ? limitquery : 100;

    var b = new GlideRecord("sys_user");
    b.addEncodedQuery(q);
    b.setLimit(limitquery);
    b.query();

    // Create the messages to report back
    var c = ['Query Limit set ' + limitquery, 'Records found: ' + b.getRowCount(), 'Query executed: ' + q];
    // Set each matching user
    while (b.next()) {
        c.push((setEnable ? "Enabled " : "Disabled ") + "notification: " + b.sys_id + " - user: " + b.user_name);
        b.notification = setEnable ? 2 : 1;
        b.update();
    }
    return c;
}



As an example, here is a bounced email found on an instance:





// set the qualification to match your bounced emails.  
var result = retrieveBouncedBackEmail("sys_idSTARTSWITHc3a63a97dbc14bc0d975f1c41d9619b7", 45, 1000);
gs.print("\n Results:\n" + result[0].join("\n") + "\n\nFinal list of emails " + result[1].length + "\n\n\n" + result[1].join("\n"));



*** Script:
Query limit set 1000
Searching on sys_email
Query: sys_idSTARTSWITHc3a63a97dbc14bc0d975f1c41d9619b7
Threshold: 45
sys_email records returned: 1

Final list of emails 3


To disable the users, you can execute the following:


// create an array of emails for the users you want to disable.
var todisablelist = ["","",""]

// Set the User Notification to Disabled for the listed emails
gs.print( '\n\nSetting user disabled:\n\n' + setUserNotificationbyEmail(todisablelist,'Disabled').join('\n'));



*** Script:


Setting user disabled:


Query Limit set 100
Records found: 1
Query executed: notification=2^,,
Disabled notification: 8d256345dbe983002fd876231f96196e - user: andrea.sisco


Or you can use them together as follows:


// set the qualification to match your bounced emails.  
var result = retrieveBouncedBackEmail("sys_idSTARTSWITHc3a63a97dbc14bc0d975f1c41d9619b7", 45, 1000);
gs.print("\n Results:\n" + result[0].join("\n") + "\n\nFinal list of emails " + result[1].length + "\n\n\n" + result[1].join("\n"));

// result[1] holds the array of emails for the users you want to disable
// *After you validate* the list, disable the User Notification for the emails in result[1]
gs.print( '\n\nSetting user disabled:\n\n' + setUserNotificationbyEmail(result[1],'Disabled').join('\n'));



*** Script:
Query limit set 1000
Searching on sys_email
Query: sys_idSTARTSWITHc3a63a97dbc14bc0d975f1c41d9619b7
Threshold: 45
sys_email records returned: 1

Final list of emails 3

Setting user disabled:

Query Limit set 100
Records found: 1
Query executed: notification=2^,,
Disabled notification: 8d256345dbe983002fd876231f96196e - user: andrea.sisco

Please note the user notification value does not affect the "active" or "locked out" status of the account. It only affects the notifications.

I hope these actions empower administrators to set a user account's notifications to Disabled, which can dramatically reduce the number of emails being bounced back.



Archive junk emails

ServiceNow can archive and eventually destroy email messages that you no longer need, such as junk emails, or trim an Email table that has grown excessively large. This is especially important if you depend on email, as very large tables tend to degrade performance over time and delay upgrades.


For performance, we are interested in the email retention rule "Emails - Ignored and over 90 days old". This rule archives email message records that were created more than 90 days before the current date and are of type received-ignored or sent-ignored.




Once the emails are archived, they remain in the archive for a year before being destroyed.



Now it is time to put this all in place: allow your users to unsubscribe; for those whose notifications bounce back, set the sys_user 'notification' field to Disabled; and finally, for the emails ending up in your instance's junk, archive and destroy them. If you keep doing this, your email delivery will be reliable and delays will be a thing of the past.


Here are some useful resources for more information:

Automation is the name of the game for streamlining your organization’s processes, and adopting the ServiceNow platform is the first step to transforming your business.


So we have to ask - are you still condemning one of your employees to sit and perform tests manually, over and over, each time you’ve modified forms or upgraded your instance? Or are you the poor soul stuck with that task?


By setting up automated testing that you can reuse and modify, you take the error-prone human factor out of the loop. The Automated Test Framework (ATF) allows you to create and run automated tests on your ServiceNow instance to confirm that the instance still works as designed after being upgraded or modified. In this installment of our NOWSupport best practices series list, the following best practices will help you leverage the power of ATF.


But first, if you want to understand the basics of ATF, check out this video on our NowSupport YouTube channel:



And now, within the guiding principle of Always safeguard your production instance comes our first best practice:


Always run ATF, or any other testing framework, within a dev or test instance but never within prod


This best practice goes hand-in-hand with why you shouldn’t develop on your production instance. You may be tempted to run a quick test on your production instance to verify a minor change, but never succumb to that urge. Why? Here are a couple scenarios:

  • A test can change data that may be designed to trigger actionable events, like sending out emails. You don’t want to have to explain to your manager why your test sent emails to your entire customer base, do you?
  • A test can impersonate users with extensive security access. Even a minor oversight could allow test user Joe Admin to run amok.


Start a test with an Impersonate step to ensure the test user has the required roles


We recommend that you always configure an Impersonate step as the first step in your test. To do this, select Impersonate as the first test step, and specify the user to impersonate. In doing so, you can ensure that the test user has the required roles and behaves like that specific user.


Why bother to employ this step if you run the test as the test designer user? Because your test could break if someone changes the test designer user roles, or disables the test designer user.  Or, you may find that the test can’t access a record or form properly based on the correct user roles. Setting up an impersonate step helps you avoid all these pitfalls.


Be aware of the browser throttling issue and how it can affect your tests


What causes ATF tests to fail? There are several possible root causes for this, including the tester failing to open the Client Test Runner, logging out of the session, or clearing the browser history on another tab, which affects all tabs. But inadequate central processing unit (CPU) power due to browser “throttling” is often at fault. Our previous best practices post, How to avoid ATF Testing Failures, gives you the lowdown on this issue.


When testing a form, set fields critical to your business process to ensure they all work


When creating a test for a form, you may be tempted to take shortcuts and only set the form’s required fields, or fields that you’ve edited. We recommend setting form fields critical to your business process, so that every time you run the test you’re verifying these fields.

A field must be present on the form in order to set it with Set Field Values.


Use the Test Logs and Test Transactions to troubleshoot test errors


And finally, here’s a shout out to the Test Logs and Test Transactions that provide a treasure trove of information for troubleshooting. Check them first before contacting customer support – the reason why a test is failing is probably recorded in one of these lists.


These are easy to find. After running your test, click Go to Result and the Test Results page displays.



  • Click the Test Logs tab to see a record of all the detailed information that the test can track, including browser console logs and other errors:



  • Click the Test Transactions tab to see a record of all system transactions recorded during the test:



Ready to get started with ATF? In this video, we show you how to set up your first ATF test:



For more information:


Getting started with the Automated Test Framework (product documentation)

Build and run your first automated test (product documentation)

Jakarta Juices Up Automated Test Framework (ATF) (blog post)




Behind the scenes here at ServiceNow, the Knowledge Management and Multimedia teams work closely with subject matter experts to disseminate critical information to our customers. We’ve found that certain topics come up frequently, in the form of best practices that can help you keep your ServiceNow instances running smoothly. This series targets those topics so that you and your organization can benefit from our collective expertise. If you have a best practices topic you’d like us to cover in this series, please let us know in the comments below.


To access all of the blog posts in this series, see our NOWSupport best practices series list.

One of the things I came across when prepping for my Jakarta blog post was that we've released a new field type called Name-Value Pairs. Intrigued by the name I looked into it and found this description from the field types article on the docs site:


Field that maps text values. Each mapping is one-to-one, however a single Name-Value Pairs field can contain multiple mappings. Each mapping must use a unique name, and the name cannot be empty.


From here I fired up a dev instance and created a scoped app (that's just how I roll), created a new table called Data, and added a name-value pair field called Data values. This is what the default record looks like:


Screen Shot 2017-09-19 at 9.48.18 AM.png

Adding values is very straightforward.


Screen Shot 2017-09-19 at 10.06.39 AM.png


It's nice to be able to store data like this, but I think the real value comes in when you start working with it from a script. For example, here is a script I can run against the record created above.


//grab the GlideRecord object for the record in the screenshot
var gr = new GlideRecord('x_85636_name_value_data');
if (gr.get('a49e63a56f110300d8c252b10b3ee41e')) {

    //iterate over the current properties in the data values field and print the values
    for (var name in gr.data_values) {
        gs.info(name + " = " + gr.data_values[name]);
    }

    //add another property to the data values field
    gs.info("Adding a height property");
    gr.data_values.height = '50';
    gr.update();

    //iterate again to print the values, now including the new property
    for (var name in gr.data_values) {
        gs.info(name + " = " + gr.data_values[name]);
    }
}



Here are the printed values:

x_85636_name_value: color = Blue

x_85636_name_value: length = 27

x_85636_name_value: width = 10

x_85636_name_value: Adding a height property

x_85636_name_value: color = Blue

x_85636_name_value: length = 27

x_85636_name_value: width = 10

x_85636_name_value: height = 50


And this is what the record looks like after I've added the height property to the field from the script:

Screen Shot 2017-09-19 at 10.10.58 AM.png


The docs article points out that this field type would be useful for holding header information for a web service request where the name of each mapping is the header such as Content-Type and the value is the header value, such as Application/json. I would expect that there are a lot of other good use cases out there for this type of field, where you need to store some data that is less structured and you don't want to create a field for every possible option.
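That header use case boils down to the same for...in iteration used in the script above. Here is a quick plain-JavaScript sketch, where the headers object simply stands in for the value of a Name-Value Pairs field:

```javascript
// The headers object stands in for the value of a Name-Value Pairs field
var headers = {
    'Content-Type': 'application/json',
    'Accept': 'application/json'
};

// Same iteration pattern as the scoped script above:
// each name maps to exactly one value, so building header lines is trivial
var lines = [];
for (var name in headers) {
    lines.push(name + ': ' + headers[name]);
}
console.log(lines.join('\n'));
// Content-Type: application/json
// Accept: application/json
```

Adding a new header later is just another assignment, with no schema change needed, which is the whole appeal of the field type.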


If you can think of a use case where this field type could help, please share it in the comments below.

ServiceNow is a fantastic platform that is easy to learn - but hard to master. Despite having a thriving, friendly online community to ask for help, it can be very daunting as a beginner when you are enthusiastically bombarded with multiple solutions. You finally get a solution that seems to make sense, and seems to work - when an expert with even more points and badges tells you that what you are doing "is not best practice". You're then provided with 100 lines of code that need to be added in multiple different places in the system, and that all need to link up together somehow. It's all very confusing and you don't understand any of it! And yes, I'm talking about GlideAjax.


Rarely a day goes by when someone isn't reaching out for help with Client Scripts. It is very easy to get a value from any table in ServiceNow from business rules and Script Includes, but Client Scripts are so much harder! The API is similar - yet different. It's hard to know what functions work on both client side and server side. You're suddenly told not to use GlideRecord Queries - or that you must have a callback method. The next developer will tell you that you must use GlideAjax and write lines and lines of code. But how - and why?


You finally get a grasp of the code, after 10 posts back and forth through 5 different people in 3 different time zones. You've got your Client Script, you've got your Script Include - and it finally works! That is until you realize your original requirement won't do exactly what your boss wants. You have no idea how and where to change the code and in the process of it your code doesn't work at all anymore! You wish you understood how you got there in the first place.


Well, you're in luck. Today I am going to go through my process of writing GlideAjax scripts, which I have learned from years of writing Client Scripts and GlideAjax Script Includes for customers and helping ServiceNow community members. I will show you the order I write them in and compartmentalize them - all testable along the way - and leave you with a library to reuse in future. I highly recommend installing Xplore in your instance before proceeding - however, I will also show you how OOB tools can be used.


  1. Proof of concept your Client Script (with getReference callback)
  2. Create a testable GlideAjax Script Include
  3. Refactor & Bring it all together
  4. Make your existing GlideAjax scripts reusable (Part 2)


Proof of concept your Client Script (with getReference callback)

Let's refactor an established OOB Client Script to be GlideAjax - (BP) Set Location to User. Add the Location field to the Incident form, under the Caller field, if it is not already present.

There is nothing wrong with using getReference as a quick way to setup a proof of concept for a requirement. See the client script code below.


function onChange(control, oldValue, newValue, isLoading) {
   if (isLoading)
      return;

   if (newValue == '') {
      g_form.setValue('location', '');
      return;
   }

   if (!g_form.getControl('location'))
      return;

   var caller = g_form.getReference('caller_id', setLocation);
}

function setLocation(caller) {
   if (caller)
      g_form.setValue('location', caller.location);
}


Create a testable GlideAjax Script Include
Navigate to "System Definition > Script Includes" and click "New". Populate the 'Name' field with something meaningful (like UserAjaxUtil) and click the 'Client callable' checkbox. This will automatically add the code to extend your script from AbstractAjaxProcessor, so you'll be able to call it from your Client Script. Save the record.


Separate Logic from Input
Create two functions in your Script Include as shown below. The golden rule of programming is that a complex problem is just many simple problems that you need to solve at the same time. So break everything down into its component problems! We have two problems to solve here:

  • Get data to and from the form
  • Do something with the data

As a first step, I like to create separate functions for your input from the client and your server side logic, so it becomes testable with different input parameters.

This is the pattern I use for all my GlideAjax scripts.


var UserAjaxUtil = Class.create();
UserAjaxUtil.prototype = Object.extendsObject(AbstractAjaxProcessor, {

     ajaxClientDataHandler: function() {
          //Get data from the form
          var gformData1 = this.getParameter('sysparm_parm1');
          //Setup data to return to the form
          var answer = {};
          //Do server side stuff
          answer['location'] = this.doServerSideStuff(gformData1);
          //Encode data to send back to the form
          return new JSON().encode(answer);
     },

     doServerSideStuff: function() {
          //Put your logic here
     },

     type: 'UserAjaxUtil'
});


Now, let's put our logic into the doServerSideStuff() function


doServerSideStuff: function(userUID) {
     var grUser = new GlideRecordSecure('sys_user');
     if (grUser.get(userUID)) {
          return grUser.getValue('location');
     }
},


Notice that I have used GlideRecordSecure. Whenever you expose an API to the client side, you want to ensure it enforces ACLs - otherwise you are introducing a security hole into your system.
Now, your function is testable using any given user.


Now you can call this doServerSideStuff() function using your favorite code testing tool.


var userAjaxUtil = new UserAjaxUtil();
// e.g. test against the currently logged-in user's sys_id
gs.print(userAjaxUtil.doServerSideStuff(gs.getUserID()));





This way, we can verify that our logic is sound before we call the script from the client side and just 'hope for the best'.


Now you can simulate values that the client would be sending the Script Include. You can also write all the appropriate Unit Tests if you are doing Automated Testing.

Let's write the ajax input function now too. The whole code will look like this:


var UserAjaxUtil = Class.create();
UserAjaxUtil.prototype = Object.extendsObject(AbstractAjaxProcessor, {
     ajaxClientDataHandler: function() {
          //Get data from the form
          var gformData1 = this.getParameter('sysparm_parm1');
          //Setup data to return to the form
          var answer = {};
          //Do server side stuff
          answer['location'] = this.doServerSideStuff(gformData1);
          //Encode data to send back to the form
          return new JSON().encode(answer);
     },
     doServerSideStuff: function(userUID) {
          var grUser = new GlideRecordSecure('sys_user');
          if (grUser.get(userUID)) {
               return grUser.getValue('location');
          }
     },
     type: 'UserAjaxUtil'
});


Refactor & Bring it all together

Next up, we will need to alter our original onChange Client Script.


function onChange(control, oldValue, newValue, isLoading) {
   if (isLoading)
      return;

   if (newValue == '') {
      g_form.setValue('location', '');
      return;
   }

   if (!g_form.getControl('location'))
      return;

   //var caller = g_form.getReference('caller_id', setLocation);

   var ga = new GlideAjax('UserAjaxUtil'); //Name of the Ajax Script Include
   ga.addParam('sysparm_name', 'ajaxClientDataHandler'); //Method to call
   ga.addParam('sysparm_parm1', newValue); //Parm1
   ga.getXML(userCallback); //Send the request, with userCallback as the callback
}

function userCallback(response) {
     var answer = response.responseXML.documentElement.getAttribute("answer");
     answer = answer.evalJSON();
     setLocation(answer);
}

function setLocation(caller) {
   if (caller)
      g_form.setValue('location', caller.location);
}


You can see that the old line of code has been commented out and replaced with the GlideAjax code. This is only for instructional purposes - it is generally a bad idea to comment out code with no explanation. If it's not needed, it should be removed. ServiceNow has version control via Update Sets, so you can always go back and look at a past state.
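Under the hood, the server serializes the answer object and the client deserializes it from the XML response's "answer" attribute. Here is a plain-JavaScript sketch of that round trip, where JSON.stringify/JSON.parse stand in for the platform's new JSON().encode() on the server and Prototype's evalJSON() on the client:

```javascript
// Server side: build the answer object and serialize it
var answer = {};
answer['location'] = 'San Diego';      // sample value, as doServerSideStuff() would return
var wire = JSON.stringify(answer);     // what ends up in the "answer" attribute

// Client side: deserialize and use it, as userCallback() does
var decoded = JSON.parse(wire);
console.log(decoded.location);         // San Diego
```

Because the payload is a plain object, adding a second value later (such as a job title) is just another property on answer, with no change to the transport code.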


Make your existing GlideAjax scripts reusable

You've now been given a procedure for writing GlideAjax that hopefully makes it a much more enjoyable process. You're happy that you've learned something new and that you've been able to implement a requirement using best practice. That is, until your boss gives you a new requirement: he wants the Caller's job title shown on the screen as well. Never fear - we have already written the foundations to make this a whole lot easier!


Part 2 Coming Soon! Feedback Welcome

Please mark response with a Like, Helpful or Correct :)

If you have been working with Order Guides, you know there is room for improvement in how they work in ServiceNow. I've been looking into this because of a requirement to set an SLA on the order guide itself - not on a specific item attached to it, but on the whole "request". The problem is that on the request you can't see which order guide it was generated from, or even whether it was generated from an order guide at all.


So I made this video showing you how to make this a reality. It doesn't require many steps, and hopefully it will be useful to a lot of people out there and inspire you to come up with your own areas to use this.







ServiceNow Witch Doctor and MVP
For all my blog posts:
