
If you thought GlideDateTime holds both a time and a time zone, and that you would need complex conversions to move values from one time zone to another, think again. GlideDateTime stores the time in Coordinated Universal Time (UTC). It takes a while to get used to this "simple" concept: you only need to consider the time zone when displaying the values.


When you retrieve data from a GlideRecord, date and time fields are populated with all the relevant information, and their values are in UTC. However, when you create a date and time value from scratch, or need to work with the Display value, you may need to validate which time zone applies. If that is the case, this blog is for you. I will show some examples using data retrieved from an incident, and another setting the time from scratch in a script.


Simple enough? The browser shows the GlideDateTime Display value, while calculations are done in UTC.


GlideDateTime holds the time in UTC


When reviewing the different ways to handle multiple time zones, storing the data as UTC is really simple and effective.

From a client point of view, you only need to worry about time zones when displaying the data. In all other calculations, the times are in UTC.


Here is one example of a client time and the user profile time:




In the UK, the time shown on the PC was 14:38, while the app showed 07:38 because the profile was set to the America/Los_Angeles time zone.

The two clocks show different local times, but in UTC they are the same instant: Closed (closed_at) is "2017-03-19 14:38:01 UTC".

The Display time is calculated based on the time zone


The time zone is taken from the user profile, the system time zone, or the client time zone. However, it can later be set manually by scripts.

Based on that time zone, an offset is applied and the "Display" value is calculated.


Here is an example:



var gr = new GlideRecord('incident');
gr.addNotNullQuery('closed_at');
gr.query();; // grab an incident that has been closed

gs.print('\ngr.closed_at: ' + gr.closed_at + " UTC\ngr.closed_at.getDisplayValue(): " + gr.closed_at.getDisplayValue() + " system time");



*** Script:

gr.closed_at: 2017-03-19 14:38:01 UTC

gr.closed_at.getDisplayValue(): 2017-03-19 07:38:01 system time
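The same instant-versus-display relationship can be reproduced outside ServiceNow in plain JavaScript. This is only a sketch of the concept (not a GlideDateTime API); the zone names are standard IANA identifiers:

```javascript
// One UTC instant, rendered for two different time zones.
const closedAt = new Date(Date.UTC(2017, 2, 19, 14, 38, 1)); // 2017-03-19 14:38:01 UTC

function display(date, timeZone) {
  // Format only the time-of-day portion in the given zone
  return new Intl.DateTimeFormat('en-GB', {
    timeZone: timeZone,
    hour12: false,
    hour: '2-digit',
    minute: '2-digit',
    second: '2-digit'
  }).format(date);
}

console.log(display(closedAt, 'Europe/London'));       // 14:38:01 (UK is on GMT on this date)
console.log(display(closedAt, 'America/Los_Angeles')); // 07:38:01 (US/Pacific)
```

The stored instant never changes; only the formatting does, which is exactly the GlideDateTime behavior described above.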


Modifying the time zone will NOT modify the stored UTC data; just the Display value.

Modifying the time zone does not modify the stored data. This is an area that can be confusing. A GlideDateTime carries a time zone, but changing it only modifies the "Display" value. This is key to how the data is displayed on clients.


Here is an example:



var message = [];
var gr = new GlideRecord('incident');
gr.addNotNullQuery('closed_at');
gr.query();;

var vclosed_at = new GlideDateTime(gr.closed_at);
message.push('gr.closed_at: ' + vclosed_at);

// Setting to IST timezone (setTZ takes a Java TimeZone object; global scope only)
vclosed_at.setTZ(Packages.java.util.TimeZone.getTimeZone('IST'));
message.push('\ngr.closed_at: ' + vclosed_at + " UTC\ngr.closed_at.getDisplayValue(): " + vclosed_at.getDisplayValue() + " IST");

// Setting to US/Pacific timezone
vclosed_at.setTZ(Packages.java.util.TimeZone.getTimeZone('US/Pacific'));
message.push('\ngr.closed_at: ' + vclosed_at + " UTC\ngr.closed_at.getDisplayValue(): " + vclosed_at.getDisplayValue() + " US/Pacific");

gs.print(message.join('\n'));





Display Time            Database time
2017-03-19 20:08:01     2017-03-19 14:38:01 UTC
2017-03-19 07:38:01     2017-03-19 14:38:01 UTC


*** Script:

gr.closed_at: 2017-03-19 14:38:01 UTC

gr.closed_at.getDisplayValue(): 2017-03-19 20:08:01 IST


gr.closed_at: 2017-03-19 14:38:01 UTC

gr.closed_at.getDisplayValue(): 2017-03-19 07:38:01 US/Pacific


How to initialize a GlideDateTime with Display values


This is probably the most asked question when dealing with dates and times: how do you create a date and time value when you know the display value? The answer is to set the GlideDateTime time zone before setting the display value. This performs a reverse offset translation from the "Display" value into the database value.


Here is an example:



var vgdt_ist = setDisplayTime("2017-03-19 20:08:01", "IST");
var vgdt_pdt = setDisplayTime("2017-03-19 07:38:01", "US/Pacific");
var message = [];

// Setting to IST
message.push('\ngr.vgdt_ist: ' + vgdt_ist + " UTC\nvgdt_ist.getDisplayValue(): " + vgdt_ist.getDisplayValue() + " IST");

// Setting to US/Pacific
message.push('\ngr.vgdt_pdt: ' + vgdt_pdt + " UTC\nvgdt_pdt.getDisplayValue(): " + vgdt_pdt.getDisplayValue() + " US/Pacific");

gs.print(message.join('\n'));

// Convert provided date time and timezone into UTC to validate
// Default format is "yyyy-MM-dd HH:mm:ss"
// return: GlideDateTime on vtimezone
function setDisplayTime(originaldt, vtimezone) {
    var a = new GlideDateTime();
    // Set the time zone first, so the display value is interpreted in it
    // (setTZ takes a Java TimeZone object; global scope only)
    a.setTZ(Packages.java.util.TimeZone.getTimeZone(vtimezone));
    a.setDisplayValue(originaldt, "yyyy-MM-dd HH:mm:ss");
    return a;
}





Display Time            Database time
2017-03-19 20:08:01     2017-03-19 14:38:01 UTC
2017-03-19 07:38:01     2017-03-19 14:38:01 UTC


*** Script:

gr.vgdt_ist: 2017-03-19 14:38:01 UTC

vgdt_ist.getDisplayValue(): 2017-03-19 20:08:01 IST


gr.vgdt_pdt: 2017-03-19 14:38:01 UTC

vgdt_pdt.getDisplayValue(): 2017-03-19 07:38:01 US/Pacific
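The reverse offset translation can be sanity-checked outside the instance in plain JavaScript. This is only a concept sketch for fixed-offset zones (IST is always UTC+05:30 and has no DST); it is not a replacement for GlideDateTime:

```javascript
// Interpret a wall-clock time in a fixed-offset zone and translate it back to UTC.
function toUtc(year, month, day, hour, min, sec, offsetMinutes) {
  // Pretend the wall-clock values are UTC, then subtract the zone offset
  return new Date(Date.UTC(year, month - 1, day, hour, min, sec) - offsetMinutes * 60000);
}

// "2017-03-19 20:08:01" displayed in IST (UTC+05:30) is 14:38:01 UTC
const utc = toUtc(2017, 3, 19, 20, 8, 1, 330);
console.log(utc.toISOString()); // 2017-03-19T14:38:01.000Z
```

GlideDateTime's setTZ plus setDisplayValue performs this same subtraction for you, with the added benefit of honoring DST rules for zones that have them.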


Finally, it is important to keep date and time fields simple by remembering that they only hold the data in UTC. Everything should be made as simple as it can be, but not simpler.


For more information on date and time fields, check out these resources:




Assumes basic knowledge of and/or familiarity with the Studio IDE in ServiceNow.



A short time ago I needed to change the repository URL for one of my projects in the Studio IDE, and much to my surprise found that I was not allowed to: the URL field was read-only! Off to the ServiceNow documentation site I went; however, no solution was published there. I promptly went to the Community to find out if anyone else had mentioned this. Sure enough, the question had been asked, and ctomasi was able to get an answer (How do you edit/remove a source control repository?). But the answer there bothered me: 1) it was a work-around, and 2) you lost all historical information with it. After my third project re-point using this method, I began looking for the best way to hack it (I was that unhappy). I investigated all the pieces I could find concerning where the URL might be located, and what I would have to write in the way of a Fix Script to do the actual hack.


Well, sometimes the complex way is NOT the solution! 


While messing around looking at tables and their interactions, I tripped across a method for doing this that looks intentional, i.e. out-of-the-box. It was almost too easy. Be warned, though: this method does not keep the Source Control Branch or Source Control Tags (a minor loss to me).


This article explains how to change the URL using that method.


So WHY would you want to change the URL in the first place? Here are some use cases off the top of my head:


  • Changing from a personal repository to a corporate repository.  My original reason for changing the URL.
  • Reorganization of the repository's directory structure requires changing the Studio URL. This is probably the most important.
  • Unbinding source control from the Studio project.
  • Changing the repository provider.  Let's say from GitLab to GitHub or the reverse.


And I am sure there are more.



BEST PRACTICE: First make a backup of your entire project to an Update Set


1. Navigate to System Applications -> Applications. The Applications list view will be displayed.


2. Click on the link to your Studio Application. This will display the Custom Application properties form.



3. Change the Current Application setting to be the application you are working with. This will activate the Related Links on the form.



4. Scroll down the form to view the related links and click on the Publish to Update Set… link. The Publish to Update Set dialog will be displayed.



5. Fill in the form:


Version: <<keep what was displayed>>

Description: Repoint to new repository.

Click on the Publish button.



6. Wait for the Progress bar to complete and click the Done button. The new update set will be displayed.


7. Under Related Links click the Export to XML button. This will save a copy of the entire project to your local disk.  You might want to rename this with your Application name and version just to be safe.



To Clear out the Repository URL For an Application


Now, on to the good stuff!


1. Navigate to sys_repo_branch.list. The Source Control Branches list view will be displayed.


2. Find the Application you wish to reset the repository URL for, and open that record. The Source Control Branch form will be displayed.



3. Click on the Repository Configuration link button. The Repository Configuration Record will be displayed.



4. Click on the Delete button in the upper right corner of the form. The Confirmation form will appear.



5. Click on the Delete button.



6. A “Record Not Found” message will be displayed.  Ignore this, and go back to the Studio to open your application.



7. This clears out the repository URL and information for the Application. You may now use the normal procedure for linking a repository inside Studio to point at your new URL.


And there you go!  An out-of-the-box method for clearing and/or changing the repository URL for ServiceNow Studio.  Maybe in a future release we will get a menu-item or editable URL field inside the Studio to do this, but until then this is the best way.

Steven Bell



For a list of all of my articles:  Community Code Snippets: Articles List to Date


Please Share, Like, Bookmark, Mark Helpful, or Comment this blog if you've found it helpful or insightful.


Also, if you are not already, I would like to encourage you to become a member of our blog!

In my previous post, I introduced you to Search Sources in Service Portal. Search Sources are new to Istanbul in the Service Portal application. We talked about the components that make a search source work and walked through the basics of configuring a search source for a portal. In this post we’re going to look at scripted search sources and learn how to use them to search an external application or website from inside your portal. This is a very powerful feature because it can allow you to create a single portal where users can look for answers from any resource used by your organization that you can reach via a REST API.


Scripted Search Sources to search external websites and apps

To enable Search Sources to crawl external sites, you’ll need to have read part 1 so that you’re up to speed on the basics. You’ll also need some external resource that you have access to via REST, basic knowledge of AngularJS, and an understanding of creating pages and widgets in Service Portal.


Now, we’re going to be doing some custom scripting here so it is important to note that this is not a solution that is officially supported by ServiceNow. My goal with this post is to provide an example to help you get started in creating your own search sources and to share some of the things I learned while configuring one of these in my own instance. This is not intended to be a solution to all your searching needs, just a springboard into your quest to find the solution to all of your searching needs.

I repeat, this customization is not officially supported by ServiceNow

There are a couple of questions we need to ask before we begin:

  1. Do we want to link to the external content in its native application/site? Or do we want to bring the content in and display it in ServiceNow?
  2. Where are we pulling the data from?


For our example, we’re going to search the knowledge base from another ServiceNow instance and display it inside our portal as if it were native content.


The first steps here are really the same as what we did in part 1. The big differences between what we did in part 1 and what we are doing in part 2 are the search page template, the data fetch script, and the page and widget that will display our external content.


Obtain the data that the search source will crawl

Let’s start with the data fetch script. Once you click the “Is scripted source” checkbox on the search source form, this field is displayed. This is where you’ll set up the search request for your external site. For this example, we’re going to use the ServiceNow Table API. This part will vary greatly depending on the application you’re querying.


We’ll start from this basic template and add in the necessary fields as we go:

(function(query) {
  var results = [];
  /* Calculate your results here. */
  return results;
})(query);


This defines the search function that is executed by the instance. In order for this to work, we’ll need the URL for our request endpoint and an sn_ws.RESTMessageV2 object. I'm using a recordless REST message, but this could be modified to use a pre-configured outbound REST message record if needed. Setting up the request should look like this:

// The remote instance's Table API base URL goes in the empty string below
var url = "" + encodeURI(query) + "&sysparm_fields=sys_id%2Cnumber%2Cshort_description%2Ccategory%2Ctext";
var ws = new sn_ws.RESTMessageV2();
ws.setEndpoint(url);
ws.setHttpMethod('get');
ws.setBasicAuth("search_user", "search");

var jsonOutput = ws.execute();

Here we’re using basic authentication for our call. We’re passing in the username and password for a user on that instance in order to gain access to the knowledge records. There are probably better and more secure ways to handle the authentication, but I’m not going to get into the weeds on that. I’m really trying to keep things simple here so that we can focus on the basics of how to set this up. If you want to try a different approach for authentication, you might start here: Create a basic auth profile.
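For context, setBasicAuth simply produces a standard HTTP Basic Authorization header. A plain JavaScript illustration of what ends up on the wire (using the demo credentials from above; this is not a ServiceNow API):

```javascript
// Basic auth is just "user:password" base64-encoded in a request header.
const user = 'search_user';
const pass = 'search';
const header = 'Basic ' + Buffer.from(user + ':' + pass).toString('base64');
console.log(header); // Basic c2VhcmNoX3VzZXI6c2VhcmNo
```

Since the credentials are only encoded, not encrypted, this is one more reason to use HTTPS and a dedicated low-privilege account for the integration.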


Once we have our request set up, we need to define how we’re going to handle the response data. I’m specifying the fields to be returned in my query like this:



Next, we need to decode the JSON object we'll get in response. Then we iterate through each result and set some fields:

  1. result.url – if you were linking to the external site rather than opening the record in ServiceNow, this would be where the link leads.
  2. – the target for the link. Here we’re using _blank to open in a new window.
  3. result.primary – this sets what our primary display field will be.


The code to do that will look like this:

if (jsonOutput) {
  var response = new JSON().decode(jsonOutput.getBody());
  results = response.result;
  results.forEach(function(result) {
    result.url = result.svn_url; = "_blank";
    result.primary = result.short_description;
  });
}


Here is the finished product for the data source:

data source1.jpg


Displaying the search results:

Next, we’ll set up the search page template. We really only need to tweak this a little bit in order to open the search results in a page that is designed to handle our external content. In fact, all we really need to do is modify the link. The default template link looks like this:


<a href="?id=form&sys_id={{item.sys_id}}&table={{item.table}}" class="h4 text-primary m-b-sm block">


We just need to set this up to open a different portal page, and pass in whatever parameters that page will need to get and display the article. We’re going to call this page ext_knowledge and pass a parameter “record” which should be the sys_id of our external article. We’ll also add a target from the variable we set in our results object earlier.


<a href="?id=ext_knowledge&record={{item.sys_id}}" target="{{}}" class="h4 text-primary m-b-sm block">


Again, you could choose the simpler path here and just make this a link to the external application or site this item came from, but I find the idea of displaying this content as if it were native to our instance interesting, and it might provide a better user experience, so we’ll go that route here. The final product for the search source record looks like this:

search source portal1.jpg


Displaying the searched content on the Service Portal

So far things have been pretty straightforward: you need an API to search some external site, you need to decide what data to bring over and set up the data fetch script, and you need to make a small modification to the search page template. Displaying the content in my portal is where I ran into a couple of gotchas that proved a little tricky. We’ll do this by creating a new widget called External Knowledge and adding it to the ext_knowledge page that we referred to earlier in our search page template.


The issues I ran into displaying this content:

  • First, you can’t simply pass the article text via the URL and display it; you have to actually get the record again.
  • Second, the content won’t be rendered as HTML by default. You have to use a little angular magic to allow that.


Get the record using a Server Script

We’ll deal with our first issue in the server script for the widget. First, define a function called getRecord(). It is mostly identical to the data fetch script from before, with a few changes.

  1. Because we’re not doing a search, the URL should change to this:
    var url = "" + rec + "?sysparm_fields=short_description%2Ctext%2Cnumber%2Ccategory";
  2. You’ll notice we are using the variable rec instead of query in our query string.
  3. We don’t need the url and target variables from before, so when we handle the response data the only variable we set is result.primary. Then we just define the record variable, get the value from the URL parameter, and pass that into our getRecord function. The server script ends up something like this:
(function() {
  data.record = $sp.getParameter('record');
  getRecord(data.record);

  function getRecord(rec) {
    // The remote instance's Table API base URL goes in the empty string below
    var url = "" + rec + "?sysparm_fields=short_description%2Ctext%2Cnumber%2Ccategory";
    var ws = new sn_ws.RESTMessageV2();
    ws.setEndpoint(url);
    ws.setHttpMethod('get');
    ws.setBasicAuth("search_user", "search");

    var jsonOutput = ws.execute();

    if (jsonOutput) {
      var response = new JSON().decode(jsonOutput.getBody());
      // A single-record GET returns one object rather than an array
      var result = response.result;
      result.primary = result.short_description;
      data.article = result;
    }
  }
})();


Basically all this does is make a REST call to get the record and store the result data in data.article. This makes it available to our controller via the $scope.


Use AngularJS to create an HTML Template

Our HTML template is very simple. We want to show the article title and render the HTML content from the text field, similar to how we display knowledge articles that are in our instance. This only takes a few lines:

<!-- your widget template -->
<div class="kb_article" ng-bind-html="::data.article.text" style="overflow-x:auto;"></div>


The most important thing to note here is that we’re using ng-bind-html instead of the usual curly braces to bind our article text. This is the angular magic I mentioned earlier. Once we use ng-bind-html, we can add a single line of code to the client controller to allow the HTML content to be rendered as HTML, and then we’re done.


Allowing HTML content via the Client Controller

As I said, aside from the boilerplate code declaring the controller function and the c variable, we have one line of code here:

c.data.article.text = $sce.trustAsHtml(c.data.article.text);

This tells angular that c.data.article.text is HTML content and it is okay to render it as such.


Now when we search in the portal, our results will look like this:

portal search sources1.jpg

When we open one of the search results, it looks like this:

portal search sources2.jpg

Again, USE AT YOUR OWN RISK! This is not a ServiceNow official customization.


Again, I have to reiterate that this is use-at-your-own-risk and is not supported by ServiceNow. Hopefully this is a good enough example to help you get started using search sources, but keep in mind that my goal here is to share what I learned by setting one of these up myself, and my hope is to help you avoid spending time solving the same problems I ran into, not to give you the holy grail of searching. So let's recap what we went over here:

  • We configured a scripted search source using the ServiceNow table api to crawl a knowledge base from another ServiceNow instance.
  • We decoded the JSON response to set some fields that the search widget expects to be set.
  • We created a widget to display the article content in the portal as if it were native content, and then adjusted the search page template to link to a page using that widget. (Again, keep in mind you could just link to the external site if you want to keep things simple.)
  • We looked at using ng-bind-html along with angular's trustAsHtml method to display HTML content.


This can be adapted to work for any REST endpoint you have access to, and could really enhance the search on your portal by allowing you to combine all of your resources into a single portal. For more information on search sources, please refer to our documentation: Configure search in Service Portal, and for more information on the ServiceNow Table API used in this demo, see this article: Table API.




Assumes basic knowledge of and/or familiarity with Client and Server-Side Scripting in ServiceNow.



In my previous article (Mini-Lab: Converting a UI Script Library From Global to Scoped - Part 1) I described the whys and wherefores of using the Scoped environment and the Studio versus the old Global environment.  In this article I will present how to convert my old global UI Script function library into a Scoped UI Script library.




The old UI Script library: Mini-Lab: Using Ajax and JSON to Send Objects to the Server


The purpose of this UI Script is to provide a generic AJAX interface between client-side (i.e. browser) code and server-side functions.



Lab 1.1: Using JSON and AJAX to Transmit a Complex Object to the Server


Based on the design from my Using Ajax and JSON article; we re-work it slightly to indicate the scoped components.



We will be working through the application from the bottom up, building from the Script Include function class up to the UI Action that calls the whole chain. This entire lab was written on my personal developer instance, which has been upgraded to Istanbul Patch 1.



The Scoped Application



The Studio, and the UI Script


So, let's get started.  First we will need to create a scoped application.


1. Navigate to System Applications -> Studio.  The Load Application form will be displayed.



2. Click on the Create Application Button.  The Create Application choice form will be displayed.

3. We will be using the Start from scratch option.  Click on the Create button.  The Create Application property form will be displayed.



4. Fill in the form with the following:


Name: Using Ajax and JSON to Send Objects to the Server

Scope: x_1234_using_ajax


Note:  The Scope field will auto-fill using your "company code".  This will be different than the 1234 in my example above.



5. Click on the Create button.  The Studio form will be displayed.


6. Click on the Create Application File button.  The Create Application File choice form will be displayed.



7. Enter Script Include in the search filter text box.  The Filter Results will show Script Include.


8. Click on the Create button.  The new Script Include form will be displayed.



9. Fill in the form with the following:


Name: JavascriptUtils

Accessible from: All application scopes

Active: checked

Description: Various useful utilities. Note: These were adapted from the JSUtil library

Protection Policy: -- None -- (automatically set for personal instances)



var JavascriptUtils = Class.create();

// This nil check works for Scoped applications
JavascriptUtils.nil = function(item) {
  var nilCheck = true;
  try {
    nilCheck = !item 
      || (item == null) 
      || (typeof item == 'undefined') 
      || ('' == '' + item) 
      || (item == 'undefined');
  } catch (err) {
    gs.error('---> [{1}-{2}] \n{0}', 
      [err, new GlideDateTime().getNumericValue(), 'SI:' + this.type + '.nil']);
  }
  return nilCheck;
};

JavascriptUtils.notNil = function(item) {
  return !this.nil(item);
};


10. Click the submit button to save your work.



Note: This is our "bottom-most" library. We will be using it in our other Script Includes. Since JSUtil does not work in the scoped environment (it contains a Packages.Java.String call), we needed an analog that would. This is a good, not best, but good workaround.
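Since the body of nil() is plain JavaScript, the logic can be exercised outside ServiceNow. A stand-alone sketch, without the Class.create() and gs logging scaffolding:

```javascript
// Stand-alone version of the nil() truthiness check
function nil(item) {
  return !item
    || (item == null)
    || (typeof item == 'undefined')
    || ('' == '' + item)
    || (item == 'undefined');
}

console.log(nil(''));          // true  (empty string)
console.log(nil(null));        // true
console.log(nil(undefined));   // true
console.log(nil('undefined')); // true  (the string form counts too)
console.log(nil('abc'));       // false
console.log(nil(0));           // true  (all falsy values are treated as nil)
```

Note that 0 and false also come back as nil, since the first `!item` clause catches every falsy value; keep that in mind if you ever feed it numeric or boolean data.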


11. Now repeat the steps above and create another Script Include.


12. Fill out the form with the following:


Name: ArrayHandlingUtils

Accessible from: All application scopes

Active: checked

Description: Various array utilities. getEmailsFromSysIDs - retrieve all user records in the list of sys_ids

Protection Policy: -- None -- (automatically set for personal instances)



var JSUtil = JavascriptUtils;

var ArrayHandlingUtils = Class.create();

ArrayHandlingUtils.prototype = {
    initialize: function() {},

    // retrieve all user records in the list of sys_ids
    getEmailsFromSysIDs: function(listInfo) {
        try {
            var sysIDList = listInfo.list;
  '---> [{1}-{2}] \n{0}',
                [sysIDList, new GlideDateTime().getNumericValue(),
                 'SI:' + this.type + '.getEmailsFromSysIDs']);

            var userRecords = new GlideRecord('sys_user');
            userRecords.addQuery('sys_id', 'IN', sysIDList);
            userRecords.query();

            var userList = [];

            while ( {
                var user = {};
       = + '';
                var email = + '';
       = JSUtil.notNil(email) ? email : 'no email found';
                userList.push(user);
            }

            var message = '';
            message += 'Location: ' + listInfo.location + '\n';

            for (var i = 0; i < userList.length; i++) {
                message += 'EMail[' + i + ']: '
                    + userList[i].name + ' - '
                    + userList[i].email + '\n';
            }
  '---> [{1}-{2}] \n{0}',
                [message, new GlideDateTime().getNumericValue(),
                 'SI:' + this.type + '.getEmailsFromSysIDs']);
        } catch (err) {
            gs.error('---> [{1}-{2}] \n{0}',
                [err, new GlideDateTime().getNumericValue(),
                 'SI:' + this.type + '.getEmailsFromSysIDs']);
        }
    },

    type: 'ArrayHandlingUtils'
};


Note: We do not have to use a qualified call to the JavascriptUtils function to include it, since it is already in our namespace. No need to do a gs.include() either.
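The message-building loop inside getEmailsFromSysIDs can be previewed with stubbed data in plain JavaScript (the user records below are invented stand-ins for what the GlideRecord query would return):

```javascript
// Stubbed stand-ins for the sys_user records the Script Include would query
const userList = [
  { name: 'Abel Tuter', email: '' },
  { name: 'Beth Anglin', email: 'no email found' } // fallback when email is nil
];

let message = 'Location: UIA:Transfer Watch List\n';
for (let i = 0; i < userList.length; i++) {
  message += 'EMail[' + i + ']: '
    + userList[i].name + ' - '
    + userList[i].email + '\n';
}
console.log(message);
```

This is exactly the string that ends up as a single System Log entry, which is why the watch list names and emails appear together in one message.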


13. Click on the Submit button to save your work.


14. Ok, repeat one last time to create our Ajax Script Include that will be called from the client.


15. Fill out the form with the following:


Name: AjaxUtilsServer

Accessible from: All application scopes

Active: checked

Client callable: checked

Description: Method for decoding the passed JSON object and writing it to the system log

Protection Policy: -- None -- (automatically set for personal instances)




var AjaxUtilsServer = Class.create();
AjaxUtilsServer.prototype = Object.extendsObject(global.AbstractAjaxProcessor, {
    pushListToSysLog: function() {
        var listInfo = global.JSON.parse(this.getParameter('sysparm_listInfo'));
        new ArrayHandlingUtils().getEmailsFromSysIDs(listInfo);
    },

    type: 'AjaxUtilsServer'
});

NOTE: Write down the API Name. You will be using this name to call into this Script Include from your UI Script.


16. Click the Submit button to save your work.



17. Click on Create Application File, and create a new UI Script.


18. Fill out the form with the following:


Script Name: AjaxUtilsClient

Description: Transmits a list of sys_ids to the server to be pushed to the System Log. 

Use Scoped Format: unchecked.



var utils = Class.create();

utils.sendListToSysLog = function(list, location) {
    try {
        var infoOnList = {};
        infoOnList.location = location;
        infoOnList.list = list.split(',');

        var systemLogger = new GlideAjax('x_1234_using_ajax.AjaxUtilsServer');
        systemLogger.addParam('sysparm_name', 'pushListToSysLog');
        systemLogger.addParam('sysparm_listInfo', JSON.stringify(infoOnList));

        systemLogger.getXML(function(result) {});  // no callback action
    } catch (err) {
        alert('---> ERROR: ' + err);
    }
};

NOTE: Replace the x_1234 in the GlideAjax call with your own ServiceNow company code.


NOTE: When you save, the system will throw an error telling you that the script is missing the Immediately Invoked code. Ignore this error and continue with the save. The error is bogus; it appears the system does not want you to use this type of script, which is nonsense.


19. Lastly, let's create a UI Action to drive the entire chain.  Click on Create Application File, and create a new UI Action.

NOTE: We will build this as part of our scoped application, but it does not necessarily have to be part of it.  It could be built in the Global namespace.


20. Fill out the form with the following:


Name: Transfer Watchlist

Table: Incident

Active: checked

Show Insert: checked

Show Update: checked

Client: checked

Form Button: checked

OnClick: sendWatchList()

Form Action: Checked (done automatically).



function sendWatchList() {
    try {
        // this is the only way to load a UI Script Library into a scoped Client Script!
        // loads the utils library
        ScriptLoader.getScripts('x_1234_using_ajax.AjaxUtilsClient.jsdbx', followup);
    } catch (err) {
        alert('---> ERROR: ' + err);
    }

    // After the UI Script Library has been loaded then continue
    function followup(result) {
        try {
            utils.sendListToSysLog(g_form.getValue('watch_list'), 'UIA:Transfer Watch List');
        } catch (err) {
            alert('---> ERROR: ' + err);
        }
    }
}

NOTE: Replace the x_1234 in the ScriptLoader call with your own ServiceNow company code. Also, notice how much this script differs from how it would be done in the Global environment. In addition, see that there is now a loader step that occurs prior to being able to use the library. This is done via an asynchronous (async) callback that loads the script from the server (the on-demand bit).


21.  Click the Submit button to save your work.  We are now done with our Scoped application, and it is time to do some testing!




Unit Test


Our unit test will be what we had in the previous article.


  1. First, deactivate the original UI Action if you did the previous article. We will only be using the new (same-named) UI Action we created in the Studio.
  2. Navigate to Incidents -> Open.  The list view of open Incidents will be displayed.
  3. Open one of the Incidents.  You should have a new button labeled: Transfer Watchlist.

  4. Make sure there are four or five people on the watchlist.  Add some if you have to.

  5. Click on the Transfer Watchlist button.  Observe that the spinner (activity indicator) in the upper right of the form activates for a second or two then goes away.  That is your only indicator that something happened.  I guess we could always go back and put in an alert that things have finished, but this is supposed to be a lab that emulates something you might do for real! 

  6. Navigate to System Log -> All.  Filter on Message starts with --->.  Order by Message descending.  Notice how I placed milliseconds into the messages?  This allows me to sort by true date descending, which is REALLY important when you are trying to figure out exactly what order things occurred in your code!
  7. Expected Result:  A list of names and emails matching the watchlist should be present as a log entry.



And there you have it!  Conversion of a Global UI Script and all of its call chain into a Scoped application.


Steven Bell



For a list of all of my articles:  Community Code Snippets: Articles List to Date


Please Share, Like, Bookmark, Mark Helpful, or Comment this blog if you've found it helpful or insightful.


Also, if you are not already, I would like to encourage you to become a member of our blog!




Assumes basic knowledge of and/or familiarity with Client and Server-Side Scripting in ServiceNow.



I had already done this particular subject as an Ask-the-Expert session, but thought I would write it up for easier pondering and implementation. 


In Part 1 I will describe the thought processes necessary to understand the "reasons-why" this particular topic is important.  In Part 2 I will present the actual labs for converting the UI Script described in my article Mini-Lab: Using Ajax and JSON to Send Objects to the Server from the Global namespace to its own Scoped namespace.


Suggested pre-reading:


Pragmatic Patterns: Ajax - Architecting Your Code

Mini-Lab: Using Ajax and JSON to Send Objects to the Server

Mini-Lab: Writing Entries Into the System Log Using Ajax and JSON


A couple of Ajax best-practice mentions here:


1) Keep Synchronous calls from the client to the server at an absolute minimum.  This is to limit the impact to the user experience.

2) Synchronous calls are okay if you are doing some type of data retrieval with an onSubmit Client Script.  That is pretty much the only case, as far as I'm concerned.  Even then, with something like this I would consider converting it into a Before Business Rule, or perhaps pre-loading the data with a Display Business Rule; in both cases the work stays on the server.



Common Practice


So let me describe a common practice with Client Scripts.  I have already gone into some detail on this with my pattern article on Ajax, but I will repeat a bit of it here.


Basically, most developers write their Ajax Client Scripts as a simple, very specific, one-to-one call from the browser to the server to handle, and perhaps retrieve, data.  This isn't bad, but it isn't very extensible.



You have no normal ability to share a client script between different non-inherited tables, and thus between forms.  So even if you did have some reusable code you would have to create at least two different Client Scripts, and maintain both, even though the code was the same.  This is a bad practice.  You want to consolidate these into a single reusable client-side library that is loaded and available from the form.


The same problem also presents itself from the server-side.  Here an Ajax Script Include is pretty specific in what it is for, and really does not present itself well as a function library.



A Methodology Shift


So, the solution to the problem of a client-side function library has always been the UI Script mechanism.  Here, ServiceNow has provided us a way to create reusable code libraries that are loaded with a form.  This allows the developer to concentrate often reused functions into a single maintainable location (a serious best-practice).  This keeps the maintenance to a single location, and reduces development time-to-completion with reusable code.  The down-side: UI Script libraries are always loaded with the form.  So you get them whether you use them or not.  These are usually quite small in the amount of client-side memory they use, and the load-time involved; so it is generally not an issue.


On the server-side we can push reusable code down a layer by creating a Script Include Function Library or a Script Include Class (yep, there are two flavors).  Either approach is fine; the idea being to centralize reusable code, and reduce maintenance and time-to-completion.
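Outside ServiceNow, where the platform's Class.create() helper isn't available, the two flavors reduce to a plain function library versus a constructor-based class. A minimal sketch with illustrative names (DateUtilsLib, DateWindow):

```javascript
// Flavor 1: a function library - a plain object holding related helpers.
var DateUtilsLib = {
    // Add a whole number of days to a date without mutating the original.
    addDays: function (date, days) {
        var d = new Date(date.getTime());
        d.setUTCDate(d.getUTCDate() + days);
        return d;
    }
};

// Flavor 2: a class - state plus behavior behind a constructor.
// (In ServiceNow this would be built with Class.create(); a plain
// constructor keeps the sketch runnable anywhere.)
function DateWindow(start, lengthDays) {
    this.start = start;
    this.lengthDays = lengthDays;
}
DateWindow.prototype.end = function () {
    return DateUtilsLib.addDays(this.start, this.lengthDays);
};

var win = new DateWindow(new Date('2017-03-19T00:00:00Z'), 7);
console.log(win.end().toISOString()); // 2017-03-26T00:00:00.000Z
```

Either way, the reusable logic lives in one place; the class flavor earns its keep when the helpers share state.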


In all cases an attempt should be made to put the calculation and database heavy-lifting on the shoulders of the server, and keep the client-side script to a minimum (a.k.a. thin).




The Scoped Environment


With the advent of Scoped Scripting in the Fuji release we now have a way of isolating our code from the rest of the ServiceNow environment.  This brings along several benefits:


1) The developer can now create code that is easy to install, and remove, with minimal effect on the ServiceNow platform.

2) The application can be protected (i.e. black-boxed) in such a way as to keep it from inter-mixing with the Global code-base (an important namespace concept).

3) The intellectual property can, in theory, be protected.  In a future article I will show that this protection is limited, or frankly absent, in some release instances.

4) A way to organize an application project, and easily access the scoped applications.

5) Deployment to a local ServiceNow Application Store that allows for much easier inter-company instance deployment than the old update set model.

6) More functionality in the Studio (such as an improved code search!)

7) The ability to connect to an external cloud-based repository such as GitLab, GitHub, or BitBucket (to name a few).

8) Good overall core documentation, and API info in both the product documentation, and the Developer community.


This is obviously not an exhaustive list.


The biggies for me were #'s 1, 2, 4, 5, and 7.  Shoot, actually I like the whole list. :-)


So those are the pros.  What are the cons?


1) Only a subset of the Global-side JavaScript and ServiceNow out-of-the-box (OOB) libraries are available.  There are some interesting, and sometimes frustrating omissions!

2) NONE of the Packages.<<function>> libraries are available (this is a huge deal, and for some of you may represent the reason for not implementing scoped applications - it was a close call for me).

3) Few examples of how to do certain things in the scoped environment (thus the reason for this and future articles).

4) Inability to re-point a repository URL from the Studio.  Nor is there any documentation on how to do this outside of the Studio.

5) Intellectual Property protection is only available with internal distribution via the Studio Publish function, or with the ServiceNow Store.  This protection switch is not available in any other way.  If you push your application to an update set and upload it to a different (i.e. customer) instance, your code will be visible to anyone who wishes to look at it.  For more information on this issue see my Ask-the-Expert session: Ask the Expert: Scoped Libraries with Steve Bell.



The Scoped Library


Because of the isolation and organization benefits that the Studio and the Scoped environment bring, it is of great use to build function libraries with these tools.


An interesting, and to me great, feature of moving to a Scoped UI Script is the on-demand nature of the library: it is only loaded when called.  The only down-side: there was no documentation on how to do this.  The Global "way" of implementing UI Scripts absolutely fails in the scoped environment!


A search on the community turned up the clues I needed on how to do the conversion. coryseering answered a question back in 2015 that provided exactly what was needed to make this work (Re: Unable to create UI Action that calls UI Script in Fuji Scoped Application).  You might also want to check out his other great articles: Scoped Applications and Client Scripts: A Primer, and Client-side GlideRecord replacement for Scoped Applications (sort of).


So, the ScriptLoader function is the key.  It is the (undocumented) mechanism for loading a Scoped UI Script library, and it does so on demand!


Then the only change on the server-side is to convert any gs.log messages, which do not work in a Scoped app, into their scoped analogs (gs.info, gs.warn, and so on).  The scoped class/library calling turns out to be pretty straight-forward on the server-side.




In my next article (Mini-Lab: Converting a UI Script Library From Global to Scoped - Part 2) I will be describing how to take a Global UI Script library and turn it into a Scoped UI Script library that can be used by both Global and Scoped Client-side scripts.


Steven Bell


ServiceNow implements the attractive concept of tables as GlideRecord. The object comprises functions, elements, and methods to work with all available fields. Because tables contain fields, and fields have types, we tend to assume that table [dot] field will inherit that field type (e.g. string). If you think that, you backed the wrong horse! GlideRecord fields are not string, number, or boolean values. A GlideRecord field is represented by a GlideElement, so each field is itself an object. When used, Java will guess and cast them to the correct type. To "cast" means to take an object of one particular type and transform it into another; like converting dollars ($) to pounds (£), you can lose something in the conversion.


There is no need to cry a river about it; just be aware of this behavior. Even professional developers will fail to spot these problems. Just as you look out for motorbikes when you are driving, when you are using a GlideRecord, make sure you look out for operations on these GlideElement objects. As it reads on the THINK! campaign, expect the unexpected.



What to look out for when using a GlideRecord



THINK! To look out

THINK! Advice when you are scripting


GlideRecord fields containing strings, numbers, or booleans, especially when passed to other functions as parameters

When passing parameters to functions, force the cast to the required variable input type. For example, to cast to string use .toString(). To cast to a decimal use value + 0.


Undefined values

When a function does not exist on the GlideElement object, the result is undefined.


The typeof a GlideRecord field is "object" (e.g. typeof gr.short_description is "object")

Operations that depend on the type of the object, like ===, could fail on GlideRecord fields.


String operations need to be applied to strings, not to the GlideElement object

Ensure you perform a .toString() when a string operation is required (e.g. gr.short_description.toString().length)


Casting issues when using GlideRecord

Using an example, I will try to validate these three potential casting problems:

  • In a GlideElement for a string field: when trying to use the 'String' function 'length' on it, it returns undefined.
  • In a GlideElement for numbers: when trying to compare the value using ===, it returns false.
  • In a GlideElement for a boolean: when trying to compare the value using ===, it returns false.
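These three failures can be reproduced outside ServiceNow with a stand-in object that, like a GlideElement, wraps a primitive value. FakeElement below is purely illustrative; the real GlideElement API is far richer:

```javascript
// Stand-in for a GlideElement: wraps a value and only exposes it through
// toString()/valueOf(). Illustrative only.
function FakeElement(value) {
    this._value = value;
}
FakeElement.prototype.toString = function () { return String(this._value); };
FakeElement.prototype.valueOf = function () { return this._value; };

var vstring = new FakeElement('Hello World'); // 11 characters
var vinteger = new FakeElement(77777);
var vboolean = new FakeElement(false);

console.log(typeof vstring);              // "object" - not "string"

// 1) String field: .length lives on strings, not on the wrapper object.
console.log(vstring.length);              // undefined
console.log(vstring.toString().length);   // 11

// 2) Integer field: strict equality never coerces, so object === number fails.
console.log(vinteger === 77777);          // false
console.log((vinteger + 0) === 77777);    // true - the + 0 forces a numeric cast

// 3) Boolean field: same problem, object === boolean fails.
console.log(vboolean === false);          // false
// A regex test against the string form recovers a real boolean.
console.log(/^(true|1|yes|on)$/i.test(vboolean) === false); // true
```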



Example of GlideRecord fields returning false/undefined results

To demonstrate the cast problems, I have created an example. When scripting, you can explicitly cast your field. Here is a script include I have used to explicitly cast the fields:

Script include: ParseGlideElement

gliderecord example.jpg

Example Script:

var ParseGlideElement = Class.create();
ParseGlideElement.prototype = {
    initialize: function() {},

    // Parse. Input: a GlideElement object. Output: the field value cast to
    // boolean, integer, decimal, GlideDateTime, or string, based on the
    // field's internal type.
    parse: function(a) {
        if (a.nil()) return null;
        var b = a.getED().getInternalType();
        return "boolean" == b ? this.parseBool(a) :
            "integer" == b ? this.parseInt(a) :
            "glide_date_time" == b ? new GlideDateTime(a) :
            "string" == b ? a.toString() :
            "decimal" == b ? this.parseFloat(a) :
            a.toString();
    },

    parseBool: function(a) {
        return "boolean" == typeof a ? a : /^(true|1|yes|on)$/i.test(a);
    },

    parseInt: function(a) {
        return a + 0;
    },

    parseFloat: function(a) {
        return a + 0;
    },

    type: "ParseGlideElement"
};


Here is the record I will be using to test:

test record.jpg

When executing the following background script:

var gr = new GlideRecord("u_test_record");
gr.get('6f46dfb913e576005e915f7f3244b020'); // sys_id of the test created

var vstring = gr.u_string1;
var vinteger = gr.u_integer;
var vboolean = gr.u_truefalse;

gs.print("Test without explicitly casting fields: \n " + testGlideRecord(vstring, vinteger, vboolean).join('\n'));

var gpe = new ParseGlideElement();
vstring = gpe.parse(gr.u_string1);    // cast to string based on the ED internal type; same as gr.u_string1.toString()
vinteger = gpe.parse(gr.u_integer);   // cast to integer based on the ED internal type; same as gr.u_integer + 0
vboolean = gpe.parse(gr.u_truefalse); // cast to boolean based on the ED internal type

gs.print("Test explicitly casting fields: \n " + testGlideRecord(vstring, vinteger, vboolean).join('\n'));

function testGlideRecord(vstring, vinteger, vboolean) {
    var message = ['\nGlide record'];

    // Example 1 - Expected cast to String
    message.push("\n****** Example 1 - Expected cast to String ");
    message.push("gr.u_string1: " + vstring + " - typeof: " + typeof vstring);
    message.push("gr.u_string1.length: " + vstring.length + " - expected: 11");

    // Example 2 - Expected cast to Integer
    message.push("\n****** Example 2 - Expected cast to Integer ");
    message.push("gr.u_integer: " + vinteger + " - typeof: " + typeof vinteger);
    message.push(vinteger + " === 77777 :" + (vinteger === 77777) + ' - expected: true');

    // Example 3 - Expected cast to Boolean
    message.push("\n****** Example 3 - Expected cast to boolean ");
    message.push("gr.u_truefalse: " + vboolean + " - typeof: " + typeof vboolean);
    message.push(vboolean + " === false :" + (vboolean === false) + ' - expected: true');

    return message;
}

... this is the result:




Scripting example simplified


gr.u_integer === 7777                 // false - comparing a GlideElement object with a number

(gr.u_integer + 0) === 7777           // compares the actual numeric value, after + 0 forces the cast

gr.u_truefalse === false              // false - comparing a GlideElement object with a boolean

parseBool(gr.u_truefalse) === false   // compares the actual boolean value, after parseBool() forces the cast




As you can see, there are a few cases where you need to explicitly "cast" your field types to avoid mixing apples and pears.


How to use GlideRecord when the unexpected happens

In a nutshell, here are some recommendations for using GlideRecord when encountering a casting issue:


GlideElement with
Element Descriptor (ED)

Operations to look out

Areas of problem

THINK! Advice


String

Used in concatenations or passed as parameters to other functions

Operations like startsWith(), endsWith(), or length used directly could return undefined

Use .toString()


Integer or Float

Used in math operations against an integer or decimal

On some operations, it could be cast to a string incorrectly or validated against the incorrect type

Consider using "value + 0" or Number(value) to force the cast to a number


Boolean

Used in conditions

In complex conditions, it could be evaluated to false unexpectedly

Consider transforming the field into a boolean value
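One subtlety behind the numeric advice: in plain JavaScript, "value + 0" only produces a number when the object converts to a numeric primitive; if it converts to a string first, the + 0 concatenates instead. (The article's server-side tests show that a GlideElement does convert numerically, but Number(value) is the more defensive habit.) A sketch with an illustrative toString-only wrapper:

```javascript
// An object that, like many wrappers, only implements toString().
var field = {
    toString: function () { return '77777'; }
};

console.log(field + 0);               // "777770" - string concatenation, not addition
console.log(Number(field));           // 77777   - explicit numeric cast
console.log(Number(field) === 77777); // true
```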



I have tested using our Istanbul release and Chrome as the browser.


Want to learn more about GlideRecords? Here are some great resources to check out:


Thanks to reid (Reid Murray) for the pointers

Short answer: Don't name your fields (in scoped apps) 'event'. It will come back to haunt you in notifications.



I was making a demo app. I extended task to a scoped table called x_meetup_event. From there I created an m2m table with sys_user (x_meetup_m2m_invite). This table basically has two fields: user (a reference to sys_user) and event (a reference to x_meetup_event).


As records are added to the table, my plan was to trigger a scriptless notification with details about the event by dot-walking through the event field, for example:


Event name: ${event.short_description}

Start time: ${event.work_start}


It took me a while to figure out why I wasn't getting any text next to the labels. After quite a bit of testing in the global scope and with other table names, it comes down to this: 'event' is a reserved word in notifications. Sure, now that I know, it all sounds so simple. The documentation mentions that event.parm1 and event.parm2 are available in mail scripts, but the Notifications page says nothing about ${event}. This is one case where I appreciate the u_ prefix on tables and fields in the global scope.


You may run into similar issues with fields named 'email', 'template', 'email_action', and 'current'. In general, stay away from naming fields the same as objects. If I were still in charge of the Technical Best Practices docs, I'd add this in there. Right martin.wood and enojardi?

In this episode, Brandon May and Jesse Adams discuss some best practices and tips and tricks for using Service Portal, including new features in Istanbul!









This episode covers:

  • How Service Portal differs from CMS
  • Search groups and search sources
  • Widgets options schema
  • Troubleshooting tips


For more information on Service Portal, see:


Your feedback helps us better serve you! Did you find this podcast helpful? Leave us a comment to tell us why or why not.





To catch clips behind the scenes with our podcast guests, and find out what topics we'll be covering next before they are posted, follow @NOWsupport on Twitter. You can also search Twitter using the hashtag #SNtechbytes for all previous podcasts, video clips and pictures.

Other posts related to Themes

"Helsinki Gray" UI16 Theme

Jakarta Theme Properties Visual Guide

Kingston Theme Properties Visual Guide


In a previous post, "Helsinki Gray" UI16 Theme, I wrote about a new Theme I designed for Helsinki.  I tried to reproduce the look in Istanbul but ran into some issues because of changes to how the Theme properties are used on the controls.  So I spent some time trying to understand which properties set the color of which controls.  It took a while to do, so I decided to share what I found here.


Test Theme Record

While trying to figure everything out, I created a new test Theme called "Istanbul Test", similar to the one I created for Helsinki:


I realize it's ugly.  Really ugly, but I made it that way in order for the different controls to stand out from each other, so it did have to be kinda crazy looking.  I used the standard HTML color names to make it easier to spot where they are used.  Here's the CSS for it:


/* Istanbul Test
    Created by Jim Coyne -

    This is to help show what elements are affected by the CSS colors - PLEASE, PLEASE, PLEASE DO NOT ACTUALLY USE THIS THEME FOR REAL  :-)
    It uses HTML color names to hopefully make it a little easier to understand and find the color.

    The comments include a copy of the UI16 default value.
    I've added my own comments within () to hopefully clarify what is affected by the color.
*/

/* Header Colors */
$navpage-header-bg: DodgerBlue;  /* #303a46  Topbar background color (the banner) */
$navpage-header-color: Aqua;  /* #ffffff  Topbar text color and history hover color (Banner title text, Global Search + Application + Update Set icon outlines) */
$navpage-header-button-color: Coral;  /* (Logged-in user name text + Connect, Help and Settings icons) */
$navpage-header-divider-color: FireBrick;  /* #455464  Bottom border color on topbar (banner separator line) */
$navpage-button-color: BlueViolet;  /* #ffffff  Default button/icon colors (Nav bar [maximized] buttons, Favorite and History 1st line text, Connect msg record display value and Create a new conversation icon) */
$navpage-button-color-hover: Yellow;  /* #7ec24F  Topbar buttons hover color (Global Search, Connect, Help and Gear icons hover + clear search text icon hover + selected Nav bar icon [temporary effect only]) */

/* Search Colours */
$search-text-color: LightGreen;  /* #e7e9eb  Search text color (+ clear search text icon + Nav bar [minimized] buttons) */
$navpage-nav-border: Magenta;  /* #dddddd  Color of outline for search (Global, Nav and Connect search box outlines + outline of logged-in user control when selected) */

/* Left nav and navigation toolbar background color */
$nav-highlight-main: LightSkyBlue;  /* #3D4853  Navigator hover color (Last Module/Favorite/History/Connect/Help item selected [temporary effect only]) */
$subnav-background-color: SlateGray;  /* #455464  Background for expanded navigation items (NO IDEA) */
$navpage-nav-bg: BurlyWood;  /* #303a46  Background for navigator (left side only). (Nav bar, Connect bar and Module backgrounds + History separators) */
$navpage-nav-bg-sub: Pink;  /* #455464  Background for Favorites list, history list, and Connect list. (background for Apps/Favorites/History and Connect/Help bars and selected icon when editing a favorite) */
$navpage-nav-color-sub: Tomato;  /* #bec1c6  Text color in main navigation (NO IDEA) */
$navpage-nav-mod-text-hover: Purple;  /* #ffffff  Text color when hovering over items in main nav (NO IDEA) */
$nav-hr-color: YellowGreen;  /* #303a46  Divider color in Navigator (Separator modules [ones without a label]) */

/* Navigator tabs */
$nav-highlight-bar-active: Red;  /* #278efc  Active nav item underneath search (thin line under selected Apps, Favorites or History icons, selected Connect, Help or Gear icon + number of Connect msgs dot) */
$nav-highlight-bar-inactive: PaleGoldenRod;  /* #828890  Inactive nav items underneath search (thin line under Apps, Favorites or History icon when not selected) */
$navpage-nav-selected-bg: Olive;  /* #4b545F  Background for currently selected navigation item underneath search (background for selected Apps, Favorites or History icon) */
$navpage-nav-selected-color: OrangeRed;  /* #ffffff  Color of icon for currently active nav item (Apps, Favorites or History icon when selected) */
$navpage-nav-unselected-color: Orange;  /* #bec1c6  Color of icons for inactive nav items (Apps, Favorites or History icon when not selected and Module title text) */

/* Navigator Application text */
$connect-latest-message: White;  /* #cfd4d8  Color for latest connect messages in right bar (NO IDEA) */
$nav-timeago-header-color: Cyan;  /* #303a46  Timestamp header backgrounds in History tab (NO IDEA) */
$navpage-nav-app-text: Black;  /* #cfd4d8  Core content text color (Application title, History hover + 2nd line text + separator text, Connect "OPEN CONVERSATIONS" + "No results found" + message line text, Help bar title + hover text) */
$navpage-nav-app-text-hover: #ffffff;  /* #ffffff  Core content text color hover (NO IDEA) */



I used one of the OOB Theme records (I forget which one it was) as a starting point and added my own comments within () to hopefully clarify what is affected by the property.  Some of the comments from the OOB Theme are incorrect now in Istanbul.  I've added a comment of "NO IDEA" to some of the properties because I do not know what they control now, and suspect they no longer affect anything.


Here are the property names, along with their default values, used in the Theme record, and screenshots of the controls/areas they affect.  I used yellow as the value for each property so the controls would stand out in the screenshots (sorry for the flash of yellow about half-way down the article).


$navpage-header-bg - #303a46

  • Banner frame background



$navpage-header-color - #ffffff

  • Banner frame title text
  • Global Search, Application and Update Set icon outlines



$navpage-header-button-color (no default, not documented)

  • Logged-in user name text
  • Connect, Help and Settings icons



$navpage-header-divider-color - #455464

  • Banner frame separator line



$navpage-button-color - #ffffff

  • Buttons in the Navigator bar (when maximized)
  • Favorite items text
  • 1st line of History items
  • Connect message record display value text
  • Create a New Conversation, Open Connect standalone interface and Close Connect Sidebar icons




$navpage-button-color-hover - #7ec24F

  • Global Search, Connect, Help and Settings icons when cursor is over the control (only the Global Search icon is highlighted in the first screenshot below but the others will highlight when the cursor is over them)
  • Clear search text icon when cursor is over the control in Navigator and Connect sidebar
  • Navigator bar icon when clicked (some browsers [e.g. Chrome] only remove the highlight after cursor is clicked elsewhere)




$search-text-color - #e7e9eb

  • Global Search, Navigator and Connect search text
  • Clear search text icon in Navigator and Connect search boxes
  • Navigator bar icons when minimized



$navpage-nav-border - #dddddd

  • Global Search, Navigator and Connect search box outlines
  • Navigator and Connect search box filter icons
  • Outline of logged-in user control when selecting a drop-down menu item




$nav-highlight-main - #3D4853

  • Module/Favorite (not in Safari)/History/Help item when clicked (each browser has its own quirks with this one - Safari only shows while clicking the item, others will keep the highlight a second or so, and some keep the Help item highlighted until the cursor is clicked elsewhere)
  • Selected Connect item (remains highlighted until another is selected or another record's chat window is selected or the record's chat window is closed)
  • Vertical separator line between main frame and Navigator/Sidebars





$navpage-nav-bg - #303a46

  • Navigator, Connect and Help Sidebar header and footers
  • Unselected Navigator tabs background
  • Module background
  • History time separator background



$navpage-nav-bg-sub - #455464

  • Navigator, Connect and Help Sidebar backgrounds
  • Background for Apps/Favorites/History
  • Selected icon when editing a Favorite



$nav-hr-color - #303a46

  • Separator modules without a label



$nav-highlight-bar-active - #278efc

  • Highlight line under active Navigator tab (Apps, Favorites or History)
  • Selected Connect, Help or Settings icon (only the Connect icon is highlighted in the screenshot below but the others will highlight when clicked/selected)
  • Number of Connect messages dot
  • Outline of logged-in user control when selected



$nav-highlight-bar-inactive - #828890

  • Line under inactive Navigator tab



$navpage-nav-selected-bg - #4b545F

  • Background for currently selected Navigator tab (Apps, Favorites or History)



$navpage-nav-selected-color - #ffffff

  • Active Navigator tab icon (Apps, Favorites or History)



$navpage-nav-unselected-color - #bec1c6

  • Inactive Navigator tab icons (Apps, Favorites or History)
  • Module title text



$navpage-nav-app-text - #cfd4d8

  • Application title text and Application hover title text
  • Edit Application and Add to Favorites icons
  • History time separator text
  • History 2nd line text
  • History hover text
  • Connect message text
  • Connect "OPEN CONVERSATIONS" and other informational text
  • Help sidebar title and hover text




No Longer Used?

Here are some properties used in Helsinki that do not seem to be used in Istanbul anymore:

  • $subnav-background-color - #455464
  • $navpage-nav-color-sub - #bec1c6
  • $navpage-nav-mod-text-hover - #ffffff
  • $connect-latest-message - #cfd4d8
  • $nav-timeago-header-color - #303a46
  • $navpage-nav-app-text-hover - #ffffff


The default values listed are from Helsinki.



Unknown Properties

Here are a few controls or places in the UI that I do not know the name of the property that controls the color:

  • Search boxes placeholder text
  • Unselected Apps/Favs/History icon hover background
  • Last Application selected background
  • Application hover background
  • Module hover background
  • Icon hover backgrounds (Banner frame, Nav bar, Connect, etc...)



Setting a Default Theme for Users

The currently selected Theme for each user is saved in a User Preference called "glide.css.theme.ui16".  The value of the preference contains either "system" for the "System" Theme or the sys_id of one of the other Theme records.


You can set a default Theme for everyone by creating a new User Preference record with the "System" field checked and the "User" field left empty.  That will set the Theme for each user until they actually select a different one, which will then be saved in a User Preference record of their own.
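Put as a config sketch, that default-Theme record in the sys_user_preference table would look like this (the Value shown is a placeholder for your Theme record's sys_id):

```
Table:  sys_user_preference
Name:   glide.css.theme.ui16
User:   (empty)
System: true
Value:  <sys_id of the Theme record, or "system">
```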


Setting a default Theme does not, however, affect the look of the login page - it will use the properties of the "System" Theme - and that would look a little odd, the colors switching on the user as they log in.



Overriding the "System" Theme

You may have noticed that all the Themes that are listed in the System Settings popup window have a corresponding Theme record (System UI \ Themes) except for the "System" one:

That's because the "System" Theme uses hard-coded values and System Properties to override them.  To override a "System" Theme color, you must create or edit one of the System Properties and not an actual Theme record.  The name of the System Property would be "css." + name of the css property from above (e.g. "css.$nav-highlight-bar-active")
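For example, to override the active-tab highlight color of the "System" Theme, you would create a System Property like this (the name and default value come from the property list above; the value you set is your own choice):

```
Name:  css.$nav-highlight-bar-active
Type:  string
Value: #278efc   (the UI16 default - change to the color you want)
```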



What's Missing?

The most obvious omission with the Theme concept, I believe, is the ability to set a banner frame logo right in the Theme record because some themes may have a light background, requiring a dark logo, while others may have a dark background requiring a lighter logo.


I also believe some properties should not be shared.  For instance, the Global Search and Navigator search text colors should be controlled by different properties, because the banner background may be a light color requiring a dark text color for the Global Search text box, while the Navigator background may be darker, requiring a lighter color for its search text.  Same thing for their control outlines.



Please Create a System Theme record

It would be a lot more useful if the "System" Theme was actually a record instead of a set of System Properties.  That would make editing the default Theme a lot easier.




Here are a couple links to the Istanbul docs and other pages that may be useful:




I'll try to keep this post updated with anything new that I find.  Please let me know if I've missed anything.  Thanks in advance.



Updated Thursday, April 20, 2017

  • added "$navpage-header-button-color" property

*** Please Like and/or tag responses as being Correct.
And don't be shy about tagging reponses as Helpful if they were, even if it was not a response to one of your own questions ***

It was 140 days between the CreatorCon Challenge announcement on Sept 28th, 2016 and the entry deadline of February 15th, 2017.


Reaching out to startups in both the existing ServiceNow partner and developer ecosystem as well as outside to ISVs and developers on other platforms, with a call to action to transform their business on the ServiceNow platform, was a lot of fun and in the end, very rewarding for me personally and the broader ServiceNow team that made this possible - including senior execs CTO Allan Leinwand , SVP DevOps and Platform BU Pat Casey, VP of Finance and Corporate Development Dominic Phillips (who heads ServiceNow Ventures, the major sponsor of this competition), and Avanish Sahai (VP Worldwide ISVs and Technology Alliances).


But most importantly, the goal for the CreatorCon Challenge is that it be incredibly rewarding for the ISV startups that took on the Challenge and put forth tremendous innovation and effort in creating their apps and entry materials in those seemingly really fast/blurred 140 days. I suppose that's a testament to the fact that you really can build high-impact, platform-services-rich business apps really, really fast on the ServiceNow platform (OK, at Lightspeed). I'm sure we could put together a mathematical formula that could take the # of lines of JavaScript and AngularJS code, # of workflows, # of table rows, # of API calls, # of orchestration activities, etc. etc. and divide by 140 to reach the magic number of 186,000. But I have better things to do.


Suffice it to say, the Shortlist Startups with awesome leadership teams all built their revenue-generating, high-business-impact-delivering, large-and-growing-total-available-market-addressing, competitively differentiating and sustainable competitive advantaging (think moat), deep-and-wide platform-service-using, technical-viability-evidencing apps in just 140 days. That's pretty awesome.


To throw some meta-numbers on the board, we received 226 entries that completed at least the Personal/Company profile. 37 of those also completed the App profile and submitted App Materials (investor and/or on-screen app demo video).  Out of those 37, 17 met all of the by-design strict/high-bar eligibility criteria for both entity and entry eligibility.  And of those 17, 9 were selected to the Shortlist.


So without further adieu or to do, here are "The Nine", in alphabetical order.



Skillmix is intelligent spatial management software. Its main objective is to boost productivity and innovation through smart desk assignment. Skillmix organizes and manages employee seating plans according to expertise, experience, or any other criteria that meet company objectives.



Clear Skye's Identity Lifecycle Management (ILM) for ServiceNow helps organizations navigate towards a state of compliance, by managing any type of account, across any type of environment. ILM is available on the ServiceNow Store.



CoFigure Applicant Tracking System (ATS) is the first in the suite of HR applications designed to create a single system of action for a consumerized enterprise that crosses HR, IT and all enterprise services. ATS is available on the ServiceNow Store.



Vendition is a vertical solution for Consulting companies to manage customers, contacts, opportunities, engagements, and support of their clients.



CourseLoop provides flexible workflow management and a 'single source of truth' for University curriculum information management - eliminating reliance on paper forms, spreadsheets and email.



Help-Full provides employees caring for aging parents with peace of mind when they can't be there and respite when they need a break. We connect older adults with their neighbors (including your employees) to find fun, friendship, and fulfillment.



PlatCore is a full-featured learning management system (LMS) built entirely on the ServiceNow platform. PlatCore is designed to easily deliver training through various methods including video, knowledge base articles, and assessments.



BariAPPtric is a  healthcare team portal and a patient app. The portal provides a data rich, unified dashboard benefiting from a patient-centric design including traceability systems to track and document activities along the patient pathway.



Cloud EQMS enables companies to manage their quality processes in a Cloud/Saas model. Examples of quality processes are: Deviations, CAPAs, Change Controls, Complaints and Audits.



So What's Next?


During the rest of March, each company will participate in a WebEx with ServiceNow executives, provide a live pitch that highlights the company, business proposition, the app, and why ServiceNow should consider investing, and answer any questions that the execs may have. From there, the execs will determine the three finalists that will be on-stage in the CreatorCon Challenge finale, immediately following the CreatorCon keynote on Thursday, May 11th in a general session in front of a few thousand Knowledge17 attendees. The finale will be a "shark tank" style event and the 1st, 2nd, and 3rd place winners will be determined by the "Legends of Tech" judging panel. The winners will receive their share of $500K in cash investments from ServiceNow Ventures, plus over $300K worth of marketing and business development prizes.


All nine of the above companies will get free trips to Knowledge17 (flight, hotel, transport to/from the airport/hotel) for two people from each company, plus full Knowledge17 passes and demo space in a sweet part of the centrally-located (smack dab in-between the ServiceNow Pavilion and the ExpoNow floor) CreatorCon Developer Hub (sweet = premium location, high foot-traffic).


The three winning apps will all be published to the ServiceNow Store on the Istanbul or Jakarta release by the end of August for delivery to all ServiceNow customers globally. We also expect the six shortlisted apps that aren't selected as finalists/winners to be published to the Store so that they too can sell and deliver their innovative apps securely and efficiently to ServiceNow customers globally.


A hearty congratulations to the shortlist, and a hearty thank you to all the startups and developers who participated in the Challenge. There were some really awesome apps that didn't make the shortlist, but we have identified them and called them out to the ServiceNow partner sales team, who will be in contact with them about go-to-market planning and sales engagement, as we fully expect that ServiceNow customers will want to see those apps on the Store as well.


Stay tuned to hear who the three finalists are - they will be announced on April 12th!

Martin Barclay
Director, Product Marketing
App Store and ISVs
Santa Clara, CA

The Import Set Deleter job cleans records in the import tables. This is a default job that cleans the data of sys_import_set_row and its child tables, as these tables tend to grow big very quickly. Sometimes there is a possibility, although rare, that orphan records are left in the sys_import_set_row table and are not cleaned by future runs of the job. Fixes delivered in Fuji addressed out-of-memory exceptions arising from MultipleDelete, which the Import Set Deleter job utilizes. However, those fixes did not completely prevent memory concerns for this particular job.


You could be affected by the issue if you are on certain early patches of Geneva, up to Geneva Patch 9. It is also seen in Helsinki, up to Helsinki Patch 5. The severity may depend on the number of child tables the import set row table has on an instance.


Here are a few scenarios that may help determine if you may be affected by the issue:

  1. Long-running Import Set Deleter job
  2. Java in-use memory goes up
  3. Localhost log statements


All 3 scenarios need to be experienced in order for this issue to apply.



Long-running Import Set Deleter job

In checking the stats page, you see that the Import Set Deleter was the longest-running job on that particular node.


Snippet from the stats page:



Current job: Import Set Deleter

Job started: Tue Sep 06 18:00:02 PDT 2016

Job duration: 1:11:24.846

Total jobs: 7612

Mean duration: 0:00:01.276


The same can be checked via the sys_trigger table to understand which node this job has been running on.



JAVA HEAP in use memory is high / free memory low

Check the memory usage pattern for the affected node. If you notice that this is the only long-running transaction and the in-use memory is trending up, make a note of that and save the information.


Snippet from the stats page:


Servlet Memory

Max memory: 2022.0

Allocated: 2022.0

In use: 1926.0

Free percentage: 5.0


Localhost log statements

Download the node log files and check for statements such as the ones below:


Snippet from local host log files


2016-09-29 00:00:01 (609) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning import set ISET6612326

2016-09-29 00:00:01 (633) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning table ldap_import query:sys_import_set=cf96b7b0db46aa00e93ff2e9af9619e6

2016-09-29 00:00:01 (637) worker.5 worker.5 *** Script: Import Set Cleaner:: .. 0 rows removed from ldap_import

2016-09-29 00:00:01 (639) worker.5 worker.5 *** Script: Import Set Cleaner:: Removing data from import set table ldap_import where import set=ISET6612283

2016-09-29 00:00:01 (641) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning table ldap_import query:sys_import_set=55e8f7b4db8a2e4090b0ff1aaf96195c

2016-09-29 00:00:01 (645) worker.5 worker.5 *** Script: Import Set Cleaner:: .. 0 rows removed from ldap_import

... (the same Cleaning / Removing / rows-removed pattern repeats for each remaining import set) ...



** Script: Import Set Cleaner:: Deleting orphaned import set row records if any by querying for import set sys_id being empty for table: ldap_import


Workaround for a long-running Import Set Deleter job causing performance issues on the affected node:

  1. Navigate to Script Includes > ImportSetCleaner.

  2. Change the line:

Once you have made the change in ImportSetCleaner, you will see that the line is now commented out.


Releases this issue is currently fixed in:

  • Geneva Patch 10
  • Helsinki Patch 6
  • Istanbul


We recommend that you upgrade to one of the releases mentioned in the ServiceNow KB article "Import Set Deleter job causing instances to run low or out of memory, causing performance issues", where the fix is found. Upgrading to one of these releases will prevent the memory exhaustion and the performance problems caused by this particular issue.
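If you want to gauge whether orphan records are part of the problem on your instance, a background script along these lines can count the sys_import_set_row records whose import set reference is empty. This is only a sketch of the idea; run it on a sub-production instance first:

```javascript
// Count import set rows that no longer point at any import set
var ga = new GlideAggregate('sys_import_set_row');
ga.addNullQuery('sys_import_set'); // orphans: the import set reference is empty
ga.addAggregate('COUNT');
ga.query();
if ( {'Orphaned import set rows: ' + ga.getAggregate('COUNT'));
}
```

A large and growing count here suggests the cleaner is leaving orphans behind on your instance.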

This dilemma comes from a community member who was asked to show a "confirm message" to users who had filled in information on a catalog item in the Service Portal and tried to move away from the page before saving it. We are talking about pretty much the same functionality as we have on, for example, the incident form.


Before I continue, I would like to give a shout out to Nabil & Jace (jacebenson) on our Slack channel for helping me get the last pieces together.


I was going to record this as a video, but Google seems to be messing with Hangouts so that wasn't possible, so bear with me; there will be a few screenshots coming this way.


So, we are looking at the OOB page "sc_cat_item" in the Service Portal.


It looks something like this when you have selected an item:


If I press back now or reload, the page will just do that without caring that there is info in a form that hasn't been saved.


So we need to customize the widget that is showing the catalog item. The first thing is to clone that widget.

For those who don't know, you can "CTRL+right click" on the widget to get a menu like this, from which you can easily go to the widget editor.

Now, we can't edit OOB widgets, so we need to clone the widget first to be able to edit it.

Choose a new name and we are almost ready. Remember that when you clone, the cloned version doesn't load in the widget editor automatically.


Now we need to put our widget on the page before we start to do anything. This makes testing easier and avoids confusion later, when you think you have changed something but the page is still using the old widget.

This widget is kind of weird: when you go to the page through designer mode, you will not see it there, but it is there.

I would say this is kind of buggy, and there are different ways of removing it. You can, for example, go through the page editor, where it is visible, like here:


Anyway, I usually just mark the whole container, delete it, and then put in a new container with a 12 "column", and then my widget in there.

Then you can go back to your catalog page, reload it, and when you "CTRL+right click" you should see your new widget name there.


So, open up your widget in the editor so we can do the real magic.


Before we start: there might be some other way to do this as well, I'm no über coder, but at least I got it to work =)

We need to do the following:


1. Add a name to the form that is holding the fields. This is because Angular puts a variable called $dirty on a named form if it contains data that isn't saved. So if form-name.$dirty == true, it has unsaved data. On my world tour of Google I found some Angular examples for handling this, but couldn't get them to really work. In this example I name the form "c.myCatTest"; if I just named it "myCatTest" I couldn't get it to work, even though I should have been able to. So the only change in the HTML template is this, on line 21:


Then we need some client script as well. Now, you can put this pretty much wherever you want, but I put it here, between lines 62 & 74:


Now, this is two parts. The $scope.$on('$locationChangeStart'.... part should be enough if I read the docs correctly,

but it only triggers if the user presses the back button or clicks another link on the page, like breadcrumbs, the menu, etc.

I also tried changing my code on line 64 to if ($scope.myCatTest.$dirty) when I named my form "myCatTest", but couldn't get that to work; this works, at least =)



The second part, $window.onbeforeunload...., handles the case where the user tries to type in a URL manually or hits the reload button.
It doesn't care about the back button, so we need both.
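Since the actual code only appears in the screenshots, here is a rough sketch of what the two parts of the client script can look like, assuming the form was named "c.myCatTest" as above (the confirm texts are just examples):

```javascript
function($scope, $window) {
  var c = this;

  // Part 1: catches in-app navigation (back button, breadcrumbs, menu links)
  $scope.$on('$locationChangeStart', function(event) {
    if (c.myCatTest && c.myCatTest.$dirty &&
        !$window.confirm('You have unsaved changes. Do you really want to leave?')) {
      event.preventDefault();
    }
  });

  // Part 2: catches full page unloads (manually typed URL, reload button)
  $window.onbeforeunload = function() {
    if (c.myCatTest && c.myCatTest.$dirty) {
      return 'You have unsaved changes.';
    }
  };
}
```

Remember that $window needs to be injected into the client controller for this sketch to work.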


So with this code in place, you should be able to reload the catalog form on the portal, and you will be asked to confirm if you have unsaved data and try to leave.



Links for more info:

$dirty ->

$locationChangeStart ->$location

$window.onbeforeunload ->


Hope this will save some time for other people as well.






ServiceNow Witch Doctor and MVP
For all my blog posts:

I wrote about this on my own blog some months ago, and I think it is still valid after reading posts at the forum. So for some it is old news, and for some it's new news =)


The more I read about business rules, the more I notice that they are being used the wrong way by many people. Every rule has its exceptions, and I bet there is one here as well. But I can say that I haven't yet come up with a reason why you should use current.update in a business rule. I have written a post about this and business rules in general.



I guess I need to explain what I mean by my headline.


I like this picture from the Wiki




It's pretty simple if you look at it. Let's go through what kinds of business rules (BR) we have.



This type of BR is used to modify the queries that are sent to the database, and it runs before any data is collected, which means you can't do things like "if (current.<field> == value)" since the BR has no idea what value current holds. You pretty much use query BRs to restrict what data the user is allowed to see.



Table: Incident

Condition: !gs.hasRole("admin")

Script: current.addActiveQuery();


What happens here is that if a user queries the incident table and doesn't have the admin role, the BR adds the active filter (current.addActiveQuery()) to the query, and the user will only get back records that are active.

Here we can see that we shouldn't use current.update for anything.



So what can we use the display BR for? Well, if we know that we will need data in a client script, but the data isn't available on the form. Then we can use the display BR to run server-side script, put the info in g_scratchpad, and use it later in the client script. Just be careful: since the data is put in the scratchpad when the form loads, if the user then does nothing for, say, 10 minutes before a client script uses that data, it might be old and no longer correct. If you instead use an Ajax call, you will get "live" data for your client script.
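As a hedged sketch of the pattern (the field names are made up for illustration), a display BR plus an onLoad client script could look like this:

```javascript
// Display business rule (runs server-side while the form loads):
g_scratchpad.groupManager = current.assignment_group.manager.getDisplayValue();
```

...and the client script then reads the scratchpad:

```javascript
// onLoad client script:
function onLoad() {
    if (g_scratchpad.groupManager) {
        g_form.addInfoMessage('Group manager: ' + g_scratchpad.groupManager);
    }
}
```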


So current.update shouldn't be used here either.



Now, this BR runs before the data is saved to the database. But since the data hasn't been saved yet, we don't need current.update here either: the record is about to be written automatically. Here we can modify fields, like current.u_no_idea = 'bad imagination';


We can also use a before BR to validate the information and check whether it should be allowed to be saved, doing it here instead of in a client script. If the data isn't correct, you can abort the action with current.setAbortAction(true);
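A minimal sketch of such a validation (the condition and message are just examples, not from any OOB rule):

```javascript
// Before business rule: validate and abort the save if the data isn't correct
(function executeRule(current, previous) {
    if (current.priority == 1 && current.assignment_group.nil()) {
        gs.addErrorMessage('Priority 1 records need an assignment group.');
        current.setAbortAction(true); // nothing gets written to the database
    }
})(current, previous);
```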


I know I'm repeating myself, but there is no use for current.update here either.



Looking at the picture, you can see that this BR runs after the data has been saved to the database. Now is the perfect place to update other records besides the current one. Yes, you have access to the current object, but that is more so you can use its data in conditions and, for example, if-statements to decide which other records to update.


Which means we don't use current.update here either. What happens if we do? All the "before" BRs run again, since we are trying to save/update something in the database... And again, is there anything with the current object that we can't do in a before BR and would need to do in an after BR? Right now I'm saying no, until someone corrects me.
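So an after BR typically looks something like this sketch, where update() is called on other records, never on current (the table and field names are illustrative):

```javascript
// After business rule on sc_req_item: update related records, not current
(function executeRule(current, previous) {
    var task = new GlideRecord('sc_task');
    task.addQuery('request_item', current.sys_id);
    task.addActiveQuery();
    task.query();
    while ( {
        task.comments = 'Parent item updated: ' + current.number;
        task.update(); // fine: this is another record, not current.update()
    }
})(current, previous);
```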



Looking at all of the above, we are hitting user experience: a big BR of any of the kinds above will affect the user experience. Even an after BR doesn't let the user do anything else until it has finished. That leaves us with the last one.



Async BRs are something I myself could probably use more, instead of after BRs. I have a note on my desk to go through all my BRs, when I have time, to see if I can move some from after to async. Async (asynchronous) is similar to an after BR, but it lets the user go and do other stuff, putting the work on the "let's do it when we have time" list. If the condition on the async BR is true, the system understands that there is work to do and creates a scheduled job to handle it. It's scheduled to run immediately, but there is no guarantee when it finishes.

What's good to know is that the BR isn't run within the user's own active session. When the scheduled job is ready to go, it impersonates the user, and a new session is created for the job. This will probably not have any effect, but it's good to know. So if you have things that need to be updated after a record is saved, and not in an extremely time-critical way, async is the way to go.

But then again, the only difference from an after BR is that the user doesn't need to wait for it to finish.


And guess what... We don't do current.update here either



So, that's all about business rules, and I have at least not found any need for current.update in a BR.






ServiceNow Witch Doctor and MVP
For all my blog posts:

Do your reference fields get the wrong matching value? Then use setDisplayValue. A reference field displays as a string, but it is just one click away from another table, which also gives us our famous dot-walk feature. Dot-walking provides an ideal way to join table records. When you define a reference field, the system creates a relationship between the two tables. Adding a reference field to a form makes the other fields in the referenced table available to the form. On the incident table, both Caller and Assigned to are references to the sys_user table.


Here is the story of an example of what happens when you try to set a reference field value like any other ordinary field! Reference fields are special. I mean, they are REALLY special. A lion disguised as a cat.



Example of populating a reference field by setValue vs setDisplayValue

I have created two users on sys_user (in this order):

First user: user_name = 2, First name = Mike, Last Name = Yes   (Display value : "Mike Yes") - (e.g. sysid:de5e388d4f1932002c9e4b8d0210c7f3)


Second user: user_name = user_2, First name = "", Last Name = 2  (Display value : "2")




When executing the following background script :


---background script----

// Create a new Incident
var gr = new GlideRecord('incident');

// Set the caller ID by setValue but using the Display value
gr.caller_id = "2";
gr.short_description = "testing and sys_user.user_name matching display value";
var vsid = gr.insert();
// Now, retrieving the record inserted
var gr2 = new GlideRecord('incident');
gr2.get(vsid);

gs.print('inserted ' + gr2.number + ' OK. Caller: ' + + ' Caller sys_id: ' + gr2.caller_id.sys_id);


The result is that the incident is created with caller_id = "Mike Yes". It matches the incorrect user.

> [0:00:00.228] Script completed in scope global: script

> *** Script: inserted INC0010002 OK. Caller: Mike Yes Caller sys_id: de5e388d4f1932002c9e4b8d0210c7f3



On reference fields to sys_user, user_name is used to match the user before the "Display value" is considered. This is because I used setValue (assignment with = is setValue):

> gr.caller_id = "2";

Please note gr.<field> = "xxx" is the same as gr.setValue('field', "xxx").


The same behavior will be seen in email inbound actions, scripts, and data imports (when loading data into reference fields). If you are not using the sys_id on reference fields, the appropriate method is:

> gr.setDisplayValue('caller_id', "2");

Use setDisplayValue to assign reference fields or choice values when the data provided is the "Display" value.


Here is the correct script for this example

---background script----

// Create a new record
var gr = new GlideRecord('incident');

// IMPORTANT: As "2" is the Display value of the user "2", setDisplayValue needs to be used.
gr.setDisplayValue('caller_id', "2"); 
// as short_description is just string, setValue is enough.
gr.short_description = "testing and sys_user.user_name matching display value";
var vsid = gr.insert();

// Retrieving the information back to review:
var gr2 = new GlideRecord('incident');
gr2.get(vsid);

gs.print('inserted ' + gr2.number + ' OK. Caller: ' + + ' Caller sys_id: ' + gr2.caller_id.sys_id);


The result is that the incident is created with caller_id = "2". Yeah!

>[0:00:00.067] Script completed in scope global: script

>*** Script: inserted INC0010003 OK. Caller: 2 Caller sys_id: d1de7cc94f1932002c9e4b8d0210c7ce

It correctly matches the Caller "2".



When dealing with reference fields, please be sure to use setDisplayValue if you are not passing the sys_id, to avoid surprises! This applies whether you are creating a script, an inbound action, a data source (to import data), etc.


More information here:



In this episode, Chuck and Josh discuss some API methods with GlideSysAttachment and how to make Service Portal widgets talk to each other.





00:00 Introductions

03:12 API of the Week

11:04 Service Portal app introduction and issue

13:55 Construction begins - create spinner widget and place on pages

20:53 Add listener to spinner widget

28:17 Add $broadcast to new page widget

34:53 Convert .success() to .then()

38:20 Add $broadcast to output page widget

38:57 Sidebar discussion about localization

47:56 Testing

50:38 Experiment without $timeout

51:49 Refactor opportunity?

54:30 Wrap up & recap

59:17 End


I've seen a lot of questions on the community where people would like to, for example, set a specific number of attachments as mandatory before the end user can submit the order.


Of course, this can be done in more than one way, but here I'm showing a way where you, as a sysadmin or developer, can leave future configuration to the catalog admins. You don't need to do anything if they want to change the number of attachments on an item or remove the requirement; they can do it themselves.


I basically just do 2 things:


1. Add a new field to the sc_cat_item table to hold the number of attachments that should be required.

2. I copy the widget SC Catalog Item and modify it to handle this functionality when the user submits.
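The core of the submit check is simple: compare the number of attachments on the submission with the limit stored on the item. As a sketch in plain JavaScript (the function name and the idea of reading the limit from the new field are my own illustration, not something OOB):

```javascript
// requiredCount comes from the new field on sc_cat_item (a string in practice),
// attachmentCount from counting the attachments on the current submission.
function canSubmit(attachmentCount, requiredCount) {
  // An empty or missing limit means no attachments are required
  var required = parseInt(requiredCount, 10) || 0;
  return attachmentCount >= required;
}
```

In the cloned widget, a check like this runs when the user submits, and the submit is stopped with an error message when it returns false.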


So here is the video. I'm sorry about the quality; I thought it was going to record in HD, but it didn't, and I don't have time to record a new 30-minute video. But if there is anything you wonder about, just let me know.



Take care,






ServiceNow Witch Doctor and MVP
For all my blog posts:

I wrote this on my personal blog before ending that one and starting to write here. I noticed that people still wonder about this, so I thought I might as well post it here.

This started from a community question, which was how you could check if the valid-to date is 30 days from now and then send a notification to the KB managers.


Now, the functionality that was wanted: when the valid-to date is 30 days from now, the KB managers should receive an email notification about it.

This can be solved with a scheduled job that runs once a day and triggers an event for each KB article that matches the conditions; finally, a notification is triggered off that event.


1. Register the event

First we need to register the event that we are going to put in the queue. This is done by going to System Policy -> Events -> Registry.

Here we create a new event that looks like this:



That was pretty easy, so let's head to the list view to get the encoded query.


2. Encoded Query

The quickest way is just to type kb_knowledge.list in the navigator, and you get to the list view of the KB articles. Set up the conditions and, when you're done, right-click the filter and copy the query.



3. Scheduled Job


Now we need to create the scheduled job, which should run once every day to see if there are any knowledge articles getting close to their valid-to date. The job runs the encoded query, and for each record it finds that matches, it fires off an event so the notifications can be sent. It looks like this, and the code is below the picture.



var gr = new GlideRecord('kb_knowledge');

// Paste the encoded query you copied in step 2:
gr.addEncodedQuery('<encoded query from step 2>');
gr.query();

while ( {
    gs.eventQueue("knowledge.expiring", gr);
}



As you can see, in the gs.eventQueue call we only have gs.eventQueue("knowledge.expiring", gr). In many cases you also specify parameters 1 & 2, like this: gs.eventQueue("incident.commented", current, gs.getUserID(), gs.getUserName()). But since we don't need those, we just skip them.


4. Notification

Now we only have the notification left. It's pretty simple: set it to fire on the event, specify who will receive it, and finally the text for the notification. I'm just showing the section with the "when". The rest is purely case-specific, but if there are any questions about it, just post a comment and I'll get back to you.



Well, that's it. =)







ServiceNow Witch Doctor and MVP
For all my blog posts:

This post was inspired by a question posted here - Dependent Choice List in Requested Items


The poster wanted to add 2 extra choices to the State field based on which catalog item was ordered. If a particular item was ordered, he wanted to add "On Hold" and "Awaiting User Info" as valid choices for the State field. Of course, the often-used g_form.addOption and g_form.removeOption methods were suggested (guilty) as a means to solve the issue, but anything running client-side runs the risk of causing issues. One issue is that the choice list would not be updated in list views.


So after some playing around, the solution can actually be handled with dependent values. I spent a bit of time experimenting with it and came up with the following, which works:


1. Add a new field called "State Set" on the Task table (you might want to use it elsewhere) with a default value of "default"


2. Add a Dictionary Override on the Task.State field for the Requested Item table with "Override dependent" selected and "u_state_set" in the "Dependent" field:


3. Add 2 new sets of choices for the sc_req_item.state field, one where the Dependent value field is "default" for your default set of choices and the other "hold" that includes the default values and the 2 new ones:


Be careful when adding new "Values" for the State field - use a unique and higher number for any custom choices you add, as shown above (I like to start at 100), and do NOT re-use any out-of-box values as they mean something to the OOB code base.


4. Build a Business Rule to populate the "State Set" field based on whatever criteria you need (e.g. set to "hold" when the "Item" field is the particular catalog item, etc...)


5. The proper set of choices will now display in the form and list views:


6. Add an Access Control rule on the State Set field that only allows users with the "admin" role to edit it.  There is an issue with this, however - the State field will not be editable either in a list view because it is dependent on the State Set field which is not editable.  I consider this to be a bug or at least an oversight.  I think it is an acceptable use case to restrict write access to the parent field in the dependency but allow the child to be edited.  The reverse is not, however.
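The business rule in step 4 could be as simple as this sketch (a before rule on sc_req_item; the item name is illustrative, and u_state_set is the field added in step 1):

```javascript
// Before business rule on sc_req_item: pick which set of State choices applies
(function executeRule(current, previous) {
    // 'hold' exposes the two extra choices defined in step 3
    if (current.cat_item.getDisplayValue() == 'Some Particular Item') {
        current.u_state_set = 'hold';
    } else {
        current.u_state_set = 'default';
    }
})(current, previous);
```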


So this allows you to create a number of choice list "sets" based on the State Set field without having to worry about adding or removing options through client scripts.  It means some extra choices to maintain, but usually you will not have to edit them very often, if ever, once configured.


This was tested in Istanbul.

*** Please Like and/or tag responses as being Correct.
And don't be shy about tagging responses as Helpful if they were, even if it was not a response to one of your own questions ***
