
Today I will be covering how to use Google reCAPTCHA on a form in the Service Portal. There might be occasions where this is needed; for me, it was because we had a form that was open for use even when the user wasn't logged in.


So I started testing a lot, trying to learn how to use reCAPTCHA, since I had never used it before.

I quickly realized that this post would have become very long if I described everything in writing, so instead I made my very first video. Let's just say there are a couple of things I can still improve on in the video arena.


Before you watch the video, I should mention something I don't show in it: I have created a custom table called u_wishlist, which has two fields (u_name and u_wish).

I've tried to divide the video in the following parts:


1. Create the widget and the dependency on the Google API

2. Create the HTML Template

3. Create the Client Controller

4. Create the REST Message

5. Finally, the Server Script
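The verification step those parts build up to boils down to a server-side POST to Google's siteverify endpoint, sending your secret key and the token the widget returns, and checking the "success" flag in the JSON reply. As a rough stand-alone sketch (the helper names are mine, not from the video; in ServiceNow the actual call would go through the REST Message created in step 4):

```javascript
// Hypothetical sketch of the server-side check the REST Message performs.
// Google's siteverify endpoint expects a POST with "secret" and "response"
// parameters and replies with JSON containing a "success" boolean.

function buildVerifyBody(secretKey, captchaToken) {
  // application/x-www-form-urlencoded body for
  // https://www.google.com/recaptcha/api/siteverify
  return 'secret=' + encodeURIComponent(secretKey) +
         '&response=' + encodeURIComponent(captchaToken);
}

function isHumanResponse(siteverifyJson) {
  // Google replies e.g. {"success": true, "challenge_ts": "...", ...}
  var parsed = JSON.parse(siteverifyJson);
  return parsed.success === true;
}
```

Only when isHumanResponse() returns true would the server script go on to insert the u_wishlist record.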


So, here comes my first video.




Take care,





ServiceNow Witch Doctor and MVP
For all my blog posts:

Last time, we built a simple app called GQL Pad that provided a user interface for editing and running GQL statements; it also showed results in various formats that we could quickly inspect on the same web page. This time, let's walk through the steps for installing the app on your own instance from GitHub using the Helsinki Studio. If you're already familiar with importing files from GitHub, you may skip most of the steps below.





If you don't already have one, you'll need an account at GitHub. You can sign up for free as long as your repositories are public. (GitLab provides private repositories for free, and you can also use it with the Helsinki Studio, but I'll be using GitHub throughout this post.)


After logging into GitHub, point your browser to the snowaid/GQL_v0.1 repository ("Glide Query Language initial version from GQL blog"), where I saved a copy of GQL v0.1. Towards the top right corner, click on "Fork" as shown in the screenshot below:



This makes a copy of the repository in your own account, which should take only a few seconds. After it's done, you'll be back in your account and see a screen similar to the one below. Towards the top left, make sure you see {your username}/GQL_v0.1 with "forked from snowaid/GQL_v0.1" below it.



Then click on the green "Clone or download" button and copy the URL as shown below (we'll paste it shortly below):





Now, let's open ServiceNow and navigate to System Applications > Studio and click on "OPEN STUDIO" as shown below:



In the next screen, click on "Import from Source Control":



Then paste the URL copied above into the "URL" field, fill in "User name" and "Password", and click "Import". (NOTE: although the dialog box says "The account credentials you supply must have read access to the remote repository", you must provide credentials with read/write access or you'll get an error.)



After several seconds, you'll see a success message like below:



Click on the blue "Select Application" button and then click on "gql" as shown below:



You'll then be taken to the "Welcome to Studio" page with the "gql" application loaded in the Application Explorer to the left:



Now you can modify the application files to your liking in the Studio and stash or commit back to GitHub using the "Source Control" menu.




Once the GQL app is loaded onto your instance, you can run GQL Pad by navigating to GQL > GQL Pad as shown below:





The GQL app is open-source, licensed under the very permissive MIT license, and you're welcome to make contributions. While the app you just installed will be frozen in GitHub in the GQL_v0.1 repository (to ensure it stays in sync with what's been covered in this blog series), the working copy of the app is maintained in the separate GQL repository. If you'd like to contribute, please fork that repository.




This has been an attempt at demonstrating a proof of concept and we've successfully shown what we set out to accomplish: build a web service that takes SQL-like statements and returns result sets. Along the way, we created the GQL class as a Script Include, which can be called from any server-side scripts. We then built a processor to handle web requests and responses. To interactively demonstrate this, we built a UI Page app called GQL Pad. Everything was kept simple, with very little error handling. The app will work OK for positive cases, but may not perform gracefully when errors are encountered.


Here are some limitations and potential enhancements:


  1. GQL syntax checking: add to UI and GQL class.
  2. GQL syntax highlighting: add to GQL Pad.
  3. Error checking and handling: check for invalid syntax / column / table / encoded query.
  4. Security: may need to use GlideRecordSecure() instead of GlideRecord() to enforce ACL.
  5. Add ability to run SELECT *.
  6. Add calculated fields.
  7. Add aggregate functions: COUNT(), MAX(), MEAN(), etc.
  8. Add UNION: return result sets from multiple SELECT statements as one.
  9. Add JOIN.
  10. Add NEST: allow nesting of result sets from reference tables or related lists.
  11. Add SAMPLE: return random rows based on statistical sampling.
  12. Add XML output format using XMLHelper (can't use in scoped app); alternative might be to use libraries like x2js.
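As a rough illustration of enhancement 7, aggregates could be computed over a JSAN result set with plain array operations, since each record is an ordered array addressed by label index. This helper is hypothetical, not part of the app:

```javascript
// Hypothetical aggregate over a JSAN result set: records are arrays of
// values, so a column is addressed by the index of its label.
function aggregate(jsan, label, fn) {
  var col = jsan.labels.indexOf(label);
  var values = jsan.records.map(function(row) { return row[col]; });
  if (fn === 'COUNT') return values.length;
  if (fn === 'MAX') return values.reduce(function(a, b) { return a > b ? a : b; });
  throw new Error('unsupported aggregate: ' + fn);
}

var jsan = {
  labels: ['Number', 'State'],
  records: [['INC0000050', 'Active'], ['INC0000031', 'Active'], ['INC0000025', 'Closed']]
};
// aggregate(jsan, 'Number', 'COUNT') -> 3
// aggregate(jsan, 'Number', 'MAX')   -> 'INC0000050'
```

A full implementation would instead push the aggregation down into the GQL class so GlideRecord does the work server-side.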


This concludes this series for now. I'll continue to update the GQL repository as I receive your open-source contributions and add enhancements. I may also post follow-up articles if any progress is worth sharing. Thanks for staying with me throughout the series!


Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

ServiceNow Advocate

Last time, we saw how the processor handles web service requests and responses, executing GQL statements and returning encoded result sets. This time, let's build a simple app where we can edit a GQL statement, run it, and see the results in various formats; we'll call it GQL Pad. This will showcase everything we've done so far and interactively demonstrate the capabilities of GQL.





Here are some requirements for GQL Pad:


  1. GQL Pad is a web application that runs on the ServiceNow Helsinki release.
  2. User must have the "itil" role.
  3. GQL Pad provides a user interface (UI) with the following features:
    1. User can enter and edit a GQL statement in a text editor.
    2. User can select the desired output format among CSV, HTML, JSAN, JSON.
    3. User can execute the GQL statement in the editor.
    4. User can see the results of the execution on the same web page (rather than downloading).
    5. User can see how long it took for execution.
  4. GQL Pad provides a GQL syntax checker in the editor. [future enhancement]
  5. GQL Pad provides GQL syntax highlighting in the editor. [future enhancement]
  6. GQL Pad provides a Database Schema Browser showing a list of tables and columns that can be entered into the editor. [future enhancement]




To get started, let's create a new UI Page in the "gql" application scope, as shown in the screenshot below:



Endpoint is what we need to use in the URL to open this UI Page; we'll use it when we create an application module for this. Be sure to check the Direct checkbox. This allows us to bypass Jelly, since we'll be using AngularJS instead, and use simpler HTML of our own without any ServiceNow additions (you can later view the page source and compare with other out-of-the-box pages).




The HTML content looks like this:

<style>
  body { margin-bottom:20pt; }
  label { margin-right: 9px; }
  table { font:9pt Arial; border-collapse:collapse; }
  th,td { padding:3px; border:1px solid #aaa; empty-cells:show; vertical-align:top; }
  th { font-weight:bold; text-align:left; background-color:#eee; }
  tr:nth-child(even) { background-color:#fafafa; }
  tr:hover { background-color:lightyellow; }
</style>

<body ng-app="gqlpad">
  <h3>GQL (Glide Query Language) Pad</h3>
  <div ng-controller="EditorCtrl">
    <textarea ng-model="gql" rows="20" style="font:9pt Consolas; width:100%;"></textarea>
    <label>Output format:
      <select ng-model="format" ng-options="format.name for format in formats"></select>
    </label>
    <button ng-click="run()" style="width:80px;" accesskey="r"><u>R</u>un</button>
    <span ng-bind="timer" style="color:gray; font:9pt Consolas; margin-left:30px;"></span>
    <div ng-bind-html="result" style="font:9pt Consolas; white-space: pre-wrap;"></div>
  </div>
</body>

<script src=""></script>  <!-- AngularJS library, fetched from a CDN -->


The HTML adds various AngularJS attributes that start with "ng-"; we'll use them later in the script.


Line 13 adds the AngularJS directive "ng-app" and is the start of the GQL Pad app.


Line 15 is the start of the ng-controller "EditorCtrl", which we'll see more of in the script.


Line 16 adds a text editor using the "<textarea>" tag.


Line 19 adds a <select> list of desired output formats (the options will be added by the script later).


Line 21 is the Run button (the access key "r" allows pressing Alt-R on a Windows keyboard to invoke the button's click action).


Line 22 is a placeholder for displaying execution duration.


Line 24 is a placeholder for displaying result.


Line 28 adds the <script> tag for the AngularJS source, which we're fetching from a CDN (content delivery network).




The Client Script looks like this:

/* GQL (Glide Query Language) Pad */

var app = angular.module('gqlpad', []);

app.controller('EditorCtrl', ['$scope', '$http', '$sce', function($scope, $http, $sce) {

  // set initial value
  $scope.gql = 'SELECT  -- sample GQL statement\r  number [Number]\r  , short_description [Title]\r  , dv_state [State]\r  , [Caller Email]\r  , [Manager Email]\r  , dv_cmdb_ci [CI]\r  , cmdb_ci\rFROM incident\rWHERE active=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumber\rLIMIT 5,10';
  $scope.formats = [
    {name:'CSV' },
    {name:'HTML'},
    {name:'JSAN'},
    {name:'JSON'}
  ];
  $scope.format = $scope.formats[1];  // default to HTML

  $scope.run = function() {
    var timer = Date.now();
    $scope.timer = '';
    $scope.result = '';
    var params = { gql:$scope.gql, format:$scope.format.name, now:Date.now() };  // add now() to prevent caching
    $http.get('/x_64935_gql_proc.do', { params: params }).then(function(response) {

      var data = response.data;
      $scope.result = $sce.trustAsHtml(typeof data == 'object' ? JSON.stringify(data) : data);  // stringify to show JSON on page
      $scope.timer  = 'took ' + ((Date.now() - timer) / 1e3) + ' seconds';
    });
  };
}]);

This is a simple AngularJS script that works with the above HTML to do the following:


Line 5 defines the GQL Pad app from HTML Line 13.


Line 7 is the beginning of the editor controller, which was declared on HTML Line 15.


Line 12 lists the available formats for the output; these will populate the <select> tag on HTML line 19.


Line 20 is executed when the Run button is clicked. It clears the output fields and sends a GET command to the processor we saw in Part 6, passing on the parameters gql and format in the query string.


Line 27 is the start of the callback function that's executed when a response is returned from the processor. It simply shows the result in the result <div> on HTML line 24; the result is converted to a string to prevent it from opening up in Excel (in case of CSV) or in another window (in case of JSON) to conform to Requirement c.4. It also shows the execution duration in the <span> defined on HTML line 22.




To cap this off, let's add application and module menus. Create a new application menu with the settings in the screenshot below:


We set the Roles to "itil" as specified in Requirement b and set Category to "Custom Applications". After saving the application menu configuration page, let's add a new module using the New button in the Modules related list. In the Module configuration page, set the values as in the screenshot below:



We set Roles to "itil", again as per Requirement b. For Link type, set it to "URL (from Arguments)" and set Arguments to "", which is the Endpoint from the GQL Pad UI Page configuration setting.


After refreshing the Navigator (or the entire page), let's make sure the new "GQL" application and "GQL Pad" module menus show up in the Navigator.




Here's a screenshot of GQL Pad in action. When you navigate to GQL > GQL Pad, the GQL Pad UI Page opens up with the default GQL statement in the editor. When you click on the Run button, the result is displayed below the Run button. The screenshot below is showing Incident demo data; your results may look different.



You can change Output format to "CSV" and click Run, which gives the below output:


You can change Output format to "JSAN" and click Run, which gives the below output:


You can change Output format to "JSON" and click Run, which gives the below output:


You can use GQL Pad to try out various GQL statements from other tables and inspect data, which may be useful for some debugging scenarios. You can also use it to test encoded query strings. The beauty is it's very interactive and fast.


This wraps up our work to date. Next time, I'll show you how to get the app from GitHub and discuss limitations and potential enhancements for the future.



John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

ServiceNow Advocate

Last time, we worked on the retriever that takes a GQL query and returns the result set. This time, let's take a look at the data format of the result set.


For the GQL query below


SELECT
  number [Number]
  , short_description [Title]
  , dv_state [State]
  , [Caller Email]
  , [Manager Email]
  , dv_cmdb_ci [CI]
  , cmdb_ci
FROM incident
WHERE active=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumber
LIMIT 5,10


the retriever returns the 10 records in JSON shown below (from Incident demo data):


{"records":[{"CI":"EXCH-SD-05","Caller Email":"","Manager Email":"","Number":"INC0000050","State":"Active","Title":"Can't access Exchange server - is it down?","cmdb_ci":"281190e3c0a8000b003f593aa3f20ca6"},{"CI":"","Caller Email":"","Manager Email":"","Number":"INC0000031","State":"Active","Title":"When can we get off Remedy? UI is killing us","cmdb_ci":""},{"CI":"IBM-T42-DLG","Caller Email":"","Manager Email":"","Number":"INC0000025","State":"Active","Title":"I need more memory","cmdb_ci":"469facd7a9fe1981015830c43428ca2d"},{"CI":"","Caller Email":"","Manager Email":"","Number":"INC0000018","State":"Active","Title":"Sales forecast spreadsheet is READ ONLY","cmdb_ci":""},{"CI":"","Caller Email":"","Manager Email":"","Number":"INC0000017","State":"Awaiting User Info","Title":"How do I create a sub-folder","cmdb_ci":""},{"CI":"","Caller Email":"","Manager Email":"","Number":"INC0000016","State":"Active","Title":"Rain is leaking on main DNS Server","cmdb_ci":""},{"CI":"Saints and Sinners Bingo","Caller Email":"","Manager Email":"","Number":"INC0000015","State":"Active","Title":"I can't launch my VPN client since the last software update","cmdb_ci":"46c7318aa9fe198100c76003f0bc82e9"},{"CI":"","Caller Email":"","Manager Email":"","Number":"INC0000007","State":"Awaiting User Info","Title":"Need access to sales DB for the West","cmdb_ci":""},{"CI":"","Caller Email":"","Manager Email":"","Number":"INC0000003","State":"Active","Title":"Wireless access is down in my area","cmdb_ci":""},{"CI":"FileServerFloor2","Caller Email":"","Manager Email":"","Number":"INC0000002","State":"Awaiting Problem","Title":"Unable to get to network file shares","cmdb_ci":"b0c25d1bc0a800090168be1bfcdcd759"}]}


There are two things to note in this JSON output:


  1. When we get a result set from a SQL query, the columns are ordered. That is, if a SQL statement's SELECT clause lists column_1, column_2, etc., we expect the result set to show those columns in the same order. In the above JSON output, the columns are listed as object properties, which are not ordered by definition in JavaScript.
  2. I like working with JSON primarily for two reasons: a) its format is native to JavaScript so I can readily work with it in JavaScript; b) it's not as verbose as XML, in most cases. What we see above, however, is verbose because the column headings (object property keys) are repeated for every row. As the row count increases, there will be more redundant information.


To address both of these points, I'd like to use JavaScript arrays, instead of objects, to represent each row, calling it JSAN (JavaScript Array Notation). This is still based on JSON, but instead of returning object key-value pairs, we'll use an array of values, which may be of any type and are ordered by definition in JavaScript. The column headings will be in their own array, separate from the data records. The savings in bytes are pronounced even for just 10 records: the same output in JSAN takes 1,282 characters vs. 1,909 for JSON:


{"labels":["Number","Title","State","Caller Email","Manager Email","CI","cmdb_ci"], "records":[["INC0000050","Can't access Exchange server - is it down?","Active","","","EXCH-SD-05","281190e3c0a8000b003f593aa3f20ca6"],["INC0000031","When can we get off Remedy? UI is killing us","Active","","","",""],["INC0000025","I need more memory","Active","","","IBM-T42-DLG","469facd7a9fe1981015830c43428ca2d"],["INC0000018","Sales forecast spreadsheet is READ ONLY","Active","","","",""],["INC0000017","How do I create a sub-folder","Awaiting User Info","","","",""],["INC0000016","Rain is leaking on main DNS Server","Active","","","",""],["INC0000015","I can't launch my VPN client since the last software update","Active","","","Saints and Sinners Bingo","46c7318aa9fe198100c76003f0bc82e9"],["INC0000007","Need access to sales DB for the West","Awaiting User Info","","","",""],["INC0000003","Wireless access is down in my area","Active","","","",""],["INC0000002","Unable to get to network file shares","Awaiting Problem","","","FileServerFloor2","b0c25d1bc0a800090168be1bfcdcd759"]]} 


JSAN is different from CSV (comma-separated values) in that JSAN is still based on JSON, so no special processing (encoding/decoding) is necessary; we can still use the same JSON encode/decode methods, which are now ubiquitous in both client-side (JSON.stringify(), JSON.parse()) and server-side (JSON.encode(), JSON.decode()) JavaScript. JSAN may also contain metadata to provide additional information about the data. For example, we can include not just column headings, but field names as well as data types. Since the data elements (rows and columns) are ordered, it now becomes possible in some cases to generate a hash, such as MD5, or a CRC32 checksum based on the encoded data; this may be useful if an additional layer of data integrity is required, such as for critical systems in life sciences, aerospace, military, or nuclear sectors (this may not work if a column contains unordered data like objects, since object properties can be encoded in any arbitrary order).


JSAN would look like this for 2 records of 3 columns (although arrays are 0-based, I'm showing the first column as "column_1" below):
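A sketch of that shape, with hypothetical labels and values:

```javascript
// Hypothetical JSAN payload: 2 records x 3 columns. Labels sit in their
// own array; each record is an ordered array of values, not an object.
const jsan = {
  labels: ['column_1', 'column_2', 'column_3'],
  records: [
    ['row_1_value_1', 'row_1_value_2', 'row_1_value_3'],
    ['row_2_value_1', 'row_2_value_2', 'row_2_value_3']
  ]
};

// Column headings appear once, so the encoded form stays more compact than
// the equivalent JSON object form, which repeats the keys for every row.
const asObjects = {
  records: jsan.records.map(r => ({ column_1: r[0], column_2: r[1], column_3: r[2] }))
};
console.log(JSON.stringify(jsan).length < JSON.stringify(asObjects).length);
```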




To incorporate this, let's modify the GQL class in the Script Include from last time slightly as below (only the changed sections are shown):


GQL.prototype = {
  initialize: function(format) {
    this.format = format || 'JSAN';  // { JSAN | JSON }
    // ...
  },
  // ...
};

GQL.prototype.query = function(gql, format) {

  format = format || this.format;
  var isJSON = /JSON/i.test(format);
  // ...
  return { fields: fields, labels: labels, records: records };

  function getRow() {
    var row = isJSON ? {} : [];
    for (var i = 0; i < fields.length; i++) {
      var field = fields[i], isDV = /(^|\.)dv_/.test(field);
      var value = isDV ? gr.getDisplayValue(field.replace('dv_', '')) || '' : gr.getElement(field).toString();
      isJSON ? row[labels[i]] = value : row.push(value);  // default to JSAN
    }
    return row;
  }
};


Line 3 allows us to optionally set the default format for the instance at the time of instantiation; if not set, it defaults to JSAN.


Line 9 allows us to optionally set the format at the time of calling the query() method; if not set, it defaults to the instance default.


Line 12 adds the additional metadata (fields and labels) to the returned data.


Line 19 adds to the row either an object key-value pair if JSON or an array element if JSAN.


Next time, we'll look at the processor and see how the returned data from the GQL class are consumed; we'll also see how JSAN makes it easy to encode the result sets into HTML and CSV formats.



John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

Last time, we derived the JSAN (JavaScript Array Notation) data format from JSON and saw how the GQL class from the Script Include can return result sets in either JSON or JSAN data format. This time, we'll look at the processor and see how it handles the requests coming from the browser and consumes the result sets from the Script Include, encoding them for transmission.


We have a choice between two ServiceNow tools when it comes to processing inbound web requests:


  1. Scoped GlideScriptedProcessor
  2. Scripted REST APIs


While Scripted REST APIs provide a nice UI with nifty features, let's use Scoped GlideScriptedProcessor since our processor will be simple. The processor handles the following tasks:


  1. Receive GET requests from the browser or other sources.
  2. Extract parameters from the query string.
  3. Instantiate and execute the GQL class from the Script Include.
  4. Receive the result set from the GQL class in JSAN or JSON data format (the return value is a JavaScript object and it's not yet encoded in JSON).
  5. Based on the requested format, encode (stringify) the result set into JSON or convert to CSV or HTML table.
  6. Transmit the result via response.


Please note that JSAN is an object data format and is still encoded as JSON for transmission.


Below is the processor script that captures the above:


/* GQL (Glide Query Language) processor */

(function process(g_request, g_response, g_processor) {

  try {
    var gql = g_request.getParameter('gql');
    var format = g_request.getParameter('format');
    var result = new GQL().query(gql, format);

    g_response.setHeader('cache-control', 'no-cache');  // disable page caching to avoid stale result

    if (/CSV/i.test(format)) {
      var filename = result.query && result.query.table || 'gql';
      g_response.setHeader('Content-Disposition', 'attachment;filename=' + filename + '.csv');
      g_processor.writeOutput('text/csv', getCSV());
    }
    else if (/HTML/i.test(format)) g_processor.writeOutput('text/html', getHTML());
    else g_processor.writeJSON(result);
  } catch (e) { g_processor.writeOutput('text/plain', 'ERROR: ' + e + '\r' + e.stack); }

  function getCSV() {

    var columns = [], rows = [];
    result.labels.forEach(function(label) { columns.push(escapeCSV(label)); });
    rows.push(columns.join(','));

    for (var i = 0; i < result.records.length; i++) {
      columns = [];
      result.records[i].forEach(function(column) { columns.push(escapeCSV(column)); });
      rows.push(columns.join(','));
    }
    return rows.join('\r\n');

    /**
     * Takes raw field value and returns CSV escaped value
     * based on RFC 4180 Common Format and MIME Type for CSV Files
     * October 2005
     * @param {string} raw - raw field value
     * @return {string} escaped CSV field value, if applicable per RFC 4180
     */
    function escapeCSV(raw) {
      var out = (raw + '').replace(/"/g,'""');  // escape all double quotes
      if (/[\r\n",]/.test(out)) out = '"' + out + '"';  // if it has special characters, enclose in double quotes
      return out;
    }
  }

  function getHTML() {

    var columns = [], rows = ['<table><thead><tr><th>'];
    result.labels.forEach(function(label) { columns.push(escapeHTML(label)); });
    rows.push(columns.join('</th><th>'), '</th></tr></thead><tbody>');

    for (var i = 0; i < result.records.length; i++) {
      columns = [];
      result.records[i].forEach(function(column) { columns.push(escapeHTML(column)); });
      rows.push('<tr><td>', columns.join('</td><td>'), '</td></tr>');
    }
    rows.push('</tbody></table>');
    return rows.join('');

    function escapeHTML(raw) {  // escape reserved HTML characters
      var MAP = { '&':'&amp;','<':'&lt;','>':'&gt;'};
      return (raw + '').replace(/[&<>]/g, function(c) { return MAP[c]; });
    }
  }
})(g_request, g_response, g_processor);


Lines 8 and 9 extract the two parameters gql and format from the request object g_request.


Line 10 instantiates the GQL class, executes the query() method using the two parameters extracted, and saves the result set.


Lines 14 through 20 encode the result set into CSV, HTML, or JSON (for both JSAN and JSON formats) and transmit it back to the requester; notice the content types are set according to the encoding type. The writeJSON() method handles the JSON encoding and also sets the content type.


This sums up what the processor does; it doesn't get much simpler. The remaining lines are the CSV and HTML encoders. These don't really need to be part of the processor, but I wanted to illustrate how easily JSAN data can be turned into other formats. The arrays used in JSAN can be quickly converted to a formatted row using the forEach() array method. Both CSV and HTML encoders work very similarly, iterating over records and columns as row and column delimiters are inserted. HTML tables can just as easily be generated on the client side from JSAN data using AngularJS, for example; this may be the preferred approach if you want tighter interactions with the tables in the browser.


Let's now take a look at the Processor configuration page whose screenshot is shown below:



I'd like to point out a few things:


  1. Application is read-only and automatically set to "gql".
  2. Set Type to "script" since we're using JavaScript in the Script section.
  3. Set Path to "proc", short for "processor".
  4. Path Endpoint is read-only and is automatically set to "x_64935_gql_proc". Here, "64935" is the "Vendor prefix", which, in this case, is a numeric id automatically assigned based on the personal developer instance being used. "gql" is the application name and "proc" is the Path.


Using the Path Endpoint, a web service call can be made using a URL similar to the one below (replace "instance" with your own instance): https://instance.service-now.com/x_64935_gql_proc.do?gql=SELECT%20number[Number],short_description[Title],dv_state[State],[Caller%20Email],dv_cmdb_ci[CI],cmdb_ci%20FROM%20incident%20WHERE%20active=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumber%20LIMIT%2010&format=HTML
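When building such a URL in script rather than by hand, the GQL statement should be URL-encoded; a small sketch (the helper name is mine, and the .do suffix on the processor path is an assumption based on the Path Endpoint above):

```javascript
// Build a processor URL from a GQL statement; encodeURIComponent handles
// spaces, brackets, and other reserved characters in the query string.
function buildGqlUrl(base, gql, format) {
  return base + '?gql=' + encodeURIComponent(gql) +
         '&format=' + encodeURIComponent(format);
}

var url = buildGqlUrl('/x_64935_gql_proc.do', 'SELECT number [Number] FROM incident LIMIT 10', 'HTML');
// url -> "/x_64935_gql_proc.do?gql=SELECT%20number%20%5BNumber%5D%20FROM%20incident%20LIMIT%2010&format=HTML"
```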


This returns an HTML table similar to this (from Incident demo data):


Number | Title | State | Caller Email | CI | cmdb_ci
INC0000055 | SAP Sales app is not accessible | In Progress | carol.coughlin@example.com | SAP Sales and Distribution | 26e494480a0a0bb400ad175538708ad9
INC0000054 | SAP Materials Management is slow or there is an outage | On Hold | christen.mitchell@example.com | SAP Materials Management | 26e44e8a0a0a0bb40095ff953f9ee520
INC0000053 | The SAP HR application is not accessible | In Progress | margaret.gray@example.com | SAP Human Resources | 26e51a2f0a0a0bb4008628d2254c42db
INC0000052 | SAP Financial Accounting application appears to be down | In Progress | bud.richman@example.com | SAP Financial Accounting | 26e426be0a0a0bb40046890d90059eaa
INC0000051 | Manager can't access SAP Controlling application | In Progress | employee@example.com | SAP Controlling | 26e46e5b0a0a0bb4005d1146846c429c
INC0000050 | Can't access Exchange server - is it down? | In Progress | jerrod.bennett@example.com | EXCH-SD-05 | 281190e3c0a8000b003f593aa3f20ca6
INC0000031 | When can we get off Remedy? UI is killing us | In Progress | | |
INC0000025 | I need more memory | In Progress | don.goodliffe@example.com | IBM-T42-DLG | 469facd7a9fe1981015830c43428ca2d
INC0000018 | Sales forecast spreadsheet is READ ONLY | In Progress | | |
INC0000017 | How do I create a sub-folder | On Hold | | |


At the end of the URL is the format parameter, which is set to "HTML" in the above example. This can be changed to "JSAN" or "JSON" to get the JSON encoded object, which opens up as a file in the browser that you can inspect. When the format is set to "CSV", a CSV-formatted file downloads and opens up in Microsoft Excel, if you have it set as the default application for CSV files.


I'm sure you can think of some use cases where a simple GET command via URL returns data you can easily consume. How about an email notification template with an embedded table listing stale tickets assigned to the individual recipients? Again, GQL can be used as a web service, as we just saw, or in any server-side scripts, just like what the above processor is doing. Either way, you pass a GQL statement and get back a result set without having to work with GlideRecords directly.


With this, we have now completed what we set out to do in Part 2:


  1. GQL syntax - define GQL syntax to be used
  2. Processor - handle bidirectional HTTP traffic
  3. Parser - parse GQL query for processing
  4. Retriever - query and retrieve result set
  5. Encoder - encode result set for output


addressing all of the requirements from Part 1:


  1. Focus on SELECT for now.
  2. Don't need complex Subqueries or CTEs (Common Table Expressions) for now.
  3. Select the raw and/or display value of a field.
  4. Be able to rename column headings.
  5. Support dot walking of reference fields.
  6. Be able to filter the result set.
  7. Be able to sort by one or more columns.


I haven't covered the use of the asterisk or calculated columns in the SELECT clause; these will be covered in the future.


Next time, we'll look at GQL Pad, an interactive GQL statement editor and executor built using a very simple UI Page with a touch of AngularJS; this will be a small "app" that's built on top of what we have seen so far. I'll also show you how to get the entire working app from GitHub in another blog. We're almost there, so please stay tuned!



John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

I'm pleased to announce that two new significant prizes have been confirmed for the three CreatorCon Challenge winners:


1) ServiceNow Sales Awareness: ServiceNow will produce a video (3-5 minutes) with the three winners where you can pitch and show off your app to the global ServiceNow sales team. ServiceNow sales is incentivized to help Technology Partners (that would be your company if you win) sell ServiceNow Store apps as they receive 20% of the app license ACV in quota/commission credit when one of their customers purchases an app on the Store. The video will be distributed through the ServiceNow Sales Newsletter and posted to our Sales Enablement Info Center.




2) Customer Awareness & Leads: ServiceNow will promote and host a customer webinar (60 minutes) where you can show off your app to our global customer base and drive leads to your app listing page on the ServiceNow Store. The webinar will be recorded, posted to the on-demand webinar page, and promoted on the ServiceNow Community.




Both of the above are designed to accelerate the growth of your startup and drive demand for your winning apps on the ServiceNow Store. The official entry system opens on Nov 10th so we encourage you to step up your efforts and continue to build the next great new app for the ServiceNow platform.


Please take advantage of the elite technical support team in the private CreatorCon Challenge subspace who are standing by to assist you with app design, architecture, or technical questions or challenges. (All entrants have received instructions and are authorized to access the subspace).

Martin Barclay
Director, Product Marketing
App Store and ISVs
Santa Clara, CA

In this post, I am going to outline how I created an interactive heat map in a Service Portal widget by leveraging D3.js. This heat map displays a matrix of colored blocks that indicate how many incidents were created at a given time on a given weekday. This could help you forecast service desk scheduling needs, plan when to implement changes, or even identify your highest-risk periods.


Post 5 Total.png


Given that this post is using incident data from my instance, I also added in the capability to click a button and transition the heat map to only focus on a particular incident priority. I'm sure that you can imagine multiple use cases for this heat map, so I suggest you view this post as a framework which you can modify to your own needs.
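Before diving into the widget, it may help to see the core transform on its own: counting records into (weekday, hour) blocks. The server script below does this with GlideRecord; here is a plain-JavaScript sketch of the same idea outside ServiceNow (the function name and sample timestamps are mine, and it uses standard `Date` methods rather than ServiceNow APIs):

```javascript
// Count how many timestamps fall into each (day-of-week, hour) block,
// the structure the heat map is built from. Keys are "day:hour" with
// day 1=Mon..7=Sun and hour block 1..24.
function bucketByDayHour(timestamps) {
  var counts = {};
  timestamps.forEach(function(ts) {
    var d = new Date(ts);
    var day = d.getUTCDay() === 0 ? 7 : d.getUTCDay();  // remap Sunday (0) to 7
    var hour = d.getUTCHours() + 1;                      // map hours 0-23 onto blocks 1-24
    var key = day + ':' + hour;
    counts[key] = (counts[key] || 0) + 1;
  });
  return counts;
}

var sample = [
  '2016-10-03T09:15:00Z',  // Monday, during the 9am hour
  '2016-10-03T09:45:00Z',  // Monday, during the 9am hour
  '2016-10-08T23:05:00Z'   // Saturday, during the 11pm hour
];
var counts = bucketByDayHour(sample);
// counts['1:10'] === 2, counts['6:24'] === 1
```

The widget does the same grouping server-side and hands the blocks to D3 for rendering.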


Since there are previous posts giving a more in-depth introduction to using D3 and Service Portal together, I won't go into much detail with the code. Here are screenshots and/or pasted code for my HTML, CSS, Client Script, and Server Script:




Post 5 HTML.png


<div class="centered-chart row">

     <div id="chart"></div>

     <button class="btn" ng-click="c.updateMap(data.blocks, 'total')">Total</button>

     <button class="btn" ng-click="c.updateMap(data.blocks, 'p1')">Priority 1</button>

     <button class="btn" ng-click="c.updateMap(data.blocks, 'p2')">Priority 2</button>

     <button class="btn" ng-click="c.updateMap(data.blocks, 'p3')">Priority 3</button>

     <button class="btn" ng-click="c.updateMap(data.blocks, 'p4')">Priority 4</button>

     <button class="btn" ng-click="c.updateMap(data.blocks, 'p5')">Priority 5</button>

</div>





Post 5 CSS.png


rect.bordered {
     stroke: #E6E6E6;
}

text.mono {
     font-size: 9pt;
     font-family: Consolas, courier;
     fill: #aaa;
}

text.axis-workweek {
     fill: #000;
}

text.axis-worktime {
     fill: #000;
}

.btn {
     background-color: white;
     border: 1px solid gray !important;
}

.centered-chart {
     text-align: center;
}

Client Script


function() {
    /* widget controller */
    var c = this;

    var margin = { top: 50, right: 0, bottom: 100, left: 30 },
        width = 960 - margin.left - margin.right,
        height = 430 - - margin.bottom,
        gridSize = Math.floor(width / 24),
        legendElementWidth = gridSize * 2,
        buckets = 9,
        colors = ["#ffffd9","#edf8b1","#c7e9b4","#7fcdbb","#41b6c4","#1d91c0","#225ea8","#253494","#081d58"],
        days = ["Mo", "Tu", "We", "Th", "Fr", "Sa", "Su"],
        times = ["1a", "2a", "3a", "4a", "5a", "6a", "7a", "8a", "9a", "10a", "11a", "12a", "1p", "2p", "3p", "4p", "5p", "6p", "7p", "8p", "9p", "10p", "11p", "12p"];

    // Create the chart svg using the defined sizes
    var svg ="#chart").append("svg")
        .attr("width", width + margin.left + margin.right)
        .attr("height", height + + margin.bottom)
        .append("g")
        .attr("transform", "translate(" + margin.left + "," + + ")");

    // Create the update function that takes the data and count property as parameters
    c.updateMap = function(data, count) {
        // Create day labels
        var dayLabels = svg.selectAll(".dayLabel")
            .data(days)
            .enter().append("text")
            .text(function (d) { return d; })
            .attr("x", 0)
            .attr("y", function (d, i) { return i * gridSize; })
            .style("text-anchor", "end")
            .attr("transform", "translate(-6," + gridSize / 1.5 + ")")
            .attr("class", function (d, i) { return ((i >= 0 && i <= 4) ? "dayLabel mono axis axis-workweek" : "dayLabel mono axis"); });

        // Create time labels
        var timeLabels = svg.selectAll(".timeLabel")
            .data(times)
            .enter().append("text")
            .text(function(d) { return d; })
            .attr("x", function(d, i) { return i * gridSize; })
            .attr("y", 0)
            .style("text-anchor", "middle")
            .attr("transform", "translate(" + gridSize / 2 + ", -6)")
            .attr("class", function(d, i) { return ((i >= 7 && i <= 16) ? "timeLabel mono axis axis-worktime" : "timeLabel mono axis"); });

        // Create the color scale based on the number of buckets and the colors array
        var colorScale = d3.scaleQuantile()
            .domain([0, buckets - 1, d3.max(data, function (d) { return d[count]; })])
            .range(colors);

        // Enter and update blocks
        var blocks = svg.selectAll(".hour")
            .data(data, function(d) { return + ':' + d.hour; });

        blocks.enter().append("rect")
            .attr("x", function(d) { return (d.hour - 1) * gridSize; })
            .attr("y", function(d) { return ( - 1) * gridSize; })
            .attr("rx", 4)
            .attr("ry", 4)
            .attr("class", "hour bordered")
            .attr("width", gridSize)
            .attr("height", gridSize)
            .style("fill", colors[0])
            .transition().duration(1000)
            .style("fill", function(d) { return colorScale(d[count]); });

        blocks.transition().duration(1000)
            .style("fill", function(d) { return colorScale(d[count]); });

        // Create the new legend and remove the existing legend
        var legend = svg.selectAll(".legend")
            .data([0].concat(colorScale.quantiles()), function(d) { return d; });

        var legendEnter = legend.enter().append("g")
            .attr("class", "legend");

        legendEnter.append("rect")
            .attr("x", function(d, i) { return legendElementWidth * i; })
            .attr("y", height)
            .attr("width", legendElementWidth)
            .attr("height", gridSize / 2)
            .style("fill", function(d, i) { return colors[i]; });

        legendEnter.append("text")
            .attr("class", "mono")
            .text(function(d) { return "≥ " + Math.round(d); })
            .attr("x", function(d, i) { return legendElementWidth * i; })
            .attr("y", height + gridSize);

        legend.exit().remove();
    };

    // Function that sets the initial blocks while waiting for the server data to be returned
    function setInitialBlocks() {
        var initialBlocks = [];

        for (var i = 1; i < 8; i++) {
            for (var j = 1; j < 25; j++) {
                initialBlocks.push({"day": i, "hour": j, "total": 0});
            }
        }

        c.updateMap(initialBlocks, "total");
    }

    // Function to retrieve block data from the server script
    c.display = function() {
        c.server.update().then(function(data) {
            c.updateMap(data.blocks, "total");
        });
    };

    // Draw the placeholder blocks, then request the real data
    setInitialBlocks();
    c.display();
}


Server Script


(function() {

    if (input) {
        var incData = [];

        // Initialize a zeroed block for every day/hour combination
        for (var i = 1; i < 8; i++) {
            for (var j = 1; j < 25; j++) {
                incData.push({"id": i + ":" + j, "day": i, "hour": j, "total": 0, "p1": 0,
                "p2": 0, "p3": 0, "p4": 0, "p5": 0});
            }
        }

        // Query the incident table and start totaling the number of records for each priority
        var incGR = new GlideRecord('incident');
        incGR.query();

        while ( {
            var openedAt = incGR.getValue('opened_at');  // "yyyy-MM-dd HH:mm:ss"
            var hour = parseInt(openedAt.slice(11, 13), 10) + 1;  // map hours 0-23 onto block ids 1-24
            var gdt = new GlideDateTime(openedAt);
            var day = gdt.getDayOfWeek();
            var key = day + ':' + hour;
            var block = incData.filter(findBlock(key))[0];
            if (block) {
      ;
                switch (incGR.priority + '') {
                    case '1':
                        block.p1++;
                        break;
                    case '2':
                        block.p2++;
                        break;
                    case '3':
                        block.p3++;
                        break;
                    case '4':
                        block.p4++;
                        break;
                    case '5':
                        block.p5++;
                        break;
                }
            }
        }

        data.blocks = incData;
    }

    function findBlock(key) {
        return function(element) {
            return == key;
        };
    }
})();

D3 Transitions


By clicking one of the priority buttons, we can transition our heat map to only look at our incidents with that priority. Although screenshots won't demo the aesthetically pleasing, gradual D3 transitions used in our widget, here is an example:


All incidents:

Post 5 Total.png


Priority 4 incidents:

Post 5 Priority 4.png




- Day / Hour Heatmap


For a full collection of my posts, visit



This week's episode of TechBytes covers integrations. Cameron Stone and Gabriel Finney discuss their favorite integrations and answer some of the most common questions about integrations.









This episode covers:

  • What kind of integrations are possible?
  • How does integration work?
  • Planning and scoping integrations
  • Utilizing subject matter experts
  • Using import sets strategically
  • Trying the REST API Explorer
  • MID server - the Swiss Army knife of ServiceNow
  • Orchestration and integration
  • Using a sandbox


For more information on Integrations, see:


Your feedback helps us better serve you! Did you find this podcast helpful? Leave us a comment to tell us why or why not.





To catch clips behind the scenes with our podcast guests, and find out what topics we'll be covering next before they are posted, follow @NOWsupport on Twitter. You can also search Twitter using the hashtag #SNtechbytes for all previous podcasts, video clips, and pictures.


Enable your no-code and low-code developers to create solutions for common scheduled tasks. With no scripting, you can select and update a set of records on a table, trigger notifications, and more. If the scriptless solution doesn't completely meet your needs, check "Advanced" to get a head start on the JavaScript code and unleash the power of your low-code and pro-code developers.


How this never made it into the platform before today, I don't know. I was tinkering around with another project and asked myself, "How do scriptless business rules work?" With a little reverse engineering and a few emails to my friends in development, I was able to marry that with the condition field to come up with this solution.


Common use cases

  • Send approval reminders if not approved in 7 days
  • Cancel approvals after 30 days
  • Send Knowledge base article reminders 30 days before expiration





  • October 20, 2016: Tested with Helsinki P4 and Istanbul (pre-release)


Explanation/Demonstration Video

Last time, we designed the parser that decomposes a GQL statement into the four main parts:


SELECT column_list
FROM table
WHERE encoded_query_string
LIMIT [offset,] row_count


We saw how the SELECT column_list would be parsed and which Scoped GlideRecord methods we might use for the FROM, WHERE, and LIMIT clauses. This time, let's work on the retriever that runs the query and retrieves the resultset.


Let's take a look at how we may put this together:


// run query & retrieve data
var records = [];  // array of rows
var gr = new GlideRecord(table);
if (encodedQuery) gr.addEncodedQuery(encodedQuery);
if (offset) gr.chooseWindow(offset, offset + limit);
else if (limit) gr.setLimit(limit);
gr.query();
while ( records.push(getRow());


This gets quite simple with the help of Scoped GlideRecord; it iterates over all glide records and returns an array of record objects. The getRow() function returns the columns, either in raw or display values (if the field name is prefixed with "dv_"):


function getRow() {
  var row = {};
  for (var i = 0; i < columns.length; i++) {
    var field = columns[i].field, isDV = /(^|\.)dv_/.test(field);
    var value = isDV ? gr.getDisplayValue(field.replace('dv_', '')) || '' : gr.getElement(field).toString();
    row[columns[i].label] = value;
  }
  return row;
}


The gr.getDisplayValue() and gr.getElement() methods handle dot-walking for us. gr.getElement() returns GlideElement so we need to convert it to string to get the raw value (if not converted, the JSON encoder will treat it as an object, which is not what we want). The columns array is obtained from the column_list and each array element is an object representing a column with field and label (column heading) properties:


// parse column_list
columns = columns.split(/\s*,\s*/);  // split column_list column_1, column_2 [column_2_heading], column_3
for (var i = 0; i < columns.length; i++) {  // parse column heading
  matches = /([.\w]+)\s*(?:\[(.+)\])?/.exec(columns[i]);
  columns[i] = { field: matches[1], label: matches[2] || matches[1] };  // if no label, use field name
}

Putting everything together, we have a Script Include with the GQL class:


/**
* GQL (Glide Query Language) class
*/
var GQL = Class.create();
GQL.prototype = {
  initialize: function() { },
  type: 'GQL'
};

/**
* Takes a GQL statement and returns the resultset in an array of records
* @param {string} gql - GQL statement
* @return {object} resultset in an array of records
*/
GQL.prototype.query = function(gql) {

  // parse gql
  gql = gql.replace(/\s*--.*/g, '');     // remove comments
  var rxParser = /SELECT\s*([^]+?)\s*FROM\s*(.+?)\s*(?:WHERE\s*([^]+?(?=\s*LIMIT)|[^]+?)\s*)?(?:LIMIT\s*([,\d\s]+?)\s*)?$/;
  var matches = rxParser.exec(gql);
  var columns = matches[1] || '';        // SELECT column_list
  var table = matches[2] || '';          // FROM table name
  var encodedQuery = matches[3] || '';   // WHERE encoded_query_string
  var limit = matches[4] || '';          // LIMIT [offset,] row_count
  var offset = 0;
  if (limit) {  // parse offset, row_count
    limit = limit.split(',');
    if (limit.length > 1) offset = parseInt(limit[0], 10) || 0;
    limit = parseInt(limit[limit.length > 1 ? 1 : 0], 10) || 0;
  }

  // parse column_list
  columns = columns.split(/\s*,\s*/);  // split column_list column_1, column_2 [column_2_heading], column_3
  for (var i = 0; i < columns.length; i++) {  // parse column heading
    matches = /([.\w]+)\s*(?:\[(.+)\])?/.exec(columns[i]);
    columns[i] = { field: matches[1], label: matches[2] || matches[1] };  // if no label, use field name
  }

  // run query & retrieve data
  var records = [];  // array of rows
  var gr = new GlideRecord(table);
  if (encodedQuery) gr.addEncodedQuery(encodedQuery);
  if (offset) gr.chooseWindow(offset, offset + limit);
  else if (limit) gr.setLimit(limit);
  gr.query();
  while ( records.push(getRow());

  return { records: records };

  function getRow() {
    var row = {};
    for (var i = 0; i < columns.length; i++) {
      var field = columns[i].field, isDV = /(^|\.)dv_/.test(field);
      var value = isDV ? gr.getDisplayValue(field.replace('dv_', '')) || '' : gr.getElement(field).toString();
      row[columns[i].label] = value;
    }
    return row;
  }
};

This doesn't yet have any GQL syntax checking or run-time error handling, but will give us a good starting point. By having this in a Script Include, we can use it not only for web services, but also in any server-side scripts.


Next time, we'll take a look at the encoder.


Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

In this post, we are going to build a Service Portal widget with an interactive bar chart with multiple data sets. This is the third post in my series that focuses on using D3js within ServiceNow's Service Portal. If you haven't already worked through the previous posts and are finding yourself lost, you might want to check those out here.


New concepts to the series that we will touch on in this post are entering, updating, and exiting data with D3, working with transitions to change our data in a pleasing manner, and adding basic interactive aspects such as hovering effects. The key pieces of our widget that we are going to be focusing on are our server script and our client script.


Server script


For this chart, we are going to have the ability to view the counts of active, inactive, and all incidents by category. There are multiple ways to get this data from our server script, but for simplicity's sake we are just going to use 3 separate GlideAggregate calls and push that data into one of 3 arrays defined on the data object. Each of these arrays will contain an object for each category returned from the GlideAggregate call. Each of these objects will look like this after our server script:


{category: "Software", "value": 14}
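If the GlideAggregate grouping feels opaque, the same shape can be produced from a plain array with `reduce`; this is not the widget's server script, just an illustration of the grouping (the function name and sample records are mine):

```javascript
// Group a list of records by category and emit {category, value} objects,
// the same shape each GlideAggregate loop pushes onto the data object.
function countByCategory(incidents) {
  var totals = incidents.reduce(function(acc, inc) {
    acc[inc.category] = (acc[inc.category] || 0) + 1;
    return acc;
  }, {});
  return Object.keys(totals).map(function(cat) {
    return { category: cat, value: totals[cat] };
  });
}

var rows = countByCategory([
  { category: 'Software' },
  { category: 'Software' },
  { category: 'Hardware' }
]);
// rows -> [{category: 'Software', value: 2}, {category: 'Hardware', value: 1}]
```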


Below is a screenshot of my server script as well as the pasted code:


Post 4 Server Script.png


(function() {
       /* populate the 'data' object */
       options.width = options.width || 600;
       options.bar_height = options.bar_height || 20;
       options.left_margin = options.left_margin || 100;
 = [];
       data.inactive = [];
       data.all = [];

       // Get count of active incidents
       var count = new GlideAggregate('incident');
       count.addQuery('active', 'true');
       count.addAggregate('COUNT', 'category');
       count.query();

       while ( {
            var category = count.category.getDisplayValue();
            // getAggregate returns a string; convert it to a number for the chart
            var categoryCount = parseInt(count.getAggregate('COUNT', 'category'), 10);
  {category: category, "value": categoryCount});
       }

       // Get count of inactive incidents
       var inactiveCount = new GlideAggregate('incident');
       inactiveCount.addQuery('active', 'false');
       inactiveCount.addAggregate('COUNT', 'category');
       inactiveCount.query();

       while ( {
            var category = inactiveCount.category.getDisplayValue();
            var categoryCount = parseInt(inactiveCount.getAggregate('COUNT', 'category'), 10);
            data.inactive.push({category: category, "value": categoryCount});
       }

       // Get count of all incidents
       var allCount = new GlideAggregate('incident');
       allCount.addAggregate('COUNT', 'category');
       allCount.query();

       while ( {
            var category = allCount.category.getDisplayValue();
            var categoryCount = parseInt(allCount.getAggregate('COUNT', 'category'), 10);
            data.all.push({category: category, "value": categoryCount});
       }
})();


Client script


Since we are working with multiple data sets, we are going to set up a function that takes a data set as a parameter and then updates our bar chart accordingly. This gives us the ability to call this function from buttons in our widget so the user can navigate through these data sets. You might also notice at the end of our client script that we are calling this function with one of the data sets so that the bar chart will be present after the widget loads.


When working with multiple data sets in D3, there are 3 major pieces to the process of changing data sets: entering new data, updating existing data that is also present in the new data set, and exiting existing data that isn't in the new data set. For this example, I bind the new data, then exit (remove) any bars that are no longer needed, then enter any new bars, and finally update the bars that are left over from the previous data set.


Something else that is new from the previous post is that we will use a key function when we bind our data set. For this chart, we are going to use our category name as the key value:


var bar = chart.selectAll("g").data(data, function(d) { return d.category;});


What this allows us to do is update a bar that has values in multiple data sets. For example, the category "Software" appears in both the active and inactive data sets. Since we are keying off of the category name value, we will just update the existing "Software" bar to its new size instead of removing it and then adding a brand new bar.
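To make the enter/update/exit split concrete, here is a plain-JavaScript illustration (not D3 itself; the helper name and sample data are mine) of how keying by `d.category` partitions an old and a new data set:

```javascript
// Partition a data-set change into enter/update/exit groups by a key
// function, mimicking what D3's keyed data join computes internally.
function diffByKey(oldData, newData, key) {
  var oldKeys = {};
  oldData.forEach(function(d) { oldKeys[key(d)] = true; });
  var newKeys = {};
  newData.forEach(function(d) { newKeys[key(d)] = true; });
  return {
    enter: newData.filter(function(d) { return !oldKeys[key(d)]; }),   // brand new bars
    update: newData.filter(function(d) { return oldKeys[key(d)]; }),   // bars to resize
    exit: oldData.filter(function(d) { return !newKeys[key(d)]; })     // bars to remove
  };
}

var active   = [{ category: 'Software', value: 14 }, { category: 'Network', value: 3 }];
var inactive = [{ category: 'Software', value: 9 },  { category: 'Database', value: 5 }];
var sets = diffByKey(active, inactive, function(d) { return d.category; });
// sets.update -> Software (existing bar transitions to its new width)
// sets.enter  -> Database (new bar), sets.exit -> Network (bar removed)
```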


Now that we are able to update existing bars we will use D3 transitions to gradually change from the existing width to the new width. We will also use transitions for entering new data bars.


The last new piece of functionality that we are adding in this post is attaching mouse-over and mouse-out functions to our bars to give an extra layer of interactive capabilities. Our mouse-over function is going to change the color of the bar that is being hovered on as well as show a tool-tip that provides the exact count of records for that given bar.  Below is the code for our function:


function($scope) {
  /* widget controller */
  var c = this;

  // Grab our category counts from our Server Script
  $scope.activeData =;
  $scope.inactiveData =;
  $scope.allData =;

  // Set the width of the chart along with the height of each bar
  var width = c.options.width,
      barHeight = c.options.bar_height,
      leftMargin = c.options.left_margin;

  $scope.updateBars = function(data) {
       // Set the dimensions of our chart
       var chart =".chart").attr("width", width)
       .attr("height", barHeight * data.length + 50);

       // Remove existing axis and tooltip
       chart.selectAll(".x.axis").remove();
       chart.selectAll(".counter").remove();

       // Add a placeholder text element for our tooltip
       var counter = chart.append("text").attr("class", "counter")
       .attr("y", 10)
       .attr("x", width - 20);

       // Set the domain and range of the chart
       var x = d3.scaleLinear()
       .range([leftMargin, width])
       .domain([0, d3.max(data, function(d) { return d.value; })]);

       // Bind our new data to our g elements
       var bar = chart.selectAll("g").data(data, function(d) { return d.category; });

       // Remove existing bars that aren't in the new data
       bar.exit().remove();

       // Create new g elements for new categories in our new data
       var barEnter = bar.enter().append("g")
       .attr("transform", function(d, i) { return "translate(0," + i * barHeight + ")"; });

       // Enter new rect elements
       barEnter.append("rect")
       .on("mouseover", highlightBar)
       .on("mouseout", unhighlightBar)
       .attr("class", "chart-bar")
       .attr("height", barHeight - 1)
       .attr("x", leftMargin)
       .transition().duration(1000)
       .attr("width", function(d) { return x(d.value) - leftMargin; });

       // Enter new text labels
       barEnter.append("text")
       .attr("x", leftMargin - 5)
       .attr("y", barHeight / 2)
       .attr("width", leftMargin)
       .attr("dy", ".35em")
       .style("fill", "black")
       .style("text-anchor", "end")
       .text(function(d) { return d.category; });

       // Update existing bars
       bar.attr("transform", function(d, i) { return "translate(0," + i * barHeight + ")"; });

"rect")
       .on("mouseover", highlightBar)
       .on("mouseout", unhighlightBar)
       .data(data, function(d) { return d.category; })
       .transition().duration(1000)
       .attr("width", function(d) { return x(d.value) - leftMargin; });

       // Create the x-axis and append it to the bottom of the chart
       var xAxis = d3.axisBottom().scale(x);

       chart.append("g")
       .attr("class", "x axis")
       .attr("transform", "translate(0," + (barHeight * data.length) + ")")
       .call(xAxis);

       // Define functions for our hover functionality
       function highlightBar(d, i) {
  "fill", "#b0c4de");
            counter.text(d.category + ' ' + d.value);
       }

       function unhighlightBar(d, i) {
  "fill", "#4682b4");
            counter.text('');
       }
  };

  // Draw the chart for the first time with the active incident data
  $scope.updateBars($scope.activeData);
}


Below is a screenshot and pasted code for our HTML:


Post 4 HTML.png


<div class="centered-chart row">

     <h1>D3 Bar Chart</h1>

     <div class="chart-container">
          <svg class="chart"></svg>
     </div>

     <div class="button-container">
          <button class="btn" ng-click="updateBars(activeData)">Active</button>
          <button class="btn" ng-click="updateBars(inactiveData)">Inactive</button>
          <button class="btn" ng-click="updateBars(allData)">All</button>
     </div>

</div>




Below is a screenshot and pasted code for our CSS:


Post 4 CSS.png


.btn {
     background-color: white;
     border: 1px solid gray !important;
}

.chart rect {
     fill: #4682b4;
}

.chart-container {
     height: 200px;
}

.chart text {
     font: 10px sans-serif;
}

.centered-chart {
     text-align: center;
}

.counter {
     text-anchor: end;
}

.axis text {
     font: 10px sans-serif;
}

.axis path,
.axis line {
     fill: none;
     stroke: #000;
     shape-rendering: crispEdges;
}


Finished product


Now that we have everything in place, we can test it out. If you followed along on your own instance correctly, your widget should look similar to this:


Post 4 Bar Chart.png


If yours looks different, it could be that the data in our instances are different. Otherwise, check out the previous posts in this series and see if you missed a step.


We now have what we need to work with multiple data sets to create interactive data visualization widgets in the Service Portal. Keep an eye out for future posts that will build on these foundation blocks of D3 and Service Portal.



- ServiceNow GlideAggregate



For a full collection of my posts, visit



We are constantly working hard to improve the user interface and make it more intuitive for users, so the interface in ServiceNow has evolved a great deal over the years. Some of the most notable stops on the timeline include:

  • Classic: Original ServiceNow product interface
  • UI11: The Edge, split screen, bookmarks, and flyout windows added (2011)
  • UI14: Redesigned icons, buttons, and color themes added (Eureka release)
  • UI15: Redesigned icons and usability improvements added (Fuji release)
  • UI16: Redesigned banner, Favorites tab, and different access to application menus added (Helsinki release)


On certain versions of the product, users can switch between different versions of the user interface whenever they want. For example, if you are using the Helsinki release, you can switch between UI15 and UI16 by clicking the gear icon gear_settings.png in the banner frame, and then clicking the button at the top of the System Settings window.



There is currently a user interface version issue for customers using Safari 10.x and MacOS Sierra.


Version switch in Safari 10.x

When ServiceNow customers using MacOS Sierra and Safari 10.x log in to certain versions of a Fuji, Geneva, or Helsinki instance, they are automatically redirected to UI11. The user interface should not change, but remain on UI15 or UI16 based on the user's settings.


The UI version switch is occurring because Safari 10.x is currently a beta version and not the latest production release. Safari 10.x does not match the latest supported version of the browser (9.1.3), so we do not recognize the build. Because we cannot determine if the browser supports the latest product features, the old UI (UI11) is used, not UI15 or UI16. After Apple officially releases Safari Version 10.0, we will add it to the list of supported browsers and UI15 or UI16 (as specified by the user) will be displayed.


Safari 10 was released by Apple in September 2016. The Sierra MacOS was released by Apple in Fall 2016 (a public beta version was available in Summer 2016).


Options to resolve the issue

Currently, there are no workarounds for this issue. The only way to resolve the problem is to upgrade to a version in which the issue is fixed.


For complete information, including current lists of versions where the issue occurs and where it is fixed, see KB0597917: Safari 10.x in MacOS Sierra sends users to UI11 written by the fabulous tyler.jones. To keep up with the latest information, subscribe to KB0597917 and receive notifications when more information is available.

Additional information

On the Apple website:

General information about the different versions of the user interface and how to use them is available in the product documentation:

Last time, we looked at the GQL syntax and how to handle raw and display values of columns. This time, we'll look at the parser, which is closely tied to the GQL syntax, and see how GQL statements can be parsed into various parts for execution. We're not skipping the processor; we'll come back to it later.




Let's go step by step and see how we can parse a GQL statement. The below example we saw last time may help you follow the steps:


SELECT  -- sample GQL statement
  number [Number]
  , short_description [Title]
  , dv_state [State]
  , [Caller Email]
  , [Manager Email]
  , dv_cmdb_ci
FROM incident
WHERE active=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumber
LIMIT 5,10


A. Remove comments


A comment starts with two dashes "--" and ends with the end-of-line; all comments and their zero or more preceding white spaces will be removed first before further processing using


var no_comments = query.replace(/\s*--.*/g,'');


This is not foolproof in case there are double dashes elsewhere not meant to be for comments (e.g., column headings), but let's keep it simple for now.
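To see both the normal case and the caveat in action (the sample statements are my own):

```javascript
// Strip "--" comments, along with any white space preceding them.
var query = 'SELECT number [Number]  -- pick columns\nFROM incident';
var no_comments = query.replace(/\s*--.*/g, '');
// no_comments === 'SELECT number [Number]\nFROM incident'

// The caveat: a "--" inside a column heading is also eaten.
var tricky = 'SELECT number [A -- B]\nFROM incident';
var mangled = tricky.replace(/\s*--.*/g, '');
// mangled === 'SELECT number [A\nFROM incident'  (heading is truncated)
```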


B. Breakdown into parts


A GQL statement has the following four main parts in the following order we defined last time:


  1. SELECT - mandatory; always starts with this.
  2. FROM - mandatory
  3. WHERE - optional (also contains ORDERBY from encoded query strings)
  4. LIMIT - optional


Let's try to pick out these four parts using a regular expression. Regular expression may not be the best tool for parsing, but it may work adequately for our simple logic.


// parse "SELECT select_list FROM table WHERE encoded_query_string LIMIT [offset,] row_count" where WHERE and LIMIT are optional
var regexp = /SELECT\s*([^]+?)\s*FROM\s*(.+?)\s*(?:WHERE\s*([^]+?(?=\s*LIMIT)|[^]+?)\s*)?(?:LIMIT\s*([,\d\s]+?)\s*)?$/


Here's what this does in more detail:


var regexp = /
SELECT\s*([^]+?)\s*  // look for SELECT and capture what follows ([^]+?) including white spaces [^], but ignore zero or more surrounding white spaces \s*
FROM\s*(.+?)\s*      // look for FROM and capture what follows except white spaces (table names don't have spaces)
(?:WHERE\s*([^]+?(?=\s*LIMIT)|[^]+?)\s*)?  // look for optional WHERE clause; if found, capture what follows until LIMIT is encountered or to end of line
(?:LIMIT\s*([,\d\s]+?)\s*)?  // look for optional LIMIT clause and capture numbers separated by a comma
$/  // continue to the end of the last line


Let me clarify a few things, since regular expressions, while compact and powerful, are not always self-explanatory.


In Line 2, "[^]" represents ALL characters, both white space and everything else, so the SELECT list can be written on a single line or across multiple lines. This could also be written as "[\s\S]", but "[^]" is a shorthand (technically, the first caret inside square brackets means "exclude the following characters"; since no characters follow in this case, it excludes none, and so matches all characters). This leads us to use the lazy quantifier "?" in "([^]+?)" to tell it to "match as few times as possible" and not to capture trailing white space; without "?", it gets greedy and swallows everything up to the last "FROM" in the statement, since "[^]" matches all characters.
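A quick demonstration of why "[^]" is used instead of "." (the test strings are mine):

```javascript
// "." does not match newlines, so a multi-line SELECT list would break it;
// "[^]" (and its longhand "[\s\S]") matches every character.
var dotMatches = /a.b/.test('a\nb');         // false: "." skips the newline
var caretMatches = /a[^]b/.test('a\nb');     // true: "[^]" matches any character
var classMatches = /a[\s\S]b/.test('a\nb');  // true: the longhand equivalent
```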


Line 4 is a non-capture group, indicated by "(?:" meaning "match what follows but don't capture." However, what's in the inner parentheses "([^]+?(?=\s*LIMIT)|[^]+)" will still be captured and it'll capture the encoded query string that follows WHERE. The inner parentheses could've been simply "([^]+?)" if we mandated that encoded query strings must be on a single line. Since we're not mandating it, we have to use the positive lookahead "(?=\s*LIMIT)" to tell it to "stop when you see LIMIT". The second part in the inner parentheses "|[^]+?" means to "go all the way to the end if you don't see LIMIT". Line 4 ends with "?" to match 0 or 1 time since the WHERE clause is optional.


Line 6 ends with an anchor, the end of last line "$", to tell lazy quantifiers not to be too lazy and go all the way to the end.


This works well for both single or multiple lines, and whether white spaces are removed or not. We'll add only a simple syntax checker later. A good way to test if a regular expression works as intended is to use an online tool like hifi RegExp Tool; it shows the capture groups in different colors, which is useful for testing. Also, you can view a saved copy of this regex at


When this regular expression is executed, it returns four capture groups corresponding to the four main parts of a GQL statement.




The SELECT part consists of a list of one or more column names, with optional column headings enclosed in square brackets, separated by commas. We'll use


// split column list "column_1, column_2 [column_2_heading], column_3"
var columns = select.split(/\s*,\s*/);


to get an array of columns. For each column, we'll check for column heading and, if it exists, parse it out using:


// parse out "column_name [column_heading]" where "[column_heading]" is optional
var regexp = /([.\w]+)\s*(?:\[(.+)\])?/
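Putting the split and the regex together, a stand-alone sketch of the column parsing might look like this (falling back to the column name when no heading is given is my own assumption):

```javascript
// parse a sample SELECT list into { name, heading } pairs
var select = "number, short_description [Title], caller_id.email [Caller Email]";
var regexp = /([.\w]+)\s*(?:\[(.+)\])?/;

var columns = select.split(/\s*,\s*/).map(function(col) {
  var m = col.match(regexp);
  // m[1] is the column name (dot-walking allowed); m[2] is the optional heading
  return { name: m[1], heading: m[2] || m[1] };
});

console.log(columns[1]);  // { name: 'short_description', heading: 'Title' }
```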




The FROM part simply contains the table name, which we can pass on to either of


var gr = new GlideRecord(table);
var gr = new GlideRecordSecure(table);  // enforce ACL


from Scoped GlideRecord.




The optional WHERE part contains an encoded query string with filter and/or order conditions. We can pass this straight to


gr.addEncodedQuery(encoded_query_string);


from Scoped GlideRecord.




The optional LIMIT part gives the offset, which is optional, and row_count. After parsing the integers using parseInt(), we can pass these on to either of


gr.chooseWindow(offset, offset + row_count);


from Scoped GlideRecord.
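As a sketch of that parsing step (plain JavaScript; defaulting a missing offset to 0 is my own assumption), before handing the numbers to chooseWindow():

```javascript
// Parse "LIMIT [offset,] row_count" values such as "5,10" or "10"
function parseLimit(limit) {
  var parts = limit.split(/\s*,\s*/);
  var offset = parts.length > 1 ? parseInt(parts[0], 10) : 0;  // offset is optional
  var rowCount = parseInt(parts[parts.length - 1], 10);
  return { offset: offset, rowCount: rowCount };
}

console.log(parseLimit("5,10"));  // { offset: 5, rowCount: 10 }
console.log(parseLimit("10"));    // { offset: 0, rowCount: 10 }
```

The result would then feed gr.chooseWindow(offset, offset + rowCount) as shown above.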


Next time, we'll continue with the rest of the design.


Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

I ran into this issue by accident, and after looking through the community I noticed that I'm not the only one looking for a good solution for this.


The scenario is like this: we made a nice-looking end user portal, either in CMS or the new Service Portal. Now we have links pointing to the portal using a suffix like "/ess" or "/sp" or whatever combination you can come up with. You might even have a special DNS name pointing to it.


The problem occurs when curious users get creative and want to see where they end up when they remove this suffix, and tada, they get inside the UI that our technicians are using. And that is something we don't want to happen. I have found solutions like having a UI Script that checks if the user has roles and redirects if not, a UI macro doing pretty much the same, a specific homepage showing the users that they are in the wrong spot, etc. But no one is happy with the workaround they made.


We all know that ServiceNow has A LOT of system properties, and what I think is really annoying is that even if you go through them all at sys_properties.list, you still haven't seen them all. Some of them aren't there OOB and you need to create them yourself to get the magic going.


And I bet you start to see where I'm going here.


There is a property to handle this. To redirect users without any roles. But you need to create it yourself.


So go into the sys_properties table, press the "New" button, and create a record that looks like this:



So the name of the property is "glide.entry.loggedin.page_ess" and the value should be the suffix for your portal.
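In record form, it would look something like this (the "/sp" value and the description are only examples; use your own portal's suffix):

Name:        glide.entry.loggedin.page_ess
Type:        string
Description: Redirect users without any roles to this page after login
Value:       /sp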


So now, if a user without any roles goes to the instance URL, they will get redirected automagically to the portal.







ServiceNow Witch Doctor and MVP
For all my blog posts:

Last time, we discussed the concept behind GQL as being a web service with SQL-like syntax and listed use cases along with high-level requirements. We want to use it similar to JSONv2 by sending a command using GET and retrieve a result set in JSON. This time, let's look at some design considerations.




Here are some components we need to consider:


  1. GQL syntax - define GQL syntax to be used
  2. Processor - handle bidirectional HTTP traffic
  3. Parser - parse GQL query for processing
  4. Retriever - query and retrieve result set
  5. Encoder - encode result set for output


Let's also add that this will be a scoped application. The design will evolve over time as we start simple and refactor as needed.




Under the hood, we know that all database transactions in ServiceNow are being done in SQL. However, we don't have direct access to it. Even if we did, it may not prove to be useful in all cases. For example, selecting a display value may require joins or subqueries and the SQL statement may quickly become unwieldy. Instead, we'll use GlideRecord or, more specifically, Scoped GlideRecord. This means GQL will be constrained by what we can do with Scoped GlideRecord.


In Part 1, our requirements called for


  1. Focus on SELECT for now.
  2. Don't need complex Subqueries or CTEs (Common Table Expressions) for now.
  3. Select the raw and/or display value of a field.
  4. Be able to rename column headings.
  5. Support dot walking of reference fields.
  6. Be able to filter the result set.
  7. Be able to sort by one or more columns.


In its simplest form, we have


FROM table


This returns all columns from table; we'll define what "all" means later when we discuss raw values vs. display values. Since we'd also like to embed this in a query string and use GET, we may rewrite this as




Let's further stipulate that white spaces are optional and all keywords must be in caps. This also helps with readability since ServiceNow table and column names are in lowercase. So the above can be rewritten as




In this article, for readability, we'll continue using white spaces, though. Now, let's add the optional WHERE clause. Instead of reinventing it, we can just use Encoded Query Strings, which I've always found to be compact and powerful:


FROM table
WHERE encoded_query_string


Encoded Query Strings may also contain one or more ORDERBY clauses for descending or ascending order, so we can sort the result set using them. We'll look at Encoded Query Strings and the syntax in more detail later.


Let's also add optional LIMIT, which we'll borrow from MySQL:


FROM table
WHERE encoded_query_string
LIMIT [offset,] row_count


Now, let's turn to the field list instead of the asterisk. One or more columns may be listed, using dot-walking if needed. Dot-walking allows access to related tables through reference fields (although dot-walking can be nested multiple levels, the recommended limit is three levels). We'll also optionally allow renaming column headings by enclosing a new name in square brackets (this is not the same as aliases in SQL, since the renamed column headings don't act as new column names that can be used in queries):


SELECT column_1, column_2 [column_2_heading], column_3
FROM table
WHERE encoded_query_string
LIMIT [offset,] row_count


Lastly, let's allow for comments, using double dashes, as in SQL, in multi-line mode:


SELECT column_1, column_2 [column_2_heading], column_3  -- this is a comment
FROM table
WHERE encoded_query_string
LIMIT [offset,] row_count


Summing it all up, here's an example:


SELECT  -- sample GQL statement
  number [Number]
  , short_description [Title]
  , state [State]
  , [Caller Email]
  , [Manager Email]
  , cmdb_ci
FROM incident
WHERE active=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumber
LIMIT 5,10


Without white spaces, which may be useful for including in a GET query string, it is


SELECTnumber[Number],short_description[Title],state[State],[Caller Email],[Manager Email],cmdb_ciFROMincidentWHEREactive=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumberLIMIT5,10


Let's work with what we have so far for now and discuss other features later. If you'd like anything specific, please leave a comment below.




Some fields have both raw and display values. The State field for Incident, for example, has numeric raw values while the display value might say "Resolved". Other field types, such as date, duration, etc., also have both raw and display values. Also, a reference field's raw value is the sys_id from the related table while the display value is from the display value field of the related table.


In our SELECT statement, we'd like to specify raw and/or display value of each field. JSONv2, by default, returns raw values unless the "displayvalue" option is set to "true" (display, not raw, values are returned) or "all" (both raw and display values are returned and the display column names are prefixed with "dv_").


In GQL, let's return raw values unless column names start with "dv_", keeping consistent with JSONv2 (the same convention also applies to ODBC SQL). Since what we want in most cases is display values, this will add a bit of extra typing, forcing us to rewrite the above GQL as follows (see "dv_" for state and cmdb_ci):


SELECT  -- sample GQL statement
  number [Number]
  , short_description [Title]
  , dv_state [State]
  , [Caller Email]
  , [Manager Email]
  , dv_cmdb_ci
FROM incident
WHERE active=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumber
LIMIT 5,10




FROM table


For this shorthand form without a SELECT list, we'll return both raw and display values, with the column names of the display values starting with "dv_", similar to JSONv2 with "displayvalue" set to "all".
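To make the convention concrete, a single hypothetical result row for the FROM-only shorthand might look like this (the values are invented; the envelope around the rows follows JSONv2 and isn't shown):

```javascript
// Hypothetical single row for: FROM incident (no SELECT list)
// Raw values keep the field name; display values get the "dv_" prefix
var row = {
  number: "INC0010001",
  dv_number: "INC0010001",
  state: "6",                                     // raw value of a choice field
  dv_state: "Resolved",                           // its display value
  cmdb_ci: "b0c4030ac0a800090152e7a4564ca36c",    // reference field raw value: a sys_id
  dv_cmdb_ci: "SAP Payroll"                       // display value from the referenced record
};

console.log(row.dv_state);  // "Resolved"
```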


Next time, we'll continue with the rest of the design. Please note that the design may change over time, especially with your input/feedback.


Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

Let's learn today how to use one of the powerful concepts of Service Portal called Record Watcher. This feature allows a widget developer to respond to table updates in real-time.


Now let us create a simple widget to keep count of our Critical Incidents to find out how Record Watcher works.


  • Step 1: Create a widget

       Click on Widgets under the "Service Portal" application menu (or you can also create widgets using the widget editor). Create a new widget by clicking on the New button.

          Below is the example widget I have created.





Please use the below snippets to create your widget




<div class="critical-incidents col-md-2">
    <div class="panel panel-default">
        <div class="panel-heading">Number of Critical Incidents</div>
        <div class="panel-body text-center">
            <div class="circle-bg" ng-class="{'red-bg' : incidata.changed}">
                <p class="inci-number">{{data.incidentCount}}</p>
            </div>
        </div>
    </div>
</div>

<div class="col-md-9">
    <p class="changed-rec">Changed Record {{incidata.changedRecord.display_value}}</p>
</div>



ng-class is triggered whenever the data changes, switching the background to red.

We also print the changed/inserted record information when the watcher fires.


Client Controller:


function(spUtil, $scope, $timeout) {
    /* widget controller */
    var c = this;

    $scope.incidata = {};
    $scope.incidata.changed = false;

    spUtil.recordWatch($scope, "incident", "priority=1^state!=7", function(name, data) {
        console.log(name, data);  // log the watcher payload to the browser console

        $scope.incidata.changedRecord = data;

        $timeout(function() {
            $scope.changeBg();
        }, 500);
    });

    $scope.changeBg = function() {
        $scope.incidata.changed = true;

        $timeout(function() {
            $scope.incidata.changed = false;
        }, 500);
    };
}

As you can see, we need to inject spUtil into our client controller. We have defined our watcher above like

spUtil.recordWatch($scope, tableName, filter, function(name, data) {
    // name: event information
    // data: information about the updated or inserted record
});

The second argument is the tableName you want to attach the watcher to.

The third argument is the filter for the table.

The fourth is a callback function that is executed when the watcher fires; this function has two arguments.


In our example, the second argument is the incident table, and the filter is "priority=1^state!=7", meaning we only care about incidents with critical priority (1) that are not closed (state 7).


Inside our watcher, we have a timeout just to change the background color.



CSS:


.critical-incidents {
    margin-top: 20px;
    width: 300px;

    .inci-number {
        font-size: 36px;
        margin-bottom: 0px;
        padding: 25px;
        color: #E51B24;
    }

    .circle-bg {
        margin-left: auto;
        margin-right: auto;
        width: 100px;
        height: 100px;
        border-radius: 50%;
        background: #fff;
    }

    .red-bg {
        background: #E51B24;

        .inci-number {
            color: #fff;
        }
    }
}

.changed-rec {
    font-size: 18px;
    margin-top: 30px;
}


Server Script:



(function() {
    /* populate the 'data' object */
    /* e.g., data.table = $sp.getValue('table'); */
    data.incidentCount = '';

    var gr = new GlideRecord('incident');
    gr.addQuery('priority', 1);
    gr.addQuery('state', '!=', 7);
    gr.query();

    data.incidentCount = gr.getRowCount();
})();
After populating your widget with the above code, go ahead and save it.



  • Step 2: Add widget to a Service Portal page

     Please go ahead and add the widget we created above to a page using the designer.

     After you have added this widget to a page, let's check out how it works.




I have 10 critical incidents in my instance, thus the page looks like this for me.


Now let's try to create a new Critical Incident to see if our watcher fires.




As you can see, our record watcher fires every time a new Critical Incident is created. Whenever the record watcher fires, information about the record is returned as well.

We are logging this data to the console; if we check the browser console, we can see what the data of the inserted/changed record looks like.


Record watchers in Service Portal are very powerful and can be used to display real-time data.




No, GQL isn't Google Query Language; it's Glide Query Language, and it's exclusively for ServiceNow.


Having been a long-time SQL user who appreciates the simple yet powerful construct of the language, I occasionally miss being able to use it when I'm working with web services or data export in ServiceNow. Don't get me wrong; I like using the JSONv2 Web Service. But how can I select only the columns I want? Or how can I rename column headings? Can I show only the display values? Can I have calculated columns? Hmm, none of these are trivial to do with web services, while they are trivial in SQL. So I dream. But I don't dream too big, in that I really only want to be able to SELECT data (read only) for now, and not UPDATE, DELETE, etc. Yes, there are ways to work with SQL today; although the infamous gs.sql() has now been retired (I liked using it occasionally), ODBC still works. But it's not a web service (technically it is SOAP under the cover, but we still have to deal with installing and configuring the ODBC driver, etc.). And, yes, there are other ways to partially accomplish this, by using Export Sets, Database/Form Views, etc., but you have to configure them on the server, typically as an admin. Ideally, I'd like to have the convenience of both web services and SQL so data can be readily consumed with ease. So begins my quest.


I'd like to send a SQL-like query, similar to the example below, to ServiceNow using web service calls and receive the result set:


SELECT
  number [Number]
  , short_description [Title]
  , state [State]
  , [Caller Email]
  , [Manager]
FROM incident
WHERE active=true
ORDERBY priority


Some use cases might include:


  1. Retrieve Incident (or any table/view) data with specific columns in the order I want.
  2. Build a dynamic report.
  3. Create a CSV file with custom column headings.
  4. Add a calculated column; for example, convert one currency to another.
  5. Apply a custom format to a column.


Here are some high-level requirements:


  1. Preserve the JSONv2 API as much as possible.
  2. Utilize the GlideRecord syntax as much as possible.
  3. Mimic the SQL syntax as much as possible.
  4. Send SQL scripts using GET.
  5. Focus on SELECT for now.
  6. Don't need complex Subqueries or CTEs (Common Table Expressions) for now.
  7. Select the raw and/or display value of a field.
  8. Be able to rename column headings.
  9. Support dot walking of reference fields.
  10. Be able to filter the result set.
  11. Be able to sort by one or more columns.


The above lists may not be exhaustive and I'll update them over time. If you have more ideas, please add comments below. If you like this concept and you might be able to use it, please hit Like. This will be a highly collaborative effort, so your support and feedback would be absolutely essential to get this moving forward.


Next time, we'll look at some design considerations.


Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

List v3 is a very powerful tool that was recently added to ServiceNow’s growing list of plugins. With its redesigned filter interface, it can be used to bring real-time communication and data to you and your users. Most ServiceNow clients currently utilize List v2, and it is applied to all new instances. With List v3, you will need to activate the List v3 and List v3 Components plugins in order to apply it to your instance. The differences between List v2 and List v3 are pretty obvious just by looking at them.


From time to time, great plugins like List v3 can truly enhance one aspect of the platform while causing something else to no longer function as it should. In this case, it's the functionality of List v3 with iframes in CMS; as our good friend tltoulson noted regarding List v2 in CMS, iframe issues may never go away.


Your list view after upgrading to List v3

This particular iframe issue is specific to List v3. As you upgrade to List v3 in Helsinki, you may notice that the list view in your CMS sites looks a little different than you’re used to…and a lot smaller.



Set the iFrame size manually

For example, if you have List v3 enabled and you navigate to https://[instance_name], you will notice the iframe does not resize. In Istanbul, content pages with list records will be fixed to resize upon opening. As a temporary workaround, you can directly modify the style sheet attached to your site (or page) with the following:


iframe {
    min-height: 50em;
}


A common minimum height that I have seen is 50em, which I have used in this example, but this can be adjusted to your liking. Here's an example of what it looks like in the CSS style sheet:



By setting the iframe minimum height to 50em or larger, you remedy the tiny iframe that didn't really allow us to see or do much. When I set the iframe to open at 50em, the list of records opens to show us this:



This will allow the users to see the list view at the height that is manually set by you, and based on the size of the list, they may or may not need to scroll down.


Disable List v3

Another workaround is to disable List v3 for specific lists being utilized on your CMS site. List v3 allows you to disable its functionality on individual lists instead of shutting it off entirely across all list views. To do this, go to the list you want to disable List v3 on, click the menu icon, and go to ‘List Control’, as shown below:



At the bottom of the List Control page under List v3, you will see the ‘Disable list v3’ option. Check it and update.



This list will now load under List v2, which will also apply to the iframes in our CMS site.


I personally recommend utilizing the CSS modification so that List v3 can be used across the platform and not just on specific lists. When users are able to utilize the features on one table but not the other, it can become confusing and really affect the user experience.


As mentioned earlier, this will be fixed in Istanbul. For more information on List v3, be sure to check the docs site as well as these other great blogs:

Top 5 CMS solutions for sizing iFrames CMS101: Content Management System overview

I have come across several questions posted on the Community related to the CSV (comma-separated values) format and felt it might be helpful to review the specifications and some gotchas. You might ask what's so complicated about the CSV format. After all, it's simply field values separated by commas. While it's true, there are a few more details that might come handy if you ever have to troubleshoot CSV import/export issues.




The CSV format is quite simple and defined in IETF RFC 4180 Section 2. While I encourage you to read the original text, here's a condensed version:


  1. Each line is terminated by CRLF.
  2. The last line doesn't need to be terminated by CRLF.
  3. An optional header line is allowed in the first line, useful for adding field names.
  4. Each line may contain one or more fields, separated by commas. All lines must contain the same number of fields.
  5. Each field may be enclosed in double quotes (optional, but see below).
  6. Fields containing line breaks (CRLF), double quotes, and commas must be enclosed in double quotes.
  7. If a field is enclosed in double quotes, all double quotes inside the field must be escaped by another double quote (replace all " with "").


Items 6 and 7 deserve emphasis, as they tend to cause issues in malformed CSV data.




Here are some comments and gotchas to watch out for, in addition to 6 and 7 from above:


  1. Depending on locales, commas may be used as decimal points or thousand separators for the display values of numeric/currency fields. Be sure to account for those cases. This also applies to date fields.
  2. If you're seeing more fields than expected, look for commas in unenclosed field values.
  3. If field values are split across multiple lines, look for unenclosed CRLF.
  4. While RFC 4180 only specifies commas, some CSV formats (including some cases in ServiceNow) allow other delimiters.
  5. RFC 4180 Section 2 includes this caveat: "While there are various specifications and implementations for the CSV format, there is no formal specification in existence, which allows for a wide variety of interpretations of CSV files.  This section documents the format that seems to be followed by most implementations."




Here's a simple JavaScript function based on RFC 4180 that can be used in processors or script includes for generating CSV data:


/**
 * Takes a raw field value and returns a CSV-escaped value
 * based on RFC 4180 (Common Format and MIME Type for CSV Files, October 2005)
 * @param {string} raw - raw field value
 * @return {string} escaped CSV field value, if applicable per RFC 4180
 */
function escapeCSV(raw) {
  var value = (raw + '').replace(/"/g, '""');  // escape all double quotes
  if (/[\r\n",]/.test(value)) value = '"' + value + '"';  // if value has special characters, enclose in double quotes
  return value;
}

Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

Let's talk a bit more about Widget options! b-rad already published a great article about the usage of options, which I always like to compare to System Properties. They can hold values that make content a bit more dynamic, rather than hard-coding values.


The available option types, as of today, are:

  • Integer
  • String
  • Boolean
  • Choice
  • Field name & Field List (both dependent on the "table" option on the according Widget instance)
  • Reference


Within this post we want to look a bit more into the Reference option type.


Use Case #1 - Understanding the Reference option

Before we start with advanced options, let's just have a look at what you could do with a standard reference type option. Assume we want to build a Menu block based off the Instance with Menu [sp_instance_menu] table. Instead of writing a Widget for each and every Menu block, how about we just "optionize" (yes, I think that should be a word moving forward in the Service Portal context) it so that an SP Admin can simply pick the Menu from a reference field, rather than changing a sys_id in the Widget?


Here's what we could do (no need to rebuild this yourself, it's more about how to use the reference option - you will build a widget later on):

  1. Let's assume we'll create a widget called "Menu Block"
  2. Then we will create a reference option in the widget pointing to the Instance [sp_instance_menu] table
  3. Have the Server Script utilize the newly created option value to retrieve the menu record, which will be defined in the widget instance. Retrieve the basic values of that menu with the $sp server-side API and have the HTML display the values within the data object
  4. Create a new Menu record (for demo purposes I added an Image field to the sp_instance_menu table, that we'll pull into the Widget)
  5. Create a widget instance by dragging & dropping the widget onto a page & define a menu
  6. Modify the Menu of an existing widget instance by Ctrl + right-clicking on the widget instance (this does not work in the Designer, you'd have to be on the actual page)
  7. Voila, we changed the instance of that menu block to a different menu.


This example is not meant to be particularly pretty, it should more explain the concept of how to effectively utilize reference options.

I also refrained from providing the code to evaluate the child items for this very reason.


So far so good - now that we know how to use Reference options, let's move on to the advanced use-case.

Probably almost everybody of you has used Reference Qualifiers in the platform view/backend (you name it), to filter down the results of a reference field - e.g. just return all operational CI's on a configuration item reference field or just display active callers on a user reference field.


Now, how would we apply those to a Service Portal option? The bad news: you can't; the option schema does not accept reference qualifiers. The good news: you can create your own "option tables". You might have come across the Data Table field, which is defined on the Widget record itself. By default you will mostly find Instance [sp_instance], but you'll also find, e.g., Instance of Simple List [sp_instance_vlist], which is used by the Simple List widget. All fields on the table defined as the data table are available as options for the widget that uses this table.



In addition, you will still be able to define options via the Widget Options. Be aware that those are not automatically fields on the according instance table; they will be transformed into options represented in JSON format. Here is an example of how this would look, based on the first use case described in this post.



This is great for multiple reasons:

  1. As this is a "real" field, not a JSON option, we can apply Reference Qualifiers!
  2. Looks like a form, doesn't it? Well, it is! That also means we can leverage UI Policies to conditionally hide & show options. Sweet! That's something else we would not be able to achieve if we stuck to the default Option Schema.


Use-Case #2 - Creating your own Instance Table to apply Reference Qualifiers & leverage UI Policies

Enough talk, let's create a custom "options" table.

  1. Navigate to System Definition > Tables, click Create New
  2. Pick a Name & Label for your new table; given the naming convention of existing tables, I'd suggest something along these lines: "Instance of xyz" / "Instance with xyz". Personally, I have always gone for SP <Name of the Widget> Instance so far.
  3. Extend the new table from Instance [sp_instance]
  4. If you want, create a new Module for it within the Service Portal application. Make sure to disable the role creation, numbering, etc. - we don't really need those in the context of options.
  5. Click Submit
  6. The table will inherit all fields from the Instance table, so make sure you do not create duplicate fields for something that's already there (e.g. HREF/URL, Bootstrap Color etc.).
  7. Now you can go ahead and create whatever field you want from the Tables view, via the Form Designer or Form Layout - I created the following fields for the demo:
    1. Reference - to Business Services [cmdb_ci_service]
    2. Override Styles - Boolean
    3. Font Color - Color
    4. Border Color - Color



If you navigate to the form of that table, you will most likely see the following (unless you directly created and formatted the form via Form Designer / Layout).



At this point we can start applying our UI Policies and the Reference Qualifier on the Reference field. Let's start with the UI Policy, which should hide the color fields if Override Styles is not checked.



For the reference qualifiers you can Right-click on the label of the reference field and then choose Configure Dictionary.

If you switch into the Advanced View of the dictionary record you will also be able to define an Advanced Reference Qualifier (e.g. calling your own filter function defined in a Script Include), rather than only a Simple Reference Qualifier.




At this point we have prepared our option/instance table: once you've done this, you will be able to choose from the fields on that table right below Data Table, using the Fields field list.

The Option Schema will not automatically expose all fields as options - quite the contrary: it will only use the fields that you pick within that field list and expose them as options on the widget instance. Another thing I found is that the field also has to be on the Widget Instance form; only selecting it within Fields won't work.


Let's create a new widget to test our new instance table. Open the Widget editor via Service Portal > Service Portal Configuration or

Give your new widget a name, e.g. "Widget Option Test". If you create a Test Page, be aware that you will have to delete the widget instance on the page and re-add the widget after you change the Data Table; otherwise it will still point to Instance, rather than to your new table.

Once the widget is created navigate to the platform view/backend and open up the widget.




Scroll down to Data Table & Fields and define your new option table. Also select the fields (and essentially now options) we created.




Now we are almost done - but at this point we have a blank widget, that wouldn't display anything, so let's just add some HTML and data based on the selected service.

Again, this widget will not really make a whole lot of sense, but should rather convey what you will be able to do.


Go back to the Widget Editor. If you had your Widget Editor open when you associated the new data table with your widget record, make sure to reload the widget editor so that it has the correct context, i.e. the new data table.


Write the following code into the Server Script:

(function() {

  // initialize serviceValues object within the data object
  data.serviceValues = {};

  // "Business Criticality" indeed has only one "s" in the technical field name; not a typo
  var fields = "name, owned_by, busines_criticality, version, used_for, location";

  // the option name is the technical field name on the table
  var service = options.u_business_service;
  var grService = new GlideRecord("cmdb_ci_service");
  if (grService.get(service)) {
    data.serviceValues = $sp.getFieldsObject(grService, fields);
  }

})();
If you are curious what $sp.getFieldsObject() does, I suggest doing a Ctrl + Right-click on the Widget instance and then doing a Log to Console: $


Open the developer tools of your browser, go to Console and review the serviceValues object, which will contain e.g. the display values and labels of all the fields we defined.



Note: you can also use $sp.log(), which only outputs if the user has the sp_admin role or is impersonating. You can even use console.log() in the server script to debug objects, variables etc. If you are debugging an object make sure not to prefix it with a string (i.e. console.log("This is my serviceValue object: " + data.serviceValues)), since that would convert it into a string and you would not be able to move through the hierarchy within the console.



Now, we write the following HTML:

  <div class="panel panel-default">
    <div class="panel-heading" ng-style="{'background-color': options.u_border_color}">
      <h3 class="panel-title">{{}}</h3>
    </div>
    <div class="panel-body">
      <dl>
        <!-- Because the <dt> tag closes, we extend the ng-repeat range by using ng-repeat-start / ng-repeat-end; otherwise the <dd> tag could not access the repeated values -->
        <dt ng-repeat-start="item in data.serviceValues" ng-style="{'color': options.u_font_color}">{{item.label}}</dt>
        <dd ng-repeat-end>{{item.display_value}}</dd>
      </dl>
    </div>
  </div>


Save your Widget and go to Service Portal > Service Portal Configuration and open up the Designer or the platform view of the Widget Instance.

Navigate to your test page and modify the widget instance options: provide a title for the widget, pick a service, and choose a font color via the color picker (another great side effect of using real fields instead of JSON options), plus a border color that we'll use for the panel heading (a slight misuse of the name, but you get the idea). The title is not used within the widget itself, but it's always good to have one when you browse the list of widget instances; it helps distinguish between instances based on the same widget.



Go to your test page and reload it. It should now look like this, though it will most likely contain different data depending on the service you chose:



At this point you have successfully created a new Widget Instance table, added fields to be used as options on that particular table and you made use of them within the Widget. I hope you found this article useful and that it will help you to create more dynamic widgets in the future.

YouTube video:


The Live Coding Team: josh.nerius, dave.slusher, ctomasi


In last week’s Live Coding Happy Hour, we picked up where we left off the previous week and figured out how to programmatically get an OAuth Access Token from the oauth_credential table for our integration.


Video Index



Useful Resources





A Better Approach for Getting Access Tokens


In the Live Coding session, we used a manual approach to query the oauth_credential table to get an Access Token using GlideRecord, but there’s actually a much better way to accomplish this using GlideOAuthClient.


Before (manual GlideRecord lookup)

function getToken(profile, user) {
    var gr = new GlideRecord('oauth_credential');
    gr.addQuery('oauth_requestor_profile.oauth_entity_profile', profile);
    gr.addQuery('user', user);

    if ( {
        gs.debug('found it');
        return gr.getValue('token');
    } else {
        gs.debug('no matching token found');
        return null;
    }

var token = getToken(oauthProfileId, userId);


After (using GlideOAuthClient API)

// Initialize client and get a token object  
var oauthClient = new sn_auth.GlideOAuthClient();
var tokenObject = oauthClient.getToken(restMessageId, oauthProfileId);  
var accessToken = tokenObject.getAccessToken();  
gs.debug('Access Token: ' + accessToken);



Handling a non-expiring token


I mentioned that I’d show you how to handle the non-expiring token sent by, but didn’t get around to it during the broadcast.

To accomplish this, we need to add a single line to the postProcessAccessToken method in our custom OAuthBitlyHandler Script Include:


// Manually set the expire time to 1 year in seconds (60 * 60 * 24 * 365)
paramMap.put('expires_in', '31536000');


In this example, I’ve set the expiration time to 1 year, but you should set this to whatever value makes sense for you and your password rotation schedule (you should regenerate this token periodically for security reasons, even if the API doesn’t enforce token rotation).

Josh Nerius | Developer Evangelist | @NeriusNow | Get started at

In July 2016, Microsoft issued the critical security bulletin MS16-084. This was an important security update for users of Internet Explorer 11. The update fixed vulnerabilities that allowed remote code execution if a user viewed a specially crafted webpage with Internet Explorer. Attackers that made it into systems where the current user was logged on as an administrator could gain control of the system and, for example, install programs, delete data, and create new accounts.




The MS16-084 update changed security settings so URLs that contain a javascript() function call are now blocked. This has some ramifications in ServiceNow.


Determining if the MS16-084 security update affects your instance

After installing the MS16-084 patch, if you access a Fuji, Geneva, or Helsinki instance using Internet Explorer 11 and try to apply a template to a record, the pop-up window for selecting templates is blank. The issue has not been reproducible consistently, but there are multiple reports of the blank template window. For example, customers have seen the issue when creating a new incident and selecting Template > Apply template from the context menu.


How to work around the blank templates window

Try using one of the following workarounds if suitable for your organization:

  • Upgrade to a Geneva (or later) release and use UI16
  • Use a different browser such as Chrome or Firefox
  • Navigate to Internet Options > Security Settings > Custom Level and clear the option Enable XSS Filter
  • Back out the MS16-084 Critical Patch


Additional information

General information about forms and templates is available in the product documentation:

I got a question about how to put a more advanced condition on a UI Action.

In this specific case, we want a UI Action that only shows on an incident if there aren't any active child incidents connected to it.


This example is pretty straightforward; use it as a template for the more complex situations you will get yourself into =)


First we need to create the Script Include. If you don't know exactly how to write the code that decides whether your function should return true or false, play around with it in Scripts - Background or a Fix Script. That way you can quickly and easily test the code to make sure it returns the right values.


I have chosen to call my Script Include condUtil, so the name says something about what it does, and I can later add more condition-related functions to it.


Don't forget to at least write a few lines about what you expect the function to receive as input and what it is supposed to return.


Here is what the Script Include looks like:
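The original shows this as a screenshot, so the exact code below is a sketch reconstructed from the description: the function name, the condUtil name, and the true-when-no-active-children behavior come from the post, while the parent_incident query field is an assumption about how the child incidents are related.

```javascript
var condUtil = Class.create();
condUtil.prototype = {
    initialize: function() {},

    /**
     * Input: the sys_id of an incident.
     * Returns: true if the incident has NO active child incidents, otherwise false.
     */
    checkIncidentChilds: function(incidentSysId) {
        var grInc = new GlideRecord('incident');
        grInc.addQuery('parent_incident', incidentSysId); // assumed relationship field
        grInc.addActiveQuery();
        // if any active child exists, the condition should be false
        return !grInc.hasNext();
    },

    type: 'condUtil'
};
```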




As you can see, I created a function called "checkIncidentChilds" which takes one parameter, the sys_id of an incident, and uses it to check whether that incident has any active child incidents connected to it.

It returns true or false depending on whether any active child incidents exist.


Now I can use this in the Condition field of a UI Action like this:
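The condition shown in the original is a screenshot; based on the names mentioned in the post, the Condition field would contain a one-line call along these lines (the exact call syntax is an assumption):

```javascript
new condUtil().checkIncidentChilds(current.sys_id)
```

Since UI Action conditions are evaluated server-side, current here refers to the record the form is displaying.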


Since it returns true if the incident doesn't have any active child incidents, the button will be visible; otherwise it returns false and the button will not be shown.


By doing this you can easily create reusable conditions which can perform complex checks and simply return true or false.


I hope this will get your imagination going and make some magic happen.


ServiceNow Witch Doctor and MVP
For all my blog posts:
