
ServiceNow offers a number of ways to control what data a user sees in forms and lists, and how they see it. This can be based on the roles or groups the user has or is in, or on the type of record being viewed. In this blog post I will go through View Rules and Navigation Handlers, as well as a scenario that may cause your View Rules not to be applied. You can also control the view applied via a global Business Rule; however, that approach is not supported by ServiceNow support.

 

View Rules

View Rules are a simple condition-based method to determine which view a user will be presented with when viewing a form or list. If the user has the ability to switch views, the view they select will normally be saved and used for each subsequent visit to a record on that table. A View Rule overrides the user preference and always forces the same view of the form or list.

 

[Image: view rules.jpg]

 

You can specify the device this should run on (browser, mobile, etc.) and the desired view, among other options. Starting with Geneva, scripts can also be used in View Rules.

 

Navigation Handlers

A Navigation Handler is essentially a scripted View Rule, and it runs each time data from the specified table is requested in the form view. The scripts typically take the form below and are fairly self-explanatory, checking for a role and specifying the view:

var gr = new GlideRecord(hr.TABLE_CASE);
if (gr.get(g_uri.get('sys_id'))) {
    if (!gs.getUser().hasRoles())
        g_uri.set('sysparm_view', 'ess');
    else
        g_uri.set('sysparm_view', '');
}

answer = g_uri.toString('hr_case.do');



The script above forces the ESS view for users with no roles and uses the default view for all other users. This example is taken from an out-of-the-box Navigation Handler installed with the HR plugin.

 

View Rules vs. Navigation Handlers, which takes precedence?

The order in which View Rules and Navigation Handlers are applied is controlled by a system property that was introduced in Dublin. The system property is: glide.ui.view_rule.check_after_nav_handler

 

The type is "true/false" and the value must be set to "true" to process View Rules AFTER Navigation Handlers. If it is set to false, then Navigation Handlers will take precedence over View Rules. If this system property does not exist in your instance, then regardless of any View Rules for the table in question, the Navigation Handler will always take precedence.

 

There is a further caveat with this system property: it will only override the Navigation Handler if the Navigation Handler's scripted function does not return an answer. In the example script above, the property will have no effect, as the Navigation Handler will always return an answer because the "answer" line sits outside the if statement.

 

To force the Navigation Handler script above to honour View Rules for the table, you will need to add the property above (set to true) and also update the code to only return an answer when the view needs to be changed or forced:

var gr = new GlideRecord(hr.TABLE_CASE);
if (gr.get(g_uri.get('sys_id'))) {
    if (!gs.getUser().hasRoles()) {
        g_uri.set('sysparm_view', 'ess');
        answer = g_uri.toString('hr_case.do');
    }
}

 

The order in which the View Rules and Navigation Handlers are executed depends on both the system property and the structure of the Navigation Handler script. View Rules will be applied when the system property (glide.ui.view_rule.check_after_nav_handler) is true and the Navigation Handler script does not return an answer. The Navigation Handler will take precedence if its script returns an answer, or if the system property is set to false or does not exist.

 

 

In the case that you run into View Rules not being followed for individual records on the Request [sc_request] table, see View Rules are not followed for individual records on the Request [sc_request] table (KB0522746).

One of the very powerful directives available in Service Portal that we will be covering today is the snRecordPicker. This directive generates a field very similar to a reference field in the platform. This is very useful when creating custom widgets that will be interacting with tables and records in ServiceNow.

The Directive:

<sn-record-picker field="location" table="'cmn_location'" display-field="'name'" value-field="'sys_id'" search-fields="'name'" page-size="100" ></sn-record-picker>


It supports the following properties:

Property                           Description
field                              a JavaScript object consisting of “displayValue”, “value” and “name”
table                              the table to pull records from
default-query                      the query to apply to the table
display-field (or display-fields)  the display field(s)
value-field                        the value field (usually sys_id)
search-fields                      the fields to search
page-size                          number of records to display in the dropdown


To use the snRecordPicker you will also need to create the “field” object in your controller as well as listen for the “field.change” event.

 

The Controller:

 

$scope.location = {
    displayValue: c.data.loc.name,
    value: c.data.loc.sys_id,
    name: 'location'
};


$scope.$on("field.change", function(evt, parms) {
    if (parms.field.name == 'location') {
        c.data.setLocation = parms.newValue;

        c.server.update().then(function(response) {
            spUtil.update($scope);
        });
    }
});


 

The Widget:

[Image: snrecordpicker.jpg]

I’ve created a sample address picker widget that allows the user to select a location, and then retrieves the record from the server and populates several other fields with the information. The widget is available for download on Share: https://share.servicenow.com/app.do#/detailV2/86b23f151370e2001d2abbf18144b0aa/overview

--------------------------------
Nathan Firth
Principal ServiceNow Architect
nathan.firth@newrocket.com
http://newrocket.com
http://serviceportal.io

Note: This blog post reflects my own personal views and does not necessarily reflect the views of my employer, Accenture.

 

With the Helsinki release came the highly anticipated release of Service Portal. Service Portal is most widely known as the billed successor to CMS, but it encompasses much more. ServiceNow defines Service Portal as a visual layer application used to render ServiceNow in a visually appealing, approachable way for non-admin users.

 

We'll come back to that definition, but let's talk about Service Portal first as a successor to CMS. The bad news is that there is no migration path from CMS to Service Portal as they are built on different technologies. The good news is that if you have a current CMS portal built using the Bootstrap framework you can use most of that design work and styling to give you a head start on building a Service Portal.

 

One of the biggest differences between CMS and Service Portal is the underlying technology that allows you to retrieve data from the ServiceNow DB and display it on the page. CMS used Jelly, which was not a widely used framework, making it hard to find resources on the web or developers with Jelly experience. Service Portal, on the other hand, uses AngularJS to take data retrieved from the server and show it dynamically on the page. AngularJS is a very widely used JavaScript framework, with many resources, plugins, and experienced developers available. Given this, it should be a lot easier to ramp up a web developer with little ServiceNow experience on Service Portal than it was on CMS.

 

One of the other big differences between the two is that CMS relied heavily on iframes to display ServiceNow content along with a themed header and possibly a footer. This allowed us to show ServiceNow content fairly easily, but iframes are notoriously difficult to work with, and we had little control over what was shown inside them. For example, if you wanted the catalog content to look more like one of your other internal tools, you didn't really have a lot of options. Service Portal, on the other hand, is a complete visual layer between the ServiceNow content and the user, so THERE ARE NO IFRAMES, and you have total control over the formatting of the content displayed on the page.

 

The one downside there is that if you have a large catalog with complex catalog items, it may take more work to make those work correctly with Service Portal since it will have to interpret all of those complexities through the visual layer. With CMS we didn't have to worry about how it would be interpreted as it was just being shown in an iframe.

 

There are a few other major advantages to Service Portal.

 

  • Firstly, it was designed with a mobile-first mentality, so it is truly responsive. It was technically possible to make CMS completely responsive, but it took a lot of custom development to make catalog items responsive.
  • Another major advantage is that Service Portal is a self-contained app on top of ServiceNow, so it will not be as susceptible to upgrade issues as CMS was. CMS typically had default ServiceNow styling applied to it, so as the styling changed throughout releases the CMS site would need its styling changed to accommodate. Service Portal has its own styling and isn't showing any content in iframes, so it's really only reliant on the underlying data structure being consistent.

 

Overall, if you're starting a new portal build and don't have a large and complex catalog you'll want to use Service Portal. If you have an existing CMS site and/or large existing service catalog you'll need to consider those things before making the decision.

 

I'll be following this post with some specifics around Service Portal. If you enjoyed this post feel free to like, share, or bookmark it!

One of the ongoing issues that we deal with in the Developer Program is the continuity of the free developer instances. No matter how well intentioned you are, it is always possible to have a time period where you are out of the office and miss the email about your Developer Instance expiring. I don't like the idea of anyone losing their work but there is only so much we can do to prevent it. There aren't enough resources to give developers free instances that last forever, so we do the best we can.

 

However, you as the developer have the ability to mitigate this. I've been advocating for developers to periodically take update set backups to prevent data loss. If you have a backup of everything important to you, then losing your instance is a trivial operation. You request another, reinstall from backup and proceed ahead. Now that we are in the Helsinki era, you don't even need to deal with the update sets anymore. You can set yourself up a Git repository, save any of your work to it, and away you go. Not only does this keep you from losing your work, it gives you the ability to revert your code if you ever find yourself in a position where you have broken something that previously worked. You can tag commits and then later use those as branch points, all the great functionality of source control. Nowadays, a free GitLab account will allow you to have private repositories so there is no reason to not do it.

 

Below is a video we put together to show you the very simple steps it takes to get your app committed to source control. In the video where it says "Install on another instance", think in your head "Another developer instance after I some day lose mine." That's where the real power comes in. Rather than it being a devastating loss, it should be a 5 minute hiccup if your developer instance is reclaimed. If you do any work on there that is important to you, treat it as important and save a backup.

 

 

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

NOTE: MY POSTINGS REFLECT MY OWN VIEWS AND DO NOT NECESSARILY REPRESENT THE VIEWS OF MY EMPLOYER, ACCENTURE.

 

DIFFICULTY LEVEL:  ADVANCED

Assumes good knowledge and/or familiarity of scripting in ServiceNow.

____________________________________________________________________________

 

Some time ago I wrote on this topic (Where Does All The Code Hide?), where I attempted to identify all of the possible places that code could reside inside of the ServiceNow platform.  That was three years ago, and the platform has come a long way since then.  At that time I used a somewhat manual process to pull together a list of variables that were of a particular type:

 

  • Script
  • XML
  • HTML
  • Condition

 

and so on.  It took me a bit to assemble the whole thing, but I found that the result was remarkable.  At that time there were 187 possible locations!

 

I have become somewhat more sophisticated in my approach these days.  Here is a coding example of how to dissect the sys_dictionary table to obtain the list.  I thought I would share a bit of that to give you an idea where this could be taken. 

 

I have included several interesting techniques and best practices while working with the GlideRecord object that you might find of interest.

 

var listOfFieldsToCheck = ['script', 'xml', 'html', 'html_script', 'script_plain', 'conditions', 'variable_conditions', 'condition_string'];
var listOfDoesNotContain = ['var__', 'bsm_', 'log', 'metric', 'ecc', 'clone', 'content_', 'sys_update_', 'u_'];

var message = '--->Script Location List\n';
var scriptLocationList = buildScriptLocationList(listOfFieldsToCheck, listOfDoesNotContain);
message += '---> scriptLocationList.length: ' + scriptLocationList.length + '\n';

for (var i = 0; i < scriptLocationList.length; i++) {
  message += scriptLocationList[i].table + ' - ' + scriptLocationList[i].column_name + ' - ' + scriptLocationList[i].type + '\n';
}
gs.print(message);

message = '\n\n--->Exception Location List\n';
var exceptionLocationList = buildExceptionLocationList(listOfDoesNotContain);
message += '---> exceptionLocationList.length: ' + exceptionLocationList.length + '\n';

for (var i = 0; i < exceptionLocationList.length; i++) {
  message += exceptionLocationList[i].table + ' - ' + exceptionLocationList[i].column_name + ' - ' + exceptionLocationList[i].type + '\n';
}
gs.print(message);


function buildScriptLocationList(listOfFieldsToCheck, listOfDoesNotContain) {

  var scriptLocations = new GlideRecord('sys_dictionary');
  // which field types do we want to check for?
  scriptLocations.addQuery('internal_type.name', 'IN', listOfFieldsToCheck);
  // tables we want to exclude
  for (var j = 0; j < listOfDoesNotContain.length; j++) {
    scriptLocations.addQuery('name', 'DOES NOT CONTAIN', listOfDoesNotContain[j]);
  }
  // this is how you do multiple order-bys with a GlideRecord:
  // call orderBy once per field, in order
  scriptLocations.orderBy('name');
  scriptLocations.orderBy('element');
  scriptLocations.query();

  var scriptLocList = [];

  while (scriptLocations.next()) {
    var scriptLocation = {};
    scriptLocation.table = scriptLocations.getValue('name');
    scriptLocation.column_name = scriptLocations.getValue('element');
    scriptLocation.type = scriptLocations.getValue('internal_type');
    scriptLocList.push(scriptLocation);
  }
  return scriptLocList;
}

function buildExceptionLocationList(listOfDoesNotContain) {

  // building the encoded query a term at a time makes it easier to maintain
  var sql = 'internal_type=string' +
    '^elementNOT LIKEdescript' +
    '^elementNOT LIKEsubscript' +
    '^elementNOT LIKEjavascript' +
    '^elementNOT LIKEcondition_type' +
    '^elementLIKEscript' +
    '^ORelementLIKEcondition' +
    '^ORelementLIKEhtml' +
    '^ORelementLIKExml';

  var exceptionLocations = new GlideRecord('sys_dictionary');
  exceptionLocations.addEncodedQuery(sql);
  for (var j = 0; j < listOfDoesNotContain.length; j++) {
    exceptionLocations.addQuery('name', 'DOES NOT CONTAIN', listOfDoesNotContain[j]);
  }
  exceptionLocations.orderBy('name');
  exceptionLocations.orderBy('element');
  exceptionLocations.query();

  var exceptionLocList = [];

  while (exceptionLocations.next()) {
    var exceptionLocation = {};
    exceptionLocation.table = exceptionLocations.getValue('name');
    exceptionLocation.column_name = exceptionLocations.getValue('element');
    exceptionLocation.type = exceptionLocations.getValue('internal_type');
    exceptionLocList.push(exceptionLocation);
  }
  return exceptionLocList;
}
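As an aside (my own variation, not part of the Fix Script): the encoded query in buildExceptionLocationList can also be assembled from arrays, so adding or removing a term becomes a one-line change. This is plain JavaScript; in ServiceNow you would pass the result to addEncodedQuery() exactly as before.

```javascript
// Assemble the same encoded query from arrays instead of one long literal.
var notLikeTerms = ['descript', 'subscript', 'javascript', 'condition_type'];
var orLikeTerms = ['condition', 'html', 'xml'];

var sql = 'internal_type=string';
notLikeTerms.forEach(function (term) {
  sql += '^elementNOT LIKE' + term;  // exclude these element names
});
sql += '^elementLIKEscript';
orLikeTerms.forEach(function (term) {
  sql += '^ORelementLIKE' + term;    // OR in the other patterns
});

console.log(sql);
```

The output is character-for-character the same string as the hand-built version above.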

 

Some GlideRecord Notes:

 

You can:

1. Use an array with the IN statement to act as an "OR" condition.

2. Dynamically add extra conditions based on an array.

3. Stack orderBy statements to do more sophisticated ordering of the data (note: chaining .orderBy() calls on a single line did not work in my testing; calling orderBy once per field as separate statements does)

 

And:

4. Format an Encoded Query across multiple lines to make it easier to maintain.

 

I created a Fix Script to execute the code.  Executing brought back the following results:

 

 

359!  Wow!  ServiceNow has significantly grown their coding presence in the last three years!

 

If you scroll further down you will see that there were 59 results in the Exception List.  These are worth investigating, if you want, to see if they truly contain script.  These will be string fields that contain code.  Ugly.  The ones labeled with Script should probably be Script fields, at the very least so you can utilize the Script editor.

 

 

This, overall,  is probably not a comprehensive list, and is worthy of extra exploring, but you get the general idea: There are a LOT of different places where code can exist inside the ServiceNow platform!

 

I have attached my Fix Script for your perusal.  :-)

 

I want to highly recommend taking the ServiceNow Scripting training class should you get the opportunity.

 

Steven Bell

 


 

For a list of all of my articles:  Community Code Snippets: Articles List to Date

 

Please Share, Like, Bookmark, Mark Helpful, or Comment this blog if you've found it helpful or insightful.

 

Also, if you are not already, I would like to encourage you to become a member of our blog!

NOTE: MY POSTINGS REFLECT MY OWN VIEWS AND DO NOT NECESSARILY REPRESENT THE VIEWS OF MY EMPLOYER, ACCENTURE.

 

DIFFICULTY LEVEL:  INTERMEDIATE

Assumes a rudimentary knowledge and/or familiarity of scripting in ServiceNow.

____________________________________________________________________________

 

I have been asked this question a few times when teaching Scripting:

 

With Script Includes, what is “type” for, and why is it automatically generated when I create one?

 

The answer is actually pretty straight-forward!

 

  1. Type is for allowing any developer using your Script Include to identify the type of object it is.
  2. ServiceNow did a nice thing and placed it into their non-client callable Script Include template for us.  I used to have to do this myself.  Now, it is done for me, and they tweaked the Script Include editor so that if I should change the name, then the type changes as well.
  3. Type is not strictly necessary; there is a JavaScript built-in alternative.


Ok, so, demonstrating what it is I am talking about:


First let’s create a simple Script Include that we can use to play with detecting its type.

 

  1. Create a new Script Include
  2. Name: type_test
  3. Accessible from:  All Application Scopes
  4. Client Callable: false
  5. Script:

 

var type_test = Class.create();
type_test.prototype = {
    initialize: function() {
    },

    type: 'type_test'
};

 

You will note that when you initially created the Script Include, a template was automatically generated for us, and the name field is propagated throughout the script.  If you change the name field, the script will follow suit.

 

Next fire up Scripts-Background or create a new Fix Script.

 

The tests I tried:

 

First we have to instantiate our Script Include object

 

var typeTest = new type_test();

 

Now let's try the typeof command and see what we get back

 

gs.print(typeof typeTest);  // object

 

We get back “object”.  That makes sense.  The typeof command is limited in what it can detect.  When it comes to a user-created object, that is all you will get back.

 

I don’t like the following command as it is sometimes inaccurate in JavaScript, but we will try it here anyway.

 

gs.print(typeTest.constructor.name); // Object

 

This returns “Object”.  This, again, reflects the overall type.  All user-created objects will show thusly.  So, once again, my feeling toward this command has been proven correct.  :-p

 

The following command I DO like, and it still does not return anything useful!

 

gs.print(Object.prototype.toString.call(typeTest)); // [object Object]

 

This returned “[object Object]”.  Sigh.

 

Now let’s tap into the type variable inside the object.

 

gs.print(typeTest.type == 'type_test');  // true

 

Usage:

 

if (typeTest.type == 'type_test') {
   // do something
}

 

 

That’s better!  However, it requires that extra bit of code, whether it is created by ServiceNow or by ourselves.  So is there an existing way of doing this where we don’t need that internal type variable?  Yes!

 

Here is an alternative method for the type variable:

 

gs.print(typeTest instanceof type_test);  // true

 

The JavaScript command “instanceof” does the job nicely, and it does not require that you program the value into your Script Include.  So really the type variable is not necessary.

 

Usage:

 

if (typeTest instanceof type_test) {
   // do something
}
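The whole comparison can also be reproduced outside ServiceNow in any JavaScript console. In this sketch a plain constructor function stands in for Class.create, which is enough to show why instanceof needs no hand-maintained type field:

```javascript
// Plain JavaScript stand-in for the Script Include: a constructor function
// instead of Class.create, with no type property at all.
function TypeTest() {}

var typeTest = new TypeTest();

console.log(typeof typeTest);                          // "object"
console.log(Object.prototype.toString.call(typeTest)); // "[object Object]"
console.log(typeTest instanceof TypeTest);             // true
```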

 

For further reading on instanceof: link

 

Here also, is a great side-by-side explanation of typeof vs. instanceof: link


I want to highly recommend taking the ServiceNow Scripting training class should you get the opportunity.

 

Steven Bell


 

For a list of all of my articles:  Community Code Snippets: Articles List to Date

 

Please Share, Like, Bookmark, Mark Helpful, or Comment this blog if you've found it helpful or insightful.

 

Also, if you are not already, I would like to encourage you to become a member of our blog!

In a related article, "Related Attachments" Related List, I talk about creating a Defined Related List to pull together attachments from related records so they can show up on the form view of all those records (e.g. Service Desk Calls and Incidents):

 

 

It's nice to have them all in one place, but the "Table name" and "Table sys ID" fields are not very useful.  Luckily we can improve on that.

 

We simply need to create a new calculated field called "Record" of type "Document ID" on the Attachment table.  To do this, enter "sys_attachment.list" in the Navigator filter - this will display a list of attachment records.  Then right-click on the list header and select Configure \ Dictionary.  Create a new record with the following settings (you may have to click on the Advanced View Related Link to show some of these fields):

 

Table:                   Attachment [sys_attachment]

Type:                    Document ID

Column label:            Record (or whatever else you prefer)

Use dependent field:     checked

Dependent on field:      Table name

Calculated:              checked

Calculation:

(function calculatedFieldValue(current) {
  return current.getValue("table_sys_id");
})(current);

 

I normally steer everyone away from calculated fields because they can be expensive in terms of database cycles.  However, I am told that calculating a field from values available on the same record is indeed safe.

 

Now we can have a useful Related List on the appropriate forms by changing the fields that are displayed (right-click Configure \ List Layout): remove the "Table name" and "Table sys ID" fields and add the new "Record" field:

 

 

Now you can see the record the attachment is actually on, and even click on the link to go to that particular record.

 

 

 

NOTE: My earlier blog post, A Better Requested Item Attachments Related List, got a little messy so I split it into 2 different posts so it would be easier to read and update.  This post is the second of those 2 new posts.

*** Please Like and/or tag responses as being Correct.
And don't be shy about tagging responses as Helpful if they were, even if it was not a response to one of your own questions ***

I've seen a lot of requests in the Community to see related attachments on multiple forms.  For instance, people want to see Requested Item attachments on the related Catalog Task records as well.  Most "solutions" that are suggested involve copying the attachments from one record to the other, which you really do not want to do (synching problems, duplicate records for no reason, etc...).  My solution is to create a Defined Related List (Creating Defined Related Lists - ServiceNow Wiki) which can display attachments from multiple records.

 

We start by adding a new Relationship record (System Definition \ Relationships):

 

Name:                 Related Attachments

Applies to table:     Global [global]

Queries from table:   Attachment [sys_attachment]

Query with:

 

(function refineQuery(current, parent) {
  var tableName = parent.getTableName();
  var queryString = "table_name=" + tableName + "^table_sys_id=" + parent.getValue("sys_id");  //default query

  switch (tableName) {
    //add your table-specific blocks from below
  }

  current.addEncodedQuery(queryString);


  function u_getRelatedRecords(table, field, sysId) {
    var result = "";
    var gr = new GlideRecord(table);
    gr.addQuery(field, sysId);
    gr.query();
    while (gr.next()) {
      result += "," + gr.getValue("sys_id");
    }
    return result;
  }

})(current, parent);

 

The script checks the table name for the record being displayed and then builds the appropriate query.  As a safety measure, the queryString variable is given a default query to display the attachments for just that one record, otherwise all attachments would appear in the list if the Related List was added to a form that did not have any specific "case" block.  I created the private "u_getRelatedRecords" function to simplify the whole script as we use the same GlideRecord query to retrieve the appropriate sys_ids regardless of the table.

 

The above script is just the starting block - we'll add table specific examples next.  Each of the next blocks of code should be inserted within the "switch" block at line 6:

 

Request, Requested Item and Catalog Task Tables

 

    //===== Requests =====
    case "sc_request":
    queryString = "table_nameINsc_request,sc_req_item,sc_task^table_sys_idIN" + parent.getValue("sys_id");

    //find the related Requested Items
    queryString += u_getRelatedRecords("sc_req_item", "request", parent.getValue("sys_id"));

    //and then the Catalog Tasks
    queryString += u_getRelatedRecords("sc_task", "request_item.request", parent.getValue("sys_id"));
    break;


    //===== Requested Items =====
    case "sc_req_item":
    queryString = "table_nameINsc_request,sc_req_item,sc_task^table_sys_idIN" + parent.getValue("request") + "," + parent.getValue("sys_id"); 

    //find the related Catalog Tasks 
    queryString += u_getRelatedRecords("sc_task", "request_item", parent.getValue("sys_id"));
    break;


    //===== Catalog Tasks =====
    case "sc_task":
    queryString = "table_nameINsc_request,sc_req_item,sc_task^table_sys_idIN" + parent.request_item.request.toString() + "," + parent.getValue("request_item");

    //find the related Catalog Tasks
    queryString += u_getRelatedRecords("sc_task", "request_item", parent.getValue("request_item"));
    break;

 

 

 

Incident and Service Desk Call Tables

 

    //===== Incidents =====
    case "incident":
    queryString = "table_nameINincident,new_call^table_sys_idIN" + parent.getValue("sys_id");

    //find the related New Call
    queryString += u_getRelatedRecords("new_call", "transferred_to", parent.getValue("sys_id"));
    break;


    //===== Service Desk Calls =====
    case "new_call":
    queryString = "table_nameINincident,new_call^table_sys_idIN" + parent.getValue("sys_id") + "," + parent.getValue("transferred_to");
    break;

 

 

 

Idea and Demand Tables

 

    //===== Idea =====
    case "idea":
    queryString = "table_nameINidea,dmn_demand^table_sys_idIN" + parent.getValue("sys_id") + "," + parent.getValue("demand");
    break;


    //===== Demand =====
    case "dmn_demand":
    queryString = "table_nameINidea,dmn_demand^table_sys_idIN" + parent.getValue("sys_id") + "," + parent.getValue("idea");
    break;

 

 

 

Project and Project Task Tables

 

  //===== Project =====
  case "pm_project":
  queryString = "table_nameINpm_project,pm_project_task,idea,dmn_demand^table_sys_idIN" + parent.getValue("sys_id");

  //find the related Project Tasks
  queryString += u_getRelatedRecords("pm_project_task", "top_task", parent.getValue("top_task"));

  //find the related Idea and Demand
  queryString += u_getRelatedRecords("dmn_demand", "project", parent.getValue("sys_id"));
  queryString += u_getRelatedRecords("idea", "demand.project", parent.getValue("sys_id"));
  break;


  //===== Project Task =====
  case "pm_project_task":
  queryString = "table_nameINpm_project,pm_project_task,idea,dmn_demand^table_sys_idIN" + parent.getValue("top_task");

  //find the related Project Tasks
  queryString += u_getRelatedRecords("pm_project_task", "top_task", parent.getValue("top_task"));

  //find the related Idea and Demand
  queryString += u_getRelatedRecords("dmn_demand", "project", parent.getValue("top_task"));
  queryString += u_getRelatedRecords("idea", "demand.project", parent.getValue("top_task"));
  break;

 

 

 

HR Case and HR Task Tables

 

    //===== HR Case =====
    case "hr_case":
    queryString = "table_nameINhr_case,hr_task^table_sys_idIN" + parent.getValue("sys_id");

    //find the related HR Tasks
    queryString += u_getRelatedRecords("hr_task", "parent", parent.getValue("sys_id"));
    break;


    //===== HR Tasks =====
    case "hr_task":
    queryString = "table_nameINhr_case,hr_task^table_sys_idIN" + parent.getValue("sys_id") + "," + parent.getValue("parent");
    break;

 

 

 

Now you can see all the attachments from related records if you add the "Related Attachments" Related List to a form:

 

The above blocks of code are just examples of what you can do, and there are quite a few more that could be added.  I'll add more as I come across new ideas or as people ask for them.

 

 

If you want a better looking and more useful list view, you will want to read this post - Improving the Attachments List View:

You will be able to see the record the attachment is actually on (instead of a sys_id), and even click on the link to go to that particular record.

 

 

NOTE: My earlier blog post, A Better Requested Item Attachments Related List, got a little messy so I split it into 2 different posts so it would be easier to read and update if required.  This post is the first of those 2 new posts.

*** Please Like and/or tag responses as being Correct.
And don't be shy about tagging responses as Helpful if they were, even if it was not a response to one of your own questions ***

NOTE: MY POSTINGS REFLECT MY OWN VIEWS AND DO NOT NECESSARILY REPRESENT THE VIEWS OF MY EMPLOYER, ACCENTURE.

 

DIFFICULTY LEVEL:  INTERMEDIATE

Assumes a rudimentary knowledge and/or familiarity of scripting in ServiceNow.

____________________________________________________________________________

 

Something I see as a common scripting error is adding a couple of integers and having the values implicitly converted to strings.  For example:

 

var integerOne = "15";  // string
var integerTwo = 30;  // number

gs.print(integerOne + integerTwo);

 

Will produce:

 

1530

 

Nice, huh?  Welcome to JavaScript.  :-p

 

JavaScript is a loosely “typed” computer language (i.e. there is no way to explicitly specify a variable as an Integer, Decimal, and so on).  The language has to make a judgement call on what to do with the values during run-time.  If it runs into a situation like that demonstrated above, it takes the lowest common denominator (string) and converts everything to that.  THEN since the plus (“+”) symbol is overloaded in functionality to mean “Add” (if two numbers) or “Concatenate” (if two strings) the language happily concatenates the two values together.
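As an aside (these are standard JavaScript semantics, not anything ServiceNow-specific), the plus sign is the only arithmetic operator with this trap; the others have no string meaning, so they coerce both sides to numbers:

```javascript
// Implicit conversion depends on the operator, not just the operands.
console.log("15" + 30);   // "1530"  + concatenates when either side is a string
console.log("15" - 30);   // -15     - coerces both sides to numbers
console.log("15" * "2");  // 30      same for * and /
console.log(+"15" + 30);  // 45      unary + is a quick numeric cast
```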

 

This can lead you to times where you say things like:

 

“I HATE JAVASCRIPT!”

"JavaScript Stinks!"

"I Wish JavaScript Were a Real Language!"

 

So, let’s explore this topic a little (implicit conversion, not the “I hate…” part).

 

JavaScript provides us with ways to force the language to view the variable values as they were intended.  This can be tedious, but if you must have proper handling of values it is the only way.

 

The two ways I will present here are:

 

  parseInt

  parseFloat



parseInt

 

The first of these I want to describe is parseInt.  This allows you to force a string integer value to be an actual integer value.  In the following example integerValue is an actual integer.

 

var integerCheck = "15";
var integerValue = parseInt(integerCheck); // convert to actual integer

 

However, you can mess this up by then adding it to a string integer.  If you do that, the result is converted back to a string and concatenated with the other string value.

 

var integerCheck = "15";
var integerValue = parseInt(integerCheck); // convert to actual integer

integerValue = integerValue + integerCheck;  // casts back to a string!

gs.print(integerValue); // 1515

 

Doing a typeof will verify that it is a string:

 

gs.print(typeof integerValue); // string

 

Nice, huh?  But before you get to cussing… this will fix it:

 

integerValue = parseInt(integerValue) + parseInt(integerCheck);
gs.print(integerValue); // 30

 

Doing a typeof will verify that it is truly a number:

 

gs.print(typeof integerValue); // number

 

Here is another example of careful conversion:

 

integerValue = 15; // re-cast it to an integer
integerValue = integerValue + parseInt(integerCheck);  // works correctly

gs.print(integerValue); // 30
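One more thing to keep in mind with parseInt: it takes an optional second argument, the radix.  Older JavaScript engines could treat a leading zero as octal, so passing 10 explicitly is a safe habit.  A quick plain-JavaScript sketch (console.log stands in for gs.print):

```javascript
console.log(parseInt("08", 10));    // 8   - explicit base 10 avoids legacy octal quirks
console.log(parseInt("ff", 16));    // 255 - hexadecimal
console.log(parseInt("42abc", 10)); // 42  - parsing stops at the first non-digit
console.log(parseInt("abc", 10));   // NaN - no leading digits at all
```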



So that covers integers pretty well.  What about decimal numbers?



parseFloat

 

Starting out with a string representation of a decimal we can convert it to an actual decimal by doing a parseFloat.  Example:

 

var decimalCheck = "20.5";
var decimalValue = parseFloat(decimalCheck); // convert to actual decimal

gs.print(decimalValue); // 20.5
gs.print(typeof decimalValue); // number

 

We can see it works exactly the same as parseInt.  Now, notice what happens if we don’t convert all of the values:

 

decimalValue = decimalValue + decimalCheck; // casts back to a string!
gs.print(decimalValue); // 20.520.5
gs.print(typeof decimalValue); // string

 

Yup!  Same crummy behavior.  We get a useless string-i-fied value.

 

However, with decimal strings, even after you convert everything to a decimal number you will notice an error creeping into the result.  This is because of the way JavaScript handles decimals under-the-hood: it uses binary floating point, which cannot represent most decimal fractions exactly.  Because of this I tend to stay far away from parseFloat.

 

decimalValue = parseFloat(decimalValue) + parseFloat(decimalCheck); // now it works correctly
gs.print(decimalValue); // 41.019999999999996
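The error is not parseFloat's fault as such; it is binary floating point, which cannot represent most decimal fractions exactly.  If a fixed number of decimal places is acceptable, rounding after the arithmetic cleans it up.  A plain-JavaScript sketch (console.log stands in for gs.print):

```javascript
var a = parseFloat("20.52");
var b = parseFloat("20.5");

// The raw sum carries a binary floating point representation error
console.log(a + b);                      // 41.019999999999996

// Round to two decimal places AFTER the arithmetic
console.log((a + b).toFixed(2));         // "41.02" (note: toFixed returns a string)
console.log(Number((a + b).toFixed(2))); // 41.02   (converted back to a number)
```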



parseBool

 

First, let me make it clear, there is no such thing in the JavaScript language as parseBool!  That doesn’t keep me from wishing I had the functionality from time-to-time though.  So here you go.  An example of the one I use.

 

function parseBool(value) {
    return (/^(true|1|yes|on)$/i).test(value);
}

 

Some testing:

 

var boolCheck = 'false';

// various tests for parseBool

var boolValue = parseBool('YES');  // true
//var boolValue = parseBool('no');  // false
//var boolValue = parseBool('on');  // true
//var boolValue = parseBool(boolCheck);  // false
//var boolValue = parseBool('true');  // true
//var boolValue = parseBool('false');  // false

if (parseBool(boolCheck) === true) {
  gs.print('parseBool worked');
}
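It is worth noting why a helper like this is needed at all: JavaScript's built-in Boolean() checks truthiness, not meaning, so any non-empty string, including 'false', converts to true.  A quick plain-JavaScript comparison:

```javascript
function parseBool(value) {
    return (/^(true|1|yes|on)$/i).test(value);
}

console.log(Boolean("false"));   // true  - any non-empty string is truthy
console.log(Boolean(""));        // false - only the empty string is falsy
console.log(parseBool("false")); // false - matches the string's meaning
console.log(parseBool("TRUE"));  // true  - case-insensitive
```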

 

You can play with it. 

 

Anyway, there you go; now you know a bit about JavaScript's parseInt and parseFloat and why you might need them while scripting.

 

I want to highly recommend taking the ServiceNow Scripting training class should you get the opportunity. 

 

Steven Bell

 


 

For a list of all of my articles:  Community Code Snippets: Articles List to Date

 

Please Share, Like, Bookmark, Mark Helpful, or Comment this blog if you've found it helpful or insightful.

 

Also, if you are not already, I would like to encourage you to become a member of our blog!

Screen Shot 2016-07-13 at 16.28.43.png

One of the most requested features of the Developer Program is the ability to choose the version of the developer instance you get assigned. The current state of affairs is that you request an instance and are assigned a random one from those available. You can upgrade from there if you choose, but it is difficult to go back down once you do. Some people want a specific version of an instance to match the training documentation they have, or to match production at their workplace or a client.

 

Screen Shot 2016-07-13 at 16.41.39.png

As of today, you can now do exactly this. You request an instance as you do currently, either from the button on the sidebar or from Manage -> Instance in the navigation bar. You'll now be presented with a dialog that asks you which version you want to request. There is text in the dialog that suggests that if you don't have a reason otherwise, Helsinki is your best bet. However if you have a desire for Fuji or Geneva you can certainly choose whichever you want/need.

 

Once you click the version, your request will begin processing. After a few seconds, you will be presented with the page that shows you information about your instance, including its name, URL, and the temporary password. NOTE: log in to the instance immediately with that password. You'll be asked to change it on first login; please keep a record of the new one. Pretty much every day I get requests from people who got through this process without knowing what their password is. Make my life easier: keep up with the password. If you don't, you can reset it from the Actions dropdown on the Manage -> Instance page.

 

If you have an instance currently and you want a different version, here is what you do. If it is a higher version, you can upgrade your currently assigned instance. This release includes the ability to upgrade specifically to Geneva or Helsinki. Previously, the only option was to upgrade from whatever you had straight to Helsinki; now, starting from a Fuji instance, you can do either.

 

f-blurred.png  g-blurred.png

If you want an instance that is a lower version from what you have currently assigned, you will first release your current instance. This means you will have no more access to it, so please backup anything you want to keep via update sets, integration with Git and/or data exports. Once you release, that instance will be wiped and is no longer yours. There will be a 15 minute waiting period, and then you can request a new instance. Using the steps above, select the exact version you want and you'll be assigned an instance of that version.

 

On this post you'll see screenshots of the results of two requests I made. I first picked the Fuji version then I turned around, released it and requested specifically a Geneva version. At this point the power is in your hands. As the text of the dialog says, if you aren't sure then you should request Helsinki. However, if you have a reason for wanting another you can now go to it.

 

Do be aware, this allows you to pick lettered versions, or selectively upgrade to lettered versions. It does not allow you to select the patch level within those families. That is maintained across all developer instances and is outside the control of individual developers. All developer instances of a given lettered version are at the same patch level, and they all upgrade at once (give or take some processing time), managed at the program level.

 

Screen Shot 2016-07-13 at 18.00.41.png

This should help solve some of the issues of developers wanting to get an instance of specific versions, or especially when that need changes over time. And don't forget, there is a version selector on the Developer Portal that controls what you see. Whether you are looking at training courses, documentation or API docs, look at the lower right corner of the website and you'll see that version selector. If you are ever looking at a context other than what you want, you always have the option to change it there. Flip it to the version you want, and all docs on the site will reflect that.

 

Hopefully this release will make it easier for people. Happy developing, developers!

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

In this episode of TechBytes, @Martin Barclay chats with Markus Zirn, VP of Business Development at Workato.

 

This episode covers:

 

  • Workato's modern enterprise-grade integration platform
  • Recipes and integration apps
  • Slack and IBM Watson integration at the CreatorCon Hackathon

 

NOTE: MY POSTINGS REFLECT MY OWN VIEWS AND DO NOT NECESSARILY REPRESENT THE VIEWS OF MY EMPLOYER, ACCENTURE.

 

DIFFICULTY LEVEL:  INTERMEDIATE

Assumes a rudimentary knowledge and/or familiarity of scripting in ServiceNow.

____________________________________________________________________________

 

Over and over I get asked why I don’t use the .getValue() method when working with GlideRecords.  My pat answer has always been:  PERFORMANCE!  The method I use is usually a JavaScript force-to-string; the ol’ plus-tick-tick method.

 

I finally got fed up and decided to do an article showing once-and-for-all that this method trounces all the others, and is the reason all of us old JavaScript users use it!

 

So I came up with the following code.  I would loop through each method 20 times.  10 with a small number of records, and 10 with a large number of records.  I would then compile the timings and show off the results. THEN I would point at the results, and say; “THERE!  THERE!  Run the test yourselves if you don’t believe me!”

 

This was a bad move on my part.  I was wrong.  See!  I even admit it.  Big of me.  :-p

 

So what are these three "methods"?

 

  • .toString()
  • .getValue()
  • + ''
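The mechanics of the last two are easy to see outside of GlideRecord with any object that defines a toString.  A plain-JavaScript sketch (the field object here is a hypothetical stand-in for a GlideElement; getValue itself has no plain-JavaScript analog):

```javascript
// Hypothetical stand-in for a GlideElement-like field object
var field = {
    value: "INC0010001",
    toString: function () { return this.value; }
};

console.log(field + '');          // "INC0010001" - '+' coerces via toString
console.log(field.toString());    // "INC0010001" - explicit call
console.log(typeof (field + '')); // "string"
```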

 

After running the tests, what did I find?  .getValue wins hands down!  Nooooooo....... 

 

Here is my test code:

 

checkSpeeds('incident');  
checkSpeeds('cmdb');  
   
function checkSpeeds(tableName) {  
    var checkList = [];
    var testResults = create2DArray(3);  
   
    testResults[0] = 'tick-tick, \t' + tableName + '\t';  
    testResults[1] = 'getValue, \t' + tableName + '\t';  
    testResults[2] = 'toString, \t' + tableName + '\t';  
   
    var checkRecords = new GlideRecord(tableName);  
    checkRecords.query();  
   
    for (var j=0; j < 3; j++) {  
        for (var i=0; i < 10; i++) {  
            checkRecords.restoreLocation();  
   
            var check1 = new Date().getTime();  // grab our start time
            checkList = [];  
            while (checkRecords.next()) {  
                if (j == 0) {  
                    checkList.push(checkRecords.number + '');  
                }  
                else if (j == 1) {  
                    checkList.push(checkRecords.getValue('number'));  
                }  
                else {  
                    checkList.push(checkRecords.number.toString());  
                }  
            }  
   
            testResults[j] += ((new Date().getTime()) - check1) + 'ms, \t';  // calculate our end time
        }  
        testResults[j] += '\t' + checkList.length + ' records\t';  
    }  
   
    for (j=0; j < 3; j++) {  
        gs.print(testResults[j]);   // print off the results
    }  
}  
   
// simple function for creating a 2d array
function create2DArray(depth) {  
    var result = [];  
    for (var i=0; i < depth; i++) {  
       result.push([]);  
    }  
    return result;  
}  
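The timing pattern in the harness (grab a start time in milliseconds, run the loop, subtract) works in any JavaScript environment.  Here is a stripped-down plain-JavaScript version, with a hypothetical timeIt helper and console.log in place of gs.print:

```javascript
// Stripped-down version of the timing pattern above:
// capture a start time in ms, run the work, subtract.
function timeIt(label, fn, iterations) {
    var start = new Date().getTime();
    for (var i = 0; i < iterations; i++) {
        fn();
    }
    var elapsed = new Date().getTime() - start;
    return label + ': ' + elapsed + 'ms';
}

// Hypothetical GlideElement-like object for demonstration
var obj = { toString: function () { return 'INC0010001'; } };

console.log(timeIt('tick-tick', function () { return obj + ''; }, 100000));
console.log(timeIt('toString',  function () { return obj.toString(); }, 100000));
```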

 

Here are my results:

    

Type       Table     # Records     Test 1-10 timings (ms)                             Total Wins   Average   % Faster Than Next

tick-tick  incident  107 records   27, 25, 23, 31, 25, 28, 24, 25, 24, 23                  2         25.5
getValue   incident  107 records   22, 27, 22, 23, 22, 22, 22, 24, 32, 23                  4         23.9          5.53
toString   incident  107 records   21, 25, 24, 22, 29, 31, 27, 22, 30, 22                  5         25.3
tick-tick  cmdb      5162 records  784, 758, 752, 724, 731, 714, 690, 705, 705, 743        0        730.6
getValue   cmdb      5162 records  635, 626, 592, 608, 566, 598, 602, 607, 630, 623       10        608.7         16.68
toString   cmdb      5162 records  729, 806, 763, 752, 721, 697, 697, 726, 721, 696        0        730.8

 

The analysis?

 

With just a few records getValue edged out the competition by 5.5%.  At 5,000 records it became very obvious that some sort of serious optimization has been done; with the percentage now rising to 16.7%!  It was also interesting to me that there appeared to be no obvious difference between + '', and .toString().  Wild!

 

I went outside, and contemplated life for awhile.  This messed with my fundamental understanding of the universe!

 

Anyway, after crying it out on my front porch I went back in and ran a few more tests.  Perhaps I had the columns out of order or something... but no.  It came up pretty much the same no matter how I twisted the numbers around.

 

So here you go.  Of the three methods:

 

.toString and + ‘’ are just about the same; with + ‘’ edging out the other method only slightly.

 

.getValue gets stronger and stronger the more records you throw at it. 

 

Not sure what to do next.  Maybe gravity isn’t a constant after all.

 

Steven Bell

 


 

For a list of all of my articles:  Community Code Snippets: Articles List to Date

 

Please Share, Like, Bookmark, Mark Helpful, or Comment this blog if you've found it helpful or insightful.

 

Also, if you are not already, I would like to encourage you to become a member of our blog!

NOTE: MY POSTINGS REFLECT MY OWN VIEWS AND DO NOT NECESSARILY REPRESENT THE VIEWS OF MY EMPLOYER, ACCENTURE.

 

DIFFICULTY LEVEL:  INTERMEDIATE

Assumes a rudimentary knowledge and/or familiarity of scripting in ServiceNow.

____________________________________________________________________________

 

In my previous two articles I tackled variable substitution using GlideSystem methods in server-side scripts, and then also how to apply this and other techniques in Business rules.  Here I will be showing how to use these techniques inside Workflows, and demonstrate new methods of logging.

 

My previous two articles in case you want to do some extra digging:

 

Community Code Snippets - Logging: Some Notes on Variable Substitution

Community Code Snippets - Logging: Some Notes on Business Rules

 

Prerequisite:  Basic knowledge on how to create a Workflow and use the Workflow Editor.

 

With the Geneva release we received three new logging methods attached to the workflow object:

 

workflow.info

workflow.warn

workflow.error

 

These work very similarly to their GlideSystem analogs (e.g. gs.info), but with one really significant difference: all three write to the workflow context log!  Yes!  No more searching through the System Log looking for my workflow logging messages!!!

 

Oh yeah, and like their gs analogs (and unlike gs.log) they are also scope safe.  :-)

 

I created a simple workflow to show how to test these out:

 

Name: Logging Test

Table: Global [global]

Description: Workflow logging examples

If condition matches: -- None --

 

I placed a Run Script Activity on the form:

 

Name: before

Script:

 

var location = 'WF:' + context.name + '.' + activity.name;   

// Current Factory
var current = new GlideRecord('incident');
current.addActiveQuery();
current.setLimit(1);
current.orderByDesc('number');
current.query();
current.next();

// Set up some variables
var number = current.number;  
var caller = current.caller_id.getDisplayValue();  
var category = current.category.getDisplayValue();  
var impact = current.getValue('impact');  
var priority = current.getValue('priority');  
var urgency = current.getValue('urgency');  

// Create the test message
var message = gs.getMessage('--->[{6}] \n\tNumber:\t\t{0} \n\tCaller:\t\t{1} \n\tCategory:\t{2} \n\tImpact:\t\t{3}\n\tPriority:\t{4}\n\tUrgency:\t{5}\n',   
    [number, caller, category, impact, priority, urgency, location]);  

// Send it to the System Log
gs.log(message, location);  // the old way

// Send it to the Workflow Context log AND the System Log
workflow.info(message);
workflow.warn(message);
workflow.error(message);

 

 

You can see that I have utilized the same code as that used in the Fix Script in my original article.  Instead of gs.info, etc. I used the workflow objects instead.

 

These write not only to the Workflow Context log, but also to the System log.

 

I wired up the workflow like this:

 

 

Ran the workflow...

 



...and once finished I then navigated to Workflow -> Live Workflows -> All Contexts.  I then brought up my most recent context. 

 

You can see that the workflow objects wrote the three logging statements to the Workflow Context log:

 

 

This is slick.  The need to bring up the System Log list view (which, if you have a lot of entries in it, can take forever to display) and search through the results was just made obsolete!

 

So, let’s see what was actually written into the System Log:

 

 

You can see that all three Workflow logging statements were written out, as was the gs.log.  Note that you still should follow the best practice of including the identification information in your message.  This will short-cut having to search out mystery messages in the log.

 

Remember to remove them from your code before it goes to production!

 

I created an update set for this workflow and attached it to the article.

 

For more information on the Workflow object see the wiki.

 

For more information on GlideSystem see the wiki.

 

I want to highly recommend taking the ServiceNow Scripting training class should you get the opportunity.  The class has an entire module covering Workflow Scripting.

 

Steven Bell

 


 

For a list of all of my articles:  Community Code Snippets: Articles List to Date

 

Please Share, Like, Bookmark, Mark Helpful, or Comment this blog if you've found it helpful or insightful.

 

Also, if you are not already, I would like to encourage you to become a member of our blog!

Having a background in System Engineering helps when it comes to administering MID Servers. I wrote a comment the other day about how to configure the wrapper program to auto-restart the JVM based on specific error messages. https://community.servicenow.com/community/operations-management/discovery/blog/2016/06/02/mid-server-crashing-due-to-so…

 

Sometimes it is necessary to start or restart the MID Server. Looking at the MID Server form, you will find the provided UI Action: "Restart MID" (green arrow below). This UI Action sends a command to the ecc_queue for that MID Server (ecc_agent) to pick up and execute just like any other probe or task.

 

MID Server Form.png

But what if the MID Server isn't picking up work from the ecc_queue? If something has gone awry (networking issues or storage problems, including brief disruptions), the MID Server application can end up in a bad state where work isn't getting picked up from the ecc_queue. Symptoms of this are log statements in the agent\logs\agent.log.0 file saying that XML is being enqueued, or even SEVERE errors contacting your ServiceNow instance. In these cases and others, there is good reason to restart the Windows service entirely.

 

Remoting into the Windows host is time consuming, requires some level of access control (including group memberships that must be maintained for employees and contractors), and is just all-around tedious. You could use the sc.exe command in Windows and structure the command to remotely restart the service from your workstation. I used that for a good while and it's not a bad workaround. Writing the wrapper script for doing it inside Cygwin was just difficult enough to make things interesting. But this solution has portability problems for people not used to Cygwin, and it still requires group memberships and access controls so your employees and contractors can reach the MID Server host and restart services.

 

Enter the "Brute Restart MID" UI Action.

 

Thanks to the PowerShell Probe Script Utility and a standard Windows service naming convention (snc_mid.<agent name>, used in the script below), it is possible to script this as a UI Action. Be sure to set a condition on it that limits access to your ITOM admins.

 

var target_agent_name = current.name.replace(/'/g, "\\'");
var target_host = current.host_name;
var ps_script = 'restart-service -inputobject (get-service -computername '+ target_host
+' -name snc_mid.'+ target_agent_name + ')';
var orch_host = gs.getProperty('mid.server.rba_default', 'NONE');
if (orch_host == 'NONE'){
  gs.addErrorMessage('Please set mid.server.rba_default sys_property.');
} else {
  var powerShellProbe = new PowershellProbeES(orch_host);
  powerShellProbe.setScript(ps_script);
  powerShellProbe.create();
  gs.addInfoMessage('Remote Service Restart Issued.');
}
action.setRedirectURL(current);

 

I also added a Business Rule to ecc_queue to move attachments from the "Grab MID logs" action to the MID Server record.

 

MID Server Attachments.png

 

Together these small enhancements simplify the routine tasks associated with MID Server administration. They also allow us to keep access to the MID Server host machines more tightly controlled, which is good practice anyway. And they allow those of us with access to stay off our VPN clients, which is a definite win.

Johnny Walker
Acorio LLC

Greetings developer community!

 

On Thursday June 30th we attempted a Webex webinar to show developers all the cool new stuff in the platform Helsinki release for building killer UIs and custom widgets with Service Portal Designer, scaling out app dev safely to non-admin developers in departments across the enterprise with Delegated Development, ECMAScript5 for a modern JavaScript experience, Studio-Git integration for distributed source control, and other cool stuff like record watcher.

 

Unfortunately, due to "overwhelming" (a cliche, but true this time) interest in the event, we broke Webex under the super high number of would-be attendees (more than 1,500 people tried to join live) and had no ability to screen share for the first 30-40 minutes of the event.

 

Well, when the going gets tough....

 

So we are doing a "Take 2" live Ask the Experts on Thursday July 14th at 11AM PT using Google Hangouts this time where dave.slusher will give you a guided tour of all of the above and answer your questions live! The session will be recorded for those of you who can't make it on the relatively short notice, and you can always ask questions and join the discussion after the fact.

 

Come learn about how to use Helsinki to enable the Service Revolution and crush it with AngularJS !

Martin Barclay
Director, Product Marketing
App Store and ISVs
ServiceNow
Santa Clara, CA

NOTE: MY POSTINGS REFLECT MY OWN VIEWS AND DO NOT NECESSARILY REPRESENT THE VIEWS OF MY EMPLOYER, ACCENTURE.

 

DIFFICULTY LEVEL:  INTERMEDIATE

Assumes a rudimentary knowledge and/or familiarity of scripting in ServiceNow.

____________________________________________________________________________

 

Continuing from my last article on various server-side logging tips, we will look at a couple of little-used (for debugging) GlideSystem methods: addInfoMessage and addErrorMessage.  Did you know that you can use variable substitution with these as well?

 

I created a simple Business Rule with the following:

 

Name: Logging Test

Table: Incident [incident]

Active: Checked

Advanced: Checked

When: before

Order: 100

Inserted: Checked

Updated: Checked

Script:

 

 

(function executeRule(current, previous /*null when async*/) {

var location = 'BR:Logging Test';

var number = current.number;
var caller = current.caller_id.getDisplayValue();
var category = current.category.getDisplayValue();
var impact = current.getValue('impact');
var priority = current.getValue('priority');
var urgency = current.getValue('urgency');

var message = gs.getMessage('--->[{6}] \n\tNumber:\t\t{0} \n\tCaller:\t\t{1} \n\tCategory:\t{2} \n\tImpact:\t\t{3}\n\tPriority:\t{4}\n\tUrgency:\t{5}\n', 
    [number, caller, category, impact, priority, urgency, location]);

gs.log(message); // this works fine as does gs.info

gs.addInfoMessage(message);
gs.addErrorMessage(message);

})(current, previous);

 

 

Now navigate to Incidents -> Open and open your favorite incident.  Make a change, save, and the code produces on the client-side:

 

 

And in the System Log:

 

 

Note:  The \t’s (tabs) get stripped out for some reason, but the \n’s (newlines) seem to work fine.

 

So, now for a little-known technique which I like even better:

 

 

gs.include("FormInfoHeader");  // normally this goes at the top of the code

var formInfo = new FormInfoHeader();
message = '<b><p style="color:red;background-color:yellow;">' + message + '</p> (gs.addMessage)</b>';

formInfo.addMessage(message);
gs.getMessage(message);

 

Which shows the following in the Incident form:

 

 

So here we can put in html tags and mess with what actually is displayed!  For debugging this is great!  I can flag errors with real colors!  So what is happening here?  FormInfoHeader is a Script Include library that represents the Form’s Info Header object.  With the Form Info Header method “addMessage” we are able to store any simple string, or any HTML formatted strings into the form for immediate or future use.  Then using gs.getMessage we can actually display this at will.  NICE FEATURE!  These only last for the session, but sure can be handy when debugging.

 

Now that we know that any HTML tags will work we can really go-to-town!  Try this:

 

 

message = '<b><p style="color:black;background-color:orange;">';
message += gs.getMessage('--->[{6}] <br/>Number: {0} <br/>Caller: {1} <br/>Category: {2} <br/>Impact: {3} <br/>Priority: {4} <br/>Urgency: {5} <br/>', 
    [number, caller, category, impact, priority, urgency, location]);
message += '</p> (gs.addMessage)</b>';
  
formInfo.addMessage(message);
gs.getMessage(message);

 

 

Which produces in the Incident form:

 

 

So here we can combine our previous variable substitution with our new addMessage to produce nicely formatted and easily readable results.  Cool, huh?!  :-)

 

These techniques are really useful when debugging business rules.  I actually prefer them to gs.info or gs.log.  However, you have to remember to remove them from your code before it goes to production!

 

I went ahead and attached the XML for the Business Rule I created.  To import this file see the following wiki.

 

For more information on GlideSystem see the wiki.

 

I want to highly recommend taking the ServiceNow Scripting training class should you get the opportunity.  The class has an entire module covering Workflow Scripting.

 

In my next article I will talk about a couple of logging tricks with Workflows.

 

Steven Bell

 


 

For a list of all of my articles:  Community Code Snippets: Articles List to Date

 

Please Share, Like, Bookmark, Mark Helpful, or Comment this blog if you've found it helpful or insightful.

 

Also, if you are not already, I would like to encourage you to become a member of our blog!

NOTE: MY POSTINGS REFLECT MY OWN VIEWS AND DO NOT NECESSARILY REPRESENT THE VIEWS OF MY EMPLOYER, ACCENTURE.

 

DIFFICULTY LEVEL:  INTERMEDIATE

Assumes a rudimentary knowledge and/or familiarity of scripting in ServiceNow.

____________________________________________________________________________

 

In my Scripting classes I have demonstrated some techniques concerning the more advanced side of server-side logging and debugging that is available to ServiceNow scripters.  It is interesting how little is written about these capabilities, and how few are the examples on how to actually use some of the cooler features.  I thought I would try to bring together a few examples of these techniques into a single location.

 

All of the following examples can be tested using either Fix Scripts, or Scripts - Background.  If you are interested in what these are you might want to first read my article contrasting the two methods.  For this article I used a Fix Script, and named it Logging Testing.

 

Before getting started with the following examples, you might consider reading these wiki articles on logging:

 

Scripting Alert, Info, and Error messages

Scoped Script Logging

GlideSystem: getMessage

 

(Btw, you can also find similar information here, but you won’t be able to Google or Bing for it.)

 

So first thing to do is set up some test data to use in our logging tests:

 

 

var location = 'FS:Logging Testing';
var spuriousText = 'Danger';

// go grab a single most recent incident record. 
// I used “current” as a variable name for later 
// use in a Business Rule
var current = new GlideRecord('incident');
current.addActiveQuery();
current.setLimit(1);
current.orderByDesc('number');
current.query();

current.next();

var number = current.number;
var caller = current.caller_id.getDisplayValue();
var category = current.category.getDisplayValue();
var impact = current.getValue('impact');
var priority = current.getValue('priority');
var urgency = current.getValue('urgency');

 

 

Now that we have our test data let’s get started.

 

Let’s look at a GlideSystem method called: getMessage.  getMessage was how we used to do variable substitution in gs.log before we were given the new gs.info, .error, and .warn message capability.  Here is an example:

 

 

var message = gs.getMessage('--->[{2}] GETMESSAGE: \n\tNumber: {0} \n\tCaller: {1}', [number, caller, location]);

gs.log(message); // location embedded in message

 

 

This gives you the following result in the system log:

 

 

Notice that the variables are placed inside an array.  This is how getMessage variable substitution works: the method only accepts two parameters (the message and a single value or an array of values), so with no array brackets it will only use the first variable and ignore the rest.  Notice the variable substitution order?  It is zero based (just like the array).
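If you want to experiment with the placeholder substitution itself outside ServiceNow, it is easy to replicate.  Here is a minimal plain-JavaScript stand-in (formatMessage is a hypothetical helper of my own, not a platform API, and the sample values are made up):

```javascript
// Minimal stand-in for gs.getMessage-style {0}, {1}, ... substitution.
// formatMessage is a hypothetical helper, not a ServiceNow API.
function formatMessage(template, values) {
    return template.replace(/\{(\d+)\}/g, function (match, index) {
        // Leave the placeholder as-is if no value was supplied for it
        return index < values.length ? String(values[index]) : match;
    });
}

var message = formatMessage('--->[{2}] Number: {0} Caller: {1}',
    ['INC0010001', 'Abel Tuter', 'FS:Logging Testing']);

console.log(message); // --->[FS:Logging Testing] Number: INC0010001 Caller: Abel Tuter
```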

 

gs.log is unique in that it allows us to pass the source (location) in as a parameter.  So we can migrate it from our gs.getMessage, and pass it to the log call.

 

 

var messageSansLoc = gs.getMessage('---> GETMESSAGE: \n\tNumber: {0} \n\tCaller: {1}', [number, caller]);

gs.log(messageSansLoc, location); // location as source

 

 

Here is the result.  Notice where the location ends up in the log?

 

 

gs.log is a really old bit of ServiceNow functionality, and is not Scope safe.  In other words: it does not work in a scoped application and will throw an error.  It works fine in the Global scope, but nowhere else.  So, we have been given a new set of functions to replace it in Scoped, and Global applications (the idea, as I understand it, is to refrain from using gs.log moving forward).  These new functions are:  gs.info, gs.warn, and gs.error.  These three functions have built-in variable substitution, but with limitations and quirks.

 

With all three you get “raw” variable substitution for up to five values; beyond that you have to move to an array, just like in gs.getMessage.  And as with getMessage, if you go beyond the maximum number the remainder will be ignored.

 

Here is an example using up to five values; which works fine.

 

 

// values listed - maximum number before array
gs.info('--->[{4}] MAX B4 ARRAY: \n\tNumber:\t\t{0} \n\tCaller:\t\t{1} \n\tCategory:\t{2} \n\tImpact:\t\t{3}', 
    number, caller, category, impact, location);

 

 

 

If you move to six values you will notice that the last is ignored and the variable substitution fails for the last value.

 

 

// values listed - no-array demonstration (>5)
gs.info('--->[{5}] MAX EXCEEDED: \n\tNumber:\t\t{0} \n\tCaller:\t\t{1} \n\tCategory:\t{2} \n\tImpact:\t\t{3}\n\tPriority:\t{4}', 
    number, caller, category, impact, priority, location);

 

 

 

If you then add the array brackets and rerun, it works fine, just like with gs.getMessage.

 

 

// values listed - array demonstration (>5)
gs.info('--->[{5}] ARRAY: \n\tNumber:\t\t{0} \n\tCaller:\t\t{1} \n\tCategory:\t{2} \n\tImpact:\t\t{3}\n\tPriority:\t{4}', 
    [number, caller, category, impact, priority, location]);

 

 

 

So I decided to see if it would handle something strange: an array in the last parameter slot.  This did not work.  It failed to recognize the extra variables and break them out; instead the array was treated as a single value.  Therefore, all variables must be inside one array.  Notice how it choked on the array?  You can see it in what was placed in the Priority value.

 

 

// values listed - weird array demonstration (>6)
gs.info('--->[{6}] WEIRD ARRAY: \n\tNumber:\t\t{0} \n\tCaller:\t\t{1} \n\tCategory:\t{2} \n\tImpact:\t\t{3}\n\tPriority:\t{4}\n\tUrgency:\t{5}', 
    number, caller, category, impact, [priority, urgency, location]);
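What ends up in the Priority slot is the array coerced to a string: in JavaScript an array stringifies to its comma-joined elements, which is why the whole array lands in a single placeholder.  A small sketch with assumed example values:

```javascript
// When an array is passed as one positional value, it is coerced to a
// string: Array.prototype.toString joins the elements with commas.
// The values below are assumed examples, not data from the instance.
var priority = '3';
var urgency = '2';
var location = 'Fix Script';

var coerced = String([priority, urgency, location]);
// coerced === '3,2,Fix Script' -- the entire array fills a single slot
```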

 

 

 

Here is an example of using gs.warn with an array.  It is pretty much the same as gs.info, except that the System Log entry has a level of Warning instead of Information.

 

 

// values listed - warning - array demonstration (>6)
gs.warn('--->[{6}] WARNING: \n\tNumber:\t\t{0} \n\tCaller:\t\t{1} \n\tCategory:\t{2} \n\tImpact:\t\t{3}\n\tPriority:\t{4}\n\tUrgency:\t{5}', 
    [number, caller, category, impact, priority, urgency, location]);

 

 

 

And the same with the gs.error.  No surprises.

 

 

// values listed - error - array demonstration (>6)
gs.error('--->[{6}] ERROR: \n\tNumber:\t\t{0} \n\tCaller:\t\t{1} \n\tCategory:\t{2} \n\tImpact:\t\t{3}\n\tPriority:\t{4}\n\tUrgency:\t{5}\n', 
    [number, caller, category, impact, priority, urgency, location]);

 

 

 

The nice thing about variable substitution is that you can repeat the same variable without doing anything special.

 

 

// repeating value substitution
gs.warn('--->[{1}] REPEAT: {0}, {0}, {0} Will Robinson!', spuriousText, location);

 

 

 

Variable replacement also allows for mixing the order up with ease.

 

 

// rearranged substitution order
gs.info('--->[{5}] REARRANGE: \n\tNumber:\t\t{0} \n\tImpact:\t\t{3} \n\tPriority:\t{4} \n\tCategory:\t{2} \n\tCaller:\t\t{1} ', 
    [number, caller, category, impact, priority, location]);

 

 

 

Now, let me introduce you to a bit of the real power available with variable substitution.  By the way, this works fine with gs.getMessage as well.

 

We can turn the variable substitution string into a pattern variable and pull it out of the logging statement for easier maintenance.  We can also do the same with the array.

 

 

// pattern and array variable demonstration
var pattern = '--->[{5}] PATTERN: \n\tNumber:\t\t{0} \n\tCaller:\t\t{1} \n\tCategory:\t{2} \n\tImpact:\t\t{3}\n\tPriority:\t{4}';

var values = [number, caller, category, impact, priority, location];

gs.info(pattern, values);

 

 

 

Pretty cool, huh?  So, let’s take that to the next level!  We can dynamically create the array, AND the pattern to adjust for the creation of a different message depending on changing situations in the code.  Imagine the various array pushes actually being inside condition statements and you will get the idea.

 

 

// dynamic pattern and array population example
var dynamicValues = [];
dynamicValues.push(number);
dynamicValues.push(caller);
dynamicValues.push(category);
dynamicValues.push(impact);
dynamicValues.push(priority);
dynamicValues.push(location); // push location last so it fills the [{n}] source slot

var max = dynamicValues.length - 1;
pattern = '--->[{' + max + '}] DYNAMIC: ';
for (var i = 0; i < max; i++) {
  pattern += '{' + i + '},';
}

pattern = pattern.slice(0, -1); // get rid of the trailing comma
gs.print(pattern);
gs.info(pattern, dynamicValues); // use the dynamically built array
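The pattern-building step can be isolated into a small pure function so the generated string can be checked outside the instance.  This is just a sketch (buildPattern is a hypothetical helper): it reserves the highest slot for the source/location and enumerates the rest:

```javascript
// Build a '{0},{1},...' substitution pattern for `count` values,
// reserving the highest index ({count - 1}) for the bracketed
// source/location slot at the front of the message.
function buildPattern(label, count) {
  var pattern = '--->[{' + (count - 1) + '}] ' + label + ': ';
  var slots = [];
  for (var i = 0; i < count - 1; i++) {
    slots.push('{' + i + '}');
  }
  return pattern + slots.join(',');
}

var p = buildPattern('DYNAMIC', 6);
// p === '--->[{5}] DYNAMIC: {0},{1},{2},{3},{4}'
```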

 

 

 

So, there you have it!  You now have examples of the various kinds of server-side debugging and logging available with GlideSystem, along with variable-substitution tips and tricks.

 

I went ahead and attached the XML for you to upload the Fix Script that I had created.  To import this file see the following wiki.

 

I want to highly recommend taking the ServiceNow Scripting training class should you get the opportunity.  The class has an entire module covering Workflow Scripting.

 

In my next article I will talk about a couple of logging tricks with Business Rules.

 

Steven Bell

 

accenture logo small.jpg

 

For a list of all of my articles:  Community Code Snippets: Articles List to Date

 

Please Share, Like, Bookmark, Mark Helpful, or Comment this blog if you've found it helpful or insightful.

 

Also, if you are not already, I would like to encourage you to become a member of our blog!

The "Load All Records" feature imports data into the instance in two steps: first loading, then transforming.  Loading is configured on Data Sources, while transforming is handled by Transform Maps.  Each execution is tracked by an Import Set, which records the history of the data imported.

 

testing1.png

 

On the Data Source, the Test Load 20 Records feature can cause confusion.  If you click "Test Load 20 Records" instead of "Load All Records," your data will not be transformed; the system only retrieves the first 20 records rather than all of them.

load 20 records.jpg

That does not make the feature useless.  There are cases where you do want to test-load just the first 20 records of your data.  From the data source, Test Load 20 Records allows you to:

 

  • Create the import set table if it does not exist.  The import set table is the staging table into which the data is loaded.
  • Build the transform map and map the source field names once the import set table exists.  The feature does not create the transform map itself; however, by creating the import set table it lets the system retrieve the source field names you need for mapping.

 

The main use case of the Test Load 20 Records feature is to validate that the load works correctly.

Use "Test Load 20 Records" to validate that the data load works and that the configuration (e.g. credentials) is correct.

 

There are a few things that Test Load 20 Records DOES NOT do.

  • You will not be able to transform the 20 loaded records.  Data loaded with this test will not be part of any transformation.
  • You will not be able to automatically import and transform.  The data is only loaded; it is NOT transformed.  To import and transform automatically, create a scheduled job that loads the data, which will also transform it.  To transform your data manually, go to the transform map and click "Transform."

"Test Load 20 Records" DOES NOT transform the data after it is imported.

Below is the image of a transformation map:

LDAP_test_20_transform.jpg

Test Load 20 Records vs. Load All Records

When loading and transforming data, consider which operation fits your goal.  If you just want to validate that the data loads correctly, "Test Load 20 Records" is the way to go.  If you want to run the full process and import and transform the data, use "Load All Records."

 


NOTE: MY POSTINGS REFLECT MY OWN VIEWS AND DO NOT NECESSARILY REPRESENT THE VIEWS OF MY EMPLOYER, ACCENTURE.

 

DIFFICULTY LEVEL:  INTERMEDIATE

Assumes a fundamental knowledge and/or familiarity of ServiceNow.

____________________________________________________________________________

 

From time to time the need arises at a client for a scheduled email broadcast from their ServiceNow instance.  This is actually pretty straightforward using a couple of mechanisms present in the out-of-the-box ServiceNow implementation.

 

The process is: register an event to listen for, create a notification triggered by that event, create a scheduled job that fires the event, and create the distribution group that will receive the email.

 



Lab 1.1: Create a New Event to Listen For

 

  a. Navigate to Events -> Registry

 

  b. Click the New button

 

  c. Fill in the form with the following:

 

    i. Event Name: notification.scheduled.daily

    ii. Table: -- None --

    iii. Fired By: Daily Notification scheduled job

    iv. Description: Daily Notification to do timesheet

 

 

 

Lab 1.2: Create a new Notification

 

  a. Navigate to System Policy -> Email -> Email Notifications

 

  b. Click the New button

 

  c. Scroll to the bottom of the form and in the Related Links click the Advanced View link.

 

 

  d. Fill out the form with the following:

 

    i. Name: Daily Notification Example

    ii. Table: -- None --

    iii. From the When to send tab:

 

      A. Send when: Event is Fired

      B. Event name: notification.scheduled.daily

 

 

    iv. From the Who will receive tab:

 

      A. Event parm 1 contains recipient: checked

      B. Send to event creator: unchecked

 

 

    v. From the What it will contain tab:

 

      A. Content type: HTML only

      B. Subject: Daily Timesheet Reminder!

      C. Message HTML: Please remember to fill out yesterday's time in your timesheet!

      D. Importance: High

 



Lab 1.3: Create a New Scheduled Job

 

  a. Navigate to System Definition -> Scheduled Jobs

 

  b. Click the New button

 

  c. From the interceptor choose: Automatically run a script of your choosing

 

  d. Fill out the form with the following:

 

    i. Name: Daily Notification

    ii. Run: Day

    iii. Time: Hours: 06.  This can be any time you want to send out the email.

    iv. Run this script:

 

var getGroupByName = new GlideRecord('sys_user_group');
if (getGroupByName.get('name', 'Daily Notification Group')) {
  // parm1 carries the recipient group's sys_id; the notification has
  // "Event parm 1 contains recipient" checked, so the whole group is emailed
  gs.eventQueue('notification.scheduled.daily', null, getGroupByName.sys_id + '', null);
}

 

 



Lab 1.4: Create the Distribution Group

 

  a. Navigate to System Security -> Users and Groups -> Groups

 

  b. Click the New button

 

  c. Fill out the form with the following:

 

    i. Name: Daily Notification Group

    ii. Manager: (fill in with your favorite manager)

    iii. Description: Team to be notified daily about timesheets

    iv. Right click on the form head and click Save

    v. Under the Group Members tab add the users you want to send the email to.

 

   

 

Lab 1.5: Testing!

 

a. Navigate to System Definition -> Scheduled Jobs and pick the Daily Notification job you created.

 

b. Click on the Execute Now button.

 

c. Navigate to System Scheduler -> Scheduled Jobs

 

d. Search for the Daily Notification job to verify that it is present

 

 

e. Navigate to System Logs -> Emails.  Search for an entry where Subject is Daily Timesheet Reminder! and open the most recent entry (there should only be one, right?)

 

f. Verify that the recipient list is the same as that in your group.

 

 

And that is all there is to it!  :-)

 

I want to highly recommend taking the ServiceNow Scripting training class should you get the opportunity.  The class has an entire module covering Scheduled Job Scripting.

 



 

DIFFICULTY LEVEL:  INTERMEDIATE to ADVANCED

Assumes knowledge and/or familiarity of several different areas in ServiceNow.

____________________________________________________________________________

 

Recently two of our developers (adam.keller, travisbell) came to me and asked if it were possible using Service Catalog and Workflows to do the following:

 

  1. Add multiple Users to a Group with a dynamic request for Approval
  2. Add a single User to multiple Groups with a dynamic request for Approval
  3. Each Group Approval Request/Response must not hold up the other Group Approvals.
  4. The Group Manager would be the approver for each group.

 

I thought the answer would be an easy one.  Boy, was I wrong!

 

What I attempted:

 

  1. My first thought was to simply use the Approval - User Activity and feed it a dynamic group for approval.  However, even though it would normally work for requesting group approvals, it would not fulfill the third and fourth requirements.
  2. I next tried to construct a workflow loop using the Approval - User Activity: essentially take all of the users and present them as a whole to each group for approval.  This worked fine, but that pesky third requirement got in the way again.
  3. Next I decided to try kicking off multiple workflows from a calling workflow.  This almost worked.  It behaved VERY strangely.  For example, with two groups to be approved one would wait for approval like it was supposed to, but the other one would auto-approve!  Probably a timing issue.  I will have to dig deeper on this and see if I can find out why.
  4. I then decided to try the same thing with Events.  So I created a custom event, complete with a Script Action to be called from the workflow.  Essentially this is the same as just calling a workflow.  Sure enough, same behavior as the previous scenario!  The first group waited for approval, the second group auto-approved.  Yeah, gotta be timing...
  5. The parallel-RITM approach just HAD to be the solution here, but how to do it?  I broached the subject with b-rad (Brad Tilton), and he pointed me to a response to a similar problem on the Community.  duffyb (Barbara Duffy) (response link) had worked out how to properly create a RITM in code.  Brad said he had used her solution, and it had worked for him.  The RITM would then call its requisite workflow and the approvals would work correctly.  I tried it, and it failed!  This time the group and user values were not making it from the calling workflow to the new RITMs.  The RITMs were being created fine, but the values weren't showing up.
  6. In the wind-down at a meeting, Brad and I were discussing the issue, and mamann (Mark Amann), who was present, said he would like to know more, as it sounded familiar.  Sure enough, Mark had solved something very similar: the same situation where a RITM was created in code but the variables passed into the workflow did not populate.  He had the final piece of the puzzle: you have to put a timer in the RITM workflow to wait until the passed-in variables actually arrive!  Actually arrive?  Looks like there is a timing issue going on after all!  Sure enough, when I put that last bit of magic into the code, it worked!


Who says networking doesn’t get results?!  :-)


So I decided to share.  It is obvious that this problem is recurring, and not too many people were aware of the solution.  I certainly wasn’t!

 


Prerequisites

 

  1. Some knowledge of creating Workflows with the Workflow Editor
  2. Some knowledge of creating Service Catalog Items
  3. Some knowledge of creating and scripting Workflow activities



Lab 1.1: Create the Add Users to Groups Workflow

 

1. Navigate to Workflows -> Workflow Editor.  The Workflow Editor will be displayed.

 

2. Click on the “+” button to create a new workflow.

 

a. Name: Add Users to Groups

b. Table: Catalog Item [sc_cat_item]

c. If condition matches:  Run the workflow

d. Click on the Submit button to create the workflow.

 

3. Navigate in the workflow to Core -> Utilities and place a Run Script Activity between the Begin and End activities.

 

a. Name: Initialize

b. Script:



// interestingly because of the timing issues I had to use
// a hardcoded context value with my identifier
var identifier = 'Add Users To Groups.' + activity.name;

var message = '--->\n';

// From the initial RITM
var groups = current.variables.groups + '';
var users = current.variables.users + '';

var groupList = [];

// if we have more than one they will be comma delimited
if (groups.indexOf(',') > -1) {
  groupList = groups.split(',');
}
else {
  groupList.push(groups);  // only one group
}

// just a few debug messages - remove these when going to QA
message += 'groups: ' + groups + '\n';
message += 'groupList: ' + groupList.length + '\n';
message += 'users: ' + users + '\n';
message += 'current.request: ' + current.request;

// get the sys_id of the RITM to create. In this case we
// want to create a new RITM for each group that we want 
// to request access from
var requestItemTemplate = new GlideRecord('sc_cat_item');
requestItemTemplate.get('name', 'Add Users to Group');

var ritmTemplateID = requestItemTemplate.sys_id + '';

// loop through and create a new RITM for each group, 
// attached to the parent RITM
for (var i=0; i < groupList.length; i++) {
  var group = groupList[i] + '';
  
  // undocumented feature - no API definition unfortunately
  // appears to do the following:
  // - Create new RITM based on template
  // - Copy all variables from the template to the new RITM
  // - Associate the new RITM to the current RITM
  // - Trigger the workflow (state = 2)
  var requestHelper = new GlideappCalculationHelper();
  requestHelper.addItemToExistingRequest(current.request + '', ritmTemplateID + '', 1);
  // Not sure of the function or necessity of this, but it is 
  // always present with the 2-3 examples I found
  requestHelper.rebalanceRequest(current.request + '');
  
  // Find our new RITM (it will be the most recent right?)
  var reqItem = new GlideRecord('sc_req_item');
  reqItem.addQuery('request', current.request + '');
  reqItem.addQuery('cat_item', ritmTemplateID + '');
  reqItem.orderByDesc('number');
  reqItem.setLimit(1);
  reqItem.query();

  message += '---> TOTAL: ' + reqItem.getRowCount() + '\n';
  // Now set the variable values of our new RITM to the targeted
  // group for approval, and the users we want added. Even after
  // this update is done it takes a bit for the new RITM workflow
  // to wake up to these new values
  if (reqItem.next()) {
    message += '---> Updating: ' + reqItem.sys_id + '\n';
    reqItem.variables.group = group + '';
    reqItem.variables.users = users + '';
    reqItem.parent = current.sys_id + '';
    reqItem.update();
  }
}

// debug - write out all of our results to the system log
gs.log(message, identifier);
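As an aside, the comma-delimited branching in the script above can be collapsed: String.split returns a one-element array when the delimiter is absent, so a single call covers both the one-group and many-group cases.  A plain-JavaScript sketch (toList is a hypothetical helper, not part of the workflow):

```javascript
// String.split(',') already yields a one-element array when there is
// no comma, so one call handles both single and multiple values.
// The empty-string case is special-cased to return an empty list.
function toList(commaDelimited) {
  return commaDelimited === '' ? [] : commaDelimited.split(',');
}

var many = toList('group_a,group_b'); // ['group_a', 'group_b']
var one = toList('group_a');          // ['group_a']
var none = toList('');                // []
```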

 

 

c. Click the Submit button to save your activity.

 



Lab 1.2: Create the Add Users to Group Workflow

 

1. From the Workflow Editor click on the “+” button to create a new workflow.

 

a. Name: Add Users to Group

b. Table: Catalog Item [sc_cat_item]

c. If condition matches:  Run the workflow

d. Click on the Submit button to create the workflow.

 

2. Navigate in the Workflow Editor to Core -> Timers and place a Timer Activity after the Begin.

 

a. Name: Wait for Vars

b. Timer based on: A user specified duration.

c. Duration: 15 seconds.

d. Click on the Submit button to create the Timer.

 

3. Navigate to Core -> Conditions and place an If Activity after the Timer Activity.

 

a. Name: Are Vars Filled In?

b. Advanced: Checked

c. Script:

 

 

answer = ifScript();

function ifScript() {
  if (JSUtil.notNil(current.variables.group) && JSUtil.notNil(current.variables.users)) {
    return 'yes';
  }
  return 'no';
}
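JSUtil.notNil returns true when a value is meaningfully filled in.  A rough plain-JavaScript approximation, for readers outside a ServiceNow instance (this is an assumption about its behavior, not the actual implementation):

```javascript
// Rough approximation of JSUtil.notNil: true when the value is
// neither null, undefined, nor an empty string once stringified.
function notNil(value) {
  return value !== null && value !== undefined && String(value) !== '';
}

notNil('ess'); // true
notNil('');    // false
notNil(null);  // false
```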

 

d. Click on the Submit button to create the If Activity.

 

4. Navigate to Core -> Utilities and place a Run Script Activity between the Begin and End activities.

 

a. Name: Initialize

b. Script:

 

 

var identifier = context.name + '.' + activity.name; 

var users = current.variables.users + '';
var group = current.variables.group + '';

var messages = '--->\n';

// debug - edit out for production
messages += '---> current.variables.users: ' + current.variables.users + '-' + identifier + '\n';
messages += '---> current.variables.group: ' + current.variables.group + '-' + identifier + '\n';

var userList = [];

if (users.indexOf(',') > -1) {
  userList = users.split(',');
}
else {
  userList.push(users + ''); // only one user found
}

// retrieve the list of user names
var userRecords = new GlideRecord('sys_user');
userRecords.addQuery('sys_id','IN',userList);
userRecords.query();

var userNames = [];
while(userRecords.next()) {
  userNames.push(userRecords.name + '');
}

// retrieve all the users already present in the group
var groupMembers = new GlideRecord('sys_user_grmember');
groupMembers.addQuery('group', group);
groupMembers.query();

// remove already existing members from the list
while (groupMembers.next()) {
  var member = groupMembers.user + '';
  if (userList.indexOf(member) > -1) {
    messages += '---> Member removed: ' + member + '\n';
    userList.splice(userList.indexOf(member), 1); // remove the already existing member
  }
}

// retrieve group manager, and group name
var groupInfo = new GlideRecord('sys_user_group');
if (groupInfo.get('sys_id', group)) {
  workflow.scratchpad.manager = groupInfo.manager + '';
  workflow.scratchpad.groupName = groupInfo.name + '';

  messages += '---> workflow.scratchpad.groupName: ' + workflow.scratchpad.groupName + '-' + identifier + '\n';
  messages += '---> workflow.scratchpad.manager: ' + workflow.scratchpad.manager + '-' + identifier + '\n';
}

workflow.scratchpad.userNames = userNames;

workflow.scratchpad.group = group;
workflow.scratchpad.users = userList;

workflow.scratchpad.messages = messages;
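The duplicate-removal loop in the script above can also be expressed as a filter.  A plain-JavaScript sketch (removeExisting is a hypothetical helper; existingMembers stands in for the user sys_ids returned by the sys_user_grmember query):

```javascript
// Keep only the users who are not already members of the group.
// existingMembers stands in for the sys_ids found in sys_user_grmember.
function removeExisting(userList, existingMembers) {
  return userList.filter(function (userId) {
    return existingMembers.indexOf(userId) === -1;
  });
}

var remaining = removeExisting(['u1', 'u2', 'u3'], ['u2']);
// remaining: ['u1', 'u3']
```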

 

 

c. Click the Submit button to save your activity.



5. Navigate to Core -> Conditions and place an If Activity after the Initialize Run Script Activity.

 

a. Name: Any users to process?

b. Advanced: Checked

c. Script:

 

 

answer = ifScript();

function ifScript() {
  if (workflow.scratchpad.users.length > 0) {
    return 'yes';
  }
  return 'no';
}

 

 

d. Click the Submit button to save your activity.



6. Place another If Activity after the Any Users to Process If Activity.

 

a. Name: Is Manager Present?

b. Advanced: Checked

c. Script:

 

answer = ifScript();

function ifScript() {
  if (JSUtil.notNil(workflow.scratchpad.manager)) {
    return 'yes';
  }
  return 'no';
}

 

 

7. Navigate to Core -> Approvals and place an Approval - User Activity after the If Activity.

 

  So here is our dynamic approval.  Simple, huh?  :-)  Remember requirement #4.  We only want to request approval from the group manager.

 

a. Name: Get Group Manager Approval

b. Advanced: Checked

c. Script:

 

 

var identifier = context.name + '.' + activity.name;
workflow.scratchpad.messages += '---> approval group: ' + workflow.scratchpad.group + '-' + identifier + '\n';

answer = [];
answer.push(workflow.scratchpad.manager);

 

 

d. Click the Submit button to save your activity.



8. Navigate to Core -> Utilities and pull out five Run Script Activities.

 

a. First activity:

 

i. Name: Log Users Already Exist

ii. Script:

 

 

workflow.scratchpad.messages += '---> Group (' + workflow.scratchpad.groupName + '): All users already exist in the group, or no users to process!  Process ending.\n'; 

 

 

iii. Click the Submit button to save your activity.

 

b. Second activity:

 

i. Name: Log Manager Not Present

ii. Script:

 

 

workflow.scratchpad.messages += '---> Group (' + workflow.scratchpad.groupName + '): The Group Manager was not found. The manager must be present! Process ending.\n';

 

 

iii. Click the Submit button to save your activity.

 

c. Third activity:

 

i. Name: Log Request Rejected

ii. Script:

 

 

workflow.scratchpad.messages += 'Users ('+ workflow.scratchpad.userNames +') rejected for group ---> ' 
  + workflow.scratchpad.groupName + '\n';

 

 

iii. Click the Submit button to save your activity.

 

d. Fourth activity:

 

i. Name: Add Users to Group

ii. Script:

 

 

addUsersToGroup(workflow.scratchpad.users, workflow.scratchpad.group);

function addUsersToGroup(userIDList, groupID) {
  for (var i = 0; i < userIDList.length; i++) {
    workflow.scratchpad.messages += '\tUser (' + userIDList[i] + ') added to group: '
      + workflow.scratchpad.group + '\n';

    var groupMember = new GlideRecord("sys_user_grmember");
    groupMember.initialize();
    groupMember.user = userIDList[i] + '';
    groupMember.group = groupID + '';
    groupMember.insert();
  }
}

 

iii. Click the Submit button to save your activity.

 

 

e. Fifth activity:

 

i. Name: Log Messages

ii. Script:

 

 

var identifier = context.name + '.' + activity.name; 

gs.log(workflow.scratchpad.messages, identifier);

 

 

iii. Click the Submit button to save your activity.


9. Navigate to Core -> Approvals and pull out an Approval Action Activity.

 

a. Name: Auto-Reject

b. Action: Mark task rejected.

 

 

10. Wire everything up to look like the following diagram:

 

 

NOTE:  The combination of the If and Timer activities make the workflow wait until the If condition is met.  In this case the variables are filled in.

 

11. Publish your workflow.  In Helsinki I found that sometimes my changes were not picked up by the Service Catalog unless I did this!  So if modifications you made do not seem to appear during testing, this is the solution.



Lab 1.3: Create the Service Catalog Interface

 

1. Navigate to  Service Catalog -> Maintain Categories and create a new Category.

 

a. Title: User and Group Exercises

b. Catalog: Service Catalog

c. Right-click on the header and save your new category.

 

2. Add a new Catalog Item

 

          NOTE: If you do not see some of the following fields on your form you may have to add them to your form.

 

a. Name: Add Users to Groups

b. Workflow: Add Users To Groups

c. Short description: Add a user to several groups

d. Use cart layout: unchecked

e. Omit price in cart: checked

f. No quantity: checked

g. No proceed to checkout: checked

h. No cart: checked.

i. Right-click on the header and save your new catalog item.

 

3. Add two variables to the new Catalog Item

 

a. Variable one

 

i. Type: List Collector

ii. Question: User(s)

iii. Name: users

iv. Mandatory: true

v. List table: User [sys_user]

vi. Reference Qualifier: active=true

vii. Order: 100

viii. Click on the Submit button to save your variable.

 

b. Variable two

 

i. Type: List Collector

ii. Question: Group(s)

iii. Name: groups

iv. Mandatory: true

v. List Table: Group [sys_user_group]

vi. Reference Qualifier: active=true

vii. Order: 200

viii. Click on the Submit button to save your variable.

 

Your Catalog Item should look something like this:

 

 

4. Add another new Catalog Item

 

a. Name: Add Users to Group

b. Workflow: Add Users To Group

c. Short description: Add users to group

d. Use cart layout: unchecked

e. Omit price in cart: checked

f. No quantity: checked

g. No proceed to checkout: checked

h. No cart: checked.

i. Right-click on the header and save your new catalog item.

 

5. Add two variables to the new Catalog Item

 

a. Variable one

 

i. Type: Reference

ii. Question: Group

iii. Name: group

iv. Mandatory: true

v. List table: Group [sys_user_group]

vi. Reference Qualifier: Simple

vii. Order: 100

viii. Click on the Submit button to save your variable.

 

b. Variable two

 

i. Type: List Collector

ii. Question: Users to add to group

iii. Name: users

iv. Mandatory: true

v. List Table: User [sys_user]

vi. Reference Qualifier: active=true

vii. Order: 200

viii. Click on the Submit button to save your variable.

 

Your Catalog Item should look something like this:

 

 

Lab 1.4: Testing

 

1. First we need to activate our new Category.  Navigate to Self-Service -> Service Catalog.

 

2. Click on the “+” button in the upper right corner to open the Sections form.

 

3. Add User and Group Exercises someplace on the form.

 

4. Click the “x” button to close the Sections form.

 

5. Now we can test our new Catalog Item.  Click on the User and Group Exercises link.

 

6. Click on the Add Users to Groups link.

 

Your Service Catalog Item form should look a bit like this:

 

 

7. Pick your favorite group, and a couple of users to add, and click the Order Now button.  The Order Status form will be displayed. 

 



8. Click on the first request number created (in my case REQ0010081).  The Request form will be displayed. 

 

9. Scroll to the bottom of the request form and note that there are three RITMs.  The first RITM is the main workflow (kicked off by this request), and the other two were generated by that workflow.

 

10. In the related list order the RITMs by Number ascending and click on the first (lowest numbered) record.  For me it would be RITM0010099.

 

11. Scroll to the Related Links and click on the Show Workflow link.  This will display the workflow context diagram.  It should look something like this:

 

 

12. Close this browser tab, and return to the Request Item tab.

 

13. Back arrow on your browser to get back to the Request form, scroll to the related links, and pick the second RITM record.

 

14. Scroll to the bottom of the RITM form.  You should see one or more approvers (depending on how many were already present in your group).

 

 

15. In related links for the RITM click on the Show Workflow link.  It should look something like this:

 

 

NOTE: By definition if there is no one already present in the group it will auto-approve.  So you may want to add at least one user to your group before you run this test.

 

16. Close this browser tab, and return to the Request Item tab.

 

17. From the Approvers tab, go ahead and approve one of the approvers.  You should see a couple of roles added messages appear at the top of the form.

 

 

18. Back arrow on your browser to get back to the Request form, scroll to the related links, and pick the second RITM record.  Look at the workflow (it should be the same as the other).  Approve this RITM as well.

 

19. Now look at that RITM's workflow again.  You will observe the completion of the flow.

 

 

20. Finally navigate to User Administration -> Groups, and open your group.  You should see all the newly added users in the Group Members tab.

 

Obviously, other tests you will want to perform are:

 

1. Do all users exist in the group?

2. Is the manager field filled in on the group?

3. Manager rejects the request

 

And you are done!

 

You will probably want to go the extra step of notifying the requestor of the results of each request, and you could also set up the status for the user to track where their request currently is.  Just a couple of cleanup items.  :-)

 

I want to highly recommend taking the ServiceNow Scripting training class should you get the opportunity.  The class has an entire module covering Workflow Scripting!

 

