
A question that I get asked a lot is how to use custom fonts in Service Portal.

Here are the three primary ways:

Option 1:

The easiest option is through Google Fonts.

  1. Select the Google font you want to use.
  2. Copy the font’s style sheet URL.
  3. Go to your theme and add a new CSS Include.
  4. Make sure the “Source” is set to URL and then paste the CSS URL.
  5. Click save.

Now you can reference the font in your CSS.
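
For example, if the font you selected were Roboto (an illustrative choice, not a requirement), your theme CSS could use it like this:

body {
    font-family: 'Roboto', sans-serif;
}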


 

Option 2:

  1. You’ll need to encode your fonts using base64 and then include them in the CSS Includes of your theme. You can use this free tool by Font Squirrel: Create Your Own @font-face Kits | Font Squirrel.
  2. Use the “expert” option, then you will see an option for base64 encoding in the CSS section.
  3. Select “Base64 Encode.”
  4. Once exported, add the generated code as a CSS include on your theme.
    For more information, see: Learn how to create custom CSS in your theme.
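
The generated kit CSS resembles the following; the font name is a placeholder and the base64 data is heavily truncated here:

@font-face {
    font-family: 'MyCustomFont';
    src: url(data:application/font-woff;charset=utf-8;base64,d09GRgABAAAAA...) format('woff');
    font-weight: normal;
    font-style: normal;
}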


Option 3:

Another approach is to upload your font files as attachments to the CSS Include record and then reference them via “sys_attachment.do?”, passing the attachment’s sys_id as a parameter.

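A minimal sketch of what that CSS looks like; the sys_id shown is a placeholder for your attachment's actual sys_id:

@font-face {
    font-family: 'MyCustomFont';
    src: url('sys_attachment.do?sys_id=<attachment_sys_id>') format('woff');
}

body {
    font-family: 'MyCustomFont', sans-serif;
}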

 

For additional information on CSS fonts, here’s an article that I have found to be very helpful.

If you find this useful, let me know in the comments below.

--------------------------------
Nathan Firth
Principal ServiceNow Architect
nathan.firth@newrocket.com
http://newrocket.com
http://serviceportal.io

Did you know that you can enable Multi-Factor Authentication on your Personal Developer Instance in only a few minutes? It is true. We recently published a video that walks through the few simple steps.

 

 

 

It breaks down to this:

 

  • Log in to your developer instance (or request one at the Developer Portal if you don't already have one)
  • Enable the Integration - Multifactor Authentication plugin on your instance.
  • Go to the Multi-Factor Authentication properties and enable it. Make sure you have a sufficient number of attempts to log in without MFA, or you can lock yourself out of the instance without much recourse. The default is 3 and shouldn't be set lower.
  • Edit your User form to include the "Enable Multi-Factor Authentication" checkbox.
  • Open the record(s) for the accounts to which you want to add MFA.
  • Log in as that user. You will be prompted to create a Google Authenticator account for this account on this instance. Pair it with your authenticator app.
  • At this point, you'll need the authenticator code to log in to this account going forward.

 

If you'd like to increase the security of your Developer Instance, this will do it. Now even a brute-force attempt at guessing logins will still require the authenticator code, which makes it even less feasible. Note that this works for any ServiceNow instance, but I am speaking to the Developer Program because these are my people. Rock on.

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

In this post we’re going to create a Service Portal widget that displays a United States map, coloring the individual states based on the number of incidents opened in that state relative to the other states. We are going to use D3.js to accomplish this. If you are unfamiliar with D3, check out this previous post introducing D3 to ServiceNow. This post will follow an example from the D3 community which can be found here.

 

To create our state map, we will need to grab our incident data in our server script, create a dependency on a new UI script that contains the coordinates for the state map, and pass our data to the UI script from our client script.

 

Server script

 

In our server script, we will create an array of objects within the data object. This array will contain 50 objects; each representing a state. Once that array of objects is created, we will use a pair of GlideAggregate calls to retrieve the counts of open and closed incidents by location. We’ll use these calls to populate our array of objects with the counts for each state.

 

Below is a screenshot of the server script along with the pasted script:

 

Post 3 Server Script.png

 

 

(function() {
    // Create an array of state abbreviations in the same order as our UI script
    var states = ["HI", "AK", "FL", "SC", "GA", "AL", "NC", "TN", "RI", "CT", "MA",
        "ME", "NH", "VT", "NY", "NJ", "PA", "DE", "MD", "WV", "KY", "OH",
        "MI", "WY", "MT", "ID", "WA", "DC", "TX", "CA", "AZ", "NV", "UT",
        "CO", "NM", "OR", "ND", "SD", "NE", "IA", "MS", "IN", "IL", "MN",
        "WI", "MO", "AR", "OK", "KS", "LA", "VA"];

    // Create an array of objects in our data object with placeholder properties
    data.states = [];
    for (var i = 0; i < states.length; i++) {
        data.states.push({state: states[i], open: 0, closed: 0});
    }

    // Find counts of open incidents by location
    var openCount = new GlideAggregate('incident');
    openCount.addQuery('active', 'true');
    openCount.addAggregate('COUNT', 'location');
    openCount.query();
    while (openCount.next()) {
        var openState = openCount.location.state;
        var openStateCount = openCount.getAggregate('COUNT', 'location') * 1;
        for (i = 0; i < data.states.length; i++) {
            // Increase the open property if there is a match with a state
            if (openState == data.states[i].state) {
                data.states[i].open += openStateCount;
                break;
            }
        }
    }

    // Find counts of closed incidents by location
    var closedCount = new GlideAggregate('incident');
    closedCount.addQuery('active', 'false');
    closedCount.addAggregate('COUNT', 'location');
    closedCount.query();
    while (closedCount.next()) {
        var closedState = closedCount.location.state;
        var closedStateCount = closedCount.getAggregate('COUNT', 'location') * 1;
        for (i = 0; i < data.states.length; i++) {
            // Increase the closed property if there is a match with a state
            if (closedState == data.states[i].state) {
                data.states[i].closed += closedStateCount;
                break;
            }
        }
    }
})();

 

UI script

 

Similar to the widget dependency we created in the previous posts, we will create dependencies to call D3 and also to call a UI script that we will create. The code for our new UI script can be found here. It doesn’t matter what you name this UI script as long as you remember what you name it; I named mine u_states. Once you have created this UI script, reference it with a widget script dependency.

 

Client Script

 

Our client script will be used to determine what color each state should be, define the HTML template that will be used as a tooltip, and make the call to our new UI script to actually draw our state map.

 

We will loop through the array of objects that we created in our server script and use D3’s interpolate function to determine each state’s color. The interpolator lets us define a range between two colors: one for the lowest incident density and one for the highest. Based on a given state’s incident count, its color will fall somewhere in this range. For this example we will use #ffffcc as our low color and #800026 as our high color, but you can use any colors you want.

 

Below is a screenshot of my client script along with the pasted script:

 

Post 3 Client Script.png

 

function() {
    /* widget controller */
    var c = this;

    // Find the max number of open incidents in a single state
    var maxOpen = d3.max(c.data.states, function(d) { return d.open; });

    // Create object containing the state data and determine what color
    // each state should be
    var mapData = {};
    c.data.states.forEach(function(d) {
        mapData[d.state] = {
            color: d3.interpolate("#ffffcc", "#800026")(d.open / maxOpen),
            open: d.open,
            closed: d.closed
        };
    });

    // Define the HTML for the tooltip
    function tooltipHtml(n, d) {
        return "<h4>" + n + "</h4><table>" +
            "<tr><td>Open </td><td>" + (d.open) + "</td></tr>" +
            "<tr><td>Closed </td><td>" + (d.closed) + "</td></tr>" +
            "<tr><td>Total </td><td>" + (d.open + d.closed) + "</td></tr>" +
            "</table>";
    }

    // Calls the draw function from our u_states UI script which uses
    // D3 to draw our map
    uStates.draw("#statesvg", mapData, tooltipHtml);

    d3.select(self.frameElement).style("height", "800px");
}

 

HTML

 

Below is the pasted HTML I used for this widget:

 

<div id="tooltip"></div>
<div style="text-align: center;">
    <h2>Incident State Map</h2>
    <svg width="960" height="600" id="statesvg"></svg>
</div>

 

CSS

 

Below is the pasted CSS I used for this widget:

 

.state {
    fill: none;
    stroke: #a9a9a9;
    stroke-width: 1;
}
.state:hover {
    fill-opacity: 0.5;
}
#tooltip {
    position: absolute;
    text-align: center;
    padding: 20px;
    margin: 10px;
    font: 12px sans-serif;
    background: lightsteelblue;
    border: 1px;
    border-radius: 2px;
    pointer-events: none;
}
#tooltip h4 {
    margin: 0;
    font-size: 14px;
}
#tooltip {
    background: rgba(0,0,0,0.9);
    border: 1px solid grey;
    border-radius: 5px;
    font-size: 12px;
    width: auto;
    padding: 4px;
    color: white;
    opacity: 0;
}
#tooltip table {
    table-layout: fixed;
}
#tooltip tr td {
    padding: 0;
    margin: 0;
}
#tooltip tr td:nth-child(1) {
    width: 50px;
}
#tooltip tr td:nth-child(2) {
    text-align: center;
}

 

Final product

 

Below is a screenshot of my finished widget. Now that we have the basic framework for a map widget, we can theoretically create a widget with any map and use any data from ServiceNow. The main part that we skipped for this post is the actual creation of the SVG map coordinates, but there are plenty of online resources that help with that process (potentially a future post?).

 

Post 3 Map.png

 

 

Sources

 

- https://d3js.org/

- US State Map

 

For a full collection of my posts, visit http://mitchstutler.com/blog

 

 

NOTE: MY POSTINGS REFLECT MY OWN VIEWS AND DO NOT NECESSARILY REPRESENT THE VIEWS OF MY EMPLOYER, ACCENTURE.

Service Portal in Helsinki has opened up a new world of UI opportunities. The modular nature of functional widgets, which combine AngularJS, CSS/SCSS, client scripting, and server scripting, has unleashed a whole new level of creativity. Already on Share you can find a bunch of new widgets, and it’s only the beginning. However, the functionality that you're seeing is no longer restricted to just the Service Portal experience.

 

One key feature to understand is that your Theme (Header/Footer, CSS and JS) is attached to your Portal. When you navigate to your Portal suffix it then knows to serve up the page, including the Theme defined at the Portal record level.

 

Here’s what that looks like:

Standard page served up within a Service Portal’s theme (in this case, a specific Catalog category):

1.JPG

The URL for this is: {your-instance}.service-now.com/sp/?id=sc_category

  • Implies a Service Portal with a URL suffix of “sp”
  • Implies a Page ID of “sc_category”
  • The URL drives how the page is built

 

Now here is a standard page served up WITHOUT a portal & theme (no header):

2.JPG

The URL for this is: {your-instance}.service-now.com/$sp.do?id=sc_category

  • You’ll notice the $sp.do which is a Service Portal call to serve up a specific Portal Page ID.
  • You’ll notice the /sp/ has been omitted from the URL implying we are not loading this page within a Service Portal (and in doing so without the associated Theme).

 

HOWEVER, there are 2 other ways you can use this same page:

 

1. Include the Page within the Theme INSIDE the Platform UI:

3.JPG

The URL for this is: {your-instance}.service-now.com/nav_to.do?uri=/sp/?id=sc_category

  • You'll notice the double header, however: because you're using /sp/ (if that's your Portal suffix), you're telling it to serve up the entire Portal, including its Theme

 

2. Include the Page WITHOUT the Theme INSIDE the Platform UI:

4.JPG

The URL for this is: {your-instance}.service-now.com/nav_to.do?uri=$sp.do?id=sc_category

  • You can see the header has been removed: since the URL does not contain /sp/ (the Portal suffix), the Theme is not applied.

 

So what does this mean and/or why should I care?

  • You can now elevate your UI page experience: pages can be AngularJS-driven and served up via Service Portal, or remain Jelly-based as you had them before.
  • Build once, use many - use widgets that you're producing for the employee portal within the platform view as well.
  • Design your own Fulfiller experience if you want! You don't necessarily have to wait for the platform to change UI elements; you can start to build out your own.

 

It's going to be great to see how people start to take this to a whole new level!

Let's connect:

twitter.com/mattmetten
www.mattmetten.com

During my tenure at ServiceNow, I have always stressed the importance of "data-driven" code.  What I mean is: make workflows, business rules, etc. dependent on tables and records in ServiceNow that can be maintained outside of your internal enhancement release process.  In other words, I shouldn't have to promote code to change something as simple as an approver in a workflow.  I find that ServiceNow administrators are often bogged down maintaining data instead of enhancing the process to be more efficient and save time.  Examples:

  • Use the task's configuration item whenever possible to store important process attributes for that particular item.  In a workflow, "dot-walk" to the task's CI for things like Approval Group, Support Group, Owned By, Location, etc. and leverage those attributes instead of hard-coding the values in a workflow or script (see the sketch after this list).
  • Create your own custom tables to store data in support of your process.  Does the incident category really need to be a choice field to which only admins can add choices?  No!  You can easily create a custom category table and change the category field to a reference instead.  Then create ACLs to allow users to maintain this data for you.
  • Don't be afraid to add attributes to out of the box tables like locations and departments.  I have seen cases where locations have a specific support group for that campus, building, or floor.  Instead of creating code to determine the group based on the location in the task, simply add a Support Group attribute to the location record that can be maintained outside of code and use that in your workflows and code.
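
As a minimal sketch of the first idea, a workflow or business rule script on a task could dot-walk to the CI like this (u_support_group is a hypothetical custom field added to the CI table, not an out-of-the-box attribute):

// Derive the assignment group from the task's CI instead of hard-coding it
if (!current.cmdb_ci.nil() && !current.cmdb_ci.u_support_group.nil()) {
    current.assignment_group = current.cmdb_ci.u_support_group;
}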

 

Coding in this way takes more time up front to do the right look-ups, but it will save you a ton of time in the long run and make your ServiceNow administrators happy.  Plus you will have the ability to "delegate" the maintenance of this data to people outside the ServiceNow administration group if you so choose.  I cringe every time I hear of administrators being asked to manually make changes to data in ServiceNow "just because" they are the only users that have access to update that data.  Mistakes can and will happen!  So instead, modify the ACLs, create access, etc. for the users that own that data to do it themselves.

 

Easy Import:

I am sure this all sounds good, but I commonly get a follow-up question: how can non-administrators maintain data in these custom tables, especially if there are a lot of rows to maintain?  The answer is usually to import a spreadsheet.  Unfortunately, data imports in ServiceNow are an admin function and import sets can be very confusing to set up.  The Fuji release introduced "Easy Import", which will automatically create an import and update template for you:

http://wiki.servicenow.com/index.php?title=Easy_Import#gsc.tab=0

 

 

Unfortunately the Easy Import feature is only available to administrators out of the box, but this can be changed.  Navigate to System UI \ UI Context Menus, search for Name = Import, and open the record.

 

You will see this is a global action that is available to anyone with the 'admin' role (see the condition).  If you want to make this feature available on a specific table, you can easily clone this record and set it for a specific table and a specific role.  Simply change the table from Global to your specific table and then change the condition to something more appropriate:

Condition Example: gs.hasRole('YOUR-CUSTOM-ROLE') && !ListProperties.isRelatedList() && !ListProperties.isRefList()

 

  • gs.hasRole('YOUR-CUSTOM-ROLE') part of the condition checks the logged in user's roles to see if it matches the role between the quotes.
  • !ListProperties.isRelatedList() part of the condition prevents this action from showing up on related lists.
  • !ListProperties.isRefList() part of the condition prevents this action from showing up on reference list popups.

 

You may also want to change the name from Import to something else, because administrators will see duplicate actions when they are logged in; a distinct name makes it clear which one is which.  Then click Additional Actions > Insert.  Once this is done, non-administrators will have access to Easy Import for your specific table.

 

Even Easier Import:

Now truth be told, while Easy Import is an awesome feature, it can still be somewhat confusing, especially to non-technical people.  By default it also allows for inserting and updating of every field on the table.  What if you wanted to simply provide a locked-down Excel import template with a fixed list of columns and allow users to import data into ServiceNow?  Again, out of the box, importing spreadsheets is an admin function, but fortunately there is another way: Service Catalog Record Producers.  Record Producers are a very powerful platform feature with many uses.  They are great because they are accessible from the Service Catalog that all users have access to, they can utilize User Criteria to restrict or enable access, they can put data into any table in ServiceNow, and they can call a script.

 

In order to make this write-up easier, I am choosing to walk you through importing data into an out-of-the-box table.  But the concept of creating an import template that is loaded by a record producer can be applied to any table in ServiceNow, as the process and code are very similar.  Let's first start with a use case to set the context of what I will be walking you through...

 

During conversations about incident and change management, customers often ask "how can I associate an incident or change to 100s to 1000s of CIs?"  The Affected CI related list is the best out-of-the-box solution, allowing you to list all of those CIs.  The Geneva release introduced a new UI to add affected CIs to a change record, and this can certainly be extended to other tables like incident and problem, but sometimes importing a spreadsheet of CIs can be easier, especially if this is a change that you perform on a recurring basis.

 

The steps below will walk you through the necessary pieces to make this work: Import Set table, transform map, and record producer.  Once complete, users will be able to access this feature from the service catalog, download the import template, and be prompted for the task to associate with the list of CIs.

 

  • First we need to create the import set staging table and transform map.  I won't be going into every detail about import sets since it is well documented.
    • Create the import template that you would like your users to utilize.  Name your columns in words that the end users will understand.
      • In my example use case, I created an Excel spreadsheet with one column for the Configuration Item, though again you can add any number of columns to the spreadsheet.  Since I don't want the users to have to enter the change number 100s to 1000s of times on the spreadsheet, I will prompt for the Task in a record producer variable.
      • Populate the spreadsheet with test data and save the spreadsheet somewhere on your computer.
    • Navigate to System Import Sets \ Load Data.  Choose Create table, name your import set table and then choose your import template.
    • Click Submit.  ServiceNow will automatically create an Import Set staging table for you and import the data from the spreadsheet.
    • Once complete, click Loaded data.  Since we are prompting for the Task in the record producer, we need a place to store the task ID so we need to add a new field to the import set table.
    • While viewing the import set table, Affected CI Imports in my use case, click one of the "hamburger" icons beside a column header, then choose Configure, and finally Table.
    • Write down the Name of your import set table since you will need it later in the setup.
    • Click New in the Columns section to create a new field.
    • Enter the following information:
      • Type: String
      • Column label: Task ID
      • Column name: u_task_id
        • Write down the name of your new column since you will need it later in the setup.
      • Max length: 40
    • Click Submit to create the new field.
    • Click Update on the Affected CI Import table record so you are taken back to the Affected CI Imports list of imported records.
    • Click Transform Maps under Related Links on the Affected CI Imports list so we can create a new transform map for this new table.
    • Since we don't have a transform map yet, the list will be empty; click New to create a new Transform Map.
    • Name your Transform Map and set the Target Table.  In my example use case the target table is CIs Affected (task_ci).  All other fields can remain default.
    • Click Mapping Assist under Related Links.
      • If your spreadsheet column names match the field labels, you can click Auto Map Matching Fields instead which will automate the creation of field maps.
      • Don't click the submit button because that will require extra steps to further create the field maps.
    • Map your source fields to the target table fields.  In my example use case there are two field maps: Configuration Item to Configuration Item and Task ID to Task.
    • Click Save.
    • Since the Configuration Item field is a reference, you can make further adjustments, like setting whether to create a record in the reference table if the CI in the spreadsheet isn't found in the CMDB.  We don't want that to happen, so let's edit the field map.  More details can be found here: Creating New Transform Maps - ServiceNow Wiki
      • In the Field Maps related list at the bottom, click "u_configuration_item" to edit this record.
      • Set Choice action to Reject, since in our example use case we don't want to process the row if the CI entered in the spreadsheet is not valid.
        • In other use cases you may want to set it to Ignore if you have additional columns in your spreadsheet and you want to process the row but just ignore the invalid value in the one column.
        • Other cases you may want to create a record in the target table so you can choose create.
        • You may also find the Referenced value field name attribute useful.  In my example use case I am expecting the CI's name to match a record in the CMDB, but what if you prefer to enter the CI's serial number or asset tag instead?  You can enter the column name (database column name, not label name) in this field and it will perform a lookup against that field instead of the default name.
      • Click Update.
    • Click the "hamburger" Additional Actions button and choose Copy sys_id and paste this into a text file because we will need it later in the setup.
    • We are now done with the Import Set Components.
  • Second we need to create a Service Catalog Record Producer for users to access from the catalog that will provide a link to download the import template as well as prompt for the task to link the list of CIs.  The approach will be that the record producer creates an Import Set Data Source record with the Excel import file attached to it.  The record producer script will automatically execute the processing and transforming of the Excel file.
    • Navigate to Service Catalog \ Catalog Definitions \ Record Producers and click New.
    • Set the Name and Short description to something that will make sense to your users, in my example I am setting both to "Affected CI Import".
    • Set the Table name to Data Source (sys_data_source).
    • For easy access and administration we will attach the import template directly to this record producer.  Either drag and drop your Excel import template into your browser or click the paperclip to browse for it.
    • Right-click on your attachment and choose Copy link address in Chrome or Copy link location in Firefox, etc.
    • Now that we have the URL for the import template, we can add a clickable link in the Description text.
    • Set the Description to provide instructions for your users.  In my example description, step 1 includes a step to download the template by "clicking here".  We can make the click here a clickable link.
    • After entering the description text, highlight the text you want to make the clickable link to download the template and then click the Insert/edit link button.
    • Paste in the URL into the URL field and then click OK.
    • Click the Accessibility tab and choose the Catalog(s) that you want this Record Producer to be in along with the category within that catalog.
    • Click the "hamburger" Additional Actions button and choose Save so we can add the Task reference variable.
    • Scroll to the bottom of the form to the Variables related list and click New.
    • Set the following fields:
      • Type (Top of form): Reference
      • Mandatory (Top of form): true
      • Question (Question Section): Task Number
      • Name (Question Section): task_number
      • Reference (Type Specifications Section): Task (task)
        • You could specify a specific type of task like change_request
        • You could also specify a Reference qualifier condition such as active is true
    • Click Submit.
    • Now we need to set the script to run when the record producer is submitted.  Go back to the What it will contain tab and scroll to the script and paste in the following script.  The script has embedded comments to explain what everything is doing.
// Set the following variables with the name of your import set table and task id column
var importSetTableName = "u_affected_ci_";
var importSetTaskIDFieldName = "u_task_id";
var transformMapID = "63f9ee304f8a2e00d1676bd18110c74c";

// Setup data source for attachment
current.name = "Affected CI Import for:  " + producer.task_number.getDisplayValue();
current.import_set_table_name = importSetTableName;
current.file_retrieval_method = "Attachment";
current.type = "File";
current.format = "Excel";
current.header_row = 1;
current.sheet_number = 1;
current.insert();

// Process excel file
var loader = new GlideImportSetLoader();
var importSetRec = loader.getImportSetGr(current);
var ranload = loader.loadImportSetTable(importSetRec, current);
importSetRec.state = "loaded";
importSetRec.update();

// Update processed rows with task sys_id
var importSetRow = new GlideRecord(importSetTableName);
importSetRow.addQuery("sys_import_set", importSetRec.sys_id);
importSetRow.query();
while (importSetRow.next()) {
    importSetRow[importSetTaskIDFieldName] = producer.task_number;
    importSetRow.update();
}

// Transform import set
var transformWorker = new GlideImportSetTransformerWorker(importSetRec.sys_id, transformMapID);
transformWorker.setBackground(true);
transformWorker.start();

// Take user to task
gs.addErrorMessage("Data import may take time to load; please reload the record to see all the Affected CIs.");
var redirectURL = "task.do?sys_id=" + producer.task_number;
producer.redirect = redirectURL;

// Since we inserted data source already, abort additional insert by record producer
current.setAbortAction(true);

 

    • Set lines 2-4 within the script using the information you copied down in the earlier steps.  If you were following along and naming everything exactly as I provide in these instructions the importSetTableName and importSetTaskIDFieldName variables should be similar, but you will need to paste in the SysID of the transform map you created.
    • Click Update.
    • An additional idea is to create a Catalog Client Script that ensures there is an attachment on the record producer before proceeding.  Check the community for solutions on how to do this.
  • You have now completed creating the record producer.

 

Now it's time to test!  Cross your fingers that you followed along closely and that this will work on the first try.

  • Navigate to the Service Catalog and to the category you chose to add your record producer and click it.
    • Or feel free to open the record producer again and click Try it.
  • Be sure to test that the template download link works.
  • Choose a task you want to test with, and attach a completed import template with a list of Configuration Items.
  • Click Submit.
  • It will take a few seconds to start processing the data load, but the record producer script will take you to the task you chose so you can view the list of Affected CIs that were imported.  As noted in the message at the top of the screen, it may take several seconds to process the entire data load, so reloading the record may be required to validate.

 

Hopefully you found this useful.  Again, I chose to use an out-of-the-box table as an example, but these steps can be applied to any table in ServiceNow.  The record producer script is generic enough to plug in your own tables and additional steps.  Enjoy!


Please mark this post or any post helpful or the correct answer so others viewing can benefit.

In ServiceNow, we use scripts to extend your instance beyond standard configurations. When creating scripts like business rules or client scripts, we use JavaScript, a well-known and popular language that has benefited from significant investment over the years. JavaScript is a high-level, dynamic, untyped, interpreted programming language with a feel similar to Java. As powerful as it is, debugging can sometimes be complex. To make it less complex, you can simplify your code using a code optimizer. Code optimization is any method of code modification that improves code quality and efficiency: a program may be optimized so that it becomes smaller, consumes less memory, executes more rapidly, or performs fewer input/output operations. Our business rule and client script editors do have a script syntax analyzer, but you can use the Google Closure Compiler as an additional tool to simplify your code.

 

chromespeedo.png

 

Here are 3 use case examples of how to utilize the Closure Compiler debugger tool:

  • Validating if the code syntax is correct, without having to run it
  • Simplifying logical operations (e.g. complex conditions)
  • Simplifying complex functions

 

Validating if the code syntax is correct, without having to run it

If you find complex code that makes some sense, but you suspect it is incorrect, run it through the JavaScript optimizer. This is similar to our own syntax validation in the script editor. For example, let's check our password validation code for errors and inaccuracies using the debugger.

 

Here is an example of password validation:

gs.info("good password: " + CheckPassWd8CharSAnd3ofUpperLowerNumberNonAlpha("This4isAgoodPassw0rd!"));
gs.info("bad password: " + CheckPassWd8CharSAnd3ofUpperLowerNumberNonAlpha("thisisbadpassword"));

// The password must be 8 characters long (this I can do :-)).
// The password must then contain characters from at least 3 of the following 4 rules:
// * Upper case * Lower case * Numbers * Non-alpha numeric
function CheckPassWd8CharSAnd3ofUpperLowerNumberNonAlpha(passwordtext) {
    if (passwordtext.length < 8) {
        return false;
    } else {
        var hasUpperCase = /[A-Z]/.test(passwordtext);
        var hasLowerCase = /[a-z]/.test(passwor dtext);
        var hasNumbers = /\d/.test(passwordtext);
        var hasNonalphas = /\W/.test(passwordtext);
        if (hasUpperCase + hasLowerCase + hasNumbers + hasNonalphas < 3) {
            return false;
        } else {
            return true;
        }
        // It is missing a "}"
    };

 

We can run the code in Closure Complier to find the number of errors and where they may be. This allows us to ensure the Javascript we run is functioning and correct, and we do not need to do investigating once it is live. When we input the password validation code in the debugger, it returns with:

closure-compiler.jpg

 

Once you add the missing '}' and remove the extra space in "passwor dtext", it gets simplified to:

 

gs.info("good password: " + CheckPassWd8CharSAnd3ofUpperLowerNumberNonAlpha("This4isAgoodPassw0rd!"));
gs.info("bad password: " + CheckPassWd8CharSAnd3ofUpperLowerNumberNonAlpha("thisisbadpassword"));

function CheckPassWd8CharSAnd3ofUpperLowerNumberNonAlpha(a) {
    if (8 > a.length) {
        return !1;
    }
    var b = /[A-Z]/.test(a),
        c = /[a-z]/.test(a),
        d = /\d/.test(a);
    a = /\W/.test(a);
    return 3 > b + c + d + a ? !1 : !0;
};

---------------------

It reuses the input variable 'a'. This is not great coding practice, but thumbs up on recycling 'a'.

When executed, it returns:

*** Script: good password: true

*** Script: bad password: false

 

 

Simplifying logical operations (e.g. complex conditions)

There is a trick to simplifying logical operations: use a logic optimiser. You suspect right, it is not one click. It takes four (4) steps. You can simplify a logical operation whenever you see redundancy or repetition.

 

  1. The first step is to replace the expression components with the letters A, B, C, ..., for the elements that could be redundant. For example:
    function ValidateInput(letter1, letter2, letter3, letter4, letter5, letter6) {
        if ((letter1 == 'a' && letter2 == 'b') ||
            (letter1 == 'a' && letter2 == 'b' && letter3 == 'x') ||
            (letter1 == 'a' && letter2 == 'b' && letter3 == 'x' && letter4 == 'y') ||
            (letter1 == 'a' && letter2 == 'b' && letter3 == 'x' && letter4 == 'y' && letter5 == 'z') ||
            (letter1 == 'a' && letter2 == 'b' && letter3 == 'x' && letter4 == 'y' && letter5 == 'z' && letter6 == 'm')) {
            return true;
        } else {
            return false;
        }
    }
    

     

    To simplify the logic, we replace <letter1=='a'> with A, <letter2=='b'> with B, and so on. A real-life example may be a bit more complex, but the target is the same: reduce the expression to units ready for the logic optimiser.

     

    The function would look like:

      if (A && B || A && B && C || A && B && C && D || A && B && C && D && E || A && B && C && D && E && F)
    Use a logic optimiser (e.g. wolframalpha) to remove redundancy.
    
  2. When I use wolframalpha to simplify the logical expression:

    "A && B || A && B && C || A && B && C && D || A && B && C && D && E || A && B && C && D && E && F" at www.wolframalpha.com

    logical expression.jpg

     

    It returns "ESOP | A AND B", which translates in JavaScript to "A && B",

    which in turn translates back to <(letter1=='a' && letter2=='b')>

    translate javascript.jpg

     

  3. Replace the expression components back. Reducing the original function, the example looks like:
    function ValidateInput(letter1, letter2) {
        if ((letter1 == 'a' && letter2 == 'b')) {
            return true;
        } else {
            return false;
        }
    };
    

     

  4. Run the Google optimiser on the final code. When run through it, the final function will be:
    function ValidateInput(a, b) {
        return "a" == a && "b" == b ? !0 : !1;
    };
    

    closure complier.jpg

The final code in step 4 looks much easier to debug than the original in step 1. The code looks much cleaner, and testing would be easier as you have fewer variables to worry about.

 

Simplifying complex functions

Another way to simplify your JavaScript code is to remove the parts of the code that are simply not used. Removing excess code will help make the code more readable.

 

For example:

This code produces a memory leak:

var theThing = null;
var replaceThing = function() {
    var priorThing = theThing;
    var unused = function() {
        if (priorThing) { /* This part causes MEMORY LEAK because theThing can't be release */
            gs.info("hi");
        }
    };
    theThing = {
        longStr: (new Array(1E6)).join("*")
        , someMethod: function() {
            gs.info(someMessage);
        }
    };
};
for (var i = 0; i < 5; i++) {
    gs.info("testing " + i);
    replaceThing();
    gs.sleep(1E3);
};

 

When you run it through the Google Closure Compiler, it produces:

for (var theThing = null, replaceThing = function() {
        theThing = {
            longStr: Array(1E6).join("*")
            , someMethod: function() {
                gs.info(someMessage);
            }
        };
    }, i = 0; 5 > i; i++) {
    gs.info("testing " + i), replaceThing(), gs.sleep(1E3);
};

 

The results when debugged in the Closure Compiler will come out like this:

reduce code.jpg

 

This code is much easier to debug after being run through the optimizer. It also makes it clear when an unused part of the code has disappeared. For debugging purposes, you can investigate the redundant code that was causing the problem or review the remaining code for problems. The reason for the memory leak is found here.

"There is not one right way to ride a wave."

~ Jamie O'Brien

 

Debugging code is like surfing: there are several ways to ride a wave. One of them is to take a look from a different angle. Code sometimes tells the story, but sometimes it traps you in it. Use optimiser tools to simplify, to help you focus on what matters, and sometimes to get straight to the point.

 

More information here:

Closure Compiler  |  Google Developers

Wolfram|Alpha: Computational Knowledge Engine

The 10 Most Common Mistakes JavaScript Developers Make

Docs - Script syntax error checking

Performance considerations when using GlideRecord

Community Code Snippets: Articles List to Date

Mini-Lab: Using Ajax and JSON to Send Objects to the Server

ServiceNow Scripting 101: Two Methods for Code Development

Community Code Snippets - GlideRecord to Object Array Conversion

Community Code Snippets - Four Ways To Do An OR Condition

My other blogs

Last week, I attended the Integrate 2016 + API:World Conference & Expo (and Hackathon!) in San Jose. I met some great people, learned about some pretty interesting APIs and Integration products, and as a guy who spent his first several years with the ServiceNow platform working on APIs and building integrations, it was great getting to geek out on a subject I’ve grown to love.

 

Hackathon

 

hackathon.jpg

I spent Saturday/Sunday hacking on the HPE Haven Sentiment API, meeting other developers, and was thoroughly impressed by the resulting hackathon projects. The current abundance of publicly available APIs is allowing teams to build truly remarkable apps faster than ever before, and this hackathon was well positioned to demonstrate what a small team can build in just over a day.

 

I walked away inspired by the passion of the participants and the ambition of the projects they tackled. The winner: an augmented reality mobile app that uses the device’s camera to look at a food package and tell the user whether or not it’s safe for them to eat based on that user’s allergies. Its target audience is kids who may not know what’s safe to eat when away from their parents, and it was a pretty impressive use of machine learning APIs to provide a potentially life-saving function.

 

Sessions

 

I couldn’t attend them all, but here are some of my takeaways from the week.

 

ServiceNow’s own Robert Duffner participated in a panel discussing the process of Integrating API Management Into Your Business Strategy. Robert discussed the high priority placed on APIs in the ServiceNow platform to enable customers to quickly integrate with their existing systems. The Cloud Elements blog published a great summary of this panel discussion.

 

Microservices microservices microservices. I heard this term so many times, it no longer sounded like a word. The term is used to describe a methodology of software development in which an application is broken down into “micro” components and each dev team owns all aspects of that component. The goal is to reduce friction between development teams, making it easier to deploy features independently, and in theory, there are quite a few benefits to the approach. A few observations:

 

  • The term is increasingly used when describing the methodologies some companies have been using since long before the word existed (some even asked: is this SOA revisited?).
  • Not everyone fully agreed on the definition of what a microservice actually is.
  • Before you use the term to describe something you are building, it’s probably a good idea to spend some time reading Martin Fowler’s excellent overview of the subject.
  • Microservices aren't for everyone. Don't develop this way just because it's a popular thing on Hacker News - make sure your organization will benefit from the approach.

 

Twilio’s Patrick Malatack gave a great session on the consequences of an unhealthy API and raised some key points that really resonated. Summarized:

 

  • APIs are for Humans, should be human readable, should allow developers to opt-in to complexity
  • Diamonds are forever, APIs are forever, be careful when changing APIs and don’t break things
  • Invest heavily in API documentation to keep developers happy

 

Expo

 

I gained the most value from meeting people in the expo area, and learned about some pretty interesting products that I was previously unaware of. It was particularly gratifying to walk up to a booth, introduce myself as a Developer Evangelist from ServiceNow, and see the person’s face light up and respond “oh yeah, I’ve heard about ServiceNow, we want to build an integration with you guys” (some already have!).

 

Major theme: products that simplify the process of building integrations between a myriad of APIs/products.

 

built-io.jpg

Built.io is an Integration-Platform-as-a-Service product that simplifies the process of building integrations, bots, interacting with IoT devices and more. I spent some time chatting with these guys, and even set up a simple integration with my ServiceNow developer instance using their built-in ServiceNow connector. Cool stuff.

 

Stamplay is another automation/integration framework that makes it easy to chain API calls and pull multiple services together without having to build each integration point from scratch. I haven’t had a chance to play with this yet, but fully intend to see if I can make this talk to my ServiceNow instance in the near future.

 

Microsoft Azure Logic Apps is also an integration platform (see the trend here?) that simplifies the process of pulling multiple APIs together, and provides many connectors to existing services/systems. Also on my list of things to explore further as an integration geek.

 

Amazon API Gateway is a managed service that allows developers to publish/maintain/monitor their APIs centrally, even if those APIs are backed by many disparate services. Expect a future post from me about setting this up with ServiceNow REST APIs if you need the capabilities provided by Amazon.

 

Summary

 

I’m not endorsing any of these products or advertising for these companies, but wanted to share some of the more interesting findings from my time at this conference. I’m excited by the trends I saw throughout the week and the promise of better tools to make hard problems easier to solve.

 

For ServiceNow customers, offerings like these promise to make it increasingly simple to leverage the power of dozens of APIs and services in a fraction of the time it would traditionally take to learn the underlying APIs and build the requisite integration logic. Querying APIs that enrich your business processes, "eBonding" with other systems, and similar use cases become much easier than ever before.

 

For ServiceNow developers, these offerings open up a world of possibilities when building new apps, and make it easier to build complex API mashups to augment your application's business logic or enable connectivity with the 3rd party services your customers expect.

 

Stay tuned as I explore some of these tools in more depth!

Josh Nerius | Developer Evangelist | @NeriusNow | Get started at developer.servicenow.com

In this video I show you how to use the list filter condition builder to create complex queries and use them in your scripts.

 

No need to know SQL programming.

No need to try and decipher GlideRecord addQuery/addOrCondition nuances.

Just build your filter, copy the query, and swap out the hard coded values for variables!
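
For example, a filter on incidents might copy out as the encoded query below; paste it into addEncodedQuery and swap the hard-coded assignment group for a variable (groupSysId here is illustrative):

var groupSysId = '...'; // supplied by your script instead of hard-coded in the query
var gr = new GlideRecord('incident');
gr.addEncodedQuery('active=true^priority=1^assignment_group=' + groupSysId);
gr.query();
while (gr.next()) {
    gs.info(gr.getValue('number'));
}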

 

(click the full screen icon in the video for best viewing.)

When we left Part 2 of this series, I had added an inbound email action to create GTD actions from forwarded emails. As this development project was always intended to be a demo tool as well, when it came time for us to do demos on developing Angular Service Portal widgets, I looked around for functionality to add. I had already done some experimentation with using ngMaterial to create a card interface for this Helsinki feature webinar, so I decided to bring a similar interface into DoNow.

 

pt-3-dependencies.png

Step 1 of creating this widget was to set up the dependencies. One is already delivered by default (ng-sortable) and one needed to be imported from external sources (ngMaterial). In the Widget form at the bottom are some related lists. In the Dependency section you have the option to create new ones. It is pretty straightforward to create a new one. You can point to external resources or choose to import the libraries into your instance. This was how all Angular work was done prior to Helsinki, by importing the libraries as UI Scripts. There are tradeoffs for either. If you need to lock to a specific version it makes sense to paste the code into a UI Script. In my case, I opted for the simplicity of just pointing to the Google hosted copies of the libraries. With a single JavaScript import and a single CSS import, ngMaterial was set up. ng-sortable was even simpler, just choosing it from the slush bucket of available choices. ngMaterial is adding some of the UI elements I want to use, ng-sortable is adding in drag and drop capabilities.

 

Having created that prerequisite piece, it was time to actually code the widget.  First a very quick bit of background in how ServiceNow has incorporated Angular into Service Portal. (Docs here for more reading and more resources here.) You'll see on creation of widget that you have an HTML piece, a client side controller script, and a server script. This breaks down very cleanly into thinking in MVC terms where the server script maintains the Model, the HTML the View and the client script runs the Controller. The Service Portal environment creates a variable for you automatically called "data" which is available in the server scripts. It works much like g_scratchpad. Any data you want available to the front end can be packaged in here and will transfer to the UI. The server script has a full Glide scripting environment and can do anything you expect from script fields (subject to any applicable scoping rules, of course.) This data variable is automatically in the $scope variable in the controller script and can be acted upon by addressing $scope.data . The $scope variable is implicit in the HTML front end so you can then present your values by referencing the data variable.  Enough background, let's look at code!
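
A minimal sketch of that round trip, with illustrative names rather than DoNow's actual code:

Server script:

(function() {
    // Anything placed on the data object here is serialized and sent to the client
    data.userName = gs.getUserDisplayName();
})();

Client controller:

function($scope) {
    var c = this;
    // The server's data object arrives as c.data (and also $scope.data)
    c.greeting = 'Hello, ' + c.data.userName + '!';
}

HTML template:

<div>{{c.greeting}}</div>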

 

pt-3-server.png

Although you could edit any of these components from the Widget form, you will want to use the Widget Editor. It gives you a nice interface that allows you to show, hide, or edit any of these scripts, as well as a live preview of your widget. This is actually operating on the data of your instance, so you can see your code in action immediately. One thing to remember is that although you have a live preview and can act on the data instantly, you have to save the code with the button before any code changes take effect. It can sometimes feel like so much is happening automatically that you forget to hit the button and get confused about why you aren't seeing updated code. Remember to save.

 

To start this widget out, I do some Glide scripting. I add the current user's name and sys_id to the data object because it is very simple to get here and less so from the front end. I also create a hash map with six empty arrays, mapping each of my six GTD priorities to an empty array. As we loop over the data, each GlideRecord is converted to a simple JSON object and added to the appropriate array by dereferencing the hash map. This may seem like overkill if you know that Angular can filter, and it may actually get factored out in the future, but for now it helps with the hiding and showing of the records. One thing to be aware of is that this script is called each time data is packaged and pushed to the front. Although you have the ability to write any code you want in here, minimize side effects and expensive computation because this code can potentially run frequently.
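
A simplified sketch of that server-side pattern (it flattens the hash-map-of-arrays idea into one array of priority objects; the table, field, and label names are all illustrative, not DoNow's actual ones):

(function() {
    // The current user is easy to look up on the server, less so on the client
    data.userName = gs.getUserDisplayName();
    data.userID = gs.getUserID();

    // One bucket per GTD priority; model will drive show/hide in the UI later
    data.priorities = [];
    var labels = ['Now', 'Next', 'Soon', 'Later', 'Someday', 'Waiting']; // illustrative labels
    for (var i = 0; i < labels.length; i++) {
        data.priorities.push({label: labels[i], model: true, actions: []});
    }

    var gr = new GlideRecord('x_donow_action'); // hypothetical table name
    gr.addQuery('assigned_to', data.userID);
    gr.query();
    while (gr.next()) {
        // Convert each GlideRecord into a plain object before shipping it to the client
        var bucket = data.priorities[parseInt(gr.getValue('priority'), 10) - 1];
        if (bucket) {
            bucket.actions.push({
                sys_id: gr.getUniqueValue(),
                short_description: gr.getValue('short_description')
            });
        }
    }
})();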

 

pt-3-html.png

Next I built the UI portion. Part of the beauty of Angular is that you write your interface in HTML peppered with things called "directives", which add extra functionality to the rendering of a tag. The dependencies that were added previously each bring in their own set of directives, which is how they allow you to do different work on the interface. You'll note that there are tags called "md-checkbox" and "md-content" and "md-card". These were brought in from the ngMaterial dependency and allow for the creation of the swim lanes full of cards. Organizing into the six md-content buckets, each filled with its individual set of data, allows for some easy showing and hiding. I'm not going to delve too deep into the workings of Angular itself (if you need remedial work, there is a lot to read at the official site), but suffice it to say that ng-repeat is the engine of the looping and ng-model is used to bind various pieces of the data to the user interface. If you look at this you'll see that we basically fill out a data card for each of the actions that we packed into the data object. If you look at the ng-models, you'll notice that the md-checkbox tags are built by looping over data.priorities, and each sets its ng-model to the priority.model for each priority as it comes up. This is a boolean value (if it wasn't before, md-checkbox would force it to be one). The md-content containers are built by looping over the same array and each has an ng-if associated with the same boolean. This means that when that value is true, the element shows, and when it is false it hides. Let's see that in operation.

 

Now by checking or unchecking those Priority checkboxes, each of those columns shows or hides. This is the core of Angular in operation, binding various elements to the same data and having the interface respond to it real time. One of the concepts that it took me a while to internalize is that even though we are referencing things that look like strings in these HTML tags of the directives, everything in there is binding variables unless you make it do something else. Almost everything in here is operating by reference so setting ng-if="priority.model" means that the tag is now bound to the state of the model field of the object contained in the priority variable. I have seen people have problems thinking they are passing values around when actually they are mutually binding user interface elements to the same underlying model and sharing a single data store.
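
In sketch form, that binding looks roughly like this (directive usage only; the real widget carries more markup and styling):

<md-checkbox ng-repeat="priority in data.priorities" ng-model="priority.model">
    {{priority.label}}
</md-checkbox>

<md-content ng-repeat="priority in data.priorities" ng-if="priority.model">
    <md-card ng-repeat="action in priority.actions">
        {{action.short_description}}
    </md-card>
</md-content>

Unchecking a priority's md-checkbox flips priority.model to false, and the md-content column bound to the same object disappears instantly; no extra wiring is required.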

 

pt-3-6-col.png
pt-3-2-col.png

You'll note that there is a little magic in here to make the layout respond to how many of these columns are showing. I'll go into more detail in the next post in this series when I dig into some of the code that lives in the controller script. For now, the take home lessons are that you can do the typical Glide scripting and queries on the server side and typical Angular coding in the HTML section. It is a nice compromise for the complexity of bringing these technologies together. There is a little secret sauce that configures some things for you that you would have to do for yourself in pure Angular which is definitely a thing to be aware of should you learn on Service Portal and later build a pure Angular app.

 

For those who are getting interested in this application (which I hope is at least some of you), the GitHub repository for it is public. You are welcome to fork and examine the code at will. Be aware, if you do, that this is not a final product and still very much a work in progress. It seems to work but is not guaranteed to be bug-free or complete. IOW, caveat emptor and no warranty exists on this. Note that because of the way ServiceNow integrates with Git we can't accept pull requests, so we lose a little of the power of the platform that way. It is a shame but that is the way the internals work. Feel free to reach out if you do something interesting and we can go look at your repo to see how you have moved the ball down the field.

 

In the next section I will dig into the controller script and show how REST APIs can be integrated into Service Portal code. Keep watching the skies for that!

 

Summary:

 

It is simple and straightforward to bring external dependencies into your Service Portal, and you can easily query data and build an operational user interface with just a few HTML tags and Angular directives.

 

Series so far:

Building an Application: Part 1, Setup and Background

Building an Application: Part 2, Using Inbound Email Tokens

Building an Application: Part 3, Adding Service Portal Widgets

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

Note: This blog post reflects my own personal views and does not necessarily reflect the views of my employer, Accenture.

 

There are a lot of things that are great about Service Portal, but the way widgets are implemented is one of my favorites. In this blog post I want to call out Widget Options, which help to make widgets more dynamic and reusable, and talk about some ways that you can use them.

 

For example, if we look at the default Service Portal homepage, we'll see four columns across the middle for Order Something, Knowledge Base, Get Help, and Community.

187b73b3e5.png

 

What is not immediately obvious is that those are four different instances of the same widget, Icon Link. If you ctrl+right-click on the Order Something widget instance and choose Instance Options, you'll get a modal with 8 different widget options you can configure.

c572dac384.png

 

This allows us to reuse the same widget multiple times without having to rewrite code, and allows someone to customize these widgets without writing any code. Let's take a look at how the widget is written and see how these options are referenced.

07dfeff977.png

 

You can see that there are many places in the HTML where the option values are referenced and used to render the widget instance. You're not limited to the HTML field, either, as you can reference the options from the Server and Client script fields as well. Here are a couple of links that give more details on how that works:

 

documentation/widget_options.md at master · service-portal/documentation · GitHub

Widget options

 

Good practice: In addition to making it easier to use the Service Portal without writing code, one of the things I always think about when I'm writing widgets is how I might use widget options to duplicate less code and write fewer widgets. For example, if you wanted to add multiple similarly structured lists of records to a portal page, you could pass the table you're querying, an encoded query, and the field values you want to display through the widget options, as in the sketch below.
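As a sketch of that idea, here is what the Server script of such a hypothetical "record list" widget might look like. The option names (table, filter, fields) are made up for this example, with defaults applied when an option is left blank:

(function() {
    // Read the instance options, falling back to sensible defaults
    var table = options.table || 'incident';
    var fields = (options.fields || 'number,short_description').split(',');

    data.rows = [];
    var gr = new GlideRecord(table);
    gr.addEncodedQuery(options.filter || 'active=true');
    gr.setLimit(10);
    gr.query();
    while (gr.next()) {
        var row = {};
        for (var i = 0; i < fields.length; i++) {
            row[fields[i]] = gr.getDisplayValue(fields[i]);
        }
        data.rows.push(row);
    }
})();

Dropping three instances of this widget on a page, each with different options, then gives you three different lists with zero duplicated code.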

 

I'd really like to hear of some creative ways others have used the widget options in the comments on this post.

I know it's late in the day, but I couldn't pass up the opportunity to blog about International Talk Like a Pirate Day.

 

In this quick tutorial, we'll create a simple scoped integration app that uses ARRPI, the Talk like a Pirate translation API.

 

I've already created a skeleton (arrrrrr, see what I did there?) scoped app where I'll write my code, and I'll assume you know how to do the same (if not, check out Building a ServiceNow Application on the Developer Portal).

 

1. Configure a REST Message

 

From Studio, Create New Application File > REST Message with the following details:

 

 

Open the automatically generated get method:

Talk_Like_a_Pirate___Unlinked_get_method.png

 

Create two HTTP Query Parameters:

Talk_Like_a_Pirate___Unlinked.png

 

Navigate back to the REST Message, and from the Related Lists section, create a new Variable Substitution so we can test our call:

Talk_Like_a_Pirate___var_sub.png

 

Test everything out by clicking the Test related link:

Talk_Like_a_Pirate___test.png

With any luck, we should see a result that looks like this:

Talk_Like_a_Pirate___test_run.png

 

We're now ready to start consuming the API elsewhere in the platform.

 

2. Create a Business Rule

 

Now we'll create a business rule to execute our piratey logic.

 

From Studio, Create New Application File > Business Rule with the following details:

 

  • Name: Translate, Arrrrrrrr
  • Table: Task
  • Advanced: checked
  • When: async
  • Insert: checked
  • Condition: new GlideDate().getByFormat('yyyy-MM-dd').endsWith('09-19')

 

Script:

 

Note: you'll need to update lines 3 and 22 depending on the name of your application scope. Replace x_48785_tlap with your application's scope name.

 

(function executeRule(current, previous /*null when async*/) {
  // Let's decide which fields to translate
  var fields = gs.getProperty('x_48785_tlap.fields_to_translate').split(',');
  var value;
  var translated;

  // Use some ECMA5 goodness to loop through the fields
  fields.forEach(function(field) {
      var value = current.getValue(field);

      // If the field holds a value, translate it! 
      if (value) {
          translated = englishToPirate(current.getValue(field));
          current.work_notes += 'Pirate ' + current[field].getLabel() + ': ' + translated + '\n';
      }
  });

  current.update();

})(current, previous);

// Encapsulate our translation logic so it's easy to reuse
function englishToPirate(text) {
  try {
      var r = new sn_ws.RESTMessageV2('x_48785_tlap.Talk Like a Pirate API', 'get');
      r.setStringParameter('text', text);

      var response = r.execute();
      var responseBody = response.getBody();
      var httpStatus = response.getStatusCode();
      var responseObj = JSON.parse(responseBody);
      return responseObj.translation.pirate || '';
  } catch(ex) {
      // Log the failure rather than silently swallowing the exception
      gs.error('Pirate translation failed: ' + ex.getMessage());
  }
}

 

3. Create a System Property

 

Finally, let's create a system property to control which fields get translated.

 

From Studio, Create New Application File > System Property with the following values:

 

  • Suffix: fields_to_translate
  • Type: string
  • Value: short_description,description

 

Let's try it out!

 

Go create a new incident and give it a short description of "Hello my friend, do you know where I can find a printer around here?". If everything worked correctly, we should see a work note after a few moments:

 

INC0010016___ServiceNow_translated.png

 

Conclusion

 

This may seem like a silly example, but it demonstrates a fairly common integration scenario: make a call to an external API when a record is created and do something with the result. It also shows that it really is this easy to set up an integration with an external REST API. It took me about an hour to create the app and write this blog post, and while talking like a pirate requires less care and planning than a production integration, setting up a simple call to an external API may not be as hard as you think.

 

If you'd like to play with this on your own, my sample code is available in a public github repo here. Feel free to fork this and talk like a pirate to your heart's content.

 

Arrrrrrr.

 

 

/**

* Josh Nerius | Developer Evangelist

* ServiceNow | The Enterprise Cloud Company

* (m) 312-600-4994 | Chicago, IL | Central Time

* Get started at developer.servicenow.com

*/

Josh Nerius | Developer Evangelist | @NeriusNow | Get started at developer.servicenow.com

Organizations often struggle to notify people in crisis situations. Crisis situations can be system outages, natural disasters, or any event that requires people to be notified. Notifying the right people usually involves a time-consuming manual process. Sometimes laws, regulations, and/or audit requirements mandate that the recipients of the alerts confirm receipt, or confirm whether they need help or not.

 

Crisis Alert is a custom scoped application built on the ServiceNow platform that leverages ServiceNow Notify to solve this issue. It was built as a utility-type application that accepts Groups, Users, and/or a filter condition of users that need to be notified. Crisis Alerts can be created by other applications within the ServiceNow platform to drive the mass communications for those records. Notifications can be sent via email, SMS, and/or text-to-voice, and the solution will look at the targeted recipients' notification devices in ServiceNow and reach out to those users via those devices. If input is required from the user to confirm they received the crisis alert or that they need help, those responses are logged as well for auditing purposes.

 

Licensing Requirements:

  • Platform Runtime for the creators of Crisis Alerts.
  • Notify licenses for the recipients of the alerts.
    • A Twilio account is also required, but that is handled separately.

 

The solution consists of two update sets found on ServiceNow Share: https://share.servicenow.com/app.do#/detailV2/66c49c9c1386a600f609d6076144b036/overview

  1. Crisis Alert Scoped Application vX.xml - This includes all the scoped application files.  Upload, preview, and commit this update set first.
  2. Crisis Alert Global Scope Code vX.xml - This file can be found in the Supporting Files section on the Details tab.  This includes global scoped files that are required for the application.  Upload, preview, and commit this update set second.

 

Setup:

  • Ensure that ServiceNow Notify is configured and working on the instance.
    • It works best if you purchase a dedicated Twilio phone number for Crisis Alerts.  This number can be purchased through your Twilio account and once purchased it will be automatically downloaded into your instance by clicking Twilio configuration under the Notify\Administration application.
  • Copy down the E.164 formatted Twilio phone number that you would like to utilize for the Crisis Alert application.  Example: +18005551212
  • Upload, preview, and commit the two update sets from ServiceNow Share.
  • Change your application scope to Crisis Alert by clicking the Settings gear in the upper right corner of your desktop browser, and then choosing Developer.
  • Navigate to Crisis Alert\Properties, and enter the E.164 Twilio phone number into the x_snc_crisis_alert.crisis_alert.phone_number system property on this page.
  • Set the Notify group for the Twilio number to the Crisis Alert Group.
    • Navigate to Notify\Numbers and select the Twilio number that will be utilized by the Crisis Alert application.
    • Type 'Crisis Alert Group' in the Notify group field.
    • Click Update to save the record.
  • Setup notification devices for the ServiceNow users.
    • These can be added via Notification preferences from the User Profile, or a script can be leveraged to create them for all users (see the sketch after this list).
    • It may be useful to add the Notification Devices related list to the User form.
    • When adding SMS type notification devices, the Service Provider field is required out of the box, but this field is not utilized by ServiceNow Notify so any selection can be made.
    • Make sure the SMS and Voice phone numbers are entered in E.164 format.
  • Navigate to Crisis Alert\Create New to test the application setup.
  • An annotation has been added at the top of the Crisis Alert form with further instructions.  This annotation can easily be removed once you are familiar with the application.
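If you go the scripted route for notification devices mentioned above, a minimal sketch might look like the following. This is not part of the Crisis Alert update sets, and the cmn_notif_device table and field names are assumptions based on the out-of-box Notify tables, so verify them on your instance before running anything like it:

// Create an SMS notification device for every active user with a mobile number.
// Assumes sys_user.mobile_phone already holds E.164-formatted numbers.
var userGr = new GlideRecord('sys_user');
userGr.addActiveQuery();
userGr.addNotNullQuery('mobile_phone');
userGr.query();
while (userGr.next()) {
    var device = new GlideRecord('cmn_notif_device');
    device.initialize();
    device.name = 'SMS - ' + userGr.getValue('user_name');
    device.type = 'SMS';
    device.phone_number = userGr.getValue('mobile_phone');
    device.user = userGr.getUniqueValue();
    device.insert();
}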

 

 

 

The data center is on fire!


Please mark this post or any post helpful or the correct answer so others viewing can benefit.

With the Helsinki release came the introduction of Service Portal. Those of us who have been knee-deep in CMS for the past few years are very excited about this new addition!

 

Here's a list of 6 things that I have come to love about Service Portal:

 

  1. No Jelly
While I'm disappointed to basically have to dump the years of learning Jelly, I'm so very glad to have more modern ways of serving up data. I'm also happy about what this will do for the customer base, as you no longer need someone with such a specific skill set to run your portal. Finding AngularJS and Bootstrap developers is WAY easier than finding Jelly specialists.
  2. No More iframes
Probably the greatest limitation of CMS is the requirement to use iframes for anything involving data (forms mainly). While Service Portal has a bit of catching up to do on some of the OOB features, the ability to get at the data in a single-page view is pretty awesome.
  3. Page Designer
    One of the most innovative UI changes in ServiceNow I've seen in recent years has to be Page Designer within the portal configuration tool. From one view you can build your pages, create your layouts, drop in your widgets, set your options, adjust CSS and define any meta properties you need. You can even preview the results as it will be rendered within various viewport sizes (mobile, tablet and desktop). We now have a very low-code approach to building out portal pages!
  4. Widget Editor
    For those looking to get down and dirty with functionality, widgets are your friend. The widget editor itself is also an amazing approach to development. In one pane you can view your HTML code, CSS/SCSS, Client Scripts and Server Scripts. Throw in some JSON and you can preview your changes. All without reloading the page! You can even set the options for your widget. If you're a pro-code type, this is for you and it's a great improvement from the Dynamic Block days of CMS. And also for you pro-coders, make sure you check out the Github for Service Portal that's being compiled by the product team: https://github.com/service-portal
  5. Robust Theming
    When using CMS your theme was basically a collection of CSS files. With Service Portal, you now have the ability to define a header and footer (both widgets by the way - think of the possibilities!), attach any CSS or JS files and set any Sass variables and away you go. You can then share this theme across multiple portals or not. So much more power available to you, in a very simple way of putting it all together.
  6. Analytics
    My old digital marketing tendencies are pretty excited for all the things you can do now with the Log Entries available in Service Portal. Instead of serving up a list of "Popular Items", I can get more sophisticated and refine that list to "Popular Items (of people in my same location)" or "Popular Items (for those in my same role)". Why not? We have the data. Due to the iframe limitation of CMS we never had such portal specific data. I am probably the most excited to see what the community does with this information now as contextual experiences are a given with this new feature.

 

Hopefully you've had some time to kick the tires a bit on Service Portal. If you have, throw any comments you have below as I would love to hear what your impressions are!

Let's connect:

twitter.com/mattmetten
www.mattmetten.com

This is going to be a story of a problem I had myself.

 

First, I would like to give a shoutout to fschuster and patrick.wilson for helping me out here; without their ideas I would probably still be banging my head against a wall.

 

Now, here comes the scenario...

 

Our instance was born on Eureka and has gone through Fuji and Geneva, and now it's time to hit Helsinki.

We started to use SSO with Eureka, connected to our ADFS service.

We also have a CMS site (portal) where some pages are public (for example, the start page) and don't require a logged-in user, while some other pages do. So when you click on a link and aren't logged in, you should be redirected to SSO and then back to the page you wanted to reach.

 

So for Helsinki we wanted to build our first Service Portal with pretty much a 1:1 relationship when it comes to functionality. My problem here is focused on how to keep some pages public while users who hit the other pages get redirected to the SSO.

 

1. First try: use the Public field

My first thought was that this might be solved by just using the "Public" field: check the pages that are public, and the ones that aren't would redirect automagically to SSO. But it didn't work like that. I only got to the page with just the top menu visible, since the user who wasn't logged in didn't have access to the page.

 

2. Perhaps use a login page as well?

Well, next step... Hmm, I might need a login page then. If I have a login page, the user will be redirected to that page for login. Sounds logical. And since I already have SSO installed, they should get to the SSO instead. But nope. The user only got to the login page and the login widget.

 

3. Does my SSO still work?

Now I began to doubt things. Does the SSO still work? I knew it worked when logging on to the "normal UI". So what is the problem? I then made the start page non-public again. And TADA: trying to enter the Service Portal, I got redirected to SSO... WTH... So if the whole portal required logged-in users, it would point to the SSO and work, but if some pages were public, I only got to the portal's internal login page.

 

4. Hmm... Why don't I have these properties?

Then I was asked to set values on two system properties: "glide.authenticate.multisso.enabled" and "glide.authenticate.sso.redirect.idp". I realized that I didn't have these to begin with... Hmm, should I just create them? Before doing that, I thought I would find out what they did and took a trip to the documentation. The first line there was "The following items are installed with the Integration - Multiple Provider Single Sign-On Installer plugin". And after checking, that plugin wasn't active. Should I activate it? I started to hesitate, since we don't have multiple providers... We only have one. And we all know: once you activate a plugin, there is no turning back. Naa, I don't wanna do that. Not yet, at least.

 

5. Let's go to the SAML 2 properties and see

I wasn't involved in the configuration of SAML 2, and since it has been working, I haven't spent much time on it. It has just been on my list of things to look at more closely when time allows. So perhaps there is something I can click on or change? =)

 

But look here, this is what ServiceNow threw in my face when I clicked on the Properties module.

Well.. It suddenly sounded like a good idea to activate that plugin =)

 

6. Finally, the solution

So now it's time to tell you what I did to make it work.

 

  • I activated the "Integration - Multiple Provider Single Sign-On Installer" plugin. That wasn't so hard =)
  • I changed the value of "glide.authenticate.multisso.enabled" to true and verified that "glide.authenticate.sso.redirect.idp" held the sys_id of the IdP record (see the sketch below).
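For reference, here is a minimal background-script sketch of those two changes (assuming the plugin is already active; the sys_id placeholder is yours to fill in):

// Enable the Multi-Provider SSO redirect behavior
gs.setProperty('glide.authenticate.multisso.enabled', 'true');

// This should already hold the sys_id of your IdP record after the migration;
// set it explicitly if it does not.
gs.setProperty('glide.authenticate.sso.redirect.idp', '<sys_id_of_your_idp_record>');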


After this, it worked. I clicked on a page that wasn't public, got the login page for about a second, and then I was redirected to the SSO login.

 

A bit of background on how it works: since I have the login widget on the login page, ServiceNow knows that it's a login page, and together with the properties above it will redirect to the IdP. Following this logic, I could put a login widget on the non-public pages and get the same result, but it wouldn't look as good, since I would need to make all those pages public as well; otherwise the login widget won't load and the redirect won't work.

 

What I would like to see is an improvement on the login page. At the moment I get about a one-second flash of the login page before the redirect, and it doesn't look good since I see the widget, etc. I will probably need to clean it up with some CSS for a start.

 

I also noticed some weird stuff when I activated the plugin and ServiceNow migrated the old values. For example, it changed which field on my user records should be compared with the result from the SSO, which meant I couldn't log in to the instance through SSO since it couldn't find a matching user record.

 

I hope you learned something from my journey, and hopefully this will help someone with the same problem as me.

//Göran

ServiceNow Witch Doctor and MVP
-----------------------------------
For all my blog posts: http://bit.ly/2fCzj1g

With the release of Geneva came a new interface. We sometimes refer to this interface internally at ServiceNow as Concourse; however, you may know it better as UI16. Concourse makes use of modern web technologies to make ServiceNow look better and be more usable than ever! At the time of writing, UI16 is the default interface for the Geneva, Helsinki, and Istanbul releases.

 

One of the many changes in the new interface is the way banner images are retrieved in the code and, once retrieved, how they are placed in the header. Let's talk about how this works, plus some tips which will make working with UI16 banner images seem like a walk in the park!

ui16 banner image.png

 

Banner images in UI15 vs. Banner images in UI16

The UI16 banner image is configured in the same way as in UI15 - via system properties and values in the user's Company record. However, by default the UI15 header has a light background, and the UI16 header has a dark background. This means we can't use the same logo for both interfaces, as it won't look right. This has led to the addition of a new field on the Company record, as well as a new system property, for storing a logo which will look good on a dark background. I've highlighted the old and new settings in the tables below:

 

Company [core_company] field | Description
banner_image | UI15 banner image
banner_image_light (NEW) | UI16 banner image

 

System Property | Description
glide.banner.image.title | Banner mouse-over text
glide.banner.image.url_target | Target frame used when clicking the banner image
glide.banner.image.url | URL used when clicking the banner image
glide.product.description | Page header caption
glide.product.image | UI15 banner image (fallback)
glide.product.image.light (NEW) | UI16 banner image (fallback)

 

All of the above are used to build the header, which makes use of the out-of-box MyCompany script include to retrieve the values. When the UI16 header loads, it checks the value of the banner_image_light field on the current user's Company record. If it's not set and there's no parent company, then the fallback banner image defined in the glide.product.image.light system property is used. If the user's company has a parent, then the banner_image_light field is checked on that. This process of checking parent companies continues for a maximum of 10 parent companies, and if the field isn't defined on any of them, it again falls back to the image defined in the glide.product.image.light system property.
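In pseudocode, that lookup order works out to something like this (a simplified sketch, not the actual MyCompany implementation; the field and property names are the ones from the tables above):

function getUi16BannerImage(companySysId) {
    // Walk up the company hierarchy, at most 10 companies deep
    for (var hops = 0; hops < 10 && companySysId; hops++) {
        var company = new GlideRecord('core_company');
        if (!company.get(companySysId))
            break;
        if (company.getValue('banner_image_light'))
            return company.getValue('banner_image_light');
        companySysId = company.getValue('parent');
    }
    // Nothing found on the company chain: fall back to the system property
    return gs.getProperty('glide.product.image.light');
}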

 

UI16 banner image sizing

When UI16 was initially released, the maximum height of a banner image was 20px. After some user feedback this was increased to 32px. However, the maximum height can almost never be reached due to the way the image is placed on the page. This will be improved in a future release, at which point you can use these measurements:

  • The maximum width of a banner image will be 230px.
  • The maximum height of a banner image will be 32px.

 

This is an aspect ratio of 115:16. It would be best to design your logo with this aspect ratio in mind.

 

It’s not an official workaround, but I’ve found that defining a global UI Script with the below value in the script field will mean that the banner image can hit the maximum height of 32px most of the time.

 

// Unofficial workaround for PRB709492: force the UI16 banner image to its 32px maximum height
var bannerImage = top.document.getElementById('mainBannerImage16');

if (bannerImage != null) {
  bannerImage.style.height = "32px";
}

PRB709492-before-after.jpg

Make sure that if you decide to use something like the above, you fully test it, and that you remove it once you upgrade to a release which contains a fix for PRB709492.

 

Viewing your banner images on Retina Displays

The above maximum width and height hold true for regular displays, but what about high-pixel-density displays like the Retina displays found in most Apple computers and iOS devices?

 

Retina displays have a pixel density 4 times that of a regular display. This seems like a simple thing to implement, but in reality it brings up a lot of issues, especially with the parts of the page which are bitmap-based (e.g. images). If you have a screen with 4 times the pixel density, one of two things may happen as a result.

  1. Images on a web page will be very small on the screen. This will happen if you just display the images pixel-for-pixel on the screen.
  2. Alternatively, they will be the same size as they were, but will be blurry (as they are “scaled-up”) and won't make use of the benefits of a Retina display.

 

Retina displays work around the above two issues so that if you specify an image to be 10px wide and 10px tall, on a regular display it will show as exactly that; on a Retina display, however, it will actually be rendered using 20 physical pixels in each dimension while appearing the same size on screen.

 

To show how this impacts the UI16 banner image, I took a photo of each of my screens and zoomed in on the “now” part of the ServiceNow logo:

banner-image-ui-16.jpg

You can really see the difference here between the number of pixels used to display images on a Retina display and on a non-Retina display. Due to the way that web page measurements and images scale on Retina displays, the maximum dimensions of the banner image on a Retina display are:

  • The maximum width of a banner image is 460px.
  • The maximum height of a banner image is 64px.

 

These measurements are double those of a non-Retina display. Most importantly, they maintain the same aspect ratio of 115:16. This means that the image will appear the same size to users regardless of whether they're using a Retina display or not.

 

3 tips to banner image sizing in UI16

We can take this information and create a few simple points to keep in mind when setting your UI16 banner image:

  • Keep the aspect ratio of your banner image to 115:16 to ensure you use the entire space available to you
  • Ensure your banner image is designed for Retina displays. If it’s viewed on a non-Retina display, it will be scaled down and will not lose quality.
  • Consider implementing my unofficial workaround, ensuring you remove the workaround once you upgrade to a release with the fix included.

 

Feel free to contact me on Twitter at my handle @dylanlindgren, or post a reply in the comments below.

 

 

I repeat: if you use my workaround, please remove it once upgraded, or it can interfere with the eventual fix.

200x200_podcast-album_art-Final.jpg

 

I work with a lot of really cool people and I want to share that enthusiasm with you. Join me as I talk to katharine.campbell, our supervisor of multimedia. I've known Katy since she and her team asked me to do some voice-overs in 2013 (or did I ask them?). She has been with ServiceNow since April 2012 and has gone through some major changes, professionally and personally.

 

Listen
itunes24.png

 

Subscribe using iTunes

 

NOTE: MY POSTINGS REFLECT MY OWN VIEWS AND DO NOT NECESSARILY REPRESENT THE VIEWS OF MY EMPLOYER, ACCENTURE.

 

DIFFICULTY LEVEL:  ADVANCED

Assumes very good knowledge and/or familiarity of scripting, and user interfaces in ServiceNow.

____________________________________________________________________________

 

The other day I was working on a gnarly UI problem with lisse (Calisse Voltz), and we found that we needed to add a Slush Bucket control to a GlideDialog form.  We went looking in the community and on the web for information on how best to accomplish this task, and what did we find?  Nuttin - a technical term meaning null.  Well, a GlideDialog executes a UI Page, and there are a few UI Page examples in the Out-Of-The-Box (OOB) ServiceNow platform, so one of them should contain a Slush Bucket, right?  Well, one did!  With that we were able to figure it out.  I thought I would share.

 

So, what is needed?

 

Design:

 

  1. Create a simple UI Page that contains a Slush Bucket control.
  2. Create a Client Script on that UI Page that populates the Slush Bucket.
  3. Create an Ajax Script Include to provide the data for the Slush Bucket - for demo purposes.
  4. Create a UI Script that pulls the picked data from the Slush Bucket and allows us to use it.
  5. Create a UI Action to fire the UI Page as a GlideDialog - for demo purposes.



Lab 1.1: Create the Ajax Script Include

 

This will be the driver that fills our Slush Bucket with test data.  Of course this would change should you want to really do something special with the example, but I put this here to give you an idea of how it can be done.

 

1. Navigate to System Definitions -> Script Includes.  The Script Includes list view will be displayed.

2. Click on the New button to create a new Script Include.  The new Script Include form will be displayed.

3. Fill out the form with the following:

a. Name: GetIncidentsAjax

b. Client callable: checked

c. Active: checked

d. Accessible from:  All application scopes

e. Description: Driver to fill GlideDialog Slush Bucket with test data

f. Script:

 

var GetIncidentsAjax = Class.create();
GetIncidentsAjax.prototype = Object.extendsObject(AbstractAjaxProcessor, {

  getIncidentList : function() {
    var incidentList = [];
   
    var incidentRecords = new GlideRecord('incident');
    incidentRecords.setLimit(10);
    incidentRecords.orderByDesc('number');
    incidentRecords.query();
   
    while (incidentRecords.next()) {
      var incident = {};
      incident.number = incidentRecords.getValue('number');
      incident.sys_id = incidentRecords.getValue('sys_id');
      incidentList.push(incident);
    }
   
    return new JSON().encode(incidentList);
   
  },

    type: 'GetIncidentsAjax'
});

 

g. Submit to save your work

 

 

 

Lab 1.2: Create the UI Page and Client Script

 

Next we want to create the actual UI Page that will act as our GlideDialog.  This will contain the Slush Bucket control, a couple of buttons (Cancel, Submit), and our Client Script to fill the Slush Bucket on load of the form.  You will note, in the example below, that the Slush Bucket is a single inclusion line and contains only a name.  I am also using some of the fancier button definitions I have found in the OOB code.  For those of you who didn't know: if you use gs.getMessage you can localize the example.  You need a requires tag to link in the UI Script library and make it available to the page.  Read the comments in the code, as I explain a lot of what's happening there.

 

 

NOTE: Yeah, using Jelly as that is still the default when creating a new UI Page in Helsinki.  I will let b-rad (Brad Tilton) set this up in Angular, or may do it myself later.

 

1. Navigate to System UI -> UI Pages.  The UI Pages list view will be displayed.

2. Click on the New button to create a new UI Page.  The new UI Page form will be displayed.

3. Fill out the form with the following:

a. Name: SlushBucketDialog

b. Category: General

c. Description: UI Page for GlideDialog Slush Bucket example

d. Direct: unchecked

e. HTML:

 

<?xml version="1.0" encoding="utf-8" ?>
<j:jelly trim="false" xmlns:j="jelly:core" xmlns:g="glide" xmlns:j2="null" xmlns:g2="null">
  <form id="bucketStuff">
    <g:requires name="SlushBucketProcessor.jsdbx"/>
    <div id="page">
      <table width="100%">
        <tr>
          <td>
            <g:ui_slushbucket name="slushIncidents" />
          </td>
        </tr>
        <tr>
          <td style="text-align:center" class="pull-center">
            <button type="cancel" class="btn btn-default action_contex" id="cancel_dialog" name="cancel_dialog" >
              ${gs.getMessage('Cancel')}
            </button>
            $[SP]
            <button type="submit" class="btn btn-primary action_contex" id="submit_dialog" name="submit_dialog">
              ${gs.getMessage('Submit')}
            </button>
          </td>
        </tr>
      </table>
    </div>
  </form>
</j:jelly>

 

f. Client Script:

 

// JQuery function to load up our processor UI Script and initialize it with our UI Page info
// this will make all of the functionality of the UI Script available.
$j(function () {
  try {
    window.slushBucketProcessor = new SlushBucketProcessor(GlideDialogWindow.get());
  }
  catch(err){
    alert(err);
  }
});

// Called when the form loads
addLoadEvent(function() {
  // clear the slush bucket
  slushIncidents.clear();
  
  // show loading dialog until we get a response from our ajax call to retrieve a list of incidents
  showLoadingDialog();

  // load the slushbucket with test data
  var incidentAjax = new GlideAjax('GetIncidentsAjax');
  incidentAjax.addParam('sysparm_name', 'getIncidentList');
  incidentAjax.getXML(incidentResults);
  
  function incidentResults(response){
      hideLoadingDialog();

    var incidentList = JSON.parse(response.responseXML.documentElement.getAttribute("answer"));

    // Fill the Slush Bucket control with the test data
    // Note that the value is the sys_id, and the "label" is the number
    for (var i=0; i < incidentList.length; i++) {
      slushIncidents.addLeftChoice(incidentList[i].sys_id, incidentList[i].number);
    }
  }
});

 

 

g. Submit to save your work

 

 

 

Lab 1.3: Create the UI Script Processor

 

We need to create a processor script to add functionality to our UI Page.  When called from the UI Page, this script will be merged into the page's functionality.  BTW, this is a good practice: keep your UI Page Client Scripts "thin" and allow for possible reuse of code.  UI Scripts are the client-side library containers.  If you keep them generic enough, you can create some nice libraries for later use in other projects.  Don't forget to read my comments in the code.

 

1. Navigate to System UI -> UI Scripts.  The UI Scripts list view will be displayed.

2. Click on the New button to create a new UI Script.  This will display the new UI Script form.

3. Fill in the form with the following:

a. Name: SlushBucketProcessor

b. Global: unchecked

c. Active: checked

d. Description: Processor UI Script that handles the Cancel and Submit events for the GlideDialog Slush Bucket example

e. Script:

 

var SlushBucketProcessor = function(dialogWindow) {
  var $btnCancelDialog = null;
  var $frmSubmitDialog = null;
  
  //Initialize the window
  _init();
  
  function _init() {
    _initForm();
    _initEventHandlers();
  }
  
  function _initEventHandlers() {
    // cancel button event
    $btnCancelDialog.click(function(event) {
      submitCancel(event);
    });
    
    // form submit event
    $frmSubmitDialog.submit(function(event) {
      onFormSubmit(event, this);
    });
  }
  
  // set up the form submit, and the cancel action
  // JQuery to pull in the form objects by id
  function _initForm() {
    $btnCancelDialog = $j("#cancel_dialog");
    $frmSubmitDialog = $j("#bucketStuff");
  }
  
  function submitCancel(event) {
    event.preventDefault();
    
    // tear down the dialog
    dialogWindow.destroy();
    return false;
  }
  
  function onFormSubmit(event, form) {
    event.preventDefault();

    var slushIncidents = [];
    
    // extract the results from the slush bucket control
    var slushIncidents_right = form.slushIncidents_right;
    
    // how many did we end up with?
    alert(slushIncidents_right.length);
    
    // do something with the results.  The name is the innerHTML, the sys_id is the value
    for (var i=0; i < slushIncidents_right.length; i++) {
      slushIncidents.push(slushIncidents_right[i].innerHTML + '-' + slushIncidents_right[i].value);
    }
    
    // display our array
    alert(JSON.stringify(slushIncidents));
    
    // tear down the dialog
    dialogWindow.destroy();
    return true;
  }
};

 

 

f. Submit to save your work

 

 

 

Lab 1.4: Create the UI Action to Fire the Dialog

 

Finally we need to create some sort of action to actually test our new GlideDialog UI Page with Slush Bucket!  I chose to create a UI Action attached to the Incident form.  It really doesn't do anything with the form, but gives the impression it does!

 

NOTE: I use another good OOB technique here by using a class to fire off the GlideDialog.  This allows me to use the "this" construct, and other functionality that is available to a class but wouldn't be available to a normal function.

 

1. Navigate to System UI -> UI Actions.  The UI Actions list view will be displayed.

2. Click on the New button to create a new UI Action.  The new UI Action form will be displayed.

3. Fill in the form with the following:

a. Name: Dialog Slushbucket

b. Table: Incident [incident]

c. Order: 100

d. Action name: dialog_slushbucket

e. Active: checked

f. Show insert: checked

g. Show update: checked

h. Client: checked

i. Form button: checked:

 

Everything else unchecked

 

j. Comments: Form button to test the functionality of the Slush Bucket GlideDialog UI Page

k. Onclick: (new slushBucketHandler(g_form)).showDialog();

l. Script:

 

var slushBucketHandler = Class.create({
  showDialog: function() {
    // fire off the dialog using our UI Script name
    this.dialog = new GlideDialogWindow("SlushBucketDialog");
    this.dialog.setTitle('Slushbucket Dialog');
    this.dialog.setWidth(400);
    this.dialog.render(); // Open the dialog box
  }
});



m. Submit to save your work

 

 

 

Lab 1.5: Testing the GlideDialog

 

We have arrived!  All the bits are done.  Now let's test our dialog page!

 

1. Navigate to Incident -> All.  The Incident list view will be displayed.

2. Open up any incident.  Note that the Dialog Slushbucket button appears on the form.

 

3. Click the button.  The dialog should be displayed, and the slush bucket control will populate.  It may take a moment, and you might actually see the LoadingDialog box appear briefly.

 

 

4. Pick as many incident numbers as you want, and transfer them to the right side of the control.

 

 

5. Click the Submit button.  1) An alert will appear showing how many incident numbers you selected.  2) A second alert will appear showing a compressed view of the array of your incident number/sys_id combos.  Click the OK button to close each respectively.

 

 

6. Click OK on the last alert box, and the dialog should disappear.

 

And you are done!  You might want to test out the cancel button to check that functionality as well (unit test everything).

 

Some extra reading:

https://learn.jquery.com/using-jquery-core/avoid-conflicts-other-libraries/

UI Pages - ServiceNow Wiki

UI Scripts - ServiceNow Wiki

Displaying a Custom Dialog - ServiceNow Wiki

Slushbucket - ServiceNow Wiki

GlideAjax - ServiceNow Wiki

 

Steven Bell

 

For a list of all of my articles:  Community Code Snippets: Articles List to Date

 

Please Share, Like, Bookmark, Mark Helpful, or Comment this blog if you've found it helpful or insightful.

 

Also, if you are not already, I would like to encourage you to become a member of our blog!



NOTE: MY POSTINGS REFLECT MY OWN VIEWS AND DO NOT NECESSARILY REPRESENT THE VIEWS OF MY EMPLOYER, ACCENTURE.

 

DIFFICULTY LEVEL:  INTERMEDIATE

Assumes some knowledge and/or familiarity of scripting, and REST web services in ServiceNow

____________________________________________________________________________

 

Recently I had to create a REST Message to retrieve information from a server.  It occurred to me that there really have not been many examples written on this for ServiceNow, at least from the code perspective.  So I thought I would share.  This really isn't that complicated, and you should be able to glean what you need from the ServiceNow wiki, but I thought I would present a working example to play with anyway.

 

In this example we will be using a test REST web service called 'cat-facts'.  This returns a success string (true/false) and anywhere from one to however many facts about cats we specify.

 

The idea here is to retrieve data from a REST Web Service and then parse the results into a JSON object for further use.

 

What is needed:

 

  1. Web Service end-point.  This will be:
    1. http://catfacts-api.appspot.com/api/facts?number=<<some number>>
    2. If the number is not provided it is automatically assumed to be 1.
  2. ServiceNow REST Message
  3. Fix Script to run our test code and parse the results

 


Lab 1.1: Creating the Rest Message

 

  1. Navigate to System Web Services -> Outbound -> REST Message.  The REST Message list view will be displayed.
  2. Click on the New button to create a new REST Message.  The new REST Message form will appear.
  3. Fill in the form with the following:
    1. Name: CatFacts
    2. Endpoint: http://catfacts-api.appspot.com/api/facts
    3. Accessible from: All application scopes
    4. Right-click on the form header and choose Save.  This will auto-generate four HTTP methods (get, put, post, delete).  We will only be using “get” for the purposes of this article.



5. In the HTTP Methods related list click on the “get” method.  This will open the Method form.

6. Change the Endpoint to: http://catfacts-api.appspot.com/api/facts?number=${number}



7. Right-click on the form header, and save your work.

8. Scroll to the bottom of the form and from the Variable Substitution related list click the New button. 

This will display the New Variable form.

9. Fill out the form with the following:

a. Name: number

b. Test Value: 5

Note: we will be pulling back five facts about cats.

c. Click Submit to save the variable.  This will take you back to the Method form.



10. Scroll down to the bottom of the Method form, and choose the “Test” link under the related links. 

This will execute the REST Message with the number value of 5. 



11. You should get a 200 result (success), and five facts about cats!



 

Our REST Message is now ready to be consumed - a fancy-smancy technical term meaning “used”.

 

 

Lab 1.2: Fix Script to Consume the REST Message

 

So now comes the cool stuff.  We will be creating code to execute the REST Message, and then convert the response to a JSON object so that we can actually make some use of it.

 

  1. Navigate to Fix Scripts.  This will display the Fix Scripts list view.
  2. Click the New button.  This will display the New Fix Script form.
  3. Fill out the form with the following:
    1. Name: Cat Facts
    2. Active: true
    3. Description: Test script to consume the CatFacts REST Message
    4. Script:

 

var webService = 'CatFacts';
var command = 'get'; // case sensitive
var number = 5; 

var restMessage = new sn_ws.RESTMessageV2(webService, command);
restMessage.setStringParameter('number', parseInt(number));
var response = restMessage.execute();  // REST return
var httpStatus = response.getStatusCode(); // response status code

var responseBody = response.getBody(); // stringified JSON returned info

var parsed = JSON.parse(responseBody); // turn it into a true JSON object

// Now we can begin using the response
gs.print(responseBody);

var facts = parsed.facts;
var success = parsed.success == 'true';

gs.print(facts.length);
gs.print(parsed.success);

if (success) {
    for (var i=0; i < facts.length; i++) {
        gs.print(facts[i]);
    }
}


 

 

5. Right-click on the form header and Save your work.



 

6. Execute the Fix Script (Related Links -> Run Fix Script).  You should get a result something like this:



And there you have it!  In this article I demonstrated:

 

  1. How to create a REST Message
  2. How to add a variable to the REST Message method
  3. How to test a REST Message method
  4. How to call a REST Message from a script
  5. How to use a variable in a REST Message from a script
  6. How to parse the returned results into JSON from the REST Message
  7. How to use the JSON results

 

Steven Bell

 

For a list of all of my articles:  Community Code Snippets: Articles List to Date

 

Please Share, Like, Bookmark, Mark Helpful, or Comment this blog if you've found it helpful or insightful.

 

Also, if you are not already, I would like to encourage you to become a member of our blog!

NOTE: MY POSTINGS REFLECT MY OWN VIEWS AND DO NOT NECESSARILY REPRESENT THE VIEWS OF MY EMPLOYER, ACCENTURE.

 

DIFFICULTY LEVEL:  ADVANCED to EXPERT

Assumes very good knowledge and/or familiarity of scripting, and workflows in ServiceNow.

____________________________________________________________________________

 

Recently I had the need to port a large JSON object (not a GlideRecord) from code to a called Workflow.  Simple, I said, I will just pass it through the variables parameter of my startFlow command.  Not.  startFlow, it appears, replaces all sorts of important characters in the object, scrambling it and making it garbage on the receiving end!  It helps you!  Arrrgh! - a technical term expressing frustration.  Well, a need is a need and I had to have a solution.  Here is what I came up with.

 

Caveat:

  1. DO NOT BOTHER USING THIS METHOD TO SEND A CONVERTED GLIDERECORD.  You can just go get this via a GlideRecord call from the workflow.  We ARE all server-side here after all!  So in my examples...what did I do?  I converted a couple of GlideRecords to give me some test data.  Do as I say, not as I do!  :-)
  2. Remember to play with these techniques in a sandbox or in your personal developer instance.  Please don't play with code like this in Production!  Seriously.


Design:

 

So the process will be to take our object, convert it to JSON, convert THAT to a string, then URL-encode it (the secret).  However, you still have to tweak the encoding a bit to get it to work (some character replacement).  Then on the workflow side of things we reverse the process.  Think Star Trek Transporter!  :-)

 

From the calling script:

 

  1. Create an object, or object array
  2. Convert the object to JSON
  3. JSON Stringify the object
  4. URL Encode the string
  5. Fix up the encoded URL


From the workflow:

 

  1. URL Decode the string
  2. Unfix the decoded string
  3. JSON Parse the object
  4. Begin using the object

 


Lab 1.1: The Calling Script

 

The following code can be run on any server-side script (Business Rules, Script Includes, Scheduled Jobs, Script Actions; to name a few).  We will be using a Fix Script to do the work for us.  I will be incorporating a couple of other articles I had written some time back to give us some test data to play with.

 

Community Code Snippets - Current Factory

Mini-Lab: Extending the GlideRecord Object

 

I have attached the code for both of these to this article.  I would suggest reading both if you want to understand how they actually work, but it is not necessary if you just want to focus on this article.  Import the attached files to bring these libraries into your developer instance.

 

  1. Navigate to Fix Scripts.  The Fix Scripts List View will be displayed.
  2. Click on the New button.  This will display a New Fix Script form.
  3. Fill out the form with the following:
    1. Name: Send JSON to Workflow
    2. Active: checked
    3. Description: Example of sending a JSON object to a workflow
    4. Script:

 

gs.include('ACNTableUtils'); // currentFactory, listFactory
// this has been improved since the article was written
gs.include('ACNGlideRecordExtensions'); // toObject, toObjectList

// get a single incident record and convert it to an object
var current = ACNTableUtils.currentFactory('incident').toObject();

// go get a bunch of records and convert them to an object array
var order = {};
order.type = 'descending';
order.field = 'number';
// just showing off here - the toObjectList part isn't really necessary  :-)
var incidentList = ACNTableUtils.listFactory('incident', 5, order).toObjectList();

// there is a LOT of stuff in a GlideRecord so let's pick off a couple of things
// we could pass across for a test
var incidents = [];
for (var i=0; i < incidentList.length; i++) {
    var incident = {};
    incident.number = incidentList[i].number;
    incident.sys_id = incidentList[i].sys_id.toString();
    incident.short_description = global.JSUtil.notNil(incidentList[i].short_description) ? incidentList[i].short_description : 'nuttin here';
    incidents.push(incident);
}
// end of test data setup

var short_description = current.short_description;
var number = current.number + '';
var testList = ['cows', 'chickens', 'pigs', 'horses'];

// From this point is where we set up the send
var payload = {};
payload.short_description = short_description;  // single value
payload.number = number; 
payload.testList = testList; // simple array
payload.incident = current; // complex object
payload.incidentList = incidents; // object array

// turn the payload into a JSON object and stringify it
payload = global.JSON.stringify(payload); 

var workflow = new Workflow();
var workflowID = workflow.getWorkflowFromName('Send JSON to Workflow');

var vars = {};
// now we encode it and fix it up before startFlow can destroy it.
// Colons and brackets are the problems: encodeURI escapes the brackets (%5B/%5D),
// so we reconstitute those here, and we pre-escape the colons ourselves.
// startFlow has a lot to answer for.
// Place the string into our inbound variable.
vars.u_payload = encodeURI(payload).replace(/%5B/gi, '[').replace(/%5D/gi, ']').replace(/:/g, '%3A');

// now call the workflow and pass our object
workflow.startFlow(workflowID, null, null, vars);





5. Save your work.

 


 

Lab 1.2: The Workflow

 

1. Navigate to Workflow -> Workflow Editor

2. Create a new Workflow

    1. Name: Send JSON to Workflow
    2. Table: Global

 

3. Create a new input variable: u_payload, length 8000. 

NOTE: you might want to make this bigger if you have a lot of data you are bringing across!  Otherwise your string gets truncated and you get strange conversion errors from the JSON decode!

 

4. Pull out a Run Script Activity

    1. Name: Initialize
    2. Script:

 

var location = context.name + '.' + activity.name;
var payloadDecode = decodeURI(workflow.variables.u_payload);
// prep the string prior to reconstituting the JSON.
// Colons are the only issue; all the rest get handled by decodeURI
payloadDecode = payloadDecode.replace(/%3A/g, ':');  

// we can now parse the string into a JSON object
var payload = JSON.parse(payloadDecode);

// now we can start using the values
var short_description = payload.short_description;
var number = payload.number;
var testList = payload.testList; // simple array
var incident = payload.incident; // complex object
var incidentList = payload.incidentList; // object array

var message = '--->\n';

message += '\tshort_description: ' + short_description + '\n';
message += '\tNumber: ' + number + '\n';
message += '\tTestList length: ' + testList.length + '\n';

for (var i=0; i < testList.length; i++) {
  message += '\t- ' + testList[i];
}
message += '\n';

message += '--- incident\n';
message += 'number: ' + incident.number;
message += '\tsys_id: ' + incident.sys_id + '\n';

message += '--- incidentList\n';
for (var j=0; j < incidentList.length; j++) {
  var incident = incidentList[j];
  message += 'number: ' 
  + incident.number 
  + '\t- short_description: ' 
  + incident.short_description.replace(/~/g,' ') + '\n';
}

gs.log(message, location);



 


    5. Click update.

 


 

Lab 1.3: Test

 

  1. Go back to your fix script, and run it.  You should see something like the following results:

 

 

And there you go!  Remember:  If you get strange conversion errors in the workflow code it is probably because you overran 8000 characters in your string.  Just increase the u_payload size to correct the issue.

 

Steven Bell

 

 

For a list of all of my articles:  Community Code Snippets: Articles List to Date

 

Please Share, Like, Bookmark, Mark Helpful, or Comment this blog if you've found it helpful or insightful.

 

Also, if you are not already, I would like to encourage you to become a member of our blog!

This time, the scenario was like this.

 

I have a list collector, and what I choose there should become a filter on list collector number 2. The biggest catch here was that I can choose multiple values in the first, and these should become OR conditions on the next list collector.

 

Now, in my example I first want to let the user choose which department the user should belong to, and put that as a filter on the second list collector.

First I would say that there is a pre-Helsinki solution and a Helsinki solution (maybe).

 

The picture above is also using the "no_filter" attribute to hide the filter.

 

 

Pre-Helsinki:

In the example above our first list collector is in a variable called "depart", so what we need to do is create an onChange client script that triggers on the variable depart. When depart changes, the script changes the filter on the second list collector, "users".

 

This is how the onChange script can look:

 

function onChange(control, oldValue, newValue, isLoading) {
    if (isLoading) {
        return;
    }

    // Name of the list collector that the filter shall apply to
    var collectorName = 'users';

    // If the newValue isn't empty, build the filter
    if (newValue != '') {
        // Split the comma-separated string into an array
        var answer = newValue.split(',');

        // The first condition shouldn't start with an "OR", so set it here with the first value
        var filterString = 'department=' + answer[0];

        // If more than one department is selected, add those as "OR" conditions
        for (var i = 1; i < answer.length; i++) {
            filterString += '^ORdepartment=' + answer[i];
        }

        eval(collectorName + 'g_filter.reset()');
        eval(collectorName + 'g_filter.setQuery("' + filterString + '")');
        eval(collectorName + 'acRequest(null)');
    }
    // If the newValue is empty, just reset the filter
    else {
        eval(collectorName + 'g_filter.reset()');
        eval(collectorName + 'acRequest(null)');
    }
}


 

The script above takes the new values and puts them in as OR conditions on the second list collector; for two departments, the generated query looks like department=<sys_id1>^ORdepartment=<sys_id2>. I've also added that if all the choices are cleared, so that newValue is empty, the filter is reset.

 

 

Helsinki and beyond (hopefully)

In Helsinki we got the "Reference qualifier" field, which lets us do some really nice stuff. I was hoping to solve this with one easy line, but I can't get "IN" to work, and if anyone reading this knows why, I hope you'll throw a comment below and explain.

 

Let me show you what I mean.

 

First of all go to the second list collector and the "Default Value" section.

Fill in "Variable attributes" like this, where "depart" is the first list collector's name.

 

Now, in the "Type Specifications" section, this is what I was hoping would work:

But it doesn't, and I can't really understand why.

 

I can do it like this:

 

And it will work, but only if I choose one value from the first list collector, which takes away the point of a list collector...
Might as well just have a reference field then.

 

If I get any solution for the ref qual, I'll update this post.

 



 

 

Symfoni-Logo-Color (1).png sn-community-mvp.png

//Göran

ServiceNow Witch Doctor and MVP
-----------------------------------
For all my blog posts: http://bit.ly/2fCzj1g

Note: This blog post reflects my own personal views and does not necessarily reflect the views of my employer, Accenture.

 

Every now and then I'll come across a question in the community, or have a customer, that needs to track whether an item was generated from an order guide or a standard catalog item request. Sometimes they also need to know which order guide it came from.

 

There are really a couple of ways to track that. One involves a variable set and some cascading variables; the other is this SNCGuru solution. They both have drawbacks, but I usually lean towards the latter, partially because I try to create as few variables as possible. It's not perfect, as there's a use case where it doesn't end up capturing the order guide.

 

Today I posted that SNCGuru article as a response to someone else's question and got a new response. shouvik made my day by linking to this article and telling us that there is a feature in Helsinki that automatically captures the order guide and writes it to a field called Order guide on the requested item, without the customer having to configure anything. I went ahead and tested in my Helsinki instance and, sure enough, the Order guide field was populated with the order guide I used to order the items.

 

ritms.PNG

 

I figured I'd try to give this a bit more visibility, as I've been working primarily in Helsinki for a while now, try to stay on top of the newest releases, and had no idea that this new functionality existed.

 

Request an order guide
