Developer Community

Posts authored by: Jonny Seymour

At a time when Twitter, Facebook, Google and Yahoo give you the flexibility to select what you do and don't want to read, email is still one of the places where "unsubscribe" is not always an option. As users go on holiday, move jobs (e.g. the account gets disabled), mailboxes fill up, emails are marked as spam, etc., some email relays start bouncing those emails back, and those users can no longer unsubscribe from those senders. On your instance, it is good practice to review the bounced emails and identify the users to whom no further notifications should be sent. This requires efficient and regular intervention, as bounced emails will slowly increase if no action is taken. From the administration point of view, if users empower themselves to unsubscribe, you will have less work to do, and wanted emails will go out faster.

[Image: example of a bounced email]

 

Following up on my previous blog, Speed up your email delivery by validating recipients, here is my review of the matter.

 

3 ways to minimize bounced emails

  1. Educate your users to unsubscribe from the unwanted notifications instead of just ignoring them
  2. Regularly execute a validation of the emails bounced back to the instance
  3. Archive junk emails to improve the email table performance

 

Educate your users to unsubscribe from unwanted notifications

To educate your users, make sure they know this functionality is available on your instance. Users can create filters within their preferences so that they only receive a notification if the conditions set in their personalised filter are met. They can also disable individual notifications, or disable all the notifications they receive altogether. This does not affect the user's active status - just the notifications targeted at them.

 

Here is an example of a notification received by one user: angelo.ferentz.

[Screenshot: the email notification received]

 

To do this, users can go to their notification preferences, select the notification they have received, and set the dial to "off." That stops further notifications of that type being sent to this user only.

[Screenshot: setting the notification dial to off]

 

They can also leave notifications ON, but set a personalised filter that suppresses them under certain conditions.

[Screenshot: advanced personalised filter]

 

 

Finally, if users want to stop all notifications being sent to their devices (e.g. before going on holiday), they can disable them altogether.

[Screenshot: disabling all notifications]

 

From the performance point of view, if users only enable the notifications they require, there are fewer emails to send. This is a much better option than generating tons of emails that go out and bounce back because these settings were ignored.

If you are reading this, unsubscribe from all the unwanted notifications to help improve the performance of the services you receive and provide.

 

Regularly execute a validation of the emails bounced back to the instance

There is no standard practice for parsing bounced emails. Moreover, in the cloud, instance administrators have limited options available to avoid bounced emails.

 

Administrators can see the emails received and marked as Junk. If you recognise those emails as bounce-backs, the safest option is to set the user's 'notification' field to Disabled, then contact the user to inform them of the change. Doing this regularly improves your email reliability because it reduces the amount of email bouncing back. If your instance has a larger proportion of bounced emails, I have created some tools to help.

 

retrieveBouncedBackEmail is a script that, when executed, matches a query and parses the email addresses in the body_text, which usually contains the bounced addresses. It ignores the addresses after the "Received:" text, as that part usually contains the original senders.

 

setUserNotificationbyEmail is a script that enables or disables notifications for the users matching the given email addresses.

 

Here is the script for retrieveBouncedBackEmail:

 

// Retrieve the list of email addresses found in bounced emails.
// Note: there is no standard format for bounce messages, so this is a best-effort parse.
//
// Returns an array: [0] array of messages, [1] array of bounced email addresses
function retrieveBouncedBackEmail(queryToMatch, maxToFindPerEmail, limitquery) {

    // Max emails retrieved from each bounced email parsed - default 10
    maxToFindPerEmail = maxToFindPerEmail || 10;
    // Limit parsing to no more than limitquery emails - default 100
    limitquery = limitquery || 100;
    // Query that matches the bounced emails - default: "type=received-ignored", created this week
    queryToMatch = queryToMatch || "type=received-ignored^sys_created_onONThis week@javascript:gs.beginningOfThisWeek()@javascript:gs.endOfThisWeek()";

    // Set the response messages and perform the query
    var vmessages = ["Query limit set " + limitquery, "Searching on sys_email", "Query: " + queryToMatch, "Threshold: " + maxToFindPerEmail];
    var vemail = new GlideRecord("sys_email");
    vemail.addEncodedQuery(queryToMatch);
    vemail.setLimit(limitquery);
    vemail.query();
    vmessages.push(" sys_email records returned: " + vemail.getRowCount());

    // Loop over the matched emails; assume the body contains the bounced addresses
    var lemails = [];
    while (vemail.next()) {
        var body = vemail.body_text.toString();
        // Ignore the text after "Received:" on the body_text, as it contains the addresses
        // from the original message. We only want the ones related to the bounce itself.
        var pos = body.indexOf("Received:");
        if (pos > 0) body = body.substr(0, pos);

        var found = extractEmails(body);
        found = found ? uniq(found) : [];

        // Report bounced emails from which no address could be extracted,
        // so you can inspect them and validate why nothing was found
        if (found.length == 0)
            vmessages.push("Could not extract any email address from: " + vemail.sys_id);

        // If an email contains more addresses than the threshold, skip it and report it,
        // so you can inspect why there were so many
        if (found.length > maxToFindPerEmail)
            vmessages.push("Email ignored by threshold: " + vemail.sys_id + " - count: " + found.length);

        if (found.length > 0 && found.length <= maxToFindPerEmail)
            lemails = uniq(lemails.concat(found));
    }

    // Returns an array with the list of arrays:
    // [0] array of messages [1] array of emails
    return [vmessages, lemails];

    // Generate an array of the email addresses found in the text provided
    function extractEmails(text) {
        return text.match(/([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9._-]+)/gi);
    }

    // Sort the array and remove duplicate strings
    function uniq(a) {
        return a.sort().filter(function(item, pos, ary) {
            return !pos || item != ary[pos - 1];
        });
    }
}
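Outside the instance, the parsing core of the script can be exercised in plain JavaScript. Here is a stand-alone sketch with an invented bounce body, showing how the regex and the de-duplication behave together:

```javascript
// Stand-alone versions of the parsing helpers (the bounce text below is invented)
function extractEmails(text) {
    return text.match(/([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9._-]+)/gi);
}

function uniq(a) {
    return a.sort().filter(function(item, pos, ary) {
        return !pos || item != ary[pos - 1];
    });
}

var body = "Delivery failed: bob@example.com bob@example.com alice@example.com\n" +
    "Received: from mail.example.com\nOriginal sender: original@sender.com";

// Cut the body at "Received:" so the original sender is not picked up
var pos = body.indexOf("Received:");
if (pos > 0) body = body.substr(0, pos);

var found = uniq(extractEmails(body) || []);
// found is ["alice@example.com", "bob@example.com"] - sorted, de-duplicated,
// and without the original sender's address
```

Note how original@sender.com never reaches the result, because it sits after the "Received:" marker.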

 

Here is the script for setUserNotificationbyEmail:

 

// Set the user 'notification' field to Enabled/Disabled for the matching email addresses
// vemailarray: array of email addresses to set
// setEnable: "enabled" or "disabled" ("disabled" is the default)
// limitquery: max number of users to set (default 100)
function setUserNotificationbyEmail(vemailarray, setEnable, limitquery) {
    setEnable = /^(enable|enabled|true|1|yes|on)$/i.test(setEnable);
    limitquery = limitquery || 100;

    // Only query the users that need to be changed; ignore the ones already set.
    // notification values: 1 = Disabled, 2 = Enabled
    var vquery = (setEnable ? "notification=1" : "notification=2") + "^emailIN" + vemailarray.join(",");
    var vuser = new GlideRecord("sys_user");
    vuser.addEncodedQuery(vquery);
    vuser.setLimit(limitquery);
    vuser.query();

    // Create the messages to return
    var vmessages = ['Query Limit set ' + limitquery, 'Records found: ' + vuser.getRowCount(), 'Query executed: ' + vquery];
    // Here the users are being set
    while (vuser.next()) {
        vmessages.push((setEnable ? "Enabled " : "Disabled ") + "notification: " + vuser.sys_id + " - user: " + vuser.user_name);
        vuser.notification = setEnable ? 2 : 1;
        vuser.update();
    }
    return vmessages;
}

 

 

As an example, here is a bounced email found on an instance:

 

[Screenshot: example of a bounced email record]

 

Executing:

// set the qualification to match your bounced emails.  
var result = retrieveBouncedBackEmail("sys_idSTARTSWITHc3a63a97dbc14bc0d975f1c41d9619b7", 45, 1000);
gs.print("\n Results:\n" + result[0].join("\n") + "\n\nFinal list of emails " + result[1].length + "\n\n\n" + result[1].join("\n"));

 

RESULT:

*** Script:
Results:
Query limit set 1000
Searching on sys_email
Query: sys_idSTARTSWITHc3a63a97dbc14bc0d975f1c41d9619b7
Threshold: 45
sys_email records returned: 1

Final list of emails 3


andrea.sisco@abc.com.net
test-bb.aa@abc.com.net
test.aaa@abc.com.net

 

To disable the users, you can execute the following:

 

// Create an array of emails for the users you want to disable.
var todisablelist = ["andrea.sisco@abc.com.net","test-bb.aa@abc.com.net","test.aaa@abc.com.net"];

// Disable the user notification for each of them
gs.print('\n\nSetting user disabled:\n\n' + setUserNotificationbyEmail(todisablelist,'Disabled').join('\n'));

 

RESULT:

*** Script:

 

Setting user disabled:

 

Query Limit set 100
Records found: 1
Query executed: notification=2^emailINandrea.sisco@abc.com.net,test-bb.aa@abc.com.net,test.aaa@abc.com.net
Disabled notification: 8d256345dbe983002fd876231f96196e - user: andrea.sisco

 

Or you can use them together as follows:

 

// Set the qualification to match your bounced emails.
var result = retrieveBouncedBackEmail("sys_idSTARTSWITHc3a63a97dbc14bc0d975f1c41d9619b7", 45, 1000);
gs.print("\n Results:\n" + result[0].join("\n") + "\n\nFinal list of emails " + result[1].length + "\n\n\n" + result[1].join("\n"));

// result[1] holds the array of email addresses for the users you want to disable.
// *After you validate the list*, disable the user notifications:
gs.print('\n\nSetting user disabled:\n\n' +
   setUserNotificationbyEmail(result[1], 'Disabled').join('\n'));

 

RESULT:

*** Script:
Results:
Query limit set 1000
Searching on sys_email
Query: sys_idSTARTSWITHc3a63a97dbc14bc0d975f1c41d9619b7
Threshold: 45
sys_email records returned: 1

Final list of emails 3


andrea.sisco@abc.com.net
test-bb.aa@abc.com.net
test.aaa@abc.com.net

Setting user disabled:


Query Limit set 100
Records found: 1
Query executed: notification=2^emailINandrea.sisco@abc.com.net,test-bb.aa@abc.com.net,test.aaa@abc.com.net
Disabled notification: 8d256345dbe983002fd876231f96196e - user: andrea.sisco

Please note the user notification value does not affect the "active" or "locked out" status of the account. It only affects the notifications.

I hope these actions empower administrators to set a user account's notifications to Disabled. This should dramatically reduce the amount of email being bounced back.

 

 

Archive junk emails

ServiceNow can archive and eventually destroy email messages that you no longer need, such as junk emails, or when your email table is excessively large. This is especially important if you depend on email, as very large tables tend to degrade over time and delay upgrades.

 

For performance, the email retention rule we are looking for is "Emails - Ignored and over 90 days old". This rule archives email message records that were created more than 90 days before the current date and are of type received-ignored or sent-ignored.

[Screenshot: email archiving rule]

 

 

Once the emails are archived, they stay in the archive for a year before being destroyed.

[Screenshot: archive destroy rule]

 

Now it is time to put this all in place and allow your users to unsubscribe. For users whose notifications bounce back, set the sys_user 'notification' field to Disabled. Finally, for the emails ending up in your instance's Junk, archive and destroy them. If you keep doing this, your email delivery will be reliable and delays will be a thing of the past.

 

Here are some useful resources for more information:

If you thought GlideDateTime holds the time and the time zone, and that you would need to convert from one time zone to another or perform complex tasks to use it, think again. GlideDateTime holds the time in Coordinated Universal Time (UTC). It takes time to get used to this "simple" concept. You only need to consider the time zone when "displaying" the values.


When you retrieve data with a GlideRecord, date and time fields are populated with all the relevant information, and their values are in UTC. However, when you are creating a new date and time value from scratch, or trying to get the Display value, you may face the task of validating the Display time. If that is the case, this blog is for you. I will show some examples using data retrieved from an incident, and another setting the time from scratch in a script.

 

Simple enough? The browser will show the GlideDateTime Display value, and calculations are in UTC.

 

GlideDateTime holds the time in UTC

 

When reviewing the different ways to handle multiple time zones, storing the data as UTC (Coordinated Universal Time) is simple and effective.

From the client's point of view, time zones only matter when displaying the data. In any other calculation, the times are UTC.
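The same principle can be demonstrated in plain JavaScript, where a Date object also stores a single UTC instant and the time zone is only applied when formatting. This is my own sketch for illustration, not Glide code:

```javascript
// A Date stores one instant in time (internally UTC);
// the time zone only matters at display time
var closedAt = new Date(Date.UTC(2017, 2, 19, 14, 38, 1)); // 2017-03-19 14:38:01 UTC

function display(d, tz) {
    return d.toLocaleString("en-GB", {
        timeZone: tz, hour12: false,
        year: "numeric", month: "2-digit", day: "2-digit",
        hour: "2-digit", minute: "2-digit", second: "2-digit"
    });
}

var inLondon = display(closedAt, "Europe/London");           // shows 14:38:01 (UTC+0 in March)
var inLosAngeles = display(closedAt, "America/Los_Angeles"); // shows 07:38:01 (UTC-7, PDT)
// Two different display strings, one stored instant
```

Europe/London and America/Los_Angeles are IANA time-zone names; the stored value never changes, only the rendering does.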

 

Here is one example of a client time and the user profile time:

 

[Screenshot: PC clock in the UK showing 14:38]

[Screenshot: user profile time zone set to America/Los_Angeles]

In the UK, the time shown on the PC was 14:38, while the app showed 07:38 because the profile was set to the America/Los_Angeles time zone.

Both times are in different time zones, but in UTC they are the same: Closed (closed_at) is "2017-03-19 14:38:01 UTC".

The Display time is calculated based on the time zone

 

The time zone is taken from the user profile, the system time zone, or the client time zone. However, the time zone can later be set manually by scripts.

Then, based on the time zone, an offset is applied and the "Display" value is calculated.

 

Here is an example:

 

--script----

var gr = new GlideRecord('incident');
gr.get("number","INC0020002");

gs.print('\ngr.closed_at: ' + gr.closed_at + " UTC\ngr.closed_at.getDisplayValue(): " + gr.closed_at.getDisplayValue() + " system time");

 

Results:

*** Script:

gr.closed_at: 2017-03-19 14:38:01 UTC

gr.closed_at.getDisplayValue(): 2017-03-19 07:38:01 system time

 

Modifying the time zone will NOT modify the stored UTC data; just the Display value.

Modifying the time zone does not modify the stored data. This is an area that can be confusing. GlideDateTime contains a time zone, and changing it only modifies the "Display" value. This is key to how the data is displayed on clients.

 

Here is an example:

 

--script----

var message = [];
var gr = new GlideRecord('incident');
gr.get("number","INC0020002");
var vclosed_at = new GlideDateTime (gr.closed_at);
message.push('gr.closed_at: ' + vclosed_at);

// Setting to IST timezone
vclosed_at.setTZ(Packages.java.util.TimeZone.getTimeZone("IST"));
message.push('\ngr.closed_at: ' + vclosed_at + " UTC\ngr.closed_at.getDisplayValue(): " + vclosed_at.getDisplayValue() + " IST");

// Setting to US/Pacific timezone
vclosed_at.setTZ(Packages.java.util.TimeZone.getTimeZone("US/Pacific"));
message.push('\ngr.closed_at: ' + vclosed_at + " UTC\ngr.closed_at.getDisplayValue(): " + vclosed_at.getDisplayValue() + " US/Pacific");

gs.print(message.join ('\n'));

 

Results:

 

Timezone   | Display Time        | Database time
IST        | 2017-03-19 20:08:01 | 2017-03-19 14:38:01 UTC
US/Pacific | 2017-03-19 07:38:01 | 2017-03-19 14:38:01 UTC

 

*** Script:

gr.closed_at: 2017-03-19 14:38:01

gr.closed_at: 2017-03-19 14:38:01 UTC
gr.closed_at.getDisplayValue(): 2017-03-19 20:08:01 IST

gr.closed_at: 2017-03-19 14:38:01 UTC
gr.closed_at.getDisplayValue(): 2017-03-19 07:38:01 US/Pacific

 

How to initialize a GlideDateTime with Display values

 

This is probably the most asked question when dealing with dates and times: how do you create a date and time value when you know the display value? The answer is to set the GlideDateTime time zone before setting the display value. This performs a reverse offset translation from the "Display" value into the database value.

 

Here is an example:

 

--script----

var vgdt_ist = setDisplayTime ("2017-03-19 20:08:01", "IST");
var vgdt_pdt = setDisplayTime ("2017-03-19 07:38:01", "US/Pacific");
var message = [];

// Setting to IST
message.push('\ngr.vgdt_ist: ' + vgdt_ist + " UTC\nvgdt_ist.getDisplayValue(): " + vgdt_ist.getDisplayValue() + " IST");

// Setting to US/Pacific
message.push('\ngr.vgdt_pdt: ' + vgdt_pdt + " UTC\nvgdt_pdt.getDisplayValue(): " + vgdt_pdt.getDisplayValue() + " US/Pacific");

gs.print(message.join ('\n'));


// Convert provided date time and timezone into UTC to validate
// Default format is "yyyy-MM-dd HH:mm:ss"
// return: GlideDateTime on vtimezone
function setDisplayTime (originaldt, vtimezone) {
    var a = new GlideDateTime();
    a.setTZ(Packages.java.util.TimeZone.getTimeZone(vtimezone));
    a.setDisplayValue(originaldt, "yyyy-MM-dd HH:mm:ss");       
    return a;
}

 

Results:

 

Timezone   | Display Time        | Database time
IST        | 2017-03-19 20:08:01 | 2017-03-19 14:38:01 UTC
US/Pacific | 2017-03-19 07:38:01 | 2017-03-19 14:38:01 UTC

 

*** Script:

gr.vgdt_ist: 2017-03-19 14:38:01 UTC

vgdt_ist.getDisplayValue(): 2017-03-19 20:08:01 IST

 

gr.vgdt_pdt: 2017-03-19 14:38:01 UTC

vgdt_pdt.getDisplayValue(): 2017-03-19 07:38:01 US/Pacific
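For comparison, the same reverse offset translation can be sketched in plain JavaScript using Intl. This is my own illustration: Asia/Kolkata stands in for Java's "IST" abbreviation, and the sketch assumes the time does not fall on a DST transition:

```javascript
// Reverse-offset sketch: build a UTC Date from a display ("wall clock") time
// in a given IANA time zone. Assumes the time is not on a DST transition.
function zonedToUTC(displayStr, tz) {
    var m = displayStr.match(/(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/);
    // First guess: treat the wall-clock time as if it were already UTC
    var guess = Date.UTC(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6]);
    // Find what wall-clock time that instant shows in tz, and correct by the difference
    var shown = wallClockUTC(guess, tz);
    return new Date(guess - (shown - guess));
}

// Returns the wall-clock time of the instant `ms` in time zone `tz`,
// re-encoded as a UTC timestamp so the two can be subtracted
function wallClockUTC(ms, tz) {
    var parts = {};
    new Intl.DateTimeFormat("en-US", {
        timeZone: tz, year: "numeric", month: "2-digit", day: "2-digit",
        hour: "2-digit", minute: "2-digit", second: "2-digit", hour12: false
    }).formatToParts(new Date(ms)).forEach(function(p) { parts[p.type] = p.value; });
    return Date.UTC(+parts.year, +parts.month - 1, +parts.day,
        +parts.hour, +parts.minute, +parts.second);
}

var utc = zonedToUTC("2017-03-19 20:08:01", "Asia/Kolkata");
// utc is the instant 2017-03-19 14:38:01 UTC, matching the values shown above
```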

 

Finally, it is important to keep date and time fields simple, remembering that they only hold the data in UTC. Everything should be as simple as it can be, but not simpler.

 

For more information on date and time fields, check out these resources:

ServiceNow implements access to tables through GlideRecord. The object comprises functions, elements, and methods to work with all the available fields. As tables contain fields, and fields have types, we tend to assume that table.field will inherit that field's type (e.g. string). If you think that, you backed the wrong horse! GlideRecord fields are not string, number, or boolean values. A GlideRecord field is represented by a GlideElement, so each field is itself an object. When used, Java will guess and cast them to the correct type. To "cast" means to take an object of one type and transform it into another; like converting dollars ($) to pounds (£), you can lose something in the conversion.

 

There is no need to cry a river about it; just be aware of the behavior. Even professional developers fail to spot these problems. Just as you look out for motorbikes when driving, when you use a GlideRecord, look out for operations on these GlideElement objects. As the THINK! campaign says, expect the unexpected.

 

[Image: THINK! road-safety campaign poster]

What to look out for when using a GlideRecord

 

# | THINK! What to look out for | THINK! Advice when you are scripting
1 | GlideRecord fields containing strings, numbers, or booleans, especially when passed to other functions as parameters | When passing parameters to functions, force the cast to the required input type. For example, to cast to string use .toString(); to cast to a decimal use value + 0.
2 | Undefined values | When a function does not exist on the object, the result is undefined
3 | The typeof a GlideRecord field is "object" (e.g. typeof gr.number is "object") | Operations that depend on the type of the object, like ===, can fail on GlideRecord fields
4 | String operations need to be applied to strings, not to the GlideElement object | Ensure you perform a .toString() when a string operation is required (e.g. gr.short_description.toString().length)

 

Casting issues when using GlideRecord

Using an example, I will try to validate these three potential casting problems:

  • In a GlideElement for a string field: trying to use the String property length on it returns undefined.
  • In a GlideElement for a number: comparing the value using === returns false.
  • In a GlideElement for a boolean: comparing the value using === returns false.
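The Glide classes are not available outside an instance, but you can reproduce the same failure mode in plain JavaScript with wrapper objects, which, like GlideElement, are objects that merely wrap a primitive. This is an analogy of mine, not ServiceNow code:

```javascript
// Wrapper objects behave like GlideElement fields: typeof reports "object",
// and strict comparison against a primitive fails
var vinteger = new Number(7777);   // an object wrapping 7777
var vboolean = new Boolean(false); // an object wrapping false

var t   = typeof vinteger;                  // "object", not "number"
var eq1 = (vinteger === 7777);              // false - object vs primitive
var eq2 = ((vinteger + 0) === 7777);        // true - arithmetic forces the cast
var eq3 = (vboolean === false);             // false, even though it wraps false
var eq4 = (vboolean.valueOf() === false);   // true after an explicit cast
```

The fixes mirror the table above: force the cast (+ 0, .valueOf(), .toString()) before comparing.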

 

 

Example of GlideRecord fields returning false/undefined results

To demonstrate the cast problems, I have created an example. When scripting, you can explicitly cast your field. Here is a script include I have used to explicitly cast the fields:

Script include: ParseGlideElement

[Screenshot: ParseGlideElement script include record]

Example Script:

var ParseGlideElement = Class.create();
ParseGlideElement.prototype = {
    initialize: function() {},
    // Parse. Input: A glideElement object. Output: A cast of the field value into boolean, decimal, date_time or string based on field internalType
    parse: function(a) {
        if(a.nil())return null;
        var b = a.getED().getInternalType();
        return "boolean" == b ? this.parseBool(a) :
            "integer" == b ? this.parseInt(a) : 
            "glide_date_time" == b ? new GlideDateTime(a) : 
            "string" == b ? a.toString() :
            "decimal" == b ? this.parseFloat(a): 
            a
    },
    parseBool: function(a) {
        return "boolean" == typeof a ? a : /^(true|1|yes|on)$/i.test(a)
    },
    parseInt: function(a) {
        return a + 0
    },
    parseFloat: function(a) {
        return a + 0
    },
    type: "ParseGlideElement"
};

 

Here is the record I will be using to test:

test record.jpg

When executing the following background script:

var gr = new GlideRecord("u_test_record");
gr.get('6f46dfb913e576005e915f7f3244b020'); // sys_id of the test created

var vstring = gr.u_string1;
var vinteger = gr.u_integer;
var vboolean = gr.u_truefalse;

gs.print("Test without explicitly casting fields: \n " + testGlideRecord(vstring, vinteger, vboolean).join('\n'));

var gpe = new ParseGlideElement();
vstring = gpe.parse(gr.u_string1);  // casting to string based on the ED internaltype Same as gr.u_string1.toString()
vinteger = gpe.parse(gr.u_integer); // casting to integer based on the ED internaltype. Same as gr.u_integer + 0
vboolean = gpe.parse(gr.u_truefalse); // casting to boolean based on the ED internaltype

gs.print("Test explicitly casting fields: \n " + testGlideRecord(vstring, vinteger, vboolean).join('\n'));


function testGlideRecord(vstring, vinteger, vboolean) {
    var message = [];
    message.push(
        '\nGlide record u_test_record.do?sys_id=6f46dfb913e576005e915f7f3244b020'
    );

    // Example 1 - Expected cast to String

    message.push("\n****** Example 1  - Expected cast to String ");
    message.push("gr.u_string1: " + vstring + " - typeof: " + typeof vstring);
    message.push("gr.u_string1.length: " + vstring.length + " - expected: 11");

    // Example 2 - Expected cast to Integer
    message.push("\n****** Example 2 - Expected cast to Integer ");
    message.push("gr.u_integer: " + vinteger + " - typeof: " + typeof vinteger);
    message.push(vinteger + " === 77777 :" + (vinteger === 77777) + '- expected: true');

    // Example 3 - Expected cast to Boolean
    message.push("\n****** Example 3 - Expected cast to boolean ");
    message.push("gr.u_truefalse: " + vboolean + " - typeof: " + typeof vboolean);
    message.push(vboolean + " === false :" + (vboolean === false) +
        '- expected: true');

    return message;
}

 

... this is the result:

 

Field       | Value       | Scripting example simplified        | Result    | Expected
u_string1   | teststring2 | gr.u_string1.length                 | undefined | 11
u_string1   | teststring2 | gr.u_string1.toString().length      | 11        | 11
u_integer   | 7777        | gr.u_integer === 7777               | false     | true
u_integer   | 7777        | (gr.u_integer + 0) === 7777         | true      | true
u_truefalse | false       | gr.u_truefalse === false            | false     | true
u_truefalse | false       | parseBool(gr.u_truefalse) === false | true      | true

 

As you can see, there are a few cases where you need to explicitly "cast" your field types to avoid mixing apples with pears.

 

How to use GlideRecord when the unexpected happens

In a nutshell, here are some recommendations for using GlideRecord when encountering a casting issue:

 

GlideElement with Element Descriptor (ED) | Operations to look out for | Problem areas | Recommendation
String | Concatenations, or passing as parameters to other functions | Operations like startsWith(), endsWith() or length used directly can return undefined | Use .toString()
Integer or Float | Math operations against an integer or decimal | In some operations it can be cast to string incorrectly, or validated against the incorrect type | Consider using "value + 0" or Number(value) to force the cast to a number
Boolean | Use in conditions | In complex conditions it can evaluate to false | Consider transforming the field to a boolean value first

 

 

I have tested using our Istanbul release and Chrome as the browser.

 

Want to learn more about GlideRecords? Here are some great resources to check out:

 

Thanks to reid (Reid Murray) for the pointers

Do your reference fields get the wrong match? Then use setDisplayValue. A reference field displays as a string, but it is one click away from another table, bringing with it our famous dot-walk feature. Dot-walking provides an ideal way to join table records. When you define a reference field, the system creates a relationship between the two tables. Adding a reference field to a form makes the other fields in the referenced table available to the form. On the incident table, both Caller and Assigned to are references to the sys_user table.

 

Here is the story of what happens when you try to set a reference field value like any ordinary field. Reference fields are special. I mean, they are REALLY special. A lion disguised as a cat.

 

[Image: a cat seeing a lion in the mirror]

Example of populating a reference field by setValue vs setDisplayValue

I have created two users on sys_user (in this order):

First user: user_name = 2, First name = Mike, Last Name = Yes   (Display value : "Mike Yes") - (e.g. sysid:de5e388d4f1932002c9e4b8d0210c7f3)

[Screenshot: first user record]

Second user: user_name = user_2, First name = "", Last Name = 2  (Display value : "2")

[Screenshot: second user record]

 

 

When executing the following background script :

 

---background script----

// Create a new Incident
var gr = new GlideRecord('incident');

// Set the caller ID by setValue but using the Display value
gr.caller_id = "2";
gr.short_description = "testing sys_user.name and sys_user.user_name matching display value";
var vsid = gr.insert();
// Now, retrieving the record inserted
var gr2 = new GlideRecord('incident');
gr2.get('sys_id',vsid);

gs.print ('inserted ' + gr2.number +' OK. Caller: ' + gr2.caller_id.name + ' Caller sys_id: ' + gr2.caller_id.sys_id);

 

The result: the incident is created with caller_id = "Mike Yes". It matched the incorrect value.

> [0:00:00.228] Script completed in scope global: script

> *** Script: inserted INC0010002 OK. Caller: Mike Yes Caller sys_id: de5e388d4f1932002c9e4b8d0210c7f3

[Screenshot: background script result]

 

On reference fields to sys_user, user_name is used to match the user before the "Display value" is tried. This is because I used setValue (= is setValue):

> gr.caller_id = "2";

Please note gr.<field> = "xxx" is the same as gr.setValue('field', "xxx").

 

The same behavior occurs in email inbound actions, scripts, and data imports (loading data into reference fields). If you are not using the sys_id for reference fields, the appropriate method is:

> gr.setDisplayValue('caller_id', "2");

Use setDisplayValue to assign reference fields or choice values when the data provided is the "Display" value.

 

Here is the correct script for this example:

---background script----

// Create a new record
var gr = new GlideRecord('incident');

// IMPORTANT: As "2" is the Display value of the user "2", setDisplayValue needs to be used.
gr.setDisplayValue('caller_id', "2"); 
// as short_description is just string, setValue is enough.
gr.short_description = "testing sys_user.name and sys_user.user_name matching display value";
var vsid = gr.insert();

// Retrieving the information back to review:
var gr2 = new GlideRecord('incident');
gr2.get('sys_id',vsid);

gs.print ('inserted ' + gr2.number +' OK. Caller: ' + gr2.caller_id.name + ' Caller sys_id: ' + gr2.caller_id.sys_id);

 

The result: the incident is created with caller_id = "2". Yeah!

>[0:00:00.067] Script completed in scope global: script

>*** Script: inserted INC0010003 OK. Caller: 2 Caller sys_id: d1de7cc94f1932002c9e4b8d0210c7ce

It correctly matches the Caller "2".

[Screenshot: incident with the correct caller]

 

When dealing with reference fields, make sure to use setDisplayValue if you are not passing the sys_id, to avoid surprises! This applies when you are creating a script, an inbound action, a data source (to import data), etc.

 

More information here:

Oops!... I did it again. While testing data sources, I was tinkering with some import set tables, changing "string fields" into "reference fields." However, I noticed some extra data got created before the transformation, when loading data into them. I was not expecting this, so I thought I would share my findings. Reference fields are very useful for normalising and organising data. Sometimes reference fields can be too powerful. Bow down before reference fields.


Changing the field 'name' from type String to Reference

In this example I will show you what happens when you change a field from type String to a reference field pointing to the sys_user table. I wanted to have the sys_id in the field instead of the full name. If you need to replace ugly sys_ids with the actual "Display" values, or to normalise the data, or to better relate your data, you may choose to make the field a reference field. This is a very unusual case.

[Screenshot: field holding a sys_id]

Normally, we would expect to import most string values as String types.

[Screenshot: import set fields as String types]

 

However, tables are flexible and you can customize some of those to be reference fields.

[Screenshot: customising a field to be a reference field]

 

Misspellings when importing records

The problem is that with a simple spelling mistake like "Boris Catino X", the data load can create a new record, or set the reference field to NULL.

 

This data is inserted at the "Load all data" stage, before any transform map has executed.

 

Here is the result of my testing:

Import data     | Expected        | Matches Display value | Result                    | Additional notes
beverly campbel | Beverly Campbel | Yes                   | sys_id of matching record | Match is not case sensitive
Billie Cowley   | Billie Cowley   | Yes                   | sys_id of matching record |
Boris Catino X  | Boris Catino    | No                    | sys_id of a new record    | New record created, as the display value does not match. In some cases it can return NULL instead.

 

You want to avoid new records being created by the reference fields themselves; those records can cause confusion. If the records are not handled carefully, the loaded data can also be set to NULL when there is no match on the display value. The same applies whenever data is passed to a reference field.

[Screenshot: user reference fields after the load]

 

Reference fields are very useful if you are importing accurate data, or the sys_ids of the records directly. If the imported data is flaky, keep the fields as Strings. You can then use transform maps to control how the data is processed and when new records are created.

 

More information here:

Web services became very popular soon after they were first introduced, and they are a great tool to exchange information. ServiceNow's implementation of web services is top notch. When encoding international characters, the safest option is Unicode; one of the most popular implementations is UTF-8, which is the one we adopt. If you need to connect to your instance using non-ascii characters, you should read this blog, especially if you are seeing � or question marks (?) in the data received via web services. Luckily, if you are not using UTF-8, the solution is as simple as safe-encoding non-ascii characters before sending them (see below).

 

Utf8webgrowth.png

Beyond encoding, incoming SOAP requests containing characters like "&", "<" or ">" in the data can cause errors. Those characters are reserved because they delimit the XML tags around the data; if they appear unescaped in the data, they interfere with the SOAP message itself, and ServiceNow will respond with "Unable to parse SOAP document".
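As a sketch, those three reserved characters can be replaced with their predefined XML entities before the data is placed in the SOAP body (the function name is mine, not a platform API):

```javascript
// Escape the XML-reserved characters in data destined for a SOAP body.
// "&" must be replaced first, or the "&" in "&lt;"/"&gt;" would be re-escaped.
function escapeReservedXml(data) {
    return String(data)
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;");
}

console.log(escapeReservedXml("if a < b && b > c")); // if a &lt; b &amp;&amp; b &gt; c
```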

Recommendations to safely pass non-ascii characters

I will focus on SOAP web services. They use SOAP messages to exchange data between the SOAP nodes (e.g. your SOAP client and the instance). While the data travels in the message, both parties need to agree on the encoding; to keep it easy, we will use UTF-8 to encode characters.

 

unicode-shield.png

 

If you interact with your instance using a different encoding and your data contains special characters, you may face problems when not using Unicode. However, XML offers the option to safely encode most characters as escape sequences (safe encoding).

 

| SOAP message | Is the message encoded in UTF-8? | Does the message need to be safe encoded? |
| --- | --- | --- |
| Incoming to the instance | Yes | No, it is not necessary. If safe encoded, it will also work. |
| Incoming to the instance | No | Yes, safe encode the data to avoid ? or � characters |
| Outbound from the instance | Yes | Yes, if the target is not UTF-8, to avoid ? or � characters |
| Outbound from the instance | No | Yes, always safe encode the data |

 

Outbound SOAP messages from the instance are always encoded in UTF-8.

 

"Incoming" means SOAP calls toward your instance; "outbound" means SOAP calls from the instance to your endpoint. To safely encode the message, you need to transform any non-ascii character into an XML character reference.

 

 

Example of a message sent to the instance using an Unicode (UTF-8) encoding

In the following example, I will use SoapUI to transfer "comments" containing non-ascii characters and "work_notes" containing the same characters safely encoded. I expect the system to have NO problem with the characters, as both SOAP nodes are using UTF-8.

 

Using SoapUI, it looks like:

SOAP ui.jpg

 

In more detail, the message looks like:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:inc="http://www.service-now.com/incident">
   <soapenv:Header/>
   <soapenv:Body>
      <inc:insert>
         <short_description>Testing with encoding characters characters</short_description>
         <comments>Testing with encoding characters characters -not safe encoded
Basic Latin
! " # $ % &amp; ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; &#x3C; = &#x3E; ? @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _ ` a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~
Latin-1 Supplement
  ¡ ¢ £ ¤ ¥ ¦ § ¨ © ª « ¬  ® ¯ ° ± ² ³ ´ µ ¶ · ¸ ¹ º » ¼ ½ ¾ ¿ À Á Â Ã Ä Å Æ Ç È É Ê Ë Ì Í Î Ï Ð Ñ Ò Ó Ô Õ Ö × Ø Ù Ú Û Ü Ý Þ ß à á â ã ä å æ ç è é ê ë ì í î ï ð ñ ò ó ô õ ö ÷ ø ù ú û ü ý þ ÿ
Latin Extended-A
Ā ā Ă ă Ą ą Ć ć Ĉ ĉ Ċ ċ Č č Ď ď Đ đ Ē ē Ĕ ĕ Ė ė Ę ę Ě ě Ĝ ĝ Ğ ğ Ġ ġ Ģ ģ Ĥ ĥ Ħ ħ Ĩ ĩ Ī ī Ĭ ĭ Į į İ ı IJ ij Ĵ ĵ Ķ ķ ĸ Ĺ ĺ Ļ ļ Ľ ľ Ŀ ŀ Ł ł Ń ń Ņ ņ Ň ň ʼn Ŋ ŋ Ō ō Ŏ ŏ Ő ő Œ œ Ŕ ŕ Ŗ ŗ Ř ř Ś ś Ŝ ŝ Ş ş Š š Ţ ţ Ť ť Ŧ ŧ Ũ ũ Ū ū Ŭ ŭ Ů ů Ű ű Ų ų Ŵ ŵ Ŷ ŷ Ÿ Ź ź Ż ż Ž ž ſ
Latin Extended-B
ƀ Ɓ Ƃ ƃ Ƅ ƅ Ɔ Ƈ ƈ Ɖ Ɗ Ƌ ƌ ƍ Ǝ Ə Ɛ Ƒ ƒ Ɠ Ɣ ƕ Ɩ Ɨ Ƙ ƙ ƚ ƛ Ɯ Ɲ ƞ Ɵ Ơ ơ Ƣ ƣ Ƥ ƥ Ʀ Ƨ ƨ Ʃ ƪ ƫ Ƭ ƭ Ʈ Ư ư Ʊ Ʋ Ƴ ƴ Ƶ ƶ Ʒ Ƹ ƹ ƺ ƻ Ƽ ƽ ƾ ƿ ǀ ǁ ǂ ǃ DŽ Dž dž LJ Lj lj NJ Nj nj Ǎ ǎ Ǐ ǐ Ǒ ǒ Ǔ ǔ Ǖ ǖ Ǘ ǘ Ǚ ǚ Ǜ ǜ ǝ Ǟ ǟ Ǡ ǡ Ǣ ǣ Ǥ ǥ Ǧ ǧ Ǩ ǩ Ǫ ǫ Ǭ ǭ Ǯ ǯ ǰ DZ Dz dz Ǵ ǵ Ƕ Ƿ Ǹ ǹ Ǻ ǻ Ǽ ǽ Ǿ ǿ ...</comments>
         <work_notes>Testing with encoding characters characters -safe encoded
Basic Latin
! &#x22; # $ % &#x26; &#x27; ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; &#x3C; = &#x3E; ? @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _ &#x60; a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~
Latin-1 Supplement
  &#xA1; &#xA2; &#xA3; &#xA4; &#xA5; &#xA6; &#xA7; &#xA8; &#xA9; &#xAA; &#xAB; &#xAC; &#xAD; &#xAE; &#xAF; &#xB0; &#xB1; &#xB2; &#xB3; &#xB4; &#xB5; &#xB6; &#xB7; &#xB8; &#xB9; &#xBA; &#xBB; &#xBC; &#xBD; &#xBE; &#xBF; &#xC0; &#xC1; &#xC2; &#xC3; &#xC4; &#xC5; &#xC6; &#xC7; &#xC8; &#xC9; &#xCA; &#xCB; &#xCC; &#xCD; &#xCE; &#xCF; &#xD0; &#xD1; &#xD2; &#xD3; &#xD4; &#xD5; &#xD6; &#xD7; &#xD8; &#xD9; &#xDA; &#xDB; &#xDC; &#xDD; &#xDE; &#xDF; &#xE0; &#xE1; &#xE2; &#xE3; &#xE4; &#xE5; &#xE6; &#xE7; &#xE8; &#xE9; &#xEA; &#xEB; &#xEC; &#xED; &#xEE; &#xEF; &#xF0; &#xF1; &#xF2; &#xF3; &#xF4; &#xF5; &#xF6; &#xF7; &#xF8; &#xF9; &#xFA; &#xFB; &#xFC; &#xFD; &#xFE; &#xFF;
Latin Extended-A
&#x100; &#x101; &#x102; &#x103; &#x104; &#x105; &#x106; &#x107; &#x108; &#x109; &#x10A; &#x10B; &#x10C; &#x10D; &#x10E; &#x10F; &#x110; &#x111; &#x112; &#x113; &#x114; &#x115; &#x116; &#x117; &#x118; &#x119; &#x11A; &#x11B; &#x11C; &#x11D; &#x11E; &#x11F; &#x120; &#x121; &#x122; &#x123; &#x124; &#x125; &#x126; &#x127; &#x128; &#x129; &#x12A; &#x12B; &#x12C; &#x12D; &#x12E; &#x12F; &#x130; &#x131; &#x132; &#x133; &#x134; &#x135; &#x136; &#x137; &#x138; &#x139; &#x13A; &#x13B; &#x13C; &#x13D; &#x13E; &#x13F; &#x140; &#x141; &#x142; &#x143; &#x144; &#x145; &#x146; &#x147; &#x148; &#x149; &#x14A; &#x14B; &#x14C; &#x14D; &#x14E; &#x14F; &#x150; &#x151; &#x152; &#x153; &#x154; &#x155; &#x156; &#x157; &#x158; &#x159; &#x15A; &#x15B; &#x15C; &#x15D; &#x15E; &#x15F; &#x160; &#x161; &#x162; &#x163; &#x164; &#x165; &#x166; &#x167; &#x168; &#x169; &#x16A; &#x16B; &#x16C; &#x16D; &#x16E; &#x16F; &#x170; &#x171; &#x172; &#x173; &#x174; &#x175; &#x176; &#x177; &#x178; &#x179; &#x17A; &#x17B; &#x17C; &#x17D; &#x17E; &#x17F;
Latin Extended-B
&#x180; &#x181; &#x182; &#x183; &#x184; &#x185; &#x186; &#x187; &#x188; &#x189; &#x18A; &#x18B; &#x18C; &#x18D; &#x18E; &#x18F; &#x190; &#x191; &#x192; &#x193; &#x194; &#x195; &#x196; &#x197; &#x198; &#x199; &#x19A; &#x19B; &#x19C; &#x19D; &#x19E; &#x19F; &#x1A0; &#x1A1; &#x1A2; &#x1A3; &#x1A4; &#x1A5; &#x1A6; &#x1A7; &#x1A8; &#x1A9; &#x1AA; &#x1AB; &#x1AC; &#x1AD; &#x1AE; &#x1AF; &#x1B0; &#x1B1; &#x1B2; &#x1B3; &#x1B4; &#x1B5; &#x1B6; &#x1B7; &#x1B8; &#x1B9; &#x1BA; &#x1BB; &#x1BC; &#x1BD; &#x1BE; &#x1BF; &#x1C0; &#x1C1; &#x1C2; &#x1C3; &#x1C4; &#x1C5; &#x1C6; &#x1C7; &#x1C8; &#x1C9; &#x1CA; &#x1CB; &#x1CC; &#x1CD; &#x1CE; &#x1CF; &#x1D0; &#x1D1; &#x1D2; &#x1D3; &#x1D4; &#x1D5; &#x1D6; &#x1D7; &#x1D8; &#x1D9; &#x1DA; &#x1DB; &#x1DC; &#x1DD; &#x1DE; &#x1DF; &#x1E0; &#x1E1; &#x1E2; &#x1E3; &#x1E4; &#x1E5; &#x1E6; &#x1E7; &#x1E8; &#x1E9; &#x1EA; &#x1EB; &#x1EC; &#x1ED; &#x1EE; &#x1EF; &#x1F0; &#x1F1; &#x1F2; &#x1F3; &#x1F4; &#x1F5; &#x1F6; &#x1F7; &#x1F8; &#x1F9; &#x1FA; &#x1FB; &#x1FC; &#x1FD; &#x1FE; &#x1FF; 
</work_notes> 
      </inc:insert>
   </soapenv:Body>
</soapenv:Envelope>

 

Once processed into the target table, the characters are correctly displayed.

UTF 8 ENCODED.jpg

This shows that the data is processed without any problems.

 

Example of a message sent to the instance using a non-Unicode (ISO-8859-1) encoding

As in the previous example, I will use SoapUI to transfer "comments" containing non-ascii characters and "work_notes" containing the same characters safely encoded. This time I will encode in 'iso-8859-1'. I expect the system will try to match the characters against UTF-8.

 

Using SoapUI, it looks like:

non unicode 8 soap ui.jpg

In more detail, the message looks like:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:inc="http://www.service-now.com/incident">
   <soapenv:Header/>
   <soapenv:Body>
      <inc:insert>
         <short_description>Testing with encoding characters characters</short_description>
         <comments>Testing with encoding characters characters -not safe encoded
Basic Latin
 ! " # $ % ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ;  @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _ ` a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~
Latin Extended-A
A a Ă ă Ą ą Ć ć C c C c Č č Ď ď Đ đ E e E e E e Ę ę Ě ě G g G g G g G g H h H h I i I i I i I i I i J j K k Ĺ ĺ L l Ľ ľ Ł ł Ń ń N n Ň ň O o O o Ő ő O o Ŕ ŕ R r Ř ř Ś ś S s Ş ş Š š Ţ ţ Ť ť T t U u U u U u Ů ů Ű ű U u W w Y y Y Ź ź Ż ż Ž ž 
Latin Extended-B
b Đ F f I l O O o t T U u z | ! A a I i O o U u U u U u U u U u A a G g G g K k O o O o j
</comments>
         <work_notes>Testing with encoding characters characters -safe encoded
Basic Latin
! &#x22; # $ % &#x26; &#x27; ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; &#x3C; = &#x3E; ? @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _ &#x60; a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~
Latin-1 Supplement
  &#xA1; &#xA2; &#xA3; &#xA4; &#xA5; &#xA6; &#xA7; &#xA8; &#xA9; &#xAA; &#xAB; &#xAC; &#xAD; &#xAE; &#xAF; &#xB0; &#xB1; &#xB2; &#xB3; &#xB4; &#xB5; &#xB6; &#xB7; &#xB8; &#xB9; &#xBA; &#xBB; &#xBC; &#xBD; &#xBE; &#xBF; &#xC0; &#xC1; &#xC2; &#xC3; &#xC4; &#xC5; &#xC6; &#xC7; &#xC8; &#xC9; &#xCA; &#xCB; &#xCC; &#xCD; &#xCE; &#xCF; &#xD0; &#xD1; &#xD2; &#xD3; &#xD4; &#xD5; &#xD6; &#xD7; &#xD8; &#xD9; &#xDA; &#xDB; &#xDC; &#xDD; &#xDE; &#xDF; &#xE0; &#xE1; &#xE2; &#xE3; &#xE4; &#xE5; &#xE6; &#xE7; &#xE8; &#xE9; &#xEA; &#xEB; &#xEC; &#xED; &#xEE; &#xEF; &#xF0; &#xF1; &#xF2; &#xF3; &#xF4; &#xF5; &#xF6; &#xF7; &#xF8; &#xF9; &#xFA; &#xFB; &#xFC; &#xFD; &#xFE; &#xFF;
Latin Extended-A
&#x100; &#x101; &#x102; &#x103; &#x104; &#x105; &#x106; &#x107; &#x108; &#x109; &#x10A; &#x10B; &#x10C; &#x10D; &#x10E; &#x10F; &#x110; &#x111; &#x112; &#x113; &#x114; &#x115; &#x116; &#x117; &#x118; &#x119; &#x11A; &#x11B; &#x11C; &#x11D; &#x11E; &#x11F; &#x120; &#x121; &#x122; &#x123; &#x124; &#x125; &#x126; &#x127; &#x128; &#x129; &#x12A; &#x12B; &#x12C; &#x12D; &#x12E; &#x12F; &#x130; &#x131; &#x132; &#x133; &#x134; &#x135; &#x136; &#x137; &#x138; &#x139; &#x13A; &#x13B; &#x13C; &#x13D; &#x13E; &#x13F; &#x140; &#x141; &#x142; &#x143; &#x144; &#x145; &#x146; &#x147; &#x148; &#x149; &#x14A; &#x14B; &#x14C; &#x14D; &#x14E; &#x14F; &#x150; &#x151; &#x152; &#x153; &#x154; &#x155; &#x156; &#x157; &#x158; &#x159; &#x15A; &#x15B; &#x15C; &#x15D; &#x15E; &#x15F; &#x160; &#x161; &#x162; &#x163; &#x164; &#x165; &#x166; &#x167; &#x168; &#x169; &#x16A; &#x16B; &#x16C; &#x16D; &#x16E; &#x16F; &#x170; &#x171; &#x172; &#x173; &#x174; &#x175; &#x176; &#x177; &#x178; &#x179; &#x17A; &#x17B; &#x17C; &#x17D; &#x17E; &#x17F;
Latin Extended-B
&#x180; &#x181; &#x182; &#x183; &#x184; &#x185; &#x186; &#x187; &#x188; &#x189; &#x18A; &#x18B; &#x18C; &#x18D; &#x18E; &#x18F; &#x190; &#x191; &#x192; &#x193; &#x194; &#x195; &#x196; &#x197; &#x198; &#x199; &#x19A; &#x19B; &#x19C; &#x19D; &#x19E; &#x19F; &#x1A0; &#x1A1; &#x1A2; &#x1A3; &#x1A4; &#x1A5; &#x1A6; &#x1A7; &#x1A8; &#x1A9; &#x1AA; &#x1AB; &#x1AC; &#x1AD; &#x1AE; &#x1AF; &#x1B0; &#x1B1; &#x1B2; &#x1B3; &#x1B4; &#x1B5; &#x1B6; &#x1B7; &#x1B8; &#x1B9; &#x1BA; &#x1BB; &#x1BC; &#x1BD; &#x1BE; &#x1BF; &#x1C0; &#x1C1; &#x1C2; &#x1C3; &#x1C4; &#x1C5; &#x1C6; &#x1C7; &#x1C8; &#x1C9; &#x1CA; &#x1CB; &#x1CC; &#x1CD; &#x1CE; &#x1CF; &#x1D0; &#x1D1; &#x1D2; &#x1D3; &#x1D4; &#x1D5; &#x1D6; &#x1D7; &#x1D8; &#x1D9; &#x1DA; &#x1DB; &#x1DC; &#x1DD; &#x1DE; &#x1DF; &#x1E0; &#x1E1; &#x1E2; &#x1E3; &#x1E4; &#x1E5; &#x1E6; &#x1E7; &#x1E8; &#x1E9; &#x1EA; &#x1EB; &#x1EC; &#x1ED; &#x1EE; &#x1EF; &#x1F0; &#x1F1; &#x1F2; &#x1F3; &#x1F4; &#x1F5; &#x1F6; &#x1F7; &#x1F8; &#x1F9; &#x1FA; &#x1FB; &#x1FC; &#x1FD; &#x1FE; &#x1FF; 
</work_notes> 
      </inc:insert>
   </soapenv:Body>
</soapenv:Envelope>

 

Below, you can see that some characters were translated incorrectly. However, with XML encoded characters, you can still safely send UTF-8 characters.

incorrectly translated UTF .jpg

Non-ascii characters can be transferred incorrectly when the encoding is not UTF-8; using XML encoded characters, you can transfer them safely.

 

How to safely encode XML data

There are several ways to encode the data before it is transferred.

Here is a simple background script function that encodes data:

// Simple XML data encoding function: escapes non-ascii (U+00A0-U+2666) and
// reserved (<, >, &) characters as numeric character references.
// Do not run it twice over the same data, or existing references will be
// double-escaped (their "&" is escaped again).
function escapeXMLEntities(xmldata) {
    return xmldata.replace(/[\u00A0-\u2666<>&]/g, function (a) {
        return "&#" + a.charCodeAt(0) + ";";
    });
}

var str = "A a Ă ă Ą ą Ć ć C c C c Č č Ď ď Đ đ E e E e E e Ę ę Ě ě G g G g G g G g H h H h I i I i I i I i I i J j K k Ĺ ĺ L l Ľ ľ Ł ł Ń ń N n Ň ň O o O o Ő ő O o Ŕ ŕ R r Ř ř Ś ś S s Ş ş Š š Ţ ţ Ť ť T t U u U u U u Ů ů Ű ű U u W w Y y Y Ź ź Ż ż Ž ž";
gs.print(escapeXMLEntities(str));

 

Result:

Script completed in scope global: script

*** Script: A a &#258; &#259; &#260; &#261; &#262; &#263; C c C c &#268; &#269; &#270; &#271; &#272; &#273; E e E e E e &#280; &#281; &#282; &#283; G g G g G g G g H h H h I i I i I i I i I i J j K k &#313; &#314; L l &#317; &#318; &#321; &#322; &#323; &#324; N n &#327; &#328; O o O o &#336; &#337; O o &#340; &#341; R r &#344; &#345; &#346; &#347; S s &#350; &#351; &#352; &#353; &#354; &#355; &#356; &#357; T t U u U u U u &#366; &#367; &#368; &#369; U u W w Y y Y &#377; &#378; &#379; &#380; &#381; &#382;
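To verify the round trip, a decoder for the decimal references produced above can be sketched as follows (this helper is mine, not part of the platform):

```javascript
// Decode decimal XML character references (e.g. "&#258;") back to characters
function unescapeXMLEntities(xmldata) {
    return xmldata.replace(/&#(\d+);/g, function (match, code) {
        return String.fromCharCode(parseInt(code, 10));
    });
}

console.log(unescapeXMLEntities("&#258; &#259;")); // Ă ă
```

Running the escaped output through this decoder returns the original string, which is exactly what the receiving SOAP node's XML parser does for you.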

 

When using SOAP messages, ensure you are using UTF-8 to transfer the data, or ensure non-ascii characters are escaped safely into XML character references. This technique can be used in many other situations too, and luckily, programmatically, it is not a big challenge.

 

More information can be found here:

If you are planning to import thousands of records into your instance and you have a complex coalesce key to update data, this post is for you. Easy import, data load, and import sets are wonderfully designed to import data into your instance.

 

ServiceNow uses two steps to import data:

  1. Loading
  2. Transforming

 

Data import is crafted to a very high specification: the loading happens on the Data Source, while the transforming happens on the Transformation Map. Each execution is controlled by an import set that displays the history of the data imported. Transformation maps can have coalesce fields (keys to avoid duplicates) that allow them to update records.

 

I will focus on showing an example of a transformation map with a "complex key" (more than one field as coalesce) to update the target records, which also avoids the duplicates that reference fields can introduce (see below) and makes just one query, instead of the multiple internal queries triggered when selecting multiple coalesce fields.

 

choose_a_door.png

 

On a transformation map, one or several 'coalesce' fields define when a record is updated. While transformation maps are flexible and configurable, some transformations with complex "keys" are better served by a "field map script" as coalesce (aka "conditional coalesce").

 

A few notes on coalesce fields:

  • Coalesce field searches benefit from indexes on the target fields they are mapped to.
  • sys_id is indexed on all tables, making searches faster when it is used in mappings.
  • Using a reference field as coalesce can cause duplicates if the referenced data has duplicates (see below).
  • Setting multiple fields as coalesce can cause multiple queries on the target data, one per coalesce field, increasing import times.
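As an illustration only (not the platform's actual internals), each populated coalesce field contributes one condition to the lookup, while empty source values would otherwise add field=NULL conditions:

```javascript
// Sketch: build the lookup condition from the coalesce fields that have values.
// Field names are hypothetical; "^" joins conditions as in an encoded query.
function buildCoalesceQuery(row, coalesceFields) {
    return coalesceFields
        .filter(function (f) { return row[f] !== undefined && row[f] !== ""; })
        .map(function (f) { return f + "=" + row[f]; })
        .join("^");
}

var q = buildCoalesceQuery(
    { stockroom: "S1", model: "M1", parent_stockroom: "" },
    ["stockroom", "model", "parent_stockroom"]
);
console.log(q); // stockroom=S1^model=M1
```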

 

In my example, I will use the alm_stock_rule table. To the untrained eye, it appears to contain only strings and integers.

alm_stock_rule.png

A closer look at the alm_stock_rule table shows that the Model, Parent stockroom, and Stockroom fields are references to data in other tables (reference fields). A reference field stores the sys_id of each referenced record in the database, but the sys_id is not shown; the reference field shows the display value.

(empty) or blank does not mean the reference field is empty. It could be that the referenced record's display value is itself empty or blank. Always validate this by checking whether the record contains a sys_id value, e.g. review the XML data of the record.

stock rule table.png

 

Coalesce using one-to-one field mapping on the transformation map

On reference fields, you can import data using the sys_id of the target 'referenced' record. Most times, however, you would rather import data into "alm_stock_rule" using the display value to match the existing records.

import data.jpg

 

For this example, we will use "Stockroom", "Model", "Parent stockroom", and "Restocking option" as the key for updates.

 

On this transformation map, we define the "Stockroom", "Model", "Parent stockroom", and "Restocking option" fields with coalesce "true".

transform map.png

duplicated reference.jpg

Here is a list of pros and cons I've generated on using one-to-one field mapping on the transformation map for the coalesce fields:

| Pros | Cons |
| --- | --- |
| It is very configurable per field | You have no control over the final searches performed to match the coalesce field values to the target data, so more than one search could be triggered; in the worst case, more than one per coalesce field. |
| It is easy to understand | If a coalesce field's source data is empty, it can trigger a query for (field=NULL) plus the remaining coalesce fields, which is unlikely to use the indexes. |
| No scripting is required | It depends on the field mapping options available. |
| You can map more than the display value of the reference field by using "referenced value field name" | If a coalesce field holds very limited values (e.g. a choice field) and the target table is very large, the query could be slow. For example, if you add impact to your coalesce fields and your target table is incident, the query could be "select ... from incident where impact = 1", which is large on a big incident table. |
| It is easier to see which fields on the target table require indexes (if the data is unique enough) | It could cause duplicates if reference fields are used as coalesce (see below). |

Duplicate records could appear if reference fields are used as coalesce.

 

Notes on coalesce on reference fields

In this example, the model we are importing is "APC 42U 3100 SP2 NetShelter." I have created two records with that name on the referenced table (not the target table itself, but the 'Product Model' table referenced by 'model'). Because the coalesce field matches two records, the import creates a new, unwanted record instead of updating the existing one. This is a common problem, as not all tables hold unique values.

import data coalesce.jpg

On the import set, those records will show as State = Inserted when they should show Ignored or Updated.

duplicate model.jpg

Using a reference field as coalesce can cause duplicates if the referenced data has duplicates

reference coalesce.jpg

 

Coalesce on field map scripts

An alternative coalesce is a "Script" field mapping to the target "sys_id".

For this example, I will explain a technique of creating a single coalesce field using a field map script that returns the sys_id of the target. As sys_id is already indexed, the cost of the final search with the script result as coalesce is minimal. Use this when you want more flexibility over the final search generated to update your data.

 

When using a field map script, the previous example transformation map would look as follow:

field map script.jpg

Then set the field map script to match the sys_id on the target and make it the ONLY field with coalesce = true.

coalesce true.png

On the field map script, add the script to find the correct target record:

target record.png

 

Here is the script I used to find the target record:

 

answer = function(a) {  
     var list_to_compare=[["u_stockroom","stockroom.display_name"],  
          ["u_parent_stockroom","parent_stockroom.display_name"],  
          ["u_restocking_option","restocking_option"],  
          ["u_model","model.display_name"]];  
     return findmatch(list_to_compare, source, map.target_table,false,true);  
 }(source);  

/* Function findmatch is used on transformation maps to find a match with multiple coalesce fields

vlist: list of fields to compare, Array = [["source_field","target_field"],...]  Target field allows dot-walking.
vsource: source record
vtarget: target table name
nomatchcreate: true will create a record if there is no match
debugon: true will log information about the matching results

Returns the sys_id of the target record, or null on error or if nomatchcreate = false and no match is found.

"Coalesce empty fields" needs to be OFF, so that on a null answer (e.g. on error) the insert is cancelled
*/
function findmatch(vlist, vsource, vtarget, nomatchcreate, debugon) {
    try {
        vtarget = new GlideRecord(vtarget + "");
        // Add one query condition per coalesce source field that has a value
        for (var h = vlist.length, c = 0; c < h; c++) {
            if (vsource[vlist[c][0]].hasValue() && vsource.isValidField(vlist[c][0]))
                vtarget.addQuery(vlist[c][1], "=", vsource[vlist[c][0]].getDisplayValue());
        }
        vtarget.setLimit(1);
        vtarget.query();
        var d;
        if (vtarget.next()) {
            // Match found: return its sys_id
            d = vtarget.sys_id;
            if (debugon) {
                log.info("source: " + vsource.sys_id + " - record match: " + d);
                vsource.sys_import_state_comment = "record match: " + d;
            }
        } else if (nomatchcreate) {
            // No match found: generate a sys_id so a new record is inserted
            d = gs.generateGUID();
        } else {
            // No match and no insert wanted: return null so the row is ignored
            d = null;
            if (debugon) {
                log.info("source: " + vsource.sys_id + " - record match: None");
                vsource.sys_import_state_comment = "record match: None";
            }
        }
        return d;
    } catch (f) {
        log.error("script error: " + f);
        vsource.sys_import_state_comment = "ERROR: " + f;
        return null;
    }
}
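The matching logic can be understood in isolation with a plain-JavaScript analogue that replaces GlideRecord with an in-memory array (the records and field names below are made up for illustration):

```javascript
// In-memory stand-in for the target table (hypothetical records)
var targetRows = [
    { sys_id: "111", stockroom: "Lima",  model: "APC 42U 3100 SP2 NetShelter" },
    { sys_id: "222", stockroom: "Quito", model: "APC 42U 3100 SP2 NetShelter" }
];

// Same idea as findmatch: AND together every populated coalesce pair,
// return the sys_id of the first record matching ALL of them, or null.
function findMatchSysId(pairs, sourceRow, records) {
    for (var r = 0; r < records.length; r++) {
        var matchesAll = true;
        for (var p = 0; p < pairs.length; p++) {
            var sourceField = pairs[p][0], targetField = pairs[p][1];
            if (sourceRow[sourceField] && records[r][targetField] !== sourceRow[sourceField]) {
                matchesAll = false;
            }
        }
        if (matchesAll) return records[r].sys_id;
    }
    return null;
}

var match = findMatchSysId(
    [["u_stockroom", "stockroom"], ["u_model", "model"]],
    { u_stockroom: "Quito", u_model: "APC 42U 3100 SP2 NetShelter" },
    targetRows
);
console.log(match); // "222": the duplicated model alone no longer decides the match
```

Because all coalesce pairs are combined into one lookup, the duplicated model value cannot cause a false match on its own.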

 

The script gives you the flexibility to define the search that best meets your business requirements.

Ensure "Coalesce empty fields" is unchecked (OFF). If an error happens in the query or field map script, the script returns null, and the record is then ignored rather than treated as matching a null coalesce value.

 

You can see this example centers the updates on only one query, built from the values available.

After opening the data source and clicking "Load All Records", then transforming them, the import set data will show as follows:

load all records.png

On the import set, the Import Set Rows tab shows that the records match the correct values this time.

import set records.jpg

The import inserts the new record and updates the existing one; even when the referenced model has duplicated data, the field map script matches the right record.

duplicated data.png

Using the field map script, we know it will execute only ONE search on the target table, and it allows you to define any query that uniquely identifies your target record, giving you flexibility and increasing performance on updates.

 

I tested this on Helsinki, using Google Chrome as the browser.

 

 

For more information on transforming your data see:

Video demos:

 

Importing and Exporting data:

 

Transforming your data:

Validating the order of execution for transform map scripts

In ServiceNow, we use scripts to extend your instance beyond standard configurations. Scripts like business rules or client scripts are written in JavaScript, a well-known and popular language that has benefited from significant investment over the years. JavaScript is a high-level, dynamic, untyped, interpreted programming language with a Java-like feel. As powerful as it is, debugging it is sometimes complex. To make it less complex, you can simplify your code using a code optimizer. Code optimization is any method of code modification that improves code quality and efficiency: a program may be optimized so that it becomes smaller, consumes less memory, executes more rapidly, or performs fewer input/output operations. Our business rule and client script editors do have a script syntax analyzer, but you can use the Google Closure Compiler as an additional tool to simplify your scripts.

 

chromespeedo.png

 

Here are 3 use case examples of how to utilize the Closure Compiler debugger tool:

  • Validating if the code syntax is correct, without having to run it
  • Simplifying logical operations (e.g complex conditions, etc)
  • Simplifying complex functions

 

Validating if the code syntax is correct, without having to run it

If you find complex code that makes some sense, but you suspect it is incorrect, run it through the JavaScript optimizer. This is similar to our own syntax validation on the script editor. For example, let's check our password validation code for errors and inaccuracies using the debugger.

 

Here is an example of password validation:

gs.info("good password: " + CheckPassWd8CharSAnd3ofUpperLowerNumberNonAlpha("This4isAgoodPassw0rd!"));
gs.info("bad password: " + CheckPassWd8CharSAnd3ofUpperLowerNumberNonAlpha("thisisbadpassword"));

// The password must be 8 characters long (this I can do :-)).
// The password must then contain characters from at least 3 of the following 4 rules:
// * Upper case * Lower case * Numbers * Non-alpha numeric
function CheckPassWd8CharSAnd3ofUpperLowerNumberNonAlpha(passwordtext) {
    if (passwordtext.length < 8) {
        return false;
    } else {
        var hasUpperCase = /[A-Z]/.test(passwordtext);
        var hasLowerCase = /[a-z]/.test(passwor dtext);
        var hasNumbers = /\d/.test(passwordtext);
        var hasNonalphas = /\W/.test(passwordtext);
        if (hasUpperCase + hasLowerCase + hasNumbers + hasNonalphas < 3) {
            return false;
        } else {
            return true;
        }
        // It is missing a "}"
    };

 

We can run the code through the Closure Compiler to find the number of errors and where they may be. This allows us to ensure the JavaScript we run is correct and functioning, so we do not need to investigate once it is live. When we input the password validation code in the debugger, it returns:

closure-compiler.jpg

 

Once you add the missing '}' and remove the extra space in "passwor dtext", it gets simplified as:

 

gs.info("good password: " + CheckPassWd8CharSAnd3ofUpperLowerNumberNonAlpha("This4isAgoodPassw0rd!"));
gs.info("bad password: " + CheckPassWd8CharSAnd3ofUpperLowerNumberNonAlpha("thisisbadpassword"));

function CheckPassWd8CharSAnd3ofUpperLowerNumberNonAlpha(a) {
    if (8 > a.length) {
        return !1;
    }
    var b = /[A-Z]/.test(a),
        c = /[a-z]/.test(a),
        d = /\d/.test(a);
    a = /\W/.test(a);
    return 3 > b + c + d + a ? !1 : !0;
};

---------------------

It reuses the input variable 'a'. This is not great coding practice, but thumbs up on recycling 'a'.

When executed, it returns:

*** Script: good password: true

*** Script: bad password: false
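The 3-of-4 rule works because JavaScript coerces booleans to 0/1 under the + operator. A minimal standalone version of the fixed validator (function name mine):

```javascript
// Password must be 8+ chars and satisfy at least 3 of the 4 character classes.
// true/false coerce to 1/0 when added, so the sum counts satisfied rules.
function strongEnough(pw) {
    if (pw.length < 8) return false;
    var rulesMet = /[A-Z]/.test(pw) + /[a-z]/.test(pw) + /\d/.test(pw) + /\W/.test(pw);
    return rulesMet >= 3;
}

console.log(strongEnough("This4isAgoodPassw0rd!")); // true
console.log(strongEnough("thisisbadpassword"));     // false
```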

 

 

Simplifying logical operations (e.g complex conditions, etc)

There is a trick to simplifying logical operations: using logical optimisers. You suspect right, it is not one click; it is four (4) steps. You can simplify a logical operation whenever you see redundancy or repetition.

 

  1. The first step is to replace the expression components into the letters A,B,C,..., on the elements that could be redundant. For example:
    function ValidateInput(letter1, letter2, letter3, letter4, letter5, letter6) {
        if ((letter1 == 'a' && letter2 == 'b') ||
            (letter1 == 'a' && letter2 == 'b' && letter3 == 'x') ||
            (letter1 == 'a' && letter2 == 'b' && letter3 == 'x' && letter4 == 'y') ||
            (letter1 == 'a' && letter2 == 'b' && letter3 == 'x' && letter4 == 'y' && letter5 == 'z') ||
            (letter1 == 'a' && letter2 == 'b' && letter3 == 'x' && letter4 == 'y' && letter5 == 'z' && letter6 == 'm')) {
            return true;
        } else {
            return false;
        }
    }
    

     

    To simplify the logic, we replace <letter1=='a'> with A, <letter2=='b'> with B, and so on. A real-life example may be a bit more complex, but the goal is the same: break the expression into units ready for the logical optimiser.

     

    The function would look like:

      if (A && B || A && B && C || A && B && C && D || A && B && C && D && E || A && B && C && D && E && F)
    Use a logic optimiser (e.g. wolframalpha) to remove redundancy.
    
  2. When I use wolframalpha to simplify the logical expression:

    "A && B || A && B && C || A && B && C && D || A && B && C && D && E || A && B && C && D && E && F" at www.wolframalpha.com

    logical expression.jpg

     

    It returns "ESOP | A AND B", which translates in JavaScript to "A && B"

    which translates back to <(letter1=='a' && letter2=='b')>

    translate javascript.jpg

     

  3. Replace the expression components back. Reducing the original function, the example looks like:
    function ValidateInput(letter1, letter2) {
        if ((letter1 == 'a' && letter2 == 'b')) {
            return true;
        } else {
            return false;
        }
    };
    

     

  4. Run the Google optimizer on the final code. The resulting function will be:
    function ValidateInput(a, b) {
        return "a" == a && "b" == b ? !0 : !1;
    };
    

    closure complier.jpg

The final code in step 4 looks much easier to debug than the original in step 1. The code is much cleaner, and testing is easier as you have fewer variables to worry about.
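Before shipping the reduced function, it is worth spot-checking that it agrees with the original. A quick sketch:

```javascript
// The original, redundant condition from step 1
function validateOriginal(l1, l2, l3, l4, l5, l6) {
    return (l1 == 'a' && l2 == 'b') ||
           (l1 == 'a' && l2 == 'b' && l3 == 'x') ||
           (l1 == 'a' && l2 == 'b' && l3 == 'x' && l4 == 'y') ||
           (l1 == 'a' && l2 == 'b' && l3 == 'x' && l4 == 'y' && l5 == 'z') ||
           (l1 == 'a' && l2 == 'b' && l3 == 'x' && l4 == 'y' && l5 == 'z' && l6 == 'm');
}

// The reduced condition from step 4
function validateSimplified(l1, l2) {
    return l1 == 'a' && l2 == 'b';
}

// Compare both over a few representative inputs
var samples = [
    ['a', 'b', 'x', 'y', 'z', 'm'],
    ['a', 'b', 'q', 'q', 'q', 'q'],
    ['a', 'q', 'x', 'y', 'z', 'm'],
    ['q', 'b', 'x', 'y', 'z', 'm']
];
var allAgree = samples.every(function (s) {
    return validateOriginal.apply(null, s) === validateSimplified(s[0], s[1]);
});
console.log(allAgree); // true
```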

 

Simplifying complex functions

Another way to simplify your JavaScript code is to remove the parts of the code that are just not used. Removing excess code makes the code more readable.

 

For example:

This code produces a memory leak

var theThing = null;
var replaceThing = function() {
    var priorThing = theThing;
    var unused = function() {
        if (priorThing) { /* This part causes a MEMORY LEAK because theThing can't be released */
            gs.info("hi");
        }
    };
    theThing = {
        longStr: (new Array(1E6)).join("*")
        , someMethod: function() {
            gs.info(someMessage);
        }
    };
};
for (var i = 0; i < 5; i++) {
    gs.info("testing " + i);
    replaceThing();
    gs.sleep(1E3);
};

 

When running it through the Google Closure Compiler, it produces:

for (var theThing = null, replaceThing = function() {
        theThing = {
            longStr: Array(1E6).join("*")
            , someMethod: function() {
                gs.info(someMessage);
            }
        };
    }, i = 0; 5 > i; i++) {
    gs.info("testing " + i), replaceThing(), gs.sleep(1E3);
};

 

The results when debugged in the Closure Compiler will come out like this:

reduce code.jpg

 

This code is much easier to debug after being run through the optimizer, and it makes clear which unused parts of the code have disappeared. For debugging purposes, you can investigate the redundant code that was causing the problem, or review the remaining code for other issues. The reason for the memory leak is found here.
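Once the optimizer exposes the culprit, one hedged fix is simply not to capture the previous object in a long-lived closure at all (this variant is my sketch, not the compiler's output):

```javascript
// Leak-free variant: the unused closure over priorThing is gone, so each
// replaced object becomes collectable as soon as theThing is reassigned.
var theThing = null;
function replaceThing() {
    theThing = {
        longStr: new Array(1000).join("*"),
        someMethod: function () { return "hi"; }
    };
}

replaceThing();
console.log(theThing.someMethod());   // "hi"
console.log(theThing.longStr.length); // 999
```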

"There is not one right way to ride a wave."

~ Jamie O'Brien

 

Debugging code is like surfing: there are several ways to ride a wave. One of them is to take a look from a different angle. Code sometimes tells the story, but sometimes it traps you in it. Use optimiser tools to simplify, to help you focus on what matters, and sometimes to get straight to the point.

 

More information here:

Closure Compiler  |  Google Developers

Wolfram|Alpha: Computational Knowledge Engine

The 10 Most Common Mistakes JavaScript Developers Make

Docs -Script syntax error checking

Performance considerations when using GlideRecord

Community Code Snippets: Articles List to Date

Mini-Lab: Using Ajax and JSON to Send Objects to the Server

ServiceNow Scripting 101: Two Methods for Code Development

Community Code Snippets - GlideRecord to Object Array Conversion

Community Code Snippets - Four Ways To Do An OR Condition

My other blogs

The "Load All Records" feature is a well-designed, two-step way to import data into the instance: the first step is loading, and the second is transforming. Loading is configured on Data Sources, while transforming is handled by Transform maps. Each execution is controlled by an Import Set that records the history of the data imported.

 

testing1.png

 

On the Data Source, the Test Load 20 Records feature can lead to confusion. Clicking "Test Load 20 Records", as opposed to "Load All Records", will not actually transform your data; it only loads the first 20 records instead of all of them.

load 20 records.jpg

Still, this feature is far from useless. There are cases where you will want to test-load the first 20 records of your data. On the data source, Test Load 20 Records allows you to:

 

  • Create the import set table if it does not exist. The import set table is the staging table the data is loaded into.
  • Map the source field names when creating a transform map. The feature does not create the transform map itself, but because it creates the import set table, the system can retrieve the field names for mapping.

 

The main use case of the Test Load 20 Records feature is to validate that the load works correctly.

Use "Test Load 20 Records" to validate if the data load works correctly and you have set the correct configuration (e.g. passwords)

 

There are a few things that Test Load 20 Records DOES NOT permit.

  • You will not be able to transform the 20 records loaded. Data loaded with this test will not be part of the transformation.
  • You will not be able to automatically import and transform. Data is only loaded; it is NOT transformed. To automatically import and transform, you need to create a scheduled job to load the data, which will also transform it. To manually transform your data, go to the transform map, then click "Transform".

"Test Load 20 Records" DOES NOT transform the data after it is loaded.

Below is the image of a transformation map:

LDAP_test_20_transform.jpg

Test Load 20 Records vs. Load All Records

When loading and transforming data, consider whether you just want to test that the load works, or commit to the full import. If your goal is to validate that the data loads correctly, "Test Load 20 Records" is the way to go. If you want to go through the full process and import and transform the data, use "Load All Records."

 


ServiceNow provides a practical, well-presented email relay system built on a fast cloud SMTP server, and it can also integrate with external SMTP servers, making it suitable for most business needs. You can reduce the number of invalid emails and speed up delivery of your emails by validating the recipients. For flexibility, you can also give users the option to unsubscribe from an email notification. This post focuses on identifying invalid email addresses. Keeping email addresses valid is like trying to eradicate a virus: it requires efficient, regular intervention. Note this is just the tip of the iceberg on outbound email performance, as several other factors count as well.

 

speed-up-emails.png

 

Here are 6 tips to reduce the number of invalid emails and speed up delivery:

  1. Educate your users and include information on the notification about how to unsubscribe.
  2. Validate that the target emails are valid.
  3. Regularly execute email verification and check the list of incorrect or undelivered emails.
  4. Prepare for emails that bounce back (e.g. out of the office, undelivered, etc.).
  5. Plan delivery of bulk emails during non-busy times.
  6. Ensure the target email servers have white-listed the relay SMTP server.

 

For these examples, I have set up the following emails for testing:

Email | Syntax | Valid | Reason
abel.t uter@exa mple.com | Error | No | Spaces in the address
adela.cervantszexample.com | Error | No | No @
aileen.mottern@ example.NOEXISTANT | Error | No | NOEXISTANT is not a valid domain
'alejandra.prenatt@ example.com' | Error | No | Single quotes (') at beginning and end
test@gmail.com | OK | Yes | Email is valid

 

I added the recipients to an incident's Assigned to and Watch list fields, then added additional comments.

assigned watch list.jpg

 

1. Educate your users and allow the option to unsubscribe.

Before an implementation, it is difficult to know which users need to read an email notification. For flexibility, the system allows users to enable or disable each email notification as part of their user preferences. Provide information on how your users can disable unwanted notifications.

 

Valid email addresses are those whose recipients will receive and potentially read the email notification.

 

Target recipient | Correct syntax | Target exists | User reads email
Valid | Yes | Yes | Yes
Invalid | Yes | Yes | No
Invalid | Yes | No | No
Invalid | No | No | No

 

In addition, for flexibility, your email notification can include an unsubscribe link, either on the notification itself or pointing to the user's notification preference settings. This helps avoid users marking your emails as spam or triggering unwanted emails back, and gives the target user a sense of control. Fewer emails mean less time to deliver future notifications.

 

For example, on your notifications you can add a mail script to include an unsubscribe link:

<mail_script>
template.print('<a href="' + gs.getProperty("glide.servlet.uri") + "unsubscribe.do?sysparm_notification=" + email_action.sys_id + '">Unsubscribe</a><br/>\n');
</mail_script>

 

Alternatively, you can use an email script that allows the user to unsubscribe:

email script unsubscribe.jpg

 

2. Validate that the target emails are correct

This is one of the most important steps. Check and double-check that the email syntax and the basic domain are correct. This targets users whose emails were written incorrectly.

Ensure your email address list is tuned and the syntax is correct. Below is a sample findInvalidEmailAddresses script that checks whether each email address is valid. To validate the domain, you could use instance search queries, e.g. emails that end with "@acme.com".

Invalid emails are all those that will never be delivered (the address is incorrect) or never read by the user who provided the address (e.g. the email account is abandoned).

In our examples:

validate target email.jpg

 

The email system performs a basic syntax check and will catch only a few of these emails, as follows:

>Notification 'Incident commented' (0d51edbac0a80164006963b000ff644e) excluded recipients because user's device contains an empty or invalid email address (see "cmn_notif_device.email_address"): 'Adela Cervantsz' (0a826bf037102000e0bfc8bcbe5d7a)

Any other user whose email was accepted will show in the email logs as:

> Notification 'Incident commented' (0d51edbac0a80164006963b000ff644e) included recipients via notification's "Users/Groups in fields" (e.g., watchlist, assigned_to, etc): 'Alejandra Prenatt' (22826bf03710200044e0bf8bcbe5dec), 'Jonny Seymour' (b6392da30f1856146cce1050efd), 'Aileen Mottern' (71826bf03710200044ebfc8bcbe5d3b), 'Abel Tuter' (62826bf037100044e0bfc8bcbe5df1)

On the email itself the recipients will look like: aileen.mottern@example.NOEXISTANT, abel.t uter@exa mple.com, test@gmail.com, 'alejandra.prenatt@example.com'

email recipients.jpg

 

To validate the syntax of emails stored on the sys_user table, here is a background script that can help you find some of the emails with incorrect syntax:

 

findInvalidEmailAddresses(false); // false = do NOT set notification off automatically
function findInvalidEmailAddresses(setNotifyOff) {
    var messages = [];   // holds the report lines
    var invalidIds = []; // holds the sys_ids for the list link
    var query = "emailISNOTEMPTY^active=true^locked_out!=true^notification=2";
    var maxResults = 10000; // limit the query to avoid unbounded scans
    var disable = (setNotifyOff === true); // whether to set notification off on the user
    messages.push("*** Script findInvalidEmailAddresses ***");
    messages.push("----- Searching for invalid email addresses ----- setting notification off: " + disable + "\nsys_user query: " + query);
    // Regex used as a guide to validate the email syntax
    var emailRegex = /^\b[a-z0-9!#$%&'*+\/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+\/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+(?:[A-Z]{2}|com|org|net|gov|tv|biz|info|mobi|name|aero|jobs|museum)\b$/i;
    var user = new GlideRecord("sys_user");
    user.addEncodedQuery(query); // query only the relevant users
    user.setLimit(maxResults);   // do not query more than 10K records
    user.query();
    var count = 1;
    while (user.next()) {
        if (user.email && !emailRegex.test(user.email)) {
            invalidIds.push(user.sys_id + "");
            messages.push("User #" + count + " Found: " + user.sys_id + ' - email = "' + user.email + '"');
            count++;
            if (disable) {
                user.notification = 1; // Disable
                user.update();
            }
        }
    }
    if (user.getRowCount() >= maxResults)
        messages.push("\n**** WARNING: results reached the limit of " + maxResults + " set for the query");
    if (invalidIds.length)
        messages.push("\n**** Found " + invalidIds.length + " possible invalid email(s) of " + user.getRowCount() + " emails checked. Run 'findInvalidEmailAddresses(true)' to disable them" +
            "\nTo list them use the following link:\n" + gs.getProperty('glide.servlet.uri') + "sys_user_list.do?sysparm_query=sys_idIN" + invalidIds.join(","));
    else
        messages.push("\n**** All emails compliant: " + user.getRowCount() + " emails checked.");
    // print the report for the background script
    gs.print(messages.join("\n"));
}

For our examples, it will discover the following problematic emails:

---background script result-----

*** Script: *** Scripts findInvalidEmailAddresses ***

-----Searching for invalid email addresses: ----- setting notification off: false

sys_user query: emailISNOTEMPTY^active=true^locked_out!=true^notification=2

User #1 Found: 0a826bf03710200044e0bf10bcbe5d7a - email = "adela.cervantszexample.com"

User #2 Found: 22826bf03710200044e0bf10bcbe5dec - email = "'alejandra.prenatt@example.com'"

User #3 Found: 62826bf03710200044e0bf10bcbe5df1 - email = "abel.t uter@exa mple.com"

User #4 Found: 71826bf03710200044e0bf10bcbe5d3b - email = "aileen.mottern@example.NOEXISTANT"

 

 

**** Found 4 possible invalid email(s) of 527 emails checked. Run 'findInvalidEmailAddresses(true)' to disable them

To list them use the following link:

<instance>/sys_user_list.do?sysparm_query=sys_idIN0a826bf03710200044e0bfc8bc5d7a,22826bf037102044e0bfc8bcbe5dec,628bf03710200044e0b8bcbe5df1,71826bf030200044e0bfc8bcbe5d3b

 

---background script -----

You will need to search for those users and then set the notification = Disable. After that, you may need to contact the users to correct their email addresses.
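The syntax check at the heart of that script can be exercised outside the instance. Here is a plain-Node sketch using the same style of regex (a simplified, illustrative pattern, not the exact one your instance uses), run against the sample addresses from the table above:

```javascript
// Simplified email syntax check (illustrative regex; the TLD list is a
// guide, not exhaustive).
var emailRegex = /^[a-z0-9!#$%&'*+\/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+\/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+(?:[a-z]{2}|com|org|net|gov|tv|biz|info|mobi|name|aero|jobs|museum)$/i;

function isValidEmailSyntax(email) {
    return emailRegex.test(email);
}

var samples = [
    "abel.t uter@exa mple.com",          // spaces: invalid
    "adela.cervantszexample.com",        // no @: invalid
    "aileen.mottern@example.NOEXISTANT", // bad TLD: invalid
    "'alejandra.prenatt@example.com'",   // surrounding quotes: invalid
    "test@gmail.com"                     // valid
];
samples.forEach(function (email) {
    console.log(email + " -> " + isValidEmailSyntax(email));
});
```

Remember this only proves the syntax is plausible; whether the mailbox exists and is read is a separate question, covered next.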

 

3. Regularly execute email verification and check the list of incorrect or undelivered emails

Regular expressions and email checks will not filter all invalid emails. Even if your email syntax is correct and validated by regular expressions, you do not really know whether an address is valid until you send an email to it. Even if the email arrives in the inbox, that doesn't mean it has been read.

 

Invalid emails could cause delays to the overall email delivery.

  • An invalid email can cause SMTP retries even if only ONE email of the 100 on the recipient list does not exist.
  • Bounced emails can execute inbound actions and potentially create email loops: an outage notification bounces back, triggers another email, and so on.
  • The extra processing spent on irrelevant emails can become significant.
  • Target email systems could black-list the SMTP relay, delaying delivery even further.

 

To avoid these problems, there are several email validations that can be performed. Automatic validation is not available on ServiceNow yet. If you have an external email provider that can test target emails, you can give them your email list and they will validate whether each address exists on the final exchange: they extract the MX records from the email address and connect to the mail server (over SMTP, also simulating sending a message) to make sure the mailbox really exists for that user/address. If you do not have such a service, you can still review the bounced emails on your instance to manually validate those addresses. For emails that bounced back, set notification = Disable on the sys_user form to avoid further unwanted emails. Then contact the owners for the correct addresses.

 

Even then, you will need to regularly check for incoming bounced emails as users leave companies, addresses change, etc. There are several other cases where email will not get delivered; e.g. emails bounced back because the target mailbox is full could indicate that no one reads that mailbox.

 

This is important because, depending on the SMTP response, your instance will retry sending in certain cases even if the email address is invalid.

 

You will need to search your skipped emails regularly to fix the problematic addresses manually. To avoid them, set notification = Disable on the related sys_user record.

 

This is an example email:

>Subject:Undelivered Mail Returned to Sender

>From:MAILER-DAEMON (Mail Delivery System)

>Auto-Submitted:auto-replied

State: Ignored

Mailbox: Junk

User: MAILER-DAEMON

Body = [ This is the mail system at host bulk.service-now.com. I'm sorry to have

to inform you that your message could not be delivered to one or more recipients.

It's attached below. For further assistance, please send mail to postmaster.

If you do so, please include this problem report. You can delete your own text

from the attached returned message. The mail system <aileen.mottern@example.NOEXISTANT>:

Host or domain name not found. Name service error for name=example.NOEXISTANT type=A: Host not found

You will need to search for email aileen.mottern@example.NOEXISTANT, and set the notification = Disabled for the associated user.

 

4. Be prepared for emails bounced back (e.g. out of the office, undelivered, etc)

Each email you send is potentially one email back to be received. Be prepared. To ignore emails, you can either configure your email filters in the email configuration or install the "Email Filters" plugin. If using the email configuration, ensure the properties that ignore unwanted emails are set to cover all your bounced email cases.

 

For example: glide.pop3.ignore_headers, glide.pop3.ignore_subjects, glide.pop3.ignore_senders
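These properties take comma-separated patterns; the values below are purely illustrative (your bounce patterns will differ), just to show the shape of a configuration that covers common bounce cases:

```
glide.pop3.ignore_subjects = out of office,auto-reply,delivery status notification
glide.pop3.ignore_senders = mailer-daemon,postmaster
glide.pop3.ignore_headers = Auto-Submitted:auto-replied
```

Match these patterns against the bounced emails you actually receive before committing to them.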

 

For more sophisticated filters, use the "Email Filters" plugin. Email filters take precedence over the email configuration properties. Once enabled, administrators can configure specific email filtering preferences by using a condition builder or a condition script.

With Email filters enabled, glide.pop3.ignore_headers, glide.pop3.ignore_subjects and glide.pop3.ignore_senders  will get ignored.

See Email Spam Scoring and Filtering (you will need to log in to HI to view it) to learn how to use the Email Filters plugin to filter emails that have been scored as spam when using the ServiceNow email infrastructure.

 

If an email meets the filters/Email Filters conditions, you will see the message "ignored by filter" in the logs, and the email will be skipped.

* Information    - Skipping 'Create Incident', ignored by filter

If your email filters are NOT configured correctly, the bounced emails will execute your inbound actions.

5. Plan delivery of bulk emails during non-busy times

Emails sent in bulk may delay critical business emails. Many of them will get rejected by the target SMTP server and be resent again. Validate with your administrator the best times to send bulk emails. The first time you send a bulk, monitor the bounce-back emails so you can disable notifications for the users with problematic email addresses.

 

You can also trigger a notification at a specific time using the gs.eventQueueScheduled(event, record, parm1, parm2, date-time) function, which fires your notification at the required date-time. For example: gs.eventQueueScheduled("incident.reminder", current, gs.getUserID(), gs.getUserName(), current.u_reminder);

 

Keep in mind, there are five (5) email properties (https://docs.servicenow.com/administer/reference_pages/reference/r_OutboundMailConfiguration.html) that can add a significant delay to email delivery when invalid emails cause deferred retries:

glide.smtp.default_retry, glide.smtp.defer_retry_ids, glide.smtp.fail_message_ids, glide.email.smtp.max_recipients and glide.email.smtp.max_send

Set these properties to meet your business requirements; otherwise emails will be queued and re-queued several times for the problematic email addresses.

 

#1 glide.smtp.default_retry: Enables (true) or disables (false) resending email when an unknown SMTP error code is encountered.

The instance only recognizes the SMTP error codes defined in the glide.smtp.defer_retry_ids property. Default value: true

 

#2 glide.smtp.defer_retry_ids: Specifies the comma-separated list of SMTP error codes that force the instance to resend email.

Default value: 421,450,451,452

421 -    <domain> Service not available, closing transmission channel

450 -    Requested mail action not taken: mailbox unavailable - e.g. the SMTP server could not access a mailbox to deliver your message

451 -    Requested action aborted: local error in processing - e.g. this usually means the SMTP relaying service is overloaded by too many messages

452 -    Requested action not taken: insufficient system storage

 

#3 glide.smtp.fail_message_ids: Specifies the comma-separated list of SMTP error codes that prevent the instance from resending email.

Default value: 500,501,502,503,504,550,551,552,553,554

 

500 -     Syntax error, command unrecognised - e.g. an antivirus/firewall is interfering with incoming and/or outgoing SMTP communications

501 -     Syntax error in parameters or arguments - e.g. Invalid email addresses or an invalid domain name recipient. Error can indicate bad connection

502 -     Command not implemented

503 -     Bad sequence of commands - e.g. Error, particularly if repeated, indicates bad connection. Verify authentication settings

504 -     Command parameter not implemented

551 -     User not local; please try <forward path>

552 -     Requested mail action aborted: exceeded storage allocation - e.g. The recipient’s mailbox has reached its maximum allowed size.

553 -     Requested action not taken: mailbox name not allowed - e.g. Invalid email address.

554 -     Transaction failed - e.g. the receiving anti-spam firewall rejects the sender's email address, IP address, or ISP server

 

#4 glide.email.smtp.max_recipients: Specifies the maximum number of recipients the instance can list in the To: line for a single email notification. Notifications that would exceed this limit instead create duplicate email notifications addressed to a subset of the recipient list. Each email notification has the same maximum number of recipients. Default value: 100. This is ONLY valid for recipients created by the email notification system (it does not include emails added by scripts or email client).

 

On each duplicated email, email logs will show:

Information: Email with 4 recipients is split into 4 separate emails because property glide.email.smtp.max_recipients=1 (Email 4 of 4)

 

In our example, the final emails look as follows:

email properties.jpg

 

#5 glide.email.smtp.max_send: Specifies how many emails to send through each new SMTP connection. The instance establishes a new SMTP connection if there are more emails to send than the specified value. Default value: 100

 

#4 and #5 can be used for email throttling, e.g. 100 emails with a maximum of 50 recipients each vs. 100 emails with 1 recipient each. The right values depend entirely on your target emails and the capacity of the target exchanges.
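To see how max_recipients drives the splitting, here is a small arithmetic sketch (plain JavaScript, not a ServiceNow API) that computes how many separate emails one notification generates for a given recipient list:

```javascript
// How many separate emails a notification generates, given the
// glide.email.smtp.max_recipients limit (illustrative helper, not part of
// the platform API).
function emailsNeeded(recipientCount, maxRecipients) {
    if (recipientCount === 0) return 0;
    return Math.ceil(recipientCount / maxRecipients);
}

console.log(emailsNeeded(4, 1));     // 4 separate emails, one per recipient
console.log(emailsNeeded(120, 100)); // 2 emails: 100 recipients + 20 recipients
```

The same ceiling division tells you how many SMTP connections a batch needs once glide.email.smtp.max_send is factored in.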

 

In a nutshell: plan your email delivery, and set the resend properties together with reasonable email throttling to protect system performance.

 

6. Ensure the target email servers have white-listed the relay SMTP server.

If your SMTP server is not white-listed, spam systems could penalize it by adding transaction time to the relay or delaying delivery. This is VERY common. If the number of bounced emails or the delivery time increases after bulk emails, check that the target email address SMTP servers have white-listed the relay SMTP server of your instance. If you are using the ServiceNow SMTP relay system, ServiceNow highly recommends that you configure your target mail exchange systems to retrieve the Sender Policy Framework (SPF) records dynamically, to white-list the ServiceNow email servers. See Allowing email delivery from ServiceNow to your mail servers for more information.

 

 

 

 

 

A final word: less is better. Fewer email recipients mean you spend less, need less storage on the target, need smaller processes, and send emails faster. To get there, you need to regularly validate your target emails, even if it means donkey work. If you do not check regularly, invalid emails will accumulate in your system, and your emails will take ever longer to get delivered.

 

Hope it helps. I have performed the tests on Geneva, using Chrome as browser.

 


There are a few cases where the browser acts as password-manager software: it stores your passwords to safely save you time and effort. While that sounds like a good plan, there are times when you get an unexpected memo from the headache department. Chrome, Firefox and IE will auto-complete your password as a result of passwords being saved for specific instances. ServiceNow clearly sets HTML on the password fields to say "do not auto-complete", but the browser can ignore it.

 

In Chrome, auto-complete for passwords is ON by default. To double-check, open your Chrome settings, go to advanced, and verify "auto-complete passwords" is ON. To manage your saved passwords, go to chrome://settings/passwords.

 

Chrome-security-extensions1.png

 

If everything works fine, when you log in to the instance for the first time, Chrome will offer to save the password.

remember password chrome.jpg

 

To see all of your passwords saved on Chrome, navigate to chrome://settings/passwords

see all passwords.jpg

This works fine and the next time you try to login, it will auto-complete the password. You can recognize it because the background of the field usually changes to yellow to indicate that autocomplete is working.

autocomplete success.jpg

When everything goes smoothly, you can efficiently login to your instance and save yourself the hassle of having to remember your password with auto-complete.

 

When the browser password management changes go wrong

I worship the ground Chrome walks on. As painful as it is for me to admit, there are cases where the password fields are auto-completed when they should not be. Chrome matches the password with the "previous" field (not necessarily the login name). On ServiceNow, password fields can be anywhere in the form, and even when autocomplete="off" is set on them, some browsers just ignore it.

 

Example #1: The User form (sys_user)

If you open our sys_user form, change the password and click "update" (similar to form submit), the browser will try to save the password and associate it to the previous field.

password associated.jpg

In this case, it will associate the password value with the 'Department' field.

password department field.jpg

If you open any other user from the same department, the browser will incorrectly auto-complete the password. Saving the record will overwrite the password. The form will even ask you to save when leaving, even though you have not changed anything.

save password.jpg

To avoid having the password associated with the wrong field, move the password field below a unique field like User ID, or disable auto-complete.

disable auto complete.jpg

 

Example #2: The X.509 certificate form

Another form with a similar problem is X.509 Certificate (sys_certificate). If auto-complete is enabled and has saved records, it will set the key store password, which can cause problems with some integrations.

x.590 form.jpg

You can move the password field below a fairly unique field like Name, or disable auto-complete, to stop the browser from setting the key store password.

key store password.jpg

 

Example #3: SOAP message form

On the SOAP message (sys_soap_message) form, the basic authentication fields get auto-completed, and the auto-completed values can be saved when submitting the form. The problem here is that both fields are initially empty.

soap message.jpg

 

Luckily, the auto-complete password problem only affects forms with password fields. The complication is that it can happen while users are UNAWARE of the auto-complete. Also, the password is saved against the whole domain (sometimes service-now.com), so it can cross into different instances or forms: e.g. a password set on the sys_user form, then recalled by the browser on sys_soap_message, as shown in example #3. Note that more sophisticated password managers like LastPass, KeePass, 1Password, etc. do not have the problem that IE, Firefox and Chrome have. Opera and Safari seem to work fine as well.

 

Here is the list of forms that can be affected by browser password auto-complete:

 

Table | Column name
digest_properties | secret_key
discovery_credentials | privacy_key
discovery_credentials | password
discovery_credentials | ssh_passphrase
discovery_credentials | authentication_key
instance | database_password
instance | admin_password
ldap_server_config | password
oauth_entity | client_secret
saml2_update1_properties | signing_key_password
sys_certificate | key_store_password
sys_data_source | scp_password
sys_data_source | jdbc_password
sys_email_account | password
sys_rest_message | basic_auth_password
sys_rest_message_fn | basic_auth_password
sys_soap_message | basic_auth_password
sys_soap_message_function | key_store_password
sys_soap_message_function | basic_auth_password
sys_update_set_source | password
sys_user | user_password
sys_wss_profile | user_password

 

In a nutshell, if you are using password auto-complete in your browser, you need to be extra cautious. If the password fields are NOT next to a non-empty, unique field that lets the password manager link to the correct value, it is safer to disable the auto-complete function for good. If you see a YELLOW background, you may be in trouble. I personally use a more sophisticated password manager like LastPass, 1Password or KeePass.

 

I have tested this using Chrome 48.0.2564.116 as browser and on Fuji release.

 


The instance has three chances to recognize a reply email: the subject, the headers and the email body. The last chance is the subject containing a reply prefix and the record <number>. This method is popular because it does not require a previous email from the instance; however, its power is limited. Using watermarks is more reliable. Nevertheless, here are my findings.

 

Incoming emails are classified by an algorithm as New, Reply or Forward. Inbound email actions enable an administrator to define the actions ServiceNow takes when receiving those emails. Reply emails can be identified by several methods; the trickiest is a subject with a reply prefix that contains the record number, with no watermark.

 

incoming_emails.png

 

Recognizing the reply by the subject record number

Based on my tests, to classify the email using the Subject as reply, the text needs to be as follow:

  1. The first characters need to match a reply prefix (not case sensitive).
  2. The record number needs to be ONE word. Only spaces or ':' are accepted before and/or after it.
  3. The record number prefix needs to exist on the sys_number table.
  4. The number needs to be within the first 160 characters of the subject (including the number itself).
  5. The user that sent the email needs to have access to the record referred to by the number.

 

Based on that, you can see why it can be considered temperamental. Well, maybe.

 

The first characters need to match the reply prefix.

This is usually handled by your email client. There is nothing to worry about except spaces before the prefix, or clients in other languages. Ensure the subject line starts with a recognized reply prefix, and avoid spaces (or invisible characters, e.g. MS Word justification or paragraph markers) before it.

 

The record number needs to be ONE word. Only spaces or ‘:’ are accepted before and/or after.

The record <number> is the field called 'number' on the target table. It has to be one word (e.g. INC00001) and must match an existing record.

Yes, <number> needs to be ONE WORD, with no symbols or characters attached, surrounded only by spaces or colons (e.g. MM-INC0001 is wrong).

The record number 'prefix' needs to exist on the sys_number table

The number "prefix" needs to EXIST and ideally be unique (or first found) in Number Maintenance (the sys_number table).

 

e.g. for an INCxxxx record, INC will match on the sys_number table and provide 'incident' as the target table.

 

Another example is a TASKxxx record. It will match TASK, which points to two target tables, so use it with caution: it will match the first table found, causing inconsistency. Prefixes are stored in Number Maintenance, and there are TWO "TASK" prefixes. In my tests, it matched the TASK prefix from the task table instead of TASK from the sc_task table.

 

To validate the prefixes, check

<instance>/sys_number_list.do?sysparm_query=prefix!%3DNULL

Here is the how it looks:

02-Number_maintenance.jpg

 

The number needs to be within the first 160 characters (including the number itself) of the subject

There is a limit of 160 characters on the sys_email subject. When creating the subject, ensure the <number> is within the first 160 characters. To validate that the subject was correct after the instance processes the email, the incoming email logs will show:

Received id=<Message-ID> Classified as reply to '<number>' found in subject

If the subject is incorrect, the incoming emails will get classified as New.

Received id=<Message-ID> Classified as new

 

Here are the results of my tests with different subjects. INC0001 is a valid incident number.

Working subject | Reason
Re:INC0001 | Reply prefix + number
re: Review INC0001 | Reply prefix + anything + space + number
RE: Review:INC0001: | Reply prefix + anything + colon + number + colon
rE:INC0001 Review | Reply prefix + number + space + anything

 

You can see the reply prefixes are not case sensitive. Space and colon are acceptable.

 

Not-working subject | Reason
Re:Review-INC0001 | A minus sign is not acceptable; space and colon are fine.
Re:Review -INC0001 | A minus sign touching the number is not acceptable.
Re:Review TEST0001 | TEST is not a valid sys_number prefix, even if the number TEST0001 exists (this can happen with manually created numbers).
Re:TASK0001 | Acceptable, but it may pick the wrong target table from sys_number; there are two TASK prefixes.
Re: XX…(160+ chars) INC0001 | The subject gets truncated, leaving the number out. Place the number within the first 160 characters.
"   Re:INC0001" | Space before the prefix is not acceptable.

 

As you can see, a minus sign before the number is not acceptable. The number also needs to exist on the target table, and if the number was created without following the prefix (e.g. TEST instead of INC), it will not match.
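The delimiter rules my tests suggest can be approximated in plain JavaScript. This is NOT the instance's real classifier, just a sketch of the shape rules above (it does not cover the sys_number prefix lookup or the target table check): a case-insensitive reply prefix at the very start, and the number bounded by spaces or colons, never a minus sign.

```javascript
// Sketch of the subject-shape rules observed in testing (my own approximation).
function matchesAsReply(subject, number) {
  // No leading whitespace is allowed before the reply prefix.
  if (!/^re:/i.test(subject)) return false;
  // The number must be bounded by start/end, space or colon on both sides.
  var pattern = new RegExp('(^|[\\s:])' + number + '($|[\\s:])');
  return pattern.test(subject.slice(3)); // the text after the "re:" prefix
}

console.log(matchesAsReply('rE:INC0001 Review', 'INC0001'));  // true
console.log(matchesAsReply('Re:Review-INC0001', 'INC0001'));  // false
```

Running it against the working and not-working subjects above reproduces the same pass/fail split, except for the prefix-validity and truncation cases, which depend on instance data rather than the subject's shape.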

 

Here are the results on the sys_email table:

01-identified-email.jpg

 

If you are using subjects for reply emails without watermarks, ensure you use the email reply prefix, then the <number> separated with spaces, and then the rest of the subject. Then all should be as easy as drinking a glass of water.

 

I have tested with Chrome as the browser on Fuji. Emails were written in Outlook 2011.

 

More information here:

Incoming emails are the emails sent to the instance. Inbound emails are classified by an algorithm as New, Reply or Forward. Inbound email actions enable an administrator to define the actions ServiceNow takes when receiving emails. The feature itself offers a superb level of scripting and a well balanced design for the classified emails, saves you time on coding, and guarantees a clear understanding to expand and keep your email actions up to date.

 

The inbound email action 'Type' field offers flexibility, with the ability to be set to None, which matches all incoming emails. The 'Target table' field offers a similar option. Those options are positioned at the top of the inbound action form, so pay attention when setting them. Here are two tests I did to experiment with None matching all incoming emails on the inbound action options 'Type' and 'Target table.'

inboundaction.png

Incoming email options 'Type' and 'Target table.'

The 'Target table' selects the table where records will be added or updated by the action. The 'Type' selects the message type required to run the action; the action runs only if the inbound email is of the selected type. Setting the inbound action type to None increases complexity, as the rule matches all incoming email received types. This could be an advantage in many designs, e.g. if you are using them to stop certain executions based on conditions regardless of their type.

'Type' set to None matches all incoming emails

 

type target none.jpg

 

 

Testing the inbound actions with Type set to None

I have created an inbound email action as follows:

Inbound Email action

Name = test_type_none
Target table = Incident
Type = None
Condition = email.subject && (email.subject.match(/\b(?:spam|loan|winning|bulk email|mortgage|free)\b/i) != null)
Order = 10

 

Script =

gs.log('CS - test_type_none start'); // dev only - remove on prod
// No further inbound actions are required - stopping them
sys_email.error_string += 'processing stopped by test_type_none by subject keyword: ' 
    + email.subject.match(/\b(?:spam|loan|winning|bulk email|mortgage|free)\b/i) + '\n';
event.state="stop_processing";
gs.log('CS - test_type_none ends'); // dev only - remove on prod

 

Then I sent 3 emails with the subject "... freeware offer", one each classified as new, reply and forward.

I also sent 3 emails with the subject "... free ", one each classified as new, reply and forward.

type set to none.jpg

 

Results: All incoming emails are processed as expected by the inbound action. As per the condition, some processing is bypassed.

 

 

Testing the inbound actions with 'Target table' set to None

I have created the following inbound action:

Inbound Email action

Name = test_target_table_none
Target table = None
Type = None
Condition = (empty)

 

Script:

gs.log('CS -test_target_table_none start'); // dev only - remove on prod
event.state="stop_processing";
gs.log('CS - test_target_table_none ends'); // dev only - remove on prod

 

Then I have sent 3 emails that classify as new, reply and forward.

Results: Setting Target table to None prevents the inbound action from executing correctly.

If Target table is set to None, here are the results for incoming emails:

Classified as New:

  • Target: (empty)
  • Notes: always error
  • Usual logs if there is no update:
Error while classifying message. org.mozilla.javascript.EvaluatorException: GlideRecord.setTableName - empty table name (<refname>; line 1)
Skipping 'test_target_table_none', a suitable GlideRecord not found

Classified as Reply:

  • Target: (empty)
  • Notes: always skipped
  • Usual logs if there is no update:
watermark's target table '<found-table-by-watermark>' does not match any Inbound Action table, setting to 'Ignored' state

Classified as Forward:

  • Target: (empty)
  • Notes: always error
  • Usual logs if there is no update:
Error while classifying message. org.mozilla.javascript.EvaluatorException: GlideRecord.setTableName - empty table name (<refname>; line 1)
Skipping 'test_target_table_none', a suitable GlideRecord not found

 

I advise that you always double check and make sure to select your Target table correctly, and only set your Type to match the correctly classified emails. Just because you have the option does not mean you need to use it. Plan and validate.

 

I have tested this with Fuji and Chrome as the browser.

 

More information here:

Inbound email actions are a truly stunning feature for processing emails: a scalable, simple design built to a high standard that does not require expertise to develop. Incoming emails with matching inbound actions are an ideal combination to provide a fine level of processing for incoming emails. They provide a real feel of control, with scripting power mixed with modern email settings for very different scenarios. Independently tucked away, the inbound actions are executed by matching the incoming emails.

 

no_update.png

 

Let's talk about incoming email actions that do not have a real update. Not all emails that are processed will have a target set, even if the system has classified it as a reply with a matching record. This is normal.

 

Let's talk about:

  • Incoming emails and matching inbound actions
  • What is a 'real' update
  • Examples to test incoming emails with no 'real' updates
  • How to force the incoming email target
  • Advanced cases when target table is none in the inbound actions

 

Incoming emails and matching inbound actions

Incoming emails are emails sent to the instance. The incoming emails are classified by an algorithm as New, Reply or Forward. Inbound email actions enable an administrator to define the actions ServiceNow takes when receiving emails. The feature itself offers a superb level of scripting (http://wiki.servicenow.com/index.php?title=Script_in_ServiceNow#gsc.tab=0) and a well balanced design for classified emails, saves you time on coding, and guarantees a clear understanding to expand and keep your email actions up-to-date.

 

Inbound email actions are similar to business rules, using both conditions and scripts. If the conditions are met, the inbound email actions run the script. The inbound action's conditions include the Type, the Target Table and the condition itself. The 'Type' can be None, New, Reply or Forward to match the classified emails. None will match all types of incoming emails. The target table in the inbound action will help to define the GlideRecord created for 'current.' For inbound actions, the "current" is a GlideRecord based on the target table and the information gathered by the email system.

 

Here is a table to show the relationship between incoming email received type and matching inbound actions:

 

Classified as New:

  • Emails target record: -
  • Target record if success: New
  • Notes: target set based on the inbound action target table
  • Logs if there is no update: Skipping 'xxx', did not create or update incident
  • Matching type: New or None
  • Usual inbound action update: current.insert(), current.update()
  • Target table: Set

Classified as Reply:

  • Emails target record: Found
  • Target record if success: Found if data updated
  • Notes: target set based on the target found on the reply
  • Logs if there is no update: Skipping 'xxx', did not create or update
  • Matching type: Reply or None
  • Usual inbound action update: current.update()
  • Target table: Set; it needs to match the email target found with this table

Classified as Forward:

  • Emails target record: -
  • Target record if success: New
  • Notes: target set based on the inbound action target table
  • Logs if there is no update: Skipping 'xxx', did not create or update
  • Matching type: Forward or None
  • Usual inbound action update: current.insert(), current.update()
  • Target table: Set

 

The table shows that incoming emails are classified and then matched to the respective inbound actions. Setting the target table makes the inbound actions much easier to understand. Also, setting the inbound action type to None increases complexity, as the rule matches all received types.
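The matching between a classified email and its candidate inbound actions can be sketched in plain JavaScript. This is my own simulation, not the instance's actual engine; the field names are hypothetical, but the rule (Type None matches everything, lowest Order runs first) follows the behaviour described above.

```javascript
// Sketch: which inbound actions are candidates for a classified email.
function candidateActions(classification, actions) {
  return actions
    .filter(function (a) {
      // Type "None" matches every received type.
      return a.type === 'None' || a.type === classification;
    })
    .sort(function (a, b) { return a.order - b.order; }) // lowest order runs first
    .map(function (a) { return a.name; });
}

var actions = [
  { name: 'test_type_none',  type: 'None',  order: 10 },
  { name: 'Update Incident', type: 'Reply', order: 100 },
  { name: 'Create Incident', type: 'New',   order: 100 }
];

console.log(candidateActions('Reply', actions)); // ['test_type_none', 'Update Incident']
```

This also illustrates why a low-order action with Type None and event.state = "stop_processing" can shield the later, type-specific actions.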

 

A 'real' update

A real update means that at least one field on the 'current' record has been changed, or the 'current' record has been created. If, after receiving an incoming email, there is no real update on the inbound action's 'current' record, the target field on the matching incoming email will remain empty.

 

It makes sense as a way to control which emails would display in the activity formatter.

The incoming email target is only set if the 'current' record is updated or inserted in the inbound action. Otherwise, it remains empty.
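The behaviour can be illustrated with a minimal mock in plain JavaScript. This is my own sketch, not the real GlideRecord: it only shows the change-tracking idea behind why a "no-change" update leaves the email target empty.

```javascript
// Minimal mock (not the real GlideRecord) illustrating a "real" update.
function MockRecord(fields) {
  this.fields = fields;
  this.changed = false;
}
MockRecord.prototype.setValue = function (name, value) {
  if (this.fields[name] !== value) { // only a real change marks the record dirty
    this.fields[name] = value;
    this.changed = true;
  }
};
MockRecord.prototype.update = function () {
  return this.changed;               // a no-op update performs no write
};

var first = new MockRecord({ impact: 3 });
first.setValue('impact', 2);
console.log(first.update());         // true  - impact changed 3 -> 2, target would be set

var second = new MockRecord({ impact: 2 });
second.setValue('impact', 2);
console.log(second.update());        // false - no change, target stays empty
```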

 

Examples to test incoming emails with no 'real' updates

Besides inbound actions that do not meet the conditions, there are a few cases where the current.update() does not execute because the data has not changed.

 

I've created the following incoming email action to validate the behaviour:

 

Inbound action

 

 

Name = Update Incident.JS
Target table = Incident
Type = Reply
Condition = current.getTableName() == 'incident'

 

Script is

gs.log('CS - Update Incident.JS starts'); // comment on prod

// the following line sets impact to 2 = Medium
current.impact = 2;

// As the value set is static,
// if impact is already 2, no real update happens
current.update();

// No further inbound actions are required - stopping them
event.state="stop_processing";
gs.log('CS - Update Incident.JS ends'); // comment on prod 

 

The inbound action looks as follows:

real update1.jpg

 

For the test, I have created an incident 'TETS' with impact = 3.

real update 2.jpg

 

Example #1: First reply email to the instance to update 'Incident: TETS'.

After sending an inbound email to the instance, once it is processed the first time, this is the result:

 

real update 3.jpg

real update 4.jpg

 

The incoming email target is set to 'Incident: TETS' as expected. This is because current.impact was 3, then the script changed it to 2, causing current.update() to execute, so the system set the target to current.

 

 

Example #2: Second reply email to the instance to update 'Incident: TETS'.

After sending a second inbound email to the instance, once it is processed, this is the result:

 

real update 5.jpg

The incoming email target is set to (empty) as expected. This is because current.impact was already 2, then the script set it to 2 again, which does not cause any change, so current.update() does not execute. The system then sets the target to (empty). This does not mean the watermark did not match.

 

Example #3: Inserting a different record than 'current'

I've created a new inbound action that creates a new problem called "vproblem." It looks as follows:

current 1.jpg

 

After sending an inbound email to the instance, once it is processed, this is the result:

current 2.jpg

Results: The incoming email target is set to (empty) as expected. This is because the system only tracks the inbound action's 'current' to set the incoming email target.

As current did not have any update or insert, the system set the target to (empty). That is the reason I prefer to use 'current' on inbound actions.

 

Force the incoming email target

You can manipulate sys_email.instance to set the target and sys_email.target_table to set the target table.

 

The following is an example of an inbound email action that explicitly sets the incoming email target:

Inbound action

 

 

Name = Update Incident.JS_1
Target table = Incident
Type = Reply
Condition = current.getTableName() == 'incident'

 

Script is

gs.log('CS - Update Incident.JS_1 starts'); // comment on prod

// the following lines will create a new problem
var vproblem =  new GlideRecord('problem');
vproblem.priority=2;
vproblem.short_description = 'New incident - test - short';
vproblem.description = 'New incident - test - descr ';
vproblem.insert();

// No further inbound actions are required - stopping them
event.state="stop_processing";
gs.log('CS - Update Incident.JS_1 ends '); // comment on prod 

// WORKAROUND: To force setting the email target (not recommended)
// This set it to the new record vproblem created 
sys_email.instance = vproblem.sys_id;
sys_email.target_table = vproblem.getTableName();
      // Or if you need to be set to current
      // sys_email.instance = current.sys_id;
      // sys_email.target_table = current.getTableName();

 

The inbound action looks like:

inbound action.jpg

 

After sending an inbound email to the instance, once it is processed, this is the result:

inbound action.jpg

 

The incoming email target is forced to be set to the problem created. This is because we manipulated the sys_email record in the script. It could be forced to any record. If the target is empty on the incoming emails, we can assume there was no valid update on the matching inbound actions. Sometimes simple is more.

 

I have tested this with Fuji and Chrome as the browser.

 

More information here:

Discreetly positioned on the email notification form is the ability to subscribe, marked "Subscribable." The property, which has been both cleverly and imaginatively designed, adds a subscribe-to-a-record mechanism. On one side, someone can interpret it as the 'mandatory' option; others like the option so users can subscribe to the notifications. I can say now: neither fish nor fowl. Two very similar words: subscribable and subscription. Have I confused you yet?

 

Subscription-icon-500x500.png

 

Let's talk about

  • Notification 'Subscribable' , 'Mandatory' and 'Force delivery' options
  • Subscribable option set to false
  • Subscribable option set to true
  • Testing the record-subscription based part when Subscribable is set to true

 

Subscribable, Mandatory and Force delivery options

On the email notification, you can set Subscribable, Mandatory and Force delivery to true or false to modify how recipients are included or excluded. These options define how the recipient list is generated from the notification user and group fields or from the cmn_notif_message table.

 

Here are the three email notification options in a nutshell:

Subscribable = FALSE

  • Meaning: subscription based notifications
  • Purpose: normal notification
  • Area of confusion: the value FALSE may look like the notification cannot be set on the notification preferences. Also, the word "subscribable" makes it look like the Mandatory field

Subscribable = TRUE

  • Meaning: subscribable based notifications
  • Purpose: add extra recipients to normal notifications, based on records subscribed to by the users set on cmn_notif_message

Mandatory = TRUE

  • Meaning: the user notification preference is mandatory for any user once they are a valid recipient of the notification
  • Purpose: the user notification preference can be turned on/off, but it is saved as ON whatever choice is selected
  • Area of confusion: the field is hidden on the notification form whilst Subscribable is not. It also implies users will receive the notification no matter what, like Force delivery

Mandatory = FALSE

  • Meaning: normal on/off per-user subscription allowed
  • Purpose: the user notification preference can be saved as on or off

Force delivery = FALSE

  • Meaning: no changes
  • Purpose: no changes
  • Area of confusion: the field is hidden. "Force delivery" may suggest the notification is sent immediately. There is a business rule "Update Email Devices" that makes cmn_notif_device inactive when the sys_user Notification field is Disabled, which makes this option redundant

Force delivery = TRUE

  • Meaning: ignores the sys_user Notification value
  • Purpose: the user notification preference is created even if they are not valid recipients of the notification
  • Area of confusion: users receive the notification even if the user Notification field is set to Disable and their cmn_notif_device is active

 

Here is an example of some email notifications showing the Subscribable, Force delivery and Mandatory option set to true and the others set to false:

 

2015-11-16_2208-force_001.png

Subscribable option set to false

If you want a normal notification, set "Subscribable" to false. These notifications use the 'Who will receive' information to generate the recipient list.

They are called subscription-based notifications because they allow users to choose which notifications they prefer to receive and which ones they don't.

 

If the 'Mandatory' option is set to true, the notification preference will be editable but will not save the "off" value in the user notification preferences. Subscribable alone does not make it mandatory.

Mandatory also respects the sys_user Notification field. To ignore that, set 'Force delivery' to true; then users receive the notification even if the user Notification field is set to Disable. These fields only affect the current notification.

 

I have set up four (4) users as follows:

  • User1: sys_user Notification = Enable; all notification preferences enabled
  • User2: sys_user Notification = Enable; all notifications disabled by setting the dials off
  • User3: sys_user Notification = Disable; all notification preferences enabled
  • User4: sys_user Notification = Disable; all notifications disabled by setting the dials off
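The inclusion rules these options produce can be sketched as a plain-JavaScript simulation. This is my own model, not the notification engine; it assumes (per the "Update Email Devices" business rule above) that a disabled sys_user Notification field deactivates the user's email device, and that Mandatory and Force delivery only override the per-user preference dial.

```javascript
// Sketch: who ends up on the final recipient list (hypothetical structure).
function finalRecipients(users, notification) {
  return users
    .filter(function (u) { return u.deviceActive; }) // inactive device always excludes
    .filter(function (u) {
      // the preference dial can be overridden by Mandatory or Force delivery
      return u.preferenceOn || notification.mandatory || notification.forceDelivery;
    })
    .map(function (u) { return u.name; });
}

var users = [
  { name: 'User1', deviceActive: true,  preferenceOn: true  },
  { name: 'User2', deviceActive: true,  preferenceOn: false },
  { name: 'User3', deviceActive: false, preferenceOn: true  },
  { name: 'User4', deviceActive: false, preferenceOn: false }
];

console.log(finalRecipients(users, { mandatory: false, forceDelivery: false })); // ['User1']
console.log(finalRecipients(users, { mandatory: false, forceDelivery: true  })); // ['User1', 'User2']
```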

 

Here are the results of the tests. For each notification, the original recipient list was User1, User2, User3 and User4.

No1 (Mandatory = False, Force delivery = False)

  • Final recipient list: User1
  • Email logs show:
Notification 'No1' included recipients via the notification's "Users" field: 'User1'
Notification 'No1' excluded recipients because user's notification preference "Filter" filtered it (see "cmn_notif_message.notification_filter"): 'User2'
Notification 'No1' excluded recipients because user's "Notification" setting is disabled: 'User3' (x), 'User4'

No2 (Mandatory = False, Force delivery = True)

  • Final recipient list: User1, User2
  • Email logs show:
Notification 'No2' included recipients via the notification's "Users" field: 'User1'
Notification 'No2' included recipients because users would normally be excluded, but notification's "Force delivery" setting is enabled.: 'User2'
Notification 'No2' excluded recipients because user's device is inactive (see "cmn_notif_device.active"): 'User3' (x), 'User4'

No3 (Mandatory = True, Force delivery = False)

  • Final recipient list: User1, User2
  • Email logs show:
Notification 'No3' included recipients via the notification's "Users" field: 'User2' (x), 'User1'
Notification 'No3' excluded recipients because user's "Notification" setting is disabled: 'User3' (x), 'User4'

No4 (Mandatory = True, Force delivery = True)

  • Final recipient list: User1, User2
  • Email logs show:
Notification 'No4' included recipients via the notification's "Users" field: 'User2' (x), 'User1'
Notification 'No4' excluded recipients because user's device is inactive (see "cmn_notif_device.active"): 'User3' (x), 'User4'

 

The 'Who will receive' section should not be EMPTY. If the system is not able to generate recipients, it will not generate a target email.

Example of a Subscription based notification

I created/set 3 users

* aileen.mottern - aileen.mottern@example.com

* alejandra.prenatt - alejandra1.prenatt@example.com

* alissa.mountjoy - alissa.mountjoy@example.com

 

I created/set 1 cmdb_ci_computer

* "*DENNIS-IBM" (sys_id d0e8761137201000deeabfc8bcbe5da7)

 

I created a new email notification test_subscription on incident as follows:

Name = test_subscription
Table = incident
Inserted = checked
Updated = checked
Conditions = none
Users = aileen.mottern, alejandra.prenatt
Subject = Test test_subscription
Body = Test test_subscription

 

I hardcoded aileen.mottern and alejandra.prenatt as recipients and filled in the rest of the required fields, so it looks like this:

 

hardcoded notif.jpg

 

Then I created a new incident (e.g. INC0010021) to check the notification email results. The results confirm that notifications are sent to aileen.mottern and alejandra.prenatt.

aileen.mottern and alejandra.prenatt can choose not to receive the notification if the Mandatory field is set to false.

Subscribable option set to true

 

Subscribable notifications, also called CI-based notifications, build their recipient lists in two parts:

  1. It generates a list of recipients like subscription-based notifications (the same as with Subscribable set to false).
  2. The second part searches cmn_notif_message for users subscribed to the record received on the event, as configured in the notification.

They are called subscribable notifications because the option is called Subscribable and implies a "subscribe" action, similar to a subscription business model.


The second part is complex. You MUST set "Subscribable" to true and specify the 'Item table' and 'Item' fields in the notification.

A workflow to maintain the connections between the user preferences on cmn_notif_message and the records may be required.

The record-subscription-based part needs users subscribed to the 'Configuration Items' (or relevant column) on cmn_notif_message for the notification. The event itself will specify which specific record to match with the notification preferences.
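The two-part build can be sketched in plain JavaScript. This is my own structure, not a ServiceNow API; the subscription objects stand in for cmn_notif_message rows, and the sys_id below is the test CI used later in this post.

```javascript
// Sketch: two-part recipient build for a subscribable notification.
function subscribableRecipients(whoWillReceive, subscriptions, eventParm1, notificationName) {
  // Part 1: the normal subscription-based list from 'Who will receive'.
  var recipients = whoWillReceive.slice();

  // Part 2: users subscribed (on cmn_notif_message) to the record in
  // event.parm1 for this specific notification.
  subscriptions.forEach(function (s) {
    if (s.notification === notificationName &&
        s.recordSysId === eventParm1 &&
        recipients.indexOf(s.user) === -1) {
      recipients.push(s.user);
    }
  });
  return recipients;
}

var subs = [{ user: 'alissa.mountjoy',
              notification: 'test_subscription',
              recordSysId: 'd0e8761137201000deeabfc8bcbe5da7' }];

console.log(subscribableRecipients(
  ['aileen.mottern', 'alejandra.prenatt'], subs,
  'd0e8761137201000deeabfc8bcbe5da7', 'test_subscription'));
// ['aileen.mottern', 'alejandra.prenatt', 'alissa.mountjoy']
```

When event.parm1 matches no subscription, only the part-1 recipients remain, which is exactly what the first test below shows.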

 

There are a few subscribable notifications already provided on the system. They have 'Item table' set to cmdb_ci, sys_user, live_message, incident_alert, sys_user_group, cmn_cost_center, etc., and the notifications are triggered by "xxx.affected" events. These notifications do NOT look for the data in the configuration item field on the task record (e.g. incident), as some people with good common sense may think. This is a common misconception. The subscribable notifications on the system look for affected CIs related to the task.

 

If you are only interested in the record-subscription part, the 'Who will receive' section can be empty.

 

Example of a Subscribable notification

 

I've changed the 'Subscribable' to true (CHECKED), set 'Item table' = "cmdb_ci", and Item to event.parm1.

 

Name = test_subscription
Table = incident
Inserted = checked
Updated = checked
Conditions = none
Users = aileen.mottern, alejandra.prenatt
Subject = Test test_subscription
Body = Test test_subscription
Subscribable = checked
Item table = cmdb_ci
Item = event.parm1

 

 

I updated the incident created before (e.g. INC0010021) and checked the notification email.

Results: The notification is sent to aileen.mottern and alejandra.prenatt.

 

Here is an example of the previous two notifications sent:

subscribable notif.jpg

 

Note that the same notification is sent. This is because the "same as subscription-based notifications" part is executed. With "Subscribable" set to true, the notification tried to match event.parm1 with the 'Configuration Items' on cmn_notif_message and could not find any match.

 

In a nutshell, setting 'Subscribable' = true does not affect the recipient list yet.

 

Testing the record-subscription based part of Subscribable based notifications

 

The record-subscription based part needs:

  1. Users subscribed to the 'Configuration Items' on cmn_notif_message for that notification.
  2. The notification to specify which record to match to the notification (e.g. event.parm1).

To make the record-subscription-based work, you need a few users to subscribe to some Configuration Items.

I logged in as alissa.mountjoy, then opened "*DENNIS-IBM" on the cmdb_ci_computer table and clicked the "Subscribe" link.

subscribe link.jpg

To simplify this example, I opened the cmn_notif_message record for alissa.mountjoy and changed the Notification Message value to 'test_subscription' (from 'CI Affected').

test subscription.jpg

 

Next, I set up the notification to specify which record to match to the notification (e.g. event.parm1).

To simplify the testing, I've also changed the same notification to be fired on 'ci.affected'.

 

 

Name = test_subscription
Table = incident
Send When = Event is fired
Event name = ci.affected
Conditions = none
Users = aileen.mottern, alejandra.prenatt
Subject = Test test_subscription
Body = Test test_subscription
Subscribable = checked
Item table = cmdb_ci
Item = event.parm1

 

 

Then, I manually queued the event for that CI.

Note d0e8761137201000deeabfc8bcbe5da7 is the sys_id of the computer system.

 

// Search for the incident to set the event target
var target = new GlideRecord('incident');
target.addQuery('number', "INC0010021");
target.query();

// Queue the event
if (target.next()) {
    gs.eventQueue("ci.affected", target, "d0e8761137201000deeabfc8bcbe5da7", "*DENNIS-IBM");
}

 

The event generated was:

event generated.jpg

As event.parm1 matches the record on cmn_notif_message for the notification "test_subscription", you can see alissa.mountjoy was added to the notification recipients.

user added ci.jpg

 

To conclude, normal subscription-based notifications allow you to set the recipients based on the notification's 'Who will receive' information. This is simple and practical. Subscribable notifications allow you to notify users subscribed to records on the system, in addition to using the 'Who will receive' fields. Subscribable set to true could add extra unwanted searches if used incorrectly.

 

I have tested the examples with Chrome and Fuji.

 

 

More information here:
