
While I was working at a pharmaceutical company, I had to generate lots of documents for compliance reasons. In some cases, I had to pull data from ServiceNow to embed in Word documents, such as requirements, test scripts, and traceability matrices. This soon became very labor intensive, so I decided to automate the process using the ServiceNow ODBC driver and Microsoft Office VBA (Visual Basic for Applications, a macro language). This let me extract data from ServiceNow and use it in documents; the time savings were huge.

 

I originally published this on Share shortly after presenting it at Knowledge 14 in San Francisco. Since then, I lost access to Share and, unfortunately, wasn't able to restore my access. So I'm republishing it here after making some revisions.

 

Attached are

 

  1. The slide deck presented at K14, with instructions on how to run the samples in the Word file.
  2. A Word macro file containing VBA samples for Word, Excel, PowerPoint, and Outlook; the PDF files were created by exporting the Office files as PDF.

 

The Word demo file was created using Word 2010 but should work with later versions. Below is the abstract from the deck:

Beyond ITSM, ServiceNow is a powerful platform for any business service request management. Enhance the power by fully integrating with Microsoft Word, Excel, Outlook and PDF, from simple form-based report generation to live, interactive documents. Automatically pull contents from ServiceNow and apply formats to meet business needs. Add password protection or digital signature for enhanced security. Simple, practical do-it-yourself solutions will be demonstrated and best practices discussed. If you ever wanted more than the standard Excel and PDF export from ServiceNow, this is a must-attend session!

 

The slide deck includes:

 

  1. Solution – Requirements Tracker
  2. Solution – Overview of exporting to Word templates
  3. Solution – Prepare Word VBA
  4. Solution – Open ServiceNow Database
  5. Solution – Query ServiceNow Database
  6. Solution – Create Word Documents
  7. Solution – Export to PDF
  8. Solution – Protect Document (using Digital Signature)
  9. Solution – Create Excel Spreadsheets
  10. Solution – Create PowerPoint from Excel
  11. Solution – Create Outlook Email
  12. Alternate Solutions
  13. Tips – ODBC
  14. Tips – VBA

 

I hope you find it useful; please post questions if you run into anything.

 


Last time, we ran a demo test called "Basic UI Test" and created a Service Catalog Task record. We also saw that, by design, the record was permanently deleted after the test was completed, with the Automated Test Framework (ATF) "automatically taking care of rolling back changes after testing". Before going further with more tests, I'd like to take a deeper look at the various building blocks of ATF, both on the surface and behind the scenes; this will give us the lay of the land and help us know what to look for later as we create and run more tests.

 

APPLICATION MODULES

 

As noted in Part 1, the Navigator provides the following modules for the atf_test_admin role under the Automated Test Framework application menu:

 


 

Below is a quick rundown of what you get for each module:

 

Tests

 

This module shows a list of Tests. Tests include both UI and server tests.

 

Suites

 

This module shows a list of Test Suites. Test Suites are made up of one or more Tests and/or Test Suites.

 

Test Results

 

This module shows a list of Test Results. There may be multiple Test Results for a Test.

 

Suite Results

 

This module shows a list of Test Suite Results. There may be multiple Test Suite Results for a Test Suite.

 

Run > Client Test Runner

 

This module opens the Client Test Runner window. This window, labeled UI Test Runner, may also be opened from the Run Test dialog box. Without this window open, UI tests won't run.

 

Run > Test Run Queue

 

This module shows a list of Tests that are Waiting or Running. When running a Test Suite, this lists all Tests that are part of the Test Suite and its child Test Suites, if any. The list doesn't show the execution order, which limits its usefulness. NOTE: Tests can't be scheduled to run at a later time.

 

Occasionally, when I click on this module while Tests are running, I get the message "Security constraints prevent access to requested page" as shown in the screenshot below:

 

Run > Suite Run Queue

 

This module shows a list of Test Suites that have Started or are Running. As with the Test Run Queue, this list doesn't show the execution order. NOTE: Test Suites can't be scheduled to run at a later time.

 

Administration > Properties

 

This module shows the property settings. As noted in Part 1, these settings are unchecked by default, and the first checkbox must be checked before any tests can run.

 

Administration > Step Configurations

 

This module shows a list of Test Step Configurations, which are used to build Test Steps. ATF comes with several predefined Test Step Configs, and new ones can also be created. Test Steps can only be created from an existing Test Step Config.

 

Administration > Step Environments

 

This module shows a list of Test Step Environments, which are used in Test Step Configurations. There are two predefined environments: UI and Server. NOTE: this doesn't allow selecting a server instance, for example QA, DEV, etc.

 

Administration > Test Templates

 

This module shows a list of Test Templates, which are used to build Test Steps in a Test; one Test Template comes with the demo data. A Test Template contains a list of Test Step Configurations in a Glide List.

 

Administration > Step Configuration Categories

 

This module shows a list of Test Step Configuration Categories, which are used in the Add Test Step dialog box to filter Test Step Configurations when building Test Steps. There are two predefined categories: Form and Server.

 

TEST HIERARCHY

 

Using Test Suites, multiple Tests can be bundled. A Test Suite may contain Tests and/or other Test Suites. A Test may belong to more than one Test Suite, as shown in the below hierarchy, whereas a Test Suite may belong to only one parent Test Suite:

 

 

When Test Suite A is executed, here's what happens (within the same level in the hierarchy diagram, assume Tests on the left have a lower Execution Order and so are executed first):

 

  1. Test 1 runs and finishes
  2. Test 2 runs and finishes
  3. Test Suite B starts
  4. Test 3 runs and finishes
  5. Test 2 doesn't run again, since it already ran in Test Suite A
  6. Test Suite B finishes
  7. Test Suite A finishes

 

The test sequence is shown in the Run Test Suite dialog box while the Test Suite is running. When a Test Suite has both Tests and Test Suites, like Test Suite A above, the Tests are always executed first before Test Suites.

 

While ATF allows a Test to be used in multiple Test Suites, care must be exercised when there are dependencies. In the above example, Test 2 was used twice, first under Test Suite A and then under Test Suite B. We noticed that Test 2 didn't run again under Test Suite B because it had already run under Test Suite A. If Test 2 had a dependency on Test 3 in Test Suite B (e.g., using an output value from Test 3), it may not run correctly.

 

TABLE RELATIONSHIPS

 

I used GQL Pad to inspect the database and put together the below ERD (Entity Relationship Diagram) showing the various tables used by ATF and their relationships. This, in conjunction with the module descriptions and test hierarchy above, provides insight into the inner connections of ATF. For clarity, instead of showing all fields, only the reference fields are shown here to highlight the various relationships. We'll reference this later when we discuss how test records are connected and investigate any issues.

 

 

I noticed that not all tables had data after loading the demo data; we'll see how and if they're being used as we run more tests later. The Test Suite Test [sys_atf_test_suite_test] table is a many-to-many (m2m) join table that connects between Test Suites and Tests, as shown earlier in Test Hierarchy. Test Template [sys_atf_test_template] has a Glide List for Test Step Configs and is not explicitly related to any tables.
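
As an illustration, you can walk the m2m table from a background script like the sketch below; the test_suite, test, and execution_order field names here are my assumptions based on the ERD:

var m2m = new GlideRecord('sys_atf_test_suite_test');
m2m.orderBy('test_suite');       // group rows by parent Test Suite (field name assumed)
m2m.orderBy('execution_order');  // field name assumed
m2m.query();
while (m2m.next()) {
  // print each Suite -> Test membership row
  gs.info(m2m.test_suite.getDisplayValue() + ' -> ' + m2m.test.getDisplayValue());
}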

 

Next time, we'll resume running more Tests as well as creating new ones.

 


Last time, we ran a simple read-only demo test named "Verify That Form Defaults Are As Expected" that had only three test steps. This time, we'll continue running more demo tests and see how the Automated Test Framework (ATF) works.

 

FORM SUBMISSION

 

After logging in with the atf_test_designer role using Internet Explorer 11 (not impersonating, since the test itself runs impersonating another user), I chose to run the simplest write test among the demo tests, called "Basic UI Test". This test has only four steps: open a new 'Catalog Task' form, set some fields, and submit. Below is a screenshot of the test:

 

 

TEST EXECUTION

 

I clicked Run Test and switched over to the Client Test Runner window (labeled as UI Test Runner in the window header) to watch the test in action as shown below. The three field values set are highlighted in red rectangles:

 

 

When the test was completed, the status dialog box was updated indicating successful completion as below:

 

 

TEST RESULTS

 

When I clicked Go to Result, the results were shown as below:

 

 

In the Step Results tab, all four steps showed Success with summary output for each step. In the Test Log tab, it showed more detailed output with 92 entries.

 

SCREENSHOTS

 

The results included three screenshots attached:

 

  1. When the form first opened
  2. After the field values were set
  3. After the form was submitted

This time, not only is the form header obscuring the top portion of the form, with some elements (buttons and icons) missing from the header, but the third screenshot, taken after the submission, is also malformed, with misaligned field labels; you can compare it with the earlier screenshot from the Client Test Runner window above. As I noted in Part 1, I believe this is a side effect of the screenshots not being taken directly from the screen. Screenshots provide objective evidence for test results, so fidelity is an important prerequisite for regulated testing. I hope the ServiceNow team can address this issue.

 

MISSING TEST RECORD, BY DESIGN

 

Next, I impersonated "ATF User" and looked for the Service Catalog Task [sc_task] record SCTASK0010004 that had just been created and assigned to "ATF User". I navigated to Service Desk > My Work but couldn't find it. I removed the filter to see all tasks for everyone, but still couldn't find it. I tried the same by navigating to Service Catalog > Open Records > Tasks, but no luck, even after removing the filter conditions. Then I logged in with the admin role and looked inside Sys Audits [sys_audit] and Audit Deleted Records [sys_audit_delete], but found no trace. I then ran this Background Script:

 

var gr = new GlideRecord('sc_task');
gr.query();
while (gr.next()) gs.info(gr.number);  // list every sc_task record

gr = new GlideRecord('task');  // also check the parent task table
if (gr.get('number', 'SCTASK0010004')) gs.info(gr.number);

 

but still no luck. The only trace of it was that the Number Counter for SCTASK was showing the next number as 10,005.
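
For reference, the counter itself can be inspected from a background script; below is a sketch, where the sys_number_counter table name and its table/number field names are my assumptions:

var counter = new GlideRecord('sys_number_counter');  // table name assumed
counter.addQuery('table', 'sc_task');  // field name assumed
counter.query();
if (counter.next()) gs.info('Next SCTASK number: ' + counter.number);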

 

ATF provides data cleanup via Automated Test Framework > Administration > Table Cleanup. But it only applies to the Test Results [sys_atf_test_result] table, and it was set to run 2,592,000 seconds (30 days) after sys_created_on, so it has nothing to do with the missing Service Catalog Task [sc_task] record.
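
Those cleanup rules can also be inspected with a background script. Here's a sketch; I'm assuming the Table Cleanup module is backed by the sys_auto_flush table with tablename, matchfield, and age_in_seconds fields:

var flush = new GlideRecord('sys_auto_flush');  // table and field names assumed
flush.addQuery('tablename', 'sys_atf_test_result');
flush.query();
while (flush.next())
  gs.info(flush.tablename + ': cleaned ' + flush.age_in_seconds + 's after ' + flush.matchfield);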

 

I also inspected the test steps, especially the final step of "Submit a Form" and its Step Configuration under Administration to see if anything would delete the test record, but didn't see anything obvious. Looking through other settings under Administration didn't yield a clue either.

 

The ATF wiki page, Automated Test Framework, does mention:

The test framework automatically tracks and deletes any data created by running tests, automatically taking care of rolling back changes after testing.

So this feature must have deleted the record without a trace; the rollback logs below confirm it. I do think it's a nice feature, but I can foresee cases where you'd want to inspect your test records, especially if tests fail, and possibly take additional screenshots; it would be nice to give the user an option to delete test data later. I also like the idea of creating a large number of tickets for load testing, for which an automated tool is ideal (web services would be faster at creating a large number of records, but they're not the same as UI tests). I didn't check, but my thinking is this auto-deletion feature would also take care of cascading deletes.

 

Further review revealed some log entries for rollback in Rollback Logs [sys_rollback_log]. You can list the most recent entries with a quick background script like the sketch just below; a screenshot of the log entries follows it:
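
This is just a sketch; sys_rollback_log is the table named above, and sys_created_on exists on every table:

var log = new GlideRecord('sys_rollback_log');
log.orderByDesc('sys_created_on');  // newest entries first
log.setLimit(10);
log.query();
while (log.next()) gs.info(log.getDisplayValue());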

 

 

Next time, I'd like to take a look under the hood to see how ATF works.

 

UPDATES

 

2016-12-07 added rollback log and screenshot

 


I've been running a blog series on Data Sampling Techniques where the latest topic was on Statistical Sampling Using Scripted Filter. While the series is targeted towards those interested in data analysis and data quality, I felt a variation of the technique might be of general interest to a wider audience and use cases. So here's a technique on random sampling, that is, randomly selecting GlideRecords using a Script Include and a Scripted Filter. Some use cases might include:

 

  1. Randomly picking the top 3 prize winners among those who responded to the Service Desk Satisfaction Survey in the last month, the grand prize being an iPad! This would be a good way to increase response rates to any survey.
  2. As you're launching the new Service Portal, you want to promote the portal and self-service by giving away prizes; the more the customers use the portal, the better their chances of winning.
  3. You've noticed your Knowledge Base is being underutilized, so you'd like to promote its use by giving away prizes.
  4. With the year-end holidays approaching, you want to give out prizes to your customers as part of a marketing campaign.
  5. An auditor is asking for a random sample of 10 change records for the Accounts Payable system from the last 12 months.
  6. You, as Process Manager, would like to review 30 incident records from the last month as part of a Continual Service Improvement program.

 

There may be numerous other use cases not listed here and I'd like to hear about yours. For more analytical data sampling techniques, please see my blog series.

 

Let's add some fun and excitement!

 

OVERVIEW

 

Here's a quick overview of what we'll do; more technical details can be found in my other blog. Here, we'll focus more on various use cases.

 

  1. Create a Script Include with a function we'll call randomSample().
  2. Call randomSample() from Condition Builders using a Scripted Filter.
  3. Retrieve and review the records.

 

SCRIPT INCLUDE randomSample()

 

Let's first create a Script Include with the randomSample() function; this is similar to the statSample() function from my other blog, without the statistical part. Here's how the function works:

 

  1. Takes the table name, the encoded query string needed for querying, and the sample size; if a field other than sys_id is to be returned, its name is specified as well.
  2. Queries the table and gets the row count using .getRowCount().
  3. Picks a random row from the record set and saves the specified field value (or sys_id); repeats until the sample-size number of unique values has been collected.
  4. Returns the saved field values in an array (see the usage sketch below).
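
As a quick preview, once the Script Include below is in place, calling the function from a server-side script might look like this (a sketch; the table, query, and field are just examples):

var winners = randomSample('incident', 'active=1', 5, 'number');
gs.info(winners.join(', '));  // e.g., five random incident numbers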

 

To create a new Script Include,

 

  1. Log in with the admin role.
  2. Navigate to System Definition > Script Includes.
  3. Click on the New button to create a new Script Include.
  4. Fill in the form as in the screenshot below.
  5. The Script Include can be either Global or Scoped; if Scoped, make sure to jot down the API Name, to be used later in the Scripted Filter.
  6. Since we want to use this from other applications, set Accessible from to All application scopes.
  7. For this to be used as a Scripted Filter, Client callable must be checked.
  8. In the Script field, paste the script below (also attached as a file).
  9. Finally, click on Submit.

 

NOTE: I noticed an unexplained behavior: the function is called twice in a row when it's used in a Scripted Filter. The first call generates the list view, and the results of the second call are displayed in the Condition Builder's breadcrumb, resulting in different sets of data between the breadcrumb and the list view. This would go unnoticed in most cases because the repeated calls bring back the same results; due to the random nature of randomSample(), however, the return values would differ on each call. I added some special handling to the script to ensure the results are identical for all calls by sampling only during the first call. I also made sure the function can still be called from other scripts as a Script Include in case it's used outside of a Scripted Filter.

 

/** 
 * Performs random sampling against a filtered list of GlideRecords.
 * Takes table name and encoded query string then returns an array of specified field or sys_id of sample records.
 * 
 * SCRIPTED FILTER EXAMPLE
 *
 * EXAMPLE 1: GET RANDOM SAMPLE USING ENCODED QUERY STRING
 * [Sys ID] [is] [javascript:randomSample('incident', 'active=1', 30)]
 * returns 30 sample records from the population size of 54,939
 *
 * NOTE: Scripted Filter runs in rhino.sandbox context so not all classes/objects are available for scripting.
 * NOTE: The function is run twice in Scripted Filter somehow, so use randomSampleRecords to run only once.
 *
 * MIT License http://tldrlegal.com/l/mit
 * Copyright (c) 2016 John.Chun @snowaid.com
 *
 * @param {string} tableName - name of table for GlideRecord 
 * @param {string} encodedQuery - encoded query string 
 * @param {int} sampleSize - number of records to have in sample 
 * @param {string} fieldName - name of field whose unique values are to be returned
 * @return {string[]} array of sys_id of random sample records
 */

var randomSampleRecords = [];  // this is in rhino.sandbox context in Scripted Filter; otherwise in global

function randomSample(tableName, encodedQuery, sampleSize, fieldName) {

  if (randomSampleRecords.length) return randomSampleRecords;  // in Scripted Filter, force to run only once
  try {
    //var gr = new GlideRecordSecure(tableName);  // enforce ACL; GlideRecordSecure undefined in Scripted Filter in Helsinki
    var isScriptedFilter = !this.GlideRecordSecure;  // use the fact that GlideRecordSecure is undefined in Scripted Filter
    var gr = new GlideRecord(tableName);
    if (!gr.isValid()) throw 'Invalid table name "' + tableName + '".';
    if (!gr.canRead()) throw 'No permission to read from "' + tableName + '".';  // test ACL for table
    fieldName = fieldName || 'sys_id';  // default to sys_id
    if (gr.getElement(fieldName) == null) throw 'Field "' + fieldName + '" not found.';
    if (!(sampleSize > 0)) throw 'Sample size must be a positive integer.';
    
    // get population
    if (encodedQuery) gr.addQuery(encodedQuery);
    gr.query();  // to getRowCount()
    var population = gr.getRowCount();
    if (!population || population < sampleSize) throw 'Total number of rows ' + population + ' is less than sample size ' + sampleSize + '.';

    // throw dice and get a random sample
    var offsets = [], records = [];
    while (records.length < sampleSize) {
      var offset = Math.floor(Math.random() * population);  // 0 <= offset < population
      if (indexOf(offsets, offset) >= 0) continue;  // dupe offset, so rethrow dice
      offsets.push(offset);
      if (offsets.length >= population) break;  // tried entire population
      gr.chooseWindow(offset, offset + 1);  // works in global & scoped
      gr.query();
      if (gr.next()) {
        var value = gr.getElement(fieldName).toString();
        if (indexOf(records, value) < 0) records.push(value);
      }
    }

    if (isScriptedFilter) randomSampleRecords = records;  // in Scripted Filter, save randomSampleRecords
    return records;
  }
  catch(e) {
    return 'ERROR: ' + e;  // return error message
  }
  
  // emulates Array.prototype.indexOf() in older JavaScript
  function indexOf(arr, val) { for (var i = 0; i < arr.length; i++) if (arr[i] == val) return i; return -1; }
}

 

SERVICE DESK SATISFACTION SURVEY WINNERS

 

Let's pick three lucky winners among those who responded to the Service Desk Satisfaction Survey last month. If someone responded to more than one survey, their odds of winning increase (if not, they may not be motivated to respond to subsequent surveys). Sent-out surveys are stored in the Survey Instances [asmt_assessment_instance] table (the Survey Responses [asmt_assessment_instance_question] table contains a row for each question answered; unless you want to increase the odds based on the number of questions answered, the Instance table is the better choice). We'll look only at the Service Desk Satisfaction Survey and whom it was sent to, in the user field. We'll also filter the taken_on field to last month only. Since we're selecting people, we'll do all this in a list view for Users. Below is a summary of the parameter values:

 

List View: Organization > Users
tableName: asmt_assessment_instance
encodedQuery: metric_type.name=Service Desk Satisfaction Survey^taken_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()
fieldName: user.name
Scripted Filter: javascript:randomSample('asmt_assessment_instance', 'metric_type.name=Service Desk Satisfaction Survey^taken_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, 'user.name')

 

By default, the return value is an array of sys_ids. However, you can pick any other field; for example, we can pick user.name, dot-walking to the user record's name field. We need to set the Condition Builder to

 

[Name] [is] [javascript:randomSample('asmt_assessment_instance', 'metric_type.name=Service Desk Satisfaction Survey^taken_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, 'user.name')]

 

Make sure this is the only filter condition. When you run this, the result is

 

 

The breadcrumb shows the three winners, in the order they were picked at random. The list view under it shows the three user records, in the sort order you defined, which, in this case, is by Name in descending order. If you have first, second, and third prizes, you'll want to use the breadcrumb. Depending on your rules, you may want to add a few backup winners so if the winners don't claim their prizes within a certain time, the prizes are given to backup winners. You may also want to use backup winners in case Service Desk staff members are picked but disqualified.

 

Every time you refresh this, you'll get different winners; you may want to make sure to have that one official drawing (refresh) for the prizes.

 

If your organization has people with the same name, you may want to use User ID instead since it should be unique:

 

[User ID] [is] [javascript:randomSample('asmt_assessment_instance', 'metric_type.name=Service Desk Satisfaction Survey^taken_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, 'user.user_name')]

 

 

If you really insist on using sys_id, here's what it looks like (user is a reference field that returns sys_id from the sys_user table):

 

[Sys ID] [is] [javascript:randomSample('asmt_assessment_instance', 'metric_type.name=Service Desk Satisfaction Survey^taken_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, 'user')]

 

 

Notice the breadcrumb is not as useful as before unless you can tell who's who from the sys_ids (you can hover over the User ID column and look at the link displayed at the bottom of the browser, if needed).

 

If you don't have permission to the Users [sys_user] table, you can run the Scripted Filter from other list views, such as Incident. Simply navigate to Incident > Open and set the Condition Builder as below:

 

[Number] [is] [javascript:randomSample('asmt_assessment_instance', 'metric_type.name=Service Desk Satisfaction Survey^taken_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, 'user.name')]

 

You can use Number or any other string field. This will show

 

 

This obviously doesn't show the names in the list view since it's not a Users list view but the names appear in the breadcrumb, as shown in the screenshot above.

 

KNOWLEDGE BASE USERS

 

You've noticed your Knowledge Base is being underutilized, so you'd like to promote its use by giving away prizes. You'll pick 3 winners from those who viewed KB articles last month; the more articles they viewed, the higher their odds of winning. The data on who viewed which knowledge base article is stored in the Knowledge Use [kb_use] table. We'll use the sys_updated_on and user fields to run conditions similar to the ones before:

 

List View: Organization > Users
tableName: kb_use
encodedQuery: sys_updated_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()
fieldName: user.name
Scripted Filter: javascript:randomSample('kb_use', 'sys_updated_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, 'user.name')

 

We need to set the Condition Builder to

 

[Name] [is] [javascript:randomSample('kb_use', 'sys_updated_onONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 3, 'user.name')]

 

Make sure to run this on the first day of the month; if a user viewed the same article last month as well as this month, the sys_updated_on field will only show this month's date, removing the record from the pool.

 

INCIDENT SAMPLE RECORDS FOR CONTINUAL SERVICE IMPROVEMENT

 

You, as Process Manager, would like to review 30 incident records from the last month as part of a Continual Service Improvement program. Let's look only at the closed incident tickets and randomly select 30 records:

 

List View: Incident > Open (or any Incident list view)
tableName: incident
encodedQuery: closed_atONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()
fieldName: (not specified; defaults to sys_id)
Scripted Filter: javascript:randomSample('incident', 'closed_atONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 30)

 

We need to set the Condition Builder to

 

[Sys ID] [is] [javascript:randomSample('incident', 'closed_atONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 30)]

 

This returns an array of sys_ids; another option is to return an array of Numbers and match on Number:

 

[Number] [is] [javascript:randomSample('incident', 'closed_atONLast month@javascript:gs.beginningOfLastMonth()@javascript:gs.endOfLastMonth()', 30, 'number')]

 

We've looked at a few practical use cases. This should give you ideas on how to use random sampling so you can use it for other cases. Enjoy and have fun!

 


The new Automated Test Framework (ATF) in the Istanbul release is a long-awaited feature that I'm sure many people are excited about. Test automation has been a focus area of mine for some time, especially in highly regulated industries under SOX (financial) and GxP (life sciences) regulations where testing is a critical part of compliance. So I decided to take it for a test drive when it first became available (using glide-istanbul-09-23-2016__patch0-10-05-2016); my understanding may be lacking at this point, so any feedback/clarification/correction would be greatly appreciated. Congratulations to the team that delivered this, whom I had the pleasure of meeting during K16!

 

ATF and TEST MANAGEMENT

 

The Test Management application has been available since the Fuji release. It has test cases and suites for tracking manual test activities for ServiceNow or any other application. The first thing I noticed was that ATF and Test Management are two separate applications. This means those who've been using Test Management for ServiceNow testing will need to keep track of testing in two separate places. The same applies to those who have a mix of manual and automated tests, which is a common scenario. I wish ATF were an extension of Test Management so all tests could be managed in one place; manual tests could then be progressively converted to automated tests without losing continuity, and a single dashboard could provide progress for all tests.

 

ROLES

 

ATF comes with two predefined roles: atf_test_admin, which has all permissions, and atf_test_designer, which can create tests, among other things. I think two additional roles might be useful, similar to those predefined in Test Management: atf_test_manager and atf_test_tester. The atf_test_manager role would manage the creation and execution of tests, whereas the atf_test_tester role could only run tests.

 

I created two users with each of the predefined roles and here's what they see in the Navigator:

 

[Screenshots: the Navigator as seen by atf_test_admin (left) and atf_test_designer (right)]

 

TESTS AND SUITES

 

ATF was already activated in the Istanbul instance, but it didn't have any demo data loaded, except one template. So I logged in with the admin role, navigated to System Definition > Plugins to open Automated Test Framework, and clicked on Related Links > Load Demo Data Only as shown in the screenshot below:

 

The demo data adds 14 Tests and 14 Suites.

 

ENABLE TEST EXECUTION

 

Before executing tests, the feature must be explicitly enabled. Log in with the atf_test_admin role and navigate to Automated Test Framework > Administration > Properties. Here you'll find two checkboxes, as shown in the screenshot below. Make sure to check at least the first checkbox to be able to execute tests.

 

 

TEST EXECUTION

 

Using Internet Explorer 11, I logged in as a user with the atf_test_designer role and opened one of the simplest demo tests. This read-only test, named "Verify That Form Defaults Are As Expected", has only three steps that check for default values on a Catalog Task form, impersonating "ATF.User"; this user has neither the atf_test_admin nor the atf_test_designer role.

 

 

When Run Test was clicked, a dialog box showed up with status as in the screenshot below (I noticed the time displayed here is in PST, the system time zone, although I set the user's time zone to EST):

 

 

When "Click here" of "Click here to open a Client Test Runner" was clicked (see the screenshot above), a new Client Test Runner window opened (if this window isn't open, the Test won't proceed). At the top of the window, it showed the progress with a blue progress bar for each step. In the Execution Frame tab, the form was displayed inside an iframe (more on this later), as shown in the screenshot below:

 

 

When the execution was completed, the status dialog box was updated with the results, as shown in the screenshot below:

 

 

TEST RESULTS

 

When the Close button was clicked, the Test form showed the test result in the Test Results tab as shown below:

 

 

When the test result link was clicked, the Test Result page opened, showing the details of the test as below:

 

 

The test result page showed step-by-step results in the Step Results tab and included two screenshots as attachments. Below is one of them; notice it doesn't quite look the same as what we saw in the Client Test Runner window earlier: the top is obscured by the tall gray header section, and buttons and icons are missing from the header, indicating the screenshots were not captured directly from the screen (more on this later):

 

 

The Test Log tab contains quite a bit of information; for a three-step test, it produced 43 test log entries as shown below:

 

 

When each log entry was clicked, the Test Result Item form showed more details as below:

 

 

The simple read-only test passed with flying colors, with no major issues.

 

Next time, let's take a look at some more complex tests.

 

UPDATES

 

2016-12-07 added ENABLE TEST EXECUTION section.

 


GlideFilter is used to determine if a GlideRecord meets the conditions specified in an encoded query string. For instance, it's used in the business rule condition builder to determine under which conditions the business rule must run. It's not well documented and its usage is slightly different between global and scoped apps, so there seems to be some confusion, especially on how it's used with regular expressions. I'll share what I know here and update this based on your feedback/additions/corrections.

 

The only official documentation available is the Scoped GlideFilter API Reference. However, a variation is available in the global scope as well. Let's take a look at how they're used and how they differ (as of the Helsinki release).

 

SCOPED GlideFilter

 

Scoped GlideFilter is an object that is used without instantiation. It has only one method:

 

boolean checkRecord(GlideRecord gr, string filter, boolean matchAll)  // returns true if gr meets the conditions specified in filter

 

gr: a single GlideRecord

filter: an encoded query string

matchAll: (Optional) if true (default), all conditions in filter must match to return true; if false, any single true condition returns true (see below)

 

Here's an example:

 

var gr = new GlideRecord('incident');
gr.query();
var filter = 'active=true^state=2';  // active && state == 'In Progress'
while (gr.next()) {  // iterate through records
  if (GlideFilter.checkRecord(gr, filter)) gs.info(gr.number);  // test each record for the filter conditions
}

 

This returns all incident records that are active and whose state is 'In Progress'. Some points to note:

 

  1. GlideFilter is not instantiated (no need for new GlideFilter()).
  2. GlideFilter.checkRecord() is applied to each record after gr.query() is executed. In other words, GlideFilter is not used by gr.query().

 

USING MATCH_RGX FOR REGULAR EXPRESSION MATCH

 

GlideFilter also allows the use of regular expressions in the filter using the MATCH_RGX operator. Here's an example, modified from the previous one:

 

var gr = new GlideRecord('incident');
gr.query();
var filter = 'active=true^state=2^numberMATCH_RGXINC.*';  // active && state == 'In Progress' && /^INC.*$/m.test(number)
while (gr.next()) {  // iterate through records
  if (GlideFilter.checkRecord(gr, filter)) gs.info(gr.number);  // test each record for the filter conditions
}

 

This returns all incident records that are active, whose state is 'In Progress', and whose number starts with 'INC' (since all incident numbers start with 'INC', this doesn't really do anything). What's not well known, and often a source of confusion, is that MATCH_RGX already includes /^ and $/m, the start and end of string in multiline mode; the regular expression specified in the filter condition, therefore, is what goes between /^ and $/m. So

 

fieldMATCH_RGXcondition

 

is equivalent to, in JavaScript,

 

/^condition$/m.test(field)

 

Also, the test is case sensitive (we'll see below that there's a way to make it case insensitive in the global scope, but not in scoped GlideFilter). Some examples and their JavaScript equivalents:

 

fieldMATCH_RGXabc === /^abc$/m.test(field)  // field exactly matches "abc"
fieldMATCH_RGXabc.* === /^abc.*$/m.test(field)  // field starts with "abc"
fieldMATCH_RGX.*abc === /^.*abc$/m.test(field)  // field ends with "abc"
fieldMATCH_RGX.*abc.* === /^.*abc.*$/m.test(field)  // field contains "abc"

 

It's worthwhile noting that MATCH_RGX is not available in GlideRecord's .addEncodedQuery() method (I wish it was); this indicates there are different flavors of encoded queries in ServiceNow.
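
So if you need a regular-expression condition on a query, one workaround is to let .addEncodedQuery() do the coarse filtering and then apply the MATCH_RGX condition record by record with GlideFilter; here's a sketch (the table and pattern are just examples):

var gr = new GlideRecord('incident');
gr.addEncodedQuery('active=true');  // coarse filter the query engine can handle
gr.query();
while (gr.next()) {
  // regex post-filter, since .addEncodedQuery() doesn't support MATCH_RGX
  if (GlideFilter.checkRecord(gr, 'short_descriptionMATCH_RGX.*email.*'))
    gs.info(gr.number);
}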

 

GLOBAL GlideFilter

 

The examples used above for Scoped GlideFilter also work in the global scope. In addition, Global GlideFilter provides the following features:

 

new GlideFilter(string filter, string title)

 

filter: an encoded query string

title: title of the filter

 

boolean match(GlideRecord gr, boolean matchAll)  // returns true if gr meets the conditions specified in filter from GlideFilter instance

 

gr: a single GlideRecord

matchAll: if true, all conditions in filter must match to return true; if false, any single true condition returns true (see below). This is not optional, unlike in .checkRecord().

 

void setCaseSensitive(boolean caseSensitivity)  // sets whether .match() is case sensitive (does not apply to .checkRecord())

 

caseSensitivity: if true, .match() is case sensitive (does not apply to .checkRecord()).

 

Here are some additional properties and methods:

 

boolean caseSensitive  // true if .match() is case sensitive
string filter  // encoded query string of the filter
string getFilter()  // returns encoded query string of the filter
string title  // title of the filter
string getTitle()  // returns the title of the filter
void setDisplayTitle(String displayTitle)  // sets the display title of the filter
string getDisplayTitle()  // returns the display title of the filter (returns title if display title not set)
string script  // a string representing the source code of the JavaScript function for evaluating the filter conditions

 

Here's an example:

 

var gr = new GlideRecord('incident');
gr.query();
var filter = 'active=true^state=2^numberMATCH_RGXinc.*';  // active && state == 'In Progress' && /^inc.*$/i.test(number)
var gf = new GlideFilter(filter, '');
gf.setCaseSensitive(false);
while (gr.next()) {  // iterate through records
  if (gf.match(gr, true)) gs.info(gr.number);  // test each record for the filter conditions
}

 

Just like what we saw earlier, this returns all incident records that are active, whose state is 'In Progress', and whose number starts with 'inc' (case insensitive). .match() works similarly to .checkRecord(), but with the added benefit of case-sensitive or case-insensitive matching.

 

matchAll FLAG

 

The .script property, which shows the JavaScript function that evaluates the filter conditions, gives a clue as to how the matchAll flag works. Using the above example:

 

matchAll = true

function trecord() {return !!(current.active == true && current.state.toString().toLowerCase() == 2 && RegExp('^inc.*$','mi').test(current.number.toString()));}trecord();

 

matchAll = false (effectively all conditions become OR conditions)

function trecord() {if (current.active == true ) return true;if (current.state == 2 ) return true;if (RegExp('^inc.*$','m').test(current.number.toString()) ) return true; return false;}trecord();

 

SUMMARY

 

Here's a summary of how GlideFilter is used in both global and scoped apps:

 

  1. GlideFilter is used to test encoded query filter conditions one record at a time; it can't be used to filter a bulk record set.
  2. GlideFilter supports regular expression matches using the MATCH_RGX operator: fieldMATCH_RGXcondition is equivalent to /^condition$/m.test(field) in JavaScript, with the /^ and $/m automatically added.
  3. In scoped apps,
    • GlideFilter is an object that is used without having to instantiate.
    • GlideFilter has only one method: .checkRecord().
  4. In global apps,
    • GlideFilter can be used the same way as in scoped apps.
    • GlideFilter can be instantiated, which gives two main methods: .match(), .setCaseSensitive().
  5. Encoded queries in GlideFilter support MATCH_RGX but those used in GlideRecord's .addEncodedQuery() don't.

 

 

 

UPDATES

 

2016-11-21 corrected matchAll parameter and added .script example output; corrected title parameter; added multiline flag to regular expressions; added additional properties and methods for GlideFilter().

 


Last time, we built a simple app called GQL Pad that provided a user interface for editing and running GQL statements; it also showed results in various formats that we could quickly inspect on the same web page. This time, let's walk through the steps for installing the app on your own instance from GitHub using the Helsinki Studio. If you're already familiar with importing files from GitHub, you may skip most of the steps below.

 

 

FORKING A GITHUB REPOSITORY

 

If you don't already have one, you'll need an account at GitHub.com. You can sign up for free as long as your repositories are public. (GitLab provides private repositories for free, and you can also use it with the Helsinki Studio.) I'll be using GitHub throughout this post.

 

After logging into GitHub, point your browser to GitHub - snowaid/GQL_v0.1: Glide Query Language initial version from GQL blog where I saved a copy of GQL v0.1 in its own repository. Towards the top right corner, click on "Fork" as shown in the screenshot below:

 

 

This makes a copy of the repository in your own account, which should take only a few seconds. After it's done, you'll be back in your account and see a screen similar to the one below. Towards the top left, make sure you see {your username}/GQL_v0.1 with "forked from snowaid/GQL_v0.1" below it.

 

 

Then click on the green "Clone or download" button and copy the URL as shown below (we'll paste it shortly):

 

 

HELSINKI STUDIO

 

Now, let's open ServiceNow and navigate to System Applications > Studio and click on "OPEN STUDIO" as shown below:

 

 

In the next screen, click on "Import from Source Control":

 

 

Then paste the URL copied from above into the "URL" field, fill in "User name" and "Password", and click "Import" (NOTE: although the dialog box says "The account credentials you supply must have read access to the remote repository", you must provide credentials with read/write access or you'll get an error):

 

 

After several seconds, you'll see a success message like below:

 

 

Click on the blue "Select Application" button and then click on "gql" as shown below:

 

 

You'll then be taken to the "Welcome to Studio" page with the "gql" application loaded in the Application Explorer to the left:

 

 

Now you can modify the application files to your liking in the Studio and stash or commit back to GitHub using the "Source Control" menu.

 

RUNNING GQL PAD

 

Once the GQL app is loaded onto your instance, you can run GQL Pad by navigating to GQL > GQL Pad as shown below:

 

 

OPEN SOURCE COLLABORATION

 

The GQL app is open source, licensed under the very permissive MIT license, and you're welcome to make contributions. While the app you just installed will stay frozen in the GQL_v0.1 repository on GitHub (to ensure it stays in sync with what's been covered in this blog series), the working copy of the app is maintained in the separate GQL repository. If you'd like to make contributions, please fork that repository.

 

LIMITATIONS AND POTENTIAL ENHANCEMENTS

 

This has been an attempt at demonstrating a proof of concept and we've successfully shown what we set out to accomplish: build a web service that takes SQL-like statements and returns result sets. Along the way, we created the GQL class as a Script Include, which can be called from any server-side scripts. We then built a processor to handle web requests and responses. To interactively demonstrate this, we built a UI Page app called GQL Pad. Everything was kept simple, with very little error handling. The app will work OK for positive cases, but may not perform gracefully when errors are encountered.

 

Here are some limitations and potential enhancements:

 

  1. GQL syntax checking: add to UI and GQL class.
  2. GQL syntax highlighting: add to GQL Pad.
  3. Error checking and handling: check for invalid syntax / column / table / encoded query.
  4. Security: may need to use GlideRecordSecure() instead of GlideRecord() to enforce ACL.
  5. Add ability to run SELECT *.
  6. Add calculated fields.
  7. Add aggregate functions: COUNT(), MAX(), MEAN(), etc.
  8. Add UNION: return result sets from multiple SELECT statements as one.
  9. Add JOIN.
  10. Add NEST: allow nesting of result sets from reference tables or related lists.
  11. Add SAMPLE: return random rows based on statistical sampling.
  12. Add XML output format using XMLHelper (can't use in scoped app); alternative might be to use libraries like x2js.

 

This concludes this series for now. I'll continue to update the GQL repository as I receive your open-source contributions and add enhancements. I may also post follow-up articles if any progress is worth sharing. Thanks for staying with me throughout the series!

 


Last time, we saw how the processor handles web service requests and responses, executing GQL statements and returning encoded result sets. This time, let's build a simple app where we can edit a GQL statement, run it, and see the results in various formats; we'll call it GQL Pad. This will showcase everything we've done so far and interactively demonstrate the capabilities of GQL.

 

 

REQUIREMENTS

 

Here are some requirements for GQL Pad:

 

  a. GQL Pad is a web application that runs on the ServiceNow Helsinki release.
  b. User must have the "itil" role.
  c. GQL Pad provides a user interface (UI) with the following features:
    1. User can enter and edit a GQL statement in a text editor.
    2. User can select the desired output format among CSV, HTML, JSAN, JSON.
    3. User can execute the GQL statement in the editor.
    4. User can see the results of the execution on the same web page (rather than downloading them).
    5. User can see how long the execution took.
  d. GQL Pad provides a GQL syntax checker in the editor. [future enhancement]
  e. GQL Pad provides GQL syntax highlighting in the editor. [future enhancement]
  f. GQL Pad provides a Database Schema Browser showing a list of tables and columns that can be entered into the editor. [future enhancement]

 

UI PAGE CONFIGURATION

 

To get started, let's create a new UI Page in the "gql" application scope, as shown in the screenshot below:

 

 

Endpoint is what we need to use in the URL to open this UI Page; we'll use it when we create an application module for this. Be sure to check the Direct checkbox: this allows us to bypass Jelly, since we'll be using AngularJS instead, and to use our own simpler HTML without any ServiceNow additions (you can later view the page source and compare it with other out-of-the-box pages).

 

HTML CONTENT

 

The HTML content looks like

 

<style>
  body { margin-bottom:20pt; }

  label { margin-right: 9px; }

  table { font:9pt Arial; border-collapse:collapse; }
  th,td { padding:3px; border:1px solid #aaa; empty-cells:show; vertical-align:top; }
  th { font-weight:bold; text-align:left; background-color:#eee; }
  tr:nth-child(even) { background-color:#fafafa; }
  tr:hover { background-color:lightyellow; }
</style>

<body ng-app="gqlpad">
  <h3>GQL (Glide Query Language) Pad</h3>
  <div ng-controller="EditorCtrl">
    <textarea ng-model="gql" rows="20" style="font:9pt Consolas; width:100%;"></textarea>
    <br/>
    <label>Output format:
      <select ng-model="format" ng-options="format.name for format in formats"></select>
    </label> 
    <button ng-click="run()" style="width:80px;" accesskey="r"><u>R</u>un</button>
    <span ng-bind="timer" style="color:gray; font:9pt Consolas; margin-left:30px;"></span>
    <br/>
    <div ng-bind-html="result" style="font:9pt Consolas; white-space: pre-wrap;"></div>
  </div>
</body>

<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.5.6/angular.min.js"></script>

 

The HTML adds various AngularJS attributes that start with "ng-"; we'll use them later in the script.

 

Line 13 adds the AngularJS directive "ng-app" and is the start of the GQL Pad app.

 

Line 15 is the start of the ng-controller "EditorCtrl", which we'll see more of in the script.

 

Line 16 adds a text editor using the "<textarea>" tag.

 

Line 19 adds a <select> list of desired output formats (the options will be added by the script later).

 

Line 21 is the Run button (the access key "r" allows pressing Alt-R on a Windows keyboard to invoke the button's click action).

 

Line 22 is a placeholder for displaying execution duration.

 

Line 24 is a placeholder for displaying result.

 

Line 28 adds the <script> tag for the AngularJS source, which we're fetching from a CDN (content delivery network).

 

CLIENT SCRIPT

 

The Client Script looks like

 

/**
* GQL (Glide Query Language) Pad
*/

var app = angular.module('gqlpad', []);

app.controller('EditorCtrl', ['$scope', '$http', '$sce', function($scope, $http, $sce) {

  // set initial value
  $scope.gql = 'SELECT  -- sample GQL statement\r  number [Number]\r  , short_description [Title]\r  , dv_state [State]\r  , caller_id.email [Caller Email]\r  , caller_id.manager.email [Manager Email]\r  , dv_cmdb_ci [CI]\r  , cmdb_ci\rFROM incident\rWHERE active=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumber\rLIMIT 5,10';
  
  $scope.formats = [
    {name:'CSV' },
    {name:'HTML'},
    {name:'JSAN'},
    {name:'JSON'},
  ];
  $scope.format = $scope.formats[1];  // default to HTML

  $scope.run = function() {
    var timer = Date.now();
    $scope.timer = '';
    $scope.result = '';
    var params = { gql:$scope.gql, format:$scope.format.name, now:Date.now() };  // add now() to prevent caching
    $http.get('/x_64935_gql_proc.do', { params: params }).then(function(response) {

      var data = response.data;
      
      $scope.result = $sce.trustAsHtml(typeof data == 'object' ? JSON.stringify(data) : data);  // stringify to show JSON on page
      $scope.timer  = 'took ' + ((Date.now() - timer) / 1e3) + ' seconds';
    });
  }
}]);

 

This is a simple AngularJS script that works with the above HTML to do the following:

 

Line 5 defines the GQL Pad app from HTML Line 13.

 

Line 7 is the beginning of the editor controller, which was declared on HTML Line 15.

 

Line 12 lists the available formats for the output; these will populate the <select> tag on HTML line 19.

 

Line 20 is executed when the Run button is clicked. It clears the output fields and sends a GET request to the processor we saw in Part 6, passing the parameters gql and format in the query string.
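
For illustration, the resulting request might look something like this, with the gql value URL-encoded and truncated here (an assumed example, not captured output; the now value is just a timestamp used to defeat caching):

/x_64935_gql_proc.do?gql=SELECT%20number%20%5BNumber%5D%2C%20...&format=HTML&now=1466000000000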

 

Line 27 is the start of the callback function that's executed when a response is returned from the processor. It simply shows the result in the result <div> on HTML line 24; the result is converted to a string to keep it from opening in Excel (in the case of CSV) or in another window (in the case of JSON), per Requirement c.4. It also shows the execution duration in the <span> defined on HTML line 22.

 

APPLICATION AND MODULE MENUS

 

To cap this off, let's add application and module menus. Create a new application menu with the settings in the screenshot below:

 

We set the Roles to "itil" as specified in Requirement b and set Category to "Custom Applications". After saving the application menu configuration page, let's add a new module using the New button in the Modules related list. In the Module configuration page, set the values as in the screenshot below:

 

 

We set Roles to "itil", again as per Requirement b. For Link type, set it to "URL (from Arguments)" and set Arguments to "x_64935_gql_pad.do", which is the Endpoint from the GQL Pad UI Page configuration setting.

 

After refreshing the Navigator (or the entire page), let's make sure the new "GQL" application and "GQL Pad" module menus show up in the Navigator.

 

GQL PAD IN ACTION

 

Here's a screenshot of GQL Pad in action. When you navigate to GQL > GQL Pad, the GQL Pad UI Page opens up with the default GQL statement in the editor. When you click on the Run button, the result is displayed below the Run button. The screenshot below is showing Incident demo data; your results may look different.

 

 

You can change Output format to "CSV" and click Run, which gives the below output:

 

You can change Output format to "JSAN" and click Run, which gives the below output:

 

You can change Output format to "JSON" and click Run, which gives the below output:

 

You can use GQL Pad to try out various GQL statements from other tables and inspect data, which may be useful for some debugging scenarios. You can also use it to test encoded query strings. The beauty is it's very interactive and fast.

 

This wraps up our work to date. Next time, I'll show you how to get the app from GitHub and discuss limitations and potential enhancements for the future.

 


Last time, we worked on the retriever that takes a GQL query and returns the result set. This time, let's take a look at the data format of the result set.

 

For the GQL query below

 

SELECT
  number [Number]
  , short_description [Title]
  , dv_state [State]
  , caller_id.email [Caller Email]
  , caller_id.manager.email [Manager Email]
  , dv_cmdb_ci [CI]
  , cmdb_ci
FROM incident
WHERE active=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumber
LIMIT 5,10

 

the retriever returns the 10 records in JSON shown below (from Incident demo data):

 

{"records":[{"CI":"EXCH-SD-05","Caller Email":"jerrod.bennett@example.com","Manager Email":"","Number":"INC0000050","State":"Active","Title":"Can't access Exchange server - is it down?","cmdb_ci":"281190e3c0a8000b003f593aa3f20ca6"},{"CI":"","Caller Email":"employee@example.com","Manager Email":"","Number":"INC0000031","State":"Active","Title":"When can we get off Remedy? UI is killing us","cmdb_ci":""},{"CI":"IBM-T42-DLG","Caller Email":"don.goodliffe@example.com","Manager Email":"","Number":"INC0000025","State":"Active","Title":"I need more memory","cmdb_ci":"469facd7a9fe1981015830c43428ca2d"},{"CI":"","Caller Email":"taylor.vreeland@example.com","Manager Email":"","Number":"INC0000018","State":"Active","Title":"Sales forecast spreadsheet is READ ONLY","cmdb_ci":""},{"CI":"","Caller Email":"employee@example.com","Manager Email":"","Number":"INC0000017","State":"Awaiting User Info","Title":"How do I create a sub-folder","cmdb_ci":""},{"CI":"","Caller Email":"bow.ruggeri@example.com","Manager Email":"","Number":"INC0000016","State":"Active","Title":"Rain is leaking on main DNS Server","cmdb_ci":""},{"CI":"Saints and Sinners Bingo","Caller Email":"fred.luddy@example.com","Manager Email":"","Number":"INC0000015","State":"Active","Title":"I can't launch my VPN client since the last software update","cmdb_ci":"46c7318aa9fe198100c76003f0bc82e9"},{"CI":"","Caller Email":"employee@example.com","Manager Email":"","Number":"INC0000007","State":"Awaiting User Info","Title":"Need access to sales DB for the West","cmdb_ci":""},{"CI":"","Caller Email":"employee@example.com","Manager Email":"","Number":"INC0000003","State":"Active","Title":"Wireless access is down in my area","cmdb_ci":""},{"CI":"FileServerFloor2","Caller Email":"","Manager Email":"","Number":"INC0000002","State":"Awaiting Problem","Title":"Unable to get to network file shares","cmdb_ci":"b0c25d1bc0a800090168be1bfcdcd759"}]}

 

There are two things to note in this JSON output:

 

  1. When we get a result set from a SQL query, the columns are ordered. That is, if a SQL statement's SELECT clause lists column_1, column_2, etc., we expect the result set to show those columns in the same order. In the above JSON output, the columns are listed as object properties, which are not ordered by definition in JavaScript.
  2. I like working with JSON primarily for two reasons: a) its format is native to JavaScript so I can readily work with it in JavaScript; b) it's not as verbose as XML, in most cases. What we see above, however, is verbose because the column headings (object property keys) are repeated for every row. As the row count increases, there will be more redundant information.

 

To address both of these points, I'd like to use JavaScript arrays, instead of objects, to represent each row, calling it JSAN (JavaScript Array Notation). This is still based on JSON, but instead of returning object key-value pairs, we'll use an array of values, which may be of any type and are ordered by definition in JavaScript. The column headings will be in their own array, separate from the data records. Even for just 10 records, the savings in bytes is pronounced: the same output below in JSAN runs 1,282 characters vs. 1,909 for JSON:

 

{"labels":["Number","Title","State","Caller Email","Manager Email","CI","cmdb_ci"], "records":[["INC0000050","Can't access Exchange server - is it down?","Active","jerrod.bennett@example.com","","EXCH-SD-05","281190e3c0a8000b003f593aa3f20ca6"],["INC0000031","When can we get off Remedy? UI is killing us","Active","employee@example.com","","",""],["INC0000025","I need more memory","Active","don.goodliffe@example.com","","IBM-T42-DLG","469facd7a9fe1981015830c43428ca2d"],["INC0000018","Sales forecast spreadsheet is READ ONLY","Active","taylor.vreeland@example.com","","",""],["INC0000017","How do I create a sub-folder","Awaiting User Info","employee@example.com","","",""],["INC0000016","Rain is leaking on main DNS Server","Active","bow.ruggeri@example.com","","",""],["INC0000015","I can't launch my VPN client since the last software update","Active","fred.luddy@example.com","","Saints and Sinners Bingo","46c7318aa9fe198100c76003f0bc82e9"],["INC0000007","Need access to sales DB for the West","Awaiting User Info","employee@example.com","","",""],["INC0000003","Wireless access is down in my area","Active","employee@example.com","","",""],["INC0000002","Unable to get to network file shares","Awaiting Problem","","","FileServerFloor2","b0c25d1bc0a800090168be1bfcdcd759"]]} 

 

JSAN is different from CSV (comma-separated values) in that JSAN is still based on JSON, so no special processing (encoding/decoding) is necessary; we can still use the same JSON encode/decode methods, which are now ubiquitous in both client-side (JSON.stringify(), JSON.parse()) and server-side (JSON.encode(), JSON.decode()) JavaScript. JSAN may also contain metadata to provide additional information about the data. For example, we can include not just column headings, but also field names and data types. Since the data elements (rows and columns) are ordered, it becomes possible in some cases to generate a hash, such as MD5, or a CRC32 checksum based on the encoded data; this may be useful if an additional layer of data integrity is required, such as for critical systems in the life sciences, aerospace, military, or nuclear sectors (this may not work if a column contains unordered data like objects, since object keys can be encoded in any arbitrary order).
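
 

Since JSAN is plain JSON, consuming it takes nothing more than the standard JSON methods. Below is a minimal client-side sketch (the 2-row payload and column names are just examples) that decodes a JSAN string, looks up a column by its heading, and re-encodes it:

 

// a hypothetical 2-row JSAN payload received as text
var text = '{"labels":["Number","State"],"records":[["INC0000050","Active"],["INC0000031","Active"]]}';

var jsan = JSON.parse(text);             // decode with the standard JSON method
var col = jsan.labels.indexOf('State');  // find a column position by its heading
jsan.records.forEach(function(row) {     // rows are plain arrays, ordered by definition
  console.log(row[0] + ': ' + row[col]);
});

var encoded = JSON.stringify(jsan);      // re-encode; no special escaping needed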

 

JSAN would look like this for 2 records of 3 columns (although arrays are 0-based, I'm showing the first column as "column_1" below):

 

{"fields":["column_1_field","column_2_field","column_3_field"]
,"labels":["column_1_heading","column_2_heading","column_3_heading"],
"records":[["row_1_column_1","row_1_column_2","row_1_column_3"]
,["row_2_column_1","row_2_column_2","row_2_column_3"]]}

 

To incorporate this, let's modify the GQL class in the Script Include from last time slightly as below (only the changed sections are shown):

 

GQL.prototype = {
  initialize: function(format) {
    this.format = format || 'JSAN';  // { JSAN | JSON }
  },
};

GQL.prototype.query = function(gql, format) {

  format = format || this.format;
  var isJSON = /JSON/i.test(format);

  return { fields: fields, labels: labels, records: records };

  function getRow() {
    var row = isJSON ? {} : [];
    for (var i = 0; i < fields.length; i++) {
      var field = fields[i], isDV = /(^|\.)dv_/.test(field);
      var value = isDV ? gr.getDisplayValue(field.replace('dv_', '')) || '' : gr.getElement(field).toString();
      isJSON ? row[labels[i]] = value : row.push(value);  // default to JSAN
    }
    return row;
  }
};

 

Line 3 allows us to optionally set the default format for the instance at the time of instantiation; if not set, it defaults to JSAN.

 

Line 9 allows us to optionally set the format at the time of calling the query() method; if not set, it defaults to the instance default.

 

Line 12 adds the additional metadata (fields and labels) to the returned data.

 

Line 19 adds to the row either an object key-value pair if JSON or an array element if JSAN.
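
 

Here's a minimal sketch (the GQL statement is just an example) of how the modified class might be used from another server-side script:

 

var gql = new GQL('JSON');  // set the instance default format to JSON

// uses the instance default: each row is an object keyed by column heading
var asObjects = gql.query('SELECT number [Number], dv_state [State] FROM incident LIMIT 3');

// per-call override to JSAN: each row is an array, ordered like the SELECT list
var asArrays = gql.query('SELECT number [Number], dv_state [State] FROM incident LIMIT 3', 'JSAN');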

 

Next time, we'll look at the processor and see how the returned data from the GQL class are consumed; we'll also see how JSAN makes it easy to encode the result sets into HTML and CSV formats.

 

Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

Last time, we derived the JSAN (JavaScript Array Notation) data format from JSON and saw how the GQL class from the Script Include can return result sets in either JSON or JSAN data format. This time, we'll look at the processor and see how it handles the requests coming from the browser and consumes the result sets from the Script Include, encoding them for transmission.

 

We have a choice between two ServiceNow tools when it comes to processing inbound web requests:

 

  1. Scoped GlideScriptedProcessor
  2. Scripted REST APIs

 

While Scripted REST APIs provide a nice UI with nifty features, let's use Scoped GlideScriptedProcessor since our processor will be simple. The processor handles the following tasks:

 

  1. Receive GET requests from the browser or other sources.
  2. Extract parameters from the query string.
  3. Instantiate and execute the GQL class from the Script Include.
  4. Receive the result set from the GQL class in JSAN or JSON data format (the return value is a JavaScript object and it's not yet encoded in JSON).
  5. Based on the requested format, encode (stringify) the result set into JSON or convert to CSV or HTML table.
  6. Transmit the result via response.

 

Please note that JSAN is an object data format and is still encoded as JSON for transmission.

 

Below is the processor script that captures the above:

 

/** 
* GQL (Glide Query Language) processor
*/

(function process(g_request, g_response, g_processor) {

  try {
    var gql = g_request.getParameter('gql');
    var format = g_request.getParameter('format');
    var result = new GQL().query(gql, format);

    g_response.setHeader('cache-control', 'no-cache');  // disable page caching to avoid stale result

    if (/CSV/i.test(format)) {
      var filename = result.query && result.query.table || 'gql';
      g_response.setHeader('Content-Disposition', 'attachment;filename=' + filename + '.csv');
      g_processor.writeOutput('text/csv', getCSV());
    }
    else if (/HTML/i.test(format)) g_processor.writeOutput('text/html', getHTML());
    else g_processor.writeJSON(result);
  } catch (e) { g_processor.writeOutput('text/plain', 'ERROR: ' + e + '\r' + e.stack); }

  function getCSV() {

    var columns = [], rows = [];
    result.labels.forEach(function(label) { columns.push(escapeCSV(label)); });
    rows.push(columns.join(','));

    for (var i = 0; i < result.records.length; i++) {
      columns = [];
      result.records[i].forEach(function(column) { columns.push(escapeCSV(column)); });
      rows.push(columns.join(','));
    }
    return rows.join('\r\n');

   /** 
    * Takes raw field value and returns CSV escaped value 
    * based on RFC 4180 Common Format and MIME Type for CSV Files
    * October 2005 http://tools.ietf.org/html/rfc4180 
    * 
    * @param {string} raw - raw field value 
    * @return {string} escaped CSV field value, if applicable per RFC 4180 
    */  
    function escapeCSV(raw) {
      var out = (raw + '').replace(/"/g,'""');  // escape all double quotes  
      if (/[\r\n",]/.test(out)) out = '"' + out + '"';  // if it has special characters, enclose in double quotes
      return out;
    }  
  }

  function getHTML() {

    var columns = [], rows = ['<table><thead><tr><th>'];
    result.labels.forEach(function(label) { columns.push(escapeHTML(label)); });
    rows.push(columns.join('</th><th>'), '</th></tr></thead><tbody>');

    for (var i = 0; i < result.records.length; i++) {
      columns = [];
      result.records[i].forEach(function(column) { columns.push(escapeHTML(column)); });
      rows.push('<tr><td>', columns.join('</td><td>'), '</td></tr>');
    }
    rows.push('</tbody></table>');
    return rows.join('');

    function escapeHTML(raw) {  // escape reserved HTML characters
      var MAP = { '&':'&amp;','<':'&lt;','>':'&gt;'};
      return (raw + '').replace(/[&<>]/g, function(c) { return MAP[c]; });
    }
  }
})(g_request, g_response, g_processor);

 

Lines 8 and 9 extract the two parameters gql and format from the request object g_request.

 

Line 10 instantiates the GQL class, executes the query() method using the two parameters extracted, and saves the result set.

 

Lines 14 through 20 encode the result set into CSV, HTML, or JSON (for both JSAN and JSON formats) and transmit it back to the requester; notice the content types are set according to the encoding type. The writeJSON() method handles the JSON encoding and also sets the content type.

 

This sums up what the processor does; it doesn't get much simpler. The remaining lines are for the CSV and HTML encoders. These don't really need to be part of the processor, but I wanted to illustrate how easily JSAN data can be turned into other formats. The arrays used in JSAN can be quickly converted to a formatted row using the forEach() array method. Both the CSV and HTML encoders work very similarly, iterating over records and columns as row and column delimiters are inserted. HTML tables can just as easily be generated on the client side from JSAN data using AngularJS, for example; this may be the preferred approach if you want tighter interactions with the tables in the browser.
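
 

For instance, here's a minimal plain-JavaScript sketch (no AngularJS; the query, target element, and endpoint path, which we'll configure below, are assumptions) that fetches JSAN from the processor and renders a table in the browser:

 

var xhr = new XMLHttpRequest();
xhr.open('GET', '/x_64935_gql_proc.do?gql=SELECTnumber[Number],dv_state[State]FROMincidentLIMIT5&format=JSAN');
xhr.onload = function() {
  var jsan = JSON.parse(xhr.responseText);
  var html = '<table><tr><th>' + jsan.labels.join('</th><th>') + '</th></tr>';
  jsan.records.forEach(function(row) {  // each JSAN row is already an ordered array
    html += '<tr><td>' + row.join('</td><td>') + '</td></tr>';  // real code should escape values as in escapeHTML above
  });
  document.getElementById('result').innerHTML = html + '</table>';  // assumes a <div id="result"> exists
};
xhr.send();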

 

Let's now take a look at the Processor configuration page whose screenshot is shown below:

 

 

I'd like to point out a few things:

 

  1. Application is read-only and automatically set to "gql".
  2. Set Type to "script" since we're using JavaScript in the Script section.
  3. Set Path to "proc", short for "processor".
  4. Path Endpoint is read-only and is automatically set to "x_64935_gql_proc". Here, "64935" is the "Vendor prefix", which, in this case, is a numeric id automatically assigned based on the personal developer instance being used. "gql" is the application name and "proc" is the Path.

 

Using the Path Endpoint, a web service call can be made using a URL similar to (replace "instance" with your own instance)

 

https://instance.service-now.com/x_64935_gql_proc.do?gql=SELECTnumber[Number],short_description[Title],dv_state[State],caller_id.email[Caller%20Email],dv_cmdb_ci[CI],cmdb_ciFROMincidentWHEREactive=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumberLIMIT10&format=HTML

 

This returns an HTML table similar to this (from Incident demo data):

 

Number | Title | State | Caller Email | CI | cmdb_ci
INC0000055 | SAP Sales app is not accessible | In Progress | carol.coughlin@example.com | SAP Sales and Distribution | 26e494480a0a0bb400ad175538708ad9
INC0000054 | SAP Materials Management is slow or there is an outage | On Hold | christen.mitchell@example.com | SAP Materials Management | 26e44e8a0a0a0bb40095ff953f9ee520
INC0000053 | The SAP HR application is not accessible | In Progress | margaret.gray@example.com | SAP Human Resources | 26e51a2f0a0a0bb4008628d2254c42db
INC0000052 | SAP Financial Accounting application appears to be down | In Progress | bud.richman@example.com | SAP Financial Accounting | 26e426be0a0a0bb40046890d90059eaa
INC0000051 | Manager can't access SAP Controlling application | In Progress | employee@example.com | SAP Controlling | 26e46e5b0a0a0bb4005d1146846c429c
INC0000050 | Can't access Exchange server - is it down? | In Progress | jerrod.bennett@example.com | EXCH-SD-05 | 281190e3c0a8000b003f593aa3f20ca6
INC0000031 | When can we get off Remedy? UI is killing us | In Progress | employee@example.com |  | 
INC0000025 | I need more memory | In Progress | don.goodliffe@example.com | IBM-T42-DLG | 469facd7a9fe1981015830c43428ca2d
INC0000018 | Sales forecast spreadsheet is READ ONLY | In Progress | taylor.vreeland@example.com |  | 
INC0000017 | How do I create a sub-folder | On Hold | employee@example.com |  | 

 

At the end of the URL is the format parameter, which is set to "HTML" in the above example. This can be changed to "JSAN" or "JSON" to get the JSON-encoded object, which the browser opens as a file you can inspect. When the format is set to "CSV", a CSV-formatted file downloads and opens in Microsoft Excel, if you have it set as the default application for CSV files.

 

I'm sure you can think of some use cases where a simple GET command via URL returns data you can easily consume. How about an email notification template with an embedded table listing stale tickets assigned to the individual recipients? Again, GQL can be used as a web service, as we just saw, or in any server-side scripts, just like what the above processor is doing. Either way, you pass a GQL statement and get back a result set without having to work with GlideRecords directly.
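
 

Here's a minimal sketch (the instance URL, endpoint path, and credentials are placeholders) of pulling the HTML table into a server-side script with sn_ws.RESTMessageV2, which could then be embedded in an email body; within the instance, you could also skip HTTP entirely, call new GQL().query() directly, and format the rows yourself:

 

var rm = new sn_ws.RESTMessageV2();
rm.setHttpMethod('get');
rm.setEndpoint('https://instance.service-now.com/x_64935_gql_proc.do?gql=' +
  encodeURIComponent('SELECT number [Number], dv_state [State] FROM incident WHERE active=true LIMIT 5') +
  '&format=HTML');
rm.setBasicAuth('api.user', 'api.password');  // placeholder credentials

var response = rm.execute();
if (response.getStatusCode() == 200) {
  var htmlTable = response.getBody();  // an HTML <table> ready to drop into an email body
}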

 

With this, we have now completed what we set out to do in Part 2:

 

  1. GQL syntax - define GQL syntax to be used
  2. Processor - handle bidirectional HTTP traffic
  3. Parser - parse GQL query for processing
  4. Retriever - query and retrieve result set
  5. Encoder - encode result set for output

 

addressing all of the requirements from Part 1:

 

  1. Focus on SELECT for now.
  2. Don't need complex Subqueries or CTEs (Common Table Expressions) for now.
  3. Select the raw and/or display value of a field.
  4. Be able to rename column headings.
  5. Support dot walking of reference fields.
  6. Be able to filter the result set.
  7. Be able to sort by one or more columns.

 

I haven't covered the use of the asterisk or calculated columns in the SELECT clause; these will be covered in the future.

 

Next time, we'll look at GQL Pad, an interactive GQL statement editor and executor built using a very simple UI Page with a touch of AngularJS; this will be a small "app" that's built on top of what we have seen so far. I'll also show you how to get the entire working app from GitHub in another blog. We're almost there, so please stay tuned!

 

Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

Last time, we designed the parser that decomposes a GQL statement into the four main parts:

 

SELECT column_list
FROM table
WHERE encoded_query_string
LIMIT [offset,] row_count

 

We saw how the SELECT column_list would be parsed and which Scoped GlideRecord methods we might use for the FROM, WHERE, and LIMIT clauses. This time, let's work on the retriever that runs the query and retrieves the result set.

 

Let's take a look at how we may put this together:

 

// run query & retrieve data
var records = [];  // array of rows
var gr = new GlideRecord(table);
if (encodedQuery) gr.addEncodedQuery(encodedQuery);
if (offset) gr.chooseWindow(offset, offset + limit);
else if (limit) gr.setLimit(limit);
gr.query();
while (gr.next()) records.push(getRow());

 

This gets quite simple with the help of Scoped GlideRecord; it iterates over the matching records and returns an array of row objects. The getRow() function returns the columns, in either raw or display values (display if the field name is prefixed with "dv_"):

 

function getRow() {
  var row = {};
  for (var i = 0; i < columns.length; i++) {
    var field = columns[i].field, isDV = /(^|\.)dv_/.test(field);
    var value = isDV ? gr.getDisplayValue(field.replace('dv_', '')) || '' : gr.getElement(field).toString();
    row[columns[i].label] = value;
  }
  return row;
}

 

The gr.getDisplayValue() and gr.getElement() methods handle dot-walking for us. gr.getElement() returns a GlideElement, so we need to convert it to a string to get the raw value (if not converted, the JSON encoder will treat it as an object, which is not what we want). The columns array is obtained from the column_list; each array element is an object representing a column, with field and label (column heading) properties:

 

// parse column_list
columns = columns.split(/\s*,\s*/);  // split column_list column_1, column_2 [column_2_heading], column_3
for (var i = 0; i < columns.length; i++) {  // parse column heading
  matches = /([.\w]+)\s*(?:\[(.+)\])?/.exec(columns[i]);
  columns[i] = { field: matches[1], label: matches[2] || matches[1] };  // if no label, use field name
}

 

Putting everything together, we have a Script Include with the GQL class:

 

/** 
* GQL (Glide Query Language) class
*/  

var GQL = Class.create();
GQL.prototype = {
  initialize: function() { },
  type: 'GQL'
};

/** 
* Takes a GQL statement and returns the resultset in an array of records
* 
* @param {string} gql - GQL statement 
* @return {object} resultset in an array of records
*/  

GQL.prototype.query = function(gql) {

  // parse gql
  gql = gql.replace(/\s*--.*/g, '');     // remove comments
  var rxParser = /SELECT\s*([^]+?)\s*FROM\s*(.+?)\s*(?:WHERE\s*([^]+?(?=\s*LIMIT)|[^]+?)\s*)?(?:LIMIT\s*([,\d\s]+?)\s*)?$/;
  var matches = rxParser.exec(gql);
  var columns = matches[1] || '';        // SELECT column_list
  var table = matches[2] || '';          // FROM table name
  var encodedQuery = matches[3] || '';   // WHERE encoded_query_string
  var limit = matches[4] || '';          // LIMIT [offset,] row_count
  var offset = 0;
  if (limit) {  // parse offset, row_count
    limit = limit.split(',');
    if (limit.length > 1) offset = parseInt(limit[0], 10) || 0;
    limit = parseInt(limit[limit.length > 1 ? 1 : 0], 10) || 0;
  }

  // parse column_list
  columns = columns.split(/\s*,\s*/);  // split column_list column_1, column_2 [column_2_heading], column_3
  for (var i = 0; i < columns.length; i++) {  // parse column heading
    matches = /([.\w]+)\s*(?:\[(.+)\])?/.exec(columns[i]);
    columns[i] = { field: matches[1], label: matches[2] || matches[1] };  // if no label, use field name
  }

  // run query & retrieve data
  var records = [];  // array of rows
  var gr = new GlideRecord(table);
  if (encodedQuery) gr.addEncodedQuery(encodedQuery);
  if (offset) gr.chooseWindow(offset, offset + limit);
  else if (limit) gr.setLimit(limit);
  gr.query();
  while (gr.next()) records.push(getRow());

  return { records: records };

  function getRow() {
    var row = {};
    for (var i = 0; i < columns.length; i++) {
      var field = columns[i].field, isDV = /(^|\.)dv_/.test(field);
      var value = isDV ? gr.getDisplayValue(field.replace('dv_', '')) || '' : gr.getElement(field).toString();
      row[columns[i].label] = value;
    }
    return row;
  }
};

 

This doesn't yet have any GQL syntax checking or run-time error handling, but will give us a good starting point. By having this in a Script Include, we can use it not only for web services, but also in any server-side scripts.
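
 

As a quick sanity check, here's a minimal sketch you might run in Scripts - Background (assuming the Script Include is reachable from that scope; the output will vary with your data):

 

var result = new GQL().query(
  'SELECT number [Number], dv_state [State] FROM incident WHERE active=true LIMIT 3'
);
gs.info(JSON.stringify(result.records));
// e.g. [{"Number":"INC0000050","State":"In Progress"}, ...]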

 

Next time, we'll take a look at the encoder.

 

Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

Last time, we looked at the GQL syntax and how to handle raw and display values of columns. This time, we'll look at the parser, which is closely tied to the GQL syntax, and see how GQL statements can be parsed into various parts for execution. We're not skipping the processor; we'll come back to it later.

 

PARSER

 

Let's go step by step and see how we can parse a GQL statement. The example below, which we saw last time, may help you follow the steps:

 

SELECT  -- sample GQL statement
  number [Number]
  , short_description [Title]
  , dv_state [State]
  , caller_id.email [Caller Email]
  , caller_id.manager.email [Manager Email]
  , dv_cmdb_ci
FROM incident
WHERE active=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumber
LIMIT 5,10

 

A. Remove comments

 

A comment starts with two dashes "--" and runs to the end of the line; all comments, along with any white space immediately preceding them, will be removed before further processing using

 

var no_comments = query.replace(/\s*--.*/g,'');

 

This is not foolproof if double dashes appear elsewhere and are not meant to start a comment (e.g., in column headings), but let's keep it simple for now.

 

B. Breakdown into parts

 

A GQL statement has the following four main parts in the following order we defined last time:

 

  1. SELECT - mandatory; always starts with this.
  2. FROM - mandatory
  3. WHERE - optional (also contains ORDERBY from encoded query strings)
  4. LIMIT - optional

 

Let's try to pick out these four parts using a regular expression. Regular expressions may not be the best tool for parsing, but they should work adequately for our simple grammar.

 

// parse "SELECT select_list FROM table WHERE encoded_query_string LIMIT [offset,] row_count" where WHERE and LIMIT are optional
var regexp = /SELECT\s*([^]+?)\s*FROM\s*(.+?)\s*(?:WHERE\s*([^]+?(?=\s*LIMIT)|[^]+?)\s*)?(?:LIMIT\s*([,\d\s]+?)\s*)?$/

 

Here's what this does in more detail:

 

var regexp = /
SELECT\s*([^]+?)\s*  // look for SELECT and capture what follows ([^]+?) including white spaces [^], but ignore zero or more surrounding white spaces \s*
FROM\s*(.+?)\s*      // look for FROM and capture what follows except white spaces (table names don't have spaces)
(?:WHERE\s*([^]+?(?=\s*LIMIT)|[^]+?)\s*)?  // look for optional WHERE clause; if found, capture what follows until LIMIT is encountered or to end of line
(?:LIMIT\s*([,\d\s]+?)\s*)?  // look for optional LIMIT clause and capture numbers separated by a comma
$/  // continue to the end of the last line

 

Let me clarify a few things since regular expressions, while compact and powerful, are not always self-explanatory.

 

In Line 2, "[^]" matches ALL characters, white spaces included, so the SELECT list can be written on a single line or across multiple lines. This could also be written as "[\s\S]", but "[^]" is a shorthand (technically, a leading caret inside square brackets means "exclude the following characters"; since no characters follow it here, nothing is excluded, so it matches all characters). This is why we use the lazy quantifier "?" in "([^]+?)", telling it to "match as few times as possible" so it doesn't capture the trailing white spaces; without the "?", the greedy "[^]+" would consume as much as possible, swallowing everything up to the last "FROM" in the statement.

 

Line 4 is a non-capture group, indicated by "(?:", meaning "match what follows but don't capture." However, what's in the inner parentheses "([^]+?(?=\s*LIMIT)|[^]+?)" is still captured, and it captures the encoded query string that follows WHERE. The inner parentheses could have been simply "([^]+?)" if we mandated that encoded query strings be on a single line. Since we're not mandating that, we use the positive lookahead "(?=\s*LIMIT)" to say "stop when you see LIMIT". The second alternative, "|[^]+?", means "go all the way to the end if you don't see LIMIT". Line 4 ends with "?" to match 0 or 1 time, since the WHERE clause is optional.

 

Line 6 ends with an anchor, the end of last line "$", to tell lazy quantifiers not to be too lazy and go all the way to the end.

 

This works for both single and multiple lines, with or without white spaces. We'll add only a simple syntax checker later. A good way to test whether a regular expression works as intended is to use an online tool like the hifi RegExp Tool, which shows the capture groups in different colors; you can also view a saved copy of this regex at regex101.com.

 

When this regular expression is executed, it returns four capture groups corresponding to the four main parts of a GQL statement.
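
 

To see those four capture groups in action, here's a minimal sketch using a one-line version of the sample statement:

 

var gql = 'SELECT number [Number], dv_state [State] FROM incident WHERE active=true^ORDERBYpriority LIMIT 5,10';
var rxParser = /SELECT\s*([^]+?)\s*FROM\s*(.+?)\s*(?:WHERE\s*([^]+?(?=\s*LIMIT)|[^]+?)\s*)?(?:LIMIT\s*([,\d\s]+?)\s*)?$/;
var matches = rxParser.exec(gql);
// matches[1]: "number [Number], dv_state [State]"  (SELECT column_list)
// matches[2]: "incident"                           (FROM table)
// matches[3]: "active=true^ORDERBYpriority"        (WHERE encoded_query_string)
// matches[4]: "5,10"                               (LIMIT [offset,] row_count)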

 

C. SELECT

 

The SELECT part consists of one or more column names separated by commas, each with an optional column heading enclosed in square brackets. We'll use

 

// split column list "column_1, column_2 [column_2_heading], column_3"
var columns = select.split(/\s*,\s*/);

 

to get an array of columns. For each column, we'll check for a column heading and, if one exists, parse it out using:

 

// parse out "column_name [column_heading]" where "[column_heading]" is optional
var regexp = /([.\w]+)\s*(?:\[(.+)\])?/

 

D. FROM

 

The FROM part simply contains the table name, which we can pass on to either of

 

var gr = new GlideRecord(table);
var gr = new GlideRecordSecure(table);  // enforces ACLs

 

from Scoped GlideRecord.

 

E. WHERE

 

The optional WHERE part contains an encoded query string with filter and/or order conditions. We can pass this straight to

 

gr.addEncodedQuery(encodedQuery);

 

from Scoped GlideRecord.

 

F. LIMIT

 

The optional LIMIT part gives the offset, which is optional, and row_count. After parsing the integers using parseInt(), we can pass these on to either of

 

gr.chooseWindow(offset, offset + row_count);
gr.setLimit(row_count);

 

from Scoped GlideRecord.

 

Next time, we'll continue with the rest of the design.

 

Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

Last time, we discussed the concept behind GQL as a web service with SQL-like syntax and listed use cases along with high-level requirements. We want to use it much as we use JSONv2: send a command via GET and retrieve a result set in JSON. This time, let's look at some design considerations.

 

DESIGN COMPONENTS

 

Here are some components we need to consider:

 

  1. GQL syntax - define GQL syntax to be used
  2. Processor - handle bidirectional HTTP traffic
  3. Parser - parse GQL query for processing
  4. Retriever - query and retrieve result set
  5. Encoder - encode result set for output

 

Let's also add that this will be a scoped application. The design will evolve over time as we start simple and refactor as needed.

 

GQL SYNTAX

 

Under the hood, we know that all database transactions in ServiceNow are being done in SQL. However, we don't have direct access to it. Even if we did, it may not prove to be useful in all cases. For example, selecting a display value may require joins or subqueries and the SQL statement may quickly become unwieldy. Instead, we'll use GlideRecord or, more specifically, Scoped GlideRecord. This means GQL will be constrained by what we can do with Scoped GlideRecord.

 

In Part 1, our requirements called for

 

  1. Focus on SELECT for now.
  2. Don't need complex Subqueries or CTEs (Common Table Expressions) for now.
  3. Select the raw and/or display value of a field.
  4. Be able to rename column headings.
  5. Support dot walking of reference fields.
  6. Be able to filter the result set.
  7. Be able to sort by one or more columns.

 

In its simplest form, we have

 

SELECT *
FROM table

 

This returns all columns from table; we'll define what "all" means later when we discuss raw values vs. display values. Since we'd also like to embed this in a query string and use GET, we may rewrite this as

 

SELECT * FROM table

 

Let's further stipulate that white spaces are optional and all keywords must be in caps. This also helps with readability since ServiceNow table and column names are in lowercase. So the above can be rewritten as

 

SELECT*FROMtable

 

In this article, for readability, we'll continue using white spaces, though. Now, let's add the optional WHERE clause. Instead of reinventing it, we can just use Encoded Query Strings, which I've always found to be compact and powerful:

 

SELECT *
FROM table
WHERE encoded_query_string

 

Encoded Query Strings may also contain one or more ORDERBY clauses for descending or ascending order, so we can sort the result set using them. We'll look at Encoded Query Strings and the syntax in more detail later.

 

Let's also add optional LIMIT, which we'll borrow from MySQL:

 

SELECT *
FROM table
WHERE encoded_query_string
LIMIT [offset,] row_count

 

Now, let's turn to the field list instead of the asterisk. One or more columns may be listed, using dot-walking if needed. Dot-walking allows access to related tables through reference fields (it can be nested multiple levels, but the recommended limit is three). We'll also optionally allow renaming column headings by enclosing a new name in square brackets (this is not the same as an alias in SQL, since the renamed column heading doesn't act as a new column name that can be used in queries):

 

SELECT column_1, column_2 [column_2_heading], column_3
FROM table
WHERE encoded_query_string
LIMIT [offset,] row_count

 

Lastly, let's allow for comments, using double dashes, as in SQL, in multi-line mode:

 

SELECT column_1, column_2 [column_2_heading], column_3  -- this is a comment
FROM table
WHERE encoded_query_string
LIMIT [offset,] row_count

 

Summing it all up, here's an example:

 

SELECT  -- sample GQL statement
  number [Number]
  , short_description [Title]
  , state [State]
  , caller_id.email [Caller Email]
  , caller_id.manager.email [Manager Email]
  , cmdb_ci
FROM incident
WHERE active=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumber
LIMIT 5,10

 

Without white spaces, which may be useful for including in a GET query string, it is

 

SELECTnumber[Number],short_description[Title],state[State],caller_id.email[Caller Email],caller_id.manager.email[Manager Email],cmdb_ciFROMincidentWHEREactive=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumberLIMIT5,10

 

Let's work with what we have so far for now and discuss other features later. If you'd like anything specific, please leave a comment below.

 

RAW VALUE VS. DISPLAY VALUE

 

Some fields have both raw and display values. The State field for Incident, for example, has numeric raw values while the display value might say "Resolved". Other field types, such as date, duration, etc., also have both raw and display values. Also, a reference field's raw value is the sys_id from the related table while the display value is from the display value field of the related table.
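
 

In GlideRecord terms, the distinction looks like this (a minimal sketch against an Incident demo record; the exact values depend on your instance):

 

var gr = new GlideRecord('incident');
gr.get('number', 'INC0000050');            // look up one demo record
gs.info(gr.getValue('state'));             // raw value, e.g. "2"
gs.info(gr.getDisplayValue('state'));      // display value, e.g. "In Progress"
gs.info(gr.getValue('caller_id'));         // reference raw value: a sys_id
gs.info(gr.getDisplayValue('caller_id'));  // display value field of the referenced record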

 

In our SELECT statement, we'd like to specify raw and/or display value of each field. JSONv2, by default, returns raw values unless the "displayvalue" option is set to "true" (display, not raw, values are returned) or "all" (both raw and display values are returned and the display column names are prefixed with "dv_").

 

In GQL, let's return raw values unless column names start with "dv_", keeping consistent with JSONv2 (the same convention also applies to ODBC SQL). Since what we want in most cases are display values, this will add a bit of extra typing, forcing us to rewrite the above GQL as (see "dv_" for state and cmdb_ci)

 

SELECT  -- sample GQL statement
  number [Number]
  , short_description [Title]
  , dv_state [State]
  , caller_id.email [Caller Email]
  , caller_id.manager.email [Manager Email]
  , dv_cmdb_ci
FROM incident
WHERE active=true^priority<2^EQ^ORDERBYpriority^ORDERBYDESCnumber
LIMIT 5,10

 

For

 

SELECT *
FROM table

 

we'll return both raw and display values with the column names of the display values starting with "dv_", similar to JSONv2 with "displayvalue" set to "all".

 

Next time, we'll continue with the rest of the design. Please note that the design may change over time, especially with your input/feedback.

 

Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

No, GQL isn't Google Query Language; it's Glide Query Language, and it's exclusively for ServiceNow.

 

Having been a long-time SQL user who appreciates the simple yet powerful construct of the language, I occasionally miss it when I'm working with web services or data export in ServiceNow. Don't get me wrong; I like using the JSONv2 Web Service. But how can I select only the columns I want? Or rename column headings? Can I show only the display values? Can I have calculated columns? Hmm, none of these is trivial with web services, while all of them are trivial in SQL. So I dream. But I don't dream too big, in that I really only want to SELECT data (read only) for now, not UPDATE, DELETE, etc. Yes, there are ways to work with SQL today; although the infamous gs.sql() has been retired (I liked using it occasionally), ODBC still works. But it's not a web service (technically it's SOAP under the covers, but we still have to install the ODBC driver, configure it, etc.). And, yes, there are other ways to partially accomplish this, such as Export Sets and Database/Form Views, but you have to configure them on the server, typically as an admin. Ideally, I'd like the convenience of both web services and SQL so data can be readily consumed with ease. So begins my quest.

 

I'd like to send a SQL-like query, similar to the example below, to ServiceNow using web service calls and receive the result set:

 

SELECT
  number [Number]
  , short_description [Title]
  , state [State]
  , caller_id.email [Caller Email]
  , caller_id.manager.name [Manager]
FROM incident
WHERE active=true
ORDERBY priority

 

Some use cases might include:

 

  1. Retrieve Incident (or any table/view) data with specific columns in the order I want.
  2. Build a dynamic report.
  3. Create a CSV file with custom column headings.
  4. Add a calculated column; for example, convert one currency to another.
  5. Apply a custom format to a column.

 

Here are some high-level requirements:

 

  1. Preserve the JSONv2 API as much as possible.
  2. Utilize the GlideRecord syntax as much as possible.
  3. Mimic the SQL syntax as much as possible.
  4. Send SQL scripts using GET.
  5. Focus on SELECT for now.
  6. Don't need complex Subqueries or CTEs (Common Table Expressions) for now.
  7. Select the raw and/or display value of a field.
  8. Be able to rename column headings.
  9. Support dot walking of reference fields.
  10. Be able to filter the result set.
  11. Be able to sort by one or more columns.

 

The above lists may not be exhaustive and I'll update them over time. If you have more ideas, please add comments below. If you like this concept and you might be able to use it, please hit Like. This will be a highly collaborative effort, so your support and feedback would be absolutely essential to get this moving forward.

 

Next time, we'll look at some design considerations.

 

Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid

I have come across several questions posted on the Community related to the CSV (comma-separated values) format and felt it might be helpful to review the specification and some gotchas. You might ask what's so complicated about the CSV format; after all, it's simply field values separated by commas. While that's true, there are a few more details that might come in handy if you ever have to troubleshoot CSV import/export issues.

 

DEFINITION OF THE CSV FORMAT

 

The CSV format is quite simple and defined in IETF RFC 4180 Section 2. While I encourage you to read the original text, here's a condensed version:

 

  1. Each line is terminated by CRLF.
  2. The last line doesn't need to be terminated by CRLF.
  3. An optional header line is allowed in the first line, useful for adding field names.
  4. Each line may contain one or more fields, separated by commas. All lines must contain the same number of fields.
  5. Each field may be enclosed in double quotes (optional, but see below).
  6. Fields containing line breaks (CRLF), double quotes, and commas must be enclosed in double quotes.
  7. If a field is enclosed in double quotes, all double quotes inside the field must be escaped by another double quote (replace all " with "").

 

Items 6 and 7 deserve special attention, as they tend to cause issues in malformed CSV data.

 

GOTCHAS TO WATCH OUT FOR

 

Here are some more comments and gotchas to watch out for, in addition to items 6 and 7 above:

 

  1. Depending on the locale, commas may serve as decimal separators or thousands separators in the display values of numeric/currency fields. Be sure to account for those cases. This also applies to date fields.
  2. If you're seeing more fields than expected, look for commas in unenclosed field values.
  3. If field values are split across multiple lines, look for unenclosed CRLF.
  4. While RFC 4180 only specifies commas, some CSV formats (including some cases in ServiceNow) allow other delimiters.
  5. RFC 4180 Section 2 includes this caveat: "While there are various specifications and implementations for the CSV format, there is no formal specification in existence, which allows for a wide variety of interpretations of CSV files.  This section documents the format that seems to be followed by most implementations."

 

escapeCSV FUNCTION

 

Here's a simple JavaScript function based on RFC 4180 that can be used in processors or script includes for generating CSV data:

 

/**
 * Takes raw field value and returns CSV escaped value
 * based on RFC 4180 Common Format and MIME Type for CSV Files October 2005 http://tools.ietf.org/html/rfc4180
 *
 * @param {string} raw - raw field value
 * @return {string} escaped CSV field value, if applicable per RFC 4180
 */
function escapeCSV(raw) {
  var value = (raw + '').replace(/"/g,'""');  // escape all double quotes
  if (/[\r\n",]/.test(value)) value = '"' + value + '"';  // if value has special characters, enclose in double quotes
  return value;
}
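
 

For example, a value containing quotes and a comma gets its quotes doubled and is then enclosed, while a plain value passes through unchanged:

 

escapeCSV('He said "OK", then left');  // returns: "He said ""OK"", then left"
escapeCSV('plain value');              // returns: plain value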

 

Please feel free to connect, follow, post feedback / questions / comments, share, like, bookmark, endorse.

John Chun, PhD PMP see John's LinkedIn profile

visit snowaid
