Three posts authored by Ankit Khetarpal

The Import Set Deleter job cleans records in the import tables. It is a default job that deletes data from sys_import_set_row and its child tables, as these tables tend to grow large very quickly. In rare cases, orphan records are left behind in the sys_import_set_row table and are not cleaned up by future runs of the job. Fixes in Fuji addressed out-of-memory exceptions arising from MultipleDelete, which the Import Set Deleter job uses; however, those fixes did not completely eliminate memory concerns for this particular job.

 

You could be affected by this issue if you are on Geneva up to Geneva Patch 9, or on Helsinki up to Helsinki Patch 5. The severity may depend on the number of child tables the import set row table has on an instance.

 

Here are a few symptoms that can help determine whether you may be affected by the issue:

  1. Long-running Import Set Deleter job
  2. Java in-use heap memory climbs
  3. Localhost log statements

 

All three symptoms need to be present for this issue to apply.

 

 

Long-running Import Set Deleter job

On the stats page (/stats.do), check whether the Import Set Deleter is the longest-running job on that particular node.

 

Snippet from stats.do

 

glide.scheduler.worker.5

Current job: Import Set Deleter

Job started: Tue Sep 06 18:00:02 PDT 2016

Job duration: 1:11:24.846

Total jobs: 7612

Mean duration: 0:00:01.276

 

The same can be checked via the sys_trigger table to understand which node this job has been running on.

url: /sys_trigger_list.do?
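For example, you can filter the list with an encoded query; the condition below is illustrative:

/sys_trigger_list.do?sysparm_query=name%3DImport%20Set%20Deleter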

 

 

Java heap in-use memory is high / free memory is low

Check the memory usage pattern on the affected node. If you notice that this is the only long-running transaction and the in-use memory is trending upward, make a note of it and save the information.

 

Snippet from stats.do

 

Servlet Memory

Max memory: 2022.0

Allocated: 2022.0

In use: 1926.0

Free percentage: 5.0

 

Localhost log statements

Download the node log files and check them for statements such as the ones below:

 

Snippet from the localhost log files

 

2016-09-29 00:00:01 (609) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning import set ISET6612326

2016-09-29 00:00:01 (633) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning table ldap_import query:sys_import_set=cf96b7b0db46aa00e93ff2e9af9619e6

2016-09-29 00:00:01 (637) worker.5 worker.5 *** Script: Import Set Cleaner:: .. 0 rows removed from ldap_import

2016-09-29 00:00:01 (639) worker.5 worker.5 *** Script: Import Set Cleaner:: Removing data from import set table ldap_import where import set=ISET6612283

2016-09-29 00:00:01 (641) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning table ldap_import query:sys_import_set=55e8f7b4db8a2e4090b0ff1aaf96195c

2016-09-29 00:00:01 (645) worker.5 worker.5 *** Script: Import Set Cleaner:: .. 0 rows removed from ldap_import

2016-09-29 00:00:01 (647) worker.5 worker.5 *** Script: Import Set Cleaner:: Removing data from import set table ldap_import where import set=ISET6612284

2016-09-29 00:00:01 (649) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning table ldap_import query:sys_import_set=f32b3738db8a2e4090b0ff1aaf961938

2016-09-29 00:00:01 (652) worker.5 worker.5 *** Script: Import Set Cleaner:: .. 0 rows removed from ldap_import

2016-09-29 00:00:01 (654) worker.5 worker.5 *** Script: Import Set Cleaner:: Removing data from import set table ldap_import where import set=ISET6612285

2016-09-29 00:00:01 (656) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning table ldap_import query:sys_import_set=5a7d33b8db8a2e4090b0ff1aaf9619b0

2016-09-29 00:00:01 (659) worker.5 worker.5 *** Script: Import Set Cleaner:: .. 0 rows removed from ldap_import

2016-09-29 00:00:01 (662) worker.5 worker.5 *** Script: Import Set Cleaner:: Removing data from import set table ldap_import where import set=ISET6612286

2016-09-29 00:00:01 (664) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning table ldap_import query:sys_import_set=81cf3f34db46aa00e93ff2e9af961975

2016-09-29 00:00:01 (667) worker.5 worker.5 *** Script: Import Set Cleaner:: .. 0 rows removed from ldap_import

2016-09-29 00:00:01 (669) worker.5 worker.5 *** Script: Import Set Cleaner:: Removing data from import set table ldap_import where import set=ISET6612287

2016-09-29 00:00:01 (671) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning table ldap_import query:sys_import_set=5f02484ddb8a2e4090b0ff1aaf96193d

2016-09-29 00:00:01 (674) worker.5 worker.5 *** Script: Import Set Cleaner:: .. 0 rows removed from ldap_import

2016-09-29 00:00:01 (676) worker.5 worker.5 *** Script: Import Set Cleaner:: Removing data from import set table ldap_import where import set=ISET6612288

2016-09-29 00:00:01 (678) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning table ldap_import query:sys_import_set=7d5484cddb8a2e4090b0ff1aaf961900

2016-09-29 00:00:01 (681) worker.5 worker.5 *** Script: Import Set Cleaner:: .. 0 rows removed from ldap_import

2016-09-29 00:00:01 (684) worker.5 worker.5 *** Script: Import Set Cleaner:: Removing data from import set table ldap_import where import set=ISET6612289

2016-09-29 00:00:01 (686) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning table ldap_import query:sys_import_set=6ca60885db46aa00e93ff2e9af96194b

2016-09-29 00:00:01 (689) worker.5 worker.5 *** Script: Import Set Cleaner:: .. 0 rows removed from ldap_import

2016-09-29 00:00:01 (692) worker.5 worker.5 *** Script: Import Set Cleaner:: Removing data from import set table ldap_import where import set=ISET6612290

2016-09-29 00:00:01 (693) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning table ldap_import query:sys_import_set=fee88c81dbca2e4090b0ff1aaf9619a6

2016-09-29 00:00:01 (696) worker.5 worker.5 *** Script: Import Set Cleaner:: .. 0 rows removed from ldap_import

2016-09-29 00:00:01 (699) worker.5 worker.5 *** Script: Import Set Cleaner:: Removing data from import set table ldap_import where import set=ISET6612291

2016-09-29 00:00:01 (701) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning table ldap_import query:sys_import_set=d53bc8c5db46aa00e93ff2e9af96193e

2016-09-29 00:00:01 (705) worker.5 worker.5 *** Script: Import Set Cleaner:: .. 0 rows removed from ldap_import

2016-09-29 00:00:01 (707) worker.5 worker.5 *** Script: Import Set Cleaner:: Removing data from import set table ldap_import where import set=ISET6612292

2016-09-29 00:00:01 (709) worker.5 worker.5 *** Script: Import Set Cleaner:: Cleaning table ldap_import query:sys_import_set=088d0409db46aa00e93ff2e9af9619e9

..

..

** Script: Import Set Cleaner:: Deleting orphaned import set row records if any by querying for import set sys_id being empty for table: ldap_import

 

Workaround for a long-running Import Set Deleter job causing performance issues on the affected node:

  1. Navigate to Script Includes > ImportSetCleaner.

  2. Change the line:
    this.cleanedImportSetRowTables.push(gr.table_name);
    to
    //this.cleanedImportSetRowTables.push(gr.table_name);

Once you save the change, the line is commented out, and the job no longer queues each cleaned table for the orphaned-record cleanup pass (the step shown at the end of the log snippet above). Treat this as a temporary workaround until you can upgrade to a release containing the fix.
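Note that while the workaround is in place, orphaned import set rows will keep accumulating. A hypothetical background-script sketch for removing them manually during a quiet window (the table name is illustrative; test in a sub-production instance first):

// Delete rows whose parent import set reference is empty (orphans),
// iterating record by record so memory usage stays flat.
var gr = new GlideRecord('ldap_import'); // illustrative import set row table
gr.addNullQuery('sys_import_set');       // orphans have no parent import set
gr.setLimit(10000);                      // work in bounded batches
gr.query();
while (gr.next()) {
    gr.deleteRecord();
}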

 

Releases this issue is currently fixed in:

  • Geneva Patch 10
  • Helsinki Patch 6
  • Istanbul

 

We recommend upgrading to one of the releases listed in the ServiceNow KB article "Import Set Deleter job causing instances to run low or out of memory, causing performance issues", where the fix is found. Upgrading to one of these releases prevents the memory growth and performance degradation caused by this particular issue.

ServiceNow has a plethora of debugging tools available on the instance that help ServiceNow admins and technical support engineers troubleshoot performance issues and narrow down the cause. Among them, the SQL Debugger is one of the most powerful: it shows every query involved in a particular transaction, and it works for both MySQL-based and Oracle-based instances. It empowers you to pinpoint the slowness down to specific queries and, using the explain plan tool, create the appropriate indexes to improve the performance of the application.

 

There are two types of SQL debugging tools:

  1. Debug SQL
  2. Debug SQL (Detailed)

 

Debug SQL vs. Debug SQL (Detailed)

The Debug SQL option lists all the queries involved but does not show the stack trace each query originates from. Debug SQL (Detailed) additionally gives you the Java stack trace, expandable via a "+" symbol. That is essentially the only difference between the two.

 

Screenshot: Debug SQL (Detailed) output

 

The SQL debugger stays active only for the duration of your session; logging out turns it off automatically. You do not have to log out or wait for the session to time out, though: once you are done, you can turn it off manually by navigating to System Security and clicking Stop Debugging. If one of your users is experiencing unusual slowness, an admin can turn on the SQL Debugger and then impersonate the user to investigate the queries involved in that user's transaction.


 

Impersonating a user to investigate queries for a particular user transaction

We impersonate a user experiencing performance issues to find the specific form, incident, or transaction that could be causing slowness. In the Application Navigator filter, type in Debug SQL and click either of the options to turn the SQL debugger on. You can then impersonate any user and navigate to a form or list to see the list of queries being executed.

 

 

For example:

On trying to open the incident list, it took more than 180 seconds to render. This is reproducible every time, and no other list is affected. To determine whether the slowness lies in the database, we need to find the slow database query (or queries) and the source it originates from.


There were approximately 3,800 insert queries loading with the list. Upon clicking Debug SQL (Detailed) to find the source, we see it is a business rule. The queries were running against the incident table: the rule called gs.log() for every record being pulled, generating one insert into the log table per row and causing the slowness.
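As a hypothetical sketch of that anti-pattern (the table and message are illustrative), a script like this turns a single list load into thousands of log-table inserts:

// Anti-pattern: logging once per record. Every gs.log() call is an INSERT
// into the syslog table, so iterating a large result set multiplies the
// database writes and slows down the whole transaction.
var gr = new GlideRecord('incident');
gr.query();
while (gr.next()) {
    gs.log('Processing incident ' + gr.number); // one INSERT per row -- avoid
}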

 

If a long-running database query is responsible for the slowness, you can use the explain plan to determine whether there is a way to optimize it with an index. Where a script (a script include or business rule) is the underlying source of the degradation, the offending queries can be identified and an appropriate plan of action taken to rectify them and improve the performance of the application. The SQL debugger is a must-use tool for troubleshooting the performance of many ServiceNow applications. Certain applications, such as Visual Task Boards (VTB), do not use the SQL debugger; ServiceNow support can help you out with those.

 

 

Thank you john.gonzalez  for the idea to write this post!

Performance degradation of a transaction can reside in the network, the browser, or the server. When the slowness is on the database server, it is usually due to non-optimal queries that take up most of the transaction time. A quick way to optimize these queries is to create indexes for them; adding the appropriate indexes to slow queries improves query response time and can drastically improve overall transaction times on the instance.

 

How to add database indexes in ServiceNow

The process of creating a database index post-Fuji is easy. In terms of performance, though, one of the most important questions for ServiceNow admins is which index should be applied to help improve transaction response times. Let's take a step back and ask ourselves: what is a database index?

 

"A Database index is a data structure that improves the speed of data retrieval operations on a database table at the cost of additional writes and storage space to maintain the index data structure."

 

Slow queries are what we are targeting; those are the queries to which you would apply the indexes.

 

DO’s

  1. Always test your indexes in a sub-production instance first and confirm they improve performance before applying them in production.
  2. Always consider using "active" in your filter condition as part of your business logic. This lets the query process a smaller result set and improves query response time.
  3. Use "is" and "starts with" in your filters/searches wherever possible, combined with "AND" in the query.
  4. Date fields such as "sys_created_on" or "sys_updated_on" should always be considered in filters/searches; bounding the result set by date enhances query performance.
  5. Adding a composite/compound index on a single large flattened table covering the "active" and "date_time" fields is bound to give you improved query response time if those are the two fields used in the query (see the sketch after this list).
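As an illustrative sketch of these guidelines (the table and time window are arbitrary), a server-side query shaped like this can be served well by a composite index on (active, sys_created_on):

// Equality on 'active' first, then a date range: the optimizer can walk
// a narrow slice of a composite index on (active, sys_created_on).
var gr = new GlideRecord('incident');
gr.addQuery('active', true);                        // "is" condition, ANDed
gr.addQuery('sys_created_on', '>', gs.daysAgo(30)); // bound the result set by date
gr.query();
gs.log(gr.getRowCount() + ' active incidents created in the last 30 days');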

 

DO NOT's

  1. Indexes occupy space on disk and in memory buffers, and an InnoDB MySQL table can have at most 64 indexes. Hence, do not index every field, as that defeats the purpose.
  2. It is not recommended to index data types such as TEXT, MEDIUMTEXT, or LONGTEXT if your search string is going to be long. These fields can hold multibyte characters and back columns such as short description, comments, and work notes. An index on these fields may not be useful because only the first few characters are indexed. But if your search string is short, for example searching for incidents containing "memory low", the index could be beneficial.
  3. Avoid using "contains" and "OR" in your filters/searches wherever possible, as the MySQL optimizer may end up doing a full table scan (see the sketch after this list).
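To make the "contains" point concrete, here is a hypothetical sketch (table and search term are illustrative): CONTAINS compiles to LIKE '%term%', which cannot use a B-tree index, whereas STARTSWITH compiles to LIKE 'term%', which can.

// Likely full table scan: the leading wildcard defeats any index.
var slow = new GlideRecord('incident');
slow.addQuery('short_description', 'CONTAINS', 'memory low');
slow.query();

// Index-friendly: matches a leading prefix of the indexed column.
var fast = new GlideRecord('incident');
fast.addQuery('short_description', 'STARTSWITH', 'memory low');
fast.query();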

 

Just a side note: MySQL is the relational database management system, while the storage engine is the underlying layer that provides the commit, rollback, and crash-recovery capabilities that protect user data. ServiceNow MySQL customer databases run on the InnoDB storage engine, which is ACID compliant. Most MySQL indexes are stored in B-trees as the underlying data structure.

 

How to verify whether a slow query is responsible for the slowness:

  1. In the Filter navigator, type "SQL."
  2. Click on "Debug SQL (Detailed)."

 

Navigate to the form/list/report that is slow and scroll down until you find the query that is taking a long time.


Here, the response time is approximately 68 seconds, of which roughly 67 seconds were spent executing the slow query. If you hover the mouse over the periods next to "SELECT", it should show you the columns being selected. If this query has made it to the slow query log, you should be able to see the explain plan and determine what index it is using, if it is using one at all.

 

How to view the Explain Plan to determine the slow query

An explain plan is a representation of how a query executes. The MySQL optimizer considers different execution plans before settling on the optimal one; the explain plan lets us read exactly which indexes/keys MySQL considered and which one it finally went with. For further details, use a slow query log for reference.

 

    1. Navigate to Slow Queries.

    2. Use the filter to retrieve your query (e.g., Created on and the Example URL).

    3. Click on the record, then click Explain Plan in the right-hand corner.

    4. Scroll down and click on the related list to see the explain plan.

Notice the possible keys (indexes) available to the MySQL optimizer; the one being used is task_index1 (active, sys_class_name, number) on the task table, iterating over 60K rows. How do I know what columns are in the index key? You can find the indexes for a given table by navigating to the Tables and Columns module and selecting the table.

 

For example, here we want to figure out the columns included in the task table's task_index1 key.


 

This query can be optimized by using a better index which iterates through a smaller set of rows and retrieves the result faster.

 

 

Note: If the database query involves reference fields (e.g., a_ref_1) that are not visible in the Tables and Columns module, contact support for assistance, as this cannot be done via the index creation module.

 

Now comes the crucial part. Depending on the distribution of the data and the query plan above, there are a few options to consider:

 

Option 1. Follow the order of the fields in the WHERE clause and create an index key (sys_class_name, active, sys_created_by). If the index includes every column the query reads, it can also act as a covering index.

Option 2. Create an index based on the distribution of the data to narrow the subset: index key (active, sys_class_name, sys_created_by).

Option 3. A higher-cardinality column, especially for range comparisons (e.g., sys_created_on with ">" or "<"), is often a preferred leading choice: it means less efficient storage but faster read performance, because the optimizer navigates fewer branches of the B-tree and narrows the result set sooner. Hence an index key (sys_created_by, sys_class_name, active) can be considered.

 

Note: A rule of thumb, as far as ServiceNow is concerned, is to use "active" as the first column in your query and as the first column in index creation. This helps with reuse of the index: if there is a query that involves just "active" in the WHERE clause, or "active" and "sys_class_name", this index key would still be beneficial.
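This is MySQL's leftmost-prefix rule: a composite B-tree index can serve any query that filters on a leading prefix of its columns. A sketch (the table is illustrative):

// With an index on (active, sys_class_name, sys_created_by), both of these
// queries can use the index, because each filters on a leading prefix.
var a = new GlideRecord('task');
a.addQuery('active', true);               // prefix: (active)
a.query();

var b = new GlideRecord('task');
b.addQuery('active', true);
b.addQuery('sys_class_name', 'incident'); // prefix: (active, sys_class_name)
b.query();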

 

Why do I say this? I tested my theory, and based on the results below, Option 2 was faster and better, having a smaller, exact result set to process.

 

Table: Task

Total number of rows: 150K

 

Option 1. Iterating through 20 rows, with an execution time of 30 minutes (index key: index_composite(sys_class_name,active,sys_created_by))

Option 2. Iterating through 5 rows, which is the exact result set, with a 0.12 ms response time (index key: index_alternative(active,sys_class_name,sys_created_by))


Option 3. Iterating through 20 rows, with an execution time of 20 minutes (index key: index_cardinality(sys_created_by,sys_class_name,active))


 

It's not just about the WHERE clause: consider selectivity and cardinality

There is more to indexing than just using the “where” clause and selecting the columns. Determining the right index comes down to two major factors: selectivity and cardinality. These should be considered when creating an index.

 

Selectivity is calculated by: Selectivity of index = cardinality/(number of rows) * 100%
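For example (hypothetical numbers): a column with 1,500 distinct values in a table of 150,000 rows has a selectivity of 1,500 / 150,000 × 100% = 1%.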

 

Selectivity is the variety of values in a column of a table divided by the total number of rows in it. Cardinality is the number of unique values. The cardinality of a column can be found using the "DISTINCT" command in MySQL. This is easiest when you can run a SQL command directly from the MySQL command box in Background Scripts, but that is unavailable to admins post-Geneva, so you may need to use an alternative.
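One such alternative is a background script using GlideAggregate's COUNT(DISTINCT) aggregate; a sketch (the table and column are illustrative):

// Estimate the cardinality and selectivity of a column without direct SQL access.
var total = new GlideAggregate('task');
total.addAggregate('COUNT');
total.query();
var rows = total.next() ? parseInt(total.getAggregate('COUNT'), 10) : 0;

var uniq = new GlideAggregate('task');
uniq.addAggregate('COUNT(DISTINCT)', 'sys_class_name');
uniq.query();
var card = uniq.next() ? parseInt(uniq.getAggregate('COUNT(DISTINCT)', 'sys_class_name'), 10) : 0;

gs.log('cardinality = ' + card + ', selectivity = ' + (rows ? (100.0 * card / rows) : 0) + '%');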

 


 

Using this concept, an index should be applied when the number of matching rows to be selected is small in relation to the total number of rows; in other words, index the columns with high cardinality first to narrow down the result set. In our case, the difference in cardinality was hardly enough to matter significantly, and the index key index_alternative(active,sys_class_name,sys_created_by) was better, as it narrowed the result set early, in a binary-search-like fashion, for faster retrieval. Hence, as I mentioned before, it all depends on the number of rows in the MySQL table and how they are distributed.

 

The more indexes you have, the harder the MySQL optimizer has to work, and it may incorrectly choose or ignore an index and use a table scan instead. This article is intended to help ServiceNow customer admins understand how indexes are administered by ServiceNow engineers and what they consider before making a decision.

 

 

 

Thank you database guru Gurnish Anand for your guidance.
