Mwatkins
ServiceNow Employee

Overview

This guide is written by the ServiceNow Technical Support Performance team (All Articles). We are a global group of experts who help our customers with performance issues. If you have questions about the content of this article, we will try to answer them here. However, if you have urgent questions or specific issues, please see the list of resources on our profile page: ServiceNowPerformanceGTS

The goal of this article is to address some common types of configuration issues that can lead to slow query performance problems on the database. Nearly every piece of data and configuration in ServiceNow exists as a row in a MySQL database (technically MariaDB as of 2020). Therefore nearly every operation in ServiceNow can require hundreds of queries to the database to return its results. In the simplest sense, ServiceNow is just a web API for a database. When you open the list of incidents, for example, you might notice the URL says /incident_list.do. At the application tier there is logic that interprets this request to mean, "get a list of rows from the incident table in the database".


In this article we attempt to address some of the common "gotchas" that we see administrators hit when configuring their application layer in ways that are not optimal for database performance.

A note about building efficient queries

ServiceNow allows users to build their own query filters in many areas of the product. Most notably, power users can build query filters on lists and reports. With this power and flexibility comes the potential for negative impact on performance. By knowing how to build efficient filters, power users will be able to get the information they need without negatively impacting performance of the system. The following is a list of things power users should consider when creating queries. This becomes extremely important when creating a query that will be run automatically many times per day, such as a gauge on your homepage that auto-refreshes.

These same tips also apply to administrator and developer users who design solutions by writing code or making configuration changes. Whether you are a power user, administrator or developer, these principles will go a long way to ensuring you and your team have a great experience with the ServiceNow platform. I would say that probably 90% of the slow transaction issues experienced by ServiceNow users can boil down to these few things. Originally the list had only 10 items but has grown over time. We didn't want to change the name because it is frequently bookmarked and referenced externally, so please excuse the fact that there are more than 10 items! We hope this helps you in your pursuit of service excellence!

We have tried to order this list based on how frequently we have given each recommendation to customers.

 

#1 Limit the Rowcount User Preference 

Issue: The rowcount user preference is set higher than the recommended best practice of 20 rows per page

Business Impact: When the rowcount setting is higher than 20, list rendering degrades the user experience. There are only so many records that a human wants to read through and, in addition, for every record that is loaded, multiple layers of business logic, including security rules, must be executed. Individually these may take only a few milliseconds here and there, but they add up. In extreme cases, if enough users have the setting above 20, list rendering can cause levels of database impact that affect other transactions.

Affected: The user with a high rowcount will experience slow list, homepage and form render times. If enough users execute inefficient operations at the same time, this can result in system-wide performance degradation. 

Solution:

There are three things that can be done to address the issue:

1) Individual users can change their "rowcount" user preference via the hamburger icon (three horizontal lines) on the list UI header.

2) Administrators can manually set the values of the rowcount preference through the module "User Administration > User Preferences", or via the list below that already has the filter added for rowcounts (a background-script sketch for auditing these preferences follows this list):

/sys_user_preference_list.do?sysparm_query=nameLIKErowcount%5Evalue!%3D20%5EORvalue%3DNULL%5Evalue!%3D50%5EORvalue%3DNULL%5Evalue!%3DNULL%5Evalue!%3D10%5EORvalue%3DNULL%5Evalue!%3D15%5EORvalue%3DNULL

3) Administrators can restrict the options that users are allowed to select by setting the "glide.ui.per_page" property
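
Related to option 2, here is a minimal background-script sketch for auditing rowcount preferences above the recommended 20 rows per page (the threshold and the logging are assumptions; adjust to your needs):

var pref = new GlideRecord('sys_user_preference');
pref.addQuery('name', 'CONTAINS', 'rowcount'); // same filter idea as the list URL above
pref.query();
var highCount = 0;
while (pref.next()) {
    var rows = parseInt(pref.getValue('value'), 10);
    if (rows > 20) {
        gs.info(pref.user.getDisplayValue() + ' has a rowcount preference of ' + rows);
        highCount++;
    }
}
gs.info(highCount + ' user preference(s) are set above 20 rows per page');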

NOTE: The rowcount setting becomes especially impactful when using the "group by field" option in the list UI. If rowcount is set to 100, each group in the list UI will have up to 100 records in it. For every record displayed in the UI, the platform has to execute hundreds of security and rendering activities. This can all add up very quickly.

  

#2 Use Database indexes with the most efficient operator for the job 

Database indexes are very much like phone books. A database index is a file that stores a sorted version of a table based on a certain field or fields. So, one index may be like a phone book that sorts the entries by last name and then first name (a "compound" index; see Multiple-Column Indexes at https://dev.mysql.com/). Another index may be like a phone book that sorts the entries by phone number. It is important to make sure that you have the indexes you need to support your intended usage of the system (see the documentation page: Create a table index).

Whether you have a supporting index or not, you also need to consider the way your users will query the data in your tables. There are different types of operators that can be used to filter the data in your tables. It is critical that any commonly used filters (like a module or a report) are designed to use the most efficient operator for the job. "Equals" and "starts with" queries are more efficient than "contains" or "ends with". For example, consider the relative difficulty of finding the following things in a phone book where the names are organized alphabetically by last name:

Find people whose last name starts with "Bro"

Find people whose last name equals "Brown"

Find people whose last name ends with "own"

Find people whose last name contains "row"

For the "starts with" and "equals" operators it is a fairly easy thing to do, right? But if you want to find someone whose name ends with "own", you'd have to read the whole phone book, checking the end of each person's last name. To find all the people with last names that contain "row", you would have to read the whole phone book and you'd have to read the whole name of every person in the phone book - not just the end.

Think about these other situations, how would they stack up against the above queries?

Find people whose last name does not contain "row"

Find people whose last name is neither "Green" nor "White"

Find people whose last name is greater than "Brown" (e.g., Green and White, but not Alabaster or Brown)

Find people whose last name is less than "Brown" (e.g., Alabaster, but not Brown, Green or White)
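
To translate the phone book analogy into ServiceNow terms, here is a minimal GlideRecord sketch (using the sys_user table's last_name field purely for illustration) contrasting an operator that can walk an index with one that cannot:

// "Starts with" can walk an index on last_name, like flipping straight to the "Bro" section
var starts = new GlideRecord('sys_user');
starts.addQuery('last_name', 'STARTSWITH', 'Bro');
starts.query();

// "Contains" cannot use the index efficiently; every name must be examined
var contains = new GlideRecord('sys_user');
contains.addQuery('last_name', 'CONTAINS', 'row');
contains.query();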

 

#3 Add efficient Filter Conditions like active=true to improve slow queries 

Suppose you have a filter that is performing an expensive operation. For example, suppose you want to search the email table for a record whose subject contains a certain term. This is an incredibly expensive operation since it is on a huge table, on a large field, and does a contains operation ("LIKE" in ServiceNow query string syntax) - which cannot leverage an index. To make this query more efficient, consider how you could add a second, cheaper condition so that the overall query becomes less expensive to execute. 

Original query string: subjectLIKEChange Request

Improved query string using an additional condition:

sys_created_onONLast 7 days@javascript&colon;gs.daysAgoStart(7)@javascript&colon;gs.daysAgoEnd(0)^subjectLIKEChange Request 

Why does this work? The second query above limits the timeframe to be searched to only records created in the last 7 days - an efficient way of narrowing the result set. Databases build their final result sets by first creating intermediate result sets. By adding a condition that narrows the intermediate result set in an efficient way, the database can perform the less efficient operation — subjectLIKEChange Request - against the much smaller, intermediate result set. Usually it can do this in-memory, although sometimes it must perform the operation on the disk with a temporary table. This particular strategy — using a field like sys_created_on that reflects chronological order — is sometimes called time boxing. By executing an expensive operation within just a small "box" of time, the total query execution becomes much faster. Note that this strategy requires that the table has an index on the field that is being used in the time boxing strategy!
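
The same time-boxing idea can be expressed in server-side code. Here is a minimal GlideRecord sketch of the improved email query above (the 7-day window mirrors the example; adjust as needed):

var email = new GlideRecord('sys_email');
email.addQuery('sys_created_on', '>=', gs.daysAgoStart(7)); // efficient, index-friendly time box
email.addQuery('subject', 'CONTAINS', 'Change Request');    // expensive condition now runs against a much smaller result set
email.query();
while (email.next()) {
    // process the matching emails
}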

Far and away, the #1 most efficient filter condition that should be added whenever possible is "active = true". Most work that is done in ServiceNow is done on tables that extend task. Suppose you have over 3,000,000 records in the task table and you want to query all the incidents that have the word "email" in their short description. Further suppose the following distribution of data:

Count        State Label         State   Active
763          New                 -2      1
20,722       Active              1       1
1,952        Pending Change      2       1
2,178        Pending Customer    3       1
552          Pending RCA         4       1
5,009        Paused              5       1
698          Resolved            6       1
2,997,319    Closed              7       0

 

Consider the following two queries:

Original query string: short_descriptionLIKEemail

Improved query string: short_descriptionLIKEemail^active=true

By adding the active=true condition, you can immediately reduce the intermediate result set from 3 million down to a few thousand! Keep the number of active tasks low, include the "active" column in your indexes, and always add "active=true" to your filter conditions when possible.

You can customize your lists and modules to force the active=true condition to stay in the breadcrumbs by adding &sysparm_fixed_query=active=true to the URL. https://docs.servicenow.com/bundle/orlando-platform-administration/page/administer/list-administrati...

NOTE: For the above strategy to work, the number of active records in each table that extends task must be kept relatively small - ideally under 10%. Out of the box, tasks eventually go to a closed state and the active flag is set to false. However, customizations made to an instance sometimes result in tasks never reaching active=false. If one table in the task hierarchy has many active tasks, this can affect query execution efficiency for all classes of task tables! Make sure the percentage of active tasks in your tables stays low - under 10%.

 

#4 Avoid Conditions on Dot-Walked Fields

A "dot-walked" field in ServiceNow is a field that references a different table than the one currently selected, i.e. a Reference field. For example, a condition to check the name of the assignment group of a task would be a dot-walk from the task table to the sys_user_group table via the dot-walked relationship: task.assignment_group.name. When used in a condition, dot-walked fields create an implicit JOIN operation in the database between the related tables. Whenever possible, attempt to avoid conditions on dot-walked expressions. Relational databases are inherently bad at selecting good execution plans for JOIN queries between large data sets.

For example, the following filters return the same result sets but may mean the difference between a 10 second and a 10 millisecond query when dealing with millions of records.

Using a dot-walked field on the referenced table (screenshot not available):

Using a direct reference field comparison (screenshot not available):
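
Since the original screenshots are not reproduced here, the following minimal GlideRecord sketch shows the same two approaches, using the assignment group example from above (the group name is an assumption for illustration):

// Dot-walked condition: forces an implicit JOIN from task to sys_user_group
var slow = new GlideRecord('task');
slow.addQuery('assignment_group.name', 'Service Desk');
slow.query();

// Direct reference comparison: look up the group once, then filter on its sys_id (no JOIN)
var grp = new GlideRecord('sys_user_group');
if (grp.get('name', 'Service Desk')) {
    var fast = new GlideRecord('task');
    fast.addQuery('assignment_group', grp.getUniqueValue());
    fast.query();
}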

 

 

#5 Beware the Out-of-box "Go to" search option for lists

In the ServiceNow list header there is a search bar that allows you to narrow the list results by using a desired term against a particular field.

[Screenshot: the list "Go to" search, searching the Short description field for "SAP"]

By default the "Go to" search will use the greater than or equal to comparison operator (>=). So, for example, in the screenshot above the system would do a search for any incident whose short description is greater than or equal to "SAP". This behavior is probably not what the customer desired or expected by default.

  1. Set the property glide.ui.goto_use_starts_with = true (true by default as of Kingston on z-boots, see PRB1149592). For later versions, add the property to your sys_properties table and set it to true (a background-script sketch follows this list).

  2. Use wildcard shortcuts with "Go to" for greater efficiency
    For example, by using SAP% instead of SAP, the search will return records whose short description starts with SAP.
    See the documentation page: Available list search wildcards

  3. Avoid using the "goto_use_contains" property
    Changing the default behavior to do contains searches rather than >= searches is even less efficient.

  4. [Available in Kingston] Specify the default "Go to" behavior on a per field basis
    This feature allows administrators to specify that certain fields should use a different default comparison operator. To activate:
    1. Upgrade to Kingston or later

    2. In the Dictionary record (sys_dictionary) for the field in question (e.g. task.short_description), add the attribute "goto_starts_with_search=true"
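
For option 1, a minimal background-script sketch (assuming you have admin rights; gs.setProperty should create the property if it does not already exist):

gs.info('Current value: ' + gs.getProperty('glide.ui.goto_use_starts_with', 'not set'));
gs.setProperty('glide.ui.goto_use_starts_with', 'true');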

  

#6 Break complex reports - especially ÜBER ORs - into multiple reports on a single Home page 

Homepages allow multi-threaded gauge processing. By default the multi-threading is set to 2, so ServiceNow will process two reports on a homepage at the same time. This means you can often render three small reports faster than one big one. It is not always possible to break a complex query into multiple simple reports in a meaningful way without compromising the business requirements, but it should be considered. For example, consider the following situation:

Big report takes 35 seconds:

(Closed today) OR (Opened today) OR (Active=true AND Assigned To=Service Desk) 

3 little reports

  1. Closed Today (1.3 seconds)
  2. Opened Today (1.2 seconds)
  3. Active=true AND Assigned To=Service Desk (2 seconds)

On a homepage with multi-threading of 3, the 3 little reports only take about 2 seconds total - the time it takes to render the slowest of the three reports.

This same principle also applies to developers and administrators who are writing complex GlideRecord and GlideAggregate code. See the community article Database Performance: Ways to Improve Slow OR and JOIN Queries
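
For developers, here is a minimal GlideAggregate sketch of the "three little reports" idea, counting each bucket with its own simple query instead of one large OR (the group sys_id placeholder, the use of assignment_group for the third bucket, and the date helpers are assumptions for illustration):

// 1) Closed today
var closed = new GlideAggregate('incident');
closed.addQuery('closed_at', '>=', gs.beginningOfToday());
closed.addAggregate('COUNT');
closed.query();
if (closed.next()) gs.info('Closed today: ' + closed.getAggregate('COUNT'));

// 2) Opened today
var opened = new GlideAggregate('incident');
opened.addQuery('opened_at', '>=', gs.beginningOfToday());
opened.addAggregate('COUNT');
opened.query();
if (opened.next()) gs.info('Opened today: ' + opened.getAggregate('COUNT'));

// 3) Active and assigned to the Service Desk group (reference compared directly, per rule #4)
var groupSysId = '<sys_id of the Service Desk group>'; // placeholder
var activeCount = new GlideAggregate('incident');
activeCount.addQuery('active', true);
activeCount.addQuery('assignment_group', groupSysId);
activeCount.addAggregate('COUNT');
activeCount.query();
if (activeCount.next()) gs.info('Active / Service Desk: ' + activeCount.getAggregate('COUNT'));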

  

#7 Design efficient Database Views

A Database View in ServiceNow is just a way of doing a JOIN query. When designing your Database View, consider including limiting conditions that create smaller intermediate result sets so that you avoid unnecessarily querying a huge dataset.

For example, suppose you build a Database View that joins Metric Definition with Metric Incident with Incident. There is one like this that comes out of the box; it allows you to see different metrics in the same report. However, consider the case where you only want to see a certain Metric Definition. It may be much more efficient to add a condition to your Database View that restricts the results from Metric Incident to only the particular Metric Definition that you are interested in. Due to the sheer size of the tables involved, you may need to sacrifice the convenience of having every possible Metric Definition available for selection in exchange for the efficiency of bringing back only what you need.

 

#8 Limit the number of columns that you see in List view

There is a list mechanic (cog wheel icon) on every list view in ServiceNow that allows you to personalize the columns returned with every list query that you make. By reducing this list to only the specific columns that are of interest to you, list rendering performance can be greatly improved.

 

#9 Avoid adding Journal fields to Lists 

The content of journal fields (e.g. "work notes" or "comments") is actually stored in a child table. When a journal field is displayed on a list, it requires one extra query per row. With 50-100 rows per page, this can result in diminished performance during list rendering. The same can be true of other large or dynamic fields, such as the Workflow type field. When designing your UI, test whether removing such fields from the list view has a large impact on performance.

 

#10 Beware of your "Order By" field (sort order in lists)

One of the easiest ways to make a query run slowly is to "Order By" a field that does not support easy sorting. If you are performing a query that is running slowly, see if you can significantly speed up the query by removing the "Order By" field or selecting a different field upon which to perform the ordering.

If you cannot achieve your goal without performing a certain "Order By" operation, see if your administrator can add a Database index to the table that will support your query running more efficiently.

Databases almost always choose just one index per table when designing a query plan (there are rare exceptions, like an index merge). One common situation is that the field being ordered on forces the database to take a sub-optimal query plan. For example, consider the following two time-boxed queries:

Query A: SELECT ... FROM incident WHERE sys_created_on > "2017-06-20 01:00:00" AND sys_created_on < "2017-06-20 02:00:00" ORDER BY sys_created_on 

Query B: SELECT ... FROM incident WHERE sys_created_on > "2017-06-20 01:00:00" AND sys_created_on < "2017-06-20 02:00:00" ORDER BY opened_at

Let us suppose that both sys_created_on and opened_at have database indexes on them. Query A will be lightning fast, because it can use the index on sys_created_on and only grab one hour of data. For Query B, however, the optimizer may choose to use the database index on opened_at; since that index says nothing about sys_created_on order, the database must scan through every record in the opened_at index, looking for ones that fall within the 1-hour sys_created_on timeframe. So, whenever possible, use this principle: align your sort field with a database index that will also efficiently restrict the results of your WHERE clause.
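
In GlideRecord terms, the same principle looks like this minimal sketch (the one-hour window comes from the example above):

var inc = new GlideRecord('incident');
inc.addQuery('sys_created_on', '>', '2017-06-20 01:00:00');
inc.addQuery('sys_created_on', '<', '2017-06-20 02:00:00');
inc.orderBy('sys_created_on'); // sorts on the same indexed field that restricts the WHERE clause, like Query A
// inc.orderBy('opened_at');   // sorting on a different field may force a poor plan, like Query B
inc.query();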

  

#11 Load Related Lists on-demand

In the ServiceNow Fuji UI we added the ability to load Related Lists asynchronously, or "on-demand". To change the default behavior for all users to "On Demand", ServiceNow administrators can create the following user preference (note that the User field is intentionally left blank to make this the default for all users). 

Name: glide.ui.related_list_timing

System: true

User: <leave this field blank>

Value: deferred (or "ondemand")

With this in place, the user can still override the behavior and change it to something else, but at least we start off with a clean slate. If any per-user settings for glide.ui.related_list_timing already exist, you can remove them to force those users to honor the new default behavior. 
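
Here is a minimal background-script sketch for creating that default preference record if it does not already exist (field names are from the sys_user_preference table; "deferred" vs. "ondemand" as described above):

var pref = new GlideRecord('sys_user_preference');
pref.addQuery('name', 'glide.ui.related_list_timing');
pref.addNullQuery('user'); // the system-wide default record has no user
pref.query();
if (!pref.next()) {
    pref.initialize();
    pref.setValue('name', 'glide.ui.related_list_timing');
    pref.setValue('system', true);      // applies to all users
    pref.setValue('value', 'deferred'); // or 'ondemand'
    pref.insert();
}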

Pros:

  • The top portion of all Forms will load very quickly.
  • Users who are simply trying to update a record and do not need to see the Related Lists will have a much faster user experience (Related Lists are often the cause of 75% to 90% of form load latency).
  • Users have the ability to override the preference for how their individual forms load if they do not like the behavior

Cons:

  • Users who want to see the related lists may feel that the things they want to see are taking longer than before - especially if they are using the "ondemand" style list.
  • Prior to Fuji Patch 5 this behavior had a user experience issue that caused the form to "jump down" to the related list section once the list completed rendering. Most users found this annoying. Fuji Patch 6 and later remove the "jump down" feature.

 

#12 Minimize the volume and frequency of running Homepage loads 

One of the heaviest uses of ServiceNow is the automatically refreshing homepage. This is a very useful feature but can also lead to excessive system usage and performance degradation.

If a user's homepage is loading they will not be able to perform any other operation during that time. ServiceNow limits the number of concurrent transactions for each logged in user to 1 — this is known as session synch. To avoid running into session synch it is a good idea to reduce the frequency of your homepage auto-reload. 

If you have 200 users who want to see a certain report daily or weekly, one idea is to deliver it as a scheduled report that goes out to them via email instead of putting it on their homepage. This is especially true for slow-loading reports. A homepage will only load as fast as its slowest report. Avoid adding slow reports (ones that take more than 3 or 4 seconds) to homepages. 

If enough users have automatically loading homepages the combined impact can lead to performance degradation across the whole system during peak usage hours.

Select a very fast homepage as your default homepage. Your default homepage will be the last homepage you looked at. When you first log into the system your default homepage will be rendered automatically. If your colleagues all log in at the same time in the morning this can lead to a very heavy system load right when everyone logs in. A common customization is to create a homepage splash screen where users are redirected on login. See KB0712404 - Setting a Light Weight Home Page for Users for instructions about how to do this.

 

#13 Consider the data distribution and future growth 

This is probably one of the most overlooked but important considerations. Whether you are just trying to run a query one time or you are designing some feature that will be used thousands of times a day for the next 10 years, this type of consideration will take on different implications. However, it is always important to think about how the data you are querying is distributed. 

Here's an example. Suppose you have an application that manages many-to-many relationships between Tasks and Configuration Items. There is a many-to-many table named "task_to_ci". The data distribution in this many-to-many table is like this:

id   task_id   configuration_item_id
1    A         1
2    A         2
3    B         1
4    B         3
5    C         1
6    C         4

In this scenario there are 3 tasks and each has two configuration items connected - a pretty even distribution. However, configuration item #1 has 3 tasks connected to it, while every other configuration item has only 1 task connected to it. Now imagine that 3 months later this pattern has repeated 100,000 times and configuration item #1 ends up with 100,000 tasks connected to it. Which of these filters will execute faster?

task_to_ci_list.do?sysparm_query=configuration_item_id=1

task_to_ci_list.do?sysparm_query=configuration_item_id=2

The answer is pretty obvious when stated this way. The query for all records for configuration item #1 will be much slower because it will return 100,000 matches. However, if you didn't realize the data was skewed like this, you would not anticipate that one query would be slow and the other fast. This is just a very simple example to demonstrate how important data distribution is to the execution time of a certain query. There are too many permutations of this pattern to elaborate completely, but here are some questions that might help you when you are planning your query strategy for future data distribution and table growth:

  • Am I designing my solution based on an example use case that will be different than the real world data distribution?
  • In places where I am aware of an uneven data distribution, how can I configure my modules, filters, homepages, reports, dashboards to optimize for that uneven distribution?
  • Are there data cleanup strategies that I can employ to keep the size of this table small enough?
  • What logic is in place to avoid unexpected data distributions (these last two are more of a developer question, but certainly important to consider)?
  • One common scenario for uneven data distribution is where most records either have NULL for a certain column or most records are not NULL for that column. In such a case, consider any queries written that rely on a NOT NULL or IS NULL condition.
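
One way to answer these questions is to measure the actual distribution with a GlideAggregate count. Here is a minimal sketch against the hypothetical task_to_ci table from the example above (the table and column names are illustrative only):

var agg = new GlideAggregate('task_to_ci');
agg.addAggregate('COUNT');
agg.groupBy('configuration_item_id');
agg.query();
while (agg.next()) {
    gs.info('CI ' + agg.getValue('configuration_item_id') + ' is linked to ' + agg.getAggregate('COUNT') + ' task(s)');
}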

Comments
layla
Giga Expert

Really interesting, thanks for these practices


I would also add "Omit nested queries"; we encountered a serious performance issue that was caused by nested GlideRecord queries.


Mwatkins
ServiceNow Employee

That's a great point, Ait! I wrote a bit about avoiding nested queries here: How large can an array be?

 

However, that thread had to do with pressure on JVM memory; nested loops can put pressure on many different resources and cause performance issues in many different ways (e.g. infinite loops, excessive application server CPU usage, exponential processing cost per nesting layer - think of the movie Inception).

 

 

 

I think I might write another article that focuses on scripting best practices in particular. The items in this article tend to focus on items that result in database pressure due to inefficient queries.

 

layla
Giga Expert

Thanks for the clarification.


Waiting for the next article


JC Moller
Tera Guru

Hi,



This is an excellent article! Bullet number 3, with the missing Active = true filter, is a fairly regular issue that we run into with end users' reports and homepages/dashboards. Educating the users and doing regular follow-ups is also important.



Should something still go wrong with the performance on your instance, there is an excellent KB-article on the HI-portal (publ. 2017/11). Tons of stuff to read about.



"Performance Troubleshooting Guide"


https://hi.service-now.com/kb_view.do?sys_kb_id=bbdde490dbbac384d7e37aa31f96198d



- Jan


Robert Fedoruk
Tera Sage

"UBER ORs" 

Love it.

Inactive_Us2366
Tera Contributor

Great article!!

I would like to add: use setLimit() to improve query performance.

Normal query

var gr = new GlideRecord("table_name");
gr.addQuery("search_something"); // an encoded query string, or use addQuery(field, value)
gr.query();
if (gr.next()) {
    // Do something
}

Better Query

var gr = new GlideRecord("table_name");
gr.addQuery("search_something");
gr.setLimit(1); // tell the database to return at most one row
gr.query();
if (gr.next()) {
    // Do something
}

 

Also avoid multiple encoded queries

Suri2
ServiceNow Employee

Thank you very much for this great article !! 

RainDog
Mega Explorer

Great article! Thanks

GTSPerformance
Tera Guru

We have posted a new article for server-side scripting best practices. You can find it here:

Performance Best Practices for Server-side Coding in ServiceNow

(I know, I know, only 3 years later, right?!)

GTSPerformance
Tera Guru

Added a new rule #13 to this article, "Consider the data distribution and future growth"


ReginaldFawcett
Kilo Guru

Great article. I learned a lot from it.

I do have some questions though.

How does a table query work, does it go row by row, or column by column?

Which columns does it search first?

"Consider the following two queries:

Original query string: short_descriptionLIKEemail^state!=6^state!=7

Improved query string: short_descriptionLIKEemail^state!=6^state!=7^active=true"

This part of the article makes it seem as if it is checking column by column, and that active=true is checked first, despite being the last condition.

 

 

GTSPerformance
Tera Guru

@Reginald Fawcett Thanks for the compliment and the questions! I'll endeavor to give a simple answer without oversimplifying too much. My apologies if this is too rudimentary of an answer.

Q. How does a table query work, does it go row by row, or column by column?

A. Technically the database might go row-by-row if it can't find an index, but that should almost never happen. In almost all cases, ServiceNow will use MySQL/MariaDB B-Tree indexes to do table queries.

A B-Tree index is a generalization of the binary tree data structure, with which you may be familiar. I won't go into the details of how that actually works here - it is well documented elsewhere. Suffice it to say, a B-Tree index works much like a phone book. B-Tree indexes store the indexed column data separately, as an ordered list. When we search for a value, the database first checks the middle value of the ordered indexed column - like flipping to the middle of a phone book. If the search value is greater, it continues searching in the right half; if it is less than the middle value, it searches the left half. This continues until it finds the value. This way, the database does not need to scan all the rows of the table. It is a bit more complicated than this in real life, but it is close enough to provide a basic understanding.

 

PB7
Tera Expert

Hi Jan,

The system informs me that I lack sufficient privileges to view the linked material 😞

Gabriela Cortes
ServiceNow Employee

Hi,

 

Question: When building a Service Catalog, many of the topics have variables specific to the process, and since the variables are all stored in another table, is indexing enough to generate the reports quickly? Is there something customers can consider to ensure we have the best efficiency in this regard?

GTSPerformance
Tera Guru

Hi @Gabriela Cortes ,

Sorry for missing your question!

Off the top of my head I would say that having all the right indexes is not necessarily the only thing you need to do to ensure that such reports execute quickly, but I really couldn't say what other problems you might encounter without knowing a lot more about your detailed use case. I'm not familiar with the concepts of "topic" or "process" as it relates to ServiceNow's Service Catalog. Perhaps you are referring to the Topics in the Unified Taxonomy in Employee Center? In that case, I would have to know a lot more about what you are trying to accomplish with the reports.

Ultimately, your question is not really within the scope of the content of this article and, unfortunately, we don't have enough information to answer it here. If you haven't already, please reach out to our Technical Support team at support.servicenow.com if you have a specific question about the performance of your Service Catalog Variable reports. They'll be happy to help!

Regards,

ServiceNowPerformanceGTS
GTSPerformance
Tera Guru

This is a great point and something that we cover in-depth in our Scripting Best Practice article here.

Thomas Wright1
Tera Contributor

When querying with GlideRecord for data on other tables, which scenario is more efficient?

  1. Looking up the other record first, e.g. a Manufacturer by its name, and then doing a second query with the sys_id of that Manufacturer as one of the query conditions for the record you really want to look up
  2. Dot walking as part of the query, e.g. addQuery("manufacturer.name", "myname");

On a related note, are encoded queries any faster or slower than addQuery? Especially for OR conditions I use encoded queries since it is so simple to build them from a list view compared to multiple addQuery and addOrCondition methods.

GTSPerformance
Tera Guru

@Thomas Wright1 

Thanks for your questions.

 

1. When querying with GlideRecord for data on other tables, is it more efficient to dot-walk as part of the query, e.g., addQuery("manufacturer.name", "myname"), or to do two GlideRecord queries, using the output of the first query (e.g. sys_id from a reference field) as input for the second query on the referenced table?

 

Answer: There are a few factors that may influence this decision. The important thing to know is that when you dot-walk inside addQuery, it requires a database JOIN operation (usually an INNER JOIN). In terms of database efficiency, it is probably more efficient to avoid a JOIN and do two separate queries. However, in most cases, the efficiency gain is going to be at or near 0%. I would definitely say that putting the dot-walk inside the addQuery would be more efficient from a code complexity standpoint and, perhaps more importantly, it will let the database do what it does best: retrieve the data. As long as it is efficient to do so, I would follow the axiom of "let the database do the data processing". For your specific example, I would add that if you can remove the need for either a dot-walk or two queries, that would be ideal. Perhaps by changing it to addQuery("manufacturer", mfrID), where mfrID is a variable containing a reference (i.e. sys_id) of the desired manufacturer, you could reduce it to a single query without the need for a JOIN.
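
A minimal sketch of the three approaches discussed above (the cmdb_ci_computer and core_company table names are assumptions for illustration; "manufacturer" is taken to be a reference field):

// 1) Dot-walk inside addQuery: implicit JOIN to the referenced table
var viaJoin = new GlideRecord('cmdb_ci_computer');
viaJoin.addQuery('manufacturer.name', 'myname');
viaJoin.query();

// 2) Two separate queries: look up the referenced record first, then filter by its sys_id
var mfr = new GlideRecord('core_company');
if (mfr.get('name', 'myname')) {
    var twoStep = new GlideRecord('cmdb_ci_computer');
    twoStep.addQuery('manufacturer', mfr.getUniqueValue());
    twoStep.query();
}

// 3) Single query when the sys_id is already known (no JOIN, no extra lookup)
var mfrID = '<sys_id of the desired manufacturer>'; // placeholder
var direct = new GlideRecord('cmdb_ci_computer');
direct.addQuery('manufacturer', mfrID);
direct.query();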

 

2. Are encoded queries any faster or slower than addQuery?

 

Answer: No. Encoded queries are neither faster nor slower than the addQuery and addOrCondition methods. The processing time to execute those methods vs. addEncodedQuery is negligible.

Regards,

ServiceNowPerformanceGTS

 

Abhishek Pande1
Tera Guru

Hi @GTSPerformance 

Thanks for sharing, this is very helpful and informative.

 

Regards

Abhishek Pandey

shamil-ibrahim
Tera Contributor

With one of our clients we experienced slowness due to the use of multiple addOrCondition statements.

Mark Edson
Tera Contributor

Very helpful article!
