Nabil Asbi
ServiceNow Employee

In this article, I'll explain where you can find users' utterances and how to use them to improve the Virtual Agent experience. As the Virtual Agent admin, you'll want to review these utterances because they serve as a barometer for the quality of the current NLU model.

Before reading ahead, if you haven't read the Virtual Agent Topic Discovery article, it is highly recommended that you do so first in order to understand the three scenarios that can occur when the Virtual Agent interprets a user's utterance. One of those scenarios is the Fallback route, which occurs when the Virtual Agent does not find a topic that matches the user's request. This article focuses on that scenario.

All user utterances are stored, along with their NLU prediction results, in the Open NLU Predict Log table. You can access this table by typing open_nlu_predict_log.list into the filter navigator.


This table, Open NLU Predict Log, contains the logs of all the Virtual Agent predictions. It is labeled "Open NLU" because it is not unique to ServiceNow's own NLU logs; it can also contain the logs for the IBM Watson and Microsoft LUIS NLU integrations. Here are the most important fields to pay attention to:

Utterance: The user's utterance that was sent to the NLU model for intent or entity prediction
Message: A summary of the response; it contains the counts of the prediction results
Request: The JSON request that was sent to the NLU engine for prediction
Response: The JSON response containing the prediction results, such as the intents and entities that were predicted above the threshold
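To make the relationship between the Response and Message fields concrete, here is a minimal Python sketch. The JSON shape below is a simplified assumption for illustration; the real payload from the NLU engine contains more metadata.

```python
import json

# Hypothetical, simplified shape of the Response field (illustrative only).
sample_response = json.dumps({
    "intents": [
        {"name": "Reset Password", "confidence": 0.92},
        {"name": "Unlock Account", "confidence": 0.41},
    ],
    "threshold": 0.7,
})

def summarize_prediction(response_json):
    """Count intents at or above the confidence threshold, mirroring
    the count summary shown in the Message field."""
    data = json.loads(response_json)
    threshold = data.get("threshold", 0.0)
    matched = [i for i in data["intents"] if i["confidence"] >= threshold]
    return "Sync Predict Results: %d intents" % len(matched)

print(summarize_prediction(sample_response))
# Sync Predict Results: 1 intents
```

Only the 0.92-confidence intent clears the 0.7 threshold here, so the summary reports a single intent.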

 

One of the key ways to improve the current Virtual Agent and NLU model is to review the failed utterances. When an utterance fails, there are two possible reasons:

  1. There is no topic that can handle the utterance. For example, let's assume the user asks "Where can I find the closest supermarket?" but you limited the Virtual Agent to helping the user with IT- or HR-specific topics only.
  2. The user asks for "Help with leave"; although a discoverable topic exists, the NLU prediction did not discover it, for reasons that can be fixed by re-tuning the model. To do this, you would first need to locate the utterances that resulted in failed predictions.

To find all the utterances that have no predictions (0 intents) in the Open NLU Predict Log table, set a condition on the Message column: Sync Predict Results: 0 intents.
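If you export the log rows (for example, to CSV) rather than filtering in the list view, the same condition can be applied in a few lines of Python. This is a sketch under the assumption that the exported field names match the table's columns; the rows themselves are made-up examples.

```python
# Illustrative rows exported from the Open NLU Predict Log table.
log_rows = [
    {"utterance": "reset my password",
     "message": "Sync Predict Results: 1 intents"},
    {"utterance": "where can I find the closest supermarket",
     "message": "Sync Predict Results: 0 intents"},
    {"utterance": "help with leave",
     "message": "Sync Predict Results: 0 intents"},
]

def fallback_utterances(rows):
    """Return the utterances whose prediction matched no intents."""
    return [r["utterance"] for r in rows
            if r["message"] == "Sync Predict Results: 0 intents"]

print(fallback_utterances(log_rows))
# ['where can I find the closest supermarket', 'help with leave']
```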

 


Tracking utterances that return 0 intents, or even multiple intents, is a first step in determining which utterances make the most sense to add to the model for tuning. You can do this by building reports on these tables.

In the first scenario, it is recommended to evaluate the quality and frequency of the utterances that are returning 0 intents. Users may be requesting something for which you have not yet created an intent, and this could be an opportunity to expand the number of available topics. Alternatively, they may be typing utterances that you will likely choose not to support, as in the "Where can I find the closest supermarket?" example.

The problem in the second scenario can be rectified by re-tuning the NLU model. You may notice trends in your data where several users are requesting something that does not result in a clear, decisive topic discovery. Such utterances are generally the new utterances you will want to include in the model or run model accuracy tests against.
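Spotting those trends can be as simple as ranking failed utterances by frequency. A minimal sketch (the utterance list is illustrative only):

```python
from collections import Counter

# Failed (0-intent) utterances pulled from the predict log, made up here.
failed = [
    "help with leave",
    "help with leave",
    "apply for leave",
    "where can I find the closest supermarket",
]

# The most frequent failures are the strongest candidates for tuning.
for utterance, count in Counter(failed).most_common(3):
    print(count, utterance)
```

In this toy data, "help with leave" surfaces twice, flagging it as a candidate utterance to add to the model.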

This is a prime opportunity to use your data for continuous improvement, and you'll also want to track any progress with topic discovery using the open_nlu_predict_log table. It is a good practice to establish a baseline metric and gauge the impact of your model tuning and accuracy results against mapped topics.
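One simple baseline metric, sketched below, is the fallback rate: the fraction of logged predictions that returned 0 intents. Comparing this number before and after tuning gives a rough gauge of improvement (the message list is illustrative only).

```python
from collections import Counter

def fallback_rate(messages):
    """Fraction of predictions that returned 0 intents — a simple
    baseline metric to track before and after model tuning."""
    if not messages:
        return 0.0
    counts = Counter(messages)
    return counts.get("Sync Predict Results: 0 intents", 0) / len(messages)

# Example: 2 of 4 logged predictions fell back to 0 intents.
baseline = fallback_rate([
    "Sync Predict Results: 0 intents",
    "Sync Predict Results: 1 intents",
    "Sync Predict Results: 1 intents",
    "Sync Predict Results: 0 intents",
])
print(baseline)  # 0.5
```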

 

Comments
Ivan Delchev
Giga Contributor

hi @Nabil Asbi,

This is a great article, and it will be very useful for our future improvements, but I have a concern. The open_nlu_predict_log and open_nlu_predict_intent_feedback tables are not accessible to people with the nlu_admin role in our environment. This means those people are also not able to take advantage of the OOTB NLU dashboard showing the correct, incorrect, and skipped predictions. On the other hand, I as an admin can access the tables. Is this the default behavior, or might we have some additional restrictions in our environment?

Your advice here will be highly appreciated. Thanks!

Ivan Delchev

e_17
ServiceNow Employee

Very useful article Nabil, as usual!

mrafi
Giga Contributor

Hi @Nabil Asbi ,

 

Thanks for this article!

Can you please let me know what the best practice is for the maximum number of intents under one NLU model?

 

Thanks

Rafi

michellewu
Kilo Explorer

Nice article! Can you also share a little bit about how to show unmatched utterances in PA Dashboard, something like words cloud you showed for us during the training? We've finally upgraded to Orlando and it's nice to see the open_nlu_predict_log table. Thanks!

Yasmin
Kilo Guru

Hi Nabil,

As of Paris, we can no longer access the open_nlu_predict_log.list as we did in New York. Did the security requirements to this table change? 

On the other hand, we are able to access the open_nlu_predict_intent_feedback.list and the open_nlu_predict_entity_feedback.list

Thank you,

Yasmin

D van Heusden
ServiceNow Employee

Hi Yasmin,

The log tables are on table rotation, but you should be able to get entries. I'm getting entries in the table; what do you see when you add the following to your instance URL?

open_nlu_predict_log_list.do?sysparm_first_row=1&sysparm_query=&sysparm_view=

 

Steve Kelly
Kilo Sage

Hello @Nabil Asbi,

Our Open NLU Predict Log only seems to contain data for the past < 60 days. Could there be some sort of flush on this table? We have fewer than 1000 entries, so we can't really use the Model Performance feature properly.

Thanks,

Steve

maggieo
Mega Guru

Hi David,

I have the same issue Yasmin reported. Our instance is on Rome. I don't see the list, but I see the count displayed. I can see records in sub-prod environments but not in prod. I did use an investigation ID to elevate my rights, and it is still not visible.

Version history
Last update:
‎05-15-2020 01:52 PM
Updated by: