
If you search for a C#/.NET ServiceNow example you have probably come up with the following:


Web Services C Sharp .NET End to End Tutorial


This is a great first step, and over the years I have implemented several solutions that used it as a starting point.  It assumes you are already a C# developer, so I thought I would share a couple of insights, fill in a few steps for bringing this example into the new VS IDE, and, of course, rewrite it to do something different! 


So we actually have two labs here.  First we will install the Visual Studio 2015 Integrated Development Environment (VS 2015 IDE, or just VS IDE).  Then we will build a getRecords example.


Lab 1.1: Installing VS Community 2015


a. Download it from here: link.  It's free! 

b. Follow the installation instructions.  It goes pretty quick.  link




Lab 1.2: Creating a Get Incidents Example


1. Read through the ServiceNow article, but only do step 1.1.  Again the link is here.

2. In the VS IDE navigate to File -> New -> Project.  The new project form will be displayed.



3. Fill in the form with the following:

a.  WPF Application - do this instead of Windows Forms.  The reasons are legion, but the best two are:  it is significantly less of a resource hog, and it is resolution independent (windows forms are not).

b. Name: CSharpServiceNowInterface.  Or your favorite name for this application.  Make something up!  Show some initiative! 

c. Click on the OK button to create your project.



4. Your new project should look like this:



5. Now navigate to Debug -> Start Debugging.  This will build and run your application in debugging mode.  A blank form should be displayed, and the debugger stuff should be running in the IDE behind it.



6. Close the "MainWindow" form to stop the debugging.  We are now ready to do some development.


7. From the Toolbox tab on the left drag a RichTextBox control and a Button control onto your new form.  You will find the RichTextBox control under the All WPF Controls tree.



8. Click on the button to get the button properties to show in the lower right window.


9.  Change the following:

a.  Name: btnSend

b. Content: Send Command.  You may need to stretch the button out a bit to get it to display the text.



10.  Click on the RichTextBox to get the properties to show.


11. Change the following:

a.  Name: rtbResults


12.  Click on the outermost edge of the form to get the properties to show.


13. Change the following:

a.  Name: ServiceNow

b. Title: ServiceNow Web Service



14.  Now right-click on the CSharpServiceNowInterface project to bring up the project context menu.


15. Navigate to Add -> Service Reference.  This will bring up the Add Service Reference Form.



16. Fill in the form with the following:

a. Address: https://<<servicenow instance name>>.service-now.com/incident.do?WSDL

b. Namespace: incidentTable


17. Click the Go button.  A popup will ask you to fill in the user ID and password.  BEWARE: It will send these in cleartext to ServiceNow (really???? sigh).  Don't give that user too much authority; read access should be sufficient.  We will be adding security next.



18. After a proper connection has been made the Add Service Reference form should look like this:



19. Click the OK button.  A new Service Reference will be added.



20. Okay, now we go back to the original article.  In step 1.2.1 we are to add in the security section to the App.config file.

a.   Double-click the App.config entry in the Solution Explorer.  This will open the App.config file for editing.

b.  Replace the <security mode=... /> line with the following XML:


    <security mode="Transport">
        <transport clientCredentialType="Basic" proxyCredentialType="Basic" realm="">
            <extendedProtectionPolicy policyEnforcement="Never" />
        </transport>
        <message clientCredentialType="UserName" algorithmSuite="Default" />
    </security>


c. Add a new <appSettings> section at the bottom, after the </system.serviceModel> tag.  I used the uid/pw combo that I created in a previous article.  BTW, this is a best practice!  Never hard-code this stuff in your code! 


        <appSettings>
            <add key="userID" value="svcuser" />
            <add key="password" value="xxxxxxxxxxxxx"/>
        </appSettings>



NOTE: If you REALLY want to crank down on the security of this file I found the following article excellent reading:

Jon Galloway - Encrypting Passwords in a .NET app.config File
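If you want something lighter-weight than the full approach in that article, the DPAPI classes in the System.Security assembly can at least keep the password value out of plain text.  Here is a minimal sketch; the class name and the extra-entropy string are my own illustrative choices, not anything from the original article:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Minimal DPAPI sketch: encrypts/decrypts a secret for the current Windows user.
// Requires a reference to the System.Security assembly.
public static class SecretProtector
{
    // Optional extra entropy; this value is an illustrative assumption.
    private static readonly byte[] Entropy = Encoding.UTF8.GetBytes("CSharpServiceNowInterface");

    public static string Protect(string secret)
    {
        byte[] encrypted = ProtectedData.Protect(
            Encoding.UTF8.GetBytes(secret), Entropy, DataProtectionScope.CurrentUser);
        return Convert.ToBase64String(encrypted);
    }

    public static string Unprotect(string base64)
    {
        byte[] decrypted = ProtectedData.Unprotect(
            Convert.FromBase64String(base64), Entropy, DataProtectionScope.CurrentUser);
        return Encoding.UTF8.GetString(decrypted);
    }
}
```

You would store the Base64 output of Protect() in App.config and call Unprotect() when setting the client credentials.  Note that DataProtectionScope.CurrentUser ties the encrypted value to the Windows account that encrypted it.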


d. Save the file (Ctrl+S).  The file should look something like this:
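In case the screenshot doesn't come through, here is roughly what the finished App.config should look like.  Treat this as a sketch: the binding and endpoint names are generated by the Add Service Reference wizard and will differ in your project, and the runtime version depends on what you selected when creating it.

```xml
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2" />
    </startup>
    <system.serviceModel>
        <bindings>
            <basicHttpBinding>
                <binding name="ServiceNowSoap">
                    <security mode="Transport">
                        <transport clientCredentialType="Basic" proxyCredentialType="Basic" realm="">
                            <extendedProtectionPolicy policyEnforcement="Never" />
                        </transport>
                        <message clientCredentialType="UserName" algorithmSuite="Default" />
                    </security>
                </binding>
            </basicHttpBinding>
        </bindings>
        <client>
            <endpoint address="https://<<servicenow instance name>>.service-now.com/incident.do?SOAP"
                      binding="basicHttpBinding" bindingConfiguration="ServiceNowSoap"
                      contract="incidentTable.ServiceNowSoap" name="ServiceNowSoap" />
        </client>
    </system.serviceModel>
    <appSettings>
        <add key="userID" value="svcuser" />
        <add key="password" value="xxxxxxxxxxxxx"/>
    </appSettings>
</configuration>
```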



21. Click on the MainWindow.xaml tab to bring the form editor back up.



22.  Double-click on the Send Command button.  This will bring up the MainWindow code file.  We will be doing something similar to step 1.2.3 from the article.



23. Next we will add the System.Configuration reference to our project (this is annoyingly NOT done for you automatically).


a. Right-click on the References section in Solution Explorer, and choose Add Reference from the context pop-up.  This will display the Add Reference form.



b. Scroll down and find the System.Configuration entry.  Click on this; a check box will be displayed.  Make sure this is checked.



c. Click OK to save your work. 


24. Now replace both the MainWindow() and btnSend_Click() methods with the following code:


        public MainWindow()
        {
            InitializeComponent();

            // clear out the rich text box
            rtbResults.Document.Blocks.Clear();

            // make the scrollbar visible if the results are larger than the display
            rtbResults.VerticalScrollBarVisibility = ScrollBarVisibility.Auto;

            // kill the pesky extra new-lines style
            Style noSpaceStyle = new Style(typeof(Paragraph));
            noSpaceStyle.Setters.Add(new Setter(Paragraph.MarginProperty, new Thickness(0)));
            rtbResults.Resources.Add(typeof(Paragraph), noSpaceStyle);
        }

        private void btnSend_Click(object sender, RoutedEventArgs e)
        {
            // declare our soap object for transmission
            incidentTable.ServiceNowSoapClient soapClient = new incidentTable.ServiceNowSoapClient();

            // pull the cred info from the app.config
            soapClient.ClientCredentials.UserName.UserName = ConfigurationManager.AppSettings["userID"];
            soapClient.ClientCredentials.UserName.Password = ConfigurationManager.AppSettings["password"];

            // Initialize our query
            incidentTable.getRecords getRecordsQuery = new incidentTable.getRecords();

            // Initialize our response. Since we don't know how many records will be returned, put them into a List
            List<incidentTable.getRecordsResponseGetRecordsResult> responseList = new List<incidentTable.getRecordsResponseGetRecordsResult>();

            // Go after Beth Anglin's active records
            getRecordsQuery.assigned_to = "46d44a23a9fe19810012d100cca80666";  // beth anglin, or pick your favorite
            getRecordsQuery.state = "2"; // active

            try
            {
                // Now fire off our query, sorted by incident number
                responseList = soapClient.getRecords(getRecordsQuery).OrderBy(o => o.number).ToList();

                // Loop through our response records and print them to the results window
                foreach (incidentTable.getRecordsResponseGetRecordsResult response in responseList)
                {
                    rtbResults.AppendText("\n" + response.number + "\t" + response.severity + "\t" + response.short_description);
                }
            }
            catch (Exception error)
            {
                // something bad happened!
                rtbResults.AppendText("\nError: " + error.Message);
            }
        }



25. We are ready to test!  Navigate to Debug -> Start Debugging.  This will run our application and display the form.



26. Click on the Send Command button.  The results should appear in the window on the form.



Ta da!



27. Now let's go back and try an encoded query.  Comment out the two query lines, and replace them with the encoded query from our List View.


            // Go after Beth Anglin's active records
            //getRecordsQuery.assigned_to = "46d44a23a9fe19810012d100cca80666";  // beth anglin
            //getRecordsQuery.state = "2"; // active
            getRecordsQuery.__encoded_query = "active=true^assigned_to=46d44a23a9fe19810012d100cca80666^state=2";
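As a side note, the caret (^) in an encoded query acts as the AND separator between conditions.  If you end up composing these strings in code, something like the following (purely illustrative) keeps the individual conditions readable:

```csharp
using System;

class EncodedQueryDemo
{
    static void Main()
    {
        // Join the individual conditions with the caret AND separator.
        string encodedQuery = string.Join("^", new[] {
            "active=true",
            "assigned_to=46d44a23a9fe19810012d100cca80666",
            "state=2"
        });

        // Produces: active=true^assigned_to=46d44a23a9fe19810012d100cca80666^state=2
        Console.WriteLine(encodedQuery);
    }
}
```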


28. Now navigate to Debug -> Start Debugging, and run our application again.  You should have the same results displayed after clicking the Send Command button.


29. Finally, always clean up your unused using directives (another best practice)!


It should then look like this:


Of course you will need to re-test everything to make sure that last step didn't break anything.


30. Finally, you may not realize this, but you have created an executable.  It is not ready for prime time, but it is there.  If you ever want to release your code you will have to build a release version (this is a debug build), and then package it all up in an installer.  You can find your executable by doing the following:


a. Right-click on any tab to bring up the tab context menu.  Select "Open Containing Folder".  This will bring up an explorer window showing your files.




b. Double-click on the bin folder and then the underlying Debug folder. 



c. The Debug folder will contain the files necessary to run your application.  The .exe file is your executable.  The .exe.config file is your App.config file, copied and renamed at build time.





That's it!  Between the original C# article on the wiki and this one you should have most of the tools you need to do some really serious development!  BTW, I will likely write a couple more articles showing how to implement other features of the WSDL.


Have fun, and have a Happy New Year! 


Steven Bell



If you find this article helps you, don't forget to log in and "like" it!


Also, if you are not already, I would like to encourage you to become a member of our blog!

I have covered the following in my previous articles:



Analysis and Prototyping


Development and Unit Testing


What's left?  You as a developer no longer care after you are done unit testing, right?  Wrong! 


There are still a couple of steps in the process that need to be covered by the developer.  These are JUST as important as any of the previous steps in ensuring the quality of your changes.  Yet another checkpoint to reduce the risk of failure. 


NOTE:  At this point there should be a significantly reduced risk of a design flaw.  The QA team will review it, obviously, and UAT might uncover a problem, but frankly if your Business Owner has been involved up to this point then there should be no surprises for anyone.  QA and UAT really should be for verifying there are no defects in the execution, and that performance is nominal.  Suggested usability "tweaks" are acceptable at this stage IF 1) they do not impact the schedule, and 2) they do not introduce complexity that will require going back and writing new functionality (that is known as a CHANGE, and needs to be set aside for a future release; manage the Business Owner's expectations, even though this might not be part of your job! ).


So, what are the responsibilities of the Developer at this stage in the process?



Quality Assurance Testing

1. After the update set has been promoted to QA the developer is responsible for addressing defects with their code, or that their code might create (remember that integration thing?).

2. QA proceeds to test the new features.  There should be a mechanism here for defect tracking.  The ServiceNow SDLC plugin has a basic version of defect tracking. 




NOTE:  If you are going to use this mechanism you should probably enhance it to include a couple of new fields:  1) a test number with a tie-in to the test case, so you can look at the test steps, expected results, and actual results.  2) On the Test Case application (from the PPS application) you might want to add a field to contain requirement number(s).  This will aid in traceability.



3. The developer is responsible for resolving each defect. Defects are not to be worked until QA has completed the round of testing.

4. A new Update Set is created.  Use a naming convention that will identify it with the current release being tested.  I usually label mine "_2", "_3", etc., as each will contain the changes/fixes.

5. This back-and-forth continues until all QA tests have passed.



User Acceptance Testing


1. When QA has passed all tests it is ready for User Acceptance Testing (UAT).

2. The developer is then responsible for merging all update sets into one, and then re-releasing this to QA for a quick smoke test to verify that everything is present.

a. The ServiceNow Admin who installed all of the update sets during testing will be responsible for backing these out in the reverse order.

3. QA notifies the business owner to begin UAT. The same process for defect discovery and resolution applies.  However, the process is a bit more involved.

a. When UAT finds a problem they will record the test used, the steps to recreate, and the expected result.

b. When UAT has completed a round of testing they will communicate the list of testing results (good and bad) to QA for review.  If defects were found by UAT these will be retested by QA before being communicated to the developer for resolution.  The reason for doing this is that the UAT test failure may be a false-positive, and needs to be reviewed by QA prior to being passed on.

4. When UAT testing has successfully completed, any additional fixes that were required of the developer will have to be merged into the core update set as well.

NOTE: I would recommend rolling this merge out one last time to QA and have them do a quick (smoke) test of the release.

5. The release is now ready for roll-out to production.





Release Preparation


1. After successful UAT the QA team notifies Project Management that the User Story is ready for release to production.

2. Project Management is responsible for creation of the release plan, the rollback plan, and identification of support personnel (contact information, etc.). The developer is brought in to fill in any blanks.

3. The roll-back plan is a list of ordered steps that will be used to remove the new release from production (should it be incompatible or fail).  Don't skip this step!

a. The roll-back plan needs to include Update Set back-out order, data removal and replacement, and Fix Scripts that might be used to reverse data changes (these need to be taken into consideration as part of the development process - sorry, I forgot this in my last article).  Essentially, work done to ensure a clean reversal if possible. 

b. NOTE: If a new plug-in is installed, it cannot be backed out.

c. NOTE: Data updates are a bit trickier. You will want to back up any tables to be modified locally before applying a data import (update transform or upload).

4. At this point, or earlier, the Project Manager cuts a change request for the entire release and sends it through the approval process. The Change Request is a great engineering step to schedule the change; this is a planned release after all.  This should place the modification for roll-out at an off-hour or maintenance window that will not, and should not, impact production users!



Production Release


1. The developer is responsible for assisting the ServiceNow Admin if necessary. Unless something goes badly the rollback should not be needed.  If problems are encountered and they are configuration issues, then they can be dealt with in production.  If an actual defect or coding issue is found, it needs to be handled via a hot-fix roll-out process (simply a sped-up version of the SDLC).

a. The Admin will pull the merged update set for the release from the DEV environment.

2. Should a hot-fix be required, for a post-production defect, it goes through the same process as a normal release.

3. Should a rollout to production fail it may be necessary to roll it back. This is done through the roll-back process created for the release.

4. Admin smoke-tests the application.  This is simply to see if things look ok; a quick-and-dirty look at the modifications to see if they were really applied.

5. If everything looks good then Admin updates the change request.

6. The Project Manager will then announce to concerned parties that the release has been completed successfully with a list of features pushed.  Victory has been achieved! Time to party!



Post Release


1. Here the developer is basically done.  If problems do occur then defect fixes need to go through the process starting at the beginning and working their way back through the requirements and so on.  This will end up being what is called a maintenance release. Again: planned, not slammed through.





The entire development process I have described in this set of articles is not meant to be written in stone (although that would be cool).  It is meant as best-practice advice from someone who has had to do this a lot in development over the last 30+ years, and in ServiceNow over the last 4+ years.  It is, of course, flexible to meet your company's business needs.  I highly recommend adopting some form of development process if you do not already have one.  You will see your defect rate drop like a rock, and, of course, your quality and user satisfaction soar!


Steven Bell




Now for the nuts-and-bolts step: Development!  We have covered requirements gathering, analysis and prototyping, and design.  These process steps are necessary to produce the highest quality software, even for the smallest story.


After having done all the work with analysis, prototyping, and design, the development stage should be relatively straightforward.  Essentially you will be following the instructions you set up for yourself in the design documentation.  There should be no surprises, no technical hurdles, no blind-siding with new requirements, right?! 


Well, unfortunately that isn't always the case.  The purpose of all of this work is to mitigate risk of failure or poor quality.  These phases will reduce risk to a high degree.


In the diagram I show everything going back to requirements should a problem be found.  However, this can also go from the current to any previous step depending on what was found, and the severity of the issue.


So what is involved then at the development phase?  Let me list what I consider necessary for successful development, and then drill-down on each topic:


  1. Environment - ideal setup, promotion steps, cloning strategy.  Announcement of clone.  Export of in-work Update Sets, and reapplication.  Stability considerations vs. working with the latest clone.
  2. Coding - best practices and standards.
  3. Unit and Integration Testing - from design document, exercising the code.
  4. Update Sets
  5. Peer Review - prior to deployment.  Development, Update Set(s), Fix Scripts and data uploads.
  6. Test Deployment - what is being promoted, installation steps and necessary documentation (the dry run; treat like a deployment to production).  Fix Scripts and data uploads.



Ideal Promotion Environment and Cloning Strategies


So why do I bother to tackle this topic in an article on development?  Because this is probably the most important part of developing in ServiceNow.  Without a solid cloning strategy encompassing the development life-cycle, all post-development stages carry a higher degree of risk associated with out-of-date software or incompatibility with the Production environment.  If your development environment is not updated from production on a regular basis then there is a higher chance of failure when your software is installed.


1. Best Environment Setup


The best development to production setup would include a Quality Assurance instance, and a User Acceptance Testing instance as intermediary stages between development and production.  This isolates each step from the other, and reduces the need to clone back to the development instance after every push to production. 


a. Developers should never have access to the QA, UAT, and Production instances.  The temptation is too great to do in-place modifications as defects are discovered.  As defects are discovered by QA they will be communicated to the developer who will rectify them on the development instance.  The developer will open a new Update Set that will capture a number of these for each round of QA testing.  After testing has been successfully completed all of these will be merged into a single update set which will be promoted to QA for a final test.  Again, don't skip these steps.  They produce high-quality code, and significantly reduce risk!


b. There are a number of strategies for a promotion environment.  These are listed from best to least desirable.


i.   Development -> Test (QA) -> Staging (UAT) -> Production

ii.  Development -> Test (QA) -> Production

iii. Development -> Production


NOTE: A lot of companies view the UAT instance as unnecessary.  My feeling is that if the budget allows for it, it is the best approach.  A UAT environment should be as close to production as possible, and should not have QA testing or development defect correction going on while users test.  QA, on the other hand, will be somewhat shifting sand: the back-and-forth between development and QA as defects are found and corrected would cause too many interruptions (think resets) for UAT to be quickly completed.  I will address this topic in more detail in a future article.


2. Team Development


Let me take a moment to address Team Development.  If you have a large shop with several development teams this is a terrific way to approach the difficulty of each team interfering with the other. With this strategy you have multiple development instances that allow for merging when ready for integration testing.



3. Best Cloning Strategy


This is perhaps one of the most controversial topics I have run into with ServiceNow!  I have spoken with a lot of different admins, and management, and some guy I bumped into on the sidewalk in Chicago, and every one of them has a different opinion on how this has to be handled!


So here is my two cents worth:


1. Clones should always be done from Production.

2. It is a good practice to clone back to ALL instances after a major production push to bring everything into sync.  This would be items such as table structure, code, lookup table data, CI relationship data, etc.  I would advise against cloning the actual production table data itself as this might contain sensitive information.  Use some thought when cloning over the user table data as you might be clobbering special access accounts that will have to be recreated.  Don't clone the logs (this will reduce your clone time significantly).

3. Maintenance releases are a different story.  It may be that you will want to clone back periodically (once a month perhaps) after every so many of these.  Such a release would consist of defect fixes, data refresh of lookup tables, minor table structure additions, and so on.

4. When cloning to DEV while development for a future release is ongoing, it is an obvious practice to have all of the developers export their current update sets to disk prior to the clone, then upload them after the clone.  Communication is everything in this situation!  I have seen work lost because of a lack of communication.  It wasn't pretty.

5. When cloning to QA/UAT again make sure that all parties are communicated with.  QA may have specialized tests that need to be exported to disk prior to the clone.

6. Clone after any major or minor ServiceNow release.  This is an interesting situation, as the implementation of a patch or major release usually progresses from Sandbox to Dev to QA to Prod.  Then clone back to all instances from Prod after the final release.  Remember that to clone you have to be on the same version of ServiceNow in both instances.


NOTE: You might say: Why bother cloning at all?  With promotion from Dev to Test to Prod don't you already have the same structure?  The same code?  The same everything?  Not necessarily.  With QA maybe, but with DEV you want to do periodic refreshes to keep things in sync.  Dev just receives too many changes; a significant number of which may never see the light-of-day in production.





Development Constraints:


1. Development should only be done in the DEV environment.  Do not make changes in QA or Prod.  Period.  As a matter of fact, if your development team and admin team are different teams, it is recommended that developers NOT have access to anything but DEV.  This reduces the temptation to fix a defect at the location it was discovered.  Remember:  all problems should go through process.  Accumulate minor issues into a single release; major issues go into their own release.  As a rule:  do not do development in your production instance.  Exception:  you have just installed ServiceNow for the first time, and have yet to activate it for production use (what we like to call a phase 1).

2. If possible do QA and UAT on their own respective instances.  I know that some shops only have a DEV and PROD instance.  If that is the case then STOP all development on DEV prior to doing QA testing (and during).  Control your environments!  Reduce the risk of a shifting sand situation where the very code you are testing may be changing while it is tested!

3. Production is your "pristine" environment.  Keep it that way.  It should only be modified as far as configuration goes (a term that has many meanings in ServiceNow).  This means properties and data; NOT code.  I am talking development here, remember.

4. It is a best practice to turn on auditing for the more important table structures.  This reduces the "who the heck changed that!" problem considerably.

5. Code repository.  This is another really hot topic of discussion at our company.  So how do you do versioning?  Baselining?  Document repository?  What is the process?  Can you reset back to a baseline?  How would you do that?  Well, I'm going to leave these questions for a future article.  It would triple the size of this one!

6. All development should be in an update set. 

7. Use Fix Scripts for code that has to be run pre- or post-update set implementation.  These have the beauty of being included in an update set.  They are usually for data preparation.



Development Process


1. Create an Update Set. Standardize on naming.  This should be something that will help you, at-a-glance, know what is in the Update Set.  If you are an Agile shop the story or epic name could be used for example.  Try to keep your update set small and manageable.  Use comments for each change in the update set.  All of this helps with audit and traceability.  Greater granularity allows for easier rollback of a particular set of features; while allowing others to proceed.

2. Adopt coding standards for your scripts (Douglas Crockford's site is a good start, as is the ServiceNow Coding Best Practices wiki site).  Use variable and function naming standards.  Use code formatting.  Use comments where it makes sense to describe a difficult to understand piece of code.  Think carefully on the maintainability of your code!

3. Adopt interface standards for your Forms.  Keep them simple.  Bust up busy forms into sections that can be viewed as tabs in the tabbed format.

4. Unit test during development.  Use Fix Scripts and/or Scripts - Background to check portions of your code.  I have created a simple way to test Business Rules with these tools as well (Current Factory).

5. Communicate with other developers while doing development!  Don't step on each other.  Coordinate with each other.  Don't work blindly in a vacuum assuming all will be well and no one could possibly be modifying the same bits as you.  Danger, danger! The coordination will save a lot of headaches.

6. Use peer reviews.  Code review with someone who is not familiar with what you have been working on.  Sometimes when explaining my code to another developer I find things that weren't done quite the way they should have been.  Better to catch it at this juncture than when it goes to QA!  When another coder asks "Why did you do it that way?" don't get offended!  Be prepared with an answer.  Defend your implementation.  If you can't, you need to make some changes!  BTW, the peer review process for ServiceNow code is rarely involved, and usually takes only a little time.  Make room for this step.  Remember: quality of code.

7. Don't be afraid to ask for help should you run into something you can't solve.  There are several resources on the community; you can usually get your question answered in a lot less than 24 hours.  Research your code solutions (this should have been done in the Analysis and Design phases, but it is too late after the development has been done and it is in QA!).  There are tons of code examples, and solutions for most of your needs on the web.  Don't forget about ServiceNow share!


You, as a developer, are responsible for ensuring that your code is as good as you can make it prior to handing off to the test team.  Do NOT expect the test team to find your mistakes for you!  Frankly, I find it humiliating and embarrassing when a bug makes it to the testers.  As the developer I am the beta tester, not them. Your reward should be the immediate gratification when the test team (and management) exclaim on how smooth testing your code went!  Be personally responsible.  Police your code.  Be proactive, don't be lazy.


Note:  This is a good place to plug something that also gets left out of the picture a lot.  Even with time constraints; over-engineer your coding solutions!  Put in try/catch, and error catching code wherever it makes sense.  Even the catch-all:  "An error occurred that should not ever have occurred" is a great thing to have in your solution.  Try to anticipate the weak points in your code and compensate for them.  Be the true software engineer!







You know, I often get the impression that this is the elephant-in-the-room that everyone tries to ignore.  This is the step where you, as the developer, should be testing the tar out of your code!  If I had a nickel for every time I have run across developers throwing their code out to QA untested; well, I would be a rich man!


Don't expect QA to alpha- or beta-test your code!  This, in my book, is irresponsible.  A conscientious developer should be paranoid about their code making it to QA with ANY bugs in it. 


1. Exercise all facets of the changes you are implementing.  This is the nature of the unit test.

2. It is important to try to anticipate the unit tests needed during the Design phase.  These will not necessarily be all-inclusive.  As you build your solution you will run across tests you didn't know you needed.  Back-fill these into your Design document.  Personally, I like handing all of my unit tests to QA so they have a starting place to begin constructing their tests.  Don't view this as doing their job for them!  Don't assume that you as the developer know how to do their job. 

3. Integration test your solution.  This is NOT unit testing.  Instead you are checking to see if your solution plays well with others.  Try to anticipate what other applications in ServiceNow might be affected by your changes.  Don't assume that they will not be.  Back-fill these tests into your Design document as well.  This is a best practice.

4. Write up your test results.  I like doing this in a spreadsheet for easy organization.  I usually attach it to my Design doc if there are a lot of them.  Use what makes sense.  Don't get too hung up on the format.  I go for thoroughness of the testing, as a bit of overkill is a good thing.


Format (organize by functional area if possible):


a. Test Number

b. Test Description - what is being tested

c. Test Steps (numbered)

d. Requirements being addressed (just the numbers)

e. Expected Results

f. Actual Results


5. Have a notification process.  Status reports are a good thing.  Make sure your project management knows your progress toward completion of the plan.  Flag problems immediately.  Don't sit on them hoping they will resolve themselves!

6. When you are done with all coding and testing, and have verified that all of the requirements have been met, you are ready for release to QA.

7. Pull everything together: documentation and implementation order (if there is more than one update set, Fix Scripts to run to prepare data, and/or data to upload).  This documentation will be used by the admin who actually installs on the QA environment.  Note down anything that did not install as expected, and refine your installation process accordingly.  The idea is to work out all of the kinks prior to the final move to production.





Having a development process in-place is a best practice.  It improves quality by having steps that reduce the number of defects and improve code maintainability.  Even a one-person shop should still have a development process.  Don't cut this corner just because you have a small group of one!


BTW, our company keeps a Knowledge Base.  These are articles for capturing any new techniques or coding breakthroughs.  Our developers are encouraged to contribute to this repository.  This has proven very useful, and keeps us from having to reinvent the wheel when covering the same or similar ground at a later time. Add this to your Analysis research step:  Search the KB! 

In my next article I will be talking about the importance of Quality Assurance and User Acceptance testing!


Steven Bell



If you find this article helps you, don't forget to log in and "like" it!


Also, if you are not already, I would like to encourage you to become a member of our blog!

The ServiceNow Platform has a seemingly infinite set of use cases in the enterprise. The combination of the new Studio IDE and over twenty Platform services such as Workflows, Scripting, Integrations and APIs, Orchestrations, Self-Service Catalog, Reporting & Analytics, Mobile UI, Connect, Visual Task Boards, and on and on means that ServiceNow admins, developers, architects, and partners can utilize the Platform to literally deliver "Everything as a Service" for departments, shared services, and business units across the enterprise.


A great example brought to my attention by my colleague keeshenniphof (thanks Kees!) is Sky. Sky is a major U.K.-based television, telecom, and broadband provider that initially implemented ServiceNow ITSM to upgrade to a modern, efficient, and user-friendly Incident-Problem-Change solution for IT. However, it's clear from the video that they're doing so much more with the Platform.




Together with ServiceNow Preferred Solutions Partner TeamUltra, Sky has developed and deployed numerous apps, developed by in-house developers (via a very cool "Hothouse" rapid app development approach), TeamUltra, and even apps from ServiceNow Technology Partners including Bomgar remote control (available on ServiceNow Store). Other cool apps mentioned in the video are a gamification app for the helpdesk, and a consumerized shopping experience for all employees.


This is a great example of how customers are leveraging the ServiceNow Platform into multiple use cases and solutions across the enterprise by extending ITSM, developing new Platform apps with in-house devs and Solutions Partners, and deploying pre-built, certified apps from Technology Partners (ISVs)  available on ServiceNow Store.


Enjoy the video!

Martin Barclay
Director, Product Marketing
App Store and ISVs
Santa Clara, CA

Design is where the rubber meets the road.  We have our requirements.  We have investigated and analyzed each of these and turned over all the technical rocks that might impede our progress.  We are now ready for design!





This is another of the software engineering steps that I often see get tossed aside.  What appears to happen is that the requirements are gathered, and coding begins.  Almost at the same time!  This is similar, in my mind, to the old joke where a manager tells his developers to start coding, and he will go find out what the users want!


Design is where you, as the developer, take the requirements, and analysis, and begin detailing and organizing how they are to be developed into a solution.


My rule-of-thumb:  When I am done with the design I should be able to hand all the documentation to a junior developer who should be able to make ALL of the changes, and do the unit testing with a minimum of supervision.


Also, even after you get into the Design Phase it is possible to find a process or design problem that will require the team to revisit the requirements and/or analysis steps.  Each of these steps acts as a check-and-balance to reduce risk and improve quality.  Take them seriously!  They are not just bureaucratic make-work!


A side note:  The combination of Requirements, Analysis & Prototyping, and Design should constitute roughly 60% of the time of a project.  The remaining 40% could and should be roughly divided in two and allotted to coding and testing.  Frankly, by the time you finish with Design, coding should be a snap.  You have overcome your technical difficulties in prototyping, and with design you should have all of the to-dos, how-tos, in-what-orders, and unit tests defined.


BTW, I do not limit prototyping to the big stuff.  Nor is it something that I relegate to the Analysis step only.  I will be working with code-snippets and using Scripts - Background or Fix Scripts throughout the project.  Don't be too rigid with the process.  Use common sense. 





What the Design Should Contain:


1. Detail on where changes will be made and what they will consist of.  Here is where coding standards can begin to dictate how the solution will look behind-the-scenes.

2. The order in which the changes will be accomplished.

3. The unit tests to be used to exercise all of the changes.  Yes, this will contain the detail and steps of the unit tests!

4. Any integration tests identified by the developer to ensure that the changes do not break the existing codebase.


I am not terribly locked into a rigid format on this.  I use common sense to achieve the purpose.  Remember:  Don't think of this as make-work!  It is truly an important step in the process.



Document Organization


What follows is how I usually organize my design.  Note the order:  Each of these can have cumulative dependencies on the previous.  In addition, starting with table changes, there should be a reference to a requirement that is addressed by the change.  Some of these may address more than one requirement.  This is where requirements numbering shines as it is a way of specifically showing which requirement was addressed.  This is another facet of software engineering:  requirements traceability.
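Requirements traceability can also be made mechanical.  As an illustrative sketch (the element names and requirement numbers here are made up), a small script can flag any requirement that no design element covers:

```javascript
// Hypothetical traceability map: each design element lists the
// requirement numbers it addresses. Names and numbers are examples only.
var designElements = [
  { name: 'Table: add u_region column',      requirements: [1] },
  { name: 'Form: show Region field',         requirements: [1, 2] },
  { name: 'Business Rule: default Region',   requirements: [2] }
];

// Return every requirement number with no design element covering it.
function uncovered(allRequirements, elements) {
  var covered = {};
  elements.forEach(function (el) {
    el.requirements.forEach(function (r) { covered[r] = true; });
  });
  return allRequirements.filter(function (r) { return !covered[r]; });
}
```

Running the check against requirements 1 through 3 would report requirement 3 as untraced, telling you the design is missing a piece before coding ever starts.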



1. Overview - a high-level "executive" description of the tasks that are to be accomplished with this design.  Include any referenced requirements documentation.


For example:



2. Next, list the design elements in "constructive" order.  What this means is that each element builds on, or relies on, the previous.


For example:  Table changes should come before form changes.  You would add new columns to a table before adding those fields to a form.  So it is pretty much common sense.


      This list is a good rule of thumb, but it is by no means all-inclusive.


a. Table - these will include new tables, column changes, column additions, ACL additions and changes

b. Forms - these will include field locations on the form, label changes, UI Actions and ordering.  Here I pull in the screen shots from the requirements.

c. Database Views - these will include new views, modifications to existing views.

d. Code changes - here is where the underlying code is detailed.  This includes Client Scripts, Business Rules, Events and Script Actions, Script Includes, etc.

e. Workflow changes - Diagram this using Visio or something like it.  Roughly what the workflow will be doing.

f. Service Catalog and/or Record Producer changes

i. Storyboarding - How will the Service Catalog or Record Producer actually flow?  What will be on each page?  The variables involved.  What will be passed to the underlying workflow?  Which Client Scripts or UI Actions will be involved?  This has a dependency on the workflow.

g. Scheduled Jobs, Reporting, and other modifications that rely on everything prior.

i. Scheduled Jobs can have a dependency on Script Includes.

ii. Reporting can have a dependency on Database Views, or table changes.



h. Navigation - Application Menus, Modules

i. Unit tests.

i. Detail these out.  These will include what is being tested for (the requirement), and the steps to produce the test.  Expected results.

j. Integration tests.

i. Detail these out.  These will be to the area of ServiceNow that is affected by the change.  They are to verify no impact.  Include the steps to produce the test.  Expected results.

k. Process tests.

i. These are to verify that the process flow (if any) has been achieved.  This is usually a requirements validation step.

ii. Example: 


Service Catalog -> Collects information (variables) -> Workflow -> Active Directory command -> MID Server -> Active Directory -> Expected return.
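The "constructive" ordering described above is essentially a dependency sort.  As a sketch (the dependency edges here are my own illustrative assumptions, not a prescribed list):

```javascript
// Minimal dependency sort over the design areas listed above.
// The dependency edges are illustrative assumptions for this sketch.
var deps = {
  'Tables':          [],
  'Forms':           ['Tables'],
  'Database Views':  ['Tables'],
  'Code changes':    ['Tables'],
  'Workflows':       ['Code changes'],
  'Service Catalog': ['Workflows'],
  'Scheduled Jobs':  ['Code changes'],
  'Reporting':       ['Database Views']
};

// Depth-first walk: emit each area only after everything it depends on.
function constructiveOrder(graph) {
  var order = [], done = {};
  function visit(node) {
    if (done[node]) return;
    done[node] = true;
    (graph[node] || []).forEach(visit);  // dependencies first
    order.push(node);
  }
  Object.keys(graph).forEach(visit);
  return order;
}
```

Whatever your actual design areas are, the point is the same: table changes surface before forms, and script includes before the scheduled jobs that call them.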


Suggested Tools


This is by no means an exhaustive list, or an endorsement of any of these, but I find the following to be useful:


Visio - One of my main tools.  With this I can do flows, use-cases, storyboarding, etc. 

MS Word, Libre Office, Open Office or any other document tool - I can do diagramming, and documenting here as well, it is just a bit more difficult.

Evernote - This is a super tool for cloud collaboration. 

OneNote - I am probably most comfortable with this for design documentation.  It allows for great network collaboration, and with Office 365 you have cloud collaboration available as well.



Design Wrapup


Give the team the chance to give feedback on the design.  Keep everyone in the loop (thus the need for collaboration).



When you have completed your design the final step is to get sign-off from the business owner!  Do NOT move forward to development without this!  An email confirmation should suffice.






Design is an important part of the software engineering process.  It has the purpose of making development simpler by logically organizing the various tasks that will need to be performed to achieve the requested changes.  The detail provided is useful not only to the developer who will be doing the work, but to the business owner, project manager, business analyst, and test team. The entire team benefits from understanding what the solution will actually involve. 


If you have not already included this step in your projects, I would highly recommend doing so.  To me this is one of the critical components of software development.  Remember: software engineering reduces risk and improves quality!


Steven Bell




Incoming emails are the emails sent to the instance. Inbound emails are classified by an algorithm as New, Reply, or Forward. Inbound email actions enable an administrator to define the actions ServiceNow takes when receiving emails. The feature offers a superb level of scripting and a well-balanced design around the classified emails; it saves you coding time and gives you a clear way to expand and keep your email actions up to date.
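ServiceNow's real classification algorithm also inspects the watermark and mail headers; purely as an illustration of the idea, a simplified sketch might look like this (the subject-prefix rules are my assumption for the sketch, not the platform's actual logic):

```javascript
// Simplified sketch of New/Reply/Forward classification.
// Real instances also check watermarks and mail headers; the
// prefix patterns here are assumptions for illustration only.
function classifyEmail(subject, hasWatermark) {
  if (/^\s*(fw|fwd):/i.test(subject)) return 'Forward';       // forwarded subject prefix
  if (hasWatermark || /^\s*re:/i.test(subject)) return 'Reply'; // watermark or reply prefix
  return 'New';                                                // everything else
}
```

The key takeaway is that every inbound email lands in exactly one of the three buckets, and the inbound action's Type field decides which bucket(s) the action responds to.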


The inbound email action 'Type' field offers flexibility: it can be set to None, which matches all incoming emails. The 'Target table' field offers a similar option. These options are positioned at the top of the inbound action form, so pay attention when setting them. Here are two tests I did to experiment with None matching all incoming emails for the inbound action options 'Type' and 'Target table.'


Incoming email options 'Type' and 'Target table.'

The 'Target table' field selects the table where records will be added or updated by the action. The 'Type' field selects the message type required to run the action; the action runs only if the inbound email is of the selected type. Setting the inbound action Type to None increases complexity, as the rule then matches all incoming email types. This can be an advantage in many designs, e.g. if you are using an action to stop certain executions based on conditions regardless of type.

'Type' set to None matches all incoming emails





Testing the inbound actions with Type set to None

I have created an inbound email action as follows:

Inbound Email action




Target table








email.subject && (email.subject.match(/\b(?:spam|loan|winning|bulk email|mortgage|free)\b/i) != null)





Script =

gs.log('CS - test_type_none start'); // dev only - remove on prod
// No further inbound actions are required - stopping them
sys_email.error_string += 'processing stopped by test_type_none by subject keyword: ' 
    + email.subject.match(/\b(?:spam|loan|winning|bulk email|mortgage|free)\b/i) + '\n';
gs.log('CS - test_type_none ends'); // dev only - remove on prod
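The keyword condition above can be exercised in plain JavaScript outside the instance. This standalone sketch is mine, not part of the inbound action itself; note that the \b word boundaries mean a subject containing "freeware" does not match "free":

```javascript
// The same keyword filter used in the inbound action condition above,
// wrapped in a helper so it can be tested outside ServiceNow.
var spamWords = /\b(?:spam|loan|winning|bulk email|mortgage|free)\b/i;

function matchesFilter(subject) {
  // Mirrors: email.subject && (email.subject.match(...) != null)
  return subject != null && subject.match(spamWords) !== null;
}
```

This is exactly why the two test batches below behave differently: "... free " trips the filter, while "... freeware offer" passes through untouched.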


Then I sent 3 emails with subject "... freeware offer", one each classified as new, reply, and forward.

Also, I sent 3 emails with subject "... free ", one each classified as new, reply, and forward.



Results: All incoming emails are processed as expected by the incoming action. As per condition, some processing is bypassed.



Testing the inbound actions with 'Target table' set to None

I have created the following inbound action:

Inbound Email action




Target table











gs.log('CS -test_target_table_none start'); // dev only - remove on prod
gs.log('CS - test_target_table_none ends'); // dev only - remove on prod


Then I sent 3 emails classified as new, reply, and forward.

Results: With Target table set to None, the inbound action does not execute correctly.

If Target table is set to None, the results for incoming emails are as follows (the usual logs appear if there is no update):

Classified as New - always an error:

Error while classifying message. org.mozilla.javascript.EvaluatorException: GlideRecord.setTableName - empty table name (<refname>; line 1)
Skipping 'test_target_table_none', a suitable GlideRecord not found

Classified as Reply - always skipped:

watermark's target table '<found-table-by-watermark>' does not match any Inbound Action table, setting to 'Ignored' state

Classified as Forward - always an error:

Error while classifying message. org.mozilla.javascript.EvaluatorException: GlideRecord.setTableName - empty table name (<refname>; line 1)
Skipping 'test_target_table_none', a suitable GlideRecord not found


I advise that you always double check and make sure to select your Target table correctly and only set your Type appropriately to match the correct classified emails. Just because you have the option, does not mean you need to use it. Plan and validate.


I have tested this with Fuji and Chrome as the browser.


More information here:

In this podcast we provide an update on what’s new with the ServiceNow Store.  We also interview Don Casson the CEO of Evergreen Systems and learn more about their apps found on the ServiceNow Store.



For more information on ServiceNow Field Service Management, please see:





As we ramp up planning for CreatorCon at Knowledge16, we are seeking feedback on the outcomes that members of the developer community have experienced since passing the ServiceNow Certified Application Developer exam, to help us determine how best to offer free certification exams at the next CreatorCon.


If you are a Certified Application Developer, please respond to this 2-minute survey (tops - maybe 1 minute).


Thank You!

Martin Barclay
Director, Product Marketing
App Store and ISVs
Santa Clara, CA

Built an app on ServiceNow Platform that your boss can't stop gushing over? Integrated ServiceNow with a big, hairy ERP system that still gives you nightmares?  Established governance around ServiceNow Platform for your organization?  Consider yourself an expert developer? Then we want to hear from you!


Share your success and best practices at CreatorCon, the largest gathering of Service Management developers, by submitting your proposal here.  If you are accepted as a speaker, we will give you a complimentary pass to the conference.  We look forward to hearing from you.



Call for speaker details

CreatorCon event page

Analysis is a step in the development process that I think gets thrown out of our ServiceNow projects too often.  Too many times I have seen analysis done on-the-fly during development!  And prototyping or proof-of-concept?  Shoot, we'll do that if and when we run across something during development that requires it!  Right?  Wrong!



So what are these necessary processes?  How can they be used to best effect in a ServiceNow project?  What is the best way to capture and use what is found?


Analysis and prototyping are just as necessary to software engineering as requirements gathering.  They are part of a thoughtful and necessary progression from requirements to final product.


So let's break the two out.


1. Analysis - the investigation into the actual execution of the requirements.  This is the gathering of all business process and necessary technical requirements.  Analysis encompasses the discovery of what it will take to actually do the work, and is usually where sizing comes in.  With this step you are finding out what it will really take to accomplish the tasks.  This can lead to a revisiting or rethinking of the requirements if a discovery is made that invalidates the current requirements.


2. Prototyping or Proof-of-Concept - this is where the analyst and/or senior developer will determine if any special technical requirements are feasible or even doable.  This can trigger a make-or-break decision by management if a technical requirement is undoable.  Alternatively, the prototype can become the core component of the solution.


A couple of notes:


1. Beware!  Push back if management suddenly views the prototype as the complete solution.  I have bumped into this situation several times.  If you can stop this "short-cut" (which isn't one) to development, then do so!  What usually ensues is a situation where management says: "The POC is sufficient, we can add features later."  If you allow this then everything becomes unplanned, untrackable, and unsizeable!  The prototype will then become a Frankenstein!  If management pushes for this then you need to go back to the negotiating table and rework the entire project.  Get it back under control or it will be out of control.  Those ARE the options.


2. Make sure, when you are doing the requirements and getting a handle on what the general scope will take, that you make room for the Analysis and Prototyping phase.  Don't short-change yourself on time and money here, or you WILL end up doing the work anyway in the development phase, and run out of both.  The idea with all software engineering is that there will be NO real surprises, and that planning captures the majority of the facets of the solution.  The only surprises I like to see are things like: "Gee, our performance from the code is better than we expected!"



So let's talk about the nitty-gritty.


Analysis Elements:


1. Interviews


As a developer you want to have a chat with any relevant SMEs, business owners, process owners, and other developers who may have worked on a similar project and may have code or design components to contribute.  Your task here is to dig out the corners: anything relevant to implementing the requirements.  Your take-aways will be notes, emails, code snippets, and design documentation.


2. Business Rules (this is not ServiceNow BRs) and Processes


During the interviews you will want to try to garner any business rules or processes that may be relevant to the requirements.  These could affect the way you implement the requirements.  It is good to get these written down.  If you don't already have good note-taking skills, I guarantee that you will develop them.  Take-aways will be notes and process documentation.


3. Technical requirements


These are interesting in that while some may come from the requirements gathering phase; most will come from the analysis phase.  Technical requirements can encompass execution location (server or browser), components (MID Server), installation (AD Extensions for Powershell), interface (Service Catalog, UI Page), and so on.  Basically anything that is technically specific to implementing the requirements. 


4. Necessity of a prototype, and prototype construction


This is really an extension of the technical requirements.  Creation of a prototype is necessitated whenever there is a technical requirement, and it is complex enough that it will require a model to see if the requirement(s) can even be implemented.  I run into this usually when I can't find an example from my library, my company's library, or the web (in that order), AND it is something I have never seen done before.  Sometimes I have had this be a request from a customer so that they can see what a particular customization will actually look and feel like before making the move to proceed (the POC).


5. Sizing the project


One of the two major take-aways of the analysis phase: the actual scope of the development.  What will it take to turn the requirements into reality?  How many hours?  How many people will be needed?  You have some of these answers from the requirements phase, but not all.  Here you should be able to nail down how many developers, testers, etc. you will actually need, and how many hours it will take to accomplish all the tasks.  This will make your project manager very happy.  It is always good to be pre-emptive with this information.  You will look like the hero!


6. Design Structure


Here is one I don't see much of, and it really is the other major take-away of the analysis phase.  Essentially this will be, at a high level, what will be in the design.  The analysis should lead directly to a better organized and structured design phase.  Usually by the time I sit down to design the solution I am mostly there because of my analysis.


7. Risk


This can be another major deliverable if there are any probable failure points.  This is where the prototype is of great benefit, as it can mitigate a possible technical failure by determining if something is even feasible.  I try real hard to identify these during the requirements gathering phase, but that is due to my experience and background.  Even with that, I still end up re-reviewing everything while wrapping up my analysis to determine if there are any major (or minor) risks that could cause a failure to complete the project.  BTW, the sizing itself could be a failure point: the project could size out to be too big to complete in the timeframe demanded in the requirements.



Analysis Resources:


Subject-Matter-Experts (or SMEs) - these are the people who actually know what the current processes are.  They are mostly involved if there is an automation of a manual process.


Requirements Documentation - this is any pertinent documentation from the requirements phase.  I use it as my baseline when digging out everything in the analysis phase.


Previous projects (what I like to call precedents) - if someone has done something even remotely like what is being asked for then why reinvent the wheel?  This may very well contain code snippets that are directly useful to your project! 


Code libraries - what? You don't have a code library?  High time to start one!  Just saying!  One of our Implementation Specialists calls my code library "The Basement."  Guess it is; I have so much junk in it now that it is hard to find stuff.


Emails - anything you may be sent from an SME, business owner, or developer... you get the idea.  These usually have bits of gold in them that can be used to help define something in the design.


Interview notes from the meetings you have scheduled with the SMEs, business owners, developers, etc.


ServiceNow Community - need I say more?!  You really need to be using this.  It could short-cut the need to do a prototype; it may be that someone already has!


ServiceNow Wiki links - 'nuff said.


Web surfing for information - search engines are your friend!  I prefer Google for technical searches, but I also use Bing and Duck Duck Go a lot.  Exhaust your avenues of research.  I find a lot of OIDs, MIBs, Javascript snippets, and ServiceNow stuff.


Investigation notes - your own musings on what may be necessary.  I usually will keep a running commentary of what I have found, and where, to help me organize my analysis.  Basically this is how I keep track of everything, and what may occur to me.  It is also where I put my "questions to ask" for my interviews.



Recording Tools:


Essentially your note-taking tool has to be robust enough so that you can drop screen prints, graphics, and text onto it.  The tool has to be somewhat useful to help you organize it all.  I prefer Microsoft OneNote as it has the ability to break everything up into tabbed pages.  You could do something on the cheap I suppose with Excel and keep things on different worksheets. 


Here is the non-exhaustive list:


Microsoft Word or OpenOffice or Libre Office or notepad.  Pick your favorite.


Evernote - a cloud-based OneNote.  Similar, but not quite the same in functionality.  This has a great price: Free.  It has the ability to allow you to break things apart for organizing it, but I just don't quite get the usage out of it as I do with OneNote.  It has a super feature in that it allows very easy real-time web-based collaboration.


OneNote - With Office 365 this is now in the cloud as well.  However, most of the background I have with this tool is with real-time Analysis and Design collaboration on an internal network.  This is my favorite as I like the tab/page feature.  It allows me to keep multiple analysis and design elements organized at the same time.


Example of a possible OneNote project:





In a nutshell, this is basically what should come out of your Analysis:


Sizing - major deliverable

High-Level Design Elements and Organization - major deliverable

Code snippets (if any)

Prototype (if needed)

Risks and Issues (if any) - usually these are technical





Let me take a moment to argue for you to have a corporate Sandbox instance.  If your company does not have one then my advice would be to push for one.  With this you can do your prototyping, POC, and/or investigative research without doing these tasks on your development instance, with the very real possibility of causing problems for other developers.  Technically you could use your own personal instance to develop code snippets, but resist the urge to use it for company development.  You should never keep company data on a personal instance.


Use Fix Scripts for code-snippet investigation instead of Scripts - Background.  They have the added benefit of versioning, and audit trails.


Don't forget to use an update set when you are working on your prototype.  It is the best way to hang onto a copy of what you have developed.  Place it in your library for later reference.


As you can see the Analysis Phase is a step that needs to be done separately from the Design or Development Phases.  It is a major factor in mitigating risk!  It helps in the further detailing of the requirements.  It helps in the organization of the design.  In my book it is too important to do on-the-fly.  It is a fundamental step in software engineering. 


In my next article I will be tackling Design Best Practices for developers.


Steven Bell




Geneva launched on Dec 8th, and one of the highlights is the upgraded and streamlined development environment, in the form of the new Studio cloud IDE. The combination of Studio and ServiceNow Platform services makes leveraged enterprise cloud app development a reality with Geneva.


What do I mean by “leveraged enterprise cloud app development"?  I'll break it down. But first, let’s recall what “leverage” actually means.


If you use $100K of your own money to buy $100K of land, you haven’t used (financial) leverage.


On the other hand, if you use $100K of your own money and borrow $400K to buy $500K worth of land, you’re using financial leverage. You now control $500K of land with only $100K of your own money.


If the value of the land goes up 10%, your profit without leverage is $10K. With leverage, your profit is $50K.


So think of Studio + Platform services as that $400K loan (interest free and no payment required unless your app is deployed onto a production instance with fulfiller licenses)  in the above example. As the developer, you put in 20% or $100K worth of the overall code of the app – the most valuable part that solves the actual business need for the end user – while  Studio + Platform services provides you with the $400K worth of developer tools and Platform services, enabling you to deliver an app that’s actually worth $500K to the business. And, if the number of users of your app exceeds the initial expectation, then the business makes more “profit” ($50K vs $10K). That’s leveraged application development.
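The arithmetic behind the analogy, as a quick sanity check (figures taken directly from the example above):

```javascript
// Worked arithmetic for the leverage analogy above.
// gain = (own money + borrowed money) * growth percentage / 100
function gain(ownMoney, borrowed, growthPct) {
  return (ownMoney + borrowed) * growthPct / 100;
}

var withoutLeverage = gain(100000, 0, 10);      // $10K gain on $100K of land
var withLeverage    = gain(100000, 400000, 10); // $50K gain on $500K of land
```

Same 10% growth, same $100K of your own money, five times the gain: that is the leverage the article is describing.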





Let’s take a closer look at how this is actually delivered in Geneva:


  • The ServiceNow Platform comes with a comprehensive set of built-in services that do a lot of the heavy lifting required for true enterprise-grade app development. This means that every app you develop on the ServiceNow Platform can leverage whichever of these services you choose to incorporate - which slashes app dev and testing time, and exponentially increases your productivity the more apps you build with these re-usable services. To see what these over twenty Platform services are, see the almost-complete list in the Platform section of the Geneva release notes, as well as the pre-Geneva Platform documentation. I say "almost-complete" because there are even more Platform services available than are listed in those two places - things like Performance Analytics (which all apps can enable), Reporting (which all apps get automatically), and Orchestrations & MID Server (which all apps can leverage).


  • Studio puts these Platform services at your fingertips in a highly efficient manner you'd expect from an IDE. Features such as JavaScript linting, syntax checker, API prompter, code search, and application explorer massively improve the developer experience on the ServiceNow Platform.


  • Studio enables you to easily package and post scoped apps and manage updates to the app repository where they can be published to your test and production instances or the ServiceNow Store if you're a Technology Partner with a certified app or integration. This means that you can make apps available instantly to teams, workgroups, departments, your entire enterprise, or the entire ServiceNow customer install base globally, at the push of a button.


If you're a professional app developer in IT and you're facing increasing demands from across the enterprise for more and more new cloud-native apps and integrations faster, while still needing to adhere to stringent enterprise requirements for security, availability, governance, analytics, regulatory compliance and more...




If you're an ITSM or ITOM administrator that develops apps to help you and your team automate your work and address more use cases with the ServiceNow Platform...




If you're an ISV looking to publish scoped apps and integrations to ServiceNow customers globally on the ServiceNow Store...




Studio+Platform services is the knockout combo you've been looking for. Dive in with your own always-free-to-develop Geneva developer instance at ServiceNow Developers.


Now that's what I call leverage.

Martin Barclay
Director, Product Marketing
App Store and ISVs
Santa Clara, CA

Full credit to Kenny Caldwell for this list.  Kenny is a brilliant engineer, and I always learn something when working with him, so I am passing some of the info on to the community. If you are wondering what you get when extending the Task table in ServiceNow, here are some things to consider.


  1. Tables/Fields which are limited to the Task table:
    1. Approval Rules [sysrule_approvals]
    2. Assignment Rules [sysrule_assignment]
    3. Assignment Rules/Data Definition LookUp
    4. Assessment Conditions [assessment_conditions]
    5. Service Level Agreements [sysrule_escalate] - Inactivity Monitor/Legacy SLA
    6. State Flows [sf_state_flow]
    7. Rate Cards [fm_rate_card]
    8. Task Relationships [task_rel_task]
    9. Execution Plans [sc_cat_item_delivery_plan]
    10. Visual Task Board [vtb_board]
    11. Survey Conditions [survey_conditions] - legacy
    12. SLA [contract_sla]
  2. Workflow Items
    1. Approvals
      1. Available in a workflow for a standalone task:
        1. Approval – User
        2. Approval Action
        3. Rollback To
      2. Not available in a workflow for a standalone task:
        1. Approval – Group
        2. Approval Coordinator
        3. Generate
        4. Manual Approvals
    2. Tasks: Selection not available

As a developer you should be intently interested in, yes, requirements!  A lot has been written on this topic, but not much from a developer perspective I am afraid.  So with that in mind here goes!


Requirements fall out of a request from someone for new functionality to be created or added to software.  This would be a request from the business and/or management side, and would usually be devoid of technical information (there are exceptions).


So, what has this to do with the developer, you say?  Everything!  As the developer you should know what good requirements contain.  Poorly defined requirements are, unfortunately, all too common.  Remember the old adage:  Garbage-in-garbage-out (GIGO)?  That works for requirements as well.  Too vague, and they lead to back-and-forth with the requestor that could take days and push any deadline way out into the future.  Too detailed, and they 1) could take too long to write down, and 2) constrain the solution to such a point that any better way of solving the problem could, and probably would, be tossed aside!  Oh, and push the deadline out into the future.


So what exactly are "good" requirements?  And what exactly is a good requirements gathering process?



1. Let's start with: who gathers the requirements? 


You want someone who has some idea of what the technical solution will probably be once the request has been made.  This person should guide the requestor on the technical capabilities (i.e. what can and cannot be done).  This should be either a Business Analyst or a developer. 


Business Analysts are trained to act as an intermediary between requestor and developer.  They have enough technical knowledge to make recommendations as to the best approaches for a solution.  Here at Cloud Sherpas many of them have their Admin Certification, and some have completed their Implementation Specialist certification.  This gives them insight into the workings of ServiceNow and what would be proper as a solution to a request.  Another benefit of having a Business Analyst is to insulate the developer from the requestor.  This allows the developer to focus on, well, development!


Developers usually do not have experience with, or exposure to, requirements gathering processes.  For some companies it simply boils down to not having a large enough organization: the line between the requestor and the ServiceNow admin is simply a walk to the next cubicle.  The luxury of a Business Analyst just does not exist.



2. What are requirements?


Simply put, these are the details of any request.  They spell out two things:


1) The problem to be solved

2) The solution to that problem


For example, if a request is made to add the field "Tracked By" to the Incident form, the requirements might be:


The problem:

1) There is a business need to have a designated incident tracker.  This person will be made responsible (volun-told) by management to oversee a particular incident.


The solution:

1) Add a new field labeled "Tracked By" to the Incident form directly under the "Assigned To" field.

2) This field will only be editable by users who have the ServiceNow Admin role, or those who have the ITIL Manager role.

3) The manager will be responsible for choosing a person to be the tracker and entering that individual into the Incident form (process)

4) Once entered into the field the "Tracked By" individual will be notified by email that they have been assigned as an official tracker of the incident.

5) A report will be created that will allow for monthly tracking of the "Tracked By" users.


And so on.  This could include screen mock-ups, workflow mock-ups, process story-boarding, etc. 


In regard to the ServiceNow platform I am a HUGE advocate of mock-ups and story-boards!


For mock-ups I will often use a screen print and MS Word, and will draw in what the requestor wants.  This could also be Visio, or a printout, or, if no screen print is available, a white board, chalk board, or napkin.  Gosh, I love phone cameras!


With story-boards I will often use a series of mockups to show process flow.  A leads to B leads to C.  This could be a high-level workflow as well as screen prints showing what happens.  Visio rocks!


The idea is to convey meaning by both sides and to get a VERY good idea of what is wanted and how it is to behave.


The developer or Business Analyst must guide this process, and help the requestor understand what is possible, and more importantly:  what is not!



3. Recording Requirements


Now to address the part most developers simply hate:  documentation.  Let me start by saying I USED to hate it, but now consider it a most necessary evil! 


Develop excellent note-taking skills!  Write down everything!  I like using exclamation points!!!!


Get everything nailed down. Use the following questions as a guide-line:


i. What is the problem?

ii. What is the existing process (if any)?  Are there diagrams showing the existing process? Workflows? How complex is what exists? Any relevant process documentation?

iii. What is the perceived solution?  Avoid vagueness.  Don't be lazy.  Search out the corners.  Do the due diligence.

iv. Does a workaround exist? 3rd party? Custom? Manual?  Is a simpler solution available?  If you have one; will the solution you are thinking of work instead (for example: Discovery and ServiceWatch vs. Help-the-Helpdesk)?


You must, must, must (ad infinitum) organize the requirements into a format that can be viewed by both the requestor AND developer that allows for an understanding by both parties as to what WILL be accomplished.  How's that for a run-on sentence?


This must include all of the mock-ups, process flows, and descriptions that describe not only the problem, but the proposed solution.


Each requirement must be numbered!  This is probably the single most important thing left out of the Agile methodology today, and it is being leaked back in through Agile-Waterfall hybrids.  The lack of numbering is, for me as a developer, a pet peeve! 


If there isn't a clear picture, and if things aren't organized you can't expect to deliver what is wanted, let alone on time!


Numbering allows for traceability:  The ability to trace the requirements all the way to QA and UAT testing.  How else would anyone know that all the requirements had actually been achieved?   The next step in the process, Analysis, does not rely so heavily on this, but the step after, Design; most definitely does.  You must be able, as a developer, to show that you met ALL the requirements.  Traceability is all! 
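As a sketch of what traceability can look like in practice (the requirement and test case IDs here are hypothetical, purely for illustration), here is a minimal check that every numbered requirement maps to at least one QA/UAT test case:

```javascript
// Hypothetical traceability data: numbered requirements and the
// test cases that claim to cover them.
var requirements = ['REQ-1', 'REQ-2', 'REQ-3', 'REQ-4', 'REQ-5'];
var testCases = {
  'TC-01': ['REQ-1', 'REQ-2'],
  'TC-02': ['REQ-3'],
  'TC-03': ['REQ-5']
};

// Returns the requirements with no covering test case.
function untracedRequirements(reqs, cases) {
  var covered = {};
  Object.keys(cases).forEach(function (tc) {
    cases[tc].forEach(function (req) { covered[req] = true; });
  });
  return reqs.filter(function (req) { return !covered[req]; });
}

var gaps = untracedRequirements(requirements, testCases);
```

Here `gaps` would surface REQ-4 as untested, which is exactly the kind of hole that is invisible when requirements are not numbered.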


BTW, a side-note: This seems to be some sort of religious thing.  The rabid frothing-at-the-mouth Agile-only advocates do not believe in numbering let alone traceability.  They say:  "Stories should be small enough that they encompass only a few requirements at a time, and therefore it is unnecessary to do numbering or even a true requirements document."  This is bunk.  In the real world what I have observed and dealt with is serious abuse of stories.  Instead of keeping each story small (the exception); the person creating the story crams as much as possible into it (the rule).  I don't know how many times I have seen a story that is really an Epic, or an Epic of Epics!  This is a training issue you say?  No, I have seen this done by Agile certified individuals!  The temptation appears to be too great.  It does not self-correct at sizing (should that step even be observed).  And lest you think I have limited exposure to this; I would remind you that I have been a software developer for over 30 years and have extensive Agile experience. 



4. Approval


Finally get the completed requirements approved by the requestor.  Meet with the requestor.  Together agree that the proposed requirements are correct and will result in a satisfactory solution.  There must be sign-off of the proposed solution, or you should NOT move forward.  I don't know how many times I have heard from a requestor: "Looks great!  Roll with it!", but had nothing in writing; only to have the requestor say at roll-out: "That isn't what I asked for!"  Get an email confirmation from the requestor at least!  Get buy-in.  It is important.  Do not cave in on this as it is the closure point of the requirements process.  Again, you should not move forward unless you have it.


Once approval is received (again in writing; not verbal); then you are done with the initial requirements.  Uh...initial?!  That's right; it is a living document, and can be revisited if something was left out.  This will usually be discovered during the Analysis and Prototyping phase.  Be careful that the "revisit" does not cause too big a change.  If that happens you will be forced to reevaluate the solution, timelines, and effort!   Don't fall for the old: "...and can you get it all done by the agreed upon deadline?"  Really?  I usually jump up-and-down and pitch a fit if that happens, and the answer will be no.


What's next?  Analysis and POC/Prototyping which I will cover in my next article.


Steven Bell



If you find this article helps you, don't forget to log in and "like" it!


Also, if you are not already, I would like to encourage you to become a member of our blog!


If you want to learn more about this release and what we're excited about, we have some great resources for Geneva. Our Geneva resource page and overview page include new release information about enhancements and changes, including videos, community links, and other valuable resources.


Looking for something specific? Check out our new Knowledge Base articles for product enhancements and notable changes in Geneva:


Learn More About Geneva


Some of our new product enhancements include:

  • Security Operations Management – The Security Operations Management suite delivers security incident response and vulnerability response capabilities for the security practitioner, with two applications: Security Incident Response (SIR) and Vulnerability Response.
  • IT Service Management - Service 360 is an extension of Service Portfolio Management in the Geneva release. This enables monitoring of Business Service Performance, consolidates data, identifies areas for remediation, and more.
  • IT Operations Management - Service Mapping is a new application in the Geneva release. In the Fuji release, ServiceWatch version 3.6 was a stand-alone product with its own infrastructure including a database, a collector component, the credentials store, and a user interface. In the Geneva release, Service Mapping is a native ServiceNow application.
  • Platform - The Edge Encryption application plug-in provides customers with an end-to-end native solution to manage the encryption of their ServiceNow data that helps them solve challenges tied to sovereignty concerns, data loss prevention, and regulatory compliance.
  • Business Management - Teamspaces enable functional and data separation between Project Portfolio Suite (PPS) applications. You can assign teamspace-specific roles to allow divisions in your organization, such as Marketing, Finance, and Facilities, to access a dedicated teamspace.
  • Service Management for the Enterprise - The Service Management Core installs the core Service Management items that allow other related plugins to work, such as Field Service, Facilities, HR, Legal, Finance, Marketing, and other Service Management applications created using a template.
  • User Interface - The UI16 interface is available in supported browsers and is enabled by default for new instances. UI16 provides usability improvements and design changes, including an enhanced application navigator, new themes, and updated icons. For upgraded instances, administrators may need to activate UI16.


Ready to Upgrade to Geneva?

Check out Upgrading Resources.



For the Geneva release and beyond, we now have a new product documentation site: Move over wiki, the new product documentation site is live!  The information available on the wiki will still be available for users who are on releases prior to Geneva.

A while back in July I wrote an article describing how encoded queries work as far as order of precedence.  I worked my way logically through the problem, and came to an unexpected conclusion.  I thought I would re-publish the link here for y'all to think on. 



ServiceNow Scripting 101: Encoded Query - A Breadcrumbs Issue


Steven Bell.



If you find this article helps you, don't forget to log in and "like" it!  I would also like to encourage you to become a member of our blog!


Please Share, Like, Comment this blog if you've found it helpful or insightful.


Click for More Expert Blogs and also Find Expert Events!

It is often a design decision to normalize data into multiple lookup tables and create reference fields for that data.  Take note and use caution when doing this: there are limits on the number of indexes and columns that can be created on a single table (and if you are extending a table like Task, realize that some indexes and columns are already used up, or get used up in the flattening process).


Reference fields are indexed, and each one pushes you closer to the upper limit of 64 indexes per table.


What I am trying to say is that it is a good design practice to control your reference fields.
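As a rough illustration (the dictionary data below is hypothetical and this is plain JavaScript, not a ServiceNow API), you could tally reference fields against the index budget like this:

```javascript
// Sketch: count reference-type fields on a table so you can keep an
// eye on the per-table index budget (64 indexes in this scenario).
var MAX_INDEXES = 64;

function referenceFieldCount(dictionary) {
  return dictionary.filter(function (entry) {
    return entry.type === 'reference';
  }).length;
}

// Hypothetical dictionary entries for a custom table.
var fields = [
  { name: 'assigned_to', type: 'reference' },
  { name: 'short_description', type: 'string' },
  { name: 'cmdb_ci', type: 'reference' }
];

var refs = referenceFieldCount(fields);
var remainingBudget = MAX_INDEXES - refs;
```

The point is not the arithmetic; it is that reference fields consume a finite resource, so they are worth tracking deliberately during design.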


Introduction to Fields - ServiceNow Wiki

Tables and Columns Module - ServiceNow Wiki

This may already be a known issue in the community, but this week was the first time I encountered the problem.  You will notice that when creating a table within a scoped application, the field names in the dictionary do not begin with u_ as you create fields. 


This is great, but this week I encountered a customer who gave a True/False field a name that started with a number.  They were seeing an INVALID_CHARACTER_ERR error upon commit.  I am not sure which other field types may have this problem, but as a best practice, don't name any columns starting with numerics.  There can be several weird form rendering side effects with formatters and other things. 


Remember, you can give a column a label that is different from the field name if you want (labels can include or start with numerics).


Avoid starting field names with numerics. 
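As a quick illustration of this best practice, here is a sketch (a hypothetical helper in plain JavaScript, not a ServiceNow API) of a check you might run on proposed column names before creating them:

```javascript
// Hypothetical sanity check: flag proposed column names that start
// with a digit before you create them in a scoped app. A safe field
// name starts with a letter and contains only lowercase letters,
// digits, and underscores.
function isSafeFieldName(name) {
  return /^[a-z][a-z0-9_]*$/.test(name);
}
```

For example, `isSafeFieldName('2fa_enabled')` would come back false, steering you toward a name like `two_factor_enabled` with the "2FA Enabled" text kept in the label instead.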

Inbound email actions are a truly stunning feature for processing emails.  The design is scalable and simple, built to a high standard, and does not require deep expertise to develop with.  Matching incoming emails to inbound actions gives you a fine level of control over email processing, combining scripting power with modern email settings for very different scenarios.  Tucked away independently, inbound actions are executed when they match an incoming email.




Let's talk about incoming email actions that do not perform a real update.  Not all processed emails will have a target set, even if the system has classified the email as a reply and matched it to a record.  This is normal.


Let's talk about:

  • Incoming emails and matching inbound actions
  • What is a 'real' update
  • Examples to test incoming emails with no 'real' updates
  • How to force the incoming email target
  • Advanced cases when target table is none in the inbound actions


Incoming emails and matching inbound actions

Incoming emails are emails sent to the instance.  Each incoming email is classified by an algorithm as New, Reply or Forward.  Inbound email actions enable an administrator to define the actions ServiceNow takes when receiving email.  The feature offers a superb level of scripting with a well-balanced design for classified emails, saves you coding time, and makes it easy to understand, expand, and keep your email actions up to date.


Inbound email actions are similar to business rules, using both conditions and scripts.  If the conditions are met, the inbound email action runs its script.  The inbound action's conditions include the Type, the Target table, and the condition itself.  The Type can be None, New, Reply or Forward to match the classified emails; None matches all types of incoming email.  The target table in the inbound action defines the GlideRecord created for 'current': for inbound actions, 'current' is a GlideRecord based on the target table, populated with the information gathered by the email system.
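The Type matching rule can be sketched in plain JavaScript (a hypothetical helper for illustration, not a ServiceNow API):

```javascript
// Sketch of how an inbound action's Type field matches a classified
// email: a Type of 'None' matches every received type; otherwise the
// action type and the email classification must agree.
function actionMatches(actionType, emailType) {
  return actionType === 'None' || actionType === emailType;
}
```

So an action with Type 'Reply' only runs for emails classified as Reply, while a Type of 'None' would be evaluated for New, Reply, and Forward alike.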


Here is a table to show the relationship between incoming email received type and matching inbound actions:



Received type                 | Classified as New           | Classified as Reply           | Classified as Forward
------------------------------|-----------------------------|-------------------------------|------------------------------
Matching inbound action type  | New or None                 | Reply or None                 | Forward or None
Target record if success      | Target set based on inbound | Target set based on target    | Target set based on inbound
                              | action target table         | found on reply (found if      | action target table
                              |                             | data updated)                 |
Logs if there is no update    | Skipping 'xxx', did not     | Skipping 'xxx', did not       | Skipping 'xxx', did not
                              | create or update incident   | create or update              | create or update
Target table                  | Usual inbound action        | Set – it needs to match the   | Usual inbound action
                              | update                      | email target found with       | update
                              |                             | this table                    |
The table shows that incoming emails are classified and then matched to the respective inbound actions.  Setting the target table makes inbound actions much easier to understand.  Conversely, setting the inbound action type to None increases complexity, since the rule then matches all received types.


A 'real' update

A real update means that at least one field on the 'current' record has changed, or that the 'current' record has been created.  If, after receiving an incoming email, there is no real update on the inbound action's 'current' record, the target field on the matching incoming email will remain empty.


This makes sense as a way to control which emails display in the activity formatter.

The incoming email target is only set if the 'current' record is updated or inserted in the inbound action.  Otherwise, it remains empty.
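The rule above can be sketched in plain JavaScript (a hypothetical stand-in for GlideRecord, not the real API): update() only persists when at least one field actually changed, which is why a second identical update leaves the email target empty.

```javascript
// Minimal sketch of the "real update" rule: update() returns true
// (and persists) only when at least one field changed since the
// last save.
function makeRecord(fields) {
  var saved = Object.assign({}, fields);    // last persisted state
  var current = Object.assign({}, fields);  // working state
  return {
    set: function (name, value) { current[name] = value; },
    update: function () {
      var changed = Object.keys(current).some(function (k) {
        return current[k] !== saved[k];
      });
      if (changed) { saved = Object.assign({}, current); }
      return changed; // true means a "real" update happened
    }
  };
}

var rec = makeRecord({ impact: 3 });
rec.set('impact', 2);
var first = rec.update();   // impact 3 -> 2: a real update
rec.set('impact', 2);
var second = rec.update();  // impact already 2: no real update
```

In the first call the record changes, so the email target would be set; in the second call nothing changes, mirroring Example #2 where the target stays empty.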


Examples to test incoming emails with no 'real' updates

Besides inbound actions that do not meet the conditions, there are a few cases where the current.update() does not execute because the data has not changed.


I've created the following incoming email action to validate the behaviour:


Inbound action: Update Incident.JS

Target table: Incident [incident]

Condition: current.getTableName() == 'incident'

Script:

gs.log('CS - Update Incident.JS starts'); // comment out on prod

// The following line sets Impact to 2 (Medium)
current.impact = 2;

// Because this is a static assignment, if Impact is already 2
// no real change occurs and no update happens
current.update();

// No further inbound actions are required - stop processing them
event.state = 'stop_processing';

gs.log('CS - Update Incident.JS ends'); // comment out on prod


The inbound action looks as follows:

real update1.jpg


For the test, I have created an incident 'TETS' with impact = 3.

real update 2.jpg


Example #1: First reply email to the instance to update 'Incident: TETS'.

After sending an inbound email to the instance, once it gets processed the first time, the result is the following:


real update 3.jpg

real update 4.jpg


The incoming email target is set to 'Incident: TETS' as expected.  This is because current.impact was 3 and the script changed it to 2, causing the current.update() to execute; the system then set the target to current.



Example #2: Second reply email to the instance to update 'Incident: TETS'.

After sending a second inbound email to the instance, once it gets processed, the result is the following:


real update 5.jpg

The incoming email target is set to (empty) as expected.  This is because current.impact was already 2, and the script set it to 2 again, causing no change, so the current.update() did not execute.  The system therefore set the target to (empty).  This does not mean the watermark did not match.


Example #3: Inserting a different record than 'current'

I've created a new inbound action that creates a new problem called "vproblem."  It looks as follows:

current 1.jpg


After sending an inbound email to the instance, once it gets processed, the result is the following:

current 2.jpg

Results: The incoming email target is set to (empty) as expected.  This is because the system only tracks the inbound action's 'current' record when setting the incoming email target.

As 'current' did not have any update or insert, the system set the target to (empty).  That is the reason I prefer to use 'current' in inbound actions.


Force the incoming email target

You can manipulate sys_email.instance to set the target and sys_email.target_table to set the target table.


The following is an example of an inbound email action that explicitly sets the incoming email target:

Inbound action: Update Incident.JS_1

Target table: Incident [incident]

Condition: current.getTableName() == 'incident'

Script:

gs.log('CS - Update Incident.JS_1 starts'); // comment out on prod

// The following lines create a new problem record (not an incident)
var vproblem = new GlideRecord('problem');
vproblem.short_description = 'New incident - test - short';
vproblem.description = 'New incident - test - descr ';
vproblem.insert(); // insert so vproblem.sys_id is populated

// WORKAROUND: force-set the email target (not recommended)
// This sets it to the new vproblem record just created
sys_email.instance = vproblem.sys_id;
sys_email.target_table = vproblem.getTableName();
// Or, if you need it set to current:
// sys_email.instance = current.sys_id;
// sys_email.target_table = current.getTableName();

gs.log('CS - Update Incident.JS_1 ends '); // comment out on prod


The inbound action looks like:

inbound action.jpg


After sending an inbound email to the instance, once it gets processed, the result is the following:

inbound action.jpg


The incoming email target is forced to the problem record we created.  This is because we manipulated the sys_email record in the script.  It could be forced to any record.  If the target is empty on an incoming email, we can assume there was no real update in the matching inbound actions.  Sometimes simple is more.


I have tested this with Fuji and Chrome as the browser.


More information here:

Dave Slusher

Developer Instance Tips

Posted by Dave Slusher Employee Dec 2, 2015

When we answer the feedback from the Developer portal, the biggest single topic is questions related to developer instances. The bulk of these break down into one of two buckets:

1) My instance was reclaimed from inactivity - how can I get that restored?

2) I want to request an instance but it says none are available. Now what?


Let's look at both of those situations with some tips on avoiding negative impact on your development experience.




ServiceNow's Developer Program has been quite successful. One reflection of that success is that a few weeks back, demand for developer instances began outstripping supply. When the program went live, a chunk of capacity was allocated for these instances. Although there are plans to increase that capacity, it will be 2016 before additional instances can come online. If there were no timeout, instances that are not being actively used would prevent new instances from being commissioned. Ten days was worked out as the happy medium that gives most users enough time to keep their instances alive, while allowing enough to expire that new instances can be created.


My Instance Was Reclaimed


Periodically, if you have a developer instance provisioned you will get emails warning you that you are reaching the inactivity timeout. Take these very seriously, as you have roughly 24 hours at that point to avoid the reclamation process. When the instance is reclaimed, there is nothing that can be done to restore any work you created on your instance. Getting an instance reclaimed when you intended to keep it will always be a negative situation, but there are ways to mitigate that.


Our recommendation is to treat your developer instance the way you would treat a word processor with a critical document being composed. You are going to want to save that often, and how often depends on the value of the document and the amount of work lost if it were to disappear due to an error. Take update set backups of the work from your developer instance, at the very least weekly. If the work is of higher value to you, consider doing it daily. If you make it part of your routine to take this backup at the end of the work week or the work day, the amount of work that can be lost is minimal. Even if your instance is reclaimed, you can resume work right where you left off in minutes after receiving a new one.


In order to keep your instance from getting reclaimed, you need to have some form of development activity or else the timeout clock will begin counting. "Development activity" in this case means something that would appear in an update set. Editing a Script Include or a Business Rule counts as activity, creating an Incident would not. You need to be editing the code or configuration of the system, not the data.


With the holidays coming up, it is increasingly likely that people will be away from their jobs for long enough that an instance might be reclaimed. Even if your intention is to keep working with the instance to keep it alive, the possibility of forgetting to do that is real. Before you go on your holiday break, save a local copy of all update sets and guard yourself against nasty surprises.


I Can't Request An Instance


Through the fall of this year, a pattern has emerged: peak instance allocation occurs around Thursday, continues through Friday, and then eases over the weekend. It makes sense: as instances are requested at the beginning of a work week, some fraction of them expires at the end of a following work week. This produces a sawtooth pattern in the graph.


Screen Shot 2015-12-01 at 10.05.38.png


We have been recommending to developers who write us that they request an instance as early in the work week as possible, or over the weekend if that is feasible. These are the times that represent the lower points in the sawtooth graph. If it is late in the day Thursday and you try to request an instance, there is a reasonable chance that capacity will be exhausted at that point. If it is first thing Monday morning, there is a very good chance you will be able to claim an instance. By early 2016 these issues should be less prevalent, but until then, it may take a bit of patience and persistence to request new developer instances.


As we go forward with the Developer Program, these are our recommendations to minimize disruption. Take frequent update set backups so that even if your instance is reclaimed, it is a small matter to restore your progress. If you need an instance, Mondays or weekends are the best times to request them.


Happy developing!

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at
