
Did you know that you can enable Multi-Factor Authentication on your Personal Developer Instance in only a few minutes? It is true. We recently published a video that walks through the few simple steps.


It breaks down to this:

 

  • Log in to your developer instance (or request one at the Developer Portal if you don't already have one).
  • Enable the Integration - Multifactor Authentication plugin on your instance.
  • Go to the Multi-Factor Authentication properties and enable it. Make sure you allow a sufficient number of attempts to log in without MFA, or you can lock yourself out of the instance without much recourse. The default is 3 and shouldn't go lower.
  • Edit your User form to include the "Enable Multi-Factor Authentication" checkbox.
  • Open the record(s) for the accounts you want to add MFA to, check the box, and save.
  • Log in as that user. You will be prompted to create a Google Authenticator account for this account on this instance. Pair up with the authenticator.
  • From that point on, you'll need the authenticator code to log in to this account.

 

If you'd like to increase the security of your Developer Instance, this will do it. Now even a brute-force attempt at guessing logins will still require the authenticator code, which makes it even less feasible. Note that this works for any ServiceNow instance, but I am speaking to the Developer Program because these are my people. Rock on.

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

When we left Part 2 of this series, I had added an inbound email action to create GTD actions from forwarded emails. As this development project was always intended to double as a demo tool, when it came time to do demos on developing Angular Service Portal widgets, I looked around for functionality to add. I had already done some experimentation with using ngMaterial to create a card interface for this Helsinki feature webinar, so I decided to bring a similar interface into DoNow.

 

pt-3-dependencies.png

Step 1 of creating this widget was to set up the dependencies. One is already delivered by default (ng-sortable) and one needed to be imported from external sources (ngMaterial). At the bottom of the Widget form are some related lists, and in the Dependency section you have the option to create new ones, which is pretty straightforward. You can point to external resources or choose to import the libraries into your instance. Importing the libraries as UI Scripts was how all Angular work was done prior to Helsinki. There are tradeoffs either way: if you need to lock to a specific version, it makes sense to paste the code into a UI Script. In my case, I opted for the simplicity of pointing to the Google-hosted copies of the libraries. With a single JavaScript import and a single CSS import, ngMaterial was set up. ng-sortable was even simpler, just a matter of choosing it from the slush bucket of available choices. ngMaterial adds some of the UI elements I want to use; ng-sortable adds drag-and-drop capabilities.

 

Having created that prerequisite piece, it was time to actually code the widget. First, a very quick bit of background on how ServiceNow has incorporated Angular into Service Portal. (Docs here for more reading and more resources here.) You'll see on creation of a widget that you have an HTML piece, a client-side controller script, and a server script. This breaks down very cleanly into MVC terms: the server script maintains the Model, the HTML is the View, and the client script runs the Controller. The Service Portal environment automatically creates a variable called "data" which is available in the server scripts. It works much like g_scratchpad: any data you want available to the front end can be packaged in here and will transfer to the UI. The server script has a full Glide scripting environment and can do anything you expect from script fields (subject to any applicable scoping rules, of course). This data variable is automatically in the $scope variable in the controller script and can be acted upon by addressing $scope.data. The $scope variable is implicit in the HTML front end, so you can then present your values by referencing the data variable. Enough background, let's look at code!
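The data flow described above can be sketched in plain JavaScript. This is a simulation for illustration only: in a real widget the three pieces live in separate fields of the Widget form, the platform does the plumbing between them, and the names here are made up.

```javascript
// Server script: anything attached to `data` is serialized and shipped
// to the client (simulated here as a plain function).
function serverScript(data) {
  data.message = 'Hello from the server';
}

// Client controller: Service Portal injects the server's data onto $scope.
function controller($scope, serverData) {
  $scope.data = serverData;
}

// The HTML view would then render it as: <div>{{data.message}}</div>
var data = {};
serverScript(data);
var $scope = {};
controller($scope, data);
// $scope.data.message now holds the server's value
```

The point is just that whatever the server script packs into `data` is what the controller and the template can see; nothing else crosses the boundary.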

 

pt-3-server.png

Although you could edit any of these components from the Widget form, you will want to use the Widget Editor. It gives you a nice interface that allows you to show, hide, or edit any of these scripts, as well as a live preview of your widget. This preview actually operates on the data of your instance, so you can see your code in action immediately. One thing to remember is that although you have a live preview and can act on the data instantly, you have to save the code with the button before any code changes take effect. So much happens automatically that you can forget to hit the button and get confused as to why you aren't seeing updated code. Remember to save.

 

To start this widget out, I do some Glide scripting. I add the current user's name and sys_id to the data object because it is very simple to get here and less so from the front end. I also create a hash map that maps each of my six GTD priorities to an empty array. As we loop over the data, each GlideRecord is converted to a simple JSON object and added to the appropriate array by dereferencing the hash map. This may seem like overkill if you know that Angular can filter, and it may get factored out in the future, but for now it helps with the hiding and showing of the records. One thing to be aware of is that this script is called each time data is packaged and pushed to the front end. Although you have the ability to write any code you want in here, minimize side effects and expensive computation because this code can potentially run frequently.
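The bucketing step can be sketched as a plain function. Here `records` stands in for the simple JSON objects converted from each GlideRecord; the field name `priority` and the priority labels are assumptions for illustration, not the app's actual schema.

```javascript
// Hypothetical sketch: distribute records into per-priority buckets,
// as the widget's server script does with its hash map of arrays.
function bucketByPriority(records, priorities) {
  var buckets = {};
  // Seed the map so every priority has an array, even if empty.
  priorities.forEach(function (p) { buckets[p] = []; });
  records.forEach(function (r) {
    // Dereference the hash map to find the right bucket for each record.
    if (buckets.hasOwnProperty(r.priority)) {
      buckets[r.priority].push(r);
    }
  });
  return buckets;
}

var buckets = bucketByPriority(
  [{ short_description: 'File expenses', priority: 'Next' },
   { short_description: 'Read spec', priority: 'Someday' }],
  ['Next', 'Waiting', 'Someday']
);
// buckets.Next holds one record; buckets.Waiting stays empty
```

Seeding every bucket up front is what makes the front-end show/hide logic simple: each column always has an array to bind to, populated or not.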

 

pt-3-html.png

Next I built the UI portion. Part of the beauty of Angular is that you write your interface in HTML peppered with things called "directives," which add extra functionality to the rendering of a tag. The dependencies added previously each bring in their own set of directives, which is how they let you do different work on the interface. You'll note that there are tags called "md-checkbox," "md-content," and "md-card." These come from the ngMaterial dependency and allow for the creation of the swim lanes full of cards. Organizing into six md-content buckets, each filled with its own set of data, allows for some easy showing and hiding. I'm not going to delve too deep into the workings of Angular itself (if you need remedial work, there is a lot to read at the official site), but suffice it to say that ng-repeat is the engine of the looping and ng-model is used to bind various pieces of the data to the user interface. We basically fill out a data card for each of the actions that we packed into the data object. If you look at the ng-model attributes, you'll notice that the md-checkbox tags are built by looping over data.priorities, setting ng-model to priority.model for each priority as it comes up. This is a boolean value (if it wasn't before, md-checkbox would force it to be one). The md-content containers are built by looping over the same array, and each has an ng-if associated with the same boolean. This means that when the value is true, the element shows, and when it is false, it hides. Let's see that in operation.
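A template fragment in this style might look like the following. This is an illustrative sketch, not the app's actual markup; the shape of `data.priorities` and `data.actions` is assumed from the description above.

```html
<!-- Hypothetical fragment. One checkbox per priority, bound to priority.model -->
<md-checkbox ng-repeat="priority in data.priorities"
             ng-model="priority.model">
  {{priority.label}}
</md-checkbox>

<!-- One swim lane per priority, shown only while its checkbox is true -->
<md-content ng-repeat="priority in data.priorities"
            ng-if="priority.model">
  <md-card ng-repeat="action in data.actions[priority.label]">
    <md-card-content>{{action.short_description}}</md-card-content>
  </md-card>
</md-content>
```

Because the checkbox and the lane both reference the same `priority.model`, toggling the one immediately shows or hides the other with no controller code in between.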

 

Now by checking or unchecking those Priority checkboxes, each of those columns shows or hides. This is the core of Angular in operation: binding various elements to the same data and having the interface respond to it in real time. One of the concepts that took me a while to internalize is that even though we are referencing things that look like strings in the HTML attributes of these directives, everything in there binds variables unless you make it do something else. Almost everything operates by reference, so setting ng-if="priority.model" means that the tag is now bound to the state of the model field of the object contained in the priority variable. I have seen people have problems thinking they are passing values around when actually they are mutually binding user interface elements to the same underlying model and sharing a single data store.

 

pt-3-6-col.png
pt-3-2-col.png

You'll note that there is a little magic in here to make the layout respond to how many of these columns are showing. I'll go into more detail in the next post in this series when I dig into some of the code that lives in the controller script. For now, the take home lessons are that you can do the typical Glide scripting and queries on the server side and typical Angular coding in the HTML section. It is a nice compromise for the complexity of bringing these technologies together. There is a little secret sauce that configures some things for you that you would have to do for yourself in pure Angular which is definitely a thing to be aware of should you learn on Service Portal and later build a pure Angular app.

 

For those who are getting interested in this application (which I hope is at least some of you), the Github repository for it is public. You are welcome to fork and examine the code at will. Be aware, if you do, that this is not a final product and still very much a work in progress. It seems to work but is not guaranteed to be bug-free or complete. IOW, caveat emptor and no warranty exists on this. Note that because of the way ServiceNow integrates with Git, we can't accept pull requests, so we lose a little of the power of the platform that way. It is a shame, but that is the way the internals work. Feel free to reach out if you do something interesting and we can go look at your repo to see how you have moved the ball down the field.

 

In the next section I will dig into the controller script and show how REST APIs can be integrated into Service Portal code. Keep watching the skies for that!

 

Summary:

 

It is simple and straightforward to bring external dependencies into your Service Portal, and you can easily query data and build an operational user interface with just a few HTML tags and Angular directives.

 

Series so far:

Building an Application: Part 1, Setup and Background

Building an Application: Part 2, Using Inbound Email Tokens

Building an Application: Part 3, Adding Service Portal Widgets

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

at-1020063_640.jpg

In my first part, I talked about the background of an application we are building. With this post, I want to get into the specifics of the first big problem I tackled.

 

In my previous GTD implementation, I used Evernote as the main tool. One of the features I got for free with that was the ability to receive email from arbitrary email addresses. Evernote gives you a private email address that you can use to forward email to the system, where it will be converted into a note. This is exceedingly helpful in a GTD implementation because for most information workers, a lot of the actions you’ll be tackling on a daily basis originate as emails.

 

The first thing I ran across in my design is that I want to be able to forward emails to my ServiceNow instance from multiple originating email accounts and have them all associated with the same user. This means that the out-of-the-box behavior cannot be relied on. By default, an Inbound Email Action will create a record owned by the user whose email matches in the User table (sys_user). I want almost the opposite: an email address that will create a record for a user no matter what account sent the email.

 

This sounds like a job for the SMTP addressing loophole. With most modern email systems, a message addressed to <username>+<arbitrary_text>@domain.com will be delivered to the account of <username>. I use this frequently on my personal Gmail account, where the arbitrary text assigns the email to a folder. This trick absolutely works with the default inbox that ServiceNow uses. Thus you can send email to <instancename>+<text>@service-now.com and process that email on your instance. Now we have something to work with! We can get emails delivered to the instance with additional information about how we want them processed right there in the address. Good start!
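Pulling the arbitrary text back out of a plus-addressed recipient is a one-line pattern match. The function name and address shapes below are illustrative, not code from the app.

```javascript
// Hypothetical sketch: extract the "+token" portion of an address of the
// form <instancename>+<token>@service-now.com, or null if there is none.
function extractPlusToken(address) {
  var match = /^[^+@]+\+([^@]+)@/.exec(address);
  return match ? match[1] : null;
}

var withToken = extractPlusToken('myinstance+gtd123@service-now.com'); // 'gtd123'
var withoutToken = extractPlusToken('myinstance@service-now.com');     // null
```

Anything between the first `+` and the `@` comes back as the token, which is exactly the extra routing information described above.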

 

Given that, I created a table to hold Email Tokens. This is a very simple table that contains only a string field (the token) and a reference to a user. The basic strategy is that when an email is received, in order to find out whether it is one of these emails sent to the Action Inbox, we scrutinize the email token. If we find an active email token in the “To” address, then not only is it one of our Action emails to process, but it will also be linked to the user that owns that token.

 

In our application we have a Script Include called DoNowUtil in which we put most (or hopefully all) of the hairy logic. I created a pair of functions, getGTDUser(emailAddress) and hasGTDUser(emailAddress). The former takes a given email address; if the address has a token and that token has an entry in the table, it returns the sys_id of the user associated with it. hasGTDUser(emailAddress) does similar work but returns true or false. Now that we have this, we can begin creating the Inbound Email Action.

 

Screen Shot 2016-08-26 at 10.11.33.png

Screen Shot 2016-08-26 at 10.08.48.png

Screen Shot 2016-08-26 at 10.18.36.png

I created a new Inbound Email Action against the Action table in our app (note we have two things named “action” in some way). I set the order to 99 because I wanted it to run before the default built-in email actions. I left the type as “None” because I want this to work on either new or forwarded emails. The condition field on whether to run is now pretty simple:

 

(new DoNowUtil()).hasGTDUser(email.recipients)

 

If that is true, this will run. Inside the script (under the “Actions” tab, for extra confusion and naming overload) I create a new DoNow Action record and set the opened_by and assigned_to fields to the user found via the email token. I do some parsing of the subject to look for context and priority indicators (more on that in a future post), and I set the short_description to a sanitized version of the subject line. Originally we were putting the full email.body_text into the description field (remember that our Action record extends Task). We decided that we wanted to use that field for arbitrary information, and it was a tossup whether the body of the email was actually valuable context. Instead, we created a more_information HTML field so that we could both retain the incoming information from the email and also edit a description if so desired.
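The action script might look something like the sketch below. This is a hedged reconstruction, not the app's actual code: it only runs inside a ServiceNow instance (the `email` object and GlideRecord API are provided by the platform), and the table name and helper are guesses.

```javascript
// Hypothetical sketch of the inbound action script. The table name
// 'x_donow_action' and the sanitizing helper are assumptions.
(function () {
  var util = new DoNowUtil();
  var userSysId = util.getGTDUser(email.recipients);

  var action = new GlideRecord('x_donow_action'); // table name is a guess
  action.initialize();
  action.opened_by = userSysId;
  action.assigned_to = userSysId;
  // The real script also parses context/priority indicators out of the
  // subject and sanitizes it; that logic is elided here.
  action.short_description = email.subject;
  action.more_information = email.body_text; // kept separate from description
  action.insert();
})();
```

The key move is that ownership comes from the token lookup rather than from matching the sender's address against sys_user, which is what makes the multi-account forwarding work.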

 

This strategy has worked pretty well so far. I actually use this in the production version of our DoNow application. It allows me to have a secret email address to which I can forward emails from my various email accounts, personal and professional. It is pretty close to a clone of the Evernote functionality or a “Send to Kindle” type situation. In practice, I have not run across a problem sending or receiving these emails or having them create my Actions.

 

Future Work:

 

The original implementation of the logic in getGTDUser(emailAddress) was suboptimal. It looped over the email token table and did a substring check of each token against the email address. If the table were large, that would be grossly expensive. I recently improved it to parse the token out of the email address and then query for that one record. I’m including a screenshot of the code of the improved version.
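In spirit, the improved lookup works like the sketch below. This is a hypothetical reconstruction: the GlideRecord query is abstracted behind an injected `findTokenRecord` function so the parsing logic is visible in plain JavaScript; inside the instance it would be a single query against the token table rather than a loop over it.

```javascript
// Hypothetical sketch of the improved getGTDUser. `findTokenRecord` stands
// in for a GlideRecord query that fetches one token record by its value.
function getGTDUser(emailAddress, findTokenRecord) {
  var match = /\+([^@]+)@/.exec(emailAddress);
  if (!match) return null;                 // no token in the address at all
  var record = findTokenRecord(match[1]);  // one targeted query, not a scan
  return record ? record.user : null;      // sys_id of the owning user
}

// Stubbed usage in place of the real table query:
var stubQuery = function (token) {
  return token === 'tok1' ? { user: 'abc123' } : null;
};
getGTDUser('myinstance+tok1@service-now.com', stubQuery);  // 'abc123'
getGTDUser('myinstance+bogus@service-now.com', stubQuery); // null
```

Parsing first and querying second turns an O(n) walk over all tokens into a single indexed lookup, which is the whole point of the improvement described above.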

 

There is also a big flaw in the current handling of the tokens. Originally my plan was that one would create and delete the tokens at will, and that would be the totality of managing them. However, the use case for deleting them is that some rogue process somewhere has discovered the email address with the token and is sending bogus emails that create bogus actions. In that case, deleting the token would just make hasGTDUser(emailAddress) return false. That means all those bogus emails would fall through to whatever the default Inbound Email Actions are, probably creating an Incident for each one. What is really needed is at least an active column on the table. Even more thorough would be the ability to blacklist and whitelist specific sending addresses. This is in the backlog but not a high priority at this moment.

 

Summary:

 

This has turned out to be an interesting exercise in creating a new strategy for accepting emails inside of ServiceNow. It is not a common use case that an application needs to route emails from multiple sending addresses to a single user on the instance, but if you have that use case, this is a way to accomplish it.

 

Series so far:

Building an Application: Part 1, Setup and Background

Building an Application: Part 2, Using Inbound Email Tokens

Building an Application: Part 3, Adding Service Portal Widgets

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

david_allen.jpg

I have been working on a skunkworks project with ctomasi, josh.nerius  and a few other people for months now as a low level background task. One of the downsides in working in an evangelism role is that sometimes you do lots of things to communicate about developing without actually doing any development yourself. In order to change that, we carved out a problem space that all of us were interested in, had an opportunity to improve toolsets and would be something that we ourselves would want to use every day.

 

We opted to use the ServiceNow platform to build an application for the GTD organization method from the mind of the brilliant David Allen. All of us on the project use some implementation of it, and none of us love our current tools. This seemed like a rich space to mine for the opportunity to move us onto a tool we all liked better than our current ones. I personally was using an Evernote-based solution, but OmniFocus and other tools were in play. In fact, no two of us used the same thing.

 

The first thing we did was work out a list of use cases for the application. The upside to implementing GTD is that much of the design is already done. There is no discussion about the priorities, for example, or that there is a review phase. It just comes down to the implementation details. All of us threw in use cases important to us to make this a tool capable of being a daily driver. These included:

 

  • Ability to filter down lists to specific contexts
  • Quick entry of new tasks (Ubiquitous capture in GTD terminology)
  • Convert emails into tasks
  • Ability to delegate and track tasks
  • Notifications around changes done by others, upcoming due dates
  • A template system for creating recurring tasks

 

Screen Shot 2016-08-12 at 12.35.30.png

There were many others; this is just a sample to give a taste of the types of features that were on our minds. Given that set of use cases, Chuck went off and created an initial cut of the data model and a framework for the basic operation of the system. Because of the timing of the project, we were able to begin on the Helsinki release. This made Service Portal available as an option for some of the user interface and gave us the ability to use a Github account as the backbone for our changes. We set up instances for dev and production, and then each of us connected our personal Github accounts to our own development instances. One note of caution: we learned the hard way that when you update from source control, it has the potential to scramble data, since it uninstalls and reinstalls the application. Because of this, we pull to our development/test instance from source control and then promote to production via update sets.

 

All this is background to get us to the subsequent posts, where I will talk about some of the programming challenges we ran into and how we addressed them. Our goal is to make this available to the world eventually. Although we are close, we are not quite at the point where we consider what we have as a releasable V1 product. I will post an update when we make that available. At some point in every post like this, people will ask for a pointer to the GitHub repo. For the same reasons it is not quite releasable yet, I’m going to hold off on publicizing that repository at this moment. However, it isn’t private and there are tools on the internet for the really motivated if you know what I mean.

 

Progress on this application moves in fits and starts because it is no one's top priority (or even top 5), but sometimes we devote some time to it anyway, frequently over evenings and weekends. I hope that people find value in following along with our progress and our thought patterns. The very first time we did the Live Coding Happy Hour was for this project, in fact. It was a reaction to the slick demos we frequently work out; we wanted to do the opposite because we felt there was also value in showing the struggle, the problems you can bump into, and how to work your way out of them. This blog series is from a similar mindset, walking along with how we pursue the project. With any luck, you will take something away that you can use in your own application development work.

 

As always, feel free to leave any comments on this post. This series is for y’all so please let me know what you do and don’t find valuable so I can dial it in as I proceed. Next post: how to deal with incoming email (aka, getting to the fun stuff!)

 

Series so far:

Building an Application: Part 1, Setup and Background

Building an Application: Part 2, Using Inbound Email Tokens

Building an Application: Part 3, Adding Service Portal Widgets

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

One of the ongoing issues that we deal with in the Developer Program is the continuity of the free developer instances. No matter how well-intentioned you are, it is always possible to have a period where you are out of the office and miss the email about your Developer Instance expiring. I don't like the idea of anyone losing their work, but there is only so much we can do to prevent it. There aren't enough resources to give developers free instances that last forever, so we do the best we can.

 

However, you as the developer have the ability to mitigate this. I've been advocating for developers to periodically take update set backups to prevent data loss. If you have a backup of everything important to you, then losing your instance is a trivial matter: you request another, reinstall from backup, and proceed ahead. Now that we are in the Helsinki era, you don't even need to deal with the update sets anymore. You can set yourself up a Git repository, save any of your work to it, and away you go. Not only does this keep you from losing your work, it gives you the ability to revert your code if you ever find yourself in a position where you have broken something that previously worked. You can tag commits and later use those as branch points, all the great functionality of source control. Nowadays, a free GitLab account allows you to have private repositories, so there is no reason not to do it.

 

Below is a video we put together to show you the very simple steps it takes to get your app committed to source control. In the video, where it says "Install on another instance," think "another developer instance after I someday lose mine." That's where the real power comes in. Rather than being a devastating loss, it should be a five-minute hiccup if your developer instance is reclaimed. If you do any work on there that is important to you, treat it as important and save a backup.

 

 

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

Screen Shot 2016-07-13 at 16.28.43.png

One of the most requested features of the Developer Program is the ability to choose the version of the developer instance that one gets assigned. The current state of affairs is that you request an instance and are assigned a random one from the available pool. You can upgrade from there if you choose, but it is difficult to go back down once you do. Some people want specific versions of an instance to match the training documentation they have, or to match production at their workplace or a client.

 

Screen Shot 2016-07-13 at 16.41.39.png

As of today, you can now do exactly this. You request an instance as you do currently, either from the button on the sidebar or from Manage -> Instance in the navigation bar. You'll now be presented with a dialog that asks you which version you want to request. There is text in the dialog that suggests that if you don't have a reason otherwise, Helsinki is your best bet. However if you have a desire for Fuji or Geneva you can certainly choose whichever you want/need.

 

Once you click the version, your request will begin processing. After a few seconds, you will be presented with the page that shows you information about your instance, including its name, URL, and the temporary password. NOTE: log in to the instance immediately with that password. You'll be asked to change it on first login, and please keep a record of the new one. Pretty much every day I get requests from people who get through this process without knowing what their password is. Make my life easier, keep up with the password. If you don't, you can reset it from the Actions dropdown on the Manage -> Instance page.

 

If you currently have an instance and you want a different version, here is what you do. If it is a higher version, you can upgrade your currently assigned instance. Included in this release is the ability to upgrade specifically to Geneva or Helsinki. Previously, the only option was to upgrade from what you had to Helsinki; now you can do either if you are starting from a Fuji instance.

 

f-blurred.png
g-blurred.png

If you want an instance of a lower version than what you currently have assigned, you will first need to release your current instance. This means you will no longer have access to it, so please back up anything you want to keep via update sets, integration with Git, and/or data exports. Once you release, that instance will be wiped and is no longer yours. There will be a 15 minute waiting period, and then you can request a new instance. Using the steps above, select the exact version you want and you'll be assigned an instance of that version.

 

On this post you'll see screenshots of the results of two requests I made. I first picked the Fuji version, then turned around, released it, and specifically requested a Geneva version. At this point the power is in your hands. As the text of the dialog says, if you aren't sure, you should request Helsinki. However, if you have a reason for wanting another version, you can now go to it.

 

Do be aware that this allows you to pick lettered versions, or selectively upgrade to lettered versions. It does not allow you to select the patch level within those families. That is maintained across all developer instances and is outside the control of individual developers. All developer instances of a given lettered version are at the same patch level, and they all upgrade at once (give or take some processing time), managed at the program level.

 

Screen Shot 2016-07-13 at 18.00.41.png

This should help solve some of the issues of developers wanting to get an instance of specific versions, or especially when that need changes over time. And don't forget, there is a version selector on the Developer Portal that controls what you see. Whether you are looking at training courses, documentation or API docs, look at the lower right corner of the website and you'll see that version selector. If you are ever looking at a context other than what you want, you always have the option to change it there. Flip it to the version you want, and all docs on the site will reflect that.

 

Hopefully this release will make it easier for people. Happy developing, developers!

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

Capture+_2016-06-29-17-14-40.png

I personally am an Android user so it has been like a pebble in my shoe that the iPhone users have had their app for almost a year with no love for the poor Android users. Well, friends, our day has finally arrived. You can now join the beta program and get access to the ServiceNow app for Android.

 

I have been using it for a day now and am very excited to have it. I am slowly building up the library of instances that I use on a regular basis. It's kind of a lot, so there has been a non-zero administrative hit just in the sheer logging in to all of them.

 

One of the reasons that I am excited is that a group of us have been working on a GTD app to manage our own (shaky) personal productivity. Although using the website from the Android phone isn't impossible, it just isn't in the same class as using a native app. Being able to open the list on my phone, and much more importantly quickly capture incoming items with a click of the button makes a world of difference.

 

At this point I don't have a lot of flight time on the app but almost everything has been great. I've seen one filter issue but I have successfully sent Connect messages, managed my to-do list and checked in my location. I am excited to be able to work with this app and really have been waiting impatiently for 9 months to get it.

 

Be aware, this requires Geneva Patch 6 or newer in order to connect to the instance. When I first tried to connect it to my developer instance, it wouldn't do it because I was on Geneva Patch 4 so I had to upgrade. With luck, a good bit of the field out there should qualify. If not, you'll have to wait for your instances to catch up I'm afraid.

 

Also be aware, this is a beta. Although the general release is not far off, you do have to accept that it is a beta and join the beta program before you can install it. It should be pretty good to go but you may find bugs or situations that don't work. If you do, feel free to send anything you find to me at dave.slusher@servicenow.com and I will route them on to the appropriate team. (I don't work with the mobile team, I'm just a fan.)

 

If you meet the above instance criteria and you have Android 4.4 or greater (and at this point if you don't, what is going on?) join the beta program, install the app and take it for a ride!

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

Hibernation Now

Photo by Metassus, licensed CC BY 2.0

The Developer Program has grown steadily since it was announced at Knowledge15, and along with that growth has come a corresponding increase in developer instance usage. The ability to give these instances away for free to the community of ServiceNow developers has reaped benefits for the entire ecosystem. However, it does come at a cost. The program was granted a dedicated chunk of infrastructure, and then another chunk later. Those of you who have been around for a while remember the Great Instance Crunch of late 2015/early 2016. We've done a lot with this gift of infrastructure, but at this point we can no longer keep going back to that well.

 

In order to provide more instances to more developers, hibernation was recently introduced to the program. Roughly 2/3 of instances are not actually used on any given day. Hibernation allows the same hardware to serve a larger number of instances, which means we can keep handing them out without having to shorten the inactivity timeout (which we really do not want to do). Hibernation is not ideal, but the choice is not between hibernation and no hibernation; it is between hibernation and having no more instances to distribute.

 

In this post, I want to clear up a few misconceptions and set some expectations around this situation. Some of these are in the FAQ but they bear repeating:

Wake your Instance from the Developer Portal

 

If your instance is hibernating, you should get a notice to that effect when you attempt to access it via HTTP. The hibernation page will have a link to the part of the portal you need to load to wake it up. Alternatively, if you just load your Manage -> Instance page in the Developer Portal, that will automatically wake your instance if it was hibernating.

 

Hibernation and Reclamation are Different

 

We frequently see feedback and instance-help messages that conflate these two things, but they are entirely separate. When hibernation was originally introduced, a number of developers were confused the first time they received a timeout warning. I personally answered a number of messages of the form "You said you would reclaim my instance in two days, but it was already inactive." A hibernating instance by definition has not been reclaimed. If it was reclaimed, you wouldn't have it around to wake up. You can always still extend your instance with the button on your management console at Manage -> Instance even without waking it up, or by the standard development activity that resets the time.

 

Waking Up Needs an Active Session

 

Another common pattern is that people report having to wake up hibernating instances multiple times. If you wake it up and do not log in, it will go back to sleep in around 30 minutes. If you log in and then don't actually keep the session around for very long (log out in less than 15 minutes) it might also go back to sleep in around 30 minutes. If you are continuing to use it actively through the day, you should have around 6 hours of inactivity before hibernation begins. That should be more than enough for anyone using it on a normal workday. If you come back to it in your evening, it might require waking up again but in most cases a single waking should last through the entire day unless you do it early and then don't touch it all day.

 

It is also worth noting that developer instances are reclaimed based on developer activity, i.e., script changes, configuration changes, or anything else that would show up in an update set. Hibernation is based on any activity in an interactive session. In other words, creating records or changing data does not reset the clock on your 10-day reclamation timeout, but it absolutely does reset your hibernation timer. Any activity through the web interface will keep the instance from hibernating.

 

Hibernation Affects Your Code

 

One aspect to note is that hibernation affects scheduled jobs and other deferred executions on your developer instance. If the instance is hibernating when a job is due, the job will not execute on schedule and possibly not at all. It therefore makes sense to schedule these jobs during your business hours. Although it is standard in production to push this kind of work to the overnight hours, on a developer instance you want to do the opposite: schedule jobs for times when the instance is likely to be awake, to maximize the chance that they run.

 

Waking Will Get Faster and Easier

 

At this time, it takes around three minutes to wake a developer instance from hibernation. Over time that will get faster; the development team is shooting to get it down closer to one minute. Additionally, they hope to make the timeouts a little more permissive, with a net effect of fewer hibernations in a day and a shorter wait to get back to work when one happens.

 

There are Alternatives to Developer Instances

 

In the case where you just cannot accept hibernation, there are alternatives. If you have the inclination and resources to join the Technology Partner Program, you will receive an instance that neither hibernates nor expires from inactivity. This also has some advantages if the ultimate goal is to develop apps that will be published in the ServiceNow Store. The cost (currently $5k/year) is prohibitive for most individuals, but if your developer instance is in use for a consultancy and you find hibernation in your way, this is a possibility.

 

A frequent question is some variation of "Can I pay $X/month and get an instance that does not (hibernate | expire)?" We appreciate the sentiment and the willingness of developers to pay some of the freight of the program in exchange for removing these aspects. For the moment, that falls far enough outside our model that it is not on our roadmap. We aren't currently set up to take small retail amounts from a large number of users. This may change at some point in the future, but at this time it is not in the plan.

 

Conclusion

 

Hibernation, while not ideal, is now a part of life for users of the developer instances. It does add a little bit of inconvenience for developers. Although we would love a situation where we did not have it, the reality of our program is that we need it to continue to provide more developer instances to an ever larger pool of developers. Every effort is being made to reduce that impact but this is a side effect of the success of the ServiceNow developer program. We thank you for your interest in the program and use of these instances, and will strive to make the program ever better.

 

Let's build something great!

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

I wanted to point to an external blog (off the community) run by the consultants at The ServiceNow Guys. Their blog ServiceNow Pro Tips has a lot of good resources and is worth a look periodically to pick up new tips and tricks.

 

In particular, the posts on What's New in Geneva Part 1 and Part 2 are insightful and deserve a read, as is this one discussing GlideRecord and how it differs between the client side and the server side.

 

I'll be following the blog and looking for interesting posts to highlight here. If you run across other resources that you find valuable, let me know via a comment on this post and I will add them to a daily ritual of reading the ServiceNow blogosphere.

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

One of the new features in Geneva is the ability to write Scripted REST APIs. These take the place where Processors were used previously but have a richer feature set and more flexibility.

 

Scripted REST API Basics

 

When you create a new API inside the Studio environment, you define a name for it and an API ID (which defaults to the name but can vary independently). It will automatically choose the application and namespace corresponding to your currently selected application scope. Finally, you can set a protection policy to use no protection, or to make the API read-only or protected.

 

After saving or submitting, you will see the Base API path that was created for you. This will be /api/<APPLICATION_SCOPE>/<API_ID>/. That will be the prefix for all the individual resources you create, so if you create resource “foo”, your fully qualified URL would be:

https://instancename.service-now.com/api/<APPLICATION_SCOPE>/<API_ID>/foo

 


 

Upon creating a new resource, you will be asked for the name of the resource, the HTTP method or verb used to access it, and the relative path. The relative path can be arbitrarily complex and can contain multiple levels of path; you are not limited to a single level.

 

In addition, you can define path parameters as part of this relative path. Any parameter defined as part of the path as in the example will then be available to the scripting environment. The bracketed text will become a variable that is available from the request.pathParams object. More on this later.

 

 


 

Verbs

 

As you would expect in any modern API, you can support any of the HTTP verbs for any resource you define. This allows you to use the style of a single endpoint with multiple verbs where

 

GET /item/{id}

 

would be used to get the read view for a single data item while

 

PUT /item/{id}

 

would be used to update it and

 

DELETE /item/{id}

 

would then remove the item from the database. This allows you to create a simple externally facing interface that conforms to the style that other systems expect.
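The single-endpoint, multiple-verb style can be sketched in plain JavaScript. This is an illustration only, not platform code (on the instance, each verb is its own resource record and the platform does the routing); the store and handler names are hypothetical:

```javascript
// Hypothetical illustration: one path, three verbs, three behaviors.
var itemStore = { '42': { name: 'Widget' } };

var handlers = {
  'GET /item/{id}':    function (id)       { return itemStore[id]; },
  'PUT /item/{id}':    function (id, body) { itemStore[id] = body; return body; },
  'DELETE /item/{id}': function (id)       { delete itemStore[id]; return null; }
};

// Route a call to the handler registered for that verb on the shared path.
function dispatch(verb, id, body) {
  return handlers[verb + ' /item/{id}'](id, body);
}

dispatch('PUT', '42', { name: 'Gadget' });
console.log(dispatch('GET', '42').name); // prints "Gadget"
dispatch('DELETE', '42');
console.log(dispatch('GET', '42'));      // prints undefined
```

The point of the style is that one logical path covers the whole lifecycle of the item; the verb alone selects the behavior.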

Scripting

 

Associated with every resource is a scripting window. This is where the work of Scripted REST APIs really happens. The context of the script window provides a request and response object. As part of the Geneva enhancements you will be able to see all the methods available on these objects using the autocomplete functionality.

 

The main functionality provided by the request object is the hashes for pathParams and queryParams. In the example above where the resource is defined with an {id} template for the path parameter, that is accessed in scripting by

 

    var id = request.pathParams.id;

or alternately (both syntaxes work)

    var id = request.pathParams['id'];

 

The same thing can be done for query parameters, so that if the resource was called with a query string of “?maxRecords=30”, it would be available to the script by

 

    var maxRecords = request.queryParams.maxRecords;

 

The response object allows for low level control of the return, including manipulating headers, status codes, and the stream writer of the response itself. It is not necessary to use it directly: if the process(request, response) function returns a JSON object, that object will be the output, defined as “results” similarly to the default behavior of GlideAjax.
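Putting the request pieces together, here is a plain-JavaScript sketch of a resource script for GET /item/{id}. The mock request object exists only so the sketch is self-contained; on the platform, request and response are provided for you, and a real script would run a GlideRecord query where the stub return is:

```javascript
// Mock of the platform-provided request object for a call to
// /item/42?maxRecords=30 (hypothetical values).
var request = {
  pathParams:  { id: '42' },
  queryParams: { maxRecords: '30' }
};

function process(request, response) {
  var id = request.pathParams.id;                        // from the {id} template
  var max = parseInt(request.queryParams.maxRecords, 10) || 10; // default if absent
  // A real script would query the database here; return a stub object,
  // which becomes the "results" of the response.
  return { id: id, maxRecords: max };
}

var result = process(request, null);
console.log(result.id, result.maxRecords); // prints "42 30"
```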

 

Inside this scripting environment, you have full control over the system up to the limits of the scope, with access to GlideRecord, GlideSystem, and the other functionality you would expect. This allows a scoped application to expose any functionality that its scope can access through the system. For example, it would be possible to create a scoped application called “IncidentPlus” that provides a full logical API for interacting with Incident records on the core system. Although the scoped application owns the REST API endpoints, the functionality can be anything allowed in that scope and does not need to be confined to records within that scope.

 


 

 

REST API Explorer

 

Once the endpoints are defined, the REST API Explorer on your instance can be used to interact with and test your API. It works exactly the same for your custom defined endpoints as it does for system provided APIs like the Table API. You will be presented with an interface to pick the namespace, the API, the version, and which resource to access. If the endpoint has path parameters defined, there will be fields to enter those, along with the ability to add arbitrary query parameters to the request. Additionally, you can manipulate headers such as the request and response data formats, and add arbitrary headers if you desire.

 


 

Sending the request will then display the return value. If you did everything correctly, you will see a status of 200 and the data you expected. If debugging, you might commonly experience a 500 error and some error messaging. You can use the REST API Explorer as a debugging tool to exercise your API until you get the logic correctly working.

 


Versioning

 

Versioning was briefly mentioned above. Scripted REST APIs support versioning, and using it is a best practice. Versioning allows adding, deprecating, or altering behavior of the API in a subsequent version while leaving the original behavior operational as a previous version. This way current users of the API are protected from changes. If your request uses a version-qualified endpoint, it will not accidentally pick up functionality that was not intended.

 

This implies that when publishing an API version, behavior should be held consistent within that version. Any change that would break the logic of consumers of the API should always be contained in a subsequent version to minimize disruption to any deployed accessors of your API. Publishing the versioned form of your APIs allows consumers to "pin" their implementation to your version of the API, again isolating them from future changes.
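For illustration (hypothetical scope, API ID, and resource; assuming the standard version segment in the path), a pinned request simply includes the version between the namespace and the API ID:

```text
# Default (latest) version:
https://instancename.service-now.com/api/x_myscope/myapi/foo

# Pinned to version 1, isolated from later behavior changes:
https://instancename.service-now.com/api/x_myscope/v1/myapi/foo
```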

 

 

Summary

 

Scripted REST APIs present a powerful tool to developers. They allow you to define logical-level APIs that require few or no implementation details to use, with arbitrary scripting to deliver the resources. This enables more robust, less brittle integrations with external systems and less time spent maintaining those connections.

 

If you have interesting ideas for (or even deployed) REST APIs, please leave them as a comment on this post. These are an exciting addition to the toolkit that really has no upper limit to the value it can enable. I am personally looking forward to seeing the work that happens out in the world with Scripted REST APIs, so let me know how you are using them.

 

For more information you can look at the Documentation site and also watch the episode of TechNow in which I did a demonstration of some of the capabilities of Scripted REST APIs. Happy exploring!

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

Dave Slusher

Developer Instance Tips

Posted by Dave Slusher Employee Dec 2, 2015

When we answer the feedback from the Developer portal, the biggest single topic is questions related to the developer instances. The bulk of these break down into one of two buckets:

1) My instance was reclaimed from inactivity - how can I get that restored?

2) I want to request an instance but it says none are available. Now what?

 

Let's look at both of those situations with some tips on avoiding negative impact on your development experience.

 

Background

 

ServiceNow's Developer Program has been quite successful. One of the reflections of that success is that a few weeks back, demand for developer instances began outstripping supply. When the program went live, there was a chunk of capacity allocated for these instances. Although there are plans to increase that capacity, it will be 2016 before the additional instances can go online. If there were no timeout, instances that are not being actively used would prevent new instances from being commissioned. 10 days was worked out as the happy medium that allows most users enough time to keep them alive, while also allowing enough to expire that new instances can be created.

 

My Instance Was Reclaimed

 

Periodically, if you have a developer instance provisioned you will get emails warning you that you are reaching the inactivity timeout. Take these very seriously, as you have roughly 24 hours at that point to avoid the reclamation process. When the instance is reclaimed, there is nothing that can be done to restore any work you created on your instance. Getting an instance reclaimed when you intended to keep it will always be a negative situation, but there are ways to mitigate that.

 

Our recommendation is to treat your developer instance the way you would treat a word processor with a critical document being composed. You are going to want to save that often, and how often depends on the value of the document and the amount of work lost if it were to disappear due to an error. Take update set backups of the work from your developer instance, at the very least weekly. If the work is of higher value to you, consider doing it daily. If you make it part of your routine to take this backup at the end of the work week or the work day, the amount of work that can be lost is minimal. Even if your instance is reclaimed, you can resume work right where you left off in minutes after receiving a new one.

 

In order to keep your instance from getting reclaimed, you need to have some form of development activity or else the timeout clock will begin counting. "Development activity" in this case means something that would appear in an update set. Editing a Script Include or a Business Rule counts as activity, creating an Incident would not. You need to be editing the code or configuration of the system, not the data.

 

With the holidays coming up, it is increasingly likely that people will be away from their jobs for long enough that an instance might be reclaimed. Even if your intention is to keep working with the instance to keep it alive, the possibility of forgetting to do that is real. Before you go on your holiday break, save a local copy of all update sets and guard yourself against nasty surprises.

 

I Can't Request An Instance

 

Through the fall of this year, it has become a pattern that peak instance allocation occurs somewhere around Thursday, continues through Friday, and then eases over the weekend. It makes sense: as instances are requested or used at the beginning of a work week, some fraction of them expire at the end of the following work week. This produces a sawtooth pattern when you look at the graph.

 


 

We have been recommending to developers who write us that they request an instance as early in the work week as possible, or over the weekend if that is feasible. These are the times that represent the lower points in the sawtooth graph. If it is late in the day Thursday and you try to request an instance, there is a reasonable chance that capacity will be exhausted at that point. If it is first thing Monday morning, there is a very good chance you will be able to claim an instance. By early 2016 these issues should be less prevalent but until then, it may take a bit of patience and persistence to request new developer instances.

 

As we go forward with the Developer Program, these are our recommendations to minimize disruption. Take frequent update set backups so that even if your instance is reclaimed, it is a small matter to restore your progress. If you need an instance, Mondays or weekends are the best times to request them.

 

Happy developing!

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

In Geneva, the script editor has been given significant upgrades in functionality. As you edit scripts, you’ll see a number of changes that make the developer experience smoother and more productive.

 

Real Time Syntax Check

 

One of the first things you will notice as you edit scripts is that there is now a real time syntax checker. This can be toggled on and off with a button. Rather than a manual syntax check via a button or the automatic check on submitting the form, you get real time highlighting of syntax warnings and errors in your code. As you type, the bar on the left side will display line numbers as well as noting lines with syntax errors.


 

By hovering over the dots, you will see the text of the error message. The Javascript compilation is happening in the background as you type and giving you feedback about lines of code that need to be fixed. As you continue to type and fix the errors, you will see the indicators disappear.

 

 

Autocomplete Known Record Types

 

New in Geneva is autocomplete in the script editor. In any situation where the editor can determine what your object is, it will autocomplete fields and methods of that object. For example, using the Marketing Events application from the developer program training courses as an example, here is the editing of one of those business rules. After a rule has been saved (and it has to have been saved first), the editor now understands what table it is acting on so it can determine what “current” is. This rule is acting on the Marketing Event table, so typing an “e” character shows the available fields that start with that letter. The editor autocompletes functions as well, which you can note by the “F” icon to the left.


 

One word about the limitation of auto-complete: as of Geneva you cannot dot-walk through reference types and have them complete each one. If your record has a Reference type field, you can complete that but not the fields of the referenced type. As an example, if you try to auto-complete “sys_updated_by” on a record, you will not be able to auto-complete the fields from the sys_user table directly.

 

 

Autocomplete Any Object Type

 

The autocomplete doesn’t end at the inferred object types. If you define an object in code, the editor will be able to auto-complete in that context. In my example, I am creating the variable named object, which contains two data fields and a function definition. If we then reference object and auto-complete, it shows the three available possibilities - the two data fields and the function. In this case, it even displays a different icon for the numeric field versus the field of string type.

 

You’ll note that as you auto-complete on any GlideRecord type object, all the data fields have an “O” icon even if the field is a String or Numeric type. The reason lies in the way GlideRecords are implemented: although they are typed, at heart every data field on a GlideRecord is an object of type GlideElement, which is why they register as objects and not as typed data fields. Any object defined by you in your scope will display the type-specific icons.
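The kind of local object described above might look like the following. This is a hypothetical reconstruction (the field names are mine, not from the screenshot): two data fields of different types plus one function, each of which the editor can then offer in the auto-complete list with a type-specific icon:

```javascript
// A local object with a numeric field, a string field, and a function.
// Once defined, the editor can auto-complete .count, .label, and .describe.
var object = {
  count: 2,                // numeric data field
  label: 'example',        // string data field
  describe: function () {  // function
    return this.label + ': ' + this.count;
  }
};

console.log(object.describe()); // prints "example: 2"
```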


Note that you can even autocomplete the GlideSystem functions just by typing “gs”. Since the editor knows what type of object gs is, you see the autocomplete information as soon as you type the following dot. If you are ever in a situation where the autocomplete is not popping up but you want it, such as if you used the arrow keys to navigate around, you can type Ctrl-space to bring it back.


 

 

Hotkeys

 

That leads us into the next topic - hotkeys in the script editor as of Geneva. At any point you can click the question mark icon and it will pop up a list of the editor hotkeys.


 

Since the Fuji release, almost all of the hotkey combinations have been simplified. For example, in Fuji commenting code was Ctrl-Alt-C and uncommenting was Ctrl-Alt-U. As of Geneva, Cmd-/ toggles the comment, so a single combination will add or remove a “//” style comment prefix from a single line or even a selected block.

Similarly, searching in the script editor has been simplified to Cmd-F to start searching, Cmd-G to find next and Cmd-Shift-G to find previous. This brings the keystroke combinations more in line with what developers would expect from using other similar tools. It makes the experience of authoring code in the script editor more intuitive. As with many of the Geneva enhancements to the development environment, it brings the ServiceNow platform closer to parity with the IDE style tools a desktop developer would be familiar with.


 

Autosave

 

One additional feature that is new in Geneva is autosave. As I was writing this blog post, I had a runaway Chrome page that forced me to close my whole browser window. When I returned to my business rule to continue editing, an icon in the upper left corner showed that this particular application file had auto-saved data not yet saved to the instance. Although I had not saved the example of creating the local object, even after a browser crash the edited code was inserted back into the script field when I reloaded the “Calculate total cost” business rule. This is not a feature I expect developers to see a lot (at least I hope not), but on those occasions when it is triggered, it will be a great time saver.


 

As you can see, the Geneva release brings many great features to help developers get their job done faster, with less error and lower risk of losing work.

 

Other entries in this series:

 

What's New in Geneva for Developers? - the overview post

What Developers Need to Know about Geneva: Part 1 - Studio

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

The Geneva release has a number of new features of interest to developers on the ServiceNow platform. In this series of blog posts, I will cover some of these topics in a little more depth. As always, please leave feedback about which topics you would like to see covered.


The first thing you will see as you develop applications is the new Studio environment (System Applications > Studio). From here you can manage your downloaded scoped applications, as well as all the applications you are developing on the instance.

 


If you edit an application such as the Marketing Events application from the training courses, you will be presented with the new IDE interface.

 


 

The IDE interface allows you to interact with your application more like you would with a desktop development tool. You can open any of your already created application files with a click, or create new ones with a keyboard shortcut (Cmd-Shift-o). Each of these application files opens in a new tab in the IDE, so multiple files can be edited simultaneously without leaving any form pages or creating new browser tabs. You don’t have to complete work in any given tab to open another one, since each tab is an independent context. If you want changes to reflect in another tab (such as changing a table in one tab and then editing that table’s form in another), you will have to save the work and reload the other forms.

 


 

As you edit application files, you will see the close control change to a blue dot. This is the indicator of an IDE tab with unsaved changes. If you try to close one of those tabs, you’ll be presented with a confirmation dialog to make sure that is what you intend. This makes it more difficult to accidentally close a tab with unsaved work in it. The system will try to prevent you from doing that.

 


 

When you open the “Create New Application File” dialog, you can either navigate from the category hierarchy or use the filter bar to select the type of file you want to create.

 


 

Note that as soon as you leave the name field, you’ll see the tab title change to display the information you entered. The IDE environment is highly responsive and aims to feel less like a web application and more like a native client tool.

 


 

You can also use the Go To functionality to search across the names of all your application files. This search works across both the Name field and the object names themselves.

 


 

The Code Search functionality allows you to search for code snippets. You can limit it to code on a single table via the drop down or you can even expand it across all applications on your instance with the checkbox.

 


 

When the search is executed, the results will show you the number of application files that have a match to the search string. For each matched file, it will show the title of the object with the number of individual lines that match and then show the matches in context.

 


 

This is just a small portion of the changes to make the developer experience more efficient and pleasant. In the next blog post, we will examine some of the new features for authoring script code in Studio.

 

Other entries in this series:

What's New in Geneva for Developers?

 

What Developers Need to Know about Geneva: Part 2 - Script Editor

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

The Geneva release is here, and it includes a number of features of interest to developers. These will be explored in depth over time, but here is a high level overview of some of the key points to explore.  If there is any specific topic you would like to see covered in more depth, leave a comment on this post.

 

Studio

 

The environment you use to interact with scoped applications, either from the ServiceNow App Store or ones that you are developing, has been revamped from the ground up. Studio has an IDE-like editor experience that lets you edit multiple application files simultaneously. You can have a table definition open in one tab, a business rule in another, and a script include in a third. Navigating back and forth between forms is a thing of the past: you can have them all open simultaneously, each with its own context.


 

Studio also includes several new searching tools. You can search by name and/or type across all the files in your application to find a specific one. You can also do a code search across all the files of your application that will search the content of the files. This will search across them all, regardless of file type, so that will solve the problem of looking for a bit of code but not remembering if it was in a business rule, a script include or a Workflow. Even better, with a checkbox the search can be expanded to all scoped applications on the instance.


 

The Studio also includes several kinds of protection to keep developers from losing work. If you accidentally try to close a tab with unsaved work, you will be presented with a confirmation dialog that asks if you are sure you want to close.

If you are editing application files and get interrupted by session timeout or internet outages, you will be able to auto-recover the lost work even if your browser page refreshed.

 

Script Editor

 

Any place in the UI where you can edit code, HTML, or XML, you are now presented with auto-complete information. When editing the script of a Business Rule, for example, once you have selected which table the rule is for, any time you access “current” in the script editor it can auto-complete all the functions and fields on that object. This prevents you from having to look up the exact field names of tables as you write the script; they are all accessible on the objects as you type. The same goes for JSON objects and any other Javascript object for which the editor knows the definition. Typing “Ctrl-<space>” will open any auto-complete information that is available for that variable.


Instead of having to do an explicit syntax check with the button as you edit code, there is now a real time syntax checker. As you type, you’ll see a yellow or red dot to the left of the code, indicating if there is a warning or error associated with that line. Hovering over the dot will show you the text of the warning or error.


 

Scripted REST APIs

 

As of Geneva, it is now possible to create a fully custom REST API inside of your scoped application. Once you name your API, you are given a base path that represents access to that set of APIs. Within that, you can specify individual resources with any of the HTTP verbs (GET, POST, PUT, DELETE, and PATCH) and write a script for that resource. The script has access to both the request and the response objects, as well as any parameters you define in the URL itself. This allows you to script any arbitrarily complex functionality you like as part of these APIs. You can query multiple tables within your scoped application and take action or return data based on them.

Screen Shot 2015-11-09 at 14.17.32.png

 

This allows you to create logical-level APIs for your scoped application that are different from the built-in REST APIs. It also allows you to create more abstract APIs that don’t require any knowledge of the underlying implementation details. For example, you could have an API like

 

PUT http://instance-name/api/x_14806_marketing/marketing_app/updateAttendeeName/{id}

 

that will take in data and update the attendee accordingly. The user of the API doesn’t need to know whether there is a first_name field or a single name field; the script implementing the API can handle that. Similarly, you could create an API that matches an existing legacy API. If existing applications rely on certain actions being available under a base URL, you can create all those endpoints and then implement the behavior you desire.
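To make that concrete, here is a sketch of the logic such a resource script might contain. The record and field names are hypothetical, and a tiny in-memory object stands in for a GlideRecord query so the logic can be read (and run) standalone; on an instance, this body would live inside the resource's process(request, response) function.

```javascript
// Hypothetical sketch of the script behind
// PUT .../marketing_app/updateAttendeeName/{id}
// The in-memory "attendees" object stands in for a GlideRecord lookup.
var attendees = {
  'abc123': { first_name: 'Ann', last_name: 'Lee' }
};

function updateAttendeeName(request, response) {
  var id = request.pathParams.id;   // the {id} segment of the URL template
  var body = request.body.data;     // the parsed JSON request payload
  var rec = attendees[id];
  if (!rec) {
    response.setStatus(404);
    return { error: 'No attendee with id ' + id };
  }
  // The caller sends a single logical "name"; the script decides how that
  // maps onto the underlying fields, hiding the implementation detail.
  var parts = body.name.split(' ');
  rec.first_name = parts[0];
  rec.last_name = parts.slice(1).join(' ');
  response.setStatus(200);
  return rec;
}
```

The point is the abstraction: the API consumer sends one "name" value, and whether the table stores it as one field or two stays a private implementation detail of the script.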

 

REST Attachment API

 

Geneva includes a REST API for handling attachments. You can create a new attachment with a POST request, delete one with a DELETE request, or use GET requests to read them in several ways: GET the list of attachments, GET the metadata about a given attachment, or GET the binary of the file itself.

Screen Shot 2015-11-11 at 10.40.27.png
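As a sketch of how those three GET variants map onto URLs: the Attachment API lives under /api/now/attachment, with a sys_id selecting one attachment and a trailing /file returning the binary (verify the exact paths in your instance's REST API Explorer). The helper below only assembles those URLs; the instance name is a placeholder.

```javascript
// Sketch: build Attachment API URLs for the three GET variants.
// Paths follow the /api/now/attachment convention; the instance name is a placeholder.
function attachmentUrl(instance, sysId, wantFile) {
  var base = 'https://' + instance + '.service-now.com/api/now/attachment';
  if (!sysId) {
    return base;  // GET the list of attachments
  }
  // GET metadata for one attachment, or its binary with a trailing /file
  return base + '/' + sysId + (wantFile ? '/file' : '');
}
```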

 

 

CORS Support

 

CORS support is new in Geneva. This allows you to define a set of external domains that are allowed to access REST APIs on a ServiceNow instance. If, for example, you wanted to allow AJAX calls from your main portal to interact with the Attachment API previously discussed, you could set up rules to allow that traffic. In this way, the instance knows that this external traffic is permissible, and modern browsers understand it as well. It prevents the browser from raising an error about AJAX resources that don't originate from the same domain as the current website.

 

Screen Shot 2015-11-11 at 10.47.42.png

Add Related Lists to external tables

 

New functionality added in Geneva allows a developer of a scoped application to add related lists to other forms. Let’s say you created a custom application that had an Outage table with a reference to Incident. Inside your application you could configure the Incident form to have a related list that shows any referencing Outage records. On any instance that installed your custom application, the Incident form would then automatically receive this related list. This requires no changes in the Global scope and no configuration of the Incident form itself; the relationship is contained within the scope of your application.

Screen Shot 2015-11-09 at 14.22.37.png

Screen Shot 2015-11-09 at 14.25.16.png

Other Enhancements to Integration

 

Other functionality added in Geneva includes Export Sets. Once you establish a MID Server as the export target, you can define recurring exports to that target. You can export only the deltas from previous exports, define filtering or format, and otherwise control the output to that MID Server.

 

Outbound REST messages have expanded OAuth 2.0 support, including support for authentication headers and the OAuth 2.0 Authorization Code flow for obtaining access and refresh tokens. OAuth 2.0 profiles can be used for authentication as well.

 

Other entries in this series:

 

What Developers Need to Know about Geneva: Part 1 - Studio

What Developers Need to Know about Geneva: Part 2 - Script Editor

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com

The software that runs this community was upgraded yesterday. This opens up a few new ways to get the most out of the community experience here.

 

The “My View” page (next to the home icon in the nav bar) is a great resource to use as your primary view on the community. It is customizable with tiles that give you shortcut views to different kinds of information.

 

To edit your view, click the “Edit page” link at the upper right. This allows you to edit the page, which has a number of “tile” areas; it’s similar to editing home pages in ServiceNow. Click an “Add a tile” link and you can choose which type of content populates that area. You can also reorder tiles by dragging, or move one up or down with the icons in the upper right of each tile. If a tile has some specific configuration (like choosing categories or tags), you can access that with the gear icon in its upper right.

 

Central-section tiles that you can use to optimize your experience are the “Frequently viewed” and “Recently viewed” tiles. You can configure them to show content, people, and/or places. On creation of the tile, or by clicking the gear icon, you will be presented with a set of checkboxes to choose which of the three the tile will display (you need at least one checked).

Screen Shot 2015-10-23 at 10.25.04.png

 

The sidebar tiles have more choices available than the central tiles. You can set up tiles for “Tagged Content”. This allows you to specify a comma-separated set of tags you are interested in and see the most recent posts with those tags. For example, “geneva” would be a great tag to follow now, along with others relevant to your current work. The most popular tags right now are “catalog”, “notifications”, “script”, “cms” and “workflow”. This also implies that when you are posting discussions and questions, tagging them well is a good idea, as it can help surface your question to people interested in following the topic. It is possible to see the tag cloud of the top 200 most commonly used tags here.

 

Screen Shot 2015-10-23 at 10.26.38.png

 

Some of these tiles are automatically populated, like “Frequently viewed”, “Trending” and “Tagged content.” Others consist of static content you manage yourself. You can do this with the “Key Content and Places” tile, which lets you keep a scratchpad-like area of topics you are currently following, as well as places (categories) you frequently read. Similar to this is the “Helpful Links” tile, which does much the same thing but with arbitrary links around the web. You can have a shortcut to HI, the Wiki, or the API Explorer right there in your sidebar on the community. By configuring it with your common destinations, you can make the “My View” page act as your central launching pad for your ServiceNow experience.

Screen Shot 2015-10-23 at 10.27.03.png

 

There is a lot of great functionality and a lot to learn in this new release of the community software. Here’s hoping it enhances your experience in the developer community. If you have any tricks or configurations that are making your life better, please leave them in the comments.

Dave Slusher | Developer Advocate | @DaveSlusherNow | Get started at https://developer.servicenow.com
