BT Sync & AWS March 10th 2015

BitTorrent Sync (BT Sync) is a tool that can be used to synchronize files between devices using peer-to-peer (P2P) technology. Using this method means that no server is storing the files between your devices. Think of it as a theoretically more secure Dropbox. I say theoretically because BT Sync is not open source, so we don't really know what it is doing beneath the covers.

I've been trying out BT Sync for a little while now and I think it is pretty good. The recent update to version 2.0 was much needed. I have it set up on my phone and my home computer. The one thing I miss is the ability to shut down my home computer but still be able to sync files. Since BT Sync has a Linux version, I decided to try installing it on Amazon Web Services (AWS). It turned out to be fairly easy. Below are some rough steps to get it set up. I'm not going to walk you through the setup of an EC2 instance, but I used the Ubuntu Server quick start option and set the instance type to t2.micro. All you are doing is running BT Sync; you don't need much power. A Raspberry Pi B+ can run it without issue; trust me, I did it. I will assume you used the Ubuntu quick start and have logged in to your instance via SSH. Be sure to keep the EC2 Management Console open as well; we will need to open a few ports.

  1. First you need to download the program to your EC2 instance. In the SSH window to your EC2 instance, run wget with the URL of the Linux x64 BitTorrent Sync tarball and hit enter. This will download the file.
  2. Now we need to extract it. Type in tar -xvvzf BitTorrent-Sync_x64.tar.gz and press enter.
  3. In your EC2 Management Console click on the EC2 instance you created for BT Sync, then click on the link next to Security groups.
  4. Now you should be looking at the Security Group used for the instance. Click the Inbound tab, then click the Edit button.
  5. Now add a Custom TCP Rule with the Port Range as 8888 and the Source as Anywhere. Then click Save.
  6. Head back to your SSH console and run ./btsync. This starts BT Sync in the background.
  7. Get your Public DNS URL from your EC2 Management Console and open it in your browser on port 8888. It will be something like http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:8888. You should see the BT Sync interface and it should help you get everything set up.
  8. Click the cog in the upper-right corner, then click Preferences, then click Advanced. Make note of the Listening Port. You will want that later. Next we need to make sure BT Sync runs on startup of the EC2 instance.
  9. In your SSH terminal type the following
    killall btsync
    sudo mv btsync /usr/local/bin/btsync
    sudo chown root:root /usr/local/bin/btsync
    sudo chmod 755 /usr/local/bin/btsync
  10. Go to this gist by Mendelson Gusmao. Click the Raw button next to the btsync file. Copy the URL of that raw output.
  11. In your SSH terminal run wget [PASTE URL HERE] to download the file.
  12. Using whatever text editor you want edit the new btsync file. Replace BTSYNC_USERS="mendel" with BTSYNC_USERS="ubuntu"
  13. Run the following commands to move the file into place and to set the permissions
    sudo mv btsync /etc/init.d/btsync
    sudo chown root:root /etc/init.d/btsync
    sudo chmod +x /etc/init.d/btsync
  14. Now we need to create a config file for btsync. Run this command:
    btsync --dump-sample-config > /home/ubuntu/.sync/config.json
  15. Open /home/ubuntu/.sync/config.json in your text editor. The first few lines will look like this:
    "device_name": "My Sync Device",
    "listening_port" : 0, // 0 - randomize port
    /* storage_path dir contains auxilliary app files if no storage_path field: .sync dir created in the directory where binary is located. otherwise user-defined directory will be used */
    // "storage_path" : "/home/user/.sync",
    Change My Sync Device to whatever you want, leaving the quotes intact. Change the listening port to the value you wrote down earlier. Remove the // before the storage_path line and change /home/user/.sync to /home/ubuntu/.sync. Save the file and the config is done.
  16. Run sudo update-rc.d btsync defaults to make sure everything runs on startup.
  17. Run sudo /etc/init.d/btsync start to start the daemon. It should say Starting BTSync for ubuntu. Go to the URL you used in step 7 to make sure it is running.
  18. Follow steps 3 - 5 again, only this time use the listening port number you noted earlier.
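As an aside, the edits in step 15 can also be scripted with sed instead of made in a text editor. This is just a sketch of that idea: it runs against a sample config fragment so the commands can be shown end to end, and the port number 12345 is a placeholder for the listening port you noted in step 8. On the instance, the file would be /home/ubuntu/.sync/config.json.

```shell
# Sketch of the step-15 config edits done with sed instead of an editor.
# CONFIG here is a temp file holding a sample fragment; on the instance
# it would be /home/ubuntu/.sync/config.json. Port 12345 is a placeholder.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
"device_name": "My Sync Device",
"listening_port" : 0, // 0 - randomize port
// "storage_path" : "/home/user/.sync",
EOF

# Rename the device, pin the listening port, and enable storage_path.
sed -i 's/"My Sync Device"/"EC2 Sync"/' "$CONFIG"
sed -i 's/"listening_port" : 0/"listening_port" : 12345/' "$CONFIG"
sed -i 's|// "storage_path" : "/home/user/.sync"|"storage_path" : "/home/ubuntu/.sync"|' "$CONFIG"

cat "$CONFIG"
```

Once you are comfortable the expressions do what you expect, you can point them at the real config file.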

That's all there is to it.

C# Memory Leaks January 17th 2015

For the past three or four work days I have been trying to find a memory leak in a Windows Service we created that is used to synchronize data between two systems on a nightly basis. I had just added a single method to it for a new job and the memory usage was climbing until it hit the 2GB process limit.

At first I thought it might be something going wrong in my foreach loop. It was the only thing I had added that ran enough to create that kind of usage. I was looping over 200,000 items for the initial sync, so it wouldn't be hard to hit that kind of usage if I had an object keeping track of everything I synced. But I didn't have any of that. I went through my loop several times. Then I started wondering about the method generating the API calls to the other system. If something was going wrong there it would easily explain the memory usage. I had to make up to 3 API calls for each loop iteration.

At first I thought it might be because the class for making the API calls was static, and I had seen an article that said static objects aren't garbage collected, so I started digging through there but couldn't find anything. Then I decided I needed to confirm the memory leak. I pulled out Performance Monitor and set it to track the Private Bytes for my process. Sure enough, as soon as I triggered the sync the memory usage steadily took off. Once I had whittled down the number of objects to sync so I wouldn't run out of process memory, I saw that the memory was never released even after the sync finished. Now I really had to find the problem; restarting the service nightly wouldn't be viable.

I found a tool called DebugDiag and it proved invaluable. I was able to create a dump of the process and then generate a report. That report had a great warning at the top that led me to an article explaining exactly what was going on.

DebugDiag Results

As it turns out, XmlSerializer can generate a new temporary assembly every time one of its constructors is called (only the XmlSerializer(Type) and XmlSerializer(Type, String) overloads cache the generated assembly). In my case, up to 6 constructor calls for each loop iteration accounts for the memory usage I was seeing. The XmlSerializer was being used for tracing and was surrounded by preprocessor directives, so if I had compiled for production instead of debugging the issue would never have shown itself. One of my coworkers had that code there because he needed it for debugging some issues, so I commented it out and added a warning about the memory issue.

Now the service runs with a maximum of 56MB of memory as opposed to shooting up to 2GB. The problem may not have shown up in production, but at least we know about XmlSerializer for the future. If you can help it, instantiate it once and reuse it over and over.

ServiceNow Record Producer Caveats September 23rd 2014

I recently ran into a problem when using Record Producers. In the script for the Record Producer I am using the applyTemplate function on the current GlideRecord. The template used varies, which is why I can't use the template field on the Record Producer. I kept ending up with duplicate task records with the exact same value in the number field.

After some tinkering I came to the conclusion that the applyTemplate method causes an insert, and that the Record Producer also does an insert after running the Record Producer script. After a lot of frustration I was looking through the Sandbox instance of ServiceNow when I noticed that the New LDAP Server record producer inserted the current GlideRecord and used the setAbortAction method. So I decided to give that a try.

Here is what I ended up with at the end of my Record Producer:


This works because the insert done by the Record Producer is aborted, and thus stops the duplicates.
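The end of the script boiled down to applying the template and then aborting the Record Producer's own insert. Below is a small sketch of that flow; the current object here is a hand-rolled stand-in for the real GlideRecord, and the template name is made up, just to show why the duplicate goes away.

```javascript
// Stand-in for the Record Producer's 'current' GlideRecord, purely to
// illustrate the flow. This is not the real ServiceNow API.
var inserted = [];
var current = {
  abortAction: false,
  // applyTemplate() performs an insert as a side effect
  applyTemplate: function (name) { inserted.push(name); },
  // setAbortAction(true) tells the engine to skip its own insert
  setAbortAction: function (flag) { this.abortAction = flag; }
};

// The shape of the end of the Record Producer script:
current.applyTemplate('network_request_template'); // hypothetical template name
current.setAbortAction(true);

console.log(inserted.length);     // 1 - only the template's insert happened
console.log(current.abortAction); // true - the Record Producer's insert is aborted
```

With the real objects, applyTemplate does the one insert you want and setAbortAction(true) suppresses the second one, which is what eliminated the duplicate task numbers.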

GORUCK Light May 22nd 2014

Disclaimer: This post will have absolutely nothing to do with programming

Last month I took part in an event called GORUCK Light. The company that puts these events on is called GORUCK. They make amazing rucksacks and other gear. Don't be surprised when you see the prices. They aren't cheap, but that is because they are well made and made in the USA. The company was founded in 2008 by Jason McCarthy, a special forces veteran.

The first of their events started as a way to test their gear. They want the gear to hold up to what special forces can put it through. From that event a new business for them was born. They have 4 levels of Good Livin' events. Light, Challenge, Heavy, and Selection.

I took part in GORUCK Light #263 in Indianapolis, IN. The gist behind the Light is 4 - 5 hours, 7 - 10 miles, and 10 or 20 lbs of weight in your ruck (based on your body weight). Sounds easy, right? I mean, it's called "light"... but it is anything but. I had to push and dig deep to make it through. I hurt for a solid 3 days afterwards, and I am planning to do it all again. It was the most physically demanding thing I have ever done and I am glad I did it. I'll give you a rundown of what we did as best as I can remember it.

We started off at the Circle where Cadre Matt had us form up into ranks, then he did an inspection of our gear. After all the administrative stuff was done we put on our rucks and headed off to some green space near the Eiteljorg Museum and White River. That part was easy and I let my guard down at that point, wondering if the Light was going to be too easy. Then we started PT at the green space. I don't remember everything we did, but here is a short list:

  • Bear crawls
  • Inchworms
  • Lunges
  • Wheelbarrows
  • Push ups
  • Leg lifts
  • The tunnel of love (40+ people)

Everything was done as a team and with our rucks on. It was the second hardest part of the whole event for me. We had one guy who was throwing up in the bushes and another who looked like he was going to pass out. Both were made to drink tons of water to get hydrated. We had no quitters.

At this point a team leader was selected and we were told our mission. I don't remember the details, but we had to go to the zoo to "rescue animals". And since we were going to save some animals we had to carry our packs at our sides, not on our backs. After a little confusion we ended up across the street from the zoo near the railroad tracks. I can't remember the story for this part, but we got some 5 gallon water jugs out of the bushes and had to carry those now. We went a little further down Washington Street and then cut through the bushes to get to the railroad tracks and we started going down those. As we were going Cadre saw a railroad tie and decided we needed to bring that with us, so a group of guys picked that up and started lugging it.

We kept going down the tracks for a ways until we got to an overpass. We slid down the underside of the overpass to get to the street below. We kept going along that street for a bit until we got to a place where we could easily get into White River. We dropped the jugs and railroad tie and were told to lock arms and walk into the water. We were "checking if a hovercraft could land". We went out until everyone was between knee and thigh deep. The water was freezing. Cadre told us to drop our arms, turn around, and assume the push-up position. We were all a little concerned at this point. Then he called out "DOWN!....UP!". I think we did around 10 reps. He then made the observation that we were trying to keep our faces out of the water, which he said needed fixing. He comforted us by saying that it had been done in the Hudson River. We then did what he called Dive Bombers, and he wanted to hear sound effects. I've seen a yoga move like it, but you basically start with your butt in the air, take your face down toward the water, then angle back up. Then we would reverse it. He made sure all of us had our faces in the water.

Once that was done we got out, and a new leader was selected. We climbed up the bank with our jugs and railroad tie and kept going towards our next checkpoint. As bad as the water was, it brought us all some relief from whatever was ailing us. We weren't moving very fast towards the next checkpoint, and that was a serious mistake. When our time expired we were 0.2 miles from our checkpoint. Cadre said that those with the team weight, water jugs, or the railroad tie could continue on to the checkpoint. The rest of us had to bear crawl that 0.2 miles. This was the hardest part of the whole thing. Bear crawling short distances sucks but is do-able. Add 20+ lbs in a pack and make it 0.2 miles and you have a recipe for pain.

After everyone got there Cadre gave us a little speech about why we had to be taught a lesson for missing the time to the checkpoint. We had a few minutes' rest while he talked with our new leader and then we kept going. This time we were really moving fast. None of us wanted to do that ever again. I think we got to the checkpoint with 8 to 10 minutes to spare. At this checkpoint we got to sit down and have story time. If I remember correctly, Cadre read us the story of how Kyle J. White earned his Medal of Honor.

After the story we got to leave our railroad tie behind. We were all glad to be rid of it, but we should have known we weren't out of the woods yet. Our next checkpoint was near Banker's Life Fieldhouse. We got there in time without too much issue. We did have one person roll their ankle.

Once we were at the checkpoint Cadre announced that we had taken casualties. Guys could only carry guys, girls could only carry girls. He then proceeded to pick the casualties. Of course he picked the biggest guy there as one of them. And because he had a heart, he had a canvas stretcher with him that we could use. Then we were told our next checkpoint was the Circle, so we knew this was the end. We barely made it in time, with less than 60 seconds to spare. Once at the Circle, Cadre had us do some push ups to make sure we were a little more tired, and then we got our patches.

Going into the challenge I thought that it might be too easy; I thought I was in really good shape. I was wrong. It was hard and I have more work to do to get into better shape. I took part because I wanted to see what I was capable of, and I found out. I didn't find my breaking point but I know a little better how far I can be pushed before I get there. I highly recommend GORUCK Events, and I highly recommend GORUCK's gear. They make amazing stuff and I can't wait for my next event.

ServiceNow - Navigation Handlers January 30th 2014

Today I had a need to toy with a piece of default ServiceNow functionality called Navigation Handlers. It is a feature that doesn't show up at all on the Wiki. The only reference I could find to it was in a ServiceNow Guru article that my predecessor had followed to disable it. Since there is no wiki article on Navigation Handlers, I thought I would do a small write-up about what I have figured out about them.

Navigation Handlers

A navigation handler can be used to re-write URLs and change where a user goes based on the record they are trying to view. The only default navigation handler is used to send users to the Order Status page when opening a request record in the Self-Service view.

You can access the Navigation Handlers by typing in sys_navigator.list in the navigation search bar on the left.

There are three important fields on the Navigation Handler record: the table, the class, and the script. I don't know what the class is for. The table specifies which table's records the script should run for. The script is used to rewrite the URL.

This is the default Navigation Handler script on the Request (sc_request) table:

var view = g_request.getParameter('sysparm_view');
if (!view && !gs.getUser().hasRoles())
    view = 'ess';

if (view == 'ess' || view == 'checkout') {
    var checkOutForm = gs.getProperty('', 'com.glideapp.servicecatalog_checkout_view');
    if (checkOutForm == 'com.glideapp.servicecatalog_checkout_view') {
        var realID = g_uri.get('sys_id');
        g_uri.set('sysparm_sys_id', realID);
        answer = g_uri.toString('');
    }
}


This script checks whether the view being loaded is the Self-Service (ess) or checkout view, or whether the user is unprivileged. If any of those are the case, the URL is changed so the Order Status page is loaded instead of the standard request form.

There are two objects in there that you won't find documented on the wiki: g_uri and g_request. At the end of the article I'll include a list of all of the methods they have. I don't have information on most of the functions, but at least you can see what exists. g_request seems to hold information from the HTTP request. g_uri seems to be used to build a URL, but it also starts off holding the URL of the page you are currently trying to access. By setting answer to the value of g_uri, the page gets redirected to the new URL. If answer is null then there is no redirect.


g_request.getParameter gets a parameter from the request URL and returns it as a string. For example, if the requested URL contains sysparm_view=ess and you call g_request.getParameter('sysparm_view'), it will return ess.


g_uri.get gets a parameter from the URL and returns it as a string. For example, if the URL contains sys_id=abcdef0123456789 and you call g_uri.get('sys_id'), it will return abcdef0123456789.


g_uri.set sets a parameter on the URL. It will either add the parameter or update it if it already exists. For example, if the URL does not yet have a sys_id parameter and you call g_uri.set('sys_id', 'abcdef0123456789'), the URL will now contain sys_id=abcdef0123456789.
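To make the get/set semantics concrete, here is a toy stand-in written in plain JavaScript. This is only an illustration of the behavior described above, not ServiceNow's implementation, and the page name and parameter values are made up.

```javascript
// A toy stand-in for g_uri's get/set/toString behavior.
// Illustration only - not ServiceNow's actual implementation.
function makeUri(params) {
  return {
    get: function (name) { return params[name]; },
    set: function (name, value) { params[name] = value; }, // add or update
    toString: function (page) {
      var pairs = Object.keys(params).map(function (k) {
        return k + '=' + params[k];
      });
      return page + '?' + pairs.join('&');
    }
  };
}

var uri = makeUri({ sys_id: 'abcdef0123456789' });
uri.set('sysparm_view', 'ess'); // adds the parameter
console.log(uri.get('sys_id'));           // abcdef0123456789
console.log(uri.toString('incident.do')); // incident.do?sys_id=abcdef0123456789&sysparm_view=ess
```

The real g_uri behaves like a mutable bag of URL parameters in the same way: set either adds or overwrites, and toString serializes the whole thing against whatever page you hand it.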

// These are the available methods for g_uri and g_request


ServiceNow - Workflow E-Mails January 28th 2014

Sending an e-mail from a workflow in ServiceNow is very easy. It barely takes any work. But what happens when you need to include Request Item variables? Things get ugly. I had to dig around to find out how to do it; then I hoped I could find a better looking way. I didn't.

This is how you normally access a variable on a Request Item from a script:

current.variables['variable_name'].getGlideObject().getDisplayValue(); // For the displayed value
current.variables['variable_name'].getGlideObject().getValue(); // For the real value (sys_id when it is a reference)

In order to run a script with e-mail you have to use the mail_script tag. So if we wanted to output the display value of a variable in the body of our e-mail we could do this:

template.print( current.variables['primary_contact'].getGlideObject().getDisplayValue() );

In my case I had about 20 variables that needed to go in the e-mail and I didn't want that ugly mess throughout my message body. This is the best way I have come up with to accomplish what I needed in less space:

var item_var = function(key) {
    template.print( current.variables[key].getGlideObject().getDisplayValue() );
};

... blah blah blah ...
<strong>Variable One:</strong><mail_script>item_var('var1')</mail_script>
<strong>Variable Two:</strong><mail_script>item_var('var2')</mail_script>
<strong>Variable Three:</strong><mail_script>item_var('var3')</mail_script>
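If you want to see what the helper does outside of ServiceNow, the platform objects can be mocked with a few lines. Everything below is a stand-in: template, current, and the variable value are all made up; it only demonstrates the flow of item_var.

```javascript
// Hand-rolled stand-ins for ServiceNow's mail-script globals.
// In a real notification 'template' and 'current' are provided by the platform.
var body = [];
var template = { print: function (s) { body.push(s); } };
var current = {
  variables: {
    var1: {
      getGlideObject: function () {
        return { getDisplayValue: function () { return 'Alice'; } };
      }
    }
  }
};

// The same helper used in the notification body above.
var item_var = function (key) {
  template.print(current.variables[key].getGlideObject().getDisplayValue());
};

item_var('var1');
console.log(body.join('')); // Alice
```

Each item_var call just prints one variable's display value into the message body, which is why the notification markup above stays readable.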

ServiceNow - Passing Data October 24th 2013

At work we came across an interesting issue when dealing with ServiceNow this week. We are working on implementing Incident and needed to be able to create a change or a request from the incident. The new record needed to have its parent field set to reference the incident. Some of the original code does this for requests by redirecting the user to the Service Catalog and setting a parameter in the URL that is meant to fill in a field on the form. Unfortunately that doesn't work in our case. Our changes are generated using a wizard, and our requests are done with a record producer. We figured out a way to work around the issue and pass the values to the wizard and record producer. I thought it was something people might like to see.

Here is what the Create Request UI action looks like:

// Update saves incidents before going to the catalog homepage
current.update();
// Build the URL
var url = "";
url += current.sys_id;
// Redirect the user
action.setRedirectURL(url);

What I am doing here is setting the URL for the record producer. Then I am adding a custom parameter on the end, incident_sysid, and setting it to the incident's sys_id. You can do the same thing for a wizard. Then on the first panel of the wizard, or on the record producer, you need to add a field to put the value into. I added the field and then used a UI Policy to hide the field. Then I created an onLoad Client Script that takes the parameter from the URL and fills in the form field. Here is the client script I am using with both the wizard and record producer.

function onLoad() {
   var url = window.location.href;
   var match = url.match(/&incident_sysid=([a-zA-Z0-9]+)/);
   if (match) {
      g_form.setValue('parent', match[1]);
   }
}

The script runs a regular expression against the current URL to pull out the sys_id and sets it as the value of the field. Keep in mind that if you are using a wizard you will need to update the record producer at the end to set the value on the new record. If you are using just a record producer you can name the field the same thing in both the producer and the dictionary and it will automatically be copied over.
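The regular expression itself can be exercised outside the platform with plain JavaScript. The URLs below are made up, but the pattern is the same one the client script uses.

```javascript
// Pull the incident_sysid parameter out of a URL the same way the
// onLoad client script does.
function extractIncidentSysId(url) {
  var match = url.match(/&incident_sysid=([a-zA-Z0-9]+)/);
  return match ? match[1] : null; // null when the parameter is absent
}

var withParam = 'https://example.service-now.com/producer.do?sysparm_id=x&incident_sysid=abcdef0123456789';
console.log(extractIncidentSysId(withParam)); // abcdef0123456789

var withoutParam = 'https://example.service-now.com/producer.do';
console.log(extractIncidentSysId(withoutParam)); // null
```

Guarding for a null match matters because the same wizard or producer can be opened directly, without the parameter in the URL.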

I Got Mentioned September 9th 2013

Last week a newsletter went out that I had never heard of. It is called "Hacks && Happenings" and it is put out by a group called Indy Hackers. In their September 2013 issue my Time Track project was mentioned! I didn't even know about it until my boss forwarded me the newsletter.

Needless to say, I am now following the newsletter and keeping an eye on the Indy Hackers website for any meetups I want to take part in. Thank you, Indy Hackers.

Importance vs. Urgency August 22nd 2013

A little while back my boss introduced me to the Importance vs. Urgency Matrix. It was a nice concept and I saw the reasoning behind it, but I didn't pay it much heed. More recently I have started using it a great deal. I have two whiteboards in my office. One I keep blank for use as needed. The other has a list of the primary things on my radar, with a color-coded letter next to each. At the top I have a matrix similar to the one below, and each letter is placed in a quadrant. That is how I keep track of the major things I have to do, and which I should pay the most attention to. Let me explain the matrix.

Along one axis is importance, along the other is urgency. Things that are important and urgent fall into the quadrant I labeled "1". Things that are important but not urgent fall into the quadrant labeled "2". Things that are urgent but not important fall into quadrant "3". I think you can figure out quadrant "4". I only break things down into the four quadrants and then let them vie for priority amongst themselves. You could use the whole setup and have things broken down even more finely.

Importance vs. Urgency Matrix

I work on things based on the quadrant they fall in. If something is in quadrant 1, that is where I will try to focus most of my time. I work on things from quadrant 2 as people bring them up and as progress is needed. If something falls in quadrant 3 I will work on it a little bit as people ask. I treat it like quadrant 2, but with less priority. If something is in quadrant 4 then I will more or less ignore it until it moves somewhere else.

Let me go into the reasoning behind this ordering. If something is both important and urgent then it makes sense to work on it first, right? I think that is fairly apparent, so I won't spend much time on it. The next bit could be taken differently by different people. I put important but not urgent things ahead of urgent but not important things. My reasoning is that people freak out over small things. If someone is freaking out over an e-mail not arriving in a timely manner, you could say that is an urgent issue; they certainly think so. But is it really that important? Most of the time it's not. You have to try to be unbiased when judging these things. If a problem or project is especially dear to someone then they will consider it more important, but you have to look at the bigger picture to decide. If something isn't important and isn't urgent then you will probably waste your time if you focus on it. In my experience, things in this category gain some urgency every once in a while and then drop back down to not being urgent.

Here is an example of what my whiteboard looks like, except my list goes all the way up to N. I don't have the meaning of the colors written anywhere, so people don't know it, but the red items are ones I should focus on as much as possible. Green means focus on it every once in a while so I can show progress. Blue means ignore it until someone else brings it up. I'm planning to get some painter's tape and use it to make better lines on my board, since they are hand drawn right now. I'll probably post an update with a picture once that is done.

Importance vs. Urgency Whiteboard

Etsy Deployinator Environments July 30th 2013

Etsy open-sourced their deployment tool a while back, but I didn't learn about it until recently. It is called Deployinator and runs on Ruby. I'm looking at using it for a project at work, but I ran into a horrible lack of documentation. The one example in the repo isn't bad, it just doesn't show you how to have multiple deploy buttons. In the case where I may use it, I will need multiple. If you aren't sure what I mean by "multiple deploy buttons", check out the picture on this page.

After digging through the code I finally got that working, and I figured others might want to avoid the same digging.

If you don't configure the buttons, called environments, then you get one "Deploy production" button, like you see below.

Deployinator default environment

Below is what you get for the demo stack. Only a few of the methods are actually required for the default setup: demo_production, demo_production_version, and demo_head_build.

module Deployinator
  module Stacks
    module Demo
      def demo_git_repo_url
        # (repo URL elided)
      end

      def demo_git_checkout_path
        # (checkout path elided)
      end

      def checkout_root
        # (root path elided)
      end

      def demo_production_version
        %x{cat #{demo_git_checkout_path}/version.txt}
      end

      def demo_production_build
        # (build lookup elided)
      end

      def demo_head_build
        %x{git ls-remote #{demo_git_repo_url} HEAD | cut -c1-7}.chomp
      end

      def demo_production(options={})
        old_build = Version.get_build(demo_production_version)

        git_cmd = old_build ? :git_freshen_clone : :github_clone
        send(git_cmd, stack, "sh -c")

        git_bump_version stack, ""

        build = demo_head_build

        begin
          run_cmd %Q{echo "ssh host do_something"}
          log_and_stream "Done!<br>"
        rescue
          log_and_stream "Failed!<br>"
        end

        # log this deploy / timing
        log_and_shout(:old_build => old_build, :build => build, :send_email => true)
      end
    end
  end
end

The code that sets up the environments is in helpers.rb. The environments are defined by an array of hashes. This is what the code for the default environment looks like:

[
  {
    :name            => "production",
    :title           => "Deploy #{stack} production",
    :method          => "#{stack}_production",
    :current_version => proc{send(:"#{stack}_production_version")},
    :current_build   => proc{Version.get_build(send(:"#{stack}_production_version"))},
    :next_build      => proc{send(:head_build)}
  }
]

Once I found this code (and the typo in my method name), I was easily able to add more environments. To add environments to the provided demo stack, all you have to do is define a demo_environments method in the stack file. Below is an example with a qa and a production environment defined in a dynamic way.

def demo_environments
  [
    {
      :name            => "qa",
      :title           => "Deploy #{stack} qa",
      :method          => "#{stack}_qa",
      :current_version => proc{send(:"#{stack}_qa_version")},
      :current_build   => proc{Version.get_build(send(:"#{stack}_qa_version"))},
      :next_build      => proc{send(:head_build)}
    },
    {
      :name            => "production",
      :title           => "Deploy #{stack} production",
      :method          => "#{stack}_production",
      :current_version => proc{send(:"#{stack}_production_version")},
      :current_build   => proc{Version.get_build(send(:"#{stack}_production_version"))},
      :next_build      => proc{send(:head_build)}
    }
  ]
end

After adding these environments you will need to add a few additional methods (demo_qa and demo_qa_version). If you wanted, you could also define the environments like so:

def demo_environments
  [
    {
      :name            => "qa",
      :title           => "Deploy demo qa",
      :method          => "demo_qa",
      :current_version => proc{send(:demo_qa_version)},
      :current_build   => proc{Version.get_build(send(:demo_qa_version))},
      :next_build      => proc{send(:head_build)}
    },
    {
      :name            => "production",
      :title           => "Deploy demo production",
      :method          => "demo_production",
      :current_version => proc{send(:demo_production_version)},
      :current_build   => proc{Version.get_build(send(:demo_production_version))},
      :next_build      => proc{send(:head_build)}
    }
  ]
end

Here is what you end up with.

Deployinator with multiple environments

So far this has been the biggest thing that wasn't explained. Anything else I come across I'll add here as well.
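For what it's worth, once you strip away the Deployinator plumbing, the environments structure is just an array of hashes whose procs are evaluated later. Here is a self-contained sketch of that idea; the stack name and version numbers are made up and none of this is Deployinator's actual API.

```ruby
# Toy version of the environments lookup: an array of hashes, with procs
# so the versions are computed lazily when the page renders.
# All names and values here are placeholders, not real Deployinator code.
stack = "demo"

versions = { "qa" => "1.2.0", "production" => "1.1.9" }

environments = [
  {
    :name            => "qa",
    :title           => "Deploy #{stack} qa",
    :current_version => proc { versions["qa"] }
  },
  {
    :name            => "production",
    :title           => "Deploy #{stack} production",
    :current_version => proc { versions["production"] }
  }
]

environments.each do |env|
  puts "#{env[:title]}: #{env[:current_version].call}"
end
```

The procs are the important part: versions get looked up when a deploy page renders, not once when the stack file is loaded, which is why the hashes store callables instead of plain strings.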