Javascript, Protractor, Tests, WebDriver

Using angular.element with WebDriver’s executeScript to perform actions which aren’t accessible from the UI

A while ago I had the following problem:
I needed to write a UI test for a canvas-based component. The component received an array of objects (for the sake of this post, let’s assume they were car objects) and drew DOM elements on the canvas to represent those cars. It also displayed car information in a popup when a user clicked on a car or when a model on the scope was updated (let’s call it vm.selectedCar).

I needed to test the information popup functionality.
The problem was that I didn’t have anything to work with, because:

  1. I didn’t have any element to select, because there were no DOM elements on the canvas.
  2. I couldn’t rely on mouse actions because the elements’ positions on the canvas were inconsistent by design.

So basically I couldn’t do anything at the UI level to trigger the appearance of the information popup.


As you may know, you can execute JavaScript using WebDriver’s executeScript, so if we combine it with angular.element’s ability to retrieve the scope of a given element, we can perform actions on the scope that we can’t trigger by interacting with the UI. Let’s see some code:

var openCarInfoTroughConsole = function(args) {
    var carsCanvasScope = angular.element($("#" + args.canvasId)).scope();
    carsCanvasScope.vm.selectedCar = args.carId;
    carsCanvasScope.$apply();
};

var mockCar = {
    canvasId: 'canvasElmId',
    carId: 'carId33'
};

browser
    .executeScript(openCarInfoTroughConsole, mockCar)
    .then(performTests, handleError);

As you can see here, the first argument passed to executeScript is a function that will be executed in the browser’s context (you can also pass a string).
The second parameter is the arguments we pass from the test scope to the function’s scope when it is executed; in this case we use them to pass the id of our mock car.
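To make the mechanics concrete: WebDriver serializes the function you pass to executeScript into a string and evaluates it inside the page, handing it the serialized arguments. Here is a rough, purely illustrative sketch of that idea (fakeExecuteScript is a made-up name, not Protractor’s actual implementation):

```javascript
function fakeExecuteScript(fn, args) {
  // WebDriver stringifies the function and evaluates it in the page;
  // here we just re-evaluate it locally to show the shape of the call
  var src = '(' + fn.toString() + ')';
  return eval(src)(args); // in a real browser this runs in the page context
}

var result = fakeExecuteScript(function(args) {
  return "selected " + args.carId;
}, { carId: 'carId33' });

console.log(result); // "selected carId33"
```

This is why the function cannot close over variables from your test file: only its source text travels to the browser, so everything it needs must come in through the arguments object.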

openCarInfoTroughConsole itself uses the angular.element($0).scope() / isolateScope() trick to get the scope of the selected element, but instead of using $0 we locate our element by id.

Important to note – depending on your Angular version, debugInfoEnabled may be set to false. The app must run with debug info enabled, otherwise scope() and isolateScope() will return undefined. Since in most cases you’ll run tests in a development environment, this shouldn’t be a problem.

After we’ve got the scope of our component, we update the selectedCar model with the id of our mock car and then trigger $apply to notify Angular that we changed the model. After this, the information popup is displayed and you can run your tests on it. That’s it.

Debugging, Javascript

Faster 3rd party widget debugging using DOM breakpoints

Imagine the following scenario:
You get a requirement to implement a multi-select dropdown with icons near each option and filtering capabilities. Assuming your first course of action won’t be writing it from scratch, you google around to find a library which meets those needs. You find exactly what you need; it even has a lot of stars on GitHub, so it seems safe to use. Jackpot.
You implement this dropdown and everything works fine.

After 2 weeks you get a call from the good guys at QA saying that sometimes, when filtering random strings, the icons of the filtered results disappear. How would you handle it? I assume you’ll google the problem to see if this is a known issue with the library. But what if you don’t find anything – then what?

DOM Breakpoints to the Rescue

In order to understand what causes the problem, we need to know exactly which code in the library removes the DOM node, and follow its call stack. We can do this by setting a DOM breakpoint on the removed element. Let’s see some code:


<!DOCTYPE html>
<html>
  <head>
    <script data-require="jquery@3.0.0" data-semver="3.0.0" src=""></script>
    <script src="3rdPartyLib.js"></script>
  </head>
  <body>
    <i id="myIcon">ICON</i>
  </body>
</html>


// Assume this is a 3rd party library you are using.

var iconRemover = function() {

    // Imagine we have much code here
    function firstInStack() {
        // Imagine we have much code here
        secondInStack();
    }

    function secondInStack() {
        // Imagine we have much code here
        thirdInStack();
    }

    function thirdInStack() {
        $("#myIcon").remove();
    }

    // the library removes the icon 10 seconds after load
    setTimeout(firstInStack, 10000);
};

iconRemover();

In order to understand what causes the icon element to be removed, inspect the element, click on ‘Break on’ and select ‘Node removal’ (we’ll cover the other options later):


Now we’ll wait 10 seconds until the first timeout of our 3rd party library kicks in, and the debugger will open with the following screen:


As you can see, the breakpoint stopped at jQuery’s removeChild function, and in the call stack pane you can see the chain of calls that eventually invoked jQuery’s remove: firstInStack called secondInStack, which called thirdInStack, which removed the icon by id. This gives us a crystal clear picture of what’s going on, which makes debugging much easier. You can click on any function in the stack to examine its contents; if we click on thirdInStack we’ll see:


One thing you need to make sure of is that the Async option is checked in the debugger (I’ve marked it in the previous screenshot). This instructs the debugger to show asynchronous functions in the call stack.

Types of breakpoints

As you can see there are 3 types of breakpoints:

  • Subtree Modifications – Addition, removal or modification of any child element
  • Attributes Modifications – Any change in the attributes of the element under inspection
  • Node Removal – Removal of the element under inspection

That’s it, pretty simple yet efficient.

Continuous Integration, Grunt, Javascript, Tests, Xvfb

Running UI tests on real browsers in continuous integration using X virtual framebuffer (through a task runner)

Many projects that run UI tests as part of CI (continuous integration) rely on PhantomJS, a great headless browser. Unfortunately, using PhantomJS has some drawbacks:

  1. You don’t really cover quirks that differ from browser to browser (working on Chrome but not on FF, working on FF but not on IE), so you might actually ship a buggy screen to production
  2. Debugging is hard. Although PhantomJS runs on top of WebKit, it has its own quirks, so you might get failing tests while all of your browsers show that everything’s OK. Debugging something you can’t see is troublesome.

UI tests with real browsers

So you’re not completely satisfied with PhantomJS and you want to run your UI tests on real browsers. It’s a good idea with one little difficulty: you want your UI tests to run as part of CI, but most CI servers don’t have displays.

Enter Xvfb (X virtual framebuffer). From Wikipedia: Xvfb is a display server implementing the X11 display server protocol. In contrast to other display servers, Xvfb performs all graphical operations in memory without showing any screen output. From the point of view of the client, it acts exactly like any other X display server, serving requests and sending events and errors as appropriate. However, no output is shown. This virtual server does not require the computer it is running on to have a screen or any input device. Only a network layer is necessary.

So basically what you’ll need to do is:

  1. Start Xvfb
  2. Run your tests
  3. Kill Xvfb
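Without a task runner, the three steps above boil down to a small shell script (a sketch; it assumes Xvfb is installed and that `npm test` is your UI test command – adjust the display number and the test command to your own setup):

```shell
#!/bin/sh
set -e

# 1. Start Xvfb on virtual display :99
Xvfb :99 -ac -screen 0 1600x1200x24 &
XVFB_PID=$!

# 2. Run the tests against the virtual display
DISPLAY=:99 npm test

# 3. Kill Xvfb
kill "$XVFB_PID"
```

The task-runner version below does exactly this, but keeps the display number and lifecycle management inside your build configuration.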

Running Xvfb through a task runner

In this example we’ll use Grunt to run Xvfb as one of the tasks, but this is also possible with Gulp (I’m not sure about the others).
First, we’ll install the required packages. Make sure you don’t forget --save-dev, since we want to update our package.json with the new development dependencies.

npm install --save-dev grunt-env
npm install --save-dev grunt-shell-spawn

Next we’ll setup grunt tasks for Xvfb:

        shell: {
            xvfb: {
                command: 'Xvfb :99 -ac -screen 0 1600x1200x24',
                options: {
                    async: true
                }
            }
        },
        env: {
            xvfb: {
                DISPLAY: ':99'
            }
        }

Now we’ll set up the UI test task. For this example we’ll use Protractor, but it can be any other library.

        protractor: {
            options: {
                keepAlive: false,
                configFile: "protractor.conf.js"
            },
            run: {}
        }

In protractor.conf.js we should create a configuration that runs the UI tests on the required browsers (I assume you already have the proper setup for this, since it’s not in the scope of this post).

Now we’ve got:

  1. Grunt test task configuration
  2. Protractor configuration which will run the tests on all the required browsers
  3. Grunt Xvfb tasks configuration

Let’s create a task to combine everything together:

grunt.registerTask('CI-E2E', ['shell:xvfb', 'env:xvfb', 'protractor:run', 'shell:xvfb:kill']);

After this, you can simply run

grunt CI-E2E

and all your UI tests will run on top of Xvfb. That’s it.

Javascript, Memory Leaks

Debugging memory leaks: When the famous 3 snapshot technique can cost you days of development

Pretty much every second article about debugging memory leaks demonstrates the great 3-snapshot technique, introduced by the Gmail team and used to debug heavy memory leaks in Gmail. While there’s no doubt it’s a great idea, it seems that not many mention the problem that may occur if you rely on only 3 snapshots without understanding 100% of your framework, vendor libraries and the internals of JavaScript. You might say that one must actually understand and know every line of code in a project he is working on, and that is surely an admirable statement, but not really practical in the real world for medium+ sized applications. Just for some proportions: one of the world’s most famous UI libraries, Kendo, is 166,000 lines of code.

What’s the possible problem with only 3 snapshots?

Well, basically, that snapshot #3 may send you barking up the wrong tree for days. The 3-snapshot technique suggests that objects allocated between snapshots #1 and #2 that still exist in snapshot #3 might be leaking, which is not true in many cases: singleton implementations, services, factories (a nice example is Angular’s on-demand instantiation), some native JS interfaces, and basically everything that is instantiated between #1 and #2 and has every right to keep on living in snapshot #3.


Let’s take a simple code sample where we have 2 buttons: one adds items to an array and the other removes them. The array lives in a singleton which is instantiated only on demand, but after it has been instantiated once, it keeps on living as a global. (I know, globals are the devil’s work, but bear with me for the sake of the example.)


<!DOCTYPE html>
<html>
  <head>
    <title>Memory Test</title>
  </head>
  <body>
    <button id="addData">addData</button>
    <button id="removeData">removeData</button>
    <script src="singleton.js"></script>
  </body>
</html>


var myDataLayer = null;

document.getElementById("addData").addEventListener("click", addData);
document.getElementById("removeData").addEventListener("click", removeData);

function getSingleton() {

    if (myDataLayer) return myDataLayer;

    myDataLayer = (function() {

            var instance;

            function init() {

                var privateDataArray = [], i;

                function privateAddToData(numOfItems) {
                    for (i = 0; i <= numOfItems; i++) {
                        privateDataArray.push(i);
                    }
                }

                function privateEmptyData() {
                    privateDataArray.length = 0;
                }

                return {
                    publicAddToData: privateAddToData,
                    publicEmptyData: privateEmptyData
                };
            }

            return {
                getInstance: function () {
                    if ( !instance ) {
                        instance = init();
                    }
                    return instance;
                }
            };
    })();

    return myDataLayer;
}

function addData() {
    // the amount of items is arbitrary for the demo
    getSingleton().getInstance().publicAddToData(10000);
}

function removeData() {
    getSingleton().getInstance().publicEmptyData();
}

OK, so we have this tiny app and we want to make sure we haven’t created any memory leaks. Let’s use the 3-snapshot technique (please make sure to test it in incognito mode with all extensions disabled):

  1. Open the app and take the snapshot in our “healthy” mode
  2. Now click on addData and take another snapshot
  3. After this, click on removeData and take another snapshot
  4. Next, in snapshot #3, click on Summary and then filter only Objects allocated between Snapshot #1 and Snapshot #2

We will see something like this:
Objects allocated between snapshot #1 and snapshot #2

Wait, what? We have objects that aren’t browser internals still allocated in snapshot #3. Do we have a leak?

Well, no. As you can see, those objects and arrays relate to the singleton we instantiated and to JavaScript’s UIEvent and MouseEvent interfaces. All of them are logically supposed to live in the 3rd snapshot: the singleton because it’s a global, and the native JS objects because we still have an active listener (those objects are created when the actual click is performed, which is why they display as objects allocated between snapshots #1 and #2). So yes, we have objects in the 3rd snapshot, but they are not leaking.

But that’s easy, I know I need to ignore those objects. I’ll just look for something relevant

In this tiny app you may easily find what is relevant and what is not, but if you’re debugging a medium+ application with multiple vendor libraries and complicated business logic, you may waste a lot of time chasing down irrelevant retainers.

So, what can we do?

Consider this: when you have a memory leak in a set of actions (a flow), repeating the flow several times will produce a positive linear graph of the JS heap or DOM count (or both). In simple words, in most cases the graph will keep going up as long as you keep performing the action that causes the leak (I’ll write about a single leak which won’t result in a linear graph in a few moments). This means that objects leaked in snapshot #3 will be kept in all following snapshots, and additional memory will be allocated on top of what was there in #3. So if you have a linear leak, you can take 5 snapshots and, in snapshot #5, compare snapshots #3 and #4 to discover the same types of objects you discovered between #1 and #2.

For example:
If in snapshot #3, you are viewing objects allocated between #1 and #2, and you see something like

leakedObject @1

(@ marks the location in memory), then in snapshot #5, where you’ll be comparing objects that were allocated between #3 and #4, you’ll see something like

leakedObject @2

Meaning: the same object type leaked twice and created a linear leak. If you remove the filter completely and view everything existing in snapshot #5, you will see

leakedObject @1
leakedObject @2
But what good will it do to take 5 (or even 7) snapshots instead of 3?

Let’s get back to our tiny app and repeat the snapshot process, but now taking 7 snapshots instead of 3. Repeat these actions (don’t forget incognito mode and disabling all extensions):

  1. Take a snapshot #1 before we begin
  2. Click on add data, take snapshot #2
  3. Click on remove data, take snapshot #3
  4. Click on add data, take snapshot #4
  5. Click on remove data, take snapshot #5
  6. Click on add data, take snapshot #6
  7. Click on remove data, take snapshot #7

You’ll see something like this:
7 Snapshots

Next, if you go to snapshot #3 and compare between #1 and #2, you’ll see something like this:

3 Snapshots

As you can see, we have allocated objects and an increase of 100kb in memory, which, if you are not 100% familiar with 100% of the app, would make you think you have a leak.

Now let’s go to snapshot #5 and there compare between snapshots #3 and #4:

5 snapshots

You’ll see 2 things: 1. No more increase in memory between #3 and #5. 2. No more allocated objects still living in snapshot #5 (objects wrapped in brackets are browser internals; we ignore them).

But wait, the browser internals make us worry; we don’t want to stick our heads in the sand, and we want to make sure nothing is leaked. No problem, let’s take a look at snapshot #7 and there compare objects that were allocated between #3 and #4:

snapshot 7

We see nothing. No memory increase, no allocated objects, internal or not. That means that whatever happened between #3 and #4 was completely removed, meaning we don’t have any leaks.

Do you see the difference? Using 7 snapshots we validated that we don’t have a linear memory leak and our application is in a healthy state. But if we had used only 3 snapshots, we could have wasted our time chasing down retainers just to find out that it’s really OK for the singleton and the internal JS interfaces to be kept in memory.

What about one-time leaks?

You are right to think that the 3-snapshot technique alone will only catch a leak that happens once, between snapshots #1 and #2. Unfortunately, I don’t have better advice than going over every single retainer, understanding what it does and then deciding whether it’s a leak or by design. My only advice is to be smart about it: if you see that on the first run your memory jumps to unreasonable numbers (unreasonable depends on the application and the devices running it), you definitely should take the time to look into it. But if you have an additional 100-200kb, or even 1mb, allocated only once and you’re not sure whether it should be, most current devices (with some exceptions, of course) are strong enough to make you think twice about whether it’s worth your time.

Jade, Javascript, NodeJs, Optimizations, Performance, Productivity, SPA

Jade pre-compiling for SPA applications

There’s an endless debate regarding server-side vs client-side templating on the web, so I’ll just say it right away to avoid getting caught in the crossfire: I don’t think that one is better than the other; I do think this is a classic case of the right tool for the right job (under the right circumstances). What I do want to share is a bit of a compromise, where you can leverage some of Jade’s power to increase your productivity and save some processing time on the client, without rendering the templates at runtime on the server.

Compiling Jade in the development phase

The idea is simple: you can use Jade to pre-compile views. Those views can be –

  • Templates that don’t require run-time logical decisions to render
  • Small view pieces that you would normally hard-code into your HTML

I’m talking about a win-win situation which will boost your productivity and might even save some unnecessary client-side template rendering.

Boost up your productivity

Many developers (me among them) like the eye-pleasing Jade syntax. Fortunately, it’s not only shiny and pretty, but also provides some nice perks for your development flow.

  • First of all, it saves keystrokes
  • Second – it’s less error-prone than HTML markup, where you may discover an unclosed div only after it corrupts your entire UI
  • Also, it’s more readable (although some may disagree)
  • And last but most definitely not least, you can “templatize” pieces of HTML that you previously couldn’t. For example, let’s say you’re using some pretty Kendo-Angular dropdown. The directive can accept some bindings, in our case: k-rebind and k-options. So in every place you use this dropdown, you will write:

    <select kendo-drop-down-list data-k-rebind="vm.kRebind" data-k-options="vm.kOptions"></select>

    In such a case, if one day you have to change/modify the dropdown widget, you’ll have to change it in every single place.

    Or… you can use jade’s mixins:


    //- content.jade
    include ./widgets.jade
    div A page with a dropdown
    +getDropdown

    //- widgets.jade
    mixin getDropdown
            select#dropdown(kendo-drop-down-list, data-k-options="vm.kOptions", data-k-rebind="vm.kRebind")

    And then compile it with:

    jade content.jade --pretty

    And the output will be:

    <div>A page with a dropdown</div>
    <select id="dropdown" kendo-drop-down-list="kendo-drop-down-list" data-k-options="vm.kOptions" data-k-rebind="vm.kRebind"></select>

    The outcome is that you have extracted the dropdown into its own template (something you’ll rarely do) without “paying” anything in rendering speed or memory.

In conclusion: easier to read + fewer mistakes + fewer keystrokes + more templates = higher productivity.

Boost client rendering speed

How many times did you use client-side templating just to avoid writing the same piece of HTML more than once? I’m not talking about templates that are shown/hidden based on logical conditions or anything that requires any kind of logic. I’m talking about cases where you just wanted to avoid code duplication, so you used your client templating. God knows I’m a sinner. But I’m only human, and no reasonable human wants to do the same work twice, surely not four times when the product guys change their minds, and definitely not six times when they change their minds again. Yet, in my humble opinion, it’s an insufficient reason to use client templating.

Consider the following case: you have multiple pages with the same header and footer. You use ng-include rather than directives, since you can’t see the point in modularizing these two views; you just want to avoid code repetition.
So you’ll have something like this:


<!-- sub-header.html -->
<div>This is header</div>

<!-- sub-footer.html -->
<div>This is footer</div>

<!-- content.html -->
<div ng-include="'sub-header.html'"></div>
<div>This is my content</div>
<div ng-include="'sub-footer.html'"></div>

In this case, in order to fully render the view, your app will have to make 3 HTTP requests (sub-header, sub-footer, content) and use the template engine to render sub-header and sub-footer each time the user requests the view. A bit costly just to avoid code duplication.

Now consider this: use Jade to create 3 different views and then compile them into a single view during development, so you’ll have one complete view that requires neither client templating nor additional HTTP requests, but is still separated into different source files (I’ll discuss the file size caveat later in the post). It will look something like this:


//- content.jade
include ./sub-header.jade

div This is my content

include ./sub-footer.jade

//- sub-header.jade
div This is header

//- sub-footer.jade
div This is footer

Now, when we compile content.jade:

jade content.jade --pretty

And the result will be:

<div>This is header</div>
    <div>This is my content</div>
<div>This is footer</div>

The same result we would have got using regular ng-include, but without the unnecessary HTTP requests and client processing.

But the file size is getting larger!

A definitely valid argument: obviously, if we pre-compile every logic-less include, our files will get larger, so you need to ask whether you can afford those extra KBs.
Some questions you need to ask might be:

  • Which number will be higher – the number of users using the application, or the number of features and the usage time per user? For example, if your application is a heavily featured GUI for IT professionals, the number of users will be relatively low, while the ‘visit time’ of each user will be long. In such a case, it makes a lot of sense to pre-compile the templates so your server helps out the client.
  • In which environment will the app run? Open to the public or on a local network? In local environments you can allow yourself to be less concerned about download speed, also making pre-compiling a valid option.
  • Is your current load time acceptable? If yes, you might consider gzipping your responses (cutting transfer size by around 50-90%) while adding some more KBs to your files, so the trade-off might be worth it.

I can go on and on about more considerations, but I believe my point is clear enough. You shouldn’t blindly pre-compile your templates; you should benchmark the change and take many factors into consideration. But still, in my opinion, pre-compiling is a valid and helpful method to boost your client’s speed.

AngularJs, Javascript

Using promise chaining and $q.when to create a complete and clean flow out of distributed, unrelated APIs

Recently I had to develop a feature which utilized no less than 4 different APIs in a single flow. While this may sound complicated, the real challenge was integrating all of them into a clean Angular flow, without relying on scope.$apply, safeApply, $digest, unnecessary watchers and all kinds of white or black magic. Some of the API calls were synchronous and some were asynchronous, but all of them had to be executed in a chain, passing data to each other.

So – how should we take 4 completely unrelated libraries, exposing different APIs, and not only properly chain them, but also integrate them cleanly into our application flow?

First of all, $q.when

As the Angular documentation about $q explains $q.when:
Wraps an object that might be a value or a (3rd party) then-able promise into a $q promise. This is useful when you are dealing with an object that might or might not be a promise, or if the promise comes from a source that can’t be trusted.

So what exactly does it mean? If we (extremely) simplify the explanation, it basically means that whatever value you receive from a given 3rd party method, whether it’s a promise or a regular value, you will be able to handle it as you would handle a regular $q promise resolution. On top of that, what the documentation does not mention is that after the state changes (whether resolved or rejected), a digest cycle is triggered, eliminating the need for $scope.$apply.

So here you go: one line of code, one (and a half) lines of explanation, and you have all your 3rd party APIs standing in line (literally), ready to be integrated into your application flow.
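Outside Angular, the same normalization idea exists as Promise.resolve (minus the digest-triggering part, which is $q-specific). A minimal Node sketch of the concept, with illustrative names:

```javascript
// a "vendor" method returning a plain value
function carShieldFromMechanic() {
  return '2008 GT500';
}

// a "vendor" method returning a thenable (stands in for e.g. a jQuery deferred)
function wheelsFromVendor(carShield) {
  return Promise.resolve(carShield + " with wheels");
}

// Promise.resolve, like $q.when, accepts either a value or a thenable
// and hands back one uniform promise type, so the chain doesn't care
Promise.resolve(carShieldFromMechanic())
  .then(function(shield) { return wheelsFromVendor(shield); })
  .then(function(car) { console.log(car); }) // "2008 GT500 with wheels"
  .catch(function(err) { console.error(err); });
```

The point is that the chaining code never needs to know which vendor method is synchronous and which is asynchronous – the normalization hides the difference.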

Now let’s wrap everything into a story-like flow execution

Since we used $q.when to wrap all our vendor methods, they are now chainable and can be executed in very clear and self-explanatory code.

Let’s see some of this code we are talking about:


<body ng-controller="AppController">

   <h1>Build a car</h1>
   <h2>{{car}}</h2>

</body>

var app = angular.module("app", []);

app.controller("AppController", function($scope, $q) {

  $scope.car = "Building my car...";

  // wrap all of our 3rd party methods in $q.when,
  // this will allow them to be chainable and will normalize the return value,
  // whether it's a promise or a value
  var getCarShield = function() {
      return $q.when(carShieldFromMechanic()).then(function(carShield) {
        return carShield;
      });
    },

    getWheels = function(carShield) {
      return $q.when(wheelsFromVendor(carShield)).then(function(wheelsType) {
        return wheelsType;
      });
    },

    getCarColor = function(shieldAndWheels) {
      return $q.when(paintTheCar(shieldAndWheels)).then(function(finalizedCar) {
        return finalizedCar;
      });
    },

    displayTheCar = function(car) {
      $scope.car = car;
    };

  // execute the car build flow.
  getCarShield()
    .then(getWheels)
    .then(getCarColor)
    .then(displayTheCar)
    .catch(function(err) {
      // catch an error if one exists
      if (err) $scope.car = 'Could not finish building the car :<';
    });
});

// get the car shield from the mechanic
function carShieldFromMechanic() {
  return '2008 GT500';
}

// get wheels from the wheels vendor
function wheelsFromVendor(carShield) {
  return carShield + " with 160/65r315 wheels";
}

// painting the car may take some time, so use a jQuery promise and notify me when it's done
function paintTheCar(shieldAndWheels) {

  var deferred = $.Deferred();

  setTimeout(function resolveOperator() {
    deferred.resolve(shieldAndWheels + ", Colored in red");
  }, 2000);

  return deferred;
}

This code is self-explanatory, but let’s review it quickly:

At the beginning we define the method chain; each method in the chain is actually a vendor method wrapped in $q.when.
As you can see, carShieldFromMechanic and wheelsFromVendor return simple values while paintTheCar returns a jQuery promise, and all of them are wrapped by $q.when.
Each chained method receives data from the previous method in the chain, and finally we .catch to see if any of the methods failed to return a value or an error occurred in one of them.

You can play around and try to change

deferred.resolve(shieldAndWheels + ", Colored in red");

to a deferred.reject, or throw an error inside one of the other methods, to see how .catch will handle the error.

And that’s it – now you have a complete working flow which combines multiple 3rd party libraries and integrates perfectly into your Angular application.

Javascript, NodeJs

Improve your logging with stack trace

Informative logs play a crucial part in fast debugging, especially when you debug someone else’s code.
Today I want to share a nice trick that will help you detect the origin of a problem a bit faster.

Let’s say you are debugging a simple file upload flow, which was written by someone else or by you a couple of months ago, so you don’t remember perfectly where everything is.

You click on the upload button and then bam! You get an error. You are confident that the log is informative enough to explain why,
so you tail -f the log, and there you see:

10:56:16 pm – Could not upload file to the file server

Seems clear enough, but now you want to see the piece of code that generated this error.
How would you find it? Ctrl+F the entire project folder for the log string? And what if the same log string exists in multiple files?

Well, obviously you will find it soon enough, but why not try to save those 30 seconds by logging the function that generated the error?

stack-trace to the rescue

Node.js has multiple packages that help us deal with the stack trace easily; I personally prefer stack-trace.
Let’s install the package:

npm install stack-trace --save

Now let’s assume we have a common module that holds our logger (IMO it’s better to have the logger in a wrapper module than to use it directly in each module we have, just in case we’ll want to replace it).

Our common module will look like this:

var stackTrace = require('stack-trace');
var moment = require('moment');

var exports = {};

exports.logs = {
    debug: function (text) {

        // get the caller's frame from the stack trace
        var trace1 = stackTrace.get()[1];
        var date = moment().format('h:mm:ss a');

        // build the log phrase from the stack trace file name, method and line number
        var trace = {
            level1: date + " - FILE: " + trace1.getFileName().replace(/^.*[\\\/]/, '') + " - METHOD: " + trace1.getMethodName() + " (" + trace1.getLineNumber() + ")"
        };

        // log.debug with whichever library you choose, console is only for simplicity
        console.debug(trace.level1);
        console.debug(text);
    }
};

module.exports = exports;

We required stack-trace and the moment library (a great library for handling dates in JavaScript; if you’re not familiar with it, you should check it out). After this, we exposed logs on the module, and logs has a debug method. The debug function accepts the string we want to log, gets the stack trace and the time, and prints them before the actual text. I’ve used console.debug just for simplicity, but you may use any logger you want.
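If you’d rather not take on a dependency, V8 (Node’s engine) exposes the same information on Error.prototype.stack. The frame format is engine-specific, so treat the parsing below as an illustrative sketch rather than a robust parser:

```javascript
function debugLog(text) {
  var stack = new Error().stack.split('\n');
  // stack[0] is "Error", stack[1] is this debugLog frame,
  // stack[2] is the caller we want to report
  var caller = (stack[2] || '').trim();
  var line = caller + ' - ' + text;
  console.log(line);
  return line;
}

function uploadFile() {
  return debugLog("Could not upload file to the file server");
}

uploadFile(); // logs the calling frame, e.g. "at uploadFile (/app/file-route.js:14:10) - ..."
```

The stack-trace package does essentially this parsing for you (plus source file and line accessors), which is why it’s the more comfortable choice in real code.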

Now let’s go back to our file upload example.
Let’s say we handle a request to /file/upload in a route file called file-route.js:

var common = require('../modules/common');

// ... some upload logic

function error(err) {
    common.logs.debug("Could not upload file to the file server: " + err);
}

// ... continue

The output to the console will be:

10:56:16 pm – FILE: file-route.js – METHOD: uploadFile(8)
Could not upload file to the file server: METHOD NOT ALLOWED

Which is much more informative than just a simple

10:56:16 pm – Could not upload file to the file server