Jade, Javascript, NodeJs, Optimizations, Performance, Productivity, SPA

Jade pre-compiling for SPA applications

There’s an endless debate regarding server side vs. client side templating on the web, so I’ll just say it right away to avoid getting caught in the crossfire: I don’t think that one is better than the other; I do think that this is a classic case of the right tool for the right job (and under the right circumstances). What I do want to share is a bit of a compromise, where you can leverage some of Jade’s power to increase your productivity and save some processing time in the client, without rendering the templates at runtime on the server.

Compiling Jade in development phase

The idea is simple: you can use jade to pre-compile views. Those views can be –

  • Templates that don’t require run-time logical decisions to render
  • Small view pieces that would normally be hard-coded into your HTML

I’m talking about a win-win situation which will boost up your productivity and might even save some unnecessary client template rendering.

Boost up your productivity

Many developers (me among them) like the eye-pleasing jade syntax. Fortunately it’s not only shiny and pretty, but it also provides some nice perks for your development flow.

  • First of all, it saves keystrokes
  • Second – it’s less error prone than HTML markup, where you may discover an unclosed div only after it’s corrupting your entire UI
  • Also, it’s more readable (although some may disagree)
  • And last but most definitely not least, you can “templatize” pieces of HTML that you previously couldn’t. For example, let’s say you’re using some pretty Kendo-Angular drop down. The directive can accept some bindings, in our case k-rebind and k-options. So in every place you use this drop down, you will write:

    <select kendo-drop-down-list data-k-rebind="vm.kRebind" data-k-options="vm.kOptions"></select>

    In such a case, if one day you have to change/modify the dropdown widget, you’ll have to change it in every single place.

    Or… you can use jade’s mixins:


    // widgets.jade
    mixin getDropdown
        select#dropdown(kendo-drop-down-list, data-k-options="vm.kOptions", data-k-rebind="vm.kRebind")

    // content.jade
    include ./widgets.jade
    div A page with a dropdown
    +getDropdown

    And then compile it with:

    jade content.jade --pretty

    And the output will be:

    <div>A page with a dropdown</div>
    <select id="dropdown" kendo-drop-down-list="kendo-drop-down-list" data-k-options="vm.kOptions" data-k-rebind="vm.kRebind"></select>

    The outcome is that you have extracted the dropdown into its own template (something you’ll rarely do otherwise) without “paying” anything rendering-speed or memory wise.

In conclusion, easier to read + less mistakes + less keystrokes + more templates = higher productivity.

Boost client rendering speed

How many times did you use client templating just to avoid writing the same piece of HTML more than once? I’m not talking about templates that are shown/hidden based on logical conditions, or anything that requires any kind of logic. I’m talking about cases where you just wanted to avoid code duplication, so you used your client templating. God knows I’m a sinner. But I’m only human, and no reasonable human wants to do the same work twice, surely not four times when the product guys change their minds, and definitely not six times when they change their minds again. Yet, in my humble opinion, it’s an insufficient reason to use client templating.

Consider the following case: you have multiple pages with the same header and footer. You use ng-include and not directives, since you can’t see the point in modularizing these two views; you just want to avoid code repetition.
So you’ll have something like this:


<!-- sub-header.html -->
<div>This is header</div>

<!-- sub-footer.html -->
<div>This is footer</div>

<!-- content.html -->
<div ng-include="'sub-header.html'"></div>
<div ng-include="'sub-footer.html'"></div>

In this case, in order to fully render the view, your app will have to make 3 HTTP requests (sub-header, sub-footer, content) and use the template engine to render sub-header and sub-footer each time the user requests the view. A bit costly just to avoid code duplication.

Now consider this: using jade to create 3 different views and then compile them into a single view at development time, so you’ll have one complete view that requires neither client templating nor additional HTTP requests to work, but is still separated into different files (I’ll discuss the file size caveat later in the post). It will look something like this:


// sub-header.jade
div This is header

// sub-footer.jade
div This is footer

// content.jade
include ./sub-header.jade
div This is my content
include ./sub-footer.jade

Now, let’s compile content.jade:

jade content.jade --pretty

And the result will be:

<div>This is header</div>
    <div>This is my content</div>
<div>This is footer</div>

Same result as we would’ve got if we used regular ng-include, but without unnecessary HTTP requests and client processing.
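The compile step can also be wired into your build, so the HTML is regenerated on every change. A sketch of a hypothetical Gruntfile using grunt-contrib-jade (the file paths are assumptions, adapt them to your own layout):

```javascript
// Hypothetical Grunt config that compiles jade views at build time
// (assumption: grunt-contrib-jade is installed as a dev dependency).
module.exports = function (grunt) {
  grunt.initConfig({
    jade: {
      compile: {
        options: { pretty: true },
        files: {
          // dest : src (assumed paths)
          'public/content.html': 'views/content.jade'
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jade');
  grunt.registerTask('default', ['jade']);
};
```

Pairing this with a watch task gives you the same feedback loop as hand-written HTML.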

But the file size is getting larger!

A definitely valid argument: obviously, if we pre-compile every logic-less include, our files will get larger, so you need to ask whether you can afford those extra KBs.
Some questions you might ask:

  • What would be the higher number – the amount of users using the application, or the amount of features and usage time per user? For example, if your application is a heavily featured GUI for IT professionals, the amount of users will be relatively low, while the ‘visit time’ of each user will be long. In such a case, it makes a lot of sense to pre-compile the templates so your server helps out the client.
  • In which environment will the app run? Open to the public or on a local network? In local environments you can allow yourself to be less concerned about download speed, also making pre-compiling a valid option.
  • Is your current load time acceptable? If yes, then you might consider gzipping your responses (reducing their size by around 50-90%) while adding some more KBs to your files, so the trade-off might be worth it.

I can go on and on about more considerations, but I believe my point is clear enough. You shouldn’t blindly pre-compile your templates; you should benchmark the change and take many factors into consideration. But still, in my opinion, pre-compiling is a valid and helpful method to boost your client speed.

Laravel, PHP

Exception in the exceptions handler

I stumbled upon this issue a couple of months ago, when our Laravel based backend server stopped logging and handling an exception that was thrown in a specific flow that included a file upload.

The easy part was finding the exception; a quick peek in Apache’s logs revealed what was hiding: FileNotFoundException. But the file existed, the upload was successful, and I had no idea which file Willis was talking ’bout. After a while I found the real exception, some edge case data integrity issue. The small problem was fixed quickly, but the bigger problem had just begun; there was a case where our exception handler fails for an unknown (yet) reason.

To make a long story short – there was a problem (or intentional design) in the httpFoundation\File implementation that didn’t play nicely with our attempt to capture and log the request URL which caused the exception. So eventually, the logging attempt in our exception handler caused another exception. And there you go, an exception in the exceptions handler.

Handling exceptions in the exception handler

As you may know, Laravel’s exception handler extends ExceptionHandler and has 2 methods:

  • report – responsible for reporting the exception; there you may log it, send it to external bug management tools such as New Relic, or anything else.
  • render – responsible for creating the HTTP response that will be sent back to the browser.

After the report method finishes processing whatever you defined as proper exception handling, it will pass the exception to its parent’s (ExceptionHandler) report method – unless, of course, you had a problem in the handler.

So how would you handle an exception in the handler itself?

A simple try-catch-finally block was the solution for this exception-ception.
Consider the following modification of Laravel’s report method:

public function report(Exception $e) {
	try {
		// Try to execute exception handler
		Log::error("Exception ... ");
		// ... Report to New Relic
		// ... Blame the engineers
	} catch(Exception $handlerException) {
		// Report also the handler's Exception
	} finally {
		// Whether the try part succeeded or not, report the original exception
		return parent::report($e);
	}
}
The logic is simple: try to execute your handler, catch if something goes wrong and report it, and then finally report the original exception that activated the exception handler in the first place. In this case, if, God forbid, your handler breaks, nothing goes unnoticed: not the original exception, and not the exception in the handler.

AngularJs, Javascript

Using promise chaining and $q.when to create a complete and clean flow out of distributed unrelated API’s

Recently I had to develop a feature which utilized no less than 4 different APIs in a single flow. While this may sound complicated, the real challenge was integrating all of them into a clean Angular flow, without relying on scope.$apply, safeApply, $digest, unnecessary watchers and all kinds of white or black magic. Some of the API calls were synchronous and some were asynchronous, but all of them had to be executed in a chain, passing data to each other.

So – how should we take 4 completely unrelated libraries, exposing different APIs, and not only properly chain them, but also integrate them cleanly into our application flow?

First of all, $q.when

As angular documentation about $q explains $q.when:
Wraps an object that might be a value or a (3rd party) then-able promise into a $q promise. This is useful when you are dealing with an object that might or might not be a promise, or if the promise comes from a source that can’t be trusted.

So what exactly does it mean? If we (extremely) simplify the explanation, it basically means that whichever value you receive from a given 3rd party method, whether it’s a promise or a regular value, you will be able to handle it as you would handle a regular $q promise resolution. On top of that – and the documentation does not mention this – after the state changes (whether resolved or rejected), a digest cycle is triggered, eliminating the need for $scope.$apply.

So here you go: one line of code, one (and a half) lines of explanation, and you have all your 3rd party APIs standing in line (literally), ready to be integrated into your application flow.
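To make the normalization tangible outside Angular, here is a minimal sketch using native Promise.resolve as a stand-in for $q.when (an assumption worth stressing: unlike $q.when, Promise.resolve knows nothing about the digest cycle):

```javascript
// Plain-JS illustration of the normalization idea: whatever a 3rd party
// call returns - a plain value or a thenable - the wrapper always hands
// back a real promise you can chain uniformly.
function normalize(valueOrPromise) {
  return Promise.resolve(valueOrPromise);
}

var fromValue = normalize('plain value');          // a plain value in...
var fromPromise = normalize(Promise.resolve(42));  // ...or a thenable

// both can now be handled identically
console.log(typeof fromValue.then);   // 'function'
console.log(typeof fromPromise.then); // 'function'
```

Inside Angular you would keep using $q.when, precisely because of the digest integration described above.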

Now let’s wrap everything into a story-like flow execution

Since we used $q.when to wrap all our vendor methods, they are now chainable and can be executed in very clear and self explanatory code.

Let’s see some of this code we are talking about:


<body ng-controller="AppController">

   <h1>Build a car</h1>
   <h2>{{car}}</h2>

</body>


app.controller("AppController", function($scope, $q) {

  $scope.car = "Building my car...";

  // wrap all of our 3rd party methods in $q.when,
  // this will allow them to be chainable and will normalize the return value,
  // whether it's a promise or a value
  var getCarShield = function() {
    return $q.when(carShieldFromMechanic());
  };

  var getWheels = function(carShield) {
    return $q.when(wheelsFromVendor(carShield));
  };

  var getCarColor = function(shieldAndWheels) {
    return $q.when(paintTheCar(shieldAndWheels));
  };

  var displayTheCar = function(car) {
    $scope.car = car;
  };

  // execute the car build flow
  getCarShield()
    .then(getWheels)
    .then(getCarColor)
    .then(displayTheCar)
    .catch(function(err) {
      // catch an error if one exists
      if (err) $scope.car = 'Could not finish building the car :<';
    });
});

// get car shield from the mechanic
function carShieldFromMechanic() {
  return '2008 GT500';
}

// get wheels from wheels vendor
function wheelsFromVendor(carShield) {
  return carShield + " with 160/65r315 wheels";
}

// painting the car may take some time, so use a jQuery promise and notify me when it's done
function paintTheCar(shieldAndWheels) {

  var deferred = $.Deferred();

  setTimeout(function resolveOperator() {
    deferred.resolve(shieldAndWheels + ", Colored in red");
  }, 2000);

  return deferred;
}

This code is self explanatory but let’s review it quickly:

At the beginning we define the method chain; each method in the chain is actually a vendor method wrapped in $q.when.
As you can see, carShieldFromMechanic and wheelsFromVendor return simple values while paintTheCar returns a jQuery promise, and all of them are wrapped by $q.when.
Each chained method receives data from the previous method in the chain, and finally we .catch to see whether any of the methods failed to return a value or threw an error.
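For readers without an Angular sandbox handy, roughly the same flow can be sketched with native promises standing in for $q (Promise.resolve plays the role of $q.when here; the digest integration is what you give up):

```javascript
// Native-promise sketch of the car flow. Assumption: Promise.resolve is a
// stand-in for $q.when; it normalizes plain values and thenables alike.
function carShieldFromMechanic() { return '2008 GT500'; }
function wheelsFromVendor(shield) { return shield + ' with 160/65r315 wheels'; }
function paintTheCar(car) {
  // the only truly async step: resolves after a short delay
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(car + ', Colored in red'); }, 10);
  });
}

function buildCar() {
  return Promise.resolve(carShieldFromMechanic())
    .then(function (shield) { return Promise.resolve(wheelsFromVendor(shield)); })
    .then(function (withWheels) { return Promise.resolve(paintTheCar(withWheels)); })
    .catch(function () { return 'Could not finish building the car :<'; });
}

buildCar().then(function (car) {
  console.log(car); // 2008 GT500 with 160/65r315 wheels, Colored in red
});
```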

You can play around and try to change

deferred.resolve(shieldAndWheels + ", Colored in red");

to a deferred.reject, or throw an error inside one of the other methods, to see how .catch will handle the error.

And that’s it, now you have a complete working flow which combines multiple 3rd party libraries and integrates perfectly into your Angular application.

AngularJs, Grunt, Optimizations, Performance

Improving performance in production environment

By default, when we create data-bound elements, Angular attaches additional information about the scope and the bindings to the DOM node, and then applies an ng-binding CSS class to the data-bound element. Debuggers (such as Batarang) and test frameworks (like Protractor) require this information to be able to run.

Let’s try it out by ourselves

We will setup a simple app with a controller and a directive with isolated scope.


<div ng-controller="myController">controller</div>
<isolated-scope data-binding="isolated scope"></isolated-scope>


app.directive('isolatedScope', function() {
  return {
    scope: {
      binding: "@binding"
    },
    template: "{{binding}}"
  };
});

app.controller('myController', function($scope) {
  $scope.location = "You are in a controller";
});

Next, if we execute in the console:

angular.element(document.querySelector('[ng-controller]')).scope()

We will get the scope of our controller:

$$ChildScope: null
$$childHead: null
$$childTail: null
$$listenerCount: Object
$$listeners: Object
$$nextSibling: Scope
$$prevSibling: null
$$watchers: null
$id: 2
$parent: Scope
location: "You are in a controller"

And if we try:

angular.element(document.querySelector('isolated-scope')).isolateScope()

We will get the isolated scope of the directive:

$$childHead: null
$$childTail: null
$$destroyed: false
$$isolateBindings: Object
$$listenerCount: Object
$$listeners: Object
$$nextSibling: null
$$phase: null
$$prevSibling: ChildScope
$$watchers: Array[1]
$id: 3
$parent: Scope
$root: Scope
binding: "isolated scope"

OK, that’s informative. So what’s the problem?

The problem is that, according to Angular’s running in production document, this info may come at the cost of a “significant” performance loss.

Luckily, the solution is quite simple. You need this information while developing and debugging, but you want to turn it off in production.
You can easily switch it on/off in the config phase of the application, and control this configuration with your build tool. Let’s see an example –

First, we configure debugInfoEnabled during the config phase of our application.

// application's config constant
angular.module('myApp').constant('myConfig', {
	enableAngularDebugInfo: true,
});

// setup necessary changes in config phase
angular.module('myApp').config(function ($compileProvider, myConfig) {
	$compileProvider.debugInfoEnabled(myConfig.enableAngularDebugInfo);
});

The value of enableAngularDebugInfo should be true by default, since we need it for development.
Next, we should disable it when building the application for production. I will demonstrate with Grunt, but you can do it with Gulp, Cake, Broccoli or any other task runner.

Configuring GruntFile to disable debugInfoEnabled in production

For simplicity purposes, I have removed from the Gruntfile all the tasks that aren’t directly relevant to disabling the Angular debug info:

module.exports = function (grunt) {

    // Project configuration.
    grunt.initConfig({
        'string-replace': {
            disableAngularDebugInfo: {
                options: {
                    replacements: [{
                        pattern: 'enableAngularDebugInfo: true,',
                        replacement: 'enableAngularDebugInfo: false,'
                    }]
                },
                src: 'temp/app.full.js',
                dest: 'temp/app.full.js'
            }
        }
    });

    grunt.loadNpmTasks('grunt-string-replace');

    // for build task, run string-replace.
    grunt.registerTask('build', ['string-replace']);
};
As you can see, we used the neat grunt-string-replace to replace true with false when running the build command. Obviously the actual build command will contain many more tasks (tests, jshint etc.); string-replace is only one of them.
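In case the task’s effect isn’t obvious, here is what the replacement boils down to, sketched in plain JavaScript with an in-memory string standing in for temp/app.full.js:

```javascript
// Sketch of what the string-replace build step does to the bundle
// (assumption: simplified to an in-memory string instead of a file on disk).
var bundle = "angular.module('myApp').constant('myConfig', {\n" +
             "  enableAngularDebugInfo: true,\n" +
             "});";

var productionBundle = bundle.replace(
  'enableAngularDebugInfo: true,',
  'enableAngularDebugInfo: false,'
);

console.log(productionBundle.indexOf('enableAngularDebugInfo: false,') !== -1); // true
```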

And that’s it, now in your app.full.js file the Angular debug info will be disabled, so you can keep on developing while boosting your application’s performance in the production environment.

Javascript, NodeJs

Improve your logging with stack trace

Informative logs play a crucial part in fast debugging, especially when you debug someone else’s code.
Today I wanted to share a nice trick that will help you detect the origin of a problem a bit faster.

Let’s say you are debugging a simple file upload flow, which was written by someone else, or by you a couple of months ago, so you don’t remember perfectly where’s what.

You click on the upload button and then bam! You get an error. You are confident that the log is informative enough to explain why, so you tail -f the log and there you see:

10:56:16 pm – Could not upload file to the file server

Seems clear enough, but now you want to see the piece of code that generated this error.
How would you find it? ctrl+f the entire project folder for the log string? What if the same log exists in multiple files?

Well, obviously you will find it soon enough, but why not save those 30 seconds by logging the function that generated the error.

stack-trace to the rescue

Node.js has multiple packages that help us easily deal with our stack trace; I personally prefer stack-trace.
Let’s install the package:

npm install stack-trace --save

Now let’s assume we have a common module that holds our logger (IMO it’s better to have the logger in a wrapper module than to use it directly in each module, just in case we want to replace it).

Our common module will look like this:

var stackTrace = require('stack-trace');
var moment = require('moment');

var exports = {};

exports.logs = {
    debug: function (text) {

        // get the stack trace
        var trace1 = stackTrace.get()[1];
        var date = moment().format('h:mm:ss a');

        // build the log phrase from the stack trace file name, method and line number
        var trace = {
            level1: date + " - FILE: " + trace1.getFileName().replace(/^.*[\\\/]/, '') + " - METHOD: " + trace1.getMethodName() + " (" + trace1.getLineNumber() + ")"
        };

        // log.debug with whichever library you choose, console is only for simplicity
        console.debug(trace.level1 + "\n" + text);
    }
};

module.exports = exports;

We have required stack-trace and the moment library (a great library for handling dates in JavaScript; if you are not familiar with it, you should check it out). After this, we exposed logs on the module, and logs has a debug method. The debug function accepts the string we want to log, then gets the stack trace and the time, and prints them before the actual text. I’ve used console.debug just for simplicity purposes, but you may use any logger you want.

Now let’s go back to our file upload example.
Let’s say we’re handling a request for /file/upload in a route file called file-route.js:

var common = require('../modules/common');

// ... some upload logic 


function error(err) {
    common.logs.debug("Could not upload file to the file server: " + err);
}

// ... continue

The output to the console will be:

10:56:16 pm – FILE: file-route.js – METHOD: uploadFile(8)
Could not upload file to the file server: METHOD NOT ALLOWED

Which will be much more informative than just a simple

10:56:16 pm – Could not upload file to the file server
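If you’d rather avoid the extra dependency, a rough dependency-free version of the same trick can be built on top of Error().stack (an assumption worth noting: this relies on V8’s stack line format, "at fn (/path/file.js:line:col)", and is far less robust than the stack-trace package):

```javascript
// Dependency-free sketch of caller logging via Error().stack.
function debugLog(text) {
  // stack[0] = "Error", stack[1] = this function, stack[2] = the caller
  var callerLine = new Error().stack.split('\n')[2] || '';
  var match = callerLine.match(/at (\S+) \(?(.*?):(\d+):\d+\)?/) || [];
  var method = match[1] || '<anonymous>';
  var file = (match[2] || '').replace(/^.*[\\\/]/, '');
  var line = match[3] || '?';

  var phrase = 'FILE: ' + file + ' - METHOD: ' + method + ' (' + line + ')\n' + text;
  console.log(phrase);
  return phrase; // returned only to make the sketch easy to test
}

// hypothetical caller, mirroring the upload example above
function uploadFile() {
  return debugLog('Could not upload file to the file server');
}
```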

AngularJs, Javascript, Optimizations, Performance

Shorten $digest cycles with one time bindings

In large scale angular apps (or poorly designed small-medium scale), performance issues, if not caught on time, could be extremely painful to fix. Memory leaks, uncontrolled watchers, unnecessary expressions processed during the $digest cycle and the list goes on and on.

Today I wanted to share a little addition introduced in Angular 1.3 that allows us to shorten our $digest cycles.

Consider the following (simple) example:


myApp.controller('MyController', function($scope, urlConfigurations) {
    var vm = this;
    vm.userId = 'firstUser';
    vm.facebookProfile = urlConfigurations.facebook;
    vm.twitterProfile = urlConfigurations.twitter;
    vm.linkedin = urlConfigurations.linkedin;

    vm.nextUser = function() {
        vm.userId = 'secondUser';
    };
});

myApp.constant('urlConfigurations', {
    facebook: 'http://facebook.com/?profile=',
    twitter: 'http://www.twitter.com/?id=',
    linkedin: 'http://www.linkedin.com/?id='
});

<div ng-controller="MyController as vm">
      <a ng-href="{{vm.facebookProfile + vm.userId}}">Facebook</a>
      <a ng-href="{{vm.twitterProfile + vm.userId}}">Twitter</a>
      <a ng-href="{{vm.linkedin + vm.userId}}">Linkedin</a>
      <button ng-click="vm.nextUser();">Next User</button>
</div>
Looks like a legitimate piece of code, right? We have a urlConfigurations constant which holds our URLs; we retrieve it in our controller and send it to the view, which combines it with vm.userId to create links to the social media accounts of the specific user. The nextUser() method allows us to change the user id, which will also modify the social media URLs.

Where is the problem?

The problem is that we have 3 unnecessary watchers here: vm.facebookProfile, vm.twitterProfile and vm.linkedin. So, when we update vm.userId, it will trigger a $digest cycle that processes everything on the scope, including those 3 watchers that will never change at runtime, making processing them redundant and time/resource consuming. Now consider a case where we don’t have only 3, but maybe 100 (a good example might be a localization implementation).

The solution

As of Angular 1.3, the solution is quite simple. We can just prefix an expression with :: to bind it only once. So in our view, we will use:

<a ng-href="{{::vm.facebookProfile}}{{vm.userId}}">Facebook</a>
<a ng-href="{{::vm.twitterProfile}}{{vm.userId}}">Twitter</a>
<a ng-href="{{::vm.linkedin}}{{vm.userId}}">Linkedin</a>

In this case, vm.facebookProfile, vm.twitterProfile and vm.linkedin won’t be processed in the $digest cycle each time vm.userId is modified.
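To get an intuition for why this shortens the cycle, here is a toy model of one-time watchers (a gross simplification of my own, not Angular’s actual $digest implementation):

```javascript
// Toy model of :: bindings: a one-time watcher removes itself once its
// expression settles on a defined value, so later digest cycles skip it.
function createScope() {
  var watchers = [];
  return {
    watch: function (getter, oneTime) {
      watchers.push({ getter: getter, oneTime: oneTime });
    },
    digest: function () {
      watchers = watchers.filter(function (w) {
        var value = w.getter();
        // one-time watchers are dropped after producing a defined value
        return !(w.oneTime && value !== undefined);
      });
    },
    watcherCount: function () { return watchers.length; }
  };
}

var scope = createScope();
scope.watch(function () { return 'http://facebook.com/?profile='; }, true);  // like ::vm.facebookProfile
scope.watch(function () { return 'firstUser'; }, false);                     // like vm.userId

scope.digest();
console.log(scope.watcherCount()); // 1 - only the regular watcher remains
```

After the first cycle the constant URL is baked into the view and never re-evaluated, which is exactly the saving :: buys you.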

AngularJs, Javascript

Extending underscore.js with your own methods

Every project has its own common/utilities library, and many projects also use underscore.js (or even ports of it such as underscore-java or underscore-php). You can easily integrate your own utility functions into underscore to make things cleaner and reduce the amount of requires (or global objects, depending on where you’re holding your utilities).

In order to extend underscore with your own methods you can use _.mixin:

_.mixin({
    excerpt: function(string, numOfChars) {
        return string.substr(0, numOfChars);
    }
});
Then you can easily do:

var string = _.excerpt('Give me first 4 chars', 4); // string = 'Give'
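If you’re curious what _.mixin roughly does, here is a simplified stand-in (an illustration only, not underscore’s actual implementation): it just copies your functions onto the underscore namespace object.

```javascript
// Simplified stand-in for _.mixin (assumption: the real underscore also
// wraps mixed-in functions for OOP-style chaining, which is omitted here).
var _ = { /* underscore's own methods would live here */ };

function mixin(obj) {
  Object.keys(obj).forEach(function (name) {
    _[name] = obj[name];
  });
}

mixin({
  excerpt: function (string, numOfChars) {
    return string.substr(0, numOfChars);
  }
});

console.log(_.excerpt('Give me first 4 chars', 4)); // 'Give'
```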

That’s it.