Apache, Nginx, Optimizations, Performance

HTTP Compression: Reduce up to 90% of HTTP response size with Gzip


Speed is one of the most important (if not the most important) aspects of a quality application. Among other things, application speed is affected by the speed of HTTP requests, which in turn is affected by factors we can't control (network connection) and factors we can (response size, structure, etc.). HTTP compression provides a neat method to control response size and reduce the amount of time it takes for an HTTP request to complete.

Gzip is one of the most popular compression utilities and can reduce your response size by up to 90% (you can see a compression comparison list here). One of the nicer parts about Gzip is that from the server's point of view it's relatively easy to set up on most modern servers, and from the client's point of view you literally don't need to do anything. All modern browsers support it; you can see it if you open the request details in your network tab and look at Accept-Encoding:

gzip

What basically happens is that the HTTP request notifies the server that the client can accept Gzipped content, and the server, if configured, Gzips the content before returning the response.

Configure Apache

Apache supports 2 compression modules, mod_deflate and mod_gzip. We'll use mod_deflate, since this mod is actively maintained, comes out of the box and is easy to set up.

As explained, mod_deflate comes right out of the box in recent Apache installs (at least with the Windows installer and through Ubuntu's apt-get), so you don't need to install anything. To be on the safe side, you can check that the mod is available by running:

apachectl -D DUMP_MODULES | grep deflate

You should see something like:

deflate_module (shared)

After we’ve validated that the mod is active, add this to your .htaccess file:

<IfModule mod_deflate.c>
        <IfModule mod_filter.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/x-javascript application/javascript application/ecmascript application/rss+xml application/xml application/json
        </IfModule>
</IfModule>

This is pretty much self-explanatory: we've just added multiple content types to be compressed, after checking that the deflate and filter mods are enabled. Note that if you need to support really old browsers, you can use Apache's BrowserMatch directive:

BrowserMatch [HTTP-Agent-Regex] gzip-only-text/html

You can also set the DeflateCompressionLevel directive to control the compression:

DeflateCompressionLevel [1-9]

The higher the value, the better the compression, at the cost of more CPU.

Now restart Apache and that’s it.

Configure Nginx

In order to configure Nginx, you should edit your nginx.conf file:

gzip on;
gzip_types text/html text/plain text/xml text/css application/x-javascript application/javascript application/ecmascript application/rss+xml application/xml application/json;

On Nginx you can also disable gzip for certain browsers and control the compression level:

gzip_disable [HTTP-Agent-Regex];
gzip_comp_level [1-9];

Now restart Nginx and everything should be working.

IIS

I’ve never really configured gzip for IIS, but a quick Google search yields this highly voted answer.

How can I tell my response is compressed?

You can make sure your response came back compressed by opening your network tab and looking at the response headers:

Response headers

You can also see the before/after compression size in the main HTTP Requests view:
compressed-size

If you look at the size column, you’ll notice a black and a greyed-out number. The grey number represents the actual, uncompressed size, while the black one represents the compressed size that was transferred.

That’s it, enjoy.

Jade, Javascript, NodeJs, Optimizations, Performance, Productivity, SPA

Jade pre-compiling for SPA applications


There’s an endless debate regarding server-side vs. client-side templating on the web, so I’ll just say it right away to avoid getting caught in the crossfire: I don’t think that one is better than the other; I do think that this is a classic case of the right tool for the right job (and under the right circumstances). What I do want to share is a bit of a compromise, where you can leverage some of Jade’s power to increase your productivity and save some processing time in the client, without rendering the templates at runtime on the server.

Compiling Jade in development phase

The idea is simple: you can use Jade to pre-compile views. Those views can be –

  • Templates that don’t require run-time logical decisions to render
  • Small view pieces that you would normally hard-code into your HTML

I’m talking about a win-win situation which will boost your productivity and might even save some unnecessary client template rendering.

Boost up your productivity

Many developers (me among them) like the eye-pleasing Jade syntax; fortunately it’s not only shiny and pretty, but also provides some nice perks for your development flow.

  • First of all, it saves keystrokes
  • Second – it’s less error-prone than HTML markup, where you may discover an unclosed div only after it has corrupted your entire UI
  • Also, it’s more readable (although some may disagree)
  • And last but most definitely not least, you can “templatize” pieces of HTML that you previously couldn’t. For example, let’s say you’re using some pretty Kendo-Angular drop-down. The directive can accept some bindings, in our case k-rebind and k-options. So in every place you’re using this drop-down, you will write:

    <select kendo-drop-down-list data-k-rebind="vm.kRebind" data-k-options="vm.kOptions"></select>
    

    In such a case, if one day you have to change or modify the dropdown widget, you’ll have to change it in every single place.

    Or… you can use jade’s mixins:

    content.jade

    include ./widgets.jade
    
    div A page with a dropdown
    +getDropdown
    

    widgets.jade

    mixin getDropdown
            select#dropdown(kendo-drop-down-list, data-k-options="vm.kOptions", data-k-rebind="vm.kRebind")
    

    And then compile it with:

    jade content.jade --pretty
    

    And the output will be:

    <div>A page with a dropdown</div>
    <select id="dropdown" kendo-drop-down-list="kendo-drop-down-list" data-k-options="vm.kOptions" data-k-rebind="vm.kRebind"></select>
    

    The outcome is that you have extracted the dropdown into its own template (something you’ll rarely do) without “paying” anything rendering-speed- or memory-wise.

In conclusion, easier to read + less mistakes + less keystrokes + more templates = higher productivity.

Boost client rendering speed

How many times did you use client templating just to avoid writing the same piece of HTML more than once? I’m not talking about templates that are shown or hidden based on logical conditions, or anything that requires any kind of logic. I’m talking about cases where you just wanted to avoid code duplication, so you used your client templating. God knows I’m a sinner. But I’m only human, and no reasonable human wants to do the same work twice, surely not four times when the product guys change their minds, and definitely not six times when they change their minds again. Yet, in my humble opinion, that’s an insufficient reason to use client templating.

Consider the following case: you have multiple pages with the same header and footer. You use ng-include rather than directives, since you can’t see the point in modularizing these two views; you just want to avoid code repetition.
So you’ll have something like this:

sub-header.html

<div>This is header</div>

sub-footer.html

<div>This is footer</div>

content.html

<div ng-include="'sub-header.html'"></div>
    <div>
      Content
    </div>
<div ng-include="'sub-footer.html'"></div>

In this case, in order to fully render the view, your app will have to make 3 HTTP requests (sub-header, sub-footer, content) and use the template engine to render sub-header and sub-footer each time the user requests the view. A bit costly just to avoid code duplication.

Now consider this: use Jade to create 3 different views and then compile them into a single view while still in development. You’ll have one complete view that requires neither client templating nor additional HTTP requests to work, but is still separated into different files (I’ll discuss the file-size caveat later in the post). It will look something like this:

content.jade

include ./sub-header.jade

div This is my content

include ./sub-footer.jade

sub-header.jade

div This is the header

sub-footer.jade

div This is sub footer

Now, we compile content.jade:

jade content.jade --pretty

And the result will be:

<div>This is the header</div>
<div>This is my content</div>
<div>This is sub footer</div>

The same result as we would’ve gotten using a regular ng-include, but without the unnecessary HTTP requests and client processing.
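Rather than running the jade CLI by hand, you can make the compilation part of your build. Here’s a sketch using the grunt-contrib-jade task; the paths and the partial-naming convention are hypothetical, so adjust them to your project:

```javascript
// Sketch: compiling all Jade views to HTML as a build step
// (paths and the 'sub-*' partial convention are hypothetical).
module.exports = function (grunt) {
    grunt.initConfig({
        jade: {
            compile: {
                options: { pretty: true },
                files: [{
                    expand: true,
                    cwd: 'views',
                    src: ['**/*.jade', '!**/sub-*.jade'], // don't emit partials
                    dest: 'public',
                    ext: '.html'
                }]
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-jade');
    grunt.registerTask('default', ['jade']);
};
```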

But the file size is getting larger!

Definitely a valid argument: obviously, if we pre-compile every logic-less include, our files will get larger, so you need to ask whether you can afford those extra KBs.
Some questions you might ask:

  • Which will be higher – the number of users using the application, or the number of features and usage time per user? For example, if your application is some heavily featured GUI for IT professionals, the number of users will be relatively low, while the ‘visit time’ for each user will be long. In such a case, it makes a lot of sense to pre-compile the templates so your server helps out the client.
  • In which environment will the app run? Open to the public, or on a local network? In local environments you can allow yourself to be less concerned about download speed, also making pre-compiling a valid option.
  • Is your current load time acceptable? If yes, then you might consider gzipping your requests (reducing the response size by around 50–90%) while adding some more KBs to your files, so the trade-off might be worth it.

I can go on and on about more considerations, but I believe my point is clear enough. You shouldn’t blindly pre-compile your templates; you should benchmark the change and take many factors into consideration. But still, in my opinion, pre-compiling is a valid and helpful method to boost your client speed.

AngularJs, Grunt, Optimizations, Performance

Improving performance in production environment


By default, when we create data-bound elements, Angular attaches additional information about the scope and the bindings to the DOM node, and applies an ng-binding CSS class to the data-bound element. Debuggers (such as Batarang) and test frameworks (like Protractor) require this information to be able to run.

Let’s try it out by ourselves

We will set up a simple app with a controller and a directive with an isolated scope.

HTML

<div ng-controller="myController">controller</div>
<isolated-scope data-binding="isolated scope"></isolated-scope>

JavaScript

app.directive('isolatedScope', function() {
  return {
    scope: {
      binding: "@binding"
    },
    template: "{{binding}}"
  };
});

app.controller('myController', function($scope) {
  $scope.location = "You are in a controller";
});

Next, if we execute this in the console:

angular.element(document.querySelector('div')).scope();

We will get the scope of our controller.

$ChildScope: null
$$childHead: null
$$childTail: null
$$listenerCount: Object
$$listeners: Object
$$nextSibling: Scope
$$prevSibling: null
$$watchers: null
$id: 2
$parent: Scope
location: "You are in a controller"

And if we try –

angular.element(document.querySelector('isolated-scope')).isolateScope();

We will get the isolated scope of the directive

$$childHead: null
$$childTail: null
$$destroyed: false
$$isolateBindings: Object
$$listenerCount: Object
$$listeners: Object
$$nextSibling: null
$$phase: null
$$prevSibling: ChildScope
$$watchers: Array[1]
$id: 3
$parent: Scope
$root: Scope
binding: "isolated scope"


OK, that’s informative. So what’s the problem?

The problem is that, according to Angular’s “Running in Production” document, this info may come at the cost of a “significant” performance loss.

Luckily, the solution is quite simple. Since you need this information while developing and debugging, but you want to turn it off in production,
you can easily switch it on/off in the config phase of the application, and control this configuration with your build tool. Let’s see an example –

First, we configure debugInfoEnabled during the config phase of our application.

// application's config constant
angular.module('myApp').constant('myConfig', {
	enableAngularDebugInfo: true
});

// setup necessary changes in config phase
angular.module('myApp').config(function ($compileProvider, myConfig) {
  $compileProvider.debugInfoEnabled(myConfig.enableAngularDebugInfo);
});

The value of enableAngularDebugInfo should be true by default, since we need it for development.
Next, we should disable it when building the application for production. I will demonstrate an example with Grunt, but you can do it with Gulp, Cake, Broccoli or any other task runner.

Configuring GruntFile to disable debugInfoEnabled in production

For simplicity’s sake, I have removed from the GruntFile all tasks that aren’t directly relevant to disabling AngularDebugInfo:

module.exports = function (grunt) {

    // Project configuration.
    grunt.initConfig({
        'string-replace': {
            disableAngularDebugInfo: {
                options: {
                    replacements: [{
                        pattern: 'enableAngularDebugInfo: true',
                        replacement: 'enableAngularDebugInfo: false'
                    }
                    ]
                },
                src: 'temp/app.full.js',
                dest: 'temp/app.full.js'
            }
        }
    });

    // for build task, run string-replace.
    grunt.registerTask('build', ['string-replace']);
};

As you can see, we used the neat grunt-string-replace to replace true with false when running the build command. Obviously the actual build command will contain many more tasks (tests, jshint etc.); string-replace is only one of them.

And that’s it: now in your app.full.js file, angularDebugInfo will be disabled, so you can keep on developing while boosting your application’s performance in production environments.
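If you want to sanity-check the substitution itself, it boils down to a plain string replace; here’s a minimal Node sketch with an inlined sample standing in for the contents of app.full.js:

```javascript
// What grunt-string-replace does for us, reduced to a plain string replace.
// The source string stands in for the real contents of app.full.js.
const source =
  "angular.module('myApp').constant('myConfig', {\n" +
  '  enableAngularDebugInfo: true\n' +
  '});\n';

const built = source.replace(
  'enableAngularDebugInfo: true',
  'enableAngularDebugInfo: false'
);

console.log(built.indexOf('enableAngularDebugInfo: false') !== -1); // true
```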

AngularJs, Javascript, Optimizations, Performance

Shorten $digest cycles with one time bindings


In large-scale Angular apps (or poorly designed small-to-medium-scale ones), performance issues, if not caught in time, can be extremely painful to fix. Memory leaks, uncontrolled watchers, unnecessary expressions processed during the $digest cycle – the list goes on and on.

Today I wanted to share a little addition introduced in Angular 1.3 that allows us to shorten our $digest cycles.

Consider the following (simple) example:

Controller


myApp.controller('MyController', function(urlConfigurations) {
    var vm = this;
    vm.userId = 'firstUser';
    vm.facebookProfile = urlConfigurations.facebook;
    vm.twitterProfile = urlConfigurations.twitter;
    vm.linkedin = urlConfigurations.linkedin;

    vm.nextUser = function() {
        vm.userId = 'secondUser';
    };
});

Constant

myApp.constant('urlConfigurations', {
    facebook: 'http://facebook.com/?profile=',
    twitter: 'http://www.twitter.com/?id=',
    linkedin: 'http://www.linkedin.com/?id='
});

View

<div ng-controller="MyController as vm">
      <a ng-href="{{vm.facebookProfile + vm.userId}}">Facebook</a>
      <a ng-href="{{vm.twitterProfile + vm.userId }}">Twitter</a>
      <a ng-href="{{vm.linkedin + vm.userId}}">Linkedin</a>
      
      <button ng-click="vm.nextUser();">Next User</button>
</div>

Looks like a legitimate piece of code, right? We have a urlConfigurations constant which holds our URLs; we retrieve it in our controller and send it to the view, which combines it with vm.userId to create links to the social media accounts of the specific user id. The nextUser() method allows us to change the user id, which will also modify the social media URLs.

Where is the problem?

The problem is that we have 3 unnecessary watchers here: vm.facebookProfile, vm.twitterProfile and vm.linkedin. When we update vm.userId, it will trigger a $digest cycle that processes everything on the scope, including those 3 watchers that will never change at runtime, making processing them redundant and time/resource consuming. Now consider a case where we don’t have only 3, but maybe 100 (a good example might be a localization implementation).

The solution

As of Angular 1.3, the solution is quite simple. We can just prefix the property we want to bind only once with ::. So in our view, we will use:

<a ng-href="{{::vm.facebookProfile}}{{vm.userId}}">Facebook</a>
<a ng-href="{{::vm.twitterProfile}}{{vm.userId}}">Twitter</a>
<a ng-href="{{::vm.linkedin}}{{vm.userId}}">Linkedin</a>

In this case, vm.facebookProfile, vm.twitterProfile and vm.linkedin won’t be processed in the $digest cycle each time vm.userId is modified.
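Under the hood, a one-time binding registers a normal watcher that deregisters itself once the expression settles on a defined value. Here is a minimal sketch of that idea in plain JavaScript – a toy digest loop, not Angular’s actual implementation:

```javascript
// Toy digest loop illustrating the one-time binding idea
// (a simplified sketch, not Angular's actual source).
function createScope() {
  const watchers = [];
  return {
    watch(getter, listener) {
      const w = { getter: getter, listener: listener, last: undefined };
      watchers.push(w);
      return function unwatch() {
        const i = watchers.indexOf(w);
        if (i !== -1) watchers.splice(i, 1);
      };
    },
    digest() {
      // iterate over a copy so watchers may deregister themselves mid-loop
      for (const w of watchers.slice()) {
        const value = w.getter();
        if (value !== w.last) {
          w.last = value;
          w.listener(value);
        }
      }
    },
    watchOnce(getter, listener) {
      const unwatch = this.watch(getter, function (value) {
        listener(value);
        if (value !== undefined) unwatch(); // settled: stop watching forever
      });
    },
    watcherCount() {
      return watchers.length;
    }
  };
}

const scope = createScope();
const facebookProfile = 'http://facebook.com/?profile='; // never changes
scope.watchOnce(function () { return facebookProfile; }, function (value) {
  console.log('rendered ' + value);
});

scope.digest();
console.log(scope.watcherCount()); // 0 -- later digests skip it entirely
```

The one-time watcher fires on the first digest and then removes itself, which is exactly why constants like these stop costing anything in subsequent $digest cycles.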