Continuous Integration, Grunt, JavaScript, Tests, Xvfb

Running UI tests on real browsers in continuous integration using X virtual framebuffer (through a task runner)

Many projects that run UI tests as part of CI (continuous integration) rely on PhantomJS, a great headless browser. Unfortunately, using PhantomJS has some drawbacks:

  1. You don’t really cover quirks that differ from browser to browser (working on Chrome but not on Firefox, working on Firefox but not on IE), so you might actually ship a buggy screen to production
  2. Debugging is hard. Although PhantomJS runs on top of WebKit, it has its own quirks, so you might get failing tests while all of your browsers show that everything’s OK. Debugging something you can’t see is troublesome.

UI tests with real browsers

So you’re not completely satisfied with PhantomJS and you want to run your UI tests on real browsers. That’s a good idea with one small difficulty: you want your UI tests to run as part of CI, but most CI servers don’t have displays.

Enter Xvfb (X virtual framebuffer). From Wikipedia: Xvfb is a display server implementing the X11 display server protocol. In contrast to other display servers, Xvfb performs all graphical operations in memory without showing any screen output. From the point of view of the client, it acts exactly like any other X display server, serving requests and sending events and errors as appropriate. However, no output is shown. This virtual server does not require the computer it is running on to have a screen or any input device. Only a network layer is necessary.

So basically what you’ll need to do is:

  1. Start Xvfb
  2. Run your tests
  3. Kill Xvfb

Running Xvfb through a task runner

In this example we’ll use Grunt to run Xvfb as one of the tasks, but this is also possible with Gulp (I’m not sure about other task runners).
First, we’ll install the required packages. Make sure you don’t forget the --save-dev flag, since we want to update our package.json with the new development dependencies.

npm install --save-dev grunt-env
npm install --save-dev grunt-shell-spawn
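
If your Gruntfile doesn’t load plugins automatically (for example via load-grunt-tasks), remember to load the two plugins inside your Gruntfile — a minimal sketch:

// Gruntfile.js
module.exports = function (grunt) {
    // load the plugins installed above
    grunt.loadNpmTasks('grunt-env');
    grunt.loadNpmTasks('grunt-shell-spawn');
    // ... task configuration goes here (see below)
};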

Next we’ll set up the Grunt tasks for Xvfb:

        shell: {
            xvfb: {
                command: 'Xvfb :99 -ac -screen 0 1600x1200x24',
                options: {
                    async: true
                }
            }
        },

        env: {
            xvfb: {
                DISPLAY: ':99'
            }
        },
Now we’ll set up the UI test task. For this example we’ll use Protractor, but it can be any other library.

        protractor: {
            options: {
                keepAlive: false,
                configFile: "protractor.conf.js"
            },
            run: {}
        },

In protractor.conf.js we should create the configuration to run the UI tests on the required browsers (I assume you already have the proper setup for this, since it’s not in the scope of this post).
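
For reference, a minimal protractor.conf.js might look something like the sketch below (the spec paths, browsers and Selenium address are placeholders you should adapt to your setup):

// protractor.conf.js
exports.config = {
    seleniumAddress: 'http://localhost:4444/wd/hub',
    specs: ['e2e/**/*.spec.js'],
    multiCapabilities: [
        { browserName: 'chrome' },
        { browserName: 'firefox' }
    ]
};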

Now we have:

  1. Grunt test task configuration
  2. Protractor configuration which will run the tests on all the required browsers
  3. Grunt Xvfb tasks configuration

Let’s create a task to combine everything together:

grunt.registerTask('CI-E2E', ['shell:xvfb', 'env:xvfb', 'protractor:run', 'shell:xvfb:kill']);

After this, you can simply run

grunt CI-E2E

and all your UI tests will run on top of Xvfb. That’s it.

JavaScript, Memory Leaks

Debugging memory leaks: When the famous 3 snapshot technique can cost you days of development

Pretty much every second article about debugging memory leaks demonstrates the great 3 snapshot technique introduced by the GMail team, which was used to debug heavy memory leaks in GMail. While there’s no doubt about it being a great idea, not many mention the problems that may occur if you rely on only 3 snapshots without understanding 100% of your framework, vendor libraries and the internals of JavaScript. You might say that one must actually understand and know every line of code in a project he is working on, and it is surely an admirable statement, but not really practical in the real world for medium+ sized applications. Just for some proportions: one of the world’s most famous UI libraries, Kendo, is 166,000 lines of code.

What’s the possible problem with only 3 snapshots?

Well, basically, that snapshot #3 may send you barking up the wrong tree for days. The 3 snapshot technique suggests that objects allocated between snapshots #1 and #2 that still exist in snapshot #3 might be leaking, which is not true in many cases: singleton implementations, services, factories (a nice example is Angular’s on-demand instantiation), some of the native JS interfaces and basically everything that is instantiated between #1 and #2 and has every right to keep on living in snapshot #3.


Let’s take a simple code sample where we have 2 buttons, one adds items to an array and the other removes them. The array lives in a singleton which is instantiated only on demand, but after it has been instantiated once, it keeps on living as a global. (I know, globals are the devil’s work, but bear with me for the sake of the example).


<!DOCTYPE html>
<html>
  <head><title>Memory Test</title></head>
  <body>
    <button id="addData">addData</button>
    <button id="removeData">removeData</button>
    <script src="singleton.js"></script>
  </body>
</html>


var myDataLayer = null;

document.getElementById("addData").addEventListener("click", addData);
document.getElementById("removeData").addEventListener("click", removeData);

function getSingleton() {

    if (myDataLayer) return myDataLayer;

    myDataLayer = (function() {

        var instance;

        function init() {

            var privateDataArray = [], i;

            function privateAddToData(numOfItems) {
                for (i = 0; i <= numOfItems; i++) {
                    privateDataArray.push(i); // push some data into the array
                }
            }

            function privateEmptyData() {
                privateDataArray.length = 0;
            }

            return {
                publicAddToData: privateAddToData,
                publicEmptyData: privateEmptyData
            };
        }

        return {
            getInstance: function () {
                if ( !instance ) {
                    instance = init();
                }
                return instance;
            }
        };
    })();

    return myDataLayer;
}

function addData() {
    getSingleton().getInstance().publicAddToData(10000); // 10000 is an arbitrary example amount
}

function removeData() {
    getSingleton().getInstance().publicEmptyData();
}

OK, so we have this tiny app and we want to make sure we haven’t created any memory leaks. Let’s use the 3 snapshot technique (please make sure to test it in incognito mode with all active extensions disabled):

  1. Open the app and take the snapshot in our “healthy” mode
  2. Now click on addData and take another snapshot
  3. After this, click on removeData and take another snapshot
  4. Next, in snapshot #3, click on Summary and then filter only Objects allocated between Snapshot #1 and Snapshot #2

We will see something like this:
Objects allocated between snapshot #1 and snapshot #2

Wait, what? We have objects that aren’t browser internals still allocated in snapshot #3. Do we have a leak?

Well, no. As you can see, those objects and arrays are related to the singleton we instantiated and to JavaScript’s UI Event and MouseEvent interfaces. All of them are logically supposed to live in the 3rd snapshot: the singleton because it’s a global, and the native JS objects because we still have an active listener (those objects were created when the actual click was performed, which is why they show up as objects allocated between snapshot #1 and #2). So yes, we have objects in the 3rd snapshot, but they are not leaking.

But that’s easy, I know I need to ignore those objects. I’ll just look for something relevant

In this tiny app you may easily find what is relevant and what is not, but if you’re debugging a medium+ application with multiple vendor libraries and complicated business logic, you may waste a lot of time chasing down irrelevant retainers.

So, what can we do?

Consider this: when you have a memory leak in a set of actions (a flow), repeating this flow several times will result in a positive linear graph of the JS heap or DOM count (or both). In simple words, in most cases the graph will keep going up as long as you keep doing the same action that is causing the leak (I’ll write about a single leak which won’t result in a linear graph in a few moments). Meaning that objects leaked in snapshot #3 will be kept in all following snapshots, and additional memory will be allocated on top of what was there in #3. So if you have a linear leak, you may take 5 snapshots and, in snapshot #5, compare snapshots #3 and #4 and discover similar types of objects as you discovered between #1 and #2.

For example:
If in snapshot #3, you are viewing objects allocated between #1 and #2, and you see something like

leakedObject @1

(@ is the location in memory),
then in snapshot #5, where you’ll be comparing objects that were allocated between #3 and #4, you’ll see something like

leakedObject @2

. Meaning, the same object type leaked twice and created a linear leak. If you remove the filter completely and view everything existing in snapshot #5, you will see

leakedObject @1
leakedObject @2
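
To make the idea of a linear leak concrete, here is a hypothetical flow (not part of the sample app above, the names are made up) that leaks a little more on every repetition, so repeating it between snapshots keeps the heap graph climbing:

// leaky-flow.js - hypothetical example of a linearly leaking flow
var leakedNodes = []; // global cache that is never cleared

document.getElementById("leakyButton").addEventListener("click", function () {
    var node = document.createElement("div");
    node.innerHTML = new Array(10000).join("x"); // some payload
    leakedNodes.push(node); // detached node retained forever -> leaks more on every click
});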

But what good will it do to take 5 (or even 7) snapshots and not 3?

Let’s get back to our tiny app and repeat the snapshot process, but this time we’ll take 7 snapshots and not 3. Repeat these actions (don’t forget incognito mode with all extensions disabled):

  1. Take a snapshot #1 before we begin
  2. Click on add data, take snapshot #2
  3. Click on remove data, take snapshot #3
  4. Click on add data, take snapshot #4
  5. Click on remove data, take snapshot #5
  6. Click on add data, take snapshot #6
  7. Click on remove data, take snapshot #7

You’ll see something like this:
7 Snapshots

Next, if you go to snapshot #3 and compare objects allocated between #1 and #2, you’ll see something like this:

3 Snapshots

As you can see, we have allocated objects and an increase of 100kb in memory, which, in case you are not 100% familiar with 100% of the app, would have made you think you have a leak.

Now let’s go to snapshot #5 and there compare between snapshots #3 and #4:

5 snapshots

You’ll see 2 things: 1. No more increase in memory between #3 and #5. 2. No more allocated objects still living in snapshot #5 (objects wrapped in brackets are browser internals, we ignore them).

But wait, the browser internals make us worry; we don’t want to stick our head in the sand and we want to make sure nothing is leaked. No problem, let’s take a look at snapshot #7 and there compare objects that were allocated between #3 and #4:

snapshot 7

We see nothing. No memory size increase, no allocated objects, internal or not. That means that whatever happened between #3 and #4 was completely removed, meaning we don’t have any leaks.

Do you see the difference? Using 7 snapshots we validated that we don’t have a linear memory leak and our application is in a healthy state. But if we had used only 3 snapshots, we could have wasted our time chasing down retainers just to find that it’s really OK for the singleton and the internal JS interfaces to be kept in memory.

What about one-time leaks?

You are right to think that only the 3 snapshot technique will catch a leak that happens only once between snapshots #1 and #2. Unfortunately I don’t have better advice than going and checking every single retainer, understanding what it does and then deciding if it’s a leak or by design. My only advice is to be smart about it: if you see that in the first run your memory jumps to unreasonable numbers (unreasonable depends on the application and the devices running it), you definitely should take the time to look into it. But if you have an additional 100-200kb, or even 1mb, that is allocated only once and you’re not sure whether it should be, most current devices (of course with some exceptions) are strong enough to make you think twice about whether it’s worth your time.


Session locking – big bad and sometimes (or mostly) unnoticed until it’s too late con of long polling

In this post I’m not going to discuss long polling vs. short polling vs. sockets, and I’m not going to say anything against (or in favor of) long polling. I assume that anyone reading this has already done their research and is aware of most (or all) of the pros and cons of each method. I just wanted to share something that is not mentioned in most of the discussions I have heard about polling methods: PHP’s session locks.

By default, PHP uses files for storing session data. This means that in the case of long polling, by default, the relevant session file will be locked, so any additional incoming request from the same user will have to wait until the session file is unlocked.

“So what? This is a price I’m willing to pay”

Follow me on this one – you have a client app that creates a long polling connection, waits for the response, even displays a nice loading screen. So far a good (or OK-ish) user experience. But what if the user says “OK then, I know this is going to take a while, so I’ll just open a new tab while this is loading and do other stuff”? Well, if you were not prepared for this, then you are, gently put, screwed. The user won’t be able to access your application, and I’m not talking about some nice ‘Please wait, in process’ screen, I’m talking about a server-takes-forever-to-respond scenario.

I’m probably a little bit over-dramatic here. But this will happen, and if it isn’t OK with you (or your product manager), then it will be a problem that may be a bit expensive to fix, depending on when you discover it.

“On second thought, maybe it’s a price I’m not willing to pay”

You can take different approaches to fixing this, depending on how much you are willing to invest (in time or money), the stage your project is at, your infrastructure and other factors.

  • Explicitly closing the session with session_write_close – Simple in theory: after you finish writing data to the session file, you can close it and unblock it for other processes; afterwards you’ll be able to read from the session but not write to it. In practice you should probably take a more cautious approach with this one: most backend servers today are written on top of frameworks, whether it’s Laravel, Yii, Cake or anything else. Each of those frameworks ships with components that require session writing privileges, components such as Authentication, Permissions etc. So make sure you really understand what’s going on under the hood of your framework before unblocking the session.
  • Using non-blocking session storage such as a database, Redis, Memcached or other equivalents – Some may argue, but this is my personal favorite for a couple of reasons:

    1. Managing session locks by yourself is more prone to mistakes than letting a fully tested and proven framework do it for you.
    2. It prevents a possible security breach where the session file is stored in a shared folder and may be accessible to unauthorized 3rd parties.
    3. It’s easier to move to a multi-server setup when your session data is accessible from multiple servers.
    4. Performance-wise, you have a larger set of tools to optimize your database speed than you have with a filesystem.

To summarize, long polling is still a valid approach in many use cases, just make sure you (and/or your product team) understand the full list of pros and cons of doing it.

JavaScript, Node.js

Improve your logging with stack trace

Informative logs play a crucial part in fast debugging, especially when you debug someone else’s code.
Today I wanted to share a nice trick that will help you detect the origin of a problem a bit faster.

Let’s say you are debugging a simple flow for a file upload, which was written by someone else or by you a couple of months ago, so you don’t remember exactly where everything is.

You click on the upload button and then bam! you get an error. You are confident that the log is informative enough to explain why,
so you tail -f the log and there you see,

10:56:16 pm – Could not upload file to the file server

Seems clear enough, but now you want to see the piece of code that generated this error.
How would you find it? Ctrl+F the entire project folder for the log string? What if the same log exists in multiple files?

Well, obviously you will find it soon enough, but why not try to save those 30 seconds by logging the function that generated this error.

stack-trace to the rescue

Node.js has multiple packages to help us easily deal with our stack trace; I personally prefer stack-trace.
Let’s install the package:

npm install stack-trace --save

Now let’s assume we have a common module that holds our logger (IMO it’s better to wrap the logger in a module than to use it directly in each module we have, just in case we want to replace it later).

Our common module will look like this:

var stackTrace = require('stack-trace');
var moment = require('moment');

var exports = {};

exports.logs = {
    debug: function (text) {

        // get the stack trace
        var trace1 = stackTrace.get()[1];
        var date = moment().format('h:mm:ss a');

        // build the log phrase from the stack trace file name, method and line number
        var trace = {
            level1: date + " - FILE: " + trace1.getFileName().replace(/^.*[\\\/]/, '') + " - METHOD: " + trace1.getMethodName() + " (" + trace1.getLineNumber() + ")"
        };

        // log.debug with whichever library you choose, console is only for simplicity
        console.debug(trace.level1 + "\n" + text);
    }
};

module.exports = exports;

We have required stack-trace and the moment library (a great library for handling dates in JavaScript; if you are not familiar with it you should check it out). After this, we exposed logs on the module, and logs has a debug method. The debug function accepts the string we want to log, then gets the stack trace and the time, and prints them before the actual text. I’ve used console.debug just for simplicity, but you may use any logger you want.

Now let’s go back to our file upload example.
Let’s say we are handling a request for /file/upload in a route file called file-route.js:

var common = require('../modules/common');

// ... some upload logic 


function error(err) {
    common.logs.debug("Could not upload file to the file server: " + err);
}

// ... continue

The output to the console will be:

10:56:16 pm – FILE: file-route.js – METHOD: uploadFile(8)
Could not upload file to the file server: METHOD NOT ALLOWED

Which will be much more informative than just a simple

10:56:16 pm – Could not upload file to the file server

JavaScript, Protractor, Tests, WebDriver

Using angular.element with WebDriver’s executeScript to perform actions which aren’t accessible from the UI

A while ago I had the following problem:
I needed to write a UI test for a canvas-based component, which received an array of objects (for the sake of this post let’s assume those were car objects) and drew them on the canvas to represent those cars. It also had functionality to display car information in a popup when a user clicks on a car or when a model on the scope is updated (let’s call it vm.selectedCar).

I needed to test the information popup functionality.
The problem was that I didn’t have anything to work with, because:

  1. I didn’t have any element to select, because there were no DOM elements on the canvas.
  2. I couldn’t rely on mouse actions because the elements’ positions on the canvas were inconsistent by design.

So basically I couldn’t do anything on the UI level to trigger the appearance of the information popup.


As you may know, you can execute JavaScript by using WebDriver’s executeScript, so if we combine it with angular.element’s ability to retrieve the scope of a given element, we can basically perform actions on the scope that we can’t do by interacting with the UI. Let’s see some code:

var openCarInfoTroughConsole = function(args) {
    var carsCanvasScope = angular.element($("#" + args.canvasId)).scope();
    carsCanvasScope.vm.selectedCar = args.selectedCar;
    carsCanvasScope.$apply();
};

var mockCar = {
    canvasId: 'canvasElmId',
    selectedCar: 'carId33'
};

browser
    .executeScript(openCarInfoTroughConsole, mockCar)
    .then(performTests, handleError);

As you can see here, the first argument passed to executeScript is a function that will be executed in the browser’s context (you can also pass a string).
The second param is the arguments we can pass from the test scope to the function scope when it is executed; in this case we use them to pass our mock car’s id.

openCarInfoTroughConsole itself uses the angular.element($0).scope() / isolateScope() trick to get the scope of the selected element, but in this case, instead of using $0, we locate our element by id.

Important to note – depending on your Angular version and app configuration, debugInfoEnabled may be set to false. The app must run with debug info enabled, otherwise scope() and isolateScope() will be undefined. Since in most cases you’ll run tests in a development environment, it shouldn’t be a problem.
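
If your app does disable debug info, here is a minimal sketch of re-enabling it (assuming your main module is called 'app'; alternatively you can call angular.reloadWithDebugInfo() from the console):

// enable debug info so scope() / isolateScope() are available on elements
angular.module('app').config(['$compileProvider', function ($compileProvider) {
    $compileProvider.debugInfoEnabled(true);
}]);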

After we get the scope of our component, we update the selectedCar model with the id of our mock car and then trigger $apply to notify Angular that we changed the model. After this, the information popup will be displayed and you’ll be able to run tests on it. That’s it.

DB, ElasticSearch, NoSQL

ElasticSearch: Performing aggregations and sub-aggregations on filtered children of filtered parents with a single query

Edit: This post refers to ES 1.x. Queries in 2.x and 5.x are pretty much the same, with only a couple of changes around the filtered query structure, which was deprecated. Check out the breaking changes on the official website.

On a project I’m currently working on, I wrote a proprietary ODM for ES. One of the functions this ODM had to support was aggregations and sub-aggregations on filtered children of filtered parents. The problem was that the ES docs provided examples for each separate query (aggregation, sub-aggregation, has_child, has_parent), but they didn’t really explain how to properly combine all of them together. So the situation was that I had a clear idea of the API the ODM would expose, but I wasn’t sure about the request tree it should generate and pass to ES. So after some research, I thought I’d share my way of doing it.

Query definition

Let’s assume that we have an index called book_store which has 2 types, agency and books. Our requirement is to get all books which were published, but are not yet sold out, belonging to an agency named “Marvel” with whom we don’t have any legal issues, and to get total sales, per book, per month.

This is the full query. I’ve simplified it in terms of conditions and the number of aggregations in order to make the request tree clearer. We’ll go over each part of it soon enough.

GET book_store/book/_search
{
  "query": {
    "filtered": {
      "query": {
        "filtered": {
          "filter": {
            "bool": {
              "must": [
                {
                  "term": {
                    "published": true
                  }
                }
              ],
              "must_not": [
                {
                  "term": {
                    "sold_out": true
                  }
                }
              ]
            }
          }
        }
      },
      "filter": {
        "has_parent": {
          "type": "agency",
          "query": {
            "filtered": {
              "filter": {
                "bool": {
                  "must": [
                    {
                      "term": {
                        "name": "Marvel"
                      }
                    }
                  ],
                  "must_not": [
                    {
                      "exists": {
                        "field": "legal_issue_id"
                      }
                    }
                  ]
                }
              }
            }
          }
        }
      }
    }
  },
  "aggs": {
    "book_name": {
      "terms": {
        "field": "book_name"
      },
      "aggs": {
        "publish_months": {
          "date_histogram": {
            "field": "publish_date",
            "interval": "month"
          },
          "aggs": {
            "sales": {
              "value_count": {
                "field": "books_sold"
              }
            }
          }
        }
      }
    }
  },
  "size": 0
}

Step by step

As you can see, once you figure out the correct structure the query is pretty straightforward. First, we set up a query on the type we want to work with (the inner filtered query): we combine multiple filters by wrapping them in a bool query to get only the books which are published but not yet sold out.

Next, we filter the matches of that query with has_parent, so make sure you’ve defined a child-parent relationship between your types. In the has_parent filter we define that our matches must have an agency parent, and this agency parent is also filtered by a bool query, which states that the agency must be named “Marvel” and that we must not have any legal issues with it.

Aggregating on the filtered data

Up to this point we have defined the filters and matches on which we will be performing our aggregations. Now let’s examine the desired aggregations: as you probably know, ES allows us to nest aggregations, so we can perform an aggregation on already aggregated data.

  1. The book_name terms aggregation will aggregate by book name.
  2. The publish_months date_histogram aggregation will perform a date histogram aggregation for each book name; the date buckets will be listed under each book name bucket.
  3. The sales value_count aggregation will take every date range in the buckets created by the date histogram aggregation, and under each date list the count of the sold books.

So basically now you have the aggregation of sold books, per date range, per book. That’s all.
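
For completeness, here is a minimal sketch of running this query from Node.js with the official elasticsearch client and walking over the nested buckets (the client setup and the query variable are assumptions, adapt them to your project):

var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({ host: 'localhost:9200' });

// query = the request body shown above (everything inside the top-level braces)
client.search({ index: 'book_store', type: 'book', body: query }).then(function (resp) {
    resp.aggregations.book_name.buckets.forEach(function (book) {
        book.publish_months.buckets.forEach(function (month) {
            // sold books count, per month, per book
            console.log(book.key, month.key_as_string, month.sales.value);
        });
    });
});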

P.S. you may have noticed that I’ve written that this is my way of executing this query. While I came up with this query after some strenuous research in the ES docs, I don’t rule out that it may be optimized a bit further. So if anyone has a better idea, it is surely welcome.

Debugging, JavaScript

Faster 3rd party widget debugging using DOM breakpoints

Imagine the following scenario:
You get a requirement to implement a multi-select dropdown with icons near each option and filtering capabilities. Assuming your first course of action won’t be writing it from scratch, you google around to find a library which meets those needs. You find exactly what you need, it even has a lot of stars on GitHub, so it seems safe to use. Jackpot.
You implement this dropdown and everything works fine.

After 2 weeks you get a call from the good guys at QA saying that sometimes, when filtering random strings, the icons of the filtering results disappear. How would you handle it? I assume you’ll google the problem to see if this is a known issue with this library. But what if you don’t find anything, then what?

DOM Breakpoints to the Rescue

In order to understand what causes the problem, we’ll need to know exactly which code removes the DOM node in our library and follow its call stack. We can do this by setting a DOM breakpoint on the removed element. Let’s see some code:


<!DOCTYPE html>
<html>
  <head>
    <script data-require="jquery@3.0.0" data-semver="3.0.0" src=""></script>
    <script src="3rdPartyLib.js"></script>
  </head>
  <body>
    <i id="myIcon">ICON</i>
  </body>
</html>

// Assume this is a 3rd party library you are using.

var iconRemover = function() {

    // Imagine we have much code here

    function firstInStack() {
        // Imagine we have much code here
        secondInStack();
    }
    function secondInStack() {
        // Imagine we have much code here
        thirdInStack();
    }
    function thirdInStack() {
        $("#myIcon").remove(); // removes the icon by id
    }

    setTimeout(firstInStack, 10000);
};

iconRemover();

In order to understand what causes the icon element to be removed, inspect the element, then open ‘Break on’ in its context menu and select ‘Node removal’ (we’ll cover the other options later):


Now we’ll wait 10 seconds until the first timeout of our 3rd party library kicks in and the debugger opens with the following screen:


As you can see, the breakpoint stopped at jQuery’s removeChild call. If you look at the call stack, you’ll be able to see the chain of calls that eventually invoked jQuery’s remove: firstInStack called secondInStack, which called thirdInStack, which removed the icon by id. It gives us a crystal clear picture of what’s going on, which makes debugging much easier. You can click on any function in the stack to examine its contents; if we click on thirdInStack we’ll see:


One thing you need to make sure of is that you’ve checked the Async option in the debugger (I’ve marked it in the previous screenshot). This instructs the debugger to show asynchronous functions in the call stack.

Types of breakpoints

As you can see there are 3 types of breakpoints:

  • Subtree Modifications – addition, removal or modification of any child element
  • Attribute Modifications – any change in the attributes of the element under inspection
  • Node Removal – removal of the element under inspection

That’s it, pretty simple yet efficient.

Apache, Nginx, Optimizations, Performance

HTTP Compression: Reduce up to 90% of HTTP response size with Gzip

Speed is one of the most important (if not the most important) aspects of a quality application. Among other things, application speed is affected by the speed of HTTP requests, which is affected by things we can’t control (network connection) and things we can (response size, structure, etc.). HTTP compression provides a neat method to control response size and reduce the amount of time it takes for an HTTP request to complete.

Gzip is one of the most popular compression utilities and can reduce your response size by up to 90% (you can see a compression list here). One of the nicer parts about Gzip is that, from the server’s point of view, it’s relatively easy to set up on most modern servers, and from the client’s point of view you literally don’t need to do anything. All modern browsers support it; you can see it if you open the request details in your network tab and look at Accept-Encoding:


What basically happens is that the HTTP request notifies the server that the client can accept gzipped content, and the server, if configured, gzips the content before returning the response.

Configure Apache

Apache supports 2 compression options, mod_deflate and mod_gzip. We’ll use mod_deflate since this mod is actively maintained, comes out of the box and is easy to set up.

As explained, mod_deflate comes right out of the box in the latest Apache installs (at least with the Windows installer and through Ubuntu’s apt-get), so you don’t need to install anything. To be on the safe side you can check if the mod is available by running:

apachectl -D DUMP_MODULES | grep deflate

You should see something like:

deflate_module (shared)

After we’ve validated that the mod is active, add this to your .htaccess file:

<IfModule mod_deflate.c>
        <IfModule mod_filter.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/x-javascript application/javascript application/ecmascript application/rss+xml application/xml application/json
        </IfModule>
</IfModule>

This is pretty much self-explanatory: we’ve just added multiple content types to be compressed, after checking that the deflate and filter mods are enabled. Note that if you need to support really old browsers you can use Apache’s BrowserMatch directive:

BrowserMatch [HTTP-Agent-Regex] gzip-only-text/html

You can also set the DeflateCompressionLevel directive to control the compression level:

DeflateCompressionLevel [1-9]

The higher the value, the better the compression, at the cost of more CPU.

Now restart Apache and that’s it.

Configure Nginx

In order to configure Nginx you should edit your nginx.conf file:

gzip on;
gzip_types text/html text/plain text/xml text/css application/x-javascript application/javascript application/ecmascript application/rss+xml application/xml application/json;

Also on Nginx you can disable gzip for certain browsers and control the compression level:

gzip_disable [HTTP-Agent-Regex];
gzip_comp_level [1-9];

Now restart nginx and everything should be working.


Configure IIS

I’ve never really configured gzip for IIS, but a quick google yielded this highly voted answer.

How can I tell if my response is compressed?

You can make sure your response came back compressed if you open your network tab and look at the response headers:

Response headers
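
If you prefer checking from a script instead of the network tab, here is a small Node.js sketch (the host and path are placeholders) that asks for gzipped content and prints the Content-Encoding header the server returns:

var http = require('http');

http.get({
    host: 'example.com',
    path: '/',
    headers: { 'Accept-Encoding': 'gzip' }
}, function (res) {
    // should print "gzip" when compression is configured correctly
    console.log(res.headers['content-encoding']);
});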

You can also see the before/after compression size in the main HTTP Requests view:

If you look at the Size column, you’ll notice a black number and a greyed out number. The black number represents the compressed size that was actually transferred, while the grey one represents the original, uncompressed size.

That’s it, enjoy.