Improve your Angular test performance by 600%

My first title was actually "This one weird trick to improve your Angular test performance by 600%," but the click-bait seemed a bit obvious. Hopefully the 600% is still enough to make you say, "Wait, what?" You probably didn't realize that your Angular test suite could run that much faster. We have 2,541 Angular unit tests at the time of this writing, and it used to take nearly 40 seconds to run them all. After the change below, it takes six and a half. Here, I'll prove it. On the left are our tests before this change, and on the right, after:

[Image: side-by-side test comparison]

How can you achieve this same speedy test bliss?

Stop using inject

There it is. I said it. inject is SLOW. Not only that, but it also has a memory leak. This is actually what prompted me to start investigating alternative test setups. When we reached about 1,800 tests, we started to see slowdown when running the whole suite with testem. It would get through 1,500 or 1,800 tests and then slow way down. For those interested in how you figure out this sort of thing, here's what I did:

  1. First I ran the tests normally and captured the heap usage with Google Chrome's developer tools just to confirm there was in fact a memory leak. No doubt about it. Something was creating objects and not releasing them.
  2. I replicated our test suite in a controllable way by duplicating our test setup in a loop so I could remove various pieces to isolate the problem (see the sketch after this list). I didn't use anything particularly Angular-specific in the tests themselves so that I didn't have to change the test setup each time to get them to pass.
  3. I ran the test loop without module('app'). There was still a leak.
  4. I ran the test loop without inject(function(){}). Ah . . . now everything runs smoothly.
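
Here's a minimal sketch of that isolation loop, assuming jasmine with angular-mocks; the iteration count and spec body are illustrative, not our real suite:

describe 'leak isolation', ->
  for i in [1..2000]
    describe "iteration #{i}", ->
      # Comment this out to check for a leak without module('app')
      beforeEach module 'app'
      # Comment this out to check for a leak without inject
      beforeEach inject ($injector) ->
      it 'does nothing Angular-specific', ->
        expect(true).toBe(true)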

But inject is how you bootstrap the app in a controllable way!

Indeed it is. We use jasmine as our test runner, and we rely heavily on mocking to isolate the subject under test. Our previous test setup looked like this (note that we write tests in CoffeeScript):

describe 'SomeController', ->  
  Given -> module 'app', ($provide) ->
    # Stub out SomeService prior to bootstrapping
    $provide.constant 'SomeService', jasmine.createSpyObj('SomeService', ['someMethod'])

  Given inject ($injector, $controller, $rootScope) ->
    # Get a reference to SomeService
    @someService = $injector.get('SomeService')
    # Create a scope for the test
    @scope = $rootScope.$new()
    $controller('SomeController', { $scope: @scope })

So how can we mock dependencies of controllers and services without inject? How can we control what's injected without overriding it prior to bootstrapping? By using inject . . . er . . . but only once.

Single inject setup

The reason inject is so slow is that it has to bootstrap your Angular application, and it does this for every test. So in our suite, it's doing it . . . well, not 2,541 times exactly. It basically happens once per file because of how jasmine runs nested describes. This is why, when you run Angular tests (at least with jasmine), they complete in batches. You'll see 10 tests complete basically instantaneously, then a slight pause, then another 15 complete, then another pause, and so on. Those pauses are inject bootstrapping the app. And I thought, wouldn't it be great if all the tests could run as fast as they do after bootstrapping? And then I thought, maybe you can set up the app once at the beginning and then run all the tests. This means calling inject up front in a helper, but it also means finding a different way to inject dependencies to control the boundaries of the test.

The inject helper

Clearly, we still need to call inject to get angular to run correctly. So I wrote this helper to do that up front:

beforeEach ->  
  # I'm putting $injector on window later, so basically this will only happen the first time
  if not window.$injector
    # Call module('app', function($provide){}) just once up front
    module 'app', ($provide) ->
      # Stub these things globally (more on this later)
      $provide.constant '$timeout', jasmine.spyWithProps '$timeout', ['cancel']
      $provide.constant '$interval', jasmine.spyWithProps '$interval', ['cancel']
      $provide.constant 'RootScopeHelpers', jasmine.createSpyObj 'RootScopeHelpers', ['register']
      $provide.constant '$httpBackend', jasmine.createSpy '$httpBackend'

    # Call inject just once and put very commonly used services on window
    inject ($injector, $rootScope, $compile, $controller, $window, $service, $timeout, $interval) ->
      window.$injector = $injector
      window.$rootScope = $rootScope
      window.$compile = $compile
      window.$controller = $controller
      window.$window = $window
      window.$timeout = $timeout
      window.$interval = $interval
      window.$service = $service

  return

# Completely reset $timeout and $interval after each
# test so we don't get test pollution
afterEach ->  
  # Annoyingly, .reset only clears calls to the spy,
  # not actions specified (e.g. with andCallFake)
  $timeout.reset()
  $timeout.cancel.reset()
  $interval.reset()
  $interval.cancel.reset()

  # So we have to set the spy plan back to a no-op manually.
  # "_stealth_stubbings" comes from jasmine-stealth
  $timeout.plan = angular.noop
  $interval.plan = angular.noop
  $timeout.cancel.plan = angular.noop
  $interval.cancel.plan = angular.noop
  delete $timeout._stealth_stubbings
  delete $timeout.cancel._stealth_stubbings
  delete $interval._stealth_stubbings
  delete $interval.cancel._stealth_stubbings

This is surprisingly straightforward. The test setup is the same as it was before (call Given -> module('app'), use $provide to stub things, then call Given inject), but now it only runs once. The first time this helper runs, it uses $provide to stub a few things and then uses inject to bootstrap the app and grab references to some necessary services (via $injector). It puts those things on window so they're available to tests, but also to signal that setup has run so the helper won't run a second time. The services we put on window are the ones used for individual test setup later ($controller, $rootScope, $injector, $compile) and commonly used ones like $window (although we don't actually use $window that often, so in retrospect, I could probably have omitted it).

The afterEach is necessary to reset the $timeout and $interval services (or rather, their stubs). Note that we use jasmine-stealth, which provides a nice syntax for conditional stubbing, and we have to reset its internals manually. There's a reason for stubbing these services, which I'll explain later, but for now we need a different way for each type of test to control the boundaries of the subject under test by stubbing dependencies without calling inject.
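
In the meantime, an individual spec can still give the global $timeout stub behavior, either by setting its jasmine 1.x plan directly (the same property the afterEach above resets) or with jasmine-stealth's conditional stubbing. A hedged sketch, where 'some-promise' stands in for whatever was scheduled:

describe 'something that uses $timeout', ->
  Given ->
    # Make the stubbed $timeout invoke its callback immediately for this spec
    $timeout.plan = (fn) -> fn()
    # Or, with jasmine-stealth, stub a specific call conditionally
    $timeout.cancel.when('some-promise').thenReturn(true)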

Controllers

Testing controllers requires the least deviation from the pattern you're familiar with. In fact, part of the original test setup is a clue to how we can stub dependencies. The line $controller('SomeController', { $scope: @scope }) creates a new instance of SomeController using a provided scope. But notice that the scope is passed as part of an object . . . you're actually supplying a dependency to the controller when you instantiate it. And you can do this with any dependency. So our test setup now looks like this:

describe 'SomeController', ->  
  Given ->
    @someService = jasmine.createSpyObj('SomeService', ['someMethod'])
    # It's not strictly necessary to do this, as $timeout
    # is already stubbed, but this feels more self-contained to me.
    @timeout = jasmine.createSpy('$timeout')
    # Remember, $rootScope is on window
    @scope = $rootScope.$new()
    # And so is $controller
    $controller 'SomeController',
      SomeService: @someService
      $timeout: @timeout
      $scope: @scope

We simply supply our own stubbed versions of services to $controller. If you like to use controller-as, just save the reference $controller returns: @ctrl = $controller('SomeController', { SomeService: @someService }). If you don't supply a service that the controller requires, the $controller service will use $injector to provide the real one, so you only need to pass in the dependencies that need to be stubbed.
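
Here's a sketch of a controller-as spec using the window-scoped $controller (doSomething is a hypothetical controller method; substitute a real one):

describe 'SomeController (controller-as)', ->
  Given ->
    @someService = jasmine.createSpyObj('SomeService', ['someMethod'])
    @ctrl = $controller 'SomeController', SomeService: @someService
  # doSomething is hypothetical
  When -> @ctrl.doSomething()
  Then -> expect(@someService.someMethod).toHaveBeenCalled()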

Services

There is an alternative method of testing services (basically what's used to test directives and filters below), but I like the $controller approach, so I wrote a $service service that works the same way. It looks like this:

(function() {
  // Keep an internal cache of all services registered
  var serviceCache = {};
  // Get a reference to the application
  var app = angular.module('app');

  // Register the $service service (a bit meta, I know)
  app.factory('$service', function($injector) {
    // Define this as a function variable so we can add the .cache method below
    var get = function(name, locals) {
      // If no dependencies are supplied, default to empty object.
      locals = locals || {};

      // Find the service in the cache.
      var service = serviceCache[name];
      // If we find that service . . .
      if (service) {
        // build a list of services to provide to the service under test.
        var args = service.inject.reduce(function(memo, dependency) {
          // This allows services to be stubbed. If the service
          // under test requires another service and that service
          // was not passed as a local, use $injector to get the real one
          memo.push(locals[dependency] || $injector.get(dependency));
          return memo;
        }, []);

        // Call the service function with the list of services (fake and real).
        return service.fn.apply(null, args);
      }
    };

    // Provide a way to access the cache.
    get.cache = function() {
      return serviceCache;
    };

    return get;
  });

  // Replace the original factory method with one that captures each
  // registered service in the cache so we can get it later.
  // You could do this with app.service as well.
  app.__factory = app.factory;
  // Whenever a service registers, this function now gets called.
  // We'll do the same kind of parsing that angular does
  // for dependency resolution.
  app.factory = function(name, fn) {
    var inject = [];

    if (typeof fn !== 'function') {
      // If this service was registered with array notation, e.g.
      // app.factory('SomeService', ['SomeDependency', function(SomeDependency) {
      // then the fn argument is the array of dependency names plus the function.
      // Use slice so we don't mutate the array that's passed through to the
      // original factory method below.
      inject = fn.slice(0, -1);
      // The service function is the last element of that array.
      fn = fn[fn.length - 1];
    } else {
      // If this service was registered as a function (without the array),
      // as it is if you use ng-annotate as part of a build process,
      // parse function.toString() to get the arguments it's expecting.
      // This is essentially what angular does to figure out what to inject
      // in a service or controller.
      var match = fn.toString().match(/function\s*\(([^)]*)\)/);
      if (match && match[1] && match[1].length) {
        inject = match[1].split(/\s*,\s*/);
      }
    }

    // Cache the service, including the function and services to inject.
    serviceCache[name] = {
      // This is the list of dependencies we check for in the get function above
      inject: inject,
      // And this is the function we pass those to
      fn: fn
    };

    // Call through to the original factory method so normal service registration occurs.
    app.__factory.apply(app, arguments);
  };

})();

That's kind of long and complicated, but you don't really need to know anything about it other than that it lets you call $service('SomeService', locals) to get a service instance, just as $controller does for controllers. Just make sure this script is included in your test bundle after angular but before any services are registered so it can capture them, then write your tests almost exactly like $controller tests:

describe 'SomeService', ->  
  Given ->
    @otherService = jasmine.createSpyObj('OtherService', ['someMethod'])
    @service = $service 'SomeService',
      OtherService: @otherService

The only difference here is that we keep a reference to the returned service object in @service so that we can call its functions.
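
From there, the spec exercises the service and asserts against the stubs. Continuing the describe above (doThing is a hypothetical method on SomeService):

  When -> @service.doThing()
  Then -> expect(@otherService.someMethod).toHaveBeenCalled()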

Directives

You could also write a $directive service exactly like the $service service above and capture directives too. I chose not to, because that would mean you can't test compiling actual HTML. It's also not necessary to test directives that way, since a directive's dependencies aren't used until it's compiled. You can just inject the necessary services into your test, stub the functions used by the directive, and then compile the directive. For example:

describe 'my-directive', ->  
  Given ->
    # Remember, $injector is on window
    @someService = $injector.get('SomeService')
    spyOn @someService, 'someMethod'
    # And $rootScope is on window
    @scope = $rootScope.$new()

  Given -> @element = "<my-directive></my-directive>"
  # And so is $compile
  Given -> @element = $compile(@element)(@scope)
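
From there, you'd typically flush a digest so bindings render and then assert on the compiled element. A sketch continuing the setup above (the expected content is hypothetical):

  When -> @scope.$digest()
  Then -> expect(@element.html()).toContain('expected content')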

It's because of this setup that I'm using $provide.constant to stub $timeout and $interval in that inject helper. The $timeout and $interval services return functions instead of objects (although they do both have a cancel function), which means you literally can't stub them this way:

describe 'my-directive', ->  
  Given ->
    @timeout = $injector.get('$timeout')
    spyOn(# . . . um, if $timeout is the object, what's the function?)

Instead, we stub $timeout and $interval up front so that tests can provide behavior for them as necessary. Incidentally, this reveals an important service paradigm: never return a bare function as a service. Always return an object with function properties so they can be stubbed more easily. This also makes it easier to add new functionality later without having to rework existing usages of the service.
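
To make that concrete, here's a hedged sketch of the two shapes (Poller and its methods are illustrative, and app is assumed to be angular.module('app') as in the helpers above):

# Hard to stub: the service *is* a function
app.factory 'poller', ($interval) ->
  (fn, delay) -> $interval(fn, delay)

# Easy to stub: the service is an object with function properties,
# so a test can spyOn(Poller, 'start') on the real instance
app.factory 'Poller', ($interval) ->
  start: (fn, delay) -> $interval(fn, delay)
  stop: (promise) -> $interval.cancel(promise)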

Unrelated to testing directives, but to complete the explanation of stubbing things up front: we stub $httpBackend because we have a development server that returns (semi-) static JSON (for reasons why this can be a good setup, read Faking the API). Without stubbing it, directives that require templates fail with $httpBackend's "Unexpected request," so we basically just tell $httpBackend to shut up and stay in line. We also stub a thing called RootScopeHelpers that does initial app setup. It's injected into our app's .run function, which means it runs every time you bootstrap the app, which makes it difficult to prevent unwanted behavior when testing other things. The nice thing about the $service service is that it still captures the original service even though we're using $provide.constant to supply a stub to dependents, so we can still test its original behavior, even though $injector.get would normally return the stub.
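
For example, you can still reach the real RootScopeHelpers through the $service cache, even though $injector.get returns the global stub. A sketch, assuming RootScopeHelpers was registered via app.factory and exposes register:

describe 'RootScopeHelpers', ->
  # $service builds the real instance from its cache, bypassing the
  # $provide.constant stub that everything else sees
  Given -> @helpers = $service 'RootScopeHelpers'
  Then -> expect(@helpers.register).toEqual(jasmine.any(Function))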

Filters

Like directives, you could write a service to capture filters as they are registered (although you couldn't call it $filter, since that's the name of angular's built-in service). However, if your setup is simple enough, you don't really need to. Filters are typically pretty short, so we have all of ours defined in a single file, and consequently one filter test with multiple nested describe blocks. I chose the same approach I outlined for directives, since it only requires one setup, and because you rarely need to inject services into filters, that setup is quite small. The entirety of our filter setup looks like this:

describe 'filters', ->  
  Given ->
    @sce = $injector.get('$sce')
    spyOn @sce, 'trustAsResourceUrl'
    spyOn @sce, 'trustAsHtml'
    @filter = $injector.get('$filter')
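
Individual filters then get nested describe blocks that pull what they need from @filter. A sketch (trustHtml is a hypothetical filter name):

  describe 'trustHtml', ->
    When -> @result = @filter('trustHtml')('<b>hi</b>')
    Then -> expect(@sce.trustAsHtml).toHaveBeenCalledWith('<b>hi</b>')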

If your filters are split across many files, or your setup is more complicated than this, maybe the other approach would be better. And if you do end up writing something to capture filters too, consider writing a single thing that captures everything: a generic $test service with a method for testing each type of thing. Something like this (completely untested and not guaranteed to work) service:

(function() {
  // Define all the types we want to capture. (component is omitted:
  // it registers an options object rather than a function, so the
  // parsing below would need special handling for it.)
  var types = ['service', 'factory', 'controller', 'directive', 'filter'];
  // And a cache
  var cache = {};
  // Get the app
  var app = angular.module('app');

  // Define the service with a dependency on $injector
  app.factory('$test', function($injector) {
    // This is a simple helper to create getter functions
    // for each type of test.
    var get = function(type) {
      // This will be the actual function attached to the service
      return function(name, locals) {
        // These internals are the same as before, except that
        // we use the type parameter from the outer scope to look
        // up the right type in the cache
        locals = locals || {};
        var thing = cache[type] && cache[type][name];
        if (thing) {
          var args = thing.inject.reduce(function(memo, dependency) {
            memo.push(locals[dependency] || $injector.get(dependency));
            return memo;
          }, []);

          return thing.fn.apply(null, args);
        }
      };
    };

    // The service object we return, with one getter per type
    var service = {};

    // Provide a way to access the cache (named so it doesn't
    // shadow the cache object itself)
    service.cache = function(type) {
      return type ? cache[type] : cache;
    };

    // Iterate over the types and create getter functions for them
    types.forEach(function(type) {
      service[type] = get(type);
    });

    return service;
  });

  // Then iterate over them to create interceptor functions.
  // Again, this is the same as the `$service` example above, but
  // with the added "type" checks.
  types.forEach(function(type) {
    // Save off the original reference
    app['__' + type] = app[type];
    // And replace it with our caching function
    app[type] = function(name, fn) {
      var inject = [];

      // Get the dependencies of this thing
      if (typeof fn !== 'function') {
        // Array notation: all but the last element are dependency names.
        // Slice so the array passed through to angular stays intact.
        inject = fn.slice(0, -1);
        fn = fn[fn.length - 1];
      } else {
        var match = fn.toString().match(/function\s*\(([^)]*)\)/);
        if (match && match[1] && match[1].length) {
          inject = match[1].split(/\s*,\s*/);
        }
      }

      // Make sure there's a cache for this type
      cache[type] = cache[type] || {};
      // And add this thing to that cache
      cache[type][name] = {
        fn: fn,
        inject: inject
      };

      // Call through to the original angular function
      app['__' + type].apply(app, arguments);
    };
  });
})();

You'd then use it like this: $test.service('SomeService', { SomeDependency: @someDependency }). Or like this: $test.directive('myDirective', { SomeService: @someService }).

Why does this matter?

It's generally accepted that development tooling like tests and builds doesn't really have to be fast as long as the final production code is. But slow tests have real downsides. First, developers get annoyed waiting for them. Consequently, they don't want to run the tests, or worse, they don't want to write tests (or they write less thorough ones). And of course, when slow tests fail, you have to run them again, which isn't helpful to developer morale. I have no scientific proof, but I suspect that happy developers are more productive and better problem-solvers. Slow tests also impair productivity, not just because they're slow but because they interrupt the problem at hand. Developers are busy writing new features and fixing bugs, and then they have to pause and wait for tests to complete. I've often started tests and then switched over to Facebook or Twitter while I wait for them to finish. When I come back to the code, I'm completely out of the flow of development. Finally, at least in the case of inject, there are side effects beyond just being slow. Our test suite had gotten to the point where you literally couldn't run the whole thing in one go because the browser would lock up and crash about three quarters of the way through. We were more or less relying on Travis to tell us if something was wrong, rather than running the tests locally first. But eventually even Travis would choke on the tests and fail, which meant we were getting false failures and stalling product development.

Small optimizations still go a long way in improving the development life-cycle, but this is not a small optimization. For large test suites, you could be losing 30 seconds or more each time you run your tests. Over the course of a day or a week or a year this adds up to a lot of lost time, focus, and productivity. Finding and fixing bottlenecks in the development life-cycle is always a worthwhile exercise.
