If you have been watching my GitHub, you may have noticed a new repository show up a little while ago, named Testing Browser Javascript Completely.
The short story about that repository is that the collective body of knowledge on how to build a robust, flexible, and useful testing ecosystem for client side JavaScript, with the tools currently available in the JavaScript space, is infinitesimally small. Getting a framework set up that tested things the correct way took weeks of effort. If we added up the time the team spent on this testing setup, it would be in the hundreds of person hours.
The long story
Needless to say, where I work we're pretty serious when it comes to testing. We want not only to get it right, but to have it actually be right, and to have it give us the information we need to be able to make accurate decisions about our level of confidence in the code we produce.
Things are not always as they seem
The first large hurdle I hit on this journey was simply a language context issue. Naming things is hard. Today, on the web, "JavaScript testing" means Node.js. Almost without exception, anything recent you read about testing JavaScript is not telling you how to test browser side frontend JavaScript. You have to look for "client side" or "browser side" or similar words indicating that what you are looking at is actually useful for testing frontend code.
The trick is -- you don't know that until you have gone too far. You'll read things that say "Ok, install X, Y and Z with npm" and little do you know you're already eating from the poisoned apple. If you are pulling your testing libraries (not support files, but the testing libraries themselves) from npm, you have already lost the war. First they get you to install Mocha and Chai with npm. Then you're installing some abstraction layer that occupies global.window to make your code think it's running in a browser. This works right up until you start testing things that 100% expect to be running in a browser (like jQuery plugins). [Insert brick wall here.]
Testing code meant to run in a browser requires a browser
When it comes down to it, you need to be running your tests in a browser. But we're also command line junkies and automating fools, so we have to balance our desire for testing against our desire to not repeatedly mash F5. Let's be perfectly clear: refresh based testing is a valid testing strategy but it is definitely not a good one. I'd like to introduce you to my new friend PhantomJS. Now, PhantomJS and I get along fairly well, but figuring out the best way to talk to my good buddy caused us quite a bit of trouble. We started with grunt-phantomjs, which worked right up until it didn't. I don't recall what the issue was at the time, but there was something specific with the Mocha wiring once we changed to browser side JavaScript testing that threw it for a loop. That's when PhantomJS and I met grunt-mocha-phantomjs. It worked great! Ran the tests like a charm. Nice console output. You know... once we figured out the 15 steps to wire it all up nicely.
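For reference, the wiring boils down to something like the Gruntfile sketch below. The paths and target names are my own illustrations, not the exact setup from the repo; the key idea is that grunt-mocha-phantomjs points PhantomJS at an HTML runner page that loads Mocha, Chai, your sources, and your specs in a real (headless) browser.

```javascript
// Gruntfile.js -- a minimal sketch of the grunt-mocha-phantomjs wiring.
// The runner path is illustrative; point it at whatever HTML page loads
// Mocha, Chai, your source files, and your specs.
module.exports = function (grunt) {
  grunt.initConfig({
    mocha_phantomjs: {
      all: ['test/index.html'] // the in-browser test runner page
    }
  });

  grunt.loadNpmTasks('grunt-mocha-phantomjs');
  grunt.registerTask('test', ['mocha_phantomjs']);
};
```

With that in place, `grunt test` runs the suite headlessly and streams Mocha's output to the console.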
Now that we had a basis to work from, we needed to make sure we could use this specific setup with Jenkins, so we dug in and figured out that with only a minor change, we could also get it to output an XUnit formatted log file. Yay!
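The minor change, as best I can sketch it, is swapping the reporter and capturing its output somewhere Jenkins can find it. Option names here follow grunt-mocha-phantomjs; the file paths are illustrative.

```javascript
// Gruntfile.js excerpt -- same task, but emitting an XUnit log for CI.
// The output path is illustrative.
module.exports = function (grunt) {
  grunt.initConfig({
    mocha_phantomjs: {
      options: {
        reporter: 'xunit',                 // XUnit-formatted results
        output: 'test/results/result.xml'  // where the Jenkins job looks
      },
      all: ['test/index.html']
    }
  });

  grunt.loadNpmTasks('grunt-mocha-phantomjs');
};
```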
Now that we have the basics covered...
We've now hit our very basic bare minimum of acceptable tooling. We can write unit tests against any code we have, and run them quickly from the command line to make sure we have not broken anything. Now we needed a way to make sure that we were writing unit tests against all of the code we had, so we would know that our changes hadn't broken anything. [Maestro, cue the music for the great code coverage war, please...]
Well, to be fair, it wasn't actually a war. Let me rewind. Before we had realized we were doing everything wrong yet again, we had already gotten code coverage instrumented with Istanbul. If you are doing code coverage in JS and haven't looked at Istanbul, it's about time you do. It provides an amazingly in-depth look at your code: files, statements, branches, functions, and lines. This differs substantially from my experience with at least one other code coverage tool, which we will talk about in a moment -- I'll get there, don't worry. So we had Istanbul up and running, then we refactored everything to use PhantomJS and bam, code coverage was completely busted. I hacked and hacked to no avail, and then it dawned on me. Mocha (and by extension, Istanbul) was now running in the browser (ok, PhantomJS, but still...), which meant it could no longer write its data to the file system like it used to when we were running Mocha from the command line directly.
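To make the "in-depth" claim concrete: Istanbul's coverage data tracks each of those metrics separately per file, as hit-count maps keyed back to source locations. Here's a toy entry (the numbers, and the simplified structure, are my own illustration, not Istanbul's exact format) and a statement-coverage calculation over it:

```javascript
// A simplified, made-up Istanbul-style file entry: statements (s),
// branches (b), and functions (f) each get their own hit counts,
// which is what makes multi-metric reports possible.
var fileCoverage = {
  s: { '1': 5, '2': 0, '3': 2, '4': 0 }, // statement id -> hit count
  b: { '1': [5, 0] },                    // branch id -> per-arm hits
  f: { '1': 5 }                          // function id -> hit count
};

// Percentage of statements executed at least once.
function statementPct(entry) {
  var ids = Object.keys(entry.s);
  var covered = ids.filter(function (id) { return entry.s[id] > 0; });
  return (covered.length / ids.length) * 100;
}

console.log(statementPct(fileCoverage)); // 50
```

A line-only tool collapses all of that into a single per-line yes/no, which is exactly the difference I ran into next.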
Needless to say, this made the chance of getting our wonderfully nice Istanbul coverage reports up and running again look a little bleak. Of course, at this point, we weren't fully aware of how wonderfully nice Istanbul's reports were. I took this opportunity to re-evaluate our tool selection for code coverage, and figured I would try out Blanket.js. It offered the promise of being simpler to instrument (no special step to modify the source code you want coverage on), and, you know, it might actually work with our setup. Once we got it spun up, though, I noticed something right away: the coverage information was nowhere near as robust. Blanket.js offered just line based coverage information. Don't get me wrong, line based coverage is better than no coverage, but I'd seen better. I wanted better. We had to have the better solution to solve this correctly.
We figured out that with a few little config changes we could make our test framework pull up the code we had instrumented with Istanbul. It also turns out that that is about all Istanbul needs to "work". Notice I said "work" and not work. We had code coverage running. We had the data. We could see it through DOM inspection when we loaded the test suite in our browser. Oh yeah! Did I forget to tell you that part? We could run 'grunt test:browser' and it opened our unit tests in our preferred browser for interactive unit testing! Now remember that browser bit, it's about to come back to bite us in the... Come on, coverage data! Don't be shy, come hide out on my hard drive where my tools can consume you. Maybe that didn't help things -- anyway. So then someone (who is not me, and who will give me flak for not remembering they did it -- I'll come back and edit in the correct name of our brilliant genius once their name comes to light) figured out a way to pass the data back from the browser through PhantomJS to Grunt, and have it write said data to a file on the hard drive. Like I said, genius.
And best of all
I've wrapped all of this knowledge up into a GitHub repository named Testing Browser Javascript Completely. This is essentially the same setup we use. There's more goodies I want to add to it, but it has the basics all there and ready for you to start building. Did we make any big mistakes? Could we do things better? Send us a PR or open an issue! JavaScript testing is too important to be this hard to set up correctly. Let's work together and keep an open conversation about how to make this easier. We all win when we have better go-to tooling. Making it easy to test JavaScript means it is more likely it will get tested.