This weekend I was invited by a few friends from the Ann Arbor PHP User Group to join them on Saturday night and figure out something to work on together.
TLDR: I need to do this more. It was immensely fun.
So it started off with a few ideas flying around on what to build, and then I’d mentioned that I have wanted to build an app for estimation poker since forever. Also — it seems I can be somewhat persuasive.
So the four of us sat down (Jonathan, Kelly, Jason and I) and sorted out what our MVP (Minimum Viable Product, for those of you who don't live in startup land) was going to be. We settled on features and the basics of the protocol, and then had to pick technology. I'd seen that Ember.js seems particularly well suited to building a system with multiple concurrent users, so I suggested that; I believe it was Jonathan who suggested Node.js for the back end, and of course socket.io for communication. Jonathan and Jason would pair to build the back end, while Kelly and I would take the divide-and-conquer approach on the front end. With all of that decided, there was only one other choice to make…
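To give a feel for the kind of round trip socket.io handles for us, here is a minimal sketch of a voting exchange. The event names and payload shapes are purely illustrative; they are not our actual protocol.

```javascript
// Minimal socket.io (0.9-era API) sketch of an estimation-poker round trip.
// Event names and payloads here are made up for illustration.
var app = require('http').createServer();
var io = require('socket.io').listen(app);

io.sockets.on('connection', function (socket) {
  // A player joins a planning session ("room" in socket.io terms).
  socket.on('join', function (room) {
    socket.join(room);
  });

  // A player casts a vote; everyone in the same session sees it.
  socket.on('vote', function (data) {
    io.sockets.in(data.room).emit('vote', { player: data.player, card: data.card });
  });
});

app.listen(process.env.PORT || 3000);
```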
Because it’s a fun name to say, that’s why.
So as of today, the minimum viable pieces are actually working. You can check out the GitHub repository, or even see the live demo up on Heroku. I'm hoping we can use it at work to help encourage participation during planning meetings, but even if that never comes to fruition, it has certainly been a fun project to work on, even just as far as it is now. It still has a lot of rough edges, but you can see it starting to come together.
The long story
Needless to say, where I work we’re pretty serious when it comes to testing. We want not only to get it right, but to have it actually be right, and to have it give us the information we need to be able to make accurate decisions about our level of confidence in the code we produce.
Things are not always as they seem
The trick is, you don't know that until you have gone too far. You'll read things that say "OK, install X, Y and Z with npm," and little do you know you're already eating from the poisoned apple. If you are pulling your testing libraries (not support files, but the testing libraries themselves) from npm and running them in Node, you have already lost the war. First they get you to install Mocha and Chai with npm. Then you're installing some abstraction layer that occupies global.window to make your code think it's running in a browser. This works right up until you start testing things that 100% expect to be running in a browser (like jQuery plugins). [Insert brick wall here.]
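To make the trap concrete, here is roughly what those window shims boil down to. This is a deliberately stripped-down illustration, not any particular library's actual implementation.

```javascript
// What the "browser in Node" shims amount to, conceptually: a fake window
// hung off of Node's global object so browser-targeted code will load.
global.window = {
  document: { /* a pretend DOM */ },
  navigator: { userAgent: 'node' }
};
global.document = global.window.document;

// Pure-logic tests pass happily against this. But require a jQuery plugin
// that measures elements, computes styles, or fires real DOM events, and
// there is nothing real behind the fake to measure or fire. Brick wall.
```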
Testing code meant to run in a browser requires a browser
Now that we had a basis to work from, we needed to make sure we could use this specific setup with Jenkins, so we dug in and figured out that with only a minor change, we could also get it to output an XUnit-formatted log file. Yay!
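For reference, the "minor change" amounts to something like the Gruntfile configuration below. The plugin and option names are from memory (grunt-mocha-phantomjs is what I recall us using), so treat them as assumptions and check the docs for your versions.

```javascript
// Gruntfile.js (sketch): run the browser-bound Mocha suite under PhantomJS
// and have it emit an xunit-style report that Jenkins can consume.
module.exports = function (grunt) {
  grunt.initConfig({
    mocha_phantomjs: {
      options: {
        reporter: 'xunit',
        output: 'build/test-results.xml'
      },
      all: ['test/**/*.html']
    }
  });

  grunt.loadNpmTasks('grunt-mocha-phantomjs');
};
```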
Now that we have the basics covered…
We’ve now hit our very basic bare minimum of acceptable tooling. We can write unit tests against any code we have, and run them in a CLI interface, quickly, to make sure we have not broken anything. Now we needed a way to make sure that we were writing unit tests against all of the code we had, so we would know that our changes hadn’t broken anything. [Maestro, cue the music for the great code coverage war, please…]
Well, to be fair, it wasn't actually a war. Let me rewind. Before we had realized we were doing everything wrong yet again, we had already gotten code coverage instrumented with Istanbul. If you are doing code coverage in JS and haven't looked at Istanbul, it's about time you do. It provides an amazingly in-depth look at your code: files, statements, branches, functions, and lines. This differs substantially from my experience with at least one other code coverage tool, which we will talk about in a moment (I'll get there, don't worry). So we had Istanbul up and running, then we refactored everything to use PhantomJS, and bam, code coverage was completely busted. We hacked and hacked to no avail, and then it dawned on me. Mocha (and by that virtue, Istanbul) was now running in the browser (OK, PhantomJS, but still…), which meant it could no longer write its data to the file system like it could when we were running Mocha from the command line directly.
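For anyone wiring this up themselves, the pre-PhantomJS setup was the straightforward one: run Mocha under Istanbul from the command line so Istanbul can write its coverage data and reports straight to disk. The paths below are illustrative.

```bash
# Run the Mocha suite under Istanbul; coverage data and reports land in ./coverage
istanbul cover node_modules/.bin/_mocha -- test/
```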
Needless to say, this made the chance of getting our wonderfully nice Istanbul coverage reports up and running again look a little bleak. Of course, at this point, we weren't fully aware of just how nice Istanbul's reports were. I took the opportunity to re-evaluate our tool selection for code coverage and figured I would try out Blanket.js. It offered the promise of being simpler to instrument (no separate step to modify the source code you want coverage for), and, you know, it might actually work with our setup. Once we got it spun up, though, I noticed something right away: the coverage information was nowhere near as robust. Blanket.js offered only line-based coverage. Don't get me wrong, line-based coverage is better than no coverage, but I'd seen better. I wanted better. We had to find the better solution and solve this correctly.
We figured out that with a few little config changes we could make our test framework pull up the code we had instrumented with Istanbul. It also turns out that that is about all Istanbul needs to "work". Notice I said "work" and not work. We had code coverage running. We had the data. We could see it through DOM inspection when we loaded the test suite in our browser. Oh yeah! Did I forget to tell you that part? We could run 'grunt test:browser' and it opened our unit tests in our preferred browser for interactive unit testing! Remember that browser bit; it's about to come back to bite us in the… Come on, coverage data! Don't be shy, come hide out on my hard drive where my tools can consume you. Maybe that didn't help things. Anyway. So then someone (who is not me, and who will give me flak for not remembering they did it; I'll come back and edit in the correct name of our brilliant genius once it comes to light) figured out a way to pass the data back from the browser through PhantomJS to Grunt, and have it write said data to a file on the hard drive. Like I said: genius.
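For the curious, the trick looks roughly like this. The hook points are PhantomJS's callback mechanism (window.callPhantom / page.onCallback) and Istanbul's default __coverage__ global, but the wiring below is a simplified sketch from memory, not our exact code.

```javascript
// 1) In the test page, after Mocha finishes: hand Istanbul's in-memory
//    coverage object to PhantomJS, if we're running under it.
if (window.callPhantom && window.__coverage__) {
  window.callPhantom({ coverage: window.__coverage__ });
}

// 2) In the PhantomJS bridge script: catch the callback and relay the
//    data out to the Grunt process (stdout works; so does a temp file).
page.onCallback = function (data) {
  if (data && data.coverage) {
    console.log('coverage-json:' + JSON.stringify(data.coverage));
  }
};

// 3) Back in the Grunt task: persist it where Istanbul's report tooling
//    expects to find coverage data.
grunt.file.write('coverage/coverage.json', JSON.stringify(coverage));
```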
And best of all
One of the things I learned very early on about working with a large team is that if you don't make tasks as frictionless as possible, they have a tendency to not get done.
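So everything above gets rolled up behind a single command. The task names below are placeholders for whatever lint, unit, and coverage tasks your Gruntfile actually defines; the point is that nobody has to remember more than one word.

```javascript
// One alias to run the whole pipeline: lint, unit tests, coverage.
// (Task names are illustrative; substitute your own.)
grunt.registerTask('default', ['jshint', 'test:unit', 'test:coverage']);
```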
When we bought into using Behat as our functional testing framework of choice, it came with a mandate that when we install Behat, it cannot come from public sources. GitHub goes down. Repositories get updated. There have been instances of repos being compromised. We had to insulate ourselves from all of that risk. Fortunately, Composer has a GREAT tool for this, called Satis.
Satis provides a way to create your own Packagist-style repository, complete with distributable tarballs of the supported libraries. The one problem I ran into (and this may have since been fixed? I'm not sure!) is that I couldn't get it to download the dependencies of my dependencies. For example, say your composer.json requires Package A, which in turn requires Package B. When I tried this, Satis would only build a repository containing Package A. Knowing this would cause trouble down the line, I decided there had to be a way to make this simpler.
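For context, a Satis build is driven by a satis.json that looks something like the sketch below. The names, URLs, and versions are placeholders. Note the require-dependencies flag, which in later Satis versions is meant to address exactly the dependencies-of-dependencies gap described above, so it may well be the fix I was unsure about.

```json
{
  "name": "acme/internal-packages",
  "homepage": "https://packages.example.com",
  "repositories": [
    { "type": "vcs", "url": "https://github.com/Behat/Behat" }
  ],
  "require": {
    "behat/behat": "~2.5"
  },
  "require-dependencies": true,
  "archive": {
    "directory": "dist",
    "format": "tar"
  }
}
```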
Out of that, Satis Repository Builder was born.
It will generate a Satis repository and upload it to S3, all with a single command. It still needs some cleanup, but I realized that if I hadn't done it in the last four months I wasn't likely to do it in the next four, and perhaps some feedback (or pull requests!) will breathe new life into my interest in it.
Once you have uploaded your repo, you can simply use the following setup in your composer.json to enforce pulling from your Satis repository:
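A minimal sketch of that setup is below; the URL is a placeholder for wherever your Satis repository is hosted, and the package pin is purely illustrative. The `{ "packagist": false }` entry is what keeps Composer from falling back to the public Packagist, so packages can only come from your own repository.

```json
{
  "repositories": [
    { "type": "composer", "url": "https://packages.example.com" },
    { "packagist": false }
  ],
  "require": {
    "behat/behat": "2.5.1"
  }
}
```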
That’s it! I’m sure there’s tons that can be done to the repo builder, and I look forward to seeing issues and pull requests!
One last piece of advice from the trenches
When you're using a tool like this to decouple yourself from the risk of third-party updates, be sure you are as specific as possible. List all of your dependencies and your dependencies' dependencies, so when you do need to go back and upgrade a version of something, you control the scope of the update, not Composer. The more specific you are in your composer.json about versions, the less variation you will see between builds of your Satis repository.
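In practice that means spelling out exact versions, including for packages you only depend on transitively. The package names and version numbers below are illustrative only.

```json
{
  "require": {
    "behat/behat": "2.5.1",
    "behat/gherkin": "2.3.0",
    "symfony/console": "2.3.6",
    "symfony/yaml": "2.3.6"
  }
}
```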
I had the great fortune to be invited into the Columbus tech community and present my Virtualizing your stack with Vagrant and Puppet talk at the 2013 Columbus Code Camp. I had a blast, and if you’re reading this from Columbus, thank you for having such an awesome community.
This talk has been almost completely revamped since the last time I gave it. We walked through what Vagrant is, and how it relates to various virtual machine systems and cloud providers, and then forked from there to talk about how to use Puppet to create meaningful configuration of your servers.
Finally, and I think most importantly, we talked about how to sell the time investment to your superiors, and discussed the fact that if your development environment is not as close a copy of your production environment as possible, there is no clear way to verify that code which works in development will work at all once released to production.
(if the slides are not showing up, they may not have finished processing just yet)
p.s. The video was reconstructed by manually taking the audio and combining it with the slide images in iMovie. If there's something wrong, please let me know and I will work to correct it.
p.p.s. There is (somehow) a complete section on Hiera missing! I don't know if I opened the wrong slide deck or what. The upside is there apparently wasn't time to cover that material anyway, but the next time I give this talk it will be there.
What a ride this year has been, and it's only slightly more than halfway done! I got a new job, and then we got acquired. In the middle of all that, I've moved twice, each time carting loads of things across the country. It's been a heck of a year, and it's nowhere near over.
All this change has made a few things exceptionally clear:
- My work very much fulfills me technically. In the last 5 months, I’ve worked on so many awesome projects. It’s left little time for other professional pursuits, however, which makes me sad, and I wish my friends at Point 5 Foundry well.
- On a related note, when I am not at work, I really just want to be with my family, relaxing and having fun. Or, perhaps, if you're an API fan, REST-ing. Yeah, OK, that was bad. I admit it.
- If I were going to interrupt either of those two things, the time would be spent being involved in the community. Our community is awesome! And it's not just our community. I only have this career because of all of the hard work put in by so many really amazing people, so the least I can do is try to pay it forward.
So with that, there is one other announcement I am really excited to officially make (though, granted many of you may know by now, since I am so slow in writing this): I am now co-organizer of the San Francisco PHP User Group. Mike and I have already gotten one talk on the books, and we’re cooking up something extra special for September. I can’t wait until we can announce it.