One of the things I learned very early on about working with a large team is that if you don’t make tasks as frictionless as possible, they tend not to get done.
When we bought into using Behat as our functional testing framework of choice, it came with a mandate that when we install Behat, it cannot come from public sources. GitHub goes down. Repositories get updated. There have been instances of repos being compromised. We had to insulate ourselves from all of that risk. Fortunately, Composer has a GREAT tool for this, called Satis.
Satis provides you a way to create your own Packagist repository, complete with distributable tarballs of supported libraries. The one problem that I ran into (and this may have since been fixed? I’m not sure!) is that I couldn’t get it to download the dependencies of my dependencies. For example, your composer.json requires Package A which requires Package B. When I tried this, Satis would only build a repository with Package A. Knowing this would cause trouble down the line, I decided there had to be a way to make this simpler.
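For reference, a Satis configuration is itself just a JSON file. Here is a minimal sketch (the repository name, URLs, and version constraint are placeholders, not the config I actually used); newer versions of Satis also have a `require-dependencies` option that addresses exactly the problem described above:

```json
{
    "name": "acme/package-mirror",
    "homepage": "http://packages.example.org",
    "repositories": [
        { "type": "vcs", "url": "https://github.com/Behat/Behat" }
    ],
    "require": {
        "behat/behat": "*"
    },
    "require-dependencies": true,
    "archive": {
        "directory": "dist",
        "format": "tar"
    }
}
```

The `archive` section is what tells Satis to build the distributable tarballs rather than just pointing back at the upstream sources.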
The tool I ended up building will generate a Satis repository and upload it to S3, all with a single command. It still needs some cleanup, but I realized that if I hadn’t done it in the last 4 months, I wasn’t likely to do it in the next 4, and perhaps some feedback (or pull requests!) will spur some new life into my interest in it.
Once you have uploaded your repo, you can simply use the following setup in your composer.json to enforce pulling from your Satis repository:
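The snippet itself didn’t survive here, but a minimal sketch, assuming your Satis repository is hosted at http://packages.example.org, would look like this (the package name and version are illustrative):

```json
{
    "repositories": [
        { "type": "composer", "url": "http://packages.example.org" },
        { "packagist": false }
    ],
    "require": {
        "behat/behat": "2.4.*"
    }
}
```

The `"packagist": false` entry is the important part: it stops Composer from falling back to the public Packagist index, so everything must resolve through your own repository.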
That’s it! I’m sure there’s tons that can be done to the repo builder, and I look forward to seeing issues and pull requests!
One last piece of advice from the trenches
When you’re using a tool like this to decouple yourself from the risk of third-party updates, be sure you are as specific as possible. List all of your dependencies and your dependencies’ dependencies, so when you do need to go back and upgrade a version of something, you control the scope of the update, not Composer. The more specific you are in your composer.json about versions, the less variation you will see between builds of your Satis repository.
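Concretely, that means pinning exact versions rather than loose constraints (these package names and version numbers are purely illustrative):

```json
{
    "require": {
        "behat/behat": "2.4.8",
        "behat/gherkin": "2.2.9",
        "symfony/finder": "2.3.4"
    }
}
```

With every package (including transitive ones) pinned like this, two builds of your Satis repository from the same composer.json will contain the same artifacts.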
I had the great fortune to be invited into the Columbus tech community and present my Virtualizing your stack with Vagrant and Puppet talk at the 2013 Columbus Code Camp. I had a blast, and if you’re reading this from Columbus, thank you for having such an awesome community.
This talk has been almost completely revamped since the last time I gave it. We walked through what Vagrant is, and how it relates to various virtual machine systems and cloud providers, and then forked from there to talk about how to use Puppet to create meaningful configuration of your servers.
Finally, and I think most importantly, we talked about how to sell the time investment to your superiors. If your development environment is not as close to a carbon copy of your production environment as possible, there is no clear way to verify that the code in your development environment will work at all once released to production.
(if the slides are not showing up, they may not have finished processing just yet)
p.s. The video was reconstructed by manually taking the audio and combining them with the images in iMovie. If there’s something wrong, please let me know and I will work to correct it.
p.p.s. There is (somehow) a complete section on Hiera missing! I don’t know if I opened the wrong slide deck or what. The up side is there apparently wasn’t time to cover the material anyway, but the next time I give this talk it will be there.
On Saturday, February 16th, 2013, I talked my way through setting up the same sort of contact form we set up in Code Evolution: Contact Form (part 1) using Silex instead of creating our own framework. There was a lot of invaluable discussion around the room about the value frameworks bring to the table as well.
The Decorator Pattern lets you take a given set of objects (grouped either via class inheritance or a shared interface) and extend their functionality in a way that avoids needlessly duplicating code.
Our example today, while admittedly contrived, should clearly illustrate the use of the Decorator Pattern.
Let’s say for instance, you run a video rental service. Let’s call it… SmedRocks. Now your crazy programmers have already built your inventory tracking system as a web service, and you have no control over how they have implemented it, but you have to fight the good fight and soldier on. Features must be implemented. To make this even simpler, our API has one function:
getVideos – returns all of the titles we have, whether or not they are currently rented.
I’m sure you can already see the problem with how it’s returning data. Rented movies and available movies are all mixed in! What a pain!
Ok, so let’s look at what we need to do to get started. Let’s make our API. This isn’t how you would ACTUALLY do this, it’s just an example to get us moving along:
class MovieApi
{
    public function getVideos()
    {
        return json_decode(file_get_contents('http://example.org/rest/getVideos.json'));
    }
}
Excellent! We can now get our videos, but wait — we’re in the middle of building this and example.org seems to have gone offline. This isn’t going to help us get this going! So let’s refactor real quick so we can keep going on other things.
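The refactored code itself didn’t make it into this post, but based on the names used later on (MovieApiInterface, LiveMovieApi, OfflineMovieApi), it likely looked something like this sketch; the canned JSON in the offline version is my own placeholder data:

```php
<?php

// The shared contract both implementations conform to.
interface MovieApiInterface
{
    public function getVideos();
}

// The real implementation, which calls out to the remote service.
class LiveMovieApi implements MovieApiInterface
{
    public function getVideos()
    {
        return json_decode(file_get_contents('http://example.org/rest/getVideos.json'));
    }
}

// An offline stand-in that returns canned data, so development can
// continue while example.org is down. It also makes tests easier.
class OfflineMovieApi implements MovieApiInterface
{
    public function getVideos()
    {
        return json_decode('[
            {"title": "Example Movie One", "status": "rented"},
            {"title": "Example Movie Two", "status": "available"}
        ]');
    }
}
```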
Look at that! We’ve now defined a central MovieApiInterface which both implementations conform to, and have both an online and offline implementation which will also ultimately make writing tests easier.
Ok, now for the juicy part: we need to be able to ask this system for just rented or available movies. We could extend each API implementation, but that’s going to be some duplication we can live without.
Enter the Decorator Pattern.
With the decorator pattern, we can build an object that accepts a MovieApiInterface object in its constructor, and provides a uniform higher-level way to interact with our lower-level API.
Again, this is basic code to get you thinking, not optimized production-ready code.
class SpecificMovieFinder implements MovieApiInterface
{
    protected $movie_api;

    public function __construct(MovieApiInterface $movie_api)
    {
        $this->movie_api = $movie_api;
    }

    public function getVideos()
    {
        return $this->movie_api->getVideos();
    }

    public function getRentedMovies()
    {
        $movies = $this->getVideos();
        $rented_movies = array();

        foreach ($movies as $movie) {
            if ($movie->status == 'rented') {
                $rented_movies[] = $movie;
            }
        }

        return $rented_movies;
    }

    public function getAvailableMovies()
    {
        $movies = $this->getVideos();
        $available_movies = array();

        foreach ($movies as $movie) {
            if ($movie->status == 'available') {
                $available_movies[] = $movie;
            }
        }

        return $available_movies;
    }
}
Now we have a simpler API that actually conforms to how we will use it in practice instead of how the API designers built the service. This is great! They get their way, and we get ours. Everyone wins.
So how do we use it?
// development configuration
$offline_api = new SpecificMovieFinder(new OfflineMovieApi());

// live configuration
$online_api = new SpecificMovieFinder(new LiveMovieApi());
Now we get the same functionality on multiple implementations, and as an added bonus, our SpecificMovieFinder also still implements MovieApiInterface, allowing us to use it interchangeably with any other service that may need our API down the line!
My good friend Beau Simensen pointed out that unfortunately, my example is a little deficient. I’ll let his gist do the talking:
Thanks Beau!
What are some other places you can think of using this pattern?
Now, Dan had mentioned that this could be done through simple inheritance instead of via interface contract, and after some thought, I realized that I disagree.
Typically with an API object, you will initialize with a remote URL, and perhaps a token:
public function __construct($api_url, $auth_token)
But remember our new Cache API proxy only takes an API object and a Cache object.
public function __construct($cache, $api)
This fundamentally changes the contract that the initial API object creates, and as such, should then NOT extend the original API implementation.
Now, you COULD subclass the MyApi class, and add the Cache control via setter injection, but personally I don’t think that implementation is nearly as clean as providing a proxy object to add the caching functionality.
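To make that concrete, here is a sketch of what such a caching proxy might look like. The class name, the `fetch()` method, and the cache’s `get()`/`set()` interface are all hypothetical, chosen only to illustrate the shape of the object:

```php
<?php

// A hypothetical caching proxy: it wraps an API object and checks a
// cache before making any remote call, without subclassing the API.
class CachingApiProxy
{
    protected $cache;
    protected $api;

    public function __construct($cache, $api)
    {
        $this->cache = $cache;
        $this->api = $api;
    }

    public function fetch($resource)
    {
        // Serve from cache when possible...
        $cached = $this->cache->get($resource);
        if ($cached !== null) {
            return $cached;
        }

        // ...otherwise delegate to the real API and remember the result.
        $result = $this->api->fetch($resource);
        $this->cache->set($resource, $result);

        return $result;
    }
}
```

Note that the proxy’s constructor contract (a cache plus an API object) is entirely different from the API object’s own constructor (a URL plus a token), which is the heart of the argument above: the two shouldn’t share an inheritance chain.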
p.s. I’ll leave QuickTime running but I think it did eat the recording of the presentation — sorry guys! I’ll put the slides up soon but I don’t know how much context they will provide given how light they were compared to the commentary that went along with it.
Update (12/07/2012):
It has been pointed out that my example was perhaps a little too contrived, so I think I found a better one.
Let’s say you’re building a system out, and you know you want logging, but you don’t know what sort of implementation you want to do for said logging. Given that instance, let’s start with our basic interface:
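The interface itself appears to have been lost from this post; given the implementations that follow, it presumably looked something like the sketch below. Since the post refers to having two ways of logging, I’ve also included a simple file-based logger as a plausible second implementation — its name and details are my own invention:

```php
<?php

// The contract every logger must fulfill.
interface Logger
{
    public function info($message);
    public function debug($message);
    public function fatal($message);
}

// A simple file-based implementation (hypothetical, for illustration).
class MyFileLogger implements Logger
{
    protected $path;

    public function __construct($path)
    {
        $this->path = $path;
    }

    protected function write($level, $message)
    {
        file_put_contents($this->path, "[$level] $message\n", FILE_APPEND);
    }

    public function info($message)  { $this->write('info', $message); }
    public function debug($message) { $this->write('debug', $message); }
    public function fatal($message) { $this->write('fatal', $message); }
}
```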
class MyMonologLogger implements Logger
{
    protected $monolog;

    public function __construct($monolog)
    {
        $this->monolog = $monolog;
    }

    public function info($message)
    {
        $this->monolog->addInfo($message);
    }

    public function debug($message)
    {
        $this->monolog->addDebug($message);
    }

    public function fatal($message)
    {
        $this->monolog->addCritical($message);
    }
}
We now have two ways of logging messages. Next comes the constraint that we want to be emailed when fatal errors are logged. Your first instinct might be to extend MyMonologLogger and overload fatal(), but then we couldn’t reuse that functionality with any other logger we ever build. How do we build this in a way we can reuse over and over again? The Proxy pattern.
class MyEmailingLoggerProxy implements Logger
{
    protected $logger;

    public function __construct(Logger $logger)
    {
        $this->logger = $logger;
    }

    public function fatal($message)
    {
        $this->logger->fatal($message);
        mail('admin@example.com', 'A fatal error has occurred', $message);
    }

    public function info($message)
    {
        $this->logger->info($message);
    }

    public function debug($message)
    {
        $this->logger->debug($message);
    }
}
And now, no matter what we choose as our logging backend, now or in the future, we will always easily be able to have those fatal errors emailed to us by simply putting our chosen logging system inside an instance of MyEmailingLoggerProxy.