Patterns

Social media in the physical world

So one of the things I have been seeing more and more often is meat-space kiosks that enable (and encourage) you to interact with them by sharing the activities you participated in via your social media identities.

How are they doing this? By having you type your credentials directly into the kiosk. Not only is this a Really Bad Idea(tm), but teaching the generally non-security-savvy population that this is a normal “thing” is horrifically scary. No longer do you need to click on a phishing email to lose your password; all you have to do is buy something from a kiosk configured this way that has been hacked. Oh wait, it’s not like that ever happens, right? Certainly Target would never get hacked, and if Target is safe, well, maybe the little guys will be fine too.

This is patently a Really Bad Idea, but I don’t think it’s going away, so what I propose is this: sites and services that consider themselves identity providers (i.e., you offer OAuth login credential verification for third-party sites/apps/projects/whatever) should provide, in their mobile apps, an easy way to generate a limited-time-use OAuth token, along with a way to display it as a QR code or similar.
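To make that concrete, here is a minimal sketch of what the provider side might look like. Every name here (KioskTokenIssuer, the $store object and its save() method) is hypothetical, not any real provider’s API:

class KioskTokenIssuer
{
    protected $store;

    public function __construct($store)
    {
        // $store is any persistent backend (Redis, a database table, etc.)
        $this->store = $store;
    }

    public function issueKioskToken($user_id)
    {
        // Unguessable, opaque token (random_bytes() requires PHP 7+).
        $token = bin2hex(random_bytes(32));

        $this->store->save($token, array(
            'user_id'    => $user_id,
            'expires_at' => time() + 120,  // valid for two minutes
            'scope'      => 'post_status', // deliberately narrow scope
        ));

        // The provider's mobile app would render this token as a QR code;
        // the kiosk scans it with its webcam and redeems it server-side,
        // so the user's real password never touches the kiosk.
        return $token;
    }
}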

Granted, this would require adding a webcam to the kiosks, but webcams are dirt cheap, and it would be a net positive for everyone involved. Heck, I bet it turns out to be so much more user friendly that those social participation options get used more often. Retailers could even, with this new, nearly painless option, offer users a chance to tweet or post a status about their in-progress transaction in exchange for some sort of discount or special offer.

Bottom line: let’s get real and not encourage the general population to adopt insecure password management habits. Entering your password (which is statistically likely to be your password to everything) into a public kiosk that exists in an unknown state of security is a bad idea, every time. Making it feel normal is an even worse idea. Let’s wrangle this under control before it becomes even more widespread.

Applying functional programming design principles to server architecture design

It occurred to me this morning that there are actually quite a few parallels between functional programming and infrastructure design and management.

It all started with something I realized I had said while talking about environments: production is meant to go from one stable, working, vetted version of code to another stable, working, vetted version of code. Any state between those two is invalid and should (preferably) never occur.

If you cycle on that again, you start to see that most deployment processes you know about violate this One Basic Rule(tm).

I posit that if you are deploying new code to currently running hosts that are handling traffic, you are doing it wrong.

Think about it like this: what is the one core feature of every highly scalable functional programming language? Every one has (or has developed patterns which essentially create) immutable values.

So when we scale this out of software and apply it to infrastructure, your code is the value of your server. If you are changing the value of your server while other processes are trying to access it, you’re going to run into concurrency issues. Ask any developer about sharing data between threads, and they’ll quickly tell you it’s difficult. Why, then, do we improperly share data between releases of our software?

The simple answer is that you have two options for atomic deployments that follow the rules of immutability:

  1. Drop the servers you are deploying to out of the flow of traffic. This is the easiest, but still fails to honor the spirit of immutability because the value of the server is still changing, it’s just changing while nobody is looking.
  2. Spin up new instances and slowly work them into live traffic, confirming along the way that you are in fact getting the expected behavior out of the new code (sketched below).
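Here is a hand-wavy sketch of what option 2 could look like. The $provisioner and $load_balancer objects and every method on them are invented for illustration, not any real tooling:

// Roll a new, vetted version into traffic without ever mutating a
// running server; old instances are only retired once the new fleet
// has proven itself.
function deployImmutably($provisioner, $load_balancer, $old_instances, $new_version)
{
    $fresh = array();
    foreach ($old_instances as $ignored) {
        // Bake brand-new instances from the new image; the running
        // fleet is never modified in place.
        $fresh[] = $provisioner->launchInstance($new_version);
    }

    foreach ($fresh as $instance) {
        $load_balancer->addToPool($instance); // slowly work into live traffic

        if (!$instance->isHealthy()) {
            // Expected behavior not confirmed: pull the new instance
            // back out; the old fleet has never been touched.
            $load_balancer->removeFromPool($instance);
            throw new RuntimeException('Deploy aborted: unhealthy instance');
        }
    }

    // Only after every new instance is vetted do the old ones retire.
    foreach ($old_instances as $old) {
        $load_balancer->removeFromPool($old);
        $provisioner->terminate($old);
    }
}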

Now, I know this is all hand-wavy because it glosses over the important aspect of data migration: I don’t have an answer there yet. I suspect the true answer would be something to the effect of seamlessly decoupling your entire system from write traffic (using a request proxy which could ‘pause’ calls) for some period of time while data updates are done.

What if, to create a truly fault-tolerant design, you simply made your API nearly 100% asynchronous? All requests come in, go into a process queue, and are handled from there. This way you are never required to turn off traffic to do an atomic update of your software, because you can simply tell the queue to stop processing while the update progresses.
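As a rough sketch of that idea (the $queue object and its reserve() method are placeholders for whatever queue backend you might actually use):

class PausableWorker
{
    protected $queue;
    protected $paused = false;

    public function __construct($queue)
    {
        $this->queue = $queue;
    }

    public function pause()  { $this->paused = true; }
    public function resume() { $this->paused = false; }

    public function run()
    {
        while (true) {
            if ($this->paused) {
                // Requests keep accumulating in the queue while the
                // software is atomically swapped out underneath us.
                sleep(1);
                continue;
            }
            $job = $this->queue->reserve(); // block until a request arrives
            $job->handle();
        }
    }
}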

Thoughts?

About the Decorator Pattern

The Decorator Pattern lets you take a given set of objects (grouped either via class inheritance or an interface) and extend their functionality in a way that avoids needlessly duplicating code.

Our example today will hopefully be pretty close to form and, while admittedly contrived, will illustrate the use of the Decorator Pattern.

Let’s say, for instance, you run a video rental service. Let’s call it… SmedRocks. Now, your crazy programmers have already built your inventory tracking system as a web service, and you have no control over how they implemented it, but you have to fight the good fight and soldier on. Features must be implemented. To make this even simpler, our API has one function:

getVideos – This returns all of the titles we have, whether they are currently rented or not.

Now getVideos returns a JSON array like so:

[
    {
        "title": "Some Movie",
        "status": "rented"
    },
    {
        "title": "Some Other Movie",
        "status": "available"
    }
]

I’m sure you can already see the problem with how it’s returning data. Rented movies and available movies are all mixed in! What a pain!

Ok, so let’s look at what we need to do to get started. Let’s make our API. This isn’t how you would ACTUALLY do this, it’s just an example to get us moving along:

class MovieApi
{
    public function getVideos()
    {
        return json_decode(file_get_contents('http://example.org/rest/getVideos.json'));
    }
}

Excellent! We can now get our videos, but wait — we’re in the middle of building this and example.org seems to have gone offline. This isn’t going to help us get this going! So let’s refactor real quick so we can keep going on other things.

interface MovieApiInterface
{
    public function getVideos();
}
 
class LiveMovieApi implements MovieApiInterface
{
    public function getVideos()
    {
        return json_decode(file_get_contents('http://example.org/rest/getVideos.json'));
    }
}
 
class OfflineMovieApi implements MovieApiInterface
{
    public function getVideos()
    {
        $json = '[{"title": "Movie 1", "status": "rented"},{"title": "Movie 2", "status": "available"}]';
        return json_decode($json);
    }
}

Look at that! We’ve now defined a central MovieApiInterface which both implementations conform to, giving us both an online and an offline implementation; this will also ultimately make writing tests easier.

Ok, now for the juicy part: we need to be able to ask this system for just rented or available movies. We could extend each API implementation, but that’s going to be some duplication we can live without.

Enter the Decorator Pattern.

With the Decorator Pattern, we can build an object that accepts a MovieApiInterface object in its constructor and provides a uniform, higher-level way to interact with our lower-level API.

Again, this is basic code to get you thinking, not optimized production-ready code.

class SpecificMovieFinder implements MovieApiInterface
{
    protected $movie_api;
 
    public function __construct(MovieApiInterface $movie_api)
    {
        $this->movie_api = $movie_api;
    }
 
    public function getVideos()
    {
        return $this->movie_api->getVideos();
    }
 
    public function getRentedMovies()
    {
        $movies = $this->getVideos();
        $rented_movies = array();
 
        foreach ($movies as $movie) {
            if ($movie->status == 'rented') {
                $rented_movies[] = $movie;
            }
        }
 
        return $rented_movies;
    }
 
    public function getAvailableMovies()
    {
        $movies = $this->getVideos();
        $available_movies = array();
 
        foreach ($movies as $movie) {
            if ($movie->status == 'available') {
                $available_movies[] = $movie;
            }
        }
 
        return $available_movies;
    }
}

Now we have a simpler API that actually conforms to how we will use it in practice instead of how the API designers built the service. This is great! They get their way, and we get ours. Everyone wins.

So how do we use it?

// development configuration
$offline_api = new SpecificMovieFinder(new OfflineMovieApi());
 
// live configuration
$online_api = new SpecificMovieFinder(new LiveMovieApi());

Now we get the same functionality on top of multiple implementations, and as an added bonus, our SpecificMovieFinder still implements MovieApiInterface, allowing us to use it interchangeably with any other service that may need our API down the line!
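For example, any function typed against the interface will happily accept either the raw implementations or the decorated finder:

// Anything that expects MovieApiInterface neither knows nor cares
// whether it received a plain API or the decorated version.
function listAllTitles(MovieApiInterface $api)
{
    foreach ($api->getVideos() as $movie) {
        echo $movie->title, "\n";
    }
}

listAllTitles(new LiveMovieApi()); // the plain implementation works...
listAllTitles($online_api);        // ...and so does the decorated one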

My good friend Beau Simensen pointed out that unfortunately, my example is a little deficient. I’ll let his gist do the talking:

Thanks Beau!

What are some other places you can think of using this pattern?

About the Proxy Pattern

Edit: Added an update with a more concrete example further down.

I just wanted to share and provide a further explanation on the Proxy Pattern example I gave at the Ann Arbor PHP/MySQL User Group today.

Source: https://gist.github.com/4302634

Now, Dan had mentioned that this could be done through simple inheritance instead of via interface contract, and after some thought, I realized that I disagree.

Typically with an API object, you will initialize with a remote URL, and perhaps a token:

public function __construct($api_url, $auth_token)

But remember, our new Cache API proxy only takes an API object and a Cache object.

public function __construct($cache, $api)

This fundamentally changes the contract that the initial API object establishes, and as such, the proxy should NOT extend the original API implementation.

Part of doing OOP well involves managing the contracts we establish. Changing the rules of a function in a subclassed entity violates the Liskov substitution principle (as referenced in SOLID object-oriented design).

Now, you COULD subclass the MyApi class, and add the Cache control via setter injection, but personally I don’t think that implementation is nearly as clean as providing a proxy object to add the caching functionality.
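For reference, the shape of that proxy looks roughly like this. MyApiInterface, the fetch() method, and the $cache object are stand-ins for the actual classes in the gist:

// Wraps any MyApiInterface implementation and short-circuits remote
// calls when the cache already has an answer.
class CachingApiProxy implements MyApiInterface
{
    protected $cache;
    protected $api;

    public function __construct($cache, $api)
    {
        $this->cache = $cache;
        $this->api = $api;
    }

    public function fetch($resource)
    {
        $cached = $this->cache->get($resource);
        if ($cached !== null) {
            return $cached; // hit: skip the remote call entirely
        }

        $result = $this->api->fetch($resource); // miss: call the real API
        $this->cache->set($resource, $result);
        return $result;
    }
}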

p.s. I’ll leave QuickTime running, but I think it ate the recording of the presentation — sorry, guys! I’ll put the slides up soon, but I don’t know how much context they will provide given how light they were compared to the commentary that went along with them.

Update (12/07/2012):

It has been pointed out that my example was perhaps a little too contrived, so I think I found a better one.

Let’s say you’re building out a system, and you know you want logging, but you don’t know what sort of implementation you want for said logging. Given that, let’s start with our basic interface:

interface Logger
{
  public function info($message);
  public function debug($message);
  public function fatal($message);
}

Ok, great! So, because production schedules are tight, we decide to give a very basic implementation first:

class MyBasicLogger implements Logger
{
  public function info($message)
  {
    error_log('INFO: '.$message);
  }
 
  public function debug($message)
  {
    error_log('DEBUG: '.$message);
  }
 
  public function fatal($message)
  {
    error_log('FATAL: '.$message);
  }
}

Alright! We now have our basic logger implemented!

Oh, what’s that you say? You want to use Monolog in place of error_log() in production? Sure!

class MyMonologLogger implements Logger
{
  protected $monolog;
 
  public function __construct($monolog)
  {
    $this->monolog = $monolog;
  }
 
  public function info($message)
  {
    $this->monolog->addInfo($message);
  }
 
  public function debug($message)
  {
    $this->monolog->addDebug($message);
  }
 
  public function fatal($message)
  {
    $this->monolog->addCritical($message);
  }
}

Now we have two ways of logging messages. Then comes the constraint that we want to be emailed when fatal errors are logged. Your first instinct might be to extend MyMonologLogger and override fatal(), but then we couldn’t reuse that functionality with any other logger we ever build. How do we build this functionality in a way that we can reuse over and over again? The Proxy Pattern.

class MyEmailingLoggerProxy implements Logger
{
  protected $logger;
 
  public function __construct(Logger $logger)
  {
    $this->logger = $logger;
  }
 
  public function fatal($message)
  {
    $this->logger->fatal($message);
    mail('admin@example.com', 'A fatal error has occurred', $message);
  }
 
  public function info($message)
  {
    $this->logger->info($message);
  }
 
  public function debug($message)
  {
    $this->logger->debug($message);
  }
}

And now, no matter what we choose as our logging backend, now or in the future, we will always be able to have those fatal errors emailed to us simply by wrapping our chosen logging system in an instance of MyEmailingLoggerProxy.
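Wiring it up might look like this:

// Development: plain error_log() backend, still emailing on fatals.
$logger = new MyEmailingLoggerProxy(new MyBasicLogger());

// Production: swap in the Monolog-backed implementation; the proxy
// neither knows nor cares which backend it wraps.
// $logger = new MyEmailingLoggerProxy(new MyMonologLogger($monolog));

$logger->fatal('Database connection lost'); // logged AND emailed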

I hope this clears some things up!
