
Building a PaaS (Part 1)

Part 1 of a series exploring the process of building a custom PaaS using Docker on DigitalOcean. In this post we'll take a look at what we're trying to build, some of the necessary requirements and initial architectural plans, as well as outline the next steps for the coming week.

Intro

In recent months I've become a big fan of Google App Engine (GAE) and the power it provides in terms of instant scalability and managed infrastructure. In my work with Zipkick, however, we were lucky enough to win a $5,000 hosting credit from the rockstar crew over at DigitalOcean, so needless to say we'll be doing all our hosting there for the foreseeable future.

For the purposes of our early development phase, standard single-box servers have been plenty; truthfully, most of our servers are still on the smallest option DigitalOcean offers (512MB memory, 20GB SSD). As we begin to accelerate towards a beta launch, however, I've started exploring options to install or build an IaaS/PaaS on top of DigitalOcean. Like most developers, I've had my eye on Docker for a while now, and this seemed like a great chance to dive in and explore. Along the way I'll be documenting my journey here for you to comment on, critique, and hopefully learn from a few of my successes and failures. Without further ado, let's get started.

Requirements

Since the primary purpose of the first version of our PaaS will be powering the hosting for Zipkick, we'll use those server needs to guide our initial set of requirements. At the moment, those include:

  • Web-facing load balancer
    • Needs to be able to add and remove servers in response to traffic
    • This means we need to be able to build and store server images that can be applied to a new server instance upon creation, with as little spin-up time as possible
  • Internal networking between server types (web, database, cache, etc.)
    • For our initial setup, we won't worry about a fully private VPN, but instead rely on whitelisting connections
    • We'll also rely on the usual practice of setting environment variables on each server to hold connection details (IP address, username, password, etc.)
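On the consuming side, that environment-variable convention might look something like this (a minimal Python sketch; the variable names `DB_HOST`, `DB_PORT`, etc. are illustrative placeholders, not anything Zipkick actually uses):

```python
import os

def load_db_config():
    """Read database connection details from environment variables.

    Variable names (DB_HOST, DB_PORT, ...) are hypothetical; each
    server image would define its own set when it's created.
    """
    return {
        "host": os.environ.get("DB_HOST", "localhost"),
        "port": int(os.environ.get("DB_PORT", "5432")),
        "user": os.environ.get("DB_USER", ""),
        "password": os.environ.get("DB_PASSWORD", ""),
    }

if __name__ == "__main__":
    print(load_db_config())
```

The nice part of this approach is that the application code never needs to know where its dependencies live; the PaaS sets the variables at spin-up and the app just reads them.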

Initial architecture concepts

  • One "PaaS" software install
    • This will be the dashboard for the whole PaaS, including:
      • Setting up (and executing) load balancer rules
      • Saving/Loading machine images
      • Receiving notifications from other machines (e.g. traffic reports from the load balancer, activation messages from newly spun-up servers)
    • Will actively monitor server statuses (the Latin geek in me demands that be "stati", but for now the desire to be coherent is winning out...)
      • Via load balancer reports for any servers managed by the LB, and its own querying for all other servers
  • Load balancer
    • For early development, this will likely live on the same machine as the PaaS dashboard, though that's not required
    • This will send messages to the dashboard reporting traffic levels, any issues with servers, etc.
  • Servers
    • Types
      • Web (frontend and backend)
      • DB
      • Cache/In-Memory storage
    • On spin-up, request dependency connection info (e.g. the backend requests connection credentials for the DB and cache), which is then stored in the appropriate environment variables
      • Each type can have different dependencies, as managed via dashboard during image creation/editing
    • If necessary, the dashboard can "push" new connection info (e.g. when a DB server goes down and a new one has to be spun up)
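To make the dependency handshake above concrete, here's a sketch of how the dashboard might model it: each server type declares what it depends on, and a newly spun-up server asks for the connection info of those dependencies, returned in env-var form. All the type names, addresses, and credentials here are invented for illustration:

```python
# Hypothetical dependency registry: which server types depend on which.
DEPENDENCIES = {
    "web-backend": ["db", "cache"],
    "web-frontend": ["web-backend"],
    "db": [],
    "cache": [],
}

# Connection details the dashboard knows for each running service
# (placeholder values only).
CONNECTIONS = {
    "db": {"host": "10.0.0.10", "user": "app", "password": "secret"},
    "cache": {"host": "10.0.0.11"},
    "web-backend": {"host": "10.0.0.20"},
}

def connection_info_for(server_type):
    """Return the env-var-style connection info a newly spun-up server
    of the given type should receive from the dashboard."""
    env = {}
    for dep in DEPENDENCIES.get(server_type, []):
        for key, value in CONNECTIONS[dep].items():
            env["%s_%s" % (dep.upper(), key.upper())] = str(value)
    return env
```

A "push" of new connection info (say, after a DB failover) would just be the dashboard recomputing this mapping and re-sending it to affected servers.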

Next steps

While all this planning and writing is truly beautiful, it's... wait, no, screw that, I'm a developer; I'd take doing over planning any day (which does explain the state of several of my earlier projects...). As such, the next step is diving in. This coming week I'll be installing our current Zipkick server images into Docker containers and experimenting with the spin-up scripts. Tune in next week for the next segment in our gripping tale.
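As a preview of those spin-up scripts, injecting connection details into a container boils down to building a `docker run` command with `-e` flags. A sketch (the image name is hypothetical; the function returns the argument list without executing it, so it can be inspected or handed to `subprocess`):

```python
def build_docker_run(image, env, name=None):
    """Build a `docker run` argument list that injects connection
    details as environment variables via -e flags. Returns the list
    rather than executing it."""
    cmd = ["docker", "run", "-d"]
    if name:
        cmd += ["--name", name]
    for key in sorted(env):
        cmd += ["-e", "%s=%s" % (key, env[key])]
    cmd.append(image)
    return cmd

# Example (invented image name):
# build_docker_run("zipkick/backend", {"DB_HOST": "10.0.0.10"}, name="web1")
```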
