A few words about the Solange project
For nearly nine months, I have been working on a project called Solange, a DevOps platform which makes it painless to deploy new projects to production. Surprisingly, I haven't written much about it, not because it's a secret (the source is publicly available), but mostly because it is difficult to start describing a project which hasn't yet reached its final shape.
I think now is a good time to describe what has been done, and what to expect in the future.
Creating virtual machines
You know Chef and Puppet, right? Well, one part of my project is very similar to those two products. It lets you describe a virtual machine or a set of virtual machines, and then deploy them across the data center. Without going into the details of Chef and Puppet, I will instead describe how my own tool is organized.
Profiles and instances
The project uses profiles to describe groups of similar machines, and instances to describe the details of a specific machine.
For instance, every virtual machine which hosts MongoDB with replica sets shares the same profile. This blog relies on two MongoDB machines working in a pair, the redirection website uses a different pair of MongoDB machines, the Iris service uses yet another pair, etc. They all use the same profile, but six different instances, one per VM.
This model makes it easy to define the general frame, such as “PostgreSQL with load balancing”, “Internal SVN server with mirroring” or “Public read-only SVN server”, and then set the specifics inside the instances, for example “the backup directory for a PostgreSQL database” or “the number of mirrors for an internal SVN server”.
Both profiles and instances use JSON as their primary way to configure basic elements, such as the list of apt packages to install, and bash or Python scripts (or any executable) to configure things which are not supported through the JSON configuration. This makes it easy to change basic settings (such as the NFS backup path) by editing the JSON file, while keeping the power to do anything we need in bash.
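To give a rough idea of the model, here is a purely illustrative sketch of what a profile could look like; the actual file layout and key names used by Solange may differ.

    {
        "name": "mongodb-replica-set",
        "packages": ["mongodb-server", "ntp"],
        "scripts": ["configure-replica-set.sh"]
    }

An instance derived from that profile would then pin down the machine-specific details, for example:

    {
        "profile": "mongodb-replica-set",
        "hostname": "blog-mongodb-1",
        "nfsBackupPath": "/backup/blog/mongodb"
    }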
Current state
The project is actively used internally, and the whole infrastructure relies on it. This blog is hosted on virtual machines deployed with a single command. The same applies to our source servers. Actually, every website and web service hosted at Pelican Design & Development uses the project, including the PXE machines which were used to deploy Linux to the server which hosts the virtual machines.
I don't consider the project to be ready for the public: while it is usable, it severely lacks documentation and tests. Because the project grew organically while I was learning the basics of Linux, Python and Node.js, it is now difficult to test and document it properly.
Still, the project is usable, so you're free to play with it if you want.
Further development
Over the next few months, the goal is to make this platform less manual. While automation is here, configuration still has to be changed by hand, and it is still necessary to type the command which deploys or destroys a virtual machine.
What I need instead is to be able to tell the system that, for a given project, I need, for instance, two MongoDB machines working as a replica set and two small instances of application servers. I don't care what the configuration should be: what I care about is that I have a given URI pointing to the project within the SVN repository, and I want it to run in production. Which IP addresses the system should choose or which ports should be opened is not my problem: those concerns belong to the stage where I define the original profiles.
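To illustrate the direction, a request to the system might eventually look something like the following; this is a hypothetical sketch, not an existing format, and the profile names and repository URI are placeholders.

    {
        "project": "svn://svn.example.com/projects/my-app/trunk",
        "machines": [
            { "profile": "mongodb-replica-set", "count": 2, "size": "small" },
            { "profile": "application-server", "count": 2, "size": "small" }
        ]
    }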
This represents a huge amount of work, but it will also make it extremely easy to deploy projects. Ideally, it could even make it possible to sell computing power to other companies. For instance, what if a small company had a choice: its programmers could either spend days or weeks understanding Amazon AWS or Windows Azure, or get all the infrastructure they need in one click at Pelican Design & Development and start developing the project right now?
Using continuous deployment
While the previously described part of the project is essential, the goal is also to have a full continuous deployment platform. What I mean by that is that going to a website, clicking a bunch of buttons and watching the project deploy itself into the wild is nice, but not enough. With the number of revisions an average project can have per day, it is simply out of the question to do those things by hand.
Therefore, deployment should be automated even further. It's not a matter of telling the system that I need two failover app servers and two replica sets for a given project in an SVN repository, but rather telling it: “Here is my project. Every time I commit, test the new revision and intelligently deploy it to production, ensuring that there will be no downtime and no information loss.”
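In other words, the whole declaration might eventually boil down to something as simple as this; again, a purely hypothetical sketch of a format which does not exist yet.

    {
        "project": "svn://svn.example.com/projects/my-app/trunk",
        "onCommit": ["run-tests", "deploy"],
        "constraints": ["no-downtime", "no-data-loss"]
    }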
Getting to this level of automation would be even harder. Nevertheless, I know this is doable, and I'm pretty sure I will get there, one feature at a time. The road will not be easy, but I know my goal and I know what I should do for the next few days; and that's all I need.