
Official version control, competing systems and free market

Arseni Mourzenko
Founder and lead developer, specializing in developer productivity and code quality
February 24, 2016

In many corporations, version control is organized around an operations department in charge of the official system, and, depending on the company, developers are either forced to use this system or strongly encouraged to do so.

For instance, if SVN is the version control of choice, then all teams should use SVN for all projects.

The reasoning behind those policies is, I admit, rather solid. Having one tool used by everyone is expected to lead to:

  • Better quality of service (QOS). With the service managed by a dedicated team of specialists who know exactly what they are doing, they can focus on providing the best quality and reliability for all employees.

  • More savings. If the SVN server is used by ten people and costs $2,000 per month to maintain, it has a cost of $200 per month per user. If the same server is used by a thousand people, its cost drops to $2 per month per user—a significant difference. Or we can do it the other way around: if the company is willing to spend $20 per month per developer for version control without focusing on savings, the operations team would get $200 per month if the server is used by ten developers, or $20,000 per month if it is used by all thousand employees. More money means more skilled people spending more time improving the service, meaning a better QOS.

  • Better integration. If source code moves from one project to another or is shared between projects, the process is painless: svn mv gets the job done.

  • Better audit. The company can keep all the source code in one place, track who can access sensitive code, and quickly revoke a disgruntled employee's access before firing him.

  • Better backup management. This is simply QOS applied to backups.
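The savings argument above is simple arithmetic; a minimal sketch, using the figures from the example (purely illustrative):

```python
def cost_per_user(monthly_cost, users):
    """Monthly cost of a shared service, split evenly across its users."""
    return monthly_cost / users

# The $2,000/month SVN server from the example above:
print(cost_per_user(2000, 10))    # 200.0 per user with ten users
print(cost_per_user(2000, 1000))  # 2.0 per user with a thousand users
```

Read the other way around, the same relation gives the operations budget: a fixed $20 per user per month yields $200 with ten users and $20,000 with a thousand.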

All this should be true in some companies, but it's not the case everywhere. What I've seen in practice is that centralized version control is often poorly managed, leading to regular interruptions of service. The most terrible case I've heard of, from a colleague, was a company where the TFS server was down at least once per week, for a few hours at a time. This was so annoying that he ended up creating a client which would take source code and share it with his peers through Windows shared network folders (security policies wouldn't permit running any custom server service within the company, and most ports were closed anyway). The app included basic versioning and merging, so it could actually serve as an elementary replacement for TFS. Unsurprisingly, a few months later, the internal use of the tool was forbidden by the boss “for security reasons”, so the developers were back to sending code through e-mails when TFS was down.

Most cases I've seen were not that extreme, but still, the QOS was far from perfect. If version control is down for even an hour per month, preventing one hundred employees from working, this means not only 100 paid hours wasted, but also loss of focus, frustration and, inevitably, decreased productivity. As I recall, when discussing those issues with developers, they usually admitted that “if only they could host version control themselves”, things would get better.

I'm not sure they are right. Hosting version control is not an easy task, and I believe that frustrated developers largely underestimate its complexity. It is one thing to host SVN on a personal server, and a very different thing to ensure high QOS and reliable data protection for dozens of developers. But the fact that the task is difficult doesn't mean it can't be done.

What I suggest is that instead of relying on a single version control system, companies should authorize teams to set up their own version control services. Teams should probably be advised and/or audited on a regular basis, especially when it comes to security and backups, but they should at least have the possibility to compete with the official, or better to say “historical”, version control.

This would lead to several benefits:

  • Developers could choose a proper system based on the specifics of their project. For instance, if the official version control is SVN, it could make perfect sense to use Git for a geographically distributed team where some members regularly work without an internet connection (for instance, on a plane). It could also make sense for a team working on a .NET app to pick TFS, which has slightly better integration with Visual Studio.

  • Developers who use their own version control could promote it to other teams. At this stage, things become really interesting. Instead of having a dedicated team of system administrators who don't care much about quality, we now have multiple systems competing with each other. This, in turn, encourages everyone to deliver a better service at a lower cost.

  • With regular audits, anyone could make an informed choice based on a central authority system which publishes trusted information about the characteristics of the different version control services available within the company. For instance, such a system could indicate that the Perforce server has an average response time of 250 ms and daily on-site and off-site backups, while SVN1 has a response time of 900 ms but hourly on-site and off-site backups, and SVN2 has a response time of 50 ms but only weekly, on-site-only backups. Given those metrics, it makes sense to use SVN2 for prototypes, but one would rather use Perforce or SVN1 for anything else, even at the price of slower response times.

    Those are just examples. The central authority may track more metrics, or present them in a different manner. I haven't thought the subject through, but I think the focus should be on clear visualizations for easy comparison, coupled with verbose technical data and information about the way those statistics were gathered, in order to ensure transparency and let nerds actually spend hours talking about the importance of a five-millisecond difference in the mean response time.
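To illustrate how such published metrics could drive the choice, here is a minimal sketch. The service names and numbers are the hypothetical ones from the example above, and the selection rule (prototypes only need speed; everything else needs off-site backups) is just one possible policy:

```python
# Hypothetical metrics a central authority could publish for each service.
services = {
    "Perforce": {"response_ms": 250, "off_site_backups": True},
    "SVN1":     {"response_ms": 900, "off_site_backups": True},
    "SVN2":     {"response_ms": 50,  "off_site_backups": False},
}

def candidates(prototype):
    """Rank services by response time; non-prototypes require off-site backups."""
    names = [s for s in services
             if prototype or services[s]["off_site_backups"]]
    return sorted(names, key=lambda s: services[s]["response_ms"])

print(candidates(prototype=True)[0])   # SVN2: fastest, fine for throwaway code
print(candidates(prototype=False)[0])  # Perforce: fastest with off-site backups
```

A real system would add backup frequency, uptime history and cost, but the principle stays the same: publish trusted numbers and let teams apply their own policy.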

But what about integration, that is, the issue of migrating a piece of code from one version control system to another? For me, this is a non-issue. If a team wants to change its version control, many popular systems have tools which make it possible to migrate the full history of changes from other popular systems. If, on the other hand, teams simply want to share common code which is reused in several projects, I think the approach itself is wrong: instead, there should be a common library distributed through packaging systems such as npm, pip or NuGet.

A few years ago, I would have found the idea of competing version control systems rather strange. For Pelican Design & Development, my choice was clearly SVN for a few good reasons, and I couldn't imagine why I would let another system be used alongside the official SVN. Today, I see such competing systems more as an opportunity to provide better and cheaper services which are better suited to specific projects and teams. For instance, if a team loves distributed version control systems, why force them to use SVN?

In a similar way, I find it perfectly reasonable for a team to pick a language which is not officially supported within Pelican Design & Development, and to add that support. The microservices approach I currently use makes interoperability seamless. The only thing to ensure is that there is good support for testing and automation, especially in the context of continuous deployment.