YAGNI principle in action

Arseni Mourzenko
Founder and lead developer
May 15, 2020
Tags: rant, productivity, quality, refactoring, agile

Discussing YAGNI with colleagues, I often find myself in a situation where they give me a concrete example where they were very happy to have thought about a given feature or aspect before it was really needed. As those discussions always follow the same logic, I believe it would be easier to just write an article and refer to it later on.

Yesterday, a colleague came to me with a very concrete example where he found that applying YAGNI wasn't a good idea. He was working on a web application. The application grew naturally: it started small, with about a dozen users who were friends of the developer, and a few years later counted about two thousand active users. This growth led to two issues my colleague had to deal with: bandwidth usage and the confidentiality of the data. When the application was small, things were easy: it was hosted on a single EC2 instance and nobody really cared how the information was stored and transmitted. But more users meant that a few extra zeros appeared on the Amazon invoice, and that people started wondering what was really happening with the data under the hood.

This led, among other things, to two purely technical measures. First, the resources had to be compressed; previously, text resources such as HTML or JSON were transmitted uncompressed, wasting bandwidth. Second, the application had to be moved from HTTP to HTTPS.
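To give an idea of what was at stake, here is a minimal sketch using Python's standard `gzip` module of how well a typical repetitive JSON payload compresses. The payload is invented for illustration; in a real deployment you would enable compression in the web server or framework rather than calling `gzip` by hand.

```python
import gzip

# A typical JSON payload is highly repetitive, so it compresses well.
# This payload is a made-up example, not data from the actual application.
payload = '{"id": 1, "name": "user", "active": true}' * 500
raw = payload.encode("utf-8")

# Compress the response body the way gzip Content-Encoding would.
compressed = gzip.compress(raw)

# Fraction of the original size actually sent over the wire.
ratio = len(compressed) / len(raw)
```

On repetitive text like this, the compressed body is a small fraction of the original, which is exactly the bandwidth that was being wasted.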

As I accompanied my colleague through the migration to HTTPS, knowing a bit more than he did about, for instance, the HTTP headers he needed to use to increase security, we started talking about how difficult it was to make those two changes which were expected to be so simple. He complained, in fact, that he had spent three days implementing compression, and had difficulties with HTTPS as well. “If only I had thought about compression and HTTPS from the very start of the project!” he told me.
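For reference, the kind of headers I had in mind looks like the sketch below. This is my own illustration, not my colleague's code, and the exact values (the `max-age`, the frame policy) have to be tuned per application.

```python
# Headers commonly added when moving a site to HTTPS.
# The values here are illustrative defaults, not universal recommendations.
SECURITY_HEADERS = {
    # Tell browsers to keep using HTTPS for this host.
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    # Refuse MIME-type sniffing of responses.
    "X-Content-Type-Options": "nosniff",
    # Disallow embedding the site in frames on other origins.
    "X-Frame-Options": "DENY",
}

def apply_security_headers(headers: dict) -> dict:
    """Return a copy of the response headers with the security headers set."""
    result = dict(headers)
    result.update(SECURITY_HEADERS)
    return result

# Usage: decorate an ordinary response before sending it.
response_headers = apply_security_headers({"Content-Type": "text/html"})
```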

Well, the “if only” reasoning is quite the opposite of YAGNI, and it is in part the “if only” logic which made those two changes so difficult in the first place.

I wondered why it was so difficult for him to add compression. I mean, it's just a bunch of lines to add, maybe some configuration to change. So he showed me the details, and indeed, the contraption wasn't easy to tame. Analyzing the situation, we identified three difficulties: two third-party libraries involved in serving the content, each bringing its own layer of complexity, and custom code which had grown into four different ways to serve content.

It was now time to ask ourselves why those difficulties existed in the first place.

Third-party libraries and other fashionable things

The first two were simple. The choice to use those third-party libraries was made in order to simplify the project by delegating the work to them. This is a great thing when the benefits of a library outweigh its cost. In this case, however, those libraries just added an extra level of complexity and required extra code. I'm not saying that those libraries are perfectly useless; what I assert, however, is that in this project, they were overkill. Both libraries were large and tried to handle a tremendous number of cases and situations. The project didn't use even 1% of those libraries, while incurring their full cost. This is a clear violation of YAGNI.

What my colleague should have done, originally, is start small. A project which will be used by a dozen people doesn't need the fancy frameworks and libraries which are fashionable at the moment. Start simple, and add dependencies only when you clearly see how they would benefit the project.

It seems a simple rule, but I notice that more and more junior programmers imagine a new project as an orchestrator of fancy libraries and technologies, rather than a tool which does a given set of tasks. In some communities, this tendency is beyond sanity. It seems today that every web application should start with Angular or React or any other fancy thing, and necessarily ship with dozens or even hundreds of third-party libraries from day zero. The result is clumsy applications which perform poorly and require megabytes of bandwidth for the most elementary thing.

If third-party libraries are not enough, infrastructure seems to go the same way too. A few days ago, while discussing one of my projects with another colleague, I mentioned that the project has a simple home-made queue system. As soon as he heard the word “queue,” he started telling me that I needed to set up an Apache Kafka cluster. This is like seeing someone using System.Collections.Generic.Queue<T> in the source code, and claiming that he really needs to stop doing that and start relying on RabbitMQ.
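To make the point concrete: a home-made in-process queue can be as small as this. It's a hypothetical sketch, not the actual code from my project, but it shows the scale of the thing being replaced by a Kafka cluster.

```python
from collections import deque

class SimpleJobQueue:
    """A tiny in-process FIFO queue: often all a small project needs."""

    def __init__(self):
        self._jobs = deque()

    def push(self, job):
        """Append a job to the back of the queue."""
        self._jobs.append(job)

    def pop(self):
        """Take the oldest job, or None when the queue is empty."""
        return self._jobs.popleft() if self._jobs else None

# Usage: jobs come out in the order they went in.
queue = SimpleJobQueue()
queue.push("resize-image")
queue.push("send-email")
first = queue.pop()
```

When the day comes that you need persistence, multiple consumers, or delivery guarantees across machines, you'll know it, and you'll know exactly what to replace.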

At this point in the discussion, most programmers respond: “Well, that's all nice, but I obviously wouldn't start with, say, an in-memory dictionary to cache some data, when I already know from the beginning that I need a distributed cache solution. I don't want to rewrite my home-made solution later, and neither should you.”
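For the record, the in-memory dictionary cache that this objection dismisses is itself only a handful of lines. Here's a hypothetical sketch, with a per-entry expiry thrown in:

```python
import time

class DictCache:
    """An in-memory cache with per-entry expiry: a stand-in until a
    distributed cache is demonstrably needed."""

    def __init__(self, ttl_seconds=60.0):
        self._ttl = ttl_seconds
        self._data = {}

    def set(self, key, value):
        """Store a value; it expires ttl_seconds from now."""
        self._data[key] = (value, time.monotonic() + self._ttl)

    def get(self, key, default=None):
        """Return the cached value, or default if absent or expired."""
        entry = self._data.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._data[key]
            return default
        return value

# Usage: a hit within the TTL, and a key that was never cached.
cache = DictCache(ttl_seconds=60)
cache.set("user:42", {"name": "alice"})
hit = cache.get("user:42")
miss = cache.get("user:7")
```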

The fact is, in most cases, you think you need a given technology, but you don't know it for sure, and above all you don't know the details.

Here's an example. I spend a lot of my personal time designing REST services. I have a lot of them, some small, some large. In many cases, when I start working on a new service, I have no objective information about its scale, nor do I know all the features it will have. So I start small: if the service needs to store data, I don't automatically provision a bunch of machines for a PostgreSQL or MongoDB cluster. Instead, I use plain files, often coupled with a very lazy approach where, for instance, I load an entire collection of objects when I need just one. This is by no means a good example of performance, but I simply don't care, because a service which processes twenty queries per hour won't be my next headache, even if every request loads one megabyte of data instead of a few bytes.
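To illustrate the lazy approach, here is a sketch of what such a plain-file store could look like. The names and structure are invented for the example; the point is how little code it takes.

```python
import json
import tempfile
from pathlib import Path

class FileCollection:
    """Naive storage: the whole collection lives in one JSON file and is
    reloaded on every access. Fine for twenty requests per hour."""

    def __init__(self, path):
        self._path = Path(path)

    def load_all(self):
        """Read the entire collection from disk (empty list if no file yet)."""
        if not self._path.exists():
            return []
        return json.loads(self._path.read_text(encoding="utf-8"))

    def save_all(self, items):
        """Rewrite the whole file with the given items."""
        self._path.write_text(json.dumps(items, indent=2), encoding="utf-8")

    def find(self, predicate):
        # The lazy part: load everything, then filter in memory.
        return [item for item in self.load_all() if predicate(item)]

# Usage: store two users in a temporary file and look one up.
store = FileCollection(Path(tempfile.mkdtemp()) / "users.json")
store.save_all([{"name": "alice"}, {"name": "bob"}])
bobs = store.find(lambda u: u["name"] == "bob")
```

The whole “data access layer” is one class, which is precisely why rewriting it later costs almost nothing.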

Later on, some services grow and need a real database. Some of you may say: “Hey, see, I told you so; now you're rewriting the data access layer instead of doing something useful.” But you forget that it took a few minutes to write the “data access layer” in the first place. For the sake of simplicity, I'm ready to sacrifice a few minutes of my time. You also forget another important aspect: since I ran the service in production for some time, and since I now have a clear understanding of how the service is used, I know exactly how to structure the database, and what would make more sense: a relational database, or some NoSQL solution. That vision saves me hours on small projects, and could save man-years for big corporations working on large projects.

Other services stay small, and are perfectly happy storing their data in plain JSON or XML files. Obviously, this is not as sexy as Cassandra, or Elasticsearch, or whatever else is fashionable today, but it has one thing going for it: it solves a given problem in the simplest way. And while you're scratching your head about how to migrate your Elasticsearch cluster from version 6 to version 7 and labeling it a chore in Git, I'll be working on something which brings actual value.

You aren't gonna need this piece of code

If dependencies weren't enough, my fellow programmer was also adding extra code to handle cases which would never exist. As I mentioned, he ended up with four different ways to serve content. One of them dealt with large JSON documents. Originally, there was a suspicion that some very specific element in the application would grow, and so its JSON representation would become quite large. I spent at least half an hour trying to figure out what exactly “large” meant. It was so confusing that my colleague obstinately refused to give me even a rough estimate, but he ended up admitting that we were talking not about gigabytes, nor hundreds of megabytes, but possibly about a few megabytes of text.

His concern, then, was that a large JSON response would be problematic for memory usage on the server, and would allow an attacker to cause a denial of service by simply filling all the memory on the server. So there was custom code to serialize the object to JSON in parts, and then flush those parts to the client as they were generated. The result was a piece of about 200 lines of code. Having worked on chunked upload transfers for Flask, I must admit that 200 LOC doesn't sound excessive for this problem.
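The idea behind such code can be sketched in a few lines; this is my own simplified illustration, not the colleague's 200-line implementation, which also had to deal with flushing and error handling.

```python
import json

def stream_json_array(items):
    """Serialize a sequence as a JSON array piece by piece, so the whole
    document never has to sit in memory at once."""
    yield "["
    for i, item in enumerate(items):
        if i:
            yield ","
        # Each element is serialized on its own and handed to the client.
        yield json.dumps(item)
    yield "]"

# Usage: the chunks concatenate into a valid JSON document.
chunks = list(stream_json_array({"id": n} for n in range(3)))
document = "".join(chunks)
```

Most web frameworks will happily take such a generator as a response body, which is what makes the technique attractive; the 200 lines come from everything around this core.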

However, the problem itself simply doesn't exist. Despite the growth in popularity of the application, the largest JSON response for the past month, according to the logs, is 400 KB. Similarly, right now, the bottleneck on EC2 is not the memory, which is used at 25% on average, with occasional peaks at 40%, but the CPU (the application does perform some interesting stuff which requires computing power and could easily cap the CPU at 100% for a while). And despite the possibility of such a denial of service, there were no identified attempts to perform one; the few attacks that did occur targeted the users' passwords. This one is pretty funny, given that the application doesn't store users' passwords.

Another part dealt with dates. The web application provided an API which was used not only by the application itself, but also by a report service. In order for this report service to work correctly, a separate JSON serialization mechanism was built to handle the idiosyncrasies of the textual representation of timezones. I'm sad to say that despite its ugliness, this part has a reason to be here, and if my colleague had been following YAGNI, the result would still be the same.

A third part dealt with Unicode. There was one specific place in the application where Unicode characters were accepted, and only there: users could put Unicode characters in their names. The problem with Unicode was a mix of database quirks and my colleague's understandable lack of experience with Unicode. Now the funny part. The database quirks were solved in the next version of the database. But having this partial Unicode support was a choice my colleague made four months before the release of that new version of the database. The first user with Unicode characters in his name was registered… seven months later. Had my colleague waited, he could have simply upgraded the database, avoided any hacks in the source code, and still welcomed the new user with a Unicode name.


The impression that you were lucky to have thought about a given feature or technology months before you really needed it is misleading. The fact that the feature or technology would be difficult to implement right now doesn't mean you were right to implement something for needs which may appear in the future; rather, it means your project is way too complex, likely because you didn't rely on the YAGNI principle enough.

Cramming in everything which is fashionable and designing software to do what you might want it to do in the future is not the correct way to create software products. As repeated experience shows, it results in bloatware, in difficult-to-maintain solutions, in products which are far more complex than they need to be.

If you can't clearly explain why you need this or that thing, forget about it. Move on to a feature you know you need right now. Use your time to refactor code. To test it. And only when you know absolutely, objectively, that you need a message queue service, a database, a caching solution, a custom way to handle dates, or a fancy library which would save you hundreds of lines of code while costing a few hours of programmers' time now and a few minutes of maintenance per year later, only then do what needs to be done.