Don't use it, it's slow

Arseni Mourzenko
Founder and lead developer
November 6, 2014
Tags: performance, profiling

Recently, I received an e-mail like this from a colleague of mine, sent to a few people in the company:

“FYI, we have removed the SQL computed columns used when computing the gauges. The gain is impressive: the prices now load immediately, while before, they took up to fifteen seconds to load. Therefore, I recommend not using computed columns in SQL Server.”

The only acceptable reaction to that would be:

WHAT THE…! ARE YOU CRAZY, DUDE?!

Surprisingly, some companies have the opposite reaction and welcome such nonsense.

I’m starting to believe that there are people who simply don’t get performance and benchmarking, and that nothing will help them. For them, benchmarking amounts to changing code at random and checking whether the result feels faster.

This deep misunderstanding is particularly harmful on three levels.

The first to be harmed is the code base itself. Voodoo optimization often degrades the code base and causes important issues somewhere in the project. Often, those optimizations slow the product down significantly: unless the change actually fixes a bug or replaces an inappropriate algorithm with an appropriate one, a decrease in performance is the usual outcome.

The second to be harmed is the project itself. Voodoo optimization performed by inexperienced programmers usually takes time. Randomly changing code in order to move from “bad” to “good” can take days or weeks. Since there is no profiling, the process can only be random, so instead of working on the 4% of the code base which causes 50% of the performance issues, one works on 100% of the code base, i.e. 25 times more than needed. The lack of both proper tools and proper techniques makes things even more difficult. If we also consider that the “optimization” slows the product down and decreases the quality of the code base without bringing anything useful, it’s easy to see how harmful such tasks can be.
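Profiling is exactly what turns this guesswork into a targeted task: it shows which few functions actually cost something. As an illustration only, here is a minimal sketch using Python’s standard cProfile module; the load_prices function is a hypothetical placeholder for the real workload being investigated.

```python
# Minimal profiling sketch: measure first, then optimize what actually costs.
# load_prices() is a hypothetical placeholder for the real entry point.
import cProfile
import pstats


def load_prices():
    # Stand-in workload; replace with the actual code under investigation.
    return sum(i * i for i in range(1_000_000))


profiler = cProfile.Profile()
profiler.enable()
load_prices()
profiler.disable()

# Show the ten most expensive calls by cumulative time: this is where the
# small fraction of code responsible for most of the cost becomes visible.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```

With such a report in hand, the work concentrates on the handful of hot spots instead of the whole code base.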

The third to be harmed is the company. By accepting such practices, the company builds a culture based on myths rather than facts. When unverified, unfounded assertions of this type become an accepted practice, important issues will appear sooner or later, with dreadful consequences for the company’s ability to make reasonable choices. Anything may become a fact, and nonsense such as “I remember working with Python. It’s terribly slow. We should really use Java instead.” becomes a commonly accepted fact, since nobody questions assertions related to performance.

Not only do people stop questioning the assertion itself, they even stop thinking about the generalization. Since we don’t have any actual statistical data, we can’t tell what the context was. Somebody did an ad-hoc performance comparison of A versus B in a specific context, using specific tools, under specific circumstances. Given the lack of details, generalization occurs, and the accepted fact becomes “A is always, globally, slower than B.” It has nothing to do with the original comparison, but who cares?
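To see how much the context matters, here is a small illustrative benchmark (not taken from any real comparison) using Python’s standard timeit module: the same two approaches, looking elements up in a list versus a set, trade places depending on the size of the data and the number of lookups.

```python
# Illustrative only: which approach "wins" depends on the collection size
# and the number of lookups, i.e. on the context of the measurement.
import timeit


def lookup_in_list(items, needles):
    return sum(1 for n in needles if n in items)


def lookup_in_set(items, needles):
    s = set(items)  # pays an up-front conversion cost
    return sum(1 for n in needles if n in s)


for size, lookups in [(10, 1), (10_000, 1), (10_000, 1_000)]:
    items = list(range(size))
    needles = [(size - 1 - i) % size for i in range(lookups)]
    t_list = timeit.timeit(lambda: lookup_in_list(items, needles), number=200)
    t_set = timeit.timeit(lambda: lookup_in_set(items, needles), number=200)
    print(f"size={size:>6} lookups={lookups:>5}  list={t_list:.4f}s  set={t_set:.4f}s")
```

A blanket statement such as “lists are slower than sets” erases exactly the details that made the numbers meaningful.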

I try to imagine a scientist attempting to publish a report such as:

“I’ve done some research and found that eating ice cream every day increases the risk of a heart attack. I advise everyone to stop eating ice cream: a person who never eats ice cream cannot have a heart attack.”

Such nonsense would be impossible in the scientific community. Similar nonsense is common in many IT-related companies.

What would a correct performance report look like?

If we were comparing one technical choice to another, we could indeed find that alternative A is faster than alternative B. A report could be made from this observation. This report would mention:

- the precise context in which the comparison was made,
- the hardware and software configuration,
- the tools used and the methodology followed,
- the actual results, with enough statistical data to interpret them.

If the report lacks one of those elements, its credibility should be questioned, and the author can be invited to work on it a bit more. If the report lacks two or more of those elements, it is pretty useless and should be thrown away.
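As a sketch of what gathering those elements might look like in practice (illustrative only; the workload function is a placeholder, and the use of the Python standard library is an assumption, not a requirement), a measurement script can record the environment and the statistics together with the raw timings:

```python
# Illustrative sketch: capture the context of the measurement together with
# the numbers. workload() is a placeholder for the code being compared.
import platform
import statistics
import sys
import timeit


def workload():
    # Stand-in for the code under test.
    return sorted(range(100_000), reverse=True)


RUNS = 20
samples = timeit.repeat(workload, repeat=RUNS, number=1)

report = {
    "os": platform.platform(),
    "machine": platform.machine(),
    "python": sys.version.split()[0],
    "runs": RUNS,
    "mean_s": round(statistics.mean(samples), 6),
    "stdev_s": round(statistics.stdev(samples), 6),
    "min_s": round(min(samples), 6),
    "max_s": round(max(samples), 6),
}

for key, value in report.items():
    print(f"{key}: {value}")
```

A reader of such a report can at least tell where the numbers come from, how much they vary, and whether the conclusion applies to their own context.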