Don't ask to write quality code

Arseni Mourzenko
Founder and lead developer
February 7, 2018
Tags: quality, workplace

Four years ago, I complained about people who ask developers to write good code in a context where the company culture itself leads to bad code in the first place. I explained the necessity of root cause analysis (RCA), and the uselessness of individual suggestions to write quality code.

It is time now to expand on the subject.

A culture of quality is a curious beast, and not an easy one to tame. It is essentially impossible to obtain quality on simple request. Instead, quality should be nurtured until it becomes an inherent part of the culture of the company. A few companies I consulted for over the years had a curious approach to quality: from a marketing perspective, they were quality-driven, and so were their interviews. They expected the interviewees to put an emphasis on quality, and the employees themselves were expected to make it look like quality matters.

There are lots of ways to make it look like you know something about quality when you work as a developer. This is a non-exhaustive list of the techniques:

Many programmers know those techniques and use them on a regular basis. Companies whose inherent culture is to look quality-driven will attract more of those programmers, and will encourage this behavior in the ones who don't have it originally.

The unfortunate aspect of quality is that it is easy to confuse real quality with something which merely pretends to be quality. It happens with the products we buy: often, a product which looks solid at first turns out to be made of cheap components. With products, this problem can be mitigated by acquiring more knowledge. For example, I made the mistake a few years ago of buying a UPS from Infosec. Now that I have taken apart several UPSes from several companies and know a bit better what makes a good UPS, I won't fall into the same trap.

But a culture of quality can hardly be compared to the technical quality of the components used in a device. So what do we do to ensure that quality work is produced, if marketing quality to employees has such a counterproductive effect?

For simpler things, it can come down to money. For instance, if you develop a product which needs to be secure, just throw more money at security. If the interaction design of your product matters, use the money to hire better designers who will spend more time working on your product. Need reliability? More money will help buy more servers and hire people who specialize in the reliability aspects of a software product. Need performance? Money can get you that, too.

But you can't exchange money for quality in a straightforward manner. You can't simply pay extra to those developers who make it look like they care about quality. The only thing you will get in response is more of the behavior I described above.

So here it comes, the question of an objective evaluation of the quality of a product. I already considered this subject in the past, and emphasized the importance of an evaluation over a metric, and the difference between the programmers' perception of the quality of the code and the actual quality of the code base. In the same article, I explained why the evaluation of code quality by its authors is irrelevant when it comes to identifying the quality of the code.

If quality cannot be measured directly, there are ways to measure it indirectly, ways which translate into objective metrics such as the number of issues reported over a period of time, correlated with the number of releases, the velocity of the team, and the number and size of the features delivered during the same period. Here, the quality of the code is not evaluated directly; the metric depends instead on the quality of the code which preceded the period, and reflects the contextual quality of the developers' work.
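
To make the idea concrete, here is a minimal sketch of such an indirect metric. The exact formula, the field names, and the normalization below are my own assumptions for the illustration, not the ones any team I mention actually used:

```python
from dataclasses import dataclass

@dataclass
class Period:
    """Aggregated activity for one period of time, e.g. a sprint."""
    issues_reported: int  # issues reported by users or QA during the period
    releases: int         # releases shipped during the period
    story_points: int     # velocity: size of the features delivered

def defect_density(p: Period) -> float:
    """Issues per unit of delivered work, normalized by the number of
    releases. The trend over consecutive periods matters; a single
    data point says nothing about quality."""
    delivered = max(p.story_points, 1) * max(p.releases, 1)  # avoid division by zero
    return p.issues_reported / delivered

history = [Period(40, 2, 30), Period(31, 2, 34), Period(18, 3, 33)]
print([round(defect_density(p), 3) for p in history])  # a falling trend is the signal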

Another indirect metric would be the average time needed for new developers to become productive. For products at the higher end of the quality scale, this is a matter of days; notice that the size of the product is irrelevant (the domain complexity, however, is a factor to take into account).
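
The metric itself is just an average; what counts as a productive contribution is a decision the team has to make, not something a tool can decide. A sketch, with purely hypothetical dates:

```python
from datetime import date

def ramp_up_days(joined: date, first_productive_commit: date) -> int:
    """Days between a developer joining the team and the first commit
    the team agrees to call productive."""
    return (first_productive_commit - joined).days

# Hypothetical onboarding records: (date of arrival, first productive commit).
onboardings = [
    (date(2017, 3, 6), date(2017, 3, 9)),
    (date(2017, 9, 4), date(2017, 9, 12)),
]
average = sum(ramp_up_days(j, c) for j, c in onboardings) / len(onboardings)
print(f"Average ramp-up: {average:.1f} days")  # days, not months, on a healthy code base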

A set of metrics can be designed so that, combined, they make high quality code blossom, without encouraging the developers to look quality-driven through devious means. Combined is the important word here: no simple metric will produce quality.

Simple, non-systemic metrics produce the behavior I described at the beginning of this article. Or when they don't, they encourage behavior which is not particularly fruitful. For instance, measuring the compliance of a code base with style rules is very simple. But could you imagine a metric which would tell you how well design patterns were applied, and whether they were applied only in the locations where one needs them?
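
The easy half of that contrast fits in a few lines. Here is a sketch of a style-compliance metric, assuming the pycodestyle checker is installed and using hypothetical file names; no equivalent exists for the design-pattern question:

```python
import subprocess

def style_compliance(paths: list[str]) -> float:
    """Fraction of files passing the style checker: trivial to automate,
    unlike any judgment about the pertinence of a design pattern."""
    clean = 0
    for path in paths:
        # pycodestyle exits with a non-zero code when a file violates
        # the configured style rules.
        result = subprocess.run(["pycodestyle", path], capture_output=True)
        clean += result.returncode == 0
    return clean / len(paths) if paths else 1.0

print(style_compliance(["app.py", "models.py"]))  # hypothetical files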

Example

A few months ago, I helped a colleague set up such a system. A former developer, he was the project manager of a team of six developers. Quality suffered, and basic encouragement to write quality code had no effect. My colleague supposed that the low quality was due to the developers working under pressure, but when he managed to lower the pressure, the quality remained low.

We started by designing together a system containing twenty-six metrics. Some were gathered automatically on every commit or every night. Some were designed to be the outcome of regular team meetings, collected by averaging the notes given by the developers themselves. Some were very simple; others were quite complex to understand and to gather. The whole system was shared with the team, and a few meetings ensured that everybody understood those metrics and was happy with them. At the end of those meetings, four metrics were dropped at the request of some team members.
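
As a sketch of how the two kinds of metrics can be merged into one report (the names and values below are hypothetical illustrations, not the actual metrics we used):

```python
def combine_metrics(automatic: dict[str, float],
                    meeting_notes: dict[str, list[float]]) -> dict[str, float]:
    """Merge metrics gathered automatically (on every commit or nightly)
    with metrics obtained by averaging the notes the developers give
    during regular team meetings."""
    scores = dict(automatic)
    for name, notes in meeting_notes.items():
        scores[name] = sum(notes) / len(notes)
    return scores

# Hypothetical values: two automatic metrics, one meeting-sourced metric
# rated from 0 to 10 by each of the six developers.
print(combine_metrics(
    automatic={"build_success_rate": 0.96, "branch_coverage": 0.71},
    meeting_notes={"perceived_code_clarity": [6, 7, 5, 8, 6, 7]},
))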

The next step was to choose the reinforcement mechanism. Money was out of the question: French culture and law make it practically impossible to reward excellence among employees. Instead, I suggested playing on reputation. The developers seemed slightly demotivated but quite competent, so simple reputation within the team could work, unlike in teams where programmers view their job as a mere means to earn money.

Finally, RCA helped us understand some of the reasons the developers were not producing quality code. One of the reasons was that although everyone claimed to know the SOLID principles, they seemed to have never seen code following those principles outside the very basic examples from books. Showing them those principles in practice was a huge boost. In the same way, it appeared that nobody was really familiar with design patterns, and nobody could really claim the role of a software designer: after all, they were developers, and were hired as developers, not designers. I helped my colleague spot the basic issues with the current design, and the company is preparing to hire a designer to fix the remaining problems.
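
To give an idea of the kind of demonstration that goes beyond book examples (this is an illustration, not the actual code we showed), here is a tiny sketch of the dependency inversion principle, one of the SOLID principles:

```python
from abc import ABC, abstractmethod

# Before: the report can only ever go to standard output; adding a new
# destination means modifying, and possibly breaking, this class.
class HardwiredReport:
    def publish(self) -> None:
        print("report body")

# After, applying the dependency inversion principle: the report depends
# on an abstraction, and new destinations are added without touching it.
class Output(ABC):
    @abstractmethod
    def write(self, text: str) -> None: ...

class ConsoleOutput(Output):
    def write(self, text: str) -> None:
        print(text)

class Report:
    def __init__(self, output: Output) -> None:
        self._output = output

    def publish(self) -> None:
        self._output.write("report body")

Report(ConsoleOutput()).publish()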

Over the past two months, the metrics showed an increase in quality. No, it's not an increase from 5.92 to 9.27, or whatever digits you put instead; it is rather an increase from “has severe issues” to “you've got issues” on the scale from the article I already quoted above. If they don't screw up the hiring of the software designer, I'm sure they will quickly move to “minor concerns.”

A nice aspect of all this is the happiness of the developers themselves, which jumped right after we discussed the metrics and started deploying them (a direct consequence of the Hawthorne effect), but also kept increasing over time. The happiness itself was measured from some of the twenty-two metrics, including:

Figure 1. The happiness of the developers. The dashed part of the curve corresponds to the period prior to the collection of the metrics; it is a sloppy and unscientific representation of the impression we got from the developers themselves, as well as the project manager's impression of the overall mood of the team. This is not how you should collect data.