Seven lies to stakeholders

Arseni Mourzenko
Founder and lead developer
November 6, 2014
Tags: communication, management

These are points that are frequently the subject of misunderstanding or plain misinformation between IT personnel and stakeholders. Encountering one or several of them in a project is a good indication that communication should be reviewed.

Fallacy: if it works and brings you money, it's a success.

If a project works for now, that doesn't mean much. Business success at a given moment in time involves a huge number of factors. An example: as a startup, create a great product, but don't advertise it and don't talk to anyone about it. The project will bring in $0. Does that mean it's the IT team's fault?

One of the crucial aspects working against project success is technical debt. Businesspeople don't see it, and some of them don't even know what it is. This makes it particularly dangerous. A project with huge technical debt is undoubtedly a failure, since the longer it remains active, the more it costs, and the cost grows exponentially. The fact that at a precise moment it sells well is insignificant.

Fallacy: we can deliver all those features in six months.

Learn Agile, dude (or dudette)! The person who takes the risk of promising a fixed set of features in six months is shooting himself in the foot. Not only is estimation one of the most difficult tasks in development (while often being useless), it is also plainly stupid to believe that the requirements won't change radically during those six months.

If you work on government projects or other projects where months or years are spent doing architecture and design, and one of your colleagues leaves, call me, I'm looking for a job.

Fallacy: we, stakeholders, know exactly what we need, and we have all the details you need.

No, you don't. I have worked for dozens of people and companies and read hundreds of requests and descriptions. The least detailed ones contained two paragraphs of unreadable crap typed by a drunken guy trying to describe something he didn't understand himself. The most detailed ones contained up to fifteen pages of badly written, contradictory stuff nobody reads (including the author himself). Those descriptions were outdated a few days after the beginning of the project, and when they were updated, the updates were themselves outdated by the moment they were released.

Writing a correct WBS takes time. Lots of time. If you've spent years studying requirements gathering and years writing the requirements correctly for your new project, and you are now looking for a developer, then call me, I'm looking for a job.

Fallacy: writing quality code makes project development slow.

Code is written once and read again and again. Bad code is extremely expensive and hugely slows down a project. If a team of five expert developers can release a given product in four weeks, the same product will take a team of inexperienced programmers months and months. They may crap something out in a few weeks, but then they will spend months fixing it. It is not uncommon for projects to spend 95% of their time on maintenance, i.e. bug fixing, given that the cost of this fixing is rarely considered in advance by businesspeople.

Fallacy: we can concentrate at the same time on quality, speed and cost.

The project management triangle is clear. Within the same company, a quick, high-quality project requires money. The quality of a quick and cheap project will suffer. If high quality is required but there is no money, the project will be slow.

Then there is productivity involved, as well as a bunch of other factors. There is money wasted, and there are people who are simply not reliable enough when it comes to releasing a project. There are communication issues and management issues. There are internal conflicts. There are poorly evaluated risks. All of this can raise or lower the three points of the project management triangle at the same time.

Nevertheless, there is no such thing as a low-cost, high-quality project done fast. There are just companies which are slightly, or rather hugely, more productive and trustworthy than others, as well as customers who are more thoughtful than others.

Fallacy: we don't need to automate this; the process can easily be done by hand.

Every repeatable process which is not automated is a source of human error. I stopped counting the number of manual deployments, manual updates, manual backups and manual everything else I've screwed up, sometimes with disastrous consequences. It's not my fault: it's the fault of the process. The longer and more monotonous it is, the easier it is to screw up.

Moreover, when done by hand, a repetitive process is often much slower, and by much, I mean millions or billions of times slower. When it's slow, it won't be done frequently. In the case of deployment, that means there will be no continuous delivery.

I have worked on projects where a completely manual deployment could take up to four hours. That's four hours somebody could have spent doing something useful instead.

Humans are bad at repetitive tasks. Computers are excellent at them. There is not a single reason to do processes such as deployment manually.
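To make the point concrete, here is a minimal sketch of what scripting a deployment can look like, in Python. The step commands are hypothetical placeholders, not a real pipeline: substitute your own test runner, build tool and upload command.

```python
import subprocess

# Hypothetical deployment steps; in a real project these would be
# your actual test, build and upload commands.
STEPS = [
    ["echo", "running tests"],
    ["echo", "building artifact"],
    ["echo", "uploading artifact to the server"],
]

def deploy(steps):
    """Run each step in order, stopping at the first failure.

    Unlike a human, this script never skips a step, never runs the
    steps out of order, and never forgets one at 2 a.m.
    """
    for cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            raise RuntimeError(f"deployment step failed: {cmd}")
    return len(steps)

if __name__ == "__main__":
    print(f"{deploy(STEPS)} steps completed")
```

Once a process is captured this way, running it costs seconds instead of hours, so it can be run on every commit rather than once a quarter.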

Fallacy: we're at 90%.

This answer to a question about the team's progress makes no sense and is usually wrong. The only meaningful answer a team can give to stakeholders is:

“Look at our board: we've just finished this and that, and there are currently those tasks in our backlog.”

“90% done” is indicative of a lack of process and a lack of precise metrics of progression. “I feel that 90% of the work is done” is particularly misleading: the question is not about what someone feels, but about what is objectively done and what remains. Often, there is a correlation with another feeling: the feeling that a feature is implemented, while it is unstable and was never tested.

Even if the 90% weren't just a feeling, it would remain error-prone, since it omits one thing: the backlog evolves. If 900 cases were done over four and a half months and there are 100 cases in the backlog, it doesn't mean that the team will finish the project in two weeks. It only says that there are 100 cases in the backlog at this moment in time. Nothing less, nothing more. Maybe the stakeholders will add hundreds of cases next week, or maybe the team will find a serious issue requiring heavy changes to what was already done: either way, the project will take much longer than two weeks. Or maybe businesspeople will decide to terminate the project, in which case the team will spend the next two weeks working on stuff that will simply be thrown away.
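The arithmetic behind the naive forecast, and the way it breaks, can be spelled out in a few lines of Python; the numbers are the ones from the example above, with four and a half months rounded to 18 weeks.

```python
def naive_weeks_remaining(cases_done, weeks_elapsed, backlog):
    """Project remaining time from past throughput alone."""
    rate = cases_done / weeks_elapsed  # cases closed per week
    return backlog / rate

# 900 cases over 18 weeks is 50 cases/week, so 100 remaining
# cases naively means two weeks.
print(naive_weeks_remaining(900, 18, 100))  # 2.0

# But the backlog is not fixed: if stakeholders add 300 cases next
# week, the very same formula now says eight weeks, and the project
# was never "90% done" in any meaningful sense.
print(naive_weeks_remaining(900, 18, 400))  # 8.0
```

The formula is only as stable as its inputs, and the backlog input is the one thing stakeholders change every week.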

The 90%-done fallacy is based on two mistakes: that one can predict the future, and that there is a way to precisely measure what is done and what is planned. Being nothing but misleading, it should be avoided.