Dunning–Kruger effect

Arseni Mourzenko
Founder and lead developer
November 9, 2014
Tags: hiring, quality

The Dunning–Kruger effect consists in unskilled persons believing they are more experienced than they are, and in highly skilled persons underestimating their level, believing that tasks they find easy are equally easy for others.

I want to mention two situations where the Dunning–Kruger effect matters.

Code quality

When I worked as a freelancer, I often had to work on an existing code base. Before starting the job, I always asked the people who wrote it how good the code was. The goal wasn't to learn whether it was really good: the Dunning–Kruger effect makes the answer irrelevant. The actual goal was to compare the programmers' perception of code quality with the actual quality of the code base.

Once I had the answer, the next step was to actually assess the code quality. This is not rocket science, but it has two problems:

On the other hand, I can tell that one code base is good enough, and that another one has severe issues. The following terms can be applied to give an idea of code quality:

Note that this shouldn't be translated into metrics. For example, "minor concerns" is not 4/6. Numbers would pervert the idea, which is to give a valuation, not a metric. The difference between a valuation and a metric is similar to the difference between a systemic and an analytic approach.

With a metric, the point is to make a comparison or to set a threshold. A student with a specific mark can say that he's better than 62.4% of his peers in his class, or that he passed the final exam.

With a valuation, the point is to know what the current situation is and whether it should improve. A student who hears from his teacher that she is a little concerned about his recent drop in marks knows that he should make an additional effort. He doesn't care whether every other student is in a similar situation, and he doesn't know whether he will pass the final exam.
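The distinction can be sketched in code. In this minimal illustration (the category names are my own, not a fixed scale from any methodology), a valuation is an ordered label that answers a qualitative question such as "should this improve?", whereas a metric invites arithmetic and comparisons that the labels deliberately do not support:

```python
from enum import Enum


class Valuation(Enum):
    """Ordinal labels describing code quality; the names are illustrative."""
    SEVERE_ISSUES = "severe issues"
    RAISES_CONCERNS = "raises concerns"
    MINOR_CONCERNS = "minor concerns"
    GOOD_ENOUGH = "good enough"
    EXCELLENT = "excellent"

    def should_improve(self) -> bool:
        # A valuation answers a question about the current situation,
        # not where it ranks against other code bases.
        return self in (Valuation.SEVERE_ISSUES, Valuation.RAISES_CONCERNS)


# With a metric, the temptation is arithmetic: "minor concerns is 4/6",
# "our score rose 15% this quarter". The valuation offers no numbers,
# so there is nothing to average, compare, or threshold.
assert Valuation.SEVERE_ISSUES.should_improve()
assert not Valuation.GOOD_ENOUGH.should_improve()
```

The design choice is deliberate: by refusing a numeric representation, the type makes the perverse uses (percentile comparisons, pass/fail cut-offs) impossible to express.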

When I say that a code base has severe issues, it means that I'll advise my customer to run an audit to see what is going wrong, and to work with him to fix the root problem first; the changes to the code will wait, because making them right now would be a big mistake. At this stage, there is no comparison with any other team I have worked with (or any other team within the customer's company), and there are no thresholds.

So why do I ask the people who wrote the code base how good it is? Knowing how they perceive it helps me understand the situation, and asking several people helps even more. I was once assisting a team as a .NET consultant: former PHP programmers, they were building a product and encountering some .NET-related issues. I had a chance to talk with the manager, the lead programmer, and another programmer. The manager was confident that the code base was very good. The lead programmer and the other programmer weren't that sure, and admitted that it would have been better if they had had more time. It turned out that the code base had severe issues. Moreover, the team couldn't even get a server for version control from the company, and files were often overwritten. It was obvious that the lack of .NET knowledge wasn't the primary problem, so I explained the situation to the management and advised them to focus on the actual issues, thus reducing the time needed to develop the software.

During my career, every team with an unworkable code base told me that their code could have been better, but never said that it was actually unworkable. The Dunning–Kruger effect is one explanation; I have an alternative one: those teams are usually working under pressure for months on a project which is months late, and not only can they not tell the management that the situation is unbearable, they can't even admit to themselves that they are doing it wrong. Admitting it would force them to stop all development and refocus on the important problems; knowing their management, they know this smells like "You're fired."

Teams whose code base raises concerns usually overestimate the quality of their code. Some teams, on the other hand, are quite realistic in their perception of quality. They may lack pragmatism, motivation, or leverage¹, but they know something is wrong.

Teams with good code bases are either realistic in their perception of quality or underestimate it. In both cases, some teams will constantly attempt to improve the code base, while others will not bother. I believe this second group consists of teams of skilled but unmotivated developers: they won't introduce issues, because they are skilled, but they won't make any effort to improve the code base, because they don't really care. I also believe that projects run by very skilled and motivated developers may fall into this group if they have strict priorities. If the priority is to release a piece of software in a very short period of time, they won't target code excellence, because this would prevent them from releasing the product on schedule.

As for excellent code bases, I haven't seen one yet. I imagine that those code bases are written by very skilled developers who, because of the Dunning–Kruger effect, believe anybody can do the same. The teams are highly motivated and highly autonomous, with code quality being a priority set by the management.


Self-assessment during hiring

Some companies love asking candidates to assess their own level in a specific language or technology. This is a big mistake, for several reasons.

Another concern I have with this approach is that it conflicts directly with the goal of an interview or a pre-interview. Assessing the skills of a candidate is the job of the person doing the hiring. What the candidate thinks about his own level is completely irrelevant here.

It could be relevant in some rare cases. An unskilled coder who rates himself 7/10 in a language may have trouble during code reviews or may not integrate well into a team, especially a team of programmers more skilled than he is. But this trait is not hard to elicit during the interview. In fact, most such people will become very irritated when the interviewer highlights gaps in their knowledge of the language or technology. Another edge case is a person with severe self-esteem issues who can easily rate himself 3/10 while actually being quite good. Here again, an interview will inevitably surface the issue.

Thus, I see no case where asking a candidate to assess his own skills is useful.

1 Pragmatism, motivation, and leverage are the three traits a team needs to solve the issues it encounters. Pragmatism is needed to analytically assess an issue and find solutions; without pragmatism, the team may either not fully understand the issue or be unable to find a solution. Motivation matters because without it, the team will likely overlook the issues and focus on simple tasks. Leverage is essential if the team needs to withstand bad managers.