The Dunning–Kruger effect is the tendency of unskilled persons to overestimate their competence, and of highly skilled persons to underestimate theirs, believing that tasks they find easy are equally easy for others.
I want to mention two situations where the Dunning–Kruger effect matters.
When I worked as a freelancer, I often had to work on an existing code base. Before starting the job, I always asked the people who wrote the code base how good the code was. The goal wasn't to find out whether it was actually good: the Dunning–Kruger effect makes the answer unreliable. The actual goal was to compare the programmers' perception of code quality with the actual quality of the code base.
Once I had the answer, the next step was to actually assess the code quality. This is not rocket science, but it has two problems:
Subjectivity, which can be partially mitigated if one is careful. For example, there is nothing wrong with a C# code base that doesn't follow Microsoft's C# coding conventions, as long as the code is consistent. Here, style is subjective, while consistency is an objective criterion.
Unmeasurability, because quantifiable rules don't work. There is no way I can rate one code base 7/10 and another 3.5/10, because any algorithm I could build to produce those ratings would be easy to game and highly questionable.
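The consistency point above can even be checked mechanically. As a minimal sketch (the field-declaration pattern, the naming conventions and the sample snippets are illustrative assumptions, not taken from any real code base), a script could count how many private C# fields follow each competing naming convention and report how dominant the most common one is — style stays subjective, but mixing conventions is measurable:

```python
import re
from collections import Counter

# Hypothetical, narrow facet of consistency: the naming convention used for
# private C# fields. Which convention is "right" is subjective; whether the
# code base mixes several of them is objective.
FIELD_PATTERN = re.compile(
    r"\bprivate\s+(?:readonly\s+)?\w+(?:<[\w,\s]+>)?\s+(\w+)\s*[;=]"
)

def classify(name: str) -> str:
    """Classify a field name by the convention it appears to follow."""
    if name.startswith("m_"):
        return "m_prefix"
    if name.startswith("_"):
        return "underscore"
    return "plain_camel"

def consistency_ratio(source: str) -> float:
    """Share of private fields following the dominant convention (1.0 = fully consistent)."""
    names = FIELD_PATTERN.findall(source)
    if not names:
        return 1.0
    counts = Counter(classify(n) for n in names)
    return counts.most_common(1)[0][1] / len(names)

consistent = "private int _count; private string _name;"
mixed = "private int _count; private string name; private bool m_done;"
print(consistency_ratio(consistent))            # 1.0
print(round(consistency_ratio(mixed), 2))       # 0.33
```

A real check would walk the repository and handle more declaration forms, but even this toy version shows the difference in kind: it produces an objective observation ("two thirds of the fields deviate from the dominant convention"), not a quality score.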
On the other hand, I can tell that one code base is good enough while another has severe issues. The following terms can be used to give an idea of code quality:
Excellent: I wish all my code were that good. I would enjoy working on this code base, and especially with the people who wrote it. I could very probably learn a lot from them.
Good enough: if all the code in the world were at this level, the world would be a better place. I would enjoy working on this code base, possibly fixing a few problems I've noticed along the way.
Minor concerns: the code is good enough, but I noticed several points that should be improved during my engagement. The team will very probably be receptive to my advice and quickly learn how to avoid similar issues in the future.
You've got issues: there are points that should preferably be improved before making any change. Working on the code without resolving those issues first can be risky. The team may need a few weeks of training, or there may be issues with a manager who doesn't understand how software development works.
Has severe issues: there are issues that should be addressed immediately. Working on the code without resolving them first would be reckless. The team requires intensive training, and there are severe management issues.
Do you haz teh codez?: there is nothing I can do personally. The code base should be thrown away, and skilled people should be hired to replace the current team. The management doesn't understand how to hire programmers.
Note that this scale shouldn't be translated into metrics. For example, “minor concerns” is not 4/6. Numbers would pervert the idea, which is to give a valuation, not a metric. The difference between a valuation and a metric is similar to the difference between a systemic and an analytic approach.
With a metric, the point is to make a comparison or to set a threshold. A student with a specific mark can say that he's better than 62.4% of his peers in his class, or that he passed the final exam.
With a valuation, the point is to know what the current situation is and whether it should improve. A student who hears from his teacher that she is a little concerned about his recent drop in marks knows that he had better make an additional effort. He doesn't care whether every other student is in a similar situation, and he doesn't know whether or not he will pass the final exam.
When I say that a code base has severe issues, it means that I will advise my customer to conduct an audit to see what is going wrong, and to work with him on fixing the root problem first; changes to the code will wait, because making them right now would be a big mistake. At this stage, there is no comparison with any other team I've worked with (or any other team within the customer's company), and there are no thresholds.
So why do I ask the people who wrote the code base how good it is? Knowing how they perceive it helps me understand the situation, and asking several people helps even more. I was once assisting a team as a .NET consultant: former PHP programmers, they were building a product and running into .NET-related issues. I had a chance to talk with the manager, the lead programmer and another programmer. The manager was confident that the code base was very good. The lead programmer and the other programmer weren't so sure, and admitted that it would have been better if they had had more time. It turned out that the code base had severe issues. Moreover, the team couldn't even get a version control server from the company, and files were frequently overwritten. It was obvious that the lack of .NET knowledge wasn't the primary problem, so I explained the situation to the management and advised them to focus on the actual issues, thereby reducing the time needed to develop the software.
During my career, every team with an unworkable code base would tell me that their code could be better, but never that it was actually unworkable. One explanation is the Dunning–Kruger effect, but I have an alternative one: those teams are usually working for months under pressure on a project that is months late, and not only can they not tell management that the situation is unbearable, they can't even admit to themselves that they are doing it wrong. Admitting it would force them to stop all development and refocus on the important problems; knowing their management, they know this smells like “You're fired.”
Teams whose code base raises concerns usually overestimate the quality of their code. Some teams, on the other hand, perceive its quality quite realistically. They may lack pragmatism, motivation or leverage¹, but they know something is wrong.
Teams with good code bases either perceive their quality realistically or underestimate it. In both cases, some teams constantly try to improve the code base, while others don't bother. I believe this second group consists of skilled but unmotivated developers: they won't introduce issues, because they are skilled, but they won't make any effort to improve the code base, because they don't really care. I also believe that projects run by very skilled and motivated developers may fall into this group if their priorities are clear-cut. If the priority is to release a piece of software in a very short time, they won't aim for code excellence, because it would prevent them from releasing the product on schedule.
As for excellent code bases, I haven't seen one yet. I imagine they are written by very skilled developers who, because of the Dunning–Kruger effect, believe anybody can do the same. Such teams are highly motivated and highly autonomous, with code quality being a priority set by the management.
Some companies love asking candidates to assess their level in a specific language or technology. This is a big mistake, for several reasons.
Many candidates will give themselves a higher mark than they deserve, making the answer irrelevant.
The Dunning–Kruger effect will inflate the marks of unskilled persons even more. I know several C# coders who don't understand lazy evaluation, don't know what IDisposable is, and don't know the difference between named and optional arguments, yet who rate themselves 7/10 in C# and .NET.
If the company uses those marks to filter candidates (i.e. applies a threshold), it will filter out some of the most skilled candidates because of the Dunning–Kruger effect.
If the company uses those marks to compare candidates, the comparison will be misleading. While the coders I mentioned above rate themselves 7/10, some programmers who can actually build C# applications quite well will rate themselves 6/10.
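For readers who don't work in C#, the three gaps mentioned above have close Python analogues, which makes them easy to sketch: lazy evaluation (LINQ-style deferred execution maps to generators), deterministic resource disposal (IDisposable with using maps to the context-manager protocol with with), and named and optional arguments (which Python has directly). The names below are purely illustrative:

```python
# 1. Lazy evaluation: like a deferred LINQ query in C#, a generator produces
#    values only when they are consumed; nothing runs at definition time.
def squares(numbers):
    for n in numbers:
        yield n * n

lazy = squares([1, 2, 3])   # no squaring has happened yet
print(list(lazy))           # [1, 4, 9] — evaluation happens here

# 2. Deterministic disposal: C#'s IDisposable + using corresponds to
#    Python's context-manager protocol (__enter__/__exit__) + with.
class Resource:
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        self.closed = True   # cleanup runs even if an exception was raised
        return False

with Resource() as r:
    pass
print(r.closed)             # True

# 3. Named and optional arguments: parameters with defaults are optional,
#    and any parameter can be passed by name, skipping the ones in between.
def connect(host, port=8080, timeout=30):
    return (host, port, timeout)

print(connect("example.com", timeout=5))    # ('example.com', 8080, 5)
```

A candidate who has genuinely used the C# equivalents of these three features daily could explain each in a sentence; a self-assigned 7/10 from someone who can't is exactly the signal the Dunning–Kruger effect predicts.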
Another concern I have with this approach is that it conflicts directly with the purpose of an interview or a pre-interview. Assessing a candidate's skills is the job of the person hiring him. What the candidate thinks about his own level is completely irrelevant here.
It could be relevant in some rare cases. An unskilled coder who rates himself 7/10 in a language may cause trouble during code reviews or may not integrate well into a team, especially a team of programmers more skilled than he is. But this trait is not hard to uncover during the interview: most such people become very irritated when the interviewer points out gaps in their knowledge of the language or technology. Another edge case is a person with severe self-esteem issues who may well rate himself 3/10 despite having a good level. Here again, an interview will inevitably reveal the issue.
Thus, I see no case where asking candidates to assess their own skills is worthwhile.