Well, I was certainly surprised. The Dunning-Kruger Effect may not be real, but merely an artefact of measuring two things that look more related than they really are. I posted about it frequently a decade ago when I was writing my lengthy "May We Believe Our Thoughts?" series. The new findings do not dispute that there are arrogant, overconfident people, but they cast enormous doubt on the idea that the effect tracks actual knowledge in a linear way. From the article:
Take-home message:
- The Dunning-Kruger effect was originally described in 1999 as the observation that people who are terrible at a particular task think they are much better than they are, while people who are very good at it tend to underestimate their competence.
- The Dunning-Kruger effect was never about “dumb people not knowing they are dumb” or about “ignorant people being very arrogant and confident in their lack of knowledge.”
- Because the effect can be seen in random, computer-generated data, it may not be a real flaw in our thinking and thus may not really exist.
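That "random, computer-generated data" point is easy to check with a minimal sketch of my own (an illustration, not the article's actual analysis): draw actual scores and self-estimates completely independently, so there is no real relationship at all, then bin people by actual-score quartile the way the original Dunning-Kruger plots do.

```python
import random

random.seed(0)

# Actual test scores and self-estimates drawn independently:
# by construction, nobody here has any self-assessment skill at all.
n = 10_000
people = sorted(
    (random.uniform(0, 100), random.uniform(0, 100)) for _ in range(n)
)  # sorted by actual score

# Bin by actual-score quartile and compare mean actual vs. mean estimate.
results = []
for q in range(4):
    group = people[q * n // 4 : (q + 1) * n // 4]
    mean_actual = sum(a for a, _ in group) / len(group)
    mean_estimate = sum(e for _, e in group) / len(group)
    results.append((mean_actual, mean_estimate))
    print(f"Q{q + 1}: mean actual {mean_actual:5.1f}, mean estimate {mean_estimate:5.1f}")
```

The bottom quartile's mean estimate lands far above its mean actual score, and the top quartile's far below, purely because uncorrelated estimates average out near the middle of the scale; no cognitive bias is required to produce the classic plot.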
Ironically, "We can't tell if this highly-plausible claim exists in reality or not" is very close to the original theory. We can't tell if we can't tell, or maybe we can tell -- we can't tell.
Maybe it never meant much more than that people assume they're standard-issue until they see hard evidence otherwise. If you can carry a tune, you assume everyone can carry a tune. If math is hard or easy for you, you assume that's human nature until you meet someone for whom it's wildly easier or harder. My athleticism is mediocre, so I often see films of people doing feats of strength or agility that I would have assumed were impossible, as if everyone were about like me.
I'm going to go out on a limb and say that point number 3 in the take-home message is at least a bit of an overstatement. It seems to me that what the random data disprove is not the possible existence of a D-K phenomenon but the attempt to quantify it as a linear progression tracking perceived versus actual competence, especially the claim that at some point in the progression we completely stop overestimating our competence. I think most people have had the experience of thinking some new task or project could be accomplished fairly easily, only to run up against problems that someone more familiar with the task would have predicted and mitigated in advance. That doesn't mean that experts don't cause their own problems, but that they are more likely to run into novel ones.
Tommaso Dorigo tried to reproduce the exercise. Assume people's estimates are correlated with their actual abilities, but with some variation. As the Nuhfer paper points out, you'll always get an X-shape(*), but if people are good at estimating their abilities the X is very narrow. If they are poor, the X is wider.
(*)If your ability is low, you have a lot more guessing room _above_ your actual level than below, so random guesses will tend to be high--the reverse obtains for the high ability group.
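That bounded-scale asymmetry shows up in a quick simulation too (my own sketch under assumed parameters, not Dorigo's actual code): give everyone an estimate equal to their true ability plus Gaussian noise, clip it to the 0-100 scale, and compare the bottom and top quartiles for a careful estimator versus a poor one.

```python
import random

random.seed(1)

def mean_gap_by_quartile(noise_sd, n=10_000):
    """Simulate self-estimate = actual ability + noise, clipped to the
    0-100 scale; return mean (estimate - actual) for the bottom and top
    quartiles of actual ability."""
    actual = [random.uniform(0, 100) for _ in range(n)]
    # Clipping to the scale creates the asymmetry: low scorers have far
    # more room to guess above themselves, high scorers below.
    estimate = [min(100, max(0, a + random.gauss(0, noise_sd))) for a in actual]
    people = sorted(zip(actual, estimate))
    bottom, top = people[: n // 4], people[3 * n // 4 :]
    gap = lambda g: sum(e - a for a, e in g) / len(g)
    return gap(bottom), gap(top)

results = {}
for sd in (5, 40):  # good vs. poor self-estimators
    results[sd] = mean_gap_by_quartile(sd)
    low_gap, high_gap = results[sd]
    print(f"noise sd {sd:2d}: bottom quartile gap {low_gap:+5.1f}, top quartile gap {high_gap:+5.1f}")
```

With small noise the gaps are tiny (a narrow X); with large noise the bottom quartile overestimates heavily and the top quartile underestimates heavily (a wide X), exactly the footnote's point, and without anyone being systematically arrogant.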
FWIW, I still suspect there's a difference between not knowing what you don't know, and knowing what you don't know.
"If your ability is low, you have a lot more guessing room _above_ your actual level than below, so random guesses will tend to be high--the reverse obtains for the high ability group."
I'm guessing that this statement explains 100% of the effect.