Cognitive biases are systematic, recurring errors in judgment that follow predictable patterns. They manifest in all areas of human life, including competitive programming, even though it is supposedly practiced by some of the most rational individuals in the world. Let me present several biases which I feel are the most prominent in competitive programming.
How to become red in 3 months? (a.k.a. survivorship bias)
The number of red members at both TopCoder and Codeforces is approximately 1% of all people ever rated in the respective tracks: 590/63k for the TopCoder Algorithm track, 40/6k for the TopCoder Marathon track, and 654/54k for Codeforces. Even if we add members who have been red at some point in the past but are not anymore, the share of red members might grow to 2%, but not beyond that. And yet the search for instructions on joining this elite squad is one of the most frequently asked questions in competitive programming, with only the allotted time interval varying slightly.
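If you want to sanity-check the "approximately 1%" claim, here is a minimal sketch in Python, using only the rounded counts quoted above (the totals are approximate, not fresh statistics):

```python
# Share of red members per track, from the rounded counts quoted above:
# 590 of ~63k (TopCoder Algorithm), 40 of ~6k (TopCoder Marathon),
# 654 of ~54k (Codeforces).
tracks = {
    "TopCoder Algorithm": (590, 63_000),
    "TopCoder Marathon": (40, 6_000),
    "Codeforces": (654, 54_000),
}

for name, (red, rated) in tracks.items():
    print(f"{name}: {red / rated:.2%} of ever-rated members are red")
```

It prints roughly 0.94%, 0.67% and 1.21% respectively, which is where the "approximately 1%" comes from.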
Survivorship bias is the error of focusing on people who "survived" some process (in this case, became red) and overlooking those who didn't, because the latter are far less visible. Everybody in the world of competitive programming focuses on survivors: you see red members in "top rated" and "top contributors" lists, in "record books", in contest leaderboards and discussions, in onsite finals of tournaments, in interviews and feature articles... You just don't get to see the people stuck in Div 2 after competing forever, and there are a lot more of the latter than of the former.
Of course, this effect can also be attributed to illusory superiority (overestimating one's desirable qualities compared to other people), depending on whether people seeking to become red in 3 months believe that becoming red is generally easy and available to anyone, or that becoming red is hard but they personally can do it easily. This is a topic for a separate study.
It's easy to prove... (a.k.a. curse of knowledge)
Have you ever seen an editorial which frustratingly replaces an actual proof of a fact with a brief "it is easy to prove" / "it is evident that"? Or an editorial which says "It's a simple DP solution" without explaining the solution? Probably yes. Most likely this is another bias at work: the curse of knowledge makes it genuinely difficult for better-informed parties (the problem writer) to think about problems from the perspective of lesser-informed parties (people who can't solve the problem on their own and have to read the editorial). The omitted proof really is evident to the writer, and the DP with 3 parameters really is simple, so how could anyone not see it? Of course, we can't rule out the possibility that the editorial writer was just feeling lazy :-)
The same goes for problems which are solved by far fewer participants than the writer anticipated: when you arrange problems into a set, you already know their solutions, so you tend to underestimate their difficulty for people who have to solve them from scratch.
Ratingism (a.k.a. ad hominem fallacy)
The first observations of this bias date back to 2009, but it was widespread much earlier than that. The ad hominem fallacy means judging other people's arguments based on their personal traits, in this case their rating. Similar statements posted by a red member and a gray member will yield a lot more upvotes for the former.
The fall of TopCoder (a.k.a. confirmation bias)
"TopCoder is dying" is a popular idea nowadays; a lot of things, both related (like some issues with an SRM) and completely unrelated (like 5th anniversary of Codeforces), trigger an outburst of eulogies. At the same time strictly positive news (like return of for-fun Marathon matches) get hardly any reaction at all. That's confirmation bias: the tendency to filter and interpret information in a way that confirms one's existing beliefs. It also works the other way round: if you don't believe in the fall of TopCoder, you're not going to be swayed by one SRM gone wrong.
There are more biases involved in this effect, to name just a few:
- illusory truth effect: people tend to believe things which are repeated often, even if they are not true;
- backfire effect: given evidence against their beliefs, people may reject it and hold those beliefs even more strongly.
I leave it as an exercise to the reader to name biases I exhibit in this article :-)