The new tournament simulator is nearing usability, and I hope to have some significant results to report very soon.
In the meantime, I have progress to report on improving one of the chief fairness metrics. Fairness (C) is defined, qualitatively, as the degree to which a tourney design rewards superior performance. But the method heretofore used to measure this quality can be criticized as too narrowly focused on the overall winner of the tournament. In this post, I’ll propose an extension of the metric that considers not just the overall winner, but every place for which there is prize money.
The redefined fairness (C) metric will be useful not only for comparing the fairness of particular tournament designs, but also for determining the payouts themselves.
Here is the basic technique. Sort the skill factors of the players from best to worst. Then sort the payouts (expressed as a percentage of the total prize fund) from highest to lowest. Multiply the two columns pairwise and sum the products. Then compute the same sum for the actual result of the tourney, multiplying each player’s skill factor by the share of the prize fund that player actually won. Subtract the actual sum from the ideal sum, add 0.01, and take the reciprocal. That’s the new fairness (C) measure.
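Here’s a minimal sketch of that calculation in Python. This is not the simulator’s actual code; the function name and argument layout are my own, chosen only to mirror the recipe above.

```python
def fairness_c(skills, payout_schedule, actual_shares):
    """Per-iteration fairness (C).

    skills          -- skill factor for each player
    payout_schedule -- prize shares for the paying places, e.g. [0.60, 0.30, 0.10]
    actual_shares   -- share of the prize fund each player actually won,
                       aligned with `skills`
    """
    # Ideal sum: the best skill factors paired with the largest shares.
    ranked_skills = sorted(skills, reverse=True)
    ranked_shares = sorted(payout_schedule, reverse=True)
    ranked_shares += [0.0] * (len(ranked_skills) - len(ranked_shares))
    ideal = sum(s * p for s, p in zip(ranked_skills, ranked_shares))

    # Actual sum: each player's skill factor times the share that player won.
    actual = sum(s * a for s, a in zip(skills, actual_shares))

    # Difference, plus a small offset so a perfect result yields 100 rather
    # than infinity, then the reciprocal.
    return 1.0 / (ideal - actual + 0.01)
```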
Here’s an example, showing the results of the top six players in a 16DE tournament. The tournament divides its prize fund as follows: 60% to the winner, 30% to the runner-up, and 10% to the third-place finisher, i.e., the loser of the lower-bracket final.
If the results went perfectly to form, the best player would win the 60%, the next best the 30%, and the third best the 10%, as shown in the ideal payout column. Multiplying these payouts by the skill column and taking the sum gives 1.884. Now suppose, for a particular iteration of the tournament, the best player came second, the third-best player won, and the fifth-best player came third, as shown in the actual payout column. Summing the actual products yields 1.577. Taking the difference, adding 0.01 (so that the result is 100 rather than infinity when the actual result happens to match the ideal result), and taking the reciprocal yields 3.158, which is fairness (C) for that particular iteration. The average value over a large number of iterations approaches the new and improved fairness (C) measure for the tournament design itself.
| Skill | Ideal payout | Skill × ideal | Actual payout | Skill × actual |
|-------|--------------|---------------|---------------|----------------|
| 2.077 | 60%          | 1.246         | 30%           | 0.623          |
| 1.656 | 30%          | 0.497         | 0%            | 0              |
| 1.410 | 10%          | 0.141         | 60%           | 0.846          |
| 1.229 | 0%           | 0             | 0%            | 0              |
| 1.081 | 0%           | 0             | 10%           | 0.108          |
| 0.953 | 0%           | 0             | 0%            | 0              |
| Sums  |              | 1.884         |               | 1.577          |
| New fairness (C) | | | | 3.158 |
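As a quick check on the arithmetic, here is the table fed through the sketch above. With the skill factors as rounded in the table the result lands around 3.157; the small difference from 3.158 presumably reflects the unrounded skill values.

```python
skills = [2.077, 1.656, 1.410, 1.229, 1.081, 0.953]   # best to worst
payouts = [0.60, 0.30, 0.10]                           # 60/30/10 split
actual = [0.30, 0.00, 0.60, 0.00, 0.10, 0.00]          # shares actually won

print(fairness_c(skills, payouts, actual))             # about 3.157
```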
Note that the old fairness (C) measure is simply a special case of the new one – the case in which the payout schedule awards 100% of the prize fund to the overall winner.
The new measure will, like the old one, allow us to compare the fairness of different designs, but more generally: it considers the results not just at the top, but as deep as the prize money goes, and so no longer ignores anything that happens in a consolation bracket.
And the process can be inverted. Instead of judging the fairness of different tournament designs for a given payout structure, it can be used to judge the fairness of different payouts for a given tournament design!
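Here is a sketch of how that inversion might look, assuming the simulator can report, for each iteration, which player took each paying place, best place first. That interface is hypothetical; only the averaging idea is the point.

```python
def mean_fairness_c(skills, payout_schedule, finish_orders):
    """Average fairness (C) over many simulated iterations.

    finish_orders -- one list per iteration, giving the index of the player
                     who took each paying place, best place first
                     (hypothetical format for the simulator's output)
    """
    total = 0.0
    for order in finish_orders:
        # Convert the finishing order into each player's actual share.
        shares = [0.0] * len(skills)
        for place, player in enumerate(order):
            shares[player] = payout_schedule[place]
        total += fairness_c(skills, payout_schedule, shares)
    return total / len(finish_orders)

# Same design, same simulated runs, two candidate payout splits:
# mean_fairness_c(skills, [0.60, 0.30, 0.10], finish_orders)
# mean_fairness_c(skills, [0.50, 0.30, 0.20], finish_orders)
```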
Note that the new measure does not combine fairness (B) and fairness (C) into a single fairness (D) measure. That goal remains elusive.