Finding Fairness in Analyzed Brackets

Today we’ll dig into the rather surprising result that the shifted version of our 16DE tournament was at least as good, and in several respects better, than the standard, unshifted version. 

The result was that the expedient of shifting the C and D drops in the sample 16-team double-elimination tournament (16DE) improved the tournament in pretty much every respect. It was conceived as a way to make the tournament more efficient by reducing the number of rounds from eight to seven, but it also had the effect of boosting both the fairness score and the participation score for the design. This suggests that the shifted-drop design should be considered even when the number of rounds is not a concern.

Today, I’ll try to show why this is the case. I’ll do this by drawing on the statistics in the analyzed brackets for 16DEs with and without the shift: 16-upper-standard-analyzed, 16-lower-standard-analyzed, 16-upper-shift-analyzed, and 16-lower-shift-analyzed.

To begin, however, I want to pick up another lead from yesterday’s post and explain why, when I first became aware of this sort of drop shifting, I thought that it was grossly unfair.

I was visiting the Hoosier Backgammon Club in Indianapolis for the first time. The HBC usually runs a small tournament at its weekly meetings. As the tournament has to be completed in a few hours, it saves a round by eliminating the grand final – the winner of the winners bracket got most of the small prize fund and the most club ranking points, and got to go home early. What would otherwise be a losers bracket was run as a consolation, with the winner getting the rest of the money, and somewhat fewer points. I don’t remember the bracket in great detail, but I think it must have been a 16DE, probably with a few byes.

As it happened, I made it to the final of the upper bracket, and then lost. So, I assumed, that meant that I’d drop into the final of the consolation. But the rounds had been shifted, so I dropped not into the final, but into a semi-final. That’s not fair! Based on years of experience playing in such tournaments, I knew that the last drop went to the final, not the semi-final. And look at that bracket – to drop me further than they should, they’d made the bracket lopsided! (I lost again in the next round.)

Mine was a classic fairness (A) complaint – the tournament was not being run the way I expected. It was not a particularly strong objection, as I certainly hadn’t played any differently based on my misunderstanding the format. The question is, does my fairness (A) complaint have any merit with respect to fairness (C)? (See Fairness for an explanation of what I mean by fairness (A) and fairness (C).)

The answer is, I hope to show you, absolutely not! The HBC tournament was, in fact, fairer than the one I thought I should be playing, in large part precisely because of what it did to me.

Now, it’s not that I didn’t suffer a disadvantage. As I saw it, my chances of winning the consolation went from about 50% to about 25% (they were actually lower, but I won’t get into that because it requires some calculations not on the analyzed brackets, which are for a full double-elimination, not a consolation). But the advantage I was losing was itself unfair, and the tournament as a whole was fairer because I didn’t get it.

On the analyzed bracket, the green numbers below each line show the average error rate for the players who have occupied that line. They are not affected by the difference between a full double-elimination and a consolation – they don’t care that the grand final will not be played. As teams progress from left to right on the bracket sheets, the average error rate goes down and the teams get better. The error rates are about 44 in the first round of the tourney, 32 in the second round, 22 and 15 in the next two rounds, and 10 for the upper bracket champion.

In the lower bracket, the teams also get better as you go from left to right, but the pattern is more complicated because teams are also dropping in from the upper bracket. The line average error rate increases when you move from the upper bracket to the lower bracket – teams drop because they lost, and so are, on average, not as good as the average team on the line they’re dropping from. Thus, the drops have error rates of 55, 42, 30, and 20 for the A, B, C, and D rounds, respectively.

Now, the fairest thing would be for the teams dropping into the lower bracket to meet teams that are of the same average skill. It’s not possible to match skill levels exactly, but some drops are better than others in this respect.

The last drop has an error rate of 20.03. In a standard bracket, its opponent will be much better at 14.41. In fact, the last drop is not quite as good, on average, as either of the players who compete in the previous round for the opportunity to play the last drop. If, instead, you send the last drop one round deeper in the lower bracket, you get a fairer match: 20.03 plays 24.88.

The same is true for the next-to-last round of drops, which are shown as the C drops on the bracket. They’re rated at 29.69. With standard drops, they play a 24.7 opponent, while with the shifted drops they find a 33.81 foe.
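The improvement is easy to check with a little arithmetic on the numbers above. Here is a minimal sketch (the error rates are taken from the analyzed brackets quoted in this post; the gap between a drop and its opponent is used as a rough proxy for how well-matched the game is):

```python
# Average error rates from the analyzed brackets for the C and D drops,
# with the opponents they meet under standard and shifted drop schemes.
drops = {
    "D": {"drop": 20.03, "standard_opp": 14.41, "shifted_opp": 24.88},
    "C": {"drop": 29.69, "standard_opp": 24.70, "shifted_opp": 33.81},
}

for name, d in drops.items():
    # Smaller gap = better-matched game.
    std_gap = abs(d["drop"] - d["standard_opp"])
    shift_gap = abs(d["drop"] - d["shifted_opp"])
    print(f"{name} drop: standard gap {std_gap:.2f}, shifted gap {shift_gap:.2f}")
```

The shifted gaps (4.85 for the D drop, 4.12 for the C drop) come out smaller than the standard ones (5.62 and 4.99), which is the quantitative content of the claim that the shifted drops produce fairer matches.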

The shifted drops are not perfect – in both cases, they overcorrect. But they’re better, and that’s why the overall fairness score for the tournament is better.

In tomorrow’s post, I’ll address the other part of my fairness (A) complaint: that the bracket must be unfair because it’s lopsided.




