This post summarizes the lessons of the last couple of posts, which can then be safely ignored.
In my original “pie” post, Dividing the Pie, I briefly discussed the possibility of using the fairness (C) measure to find optimal ways of dividing a prize fund for a particular tourney. But I then said it wouldn’t serve that purpose because the payout scheme would always devolve into winner-take-all.
That was incorrect, but incorrect in a really trivial and unuseful way. What I found, after many, many rounds of simulation, was that winner-take-all does emerge in some very low-luck scenarios, but not in the more typical case. What happens in the more typical case, it turns out, is that the payout scheme devolves into an equally unhelpful share-the-wealth division.
Nothing to see here. Move along. I had a bad idea, and then spent a couple of days discovering that it was, indeed, a bad idea – just bad in a novel way.
Sorry to have wasted your time. Please come back. I promise to write something better soon.
So why don’t I just delete the posts?
Well, I was once a scientist after a fashion, and one of my specialties was the sociology of information. I liked to look at and think about the way that scientists interact with each other and with their literatures.
Imagine that my recent “pie” posts had been submitted to the International Archives of Tourneygeekery (the “IATg”). The work wouldn’t have been published. It wouldn’t even have been sent out for peer review. The IATg, like any self-respecting scientific journal, wants to publish exciting articles with important results. And the discovery that bad ideas are, indeed, bad ideas doesn’t get anyone excited.
And that’s a problem. It leads to what’s sometimes called publication bias, a bias that causes all sorts of errors to become part of “the literature”.
When scientists have bad ideas, they usually manage to see why the ideas are bad. But sometimes they don’t, and then the bad idea looks like an important good idea, and it gets published. No matter that everyone who has looked at the idea so far has privately dismissed it as hogwash: because those negative verdicts were never published, there is no record of them. By seeking to publish only important work, the journals are complicit in a process that causes layers of sludgy bad ideas to clog up the literature.
Now the IATg does not, alas, exist. As far as I can tell, there’s no real scientific journal that publishes the kind of work you read here on tourneygeek.
But there is that one guy. Me. I absorbed, in a past life, some of the norms of science. I try to write tourneygeek with the rigor I would bring to work submitted to the IATg, if only it existed. And, as proprietor of tourneygeek, I also get to act as the gatekeeper for what gets published. And I decree that tourneygeek, in the interest of eliminating publication bias, will publish negative results!
Hear, hear! Besides, I’m pretty sure I’m reading IATg right here and now 😉 .
The way you were wrong may seem “trivial” but I wouldn’t call it “unuseful”. The reason I think it’s useful is because the Fairness(C) criterion failed FOR THE EXACT OPPOSITE REASON YOU BELIEVED IT WOULD. (I apologize for the all-caps; I’m not sure what sort of formatting code works in these comments.)
You believed, intuitively, that a payout criterion drawn entirely from Fairness(C) would favor a “Winner-Take-All” scenario, because Fairness(C) is skill-based, favoring the most skilled. (My initial hypothesis was that a payout scheme based on Fairness(C) would award payouts in proportion to each individual player’s skill, subject to a qualification minimum based on the ratio of a tournament’s entry fee (for an individual entrant) to its (total) prize pool. E.g., if Player A had a 50% higher skill rating than Player B, the payout ratio between A and B would be 3:2.) An “even-steven-among-the-‘cashed'” distribution was COUNTER-intuitive to the purpose of Fairness(C), yet that was the result of applying a Fairness(C) criterion to the payout matrix.
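The proportional scheme hypothesized above reduces to a simple normalization. This is only a sketch: the function name is hypothetical, the qualification-minimum rule is omitted, and the 3:2 figure simply falls out of the 1.5 : 1.0 skill ratio.

```python
def skill_proportional_payouts(skills, prize_pool):
    """Pay each player a share of the pool proportional to skill.
    (Illustrative only; omits the entry-fee qualification minimum.)"""
    total_skill = sum(skills)
    return [prize_pool * s / total_skill for s in skills]

# Player A is 50% more skilled than Player B (1.5 vs 1.0):
shares = skill_proportional_payouts([1.5, 1.0], 100.0)  # → [60.0, 40.0], a 3:2 ratio
```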
The question left unanswered is: WHY did Fairness(C) produce a counter-intuitive result? In my opinion, this is NOT trivial, though the effort required to (consciously) find the “Why” may be inordinate compared with the insights gained from doing so. But understanding this “Why” would lead to a better model.
Personally, since a person’s skill variance seems quite large, especially in an opposed setting, where the key criterion is the DIFFERENCE IN the results as opposed to the results themselves, I would rather believe that the ideal payout is the province of the highly subjective Fairness(A). As an example, I am not keen on the 50/30/20 split because 2nd place gains less over 3rd place than 3rd place gains over 4th (out).
On a side note, I am against contests of objective skill (e.g. Bowling, Home Run Derbies, etc.) being decided in single-elim style, preferring “X Past the Post” or “Highest-minus-Y” models.
As for the second half of the post, Publication Bias is not exclusive to the scientific community: newspaper corrections are commonly hidden in a side corner of an inner page. Additionally, there is an element of self-preservation that weighs against publishing an article saying that a prior article was “wrong” in today’s politically charged, high-information-flow social climate, though that wasn’t the case a century ago.
Since fairness (C) is measured by the difference between an ideal distribution and the actual distribution, one way to ensure that there’s no difference is to pay all places the same. That is, to pay each of the N players 1/N of the prize fund. This hadn’t occurred to me, but it does go a long way toward explaining why an even division among however many places you decide to pay looks like a good result to the iterative method.
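This explanation can be made concrete with a toy simulation. Everything below is an assumption for illustration, not the post’s actual simulator: fairness (C) is stood in for by the total absolute gap between each player’s simulated mean winnings and the “ideal” winnings they would earn if finishing order were decided by skill alone, and luck is modeled as Gaussian noise added to skill. Under that model, equal payouts score a perfect zero, because every outcome pays every player the same amount.

```python
import random

def mean_winnings(skills, payouts, trials=500, seed=0):
    """Crude model (an assumption for illustration): each trial,
    rank players by skill plus Gaussian luck, then pay them by
    finishing place.  Returns each player's mean winnings."""
    rng = random.Random(seed)
    n = len(skills)
    totals = [0.0] * n
    for _ in range(trials):
        order = sorted(range(n), key=lambda i: skills[i] + rng.gauss(0, 1), reverse=True)
        for place, player in enumerate(order):
            totals[player] += payouts[place]
    return [t / trials for t in totals]

def fairness_c_proxy(skills, payouts, trials=500):
    """Assumed fairness (C) stand-in: how far mean winnings stray
    from the payout each player would earn ranked by skill alone."""
    actual = mean_winnings(skills, payouts, trials)
    by_skill = sorted(range(len(skills)), key=lambda i: skills[i], reverse=True)
    ideal = [0.0] * len(skills)
    for place, player in enumerate(by_skill):
        ideal[player] = payouts[place]
    return sum(abs(a - i) for a, i in zip(actual, ideal))

skills = [2.0, 1.0, 0.5, 0.0]
equal = [0.25, 0.25, 0.25, 0.25]   # share-the-wealth
wta   = [1.0, 0.0, 0.0, 0.0]       # winner-take-all
```

With equal payouts the gap is exactly zero no matter how the luck falls; winner-take-all scores strictly worse whenever luck ever upsets the skill order, which is why an optimizer driven only by this measure drifts toward the flat division.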
I agree that the fairness of a payout schedule is, at least for the most part, a matter of subjective fairness (A) considerations. I do think that simulation can shed some light on the matter, however, where you have to allocate money between players in two brackets that never get resolved. If you don’t drop the losing finalist in the upper bracket into the consolation, it’s helpful to know that the winner of a consolation is statistically likely to be a better player than a losing finalist in the main bracket, and so should be paid at least as much.
It might be useful to gather information about payout schedules in actual use to see if they disclose a general sense of what’s fair. I’d be interested to know whether your objection to a 50/30/20 split is a common one.
I think that when a knock-out structure is imposed on things like bowling or home-run derby, it’s usually for the purpose of creating spectator interest. It’s all very well to observe that that kind of system is not particularly fair, but that just goes to show that fairness isn’t always the chief criterion.