Retooling the Simulator

Tourneygeek has now gone Gaussian.

I’ve had some misgivings about my initial efforts on the tournament simulator. The individual match model simply added a uniformly-distributed random number representing the skill of the player (which didn’t change over the course of each iteration of the tournament) to another uniformly-distributed random number (fresh each time) for each of the two players.
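In code, the old model amounted to something like this (a minimal sketch; the names are mine, not the simulator’s actual code):

```python
import random

def old_match_winner(skill_a, skill_b):
    """Old model: each player's match result is a fixed uniform skill
    plus a fresh uniform luck draw; the higher total wins."""
    return "A" if skill_a + random.random() > skill_b + random.random() else "B"

# Skills were drawn once and held fixed for a whole tournament iteration:
skill_a, skill_b = random.random(), random.random()
print(old_match_winner(skill_a, skill_b))
```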

But uniform distributions are rare in the real world. In particular, it seemed to me that the uniform distribution of skill factors didn’t make for realistic differences at the top of the distribution: the skill levels of the best players were closer together than they would be in most real tournaments.

So, at the suggestion of my friend Chuck Bower, I substituted Gaussian distributions for the uniform ones. And while I was at it, I put in a couple of parameters that ought to let me tune the model to reflect more closely what happens in particular events.

I’ve parameterized the balance between skill and luck. The old model was hard-wired to give equal weight to the two. And I still think that that’s a sensible balance for looking at tournament formats in the abstract.
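Here’s a sketch of how the new match model might look with the Gaussian draws and the skill/luck parameter; the names, and the assumption that the luck factor simply scales the per-match noise against a unit-variance skill draw, are mine:

```python
import random

def gaussian_match_winner(skill_a, skill_b, luck_factor=1.0):
    """New model: fixed Gaussian skills plus fresh Gaussian luck.
    luck_factor scales luck relative to skill: 1.0 reproduces the
    old equal weighting, 0.0 means the better player always wins."""
    result_a = skill_a + luck_factor * random.gauss(0, 1)
    result_b = skill_b + luck_factor * random.gauss(0, 1)
    return "A" if result_a > result_b else "B"

# Skills are still drawn once per tournament iteration:
skill_a, skill_b = random.gauss(0, 1), random.gauss(0, 1)
print(gaussian_match_winner(skill_a, skill_b, luck_factor=1.0))
```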

Another parameter is a sort of participant threshold for elite competitions. It makes sense, I hope, to assume that the participants are drawn from some sort of normal distribution with respect to overall skill. But for elite competitions, you’re really sampling the top of the distribution, because the less skillful players don’t get to participate at all. As a default, I’ll usually simply exclude the lower half of the distribution by kicking out any entry with a Z-score less than zero.
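One straightforward way to implement that threshold is rejection sampling: keep drawing until the candidate clears the cutoff. A sketch, with the default of zero discarding the bottom half of the general population:

```python
import random

def draw_entrant_skill(elite_threshold=0.0):
    """Draw an entrant's skill Z-score from a standard normal,
    rejecting any draw below the elite threshold."""
    while True:
        z = random.gauss(0, 1)
        if z >= elite_threshold:
            return z
```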

But now I’ll be able to tweak the model if I find that the distribution of results differs markedly from the distribution of observed results for some particular competition. So, for example, if I want to show the possible effects of a less extreme seeding regime on the NCAA basketball tournament, I’ll be able to begin by tuning my base model to produce, reasonably closely, results similar to those of the past several years of real NCAA tournaments.

The coefficient of fairness will be computed in a similar way, but the numbers themselves will not be comparable. The new fairness coefficient is:

1 / (mean(Z-score of best player – Z-score of actual winner) + 0.01)

As before, the 0.01 is there to keep the coefficient from going to infinity for a design in which the best player always wins, which would happen if the luck factor were zero. The highest possible fairness value is therefore 100 (that is, 1/0.01).
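Computed over a batch of simulated tournaments, the new coefficient might be calculated like this (again a sketch; the variable names are mine):

```python
def fairness(best_z_scores, winner_z_scores):
    """Fairness coefficient: reciprocal of the mean gap between the best
    entrant's Z-score and the actual winner's, plus 0.01 so that a design
    where the best player always wins scores 100 rather than infinity."""
    gaps = [best - winner for best, winner in zip(best_z_scores, winner_z_scores)]
    return 1.0 / (sum(gaps) / len(gaps) + 0.01)
```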

With basic assumptions (luck factor = 1, elite threshold = 0), the most skillful player will win somewhat more often than was the case with the old model. This is because in a normal distribution, there’s a greater range of differences at the top of the distribution than there is in the middle of the pack. I expect that this will bring the results closer to those of real tournaments, but time will tell.
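That intuition is easy to check: draw many fields of entrants from each distribution (scaled to the same standard deviation) and compare the average gap between the best and second-best skills.

```python
import random

def mean_top_gap(draw, field_size=64, trials=10_000):
    """Average gap between the best and second-best of field_size draws."""
    total = 0.0
    for _ in range(trials):
        field = sorted(draw() for _ in range(field_size))
        total += field[-1] - field[-2]
    return total / trials

# Both distributions have standard deviation 1; the gap at the top is
# several times wider for the normal than for the uniform.
print(mean_top_gap(lambda: random.gauss(0, 1)))            # roughly 0.4
print(mean_top_gap(lambda: random.uniform(0, 12 ** 0.5)))  # roughly 0.05
```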

With these changes, my tournament simulator has a good deal more computational complexity than the old one. So it will run slower, and I will not be able to knock out as many 10,000,000-trial runs in a day as I could in the past. I may decide that I don’t really need that many, but I’ll see. In any case, I intend to go back and redo all of the experiments I’ve reported in earlier posts, but don’t hold your breath. And in the meantime, please bear in mind that, while the general conclusions I’ve drawn are unlikely to change much, the old statistics that support them are not directly comparable to the new ones.