Model Madness 2019 - Introduction, Expected Difference Metric

in #model-madness · 6 years ago

Every year, millions of brackets are digitally constructed, written, and scribbled, only to see every one of them fall short of the ultimate goal: perfection. But the odds are not in the favor of those in such a pursuit. 1 in 9.2 quintillion. That makes filling out a perfect bracket virtually impossible. Seems like a challenge worthy of my time.



Before we move into the statistics-and-algorithms part of this series of inevitable failures, one must ask why anyone would waste their time either watching an excessive amount of college basketball, or reading through piles of statistics and composing different algorithms to squeeze as much information as possible out of the numbers recorded by statkeepers.

Well, I enjoy modeling and college basketball a lot. And since we'll never see a perfect bracket, this gives me a yearly source of subpar content that I can pollute this blockchain with. Perhaps I might inspire some aspiring machine learning prodigy to waste their energy on pointless fun endeavors like this rather than on building a fleet of evil self-driving cars that will enslave the human race.

There's something oddly appealing about certain failure, about seeing how far through the gauntlet you can get before your hopes and dreams are dashed. May the odds never be in your favor.


Today, we'll go over the first metric I have experimented with, the Expected Difference Metric (EDM) Rating. The EDM Rating is based on the idea that the margin of victory grows as the gap between two teams grows. So, the greater the difference in EDM Rating, the greater the expected margin of victory for the higher-rated team over the lower-rated team.

Now of course such a model isn't perfect, but given a good sample size, it can give us a solid idea of how two teams match up in a given game. The metric is also tuned over time, so it weights recent samples more heavily than older ones; in college basketball terms, recent games count for more than early-season games.

To calculate this metric, we simply take the scores of a particular game and compare the actual difference in scores against the expected difference. Afterwards, we nudge each team's EDM Rating by a small amount to reflect this discrepancy.

Defining this idea mathematically, a difference of 100 EDM points corresponds to an expected margin of 1 point, and this scales linearly. If one team's rating is 100 EDM points higher than the other's, the model "expects" that team to win by one point. If one team's rating is 1000 EDM points lower than the other's, the model "expects" that team to lose by 10 points.
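The rating-to-margin conversion above fits in a couple of lines. A minimal sketch in Python (the post contains no code, so the function name here is my own):

```python
def expected_margin(rating_a, rating_b):
    """Expected point margin for team A: 100 EDM points of rating
    difference corresponds to 1 point of margin, scaling linearly."""
    return (rating_a - rating_b) / 100.0

print(expected_margin(1600, 1200))  # 4.0  -> team A favored by 4
print(expected_margin(1200, 2200))  # -10.0 -> team A expected to lose by 10
```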

Let's say Team A has an EDM rating of 1600 and Team B has an EDM rating of 1200. The expected margin of victory for Team A is 4 points. If Team A wins by exactly 4, the model makes no adjustment, because the expected and actual differences are the same: the model has done its job and no correction is necessary. This is unlikely, as the college basketball universe isn't governed by universal laws, but in the rare case that the model is exactly correct, it doesn't adjust.

But let's say Team A wins by 14. The expected difference for Team A is smaller than the actual difference, so we need to increase the distance between Team A and Team B. We calculate the delta value (the small change) by multiplying this difference in differences by a scaling coefficient; in my model, the coefficient is 5. Since the difference of differences is 10, we multiply by 5 to get 50. We add 50 to Team A's rating and subtract 50 from Team B's.

Team A now has an EDM rating of 1650 and Team B an EDM rating of 1150. If these teams met again immediately, the model would expect Team A to have a 5-point advantage rather than a 4-point advantage.

Now suppose Team A instead lost to Team B by 11. The expected difference for Team A is larger than the actual difference (4 vs. -11), so we need to decrease Team A's EDM score and increase Team B's. The difference of differences here is 15; multiplying by 5 gives a delta value of 75. Team A loses 75 points and Team B gains 75.

Team A now has an EDM rating of 1525 and Team B an EDM rating of 1275. The model still favors Team A by 2.5 points in an immediate rematch, but that's because the model reacts conservatively to individual samples; the hope is that, over a larger sample size, an accurate EDM rating emerges.
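The whole update rule from the walkthrough above can be sketched as one small function. This is my own reading of the post's arithmetic, not the author's actual code, with the coefficient of 5 from the text baked in:

```python
K = 5  # scaling coefficient used in the post

def expected_margin(rating_a, rating_b):
    # 100 EDM points of rating difference ~ 1 expected point of margin
    return (rating_a - rating_b) / 100.0

def update_edm(rating_a, rating_b, actual_margin_a):
    """Return updated (rating_a, rating_b) after one game.

    actual_margin_a is team A's score minus team B's score."""
    error = actual_margin_a - expected_margin(rating_a, rating_b)
    delta = K * error  # "difference of differences" times the coefficient
    return rating_a + delta, rating_b - delta

# The three worked examples from the post:
print(update_edm(1600, 1200, 14))   # (1650.0, 1150.0)
print(update_edm(1600, 1200, -11))  # (1525.0, 1275.0)
print(update_edm(1600, 1200, 4))    # (1600.0, 1200.0) -- no adjustment
```

Note that the update is zero-sum: whatever Team A gains, Team B loses, so the league-wide average rating never drifts.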

Of course, this model is rather simplistic. It assumes margins scale uniformly across all matchups, which probably isn't the case in real life. Depending on offensive and defensive styles, some teams might struggle against opponents with lower EDM scores and excel against others with higher ones.


Of course, such a model would be incomplete without a demonstration. So, by throwing thousands of Division 1 games into our model, we can see which teams the model thinks are the best and should be favored to go deep in the tournament. Games against non-D1 teams are excluded to keep the data uniform and to avoid giving weaker teams too much credit. All teams start with an EDM rating of 1500, and since every adjustment is zero-sum, the average rating across the board stays at 1500.
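Feeding a season through the model could look like the sketch below. This is an illustration under my own assumptions (a chronological game log; the matchups and scores are invented, not real results), not the author's pipeline:

```python
from collections import defaultdict

K = 5
ratings = defaultdict(lambda: 1500.0)  # every D1 team starts at 1500

def process_game(team_a, team_b, score_a, score_b):
    # Processing games in chronological order is what biases the
    # ratings toward recent results: later games overwrite earlier drift.
    expected = (ratings[team_a] - ratings[team_b]) / 100.0
    delta = K * ((score_a - score_b) - expected)
    ratings[team_a] += delta
    ratings[team_b] -= delta

# Hypothetical game log: (team_a, team_b, score_a, score_b)
games = [
    ("Gonzaga", "Duke", 89, 87),
    ("Duke", "Kentucky", 118, 84),
]
for game in games:
    process_game(*game)

for team, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{team:10s} {rating:8.3f}")
```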

Listed below are the Top 25 teams by EDM Score, with their AP Ranking for reference. All ratings are current as of the morning of February 26.

| # | Team | Record | EDM Score | AP Ranking |
|---|------|--------|-----------|------------|
| 1 | Gonzaga | 27-2 | 3738.680 | 1 |
| 2 | Duke | 24-3 | 3128.390 | 3 |
| 3 | North Carolina | 22-5 | 2984.681 | 5 |
| 4 | Virginia | 24-2 | 2980.363 | 4 |
| 5 | Kentucky | 23-4 | 2879.808 | 2 |
| 6 | Texas Tech | 22-5 | 2820.387 | 11 |
| 7 | Michigan State | 23-5 | 2807.630 | 6 |
| 8 | Houston | 26-1 | 2769.499 | 8 |
| 9 | Nevada | 25-2 | 2740.954 | 12 |
| 10 | Buffalo | 23-3 | 2678.649 | 21 |
| 11 | Tennessee | 23-3 | 2661.949 | 7 |
| 12 | Virginia Tech | 21-6 | 2556.351 | 20 |
| 13 | Michigan | 24-4 | 2538.785 | 9 |
| 14 | Marquette | 23-4 | 2507.822 | 10 |
| 15 | Wofford | 21-4 | 2495.222 | 24 |
| 16 | Iowa State | 20-8 | 2489.945 | NR (28) |
| 17 | VCU | 21-6 | 2474.160 | NR (32) |
| 18 | Cincinnati | 23-4 | 2445.556 | 23 |
| 19 | Purdue | 20-7 | 2444.000 | 14 |
| 20 | Auburn | 17-9 | 2438.612 | NR (30) |
| 21 | Florida State | 22-6 | 2433.983 | 18 |
| 22 | UCF | 20-6 | 2404.996 | NR (-) |
| 23 | Louisville | 18-10 | 2386.060 | NR (26) |
| 24 | NC State | 20-8 | 2380.733 | NR (-) |
| 25 | Maryland | 21-7 | 2379.908 | 17 |

This model lines up pretty nicely with the AP poll despite never watching a single second of basketball and only looking at the final score of each game. One thing to note is that it tends to rate mid-major schools on winning streaks slightly higher than the AP poll does. The AP poll tends to weigh the season as a whole, while our model leans on recent data without any "big-school bias". Gonzaga probably exemplifies this trend best: they haven't lost since mid-December and have dominated their mediocre competition. This may be a weakness in picking tournament winners, but it may be useful in spotting potential upsets by underrated mid-major teams. It remains to be seen.

Next time we'll either visit another type of model or begin developing more complex variants of this one. My goal is to have five different algorithms plus an overall composite algorithm before the bracket deadline rolls around. Feel free to share your thoughts, and feel free to play around with modeling ideas of your own. This model barely scratches the surface of potential methods.


Pollute away 😄

While I care not about college basketball, using ML to beat the bookies is a passion of mine, and this was an enjoyable read.

Looking forward to your next post 🍻

Posted using Partiko Android

