There are probably ways to tinker with Elo and make the predictions more accurate, but I've been happy with how well calibrated it has been over the years. When Elo says 80% chance, it's been pretty accurate: in 2 out of 10 such games, the underdog wins, as it should. And making the model more complex to match prior results more closely is no guarantee that it would be more accurate going forward. Overfitting is just as bad.

Hoxwurth wrote: ↑Mon Mar 14, 2022 3:42 pm
The Elo calculation is likely good, but any model that uses prior games will struggle where so many teams like the Ivies missed so much time. One could quibble that the assumptions regarding the Elo start are wrong where Virginia (winner of the last two natties) was ranked behind Georgetown and Cornell to start the season. COVID missed games aside, changing the model could be effected by making different assumptions, such as weighting later (i.e., tournament) games more heavily.

Gobigred wrote: ↑Mon Mar 14, 2022 3:07 pm
Modify your Elo algorithm to reflect reality on the field. Results matter. Else no one but you will care about Elo.

laxreference wrote: ↑Mon Mar 14, 2022 10:48 am
Fair. Georgetown's resume is strong, though: 6th nationally in the SOR ratings, although that is behind Princeton, which sits 4th. Princeton ends up where they are because they are still working their way up the Elo ratings; at 8th, they are several spots behind Georgetown (2nd). When the model averages it all out, the Hoyas come out ahead. We'll see if that holds over the coming weeks. If they both keep winning, the Tigers will eventually eclipse the Hoyas because their SOS from here on out is so much better.
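The calibration check described above can be sketched in a few lines: group games by predicted win probability and compare the empirical win rate in each bucket to the prediction. This is a generic illustration with made-up data, not the site's actual evaluation code.

```python
# Minimal calibration check: bucket each game's predicted win probability
# and compare it against the favorite's actual win rate in that bucket.

def calibration_buckets(predictions, width=0.1):
    """predictions: list of (predicted_prob, favorite_won) pairs."""
    buckets = {}
    for prob, won in predictions:
        key = int(prob * 10) / 10  # e.g. 0.8 for probs in [0.8, 0.9)
        hits, total = buckets.get(key, (0, 0))
        buckets[key] = (hits + int(won), total + 1)
    # Well calibrated means the empirical rate is close to the bucket's label.
    return {k: hits / total for k, (hits, total) in buckets.items()}

# Example: ten games where Elo said ~80%; the favorite won 8 of them.
games = [(0.80, True)] * 8 + [(0.80, False)] * 2
print(calibration_buckets(games))  # → {0.8: 0.8}
```

With real data you would expect some noise in each bucket, but a persistent gap (say, 80% predictions winning only 65% of the time) would be the signal that the model needs tuning.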
I remember reading that the Elo model does a pretty good job of predicting future games, so tinkering may not make it more accurate.
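For reference, the prediction itself comes from the standard Elo win-expectancy formula. The 400-point scale factor below is the conventional choice; laxreference's exact parameters aren't public here, so treat this as the generic formula rather than the site's actual model.

```python
# Standard Elo win expectancy: probability that team A beats team B,
# given their ratings. The 400 scale factor is the conventional value.

def elo_win_prob(rating_a, rating_b):
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A 100-point rating edge corresponds to roughly a 64% win probability.
print(round(elo_win_prob(1600, 1500), 2))  # → 0.64
```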
Also, the problem with adding to any model is that it becomes more complex, and complexity makes it harder to interpret, which is a definite negative. If Elo were getting too many games wrong, I'd consider it, but for the time being it performs well enough that the cost of added complexity is too high.
The Ivies did present a challenge. But I would argue that the objective Elo model, which ranked the Ivies much higher than most other rankings did, actually ended up a lot closer to the mark. People assumed the Ivies would be down because of the layoff, but that was without evidence; my model, for lack of a better option, assumed they'd be just as good as they were in 2020.
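That carryover assumption can be sketched as follows: teams that missed a season simply keep their last known rating, and every played game applies the standard Elo update. The K-factor of 32, the 400-point scale, and the hypothetical ratings below are illustrative assumptions, not the site's actual parameters.

```python
# Sketch of the carryover assumption: a team with no 2021 games (e.g. an
# Ivy) enters 2022 at its 2020 rating; everyone is then updated game by
# game with the standard Elo rule. K = 32 is an assumed step size.

K = 32

def expected(r_a, r_b):
    """Standard Elo win expectancy for A against B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings, team_a, team_b, a_won):
    """Apply one game's result; rating points are zero-sum."""
    e_a = expected(ratings[team_a], ratings[team_b])
    delta = K * (int(a_won) - e_a)
    ratings[team_a] += delta
    ratings[team_b] -= delta

# Hypothetical ratings: Princeton enters 2022 at its 2020 value.
ratings = {"Princeton": 1650, "Georgetown": 1700}
update(ratings, "Princeton", "Georgetown", a_won=True)
```

An upset like this moves the winner up and the loser down by the same amount, which is why a team starting from a stale-but-high 2020 rating can climb quickly once it starts beating ranked opponents.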