Gobigred wrote: ↑Thu May 19, 2022 6:57 am
CU77 wrote: ↑Wed May 18, 2022 9:31 pm
ICGrad wrote: ↑Wed May 18, 2022 9:13 pm
Why would you assume that a committee composed of knowledgeable representatives from a range of conferences is always going to just take the Dukes and NDs of the world?
Because I'm a Cornell grad. We will never forget 1970.
And I will never forget 2007, when the RPI caused a team to be seeded above unbeaten Cornell because it had a lot of "good losses." In the regular season that team lost to eventual NCAA seeds 1, 2, 5 and 8, and beat only seed 7. Yet the RPI gave it the third-highest score, and the committee seeded it 3rd. How can a metric be taken seriously when it hands the 3rd seed to a team with a 1-4 record against the top 8? You have to look at whom a team beat and to whom it lost, and adjust accordingly.
Why don't you, CU77, explain to us why you think RPI is the best solution for selecting and seeding?
Or, again, Duke this year: #7 RPI with a single game against a top-10 RPI team (which they won, over #9 Virginia), a win over the #13 team, two wins over the #17 team...and 6 losses, including 3 to teams ranked #20 or lower in the RPI.
When I asked how that translates to a #7 RPI (how can you be #7 when your highest win is over #9 and you have 3 losses to teams ranked 20 or lower?), or when I suggested that their RPI was grossly inflated by games (not wins... games) against teams with good records, the answer I keep getting is that RPI can't be inflated, it's a formula, it's the maths, etc, etc...
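For anyone who hasn't seen the formula, a quick sketch shows exactly how this happens. The RPI in several NCAA sports weights your opponents' winning percentage at 50%, double the weight of your own record (I'm assuming the common 25/50/25 split here; the lacrosse committee's exact weights and bonus adjustments may differ, and this toy version skips details like excluding head-to-head games from OWP). The teams and records below are made up purely for illustration:

```python
# Simplified RPI sketch: assumed 25/50/25 weighting, no adjustments.
# Records are (wins, losses); all teams and numbers are hypothetical.

def win_pct(record):
    w, l = record
    return w / (w + l) if (w + l) else 0.0

def rpi(own_record, opp_records, opp_opp_records):
    # RPI = 0.25*WP + 0.50*OWP + 0.25*OOWP (assumed weights)
    wp = win_pct(own_record)
    owp = sum(win_pct(r) for r in opp_records) / len(opp_records)
    oowp = sum(win_pct(r) for r in opp_opp_records) / len(opp_opp_records)
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Hypothetical Team A: mediocre 7-6, but every opponent went 11-3.
team_a = rpi((7, 6), [(11, 3)] * 13, [(9, 5)] * 13)

# Hypothetical Team B: 12-1, but against opponents who went 5-9.
team_b = rpi((12, 1), [(5, 9)] * 13, [(6, 8)] * 13)

print(round(team_a, 3), round(team_b, 3))  # 0.688 0.516
```

The 7-6 team outscores the 12-1 team comfortably, because half the number is just "did your opponents win their other games" - which is how merely scheduling (not beating) good teams inflates the rating.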
Look, I get it. I'm a computer programmer for a Fortune 100 company. But sometimes algorithms kinda suck. Sometimes they're wrong, or have severe limitations that are exposed by edge cases. You don't just shrug and say: "It's the maths." (At least I don't; they'd fire my ass). You fix the algorithm. You come up with something better.
[Or you do what the committee did this year: You use the results as one datapoint, but you look at other data points as well.]
In Duke's case (or the case Gobigred cited above, or Hopkins in 2019, or Hopkins in 2016, etc...) it's not that they didn't pass the eye test; they didn't pass the smell test. The inputs produced outputs that were simply awful and made no sense. To just suggest we shrug and say "Oh well; that's RPI for ya" because it's easier or "objective" seems like flawed reasoning.