My brain got increasingly fed up with this as I went along, and I cut more corners. I think I may have even lost some schools down in the 50s while switching lists. I guess that will show up when the bracket comes out.
I won’t be updating it on the basis of the last few games. This will be the list, subject to a few decisions, all of which revolve around purity of method versus what I think will happen. By purity of method, BYU, Belmont, and Wisconsin are all ranked higher than I think they should be – insanely higher in the case of Belmont. OTOH, I think Florida should go higher than where it ended up by my system.
I will probably have better predictions if I move teams that are clearly out of place. OTOH, the original experiment was not to know anything and just slam together a mathematical model for fun. On the other, other hand, it stopped being fun and I won’t be improving on the model for next year. If I do this again, it will take a different form. Blowout wins and losses versus virtual ties seems to provide good predictive value, at least as far as the conference tournaments have shown (rough estimate from reading up this week). But it is tedious enough to work out by hand, and results in odd enough records (such as an adjusted 43-12 W-L) that it then gets difficult to do anything else with them, such as factor in strength of schedule. Just too many teams. It would be fun to do lots of complicated things with a dozen teams. With 80, not so much. It’s not worth the candle.
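The "blowout versus virtual tie" adjustment described above can be sketched in a few lines. This is only my illustration of the general idea, not the author's actual method: the 3-point "virtual tie" threshold and the half-win/half-loss credit rule are assumptions I've made for the example.

```python
def adjusted_record(margins, tie_margin=3):
    """Compute an adjusted win-loss record from a list of point margins
    (positive = win, negative = loss).

    Decisive games count fully; games within tie_margin points are
    treated as virtual ties and split as half a win, half a loss.
    The tie_margin value is an assumption for illustration.
    """
    wins = losses = 0.0
    for margin in margins:
        if abs(margin) <= tie_margin:
            # Virtual tie: split the credit
            wins += 0.5
            losses += 0.5
        elif margin > 0:
            wins += 1
        else:
            losses += 1
    return wins, losses

# Three comfortable wins, one near-tie, one blowout loss
print(adjusted_record([15, 12, 11, 2, -20]))  # -> (3.5, 1.5)
```

Fractional records like the 43-12 example in the text are exactly what makes this tedious to combine with anything else by hand, such as strength of schedule.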
I did discover that there are many mathematical models for Bracketology, some with more sophistication than you’d ever attempt by hand. I have no idea which is best. Maybe that will be next year’s project instead – rate the methods. I looked at a composite site, and every method had Ohio State and Kansas #1 & 2, usually in that order. After that it gets stranger.