So, it’s October 15th, just a few days until the first official Bowl Championship Series (BCS) rankings of the 2008 season are released, and, in case you didn’t notice, I added a link to the official BCS homepage to the other links on the side of my blog page. It’s always handy to be able to access the BCS rankings at a moment’s notice. And, if you really need an instant headache, that’s a great place to start.
I decided, once again last night, to embark on a mission to understand the BCS ranking formula. I sat at my favorite establishment with a laptop, a tall glass of my favorite beverage, and a few of my friends. Unfortunately, none of my friends are M.I.T. statisticians. So we sat there, confounded and confused, silently reading, and in our struggle to concentrate and comprehend, we wasted nearly all of the money we had spent (on booze) trying to relax.
I’m going to try to explain the BCS formula in the simplest possible terms. There are six computer ranking systems that are considered. These belong to: Jeff Sagarin, Anderson & Hester, Richard Billingsley, Kenneth Massey, Peter Wolfe, and the (Wesley) Colley Matrix. Each has its own methodology, which, in every case except Anderson & Hester, is explained fully on its respective website. For that reason alone, the Anderson & Hester numbers immediately suffer a credibility gap with me. Combine that with the fact that their website looks like the result of somebody’s tenth-grade computer class project, and I find myself wondering how in God’s name they could have been included in the formula that helps determine the right to play for, arguably, the most prestigious championship in college sports. One can only hope they had to present a summary of their methodology to the BCS czars at some point, and that their methods are at least somewhat more scientific than, let’s say, pulling a number out of your ass. I urge you to go to the website of each of the computer rankings included in the BCS (they can be found through the BCS link HERE) and read the explanation of its ranking process. If you understand all that, you might qualify to be the flight control officer for a space shuttle launch.
A team’s BCS ranking is based on the average of three things: the USA Today Coaches Poll, the Harris Interactive (media) Poll, and a combined average of four of the six computer rankings – the highest and lowest of the six scores are dropped. One common fallacy is that the BCS formula somehow rewards teams for running up the score. This is not true, except to the degree that human voters in the Harris Interactive Poll and the USA Today Coaches Poll may be influenced by an impressive margin of victory and, therefore, rank a team that wins convincingly higher than one that wins close games. Margin of victory is not a factor in the six computer rankings; in fact, that was a deliberate decision on the part of the BCS architects so that bad sportsmanship (i.e., runaway scores) would not be “encouraged or rewarded.”
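For the curious, the arithmetic described above is actually pretty simple once you strip away the mystique. Here's a minimal sketch in Python. One assumption to flag: in the real formula, each poll and each computer ranking is first converted to a share of the maximum possible points before averaging; this sketch assumes all inputs have already been normalized to a 0-to-1 scale, and the function name `bcs_average` is mine, not the BCS's.

```python
def bcs_average(harris, coaches, computer_scores):
    """Combine the two poll scores with the trimmed computer average.

    harris, coaches: poll scores, assumed already normalized to 0..1.
    computer_scores: the six computer ranking scores, same scale.
    """
    if len(computer_scores) != 6:
        raise ValueError("expected exactly six computer rankings")
    # Drop the highest and lowest of the six computer scores,
    # then average the middle four.
    trimmed = sorted(computer_scores)[1:-1]
    computer_avg = sum(trimmed) / len(trimmed)
    # The final number is a straight average of the three components.
    return (harris + coaches + computer_avg) / 3
```

So a team's computers can disagree wildly at the extremes without moving the needle much, since the outliers on both ends get thrown out before the averaging even starts.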
I'm actually one of those people who think the BCS is relatively functional, and a better alternative to many of the much-heralded college football playoff scenarios that have been proposed, but that is a subject for another time and another column, probably during the offseason.
In the meantime, after an entire evening of digesting this information, along with a delicious salad and the better part of a dozen adult beverages, I decided it might be best to just blindly await the wisdom of the BCS czars when it is finally published, and not really try to understand it. After all, I now have a headache.