Why College Football BCS Rankings Are So Mysterious

Alabama's Mark Ingram (22) runs for a 14-yard touchdown as Auburn's Josh Bynes (17) and Sen'Derrick Marks (94) defend during their NCAA college football game at Bryant-Denny Stadium in Tuscaloosa, Ala., Saturday, Nov. 29, 2008. Alabama, which is just one win away from a likely BCS berth, won 36-0. AP Photo/Dave Martin

As the new President-elect, Barack Obama faces one of this country's most vexing problems.

Obama has promised the American public that he will bring change to a stagnant system controlled by a few wealthy men with millions of dollars at stake. During a Nov. 16 "60 Minutes" interview, Obama elaborated on his plans: "Eight teams. That would be three rounds to determine a national champion. I don't know any serious fan of college football who has disagreed with me on this. So, I'm going to throw my weight around a little bit. I think it's the right thing to do."

That's right, fixing the college football post-season is on the national agenda. Prior to 1998, the collective wisdom of football coaches and sportswriters decided the fate of college teams by ranking them in two weekly polls, with the final lists deciding the season's champion. This led to problems when the media poll did not agree with the coaches' poll and dual champions had to be named. The Bowl Championship Series (BCS) was created to finally provide a national championship game so that at least the No. 1 and No. 2 ranked teams could play each other at the end of the season.

Of course, working backwards, how can we be sure that the two teams selected are indeed the No. 1 and No. 2 teams? Should we fall back on the polls, or should we use the other four BCS bowl games to provide an eight-team playoff, as our next president suggests?

Sea of rankings

Since the playoff system seems to be an uphill battle, let's focus on the current BCS polling solution and why it has so many doubters. The weekly BCS rankings consist of three components: the Harris Interactive poll (114 writers); the USA Today coaches poll (60 coaches); and the infamous "computer" rankings (6 independent systems averaged together). Each component counts for one third of the total, with the average point value of all three determining the rankings from 1 to 25.
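The equal weighting described above can be sketched in a few lines of code. This is a simplified illustration, not the official BCS arithmetic: it assumes each component has already been normalized to a share of its maximum possible points before the three are averaged.

```python
# A minimal sketch of the BCS composite: three components, each
# worth one third of the total. The poll sizes (114 Harris voters,
# 60 coaches, 6 computer systems) come from the article; the
# normalization to a 0-1 share is an assumption for illustration.

def bcs_score(harris_share, coaches_share, computer_share):
    """Average the three equally weighted components.

    harris_share:   normalized points from the 114-voter Harris poll
    coaches_share:  normalized points from the 60-voter coaches poll
    computer_share: averaged output of the six computer rankings
    Each argument is assumed to already lie in [0, 1].
    """
    return (harris_share + coaches_share + computer_share) / 3.0

# Hypothetical team: strong with the human voters, weaker with
# the computers.
score = bcs_score(0.95, 0.93, 0.88)
print(round(score, 4))  # 0.92
```

Teams are then sorted by this composite score to produce the weekly 1-to-25 list, so a team that humans love but computers doubt (or vice versa) gets pulled toward the middle.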

The human polls are self-explanatory but come with an opportunity for bias among writers and coaches, as well as varying methods of ranking. This uncertainty and frequent lack of logic helped support the use of automated ranking models. Just feed in the data from previous games and have the rankings derived according to the embedded algorithm. Human emotion and bias are eliminated, but the focus is now on the correctness of the model.

Unfortunately, of the six models used by the BCS, only one, by astrophysicist Wesley Colley, provides all of the mathematical details, while the other five claim proprietary rights and keep their methods shrouded. In a Nov. 19 interview with the Birmingham News, BCS administrator Bill Hancock admitted, "We don't have the formulas and that's by design. The commissioners are not in the computer business and don't want to be. But on the other hand, they want to know that the computer rankings they hire are the best they can be. Because we're hiring the service, we don't have any control over the math."

Even the coaches are in the dark. "I don't know how the computer thing works," USC coach Pete Carroll said earlier this month. Typically in science, a hypothesis is proposed and then checked against observations to find out if it's valid. However, in college football or any sport there are no definitive observations, as each team does not play every other team. So, the best we can do is compare a model's results with other human polls or other computer-based rankings. Since there is no final "right" answer, any system's output is going to be open for disagreement.

SOS!

Wins and losses seem to be the simplest statistic to use to compare teams. Within conferences, teams typically play every other team, so a winning percentage (wins divided by games played) provides a reasonable ranking. However, comparing teams across conferences becomes the challenge, as we can't assume that each conference has equally strong teams. So, a "strength of schedule" (SOS) variable is added to each model. The algebra fun begins with deciding how deep to take this SOS factor. If Team A beats Team B, we need to know how good Team B is by analyzing its previous opponents. But how good are Team B's previous opponents? This backward chain needs to stop somewhere.

Thankfully, when trying to rank only the top 25 teams, the iterations can stop when there is only a negligible change in ratings. A team that plays weaker teams in its non-conference schedule not only runs the risk of an upset, but also lowers its SOS. The NCAA has also prohibited the use of margin of victory as a factor to prevent unsportsmanlike run-ups in the score. It's not a perfect system, but that's OK with the BCS' Hancock. "We know that there's no one computer ranking that can adequately tell you who's going to win it on Saturday," he said. "We just need something to add a little science and that's what we have."
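The backward chain and the "stop when the change is negligible" rule can be shown in a toy rating loop. To be clear, this is not one of the six BCS models (five of which are secret anyway); it is a hypothetical scheme where each team's rating blends its winning percentage with the average rating of its opponents, repeated until the numbers settle. Note that, like the BCS inputs, it uses only wins and losses, never margin of victory.

```python
# Toy iterative rating: rating = 1/2 winning pct + 1/2 average
# opponent rating (the equal 50/50 weighting is an assumption).
# Iteration stops when no rating moves more than `tol`.

def iterate_ratings(games, tol=1e-6, max_iter=1000):
    """games: list of (winner, loser) pairs."""
    teams = {t for game in games for t in game}
    opponents = {t: [] for t in teams}
    wins = {t: 0 for t in teams}
    for winner, loser in games:
        opponents[winner].append(loser)
        opponents[loser].append(winner)
        wins[winner] += 1

    rating = {t: 0.5 for t in teams}          # everyone starts average
    for _ in range(max_iter):
        new = {}
        for t in teams:
            win_pct = wins[t] / len(opponents[t])
            sos = sum(rating[o] for o in opponents[t]) / len(opponents[t])
            new[t] = 0.5 * win_pct + 0.5 * sos
        converged = max(abs(new[t] - rating[t]) for t in teams) < tol
        rating = new
        if converged:
            break
    return rating

# A beats B, B beats C: the chain should rank A over B over C.
r = iterate_ratings([("A", "B"), ("B", "C")])
ranked = sorted(r, key=r.get, reverse=True)
print(ranked)  # ['A', 'B', 'C']
```

Even in this three-team example the SOS term does real work: Team B's rating is propped up by having played the strong Team A, which is exactly the kind of credit a raw winning percentage can't give.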

Dan Peterson blogs about sports science at his site Sports Are 80 Percent Mental and at Scientific Blogging.