Seasonal Prediction Review

Today’s post is from friend of the site Topher Doll, a fastidious recordkeeper who has spent the past few years tracking the success of myriad predictions and metrics used to measure team strength. Enjoy.


Each year we are bombarded with predictions on how the season will shape up, from models built by people far smarter than I am to predictions based on the gut feelings of a writer or researcher. Individually these are fun to read while agreeing or disagreeing with each pick. They are wonderful conversation starters, and often a big draw for many sites, but by the end of the year they are generally forgotten, outside of the predicted champion who fails to make the playoffs or the basement team that shatters expectations. But I think we can glean two things from these predictions:

  1. If we can collect enough predictions we can build a picture of how the NFL community as a whole, from writers to fans to researchers, feels the season will shake out. Once we have that collection we can get a feel for which teams are generally viewed as good or bad, as potential risers or as teams likely to regress. At the end of the season we can get a much more complete view of how teams fared against expectations, since we are using a much larger sample of predictions than any single source.
  2. We can learn which models, processes and writers are consistently good at predicting the quality of a team. If one prediction source is regularly accurate, within a reasonable margin, perhaps we should listen to it more often than a source that can't see how the season will shake out with any kind of accuracy. This is a general rule we follow in all facets of our lives, and it should apply here too.

Since I have only been doing this for a few years, the second item isn't quite ready yet; I'd prefer to have more data on each source before judging whether it can accurately predict how a season will unfold. But what we can do is dive into the first point. Let me lay out my method:

  1. I collect season predictions from a variety of qualified sources, ranging from veteran sports writers to analytics sites to former players and coaches. I won't list how each source fared here, since we are focusing on how teams did against the predictions rather than how each source did at predicting the season. In total I collected seasonal record predictions from 27 sources. Some of these used models that produced fractional wins (like saying a team will win 5.8 games), and since I don't really know what 5.8 games looks like I rounded those up. In certain situations I allowed half wins, where the source clearly stated they believed the team (along with another team) would tie.
  2. Once I have my data collected I extract a few key pieces of information:
  • Average record
  • Highest predicted record
  • Lowest predicted record
  • The standard deviation, meaning how much variation there was for a team; the higher it was, the less agreement there was on that team
  3. Once the season has ended we can compare how the season actually went against the predictions, giving us our Δ and whether the team fell within the range of predicted records. From this we can see if a team underachieved but still fell within the predicted range of records, or if a team overachieved so dramatically that it blew past any prediction the sources had for it. (A sketch of this bookkeeping follows the list.)
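
To make the bookkeeping concrete, here is a minimal Python sketch of steps 2 and 3 for a single team. The function name and toy inputs are mine, not part of the actual spreadsheet, and I've assumed a population standard deviation since the exact flavor isn't stated; the rounding and half-win conventions follow the rules above.

```python
import math
from statistics import mean, pstdev

def summarize(predicted_wins, actual_wins):
    """Collapse one team's predicted win totals into the table columns.

    predicted_wins: win totals from each source; fractional model outputs
    (e.g. 5.8 wins) are rounded up, per the rule above, while explicit
    half wins (predicted ties) are kept as-is.
    actual_wins: the team's real win total (a tie counts as half a win).
    """
    wins = [w if w % 1 == 0.5 else math.ceil(w) for w in predicted_wins]
    avg = mean(wins)
    return {
        "Avg": round(avg, 2),
        "StDev": round(pstdev(wins), 2),       # higher = less agreement
        "High": max(wins),
        "Low": min(wins),
        "Actual": actual_wins,
        "Delta": round(actual_wins - avg, 2),  # + overachieved, - underachieved
        "InRange": min(wins) <= actual_wins <= max(wins),
    }

# Toy example, not the real 27-source data set
print(summarize([2, 3, 4, 4.5, 5.8, 6, 7, 8], 6))
```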

Let me give an example:

Team Avg StDev High Low Actual Δ
Buf 4.94 1.84 8 2 6 1.06


We see Buffalo was predicted to win 4.94 games on average, with a standard deviation of 1.84 wins. The highest predicted record was 8 wins while the lowest was 2. We can also see that consensus formed around the two ends rather than near the center. At the end of the season the Bills had won 6 games, meaning they overachieved by 1.06 wins.

Once we have this for the entire league we can paint a nice picture. Let's start with a larger example: the AFC East.

Team Avg StDev High Low Actual Δ
Buf 4.94 1.84 8 2 6 1.06
Mia 6.00 1.87 10 2 7 1.00
NE 11.76 1.01 14 10 11 -0.76
NYJ 5.64 1.78 9 2 4 -1.64


In this chart we can see the range of predicted records (the blue bars), the expected average (white dot) and the actual record (black dot). Reading this we can see both the Bills and Dolphins beat their predicted records by about one win but both were within range of expectations. The Jets and Patriots underachieved but to varying degrees. We can also see the Patriots had a narrower range of expectations compared to a team like Miami.
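
If you can't see the chart, here is a rough matplotlib sketch of the same idea using the AFC East rows above. This is my approximation of the visual, not the original plotting code:

```python
import matplotlib.pyplot as plt

# AFC East rows from the table: (team, avg, high, low, actual)
rows = [
    ("Buf", 4.94, 8, 2, 6),
    ("Mia", 6.00, 10, 2, 7),
    ("NE", 11.76, 14, 10, 11),
    ("NYJ", 5.64, 9, 2, 4),
]

fig, ax = plt.subplots()
for y, (team, avg, high, low, actual) in enumerate(rows):
    ax.hlines(y, low, high, color="tab:blue", linewidth=8)      # predicted range
    ax.plot(avg, y, "o", color="white", mec="black", zorder=3)  # average prediction
    ax.plot(actual, y, "o", color="black", zorder=3)            # actual record

ax.set_yticks(range(len(rows)))
ax.set_yticklabels([r[0] for r in rows])
ax.invert_yaxis()  # list teams top to bottom, as in the table
ax.set_xlabel("Wins")
ax.set_title("AFC East: predicted range vs. actual record")
plt.show()
```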

With our bases covered, I think it is time to head to the deep end:

Team Avg StDev High Low Actual Δ
AFCE
Buf 4.94 1.84 8 2 6 1.06
Mia 6.00 1.87 10 2 7 1.00
NE 11.76 1.01 14 10 11 -0.76
NYJ 5.64 1.78 9 2 4 -1.64
AFCN
Bal 8.08 1.08 11 6 10 1.92
Cin 6.80 1.22 8 4 6 -0.80
Cle 5.58 1.22 8 4 7.5 1.92
Pit 10.46 0.91 12 9 9.5 -0.96
AFCS
Hou 8.78 1.63 13 6 11 2.22
Indy 6.18 1.70 9 2 10 3.82
Jax 10.32 1.55 13 8 5 -5.32
Ten 8.08 1.08 10 6 9 0.92
AFCW
Den 6.68 1.38 9 4 6 -0.68
KC 8.74 1.23 11 7 12 3.26
LAC 9.96 1.31 13 8 12 2.04
Oak 6.96 1.17 9 5 4 -2.96
NFCE
Dal 8.54 0.96 10 7 10 1.46
NYG 6.74 1.54 10 3 5 -1.74
Phi 10.86 0.88 12 9 9 -1.86
Was 6.66 1.50 9 2 7 0.34
NFCN
Chi 6.50 0.87 7 4 12 5.50
Det 7.72 0.98 10 6 6 -1.72
GB 10.44 1.42 12 7 6.5 -3.94
Min 10.88 1.30 13 9 8.5 -2.38
NFCS
Atl 10.08 1.32 12 8 7 -3.08
Car 8.60 1.04 11 7 7 -1.60
NO 10.18 1.09 13 9 13 2.82
TB 5.16 1.84 8 1 5 -0.16
NFCW
Ari 5.52 1.17 7 4 3 -2.52
LAR 10.62 1.42 13 8 13 2.38
SF 8.56 0.96 10 7 4 -4.56
Sea 7.14 1.40 9 4 10 2.86


Visually: [league-wide chart of predicted win ranges (bars), average predictions (white dots) and actual records (black dots)]

I won't go team by team but I do want to talk about a few outliers. The teams that overachieved the most against expectations were the Bears (5.5 wins over), Colts (3.8) and Chiefs (3.3), and it is no surprise that Nagy, Reich and Reid were generally considered among the best candidates for Coach of the Year (which Nagy won, with Reich and Reid placing 3rd and 4th respectively in the AP voting). The Bears rode Nagy's offense and Vic Fangio's defense to a 12-win season. The Colts were betting on Andrew Luck's return to health and an improved defense, and it paid off. People viewed the Chiefs as a weak bet despite Reid's track record, a view that he and his second-year phenom quarterback Patrick Mahomes decided to make look silly.

On the opposite side you had the Jaguars (5.3 wins below), 49ers (4.6) and Packers (3.9), who all saw high expectations flop. The Jaguars' defense was unable to cover for Bortles's struggles as he regressed to the mean, and the season fell apart. The 49ers lost their starting quarterback to injury but had other struggles that grabbed fewer headlines. The Packers may have been the most surprising: with a healthy Aaron Rodgers they were viewed as a lock to make the postseason, and while Rodgers stayed healthy this year, his own shortcomings were only magnified by a porous defense and inconsistent offensive weapons around him. Strangely or not, only one of these teams, the Packers, fired their coach.

In the middle you had the Tampa Bay Buccaneers (0.2 wins below expected) and Denver Broncos (0.7 wins below expected), two of the teams that came closest to expectations. Both played roughly as predicted, and both had to find new coaches once the season was over.

In terms of teams there was little consensus on, the Bills, Dolphins and Bucs all stand near the top. The Bucs make sense, as their quarterback situation was up in the air. The Bills had a highly drafted rookie quarterback on the roster who began the season on the bench. The Dolphins were coming off a mediocre year but had some big free agent signings, and both the quarterback and the coach were fighting for their jobs.

If we examine the opposite, the teams most agreed on, we see the Bears, Eagles and Steelers. The Eagles were a safe bet for most at the start of the season; they were the defending Super Bowl champs. The Steelers were likewise a reliable option, since they'd been a team you could count on to win 10 or so games almost every season. The consensus around the Bears is odd, though. A young quarterback and a new coach usually don't yield predictable results, and sure enough the Bears broke free of those expectations.
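
As an aside, once the table is loaded, ranking teams by consensus is just a sort on the StDev column. Here is a quick sketch using a hypothetical mapping with a few values copied from the table above:

```python
# Hypothetical {team: StDev} mapping; values taken from the table above
stdevs = {"Buf": 1.84, "Mia": 1.87, "TB": 1.84, "Chi": 0.87, "Phi": 0.88, "Pit": 0.91}

ranked = sorted(stdevs.items(), key=lambda kv: kv[1])
print("Most consensus:", ranked[:3])    # lowest standard deviation
print("Least consensus:", ranked[-3:])  # highest standard deviation
```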

This was my third year doing this and my first time sharing it. Over those three seasons of tracking, the correlation between predicted records and actual records is 0.601, with an R² of 0.426. Overall there is a strong connection between predicted records and how things actually shake out, and I think it is a useful benchmark going forward. There will always be outliers, the surprise Super Bowl contender or the preseason favorite who never lives up to the hype; that is what makes the NFL exciting. But over the past three years it seems we can get a general idea of a team's strength if we have a large enough sample of seasonal predictions.
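
For anyone who wants to run the same check on their own data, the headline number is a Pearson correlation between the predicted and actual win columns. Here is a minimal numpy sketch; the arrays are placeholders standing in for my three seasons of records, so the printed values won't match mine:

```python
import numpy as np

# Placeholder arrays -- substitute the predicted-average and actual
# win columns across all tracked seasons.
predicted = np.array([4.94, 6.00, 11.76, 5.64])
actual = np.array([6, 7, 11, 4], dtype=float)

r = np.corrcoef(predicted, actual)[0, 1]  # Pearson correlation coefficient
print(f"r = {r:.3f}, r^2 = {r**2:.3f}")
```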