You might think grading the draft a few days after it occurs is a fool's errand. How could anyone possibly know how well teams drafted so soon? Don't we have to wait a few years to see how the players perform? Why are we still talking about the draft anyway?

These are all fine questions, especially the last one. I can't believe I'm still talking about it myself. But these post-draft draft grades -- the kind where analysts assign academic grades to each team according to its haul -- raise one question above all: do they tell us anything at all?

Before I go any further, we should acknowledge there's some logic in grading drafts immediately rather than three or four years later when we have better information. When teams are picking players, they are not privy to visions of the future (if they are, they should return their crystal balls). They can use only the best information available, however imperfect it may be. So if we're truly assessing a team's draft day performance, intellectual honesty would mandate we consider only the information the decision-makers had.

But this is often not what draft grade articles do. They explicitly evaluate which teams got better, which, for the analysts, means predicting which players will succeed in the NFL. In theory, this should be easier after the draft, because analysts get to consider system fits and roster competition.

Back to the original question. I looked at the 2009, 2008 and 2007 draft grades from ESPN's Mel Kiper Jr., Pete Prisco and John Czarnecki, three of the few writers who issued draft grades for three consecutive seasons. I then compared these draft grades to the Career Approximate Value for each team's draft haul that year, according to Pro Football Reference. I chose to total the CarAV rather than average it because nobody involved -- teams, analysts and fans -- cares about drafting efficiency, just overall improvement.

In addition, I included two controls of sorts: the number of selections and the average pick position for each team's draft. The number of selections will test the idea that draft results are inherently random. If this turns out to be the strongest relationship, we can chalk it up to the theory that the best draft strategy is simply to pick the most players. By contrast, the average pick position will give us some clue as to whether early picks matter more than late ones.
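For the curious, the bookkeeping here is simple enough to sketch in a few lines. This is a minimal version of the aggregation described above, run on made-up pick records (the team abbreviations, pick numbers and CarAV values below are placeholders, not the real data from Pro Football Reference):

```python
from collections import defaultdict

# Hypothetical pick records: (team, overall pick number, player's CarAV).
# The real numbers come from Pro Football Reference's draft pages.
picks = [
    ("DAL", 51, 3), ("DAL", 69, 5), ("DAL", 75, 2),
    ("GB", 9, 61), ("GB", 41, 38),
]

def team_draft_metrics(picks):
    """Return {team: (total_carav, num_picks, avg_pick_position)}."""
    by_team = defaultdict(list)
    for team, pick_no, carav in picks:
        by_team[team].append((pick_no, carav))
    metrics = {}
    for team, rows in by_team.items():
        total_carav = sum(c for _, c in rows)            # overall haul, not efficiency
        num_picks = len(rows)                            # control No. 1
        avg_pick = sum(p for p, _ in rows) / len(rows)   # control No. 2
        metrics[team] = (total_carav, num_picks, avg_pick)
    return metrics
```

Totaling rather than averaging CarAV is the one deliberate choice in that function, for the reason given above: we're measuring how much talent a team added, not how efficiently it spent its picks.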

Theoretically, the analysts should be better than these two basic metrics, since they can weigh both of these considerations along with team needs and other factors. So how did this all work out?

First, the anecdotal: Were analysts able to identify the best and worst drafts over the three-year period from 2007-2009? Dallas' 2009 draft was the worst of any across all three years, with an abysmal CarAV of 17, which is particularly awful considering the Cowboys had 12 picks that year. At the time, all three analysts considered it a poor draft: Czarnecki gave the Cowboys a C, Prisco a D+ and Kiper a D. So they all pretty much got that one. Same goes for the 2008 Chargers draft, the worst of that year, which received grades of C-, C and C+, respectively. (All three demonstrated pretty serious grade inflation, as Ds were rare and Fs presumably reserved for teams that failed to submit their selections in time.) The 2007 Patriots draft is an interesting case: they had the worst draft of that year, yet Prisco awarded them an A and the other two gave Bs. But all three factored in that the Patriots exchanged picks for Wes Welker and Randy Moss, and last I checked those deals worked out a little bit. So score one for the analysts outsmarting fancy metrics.

As for the best drafts of each year, all three analysts spotted them fairly adeptly, although Czarnecki gave the 2009 Packers, the year's best class, a C. Still, it seems, at least anecdotally, that the analysts are better than the controls at identifying the studs and duds.

But what about the draft as a whole, including that all-important meaty middle? How do the analysts do there?   


Don't pay attention to that graph. I just wanted to prove to you how much work I did. Look at all the pretty lines! (Quick aside: the green line, Total CarAV -- the one that looks like a heart monitor -- shows you how completely and utterly nuts anyone is who thinks they understand the draft.)

Here's the actually important chart:


This is the correlation of everything in the first chart with CarAV. At first, I thought the analysts did well. All their grades had moderate correlations with CarAV. Czarnecki, in particular, stood out on this front. Given all the uncertainty inherent in predicting NFL success, demanding a strong correlation would have been too much, so I was ready to praise their insight, especially compared to average pick position, which didn't do so well. (The negative sign doesn't matter; it just means the relationship was inverted, which we would expect since the lower-numbered picks tend to yield better players.)
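To correlate letter grades with anything, you first have to put them on a number line. Here's a sketch of the two pieces involved: a GPA-style conversion (this particular mapping is my illustration, not necessarily the one used for the chart) and a plain Pearson correlation:

```python
def grade_to_points(grade):
    """Map a letter grade with optional +/- onto a GPA-style scale.
    The 0.3 modifier is an assumed convention, not the analysts' own."""
    base = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}[grade[0]]
    if grade.endswith("+"):
        base += 0.3
    elif grade.endswith("-"):
        base -= 0.3
    return base

def pearson(xs, ys):
    """Standard Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Feed each analyst's converted grades and the teams' total CarAV into `pearson` and you get the bars in the chart; do the same with number of picks and average pick position and you get the controls.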

But look at the number-of-picks correlation! It's the strongest of all, even slightly better than Czarnecki, the best of the analysts, and much better than Prisco or Kiper.

So what did we learn? Draft grades can factor in things raw numbers can't easily capture -- trades involving picks, for instance. But analysts also bring their psychological biases, which negate whatever benefit that offers. In the end, draft grades aren't any better than simply counting the number of picks each team made, the simplest metric of all. Another point for the "the draft is a random mess" theory.