You've probably heard of "Occam's razor," the principle that, in the absence of certainty, the simplest explanation is the likeliest to be correct. In science, it's often used as a heuristic, guiding researchers toward more likely discoveries and faster solutions. The logic behind it is not that "simplest equals best," but that -- given an infinite number of possible explanations for any given phenomenon -- researchers could search their entire lives for the answers if they didn't work systematically. This is why Occam's razor proves useful: Start with the simplest explanation, see where it takes you, and then go from there.

NFL scouting appears to function in direct opposition to Occam's razor. It starts with the premise that the best way to choose NFL players is to invest an ever-expanding amount of resources into a nebulous, opaque and poorly defined process. Divergent opinions are solicited from a plethora of experts who surely know what good throwing mechanics look like, for example, but can only guess as to whether they actually matter. Yet there is no apparent reason why Occam's razor shouldn't apply to NFL draft decisions in the same way it applies to other research.

With that in mind, I decided to run a small research project of my own, to see if the scouting process could be improved by identifying the concepts most worthy of analysis. I started by trying to reverse engineer scouting reports, using machine-learning algorithms. The idea was to determine if certain individual phrases or words contained in scouting reports mattered more than others in predicting NFL success. If the results were compelling, that would suggest that scouting analysis can be simplified and possibly refocused.

But a fundamental roadblock emerged before I could get to that point: I couldn't find any evidence that scouting actually works.

* * *

For the data, I went back into the ESPN NFL draft archives. I needed to go back far enough that the players included would already have had plenty of time to establish themselves in the NFL -- but the farther back I went, the scarcer the scouting reports became. Balancing those two concerns, I settled on the 2009 draft class as the best available data set.

There are two components of the ESPN scouting reports we can measure: the numerical Scouting Score given to each player, and the full text of the report itself, which we'll get to later. The Scouting Scores for this class ranged from a high of 97 (Aaron Curry, Jason Smith) to a low of 20 (a bunch of people). To measure NFL success, we used Pro Football Reference's Weighted Career Approximate Value metric, or CarAV. While it's not a perfect measure of overall success, CarAV does allow us to compare players across positions. Its shortcomings are somewhat mitigated when the players being compared all entered the league at the same time, as is the case here.

To de-emphasize the numerical values produced by CarAV, I created three groups: Washouts, Contributors and Players:

  • Washouts have a CarAV of 0-2. These guys are busts, barely contributing to any NFL roster. You probably haven't heard of most of them.
  • Contributors have a CarAV between 3 and 11, meaning they played on NFL rosters and occasionally started.
  • Players have a CarAV of 12 or more, meaning they consistently started or added value at the NFL level.

Racking up a CarAV of 12 is an incredibly low bar for NFL success. To put this in perspective, here are some of the players at that level: Mohamed Massaquoi (13), Louis Murphy (13), Beanie Wells (14), Darius Butler (15) and Patrick Chung (16). For comparison, here are the top CarAVs for this draft class: Clay Matthews (50), LeSean McCoy (48), Louis Vasquez (46) and Matthew Stafford (44).
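
For readers who want to follow along, here's a minimal sketch in Python of how that bucketing might look. The players, scores and CarAV values below are invented placeholders, not actual ESPN or Pro Football Reference numbers; only the group cutoffs come from the definitions above.

```python
import pandas as pd

# Illustrative rows only -- these scores and values are made up, not the
# real numbers for any actual 2009 draftee.
players = pd.DataFrame({
    "player": ["Prospect A", "Prospect B", "Prospect C", "Prospect D"],
    "scouting_score": [97, 80, 55, 20],
    "car_av": [1, 14, 6, 0],
})

def car_av_group(car_av: float) -> str:
    """Bucket a player by Weighted Career AV using the article's cutoffs."""
    if car_av <= 2:
        return "Washout"       # 0-2: barely contributed to any roster
    if car_av <= 11:
        return "Contributor"   # 3-11: made rosters, occasionally started
    return "Player"            # 12+: consistently started or added value

players["group"] = players["car_av"].apply(car_av_group)
print(players[["player", "car_av", "group"]])
```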

The categories are purposely vague and generous because, at this point, we're just trying to determine whether scouts can consistently predict which players will last longer than the average NFL career of 3.5 years. So I simply scatter-plotted all the players from the 2009 draft class, with Scouting Score on one axis and CarAV on the other:

[Scatter plot: 2009 draft class -- Scouting Score vs. CarAV]

This shows that Scouting Score is, indeed, positively correlated with CarAV, so that's good! But it's also readily apparent how many "misses" there are. Statistically speaking, this is a textbook "moderate" correlation -- neither incredibly strong nor prohibitively weak. Using the correlation coefficient, we can estimate which CarAV group a player should have belonged to, based on his Scouting Score. Most of the red squares are highly touted prospects who didn't live up to expectations, but some of them are poorly graded prospects who emerged as better players at the pro level.

Out of 257 players drafted in 2009, 55 percent of them (142) fell into a different group than their Scouting Scores predicted -- meaning that despite the positive correlation, the Scouting Scores were wrong more often than they were right. (Somewhat impressively, two players were given Scouting Scores over 80 yet earned a CarAV of zero.)
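
To make that "miss rate" calculation concrete, here's one plausible way to compute it, continuing with the invented `players` table and `car_av_group` function from the earlier sketch. The real analysis covered all 257 drafted players and may have drawn the line differently; this is just the shape of the check.

```python
from scipy import stats

# Continuing with the hypothetical `players` frame from the earlier sketch.
x = players["scouting_score"].to_numpy(dtype=float)
y = players["car_av"].to_numpy(dtype=float)

# Pearson correlation between Scouting Score and Weighted Career AV.
r, p_value = stats.pearsonr(x, y)
print(f"correlation r = {r:.2f} (p = {p_value:.3f})")

# Simple least-squares line: the CarAV a given Scouting Score "should" produce.
slope, intercept, *_ = stats.linregress(x, y)
players["predicted_car_av"] = intercept + slope * x
players["predicted_group"] = players["predicted_car_av"].apply(car_av_group)

# Share of players whose actual group differs from the one their score implied.
miss_rate = (players["group"] != players["predicted_group"]).mean()
print(f"missed on {miss_rate:.0%} of players")
```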

In fact, college statistics alone serve as better predictors of NFL success than ESPN's Scouting Scores do. As David Lewin wrote in 2008, completion percentage and games started in college are the best available predictors of NFL quarterback performance. A similar relationship has been found for wide receivers and their college receiving touchdowns.

In short, the scouting experts actually fare worse than even the most rudimentary statistical analysis, using data that any fan can access. It seems that one's ability to play college football is actually the best predictor of his ability to play in the NFL.

One could argue that the numerical grades are haphazard and unscientific, and that's true. Anticipating this, I also performed an analysis of the full text of the scouting reports, to see if any words or phrases were particularly indicative of NFL success or failure. I thought I might find that players identified as having "quick hands" or "a good first step" might tend to be productive at the pro level, for example, while players with "stiff hips" or "confidence issues" might tend not to be.

But this simply wasn't true. No scouting terms correlated in any way, shape or form with NFL performance. In fact, the only terms that showed even the slightest predictive capacity were related to college performance: "for a total of," "appears in 13 games," "total tackles" and so on. It appears that the actual words in scouting reports are predictive only when they regurgitate college statistics.
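
For the curious, here's a rough sketch of what that kind of phrase-level check can look like. The report snippets and outcomes below are invented, and the actual analysis presumably used a richer model; the point is simply to show phrases being pulled out of report text and compared, one by one, against career value.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# Toy scouting-report snippets; the phrasing and outcomes are invented.
reports = [
    "quick hands and a good first step, appears in 13 games for a total of 40 tackles",
    "stiff hips and confidence issues, limited college production",
    "good first step, total tackles among the best in his conference",
    "raw technique, quick hands, rarely started in college",
]
car_av = np.array([14, 1, 20, 3], dtype=float)  # made-up outcomes

# Count one- to three-word phrases that appear in at least two reports.
vectorizer = CountVectorizer(ngram_range=(1, 3), binary=True, min_df=2)
term_matrix = vectorizer.fit_transform(reports).toarray()
terms = vectorizer.get_feature_names_out()

# Correlate each phrase's presence with career value and rank the phrases.
correlations = []
for j, term in enumerate(terms):
    presence = term_matrix[:, j]
    if presence.std() == 0:        # skip phrases that appear in every report
        continue
    r = np.corrcoef(presence, car_av)[0, 1]
    correlations.append((term, r))

for term, r in sorted(correlations, key=lambda t: -abs(t[1]))[:10]:
    print(f"{term!r:35s} r = {r:+.2f}")
```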

* * *

We often hear about a player's intangibles, as if this is something scouts can evaluate in a significant way. Scouts interview college players and (in some cases) evaluate the way a player looks into the camera. Does any of this actually amount to anything?

In fact, ESPN's reports include a separate grade for intangibles, and this set of reports showed no evidence whatsoever of any prowess at evaluating personality traits, work ethic or any other intangible quality. There was no correlation between the intangibles score and CarAV, even after adjusting for overall score. (This is in part because pretty much everyone was graded highly on intangibles.) The most charitable reading of this result is that the intangibles score is intended as a search for the rare red flag -- a reason to avoid a player rather than a reason to draft him. That would be a fine take on the subject, but it's not how ESPN often discusses intangibles, or how teams seem to approach the concept.
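
For the statistically inclined, "adjusting for overall score" can be done with a partial correlation: regress the overall Scouting Score out of both the intangibles grade and CarAV, then correlate what's left over. The sketch below uses made-up grades purely to illustrate the mechanics -- it is not the exact computation performed for this piece.

```python
import numpy as np

# Hypothetical grades: intangibles scores cluster high, as noted above.
overall = np.array([97, 85, 70, 55, 40, 25], dtype=float)     # Scouting Scores (made up)
intangibles = np.array([8, 9, 8, 9, 8, 7], dtype=float)       # intangibles grades (made up)
car_av = np.array([10, 30, 5, 14, 2, 0], dtype=float)         # career outcomes (made up)

def residuals(y, x):
    """Residuals of y after removing a least-squares fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (intercept + slope * x)

# Partial correlation: relate intangibles to CarAV after both have had
# the overall Scouting Score regressed out of them.
r_partial = np.corrcoef(residuals(intangibles, overall),
                        residuals(car_av, overall))[0, 1]
print(f"partial correlation (intangibles vs. CarAV | overall) = {r_partial:+.2f}")
```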

Another factor to consider is that these are media scouting reports, not team scouting reports. One assumes that ESPN put a lot of time, effort and money behind them -- but they were not prepared by current employees of any NFL team, so they don't tell us how NFL teams are actually evaluating these players. It's conceivable that team scouting reports are much more accurate than ESPN's, but given how many drafting errors are made each and every year, that doesn't seem all that likely.

* * *

If the combination of college stats and scouting provides less insight into future NFL success than just using college stats alone, then why do scouts exist?

Human beings have an innate desire for answers, but wanting badly to know something doesn't make it knowable. In the same breath, scouts will tell you how different the pro game is from college -- how the skills are often non-transferable -- and then commence predicting which players will transfer their skills to the NFL. All the while, they will assert that it's not their goal to predict NFL success perfectly, but merely to provide information -- a fatal conceit shrouded in false modesty.

Teams pay millions of dollars to be told things nobody can possibly know. Year after year, poor draft results show just how little they do know, yet they continue to invest in scouting. Perhaps they think that it's better to pretend to know something than to admit you know nothing at all. It is, if nothing else, great hustle.