The hot hand effect gets hot again

When statisticians make an appearance in the press or popular culture, they are often portrayed as contrarians who take pleasure in debunking the misconceptions in our common intuitions. The statistician, the media seems to believe, is like the cold splash of water after a long night out - a sobering antidote to irrationality that may be unwanted, but is for our own good. It's a rap that isn't all bad.

After all, statisticians have been the ones to tell us that it is wrong to be more frightened of riding in planes than in cars, that playing Mozart to babies won't make children smarter, and that switching in the Monty Hall problem is a winning strategy.

For nearly 30 years, the 'hot hand' - the idea that athletes play better when they have already been playing well (or 'in form', as they say in Britain) - was another one of those common beliefs that statistics had seemingly overturned. Since Gilovich, Vallone, and Tversky published their analysis of shot-making by the Philadelphia 76ers in 1985 and concluded that there was no evidence of a hot hand, a string of papers has corroborated this view. The accumulating case against the hot hand has been so convincing that the phenomenon is now commonly referred to as the 'hot hand fallacy'.

But earlier this month, a paper presented at the 2014 MIT Sloan Sports Analytics Conference, the annual gathering of the sharpest minds in sports analytics, turned the long-held view about the hot hand on its head. Harvard sports statisticians Bocskocksky, Ezekowitz, and Stein published an analysis of over 83,000 shots from the 2012-2013 NBA season that showed a 1.2 to 2.4 percentage-point increase in shot-making for players who have been performing beyond expectations - a small effect, but still significant evidence that the hot hand exists.

What caused this reversal of view? Simply put, data. In the past, studies of the hot hand have assumed that shot selection is independent of past performance. Prior analysts probably made this assumption more out of convenience than true belief: even if they suspected that players performing beyond their personal average would, with the confidence better performance brings, go for more difficult shots, it wouldn't have been easy to get the data needed to account for shot selection. Now, with big data collection systems like SportVU from STATS, Inc., these kinds of limitations have virtually disappeared.

Using the SportVU database of NBA shots, Bocskocksky and colleagues had precise details (shooter, position on court, time of shot, etc.) about each shot attempted in the 2012-2013 season at their fingertips. With this additional information about shot characteristics, they were able to demonstrate that players who are performing well do in fact take more difficult shots on average, suggesting that previous analyses failed to detect a hot hand effect because they assumed that performance has no influence on shot selection.
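The logic of this confounding is easy to see in a toy simulation. The sketch below is purely illustrative - the parameters (a 5-point hot-hand bonus, the difficulty distributions) are invented assumptions, not estimates from the Sloan paper. A shooter genuinely shoots better when hot, but also attempts harder shots when hot, so a naive comparison of make rates hides the effect; stratifying by shot difficulty, in the spirit of the paper's controls, recovers it.

```python
import random
from collections import defaultdict

# Toy model (illustrative assumptions only, not the paper's data):
# a genuine +5-point hot-hand bonus, but hot players take harder shots.
random.seed(42)

HOT_BONUS = 0.05  # assumed true boost to make probability when hot
N_SHOTS = 200_000

shots = []        # list of (hot, difficulty, made) tuples
made_prev = False
for _ in range(N_SHOTS):
    hot = made_prev  # "hot" = made the previous shot
    # Hot players attempt harder shots (difficulty shifted upward).
    d = random.uniform(0.3, 1.0) if hot else random.uniform(0.0, 0.7)
    p_make = 0.8 - 0.5 * d + (HOT_BONUS if hot else 0.0)
    made = random.random() < p_make
    shots.append((hot, d, made))
    made_prev = made

def rate(rows):
    return sum(made for _, _, made in rows) / len(rows)

hot_shots = [s for s in shots if s[0]]
cold_shots = [s for s in shots if not s[0]]

# Naive comparison ignores difficulty: the hot hand seems to vanish.
naive_diff = rate(hot_shots) - rate(cold_shots)

# Control for difficulty: compare hot vs. cold within 0.1-wide bins.
bins = defaultdict(lambda: {True: [], False: []})
for hot, d, made in shots:
    bins[int(d * 10)][hot].append((hot, d, made))

diffs = [rate(b[True]) - rate(b[False])
         for b in bins.values() if b[True] and b[False]]
stratified_diff = sum(diffs) / len(diffs)

print(f"naive difference:      {naive_diff:+.3f}")       # negative
print(f"stratified difference: {stratified_diff:+.3f}")  # positive
```

In this simulation, the naive difference is negative (hot players appear to shoot *worse*), while the difficulty-controlled difference is positive and close to the true bonus - exactly the pattern that would let a real effect hide from two decades of analyses.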

In a recent episode of the Slate sports podcast Hang Up and Listen, the panel of sports writers wondered whether the Bocskocksky paper casts doubt on the value of statistical thinking: players and fans have intuitively believed in the hot hand for years, but statisticians are only now catching on. While I believe the Bocskocksky paper is more a demonstration of the importance of quantitative analysis than an indictment of it, the Hang Up and Listen discussion does raise an important point about an inherent dilemma of statistical analysis: every analysis is made under a set of simplifying assumptions.

But it would be unfair to blame the statistical approach for assumptions oversimplified by insufficient data, just as it would be unfair to blame doctors for treating patients before a cure has been found. The real value of the statistical framework isn't that it always gets it right, but that, by being transparent and explicit about its assumptions, it is (unlike intuition) at least open to revision.