I’ve been noticing this more of late, although I realize it’s been with us for quite a long time. You’ve seen it, too. A news report or an opinion piece will quote some statistic, whether it’s the percentage of Daily Show viewers who lean left or how many cubic kilometers of ice sheet we’ve lost this year. But what it won’t give you is any context in which to interpret that statistic.
Take the ice sheets: if I told you that Antarctica loses 100 cubic kilometers of ice annually, what would you make of that number? Large? Small? Cause for worry? Or just an interesting datum? Honestly, by itself it’s impossible to tell. What you need to know is, for example, how many cubic kilometers of ice Antarctica holds. Or how much that melt will raise sea levels. Or whatever other context will let you interpret the number appropriately for the story at hand. By itself, unless you’re an expert in the field or have a particularly good sense of how large ice sheets are (or, at least, how large a cubic kilometer really is), the author might as well not have given you the number at all.
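To see how little arithmetic it takes to supply that missing context, here is a back-of-the-envelope sketch. The 100 km³ figure is the hypothetical number from above; the ocean area, ice density ratio, and total Antarctic ice volume are rough reference values I’m supplying for illustration, not numbers from any report.

```python
# Back-of-the-envelope context for "100 km^3 of ice lost per year".
# All constants below are approximate reference values (my assumptions):
#   - global ocean surface area ~ 3.6e8 km^2
#   - ice is ~0.92 times as dense as liquid water
#   - total Antarctic ice sheet volume ~ 2.65e7 km^3 (order of magnitude)

ICE_LOST_KM3 = 100.0        # the number quoted in the hypothetical report
OCEAN_AREA_KM2 = 3.6e8      # approximate global ocean surface area
ICE_TO_WATER = 0.92         # approximate density ratio, ice to water
ANTARCTIC_ICE_KM3 = 2.65e7  # approximate total ice sheet volume

water_km3 = ICE_LOST_KM3 * ICE_TO_WATER            # melt as liquid water
rise_mm = water_km3 / OCEAN_AREA_KM2 * 1e6         # spread over the ocean, km -> mm
fraction = ICE_LOST_KM3 / ANTARCTIC_ICE_KM3        # share of the whole sheet

print(f"Implied sea level rise: ~{rise_mm:.2f} mm per year")
print(f"Fraction of the ice sheet lost: ~{fraction:.6%} per year")
```

With those rough inputs, the answer comes out to roughly a quarter of a millimeter of sea level per year and a few millionths of the ice sheet — which is exactly the kind of framing a reader needs to decide whether 100 km³ is alarming or trivial.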
Another example, taken from coverage of yesterday’s “Rally to Restore Sanity and/or Fear”: some of the media coverage gave the breakdown of Daily Show fans’ political leanings. It was something like 40% liberal, 38% independent, and 19% conservative. So why is this a problem? It’s more subtle than the last example, since we all know what 40% means, but ask yourself: what is the point of these stats? If the only point is to know what Stewart’s audience thinks politically, they’re fine as they are. But if the author is trying (perhaps surreptitiously) to suggest that Stewart and/or his audience is more liberal than normal, we need more information to interpret these statistics. Specifically: how representative is this of the demographic the audience is drawn from? Other studies have shown that the Daily Show audience is younger than average, so you can’t fairly compare their politics or other habits with the entire adult population. You need to tell us how this compares with the background population they more specifically belong to. (Similarly, you never see anyone compare the outcomes of a political survey like this with worldwide leanings, because it’s not really helpful to know whether an American sub-population is more or less conservative than China or South Africa.)
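The baseline problem above is easy to demonstrate with a few lines of code. The audience split is the one quoted in the coverage; both “background” splits below are hypothetical numbers I invented purely to show how the choice of comparison group changes the story.

```python
# How the same audience numbers look against two different baselines.
# The audience split is from the coverage; both baselines are made-up
# illustrative figures, NOT real survey data.

audience = {"liberal": 40, "independent": 38, "conservative": 19}

# Hypothetical baselines (percentages):
all_adults = {"liberal": 25, "independent": 35, "conservative": 35}
young_adults = {"liberal": 38, "independent": 40, "conservative": 20}

def skew(sample, baseline):
    """Percentage-point difference of the sample from a baseline, per category."""
    return {k: sample[k] - baseline[k] for k in sample}

print("vs. all adults:   ", skew(audience, all_adults))
print("vs. young adults: ", skew(audience, young_adults))
```

Against the (invented) all-adults baseline, the audience looks 15 points more liberal; against the (equally invented) young-adults baseline, it looks only 2 points more liberal. Same statistic, opposite impressions — which is why the author owes you the right comparison group.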
I suspect that, oftentimes, reporters fall into this trap unintentionally because they’re not necessarily well trained in the meaning of numerical data and how to interpret it. But I also suspect (yes, this is me ascribing motivations; take from it what you will) that some of the time it’s done to gloss over inconvenient contexts and use numbers for pure shock and awe.