Build Your Own Baloney Detector

A tool-kit for avoiding being fooled

Monday, March 21, 2011

Out of Expertise, Thanks to Ann Coulter

To begin with, I’d like to state that I’m no fan of Ann Coulter. In fact, I’ll be even more blunt: I know of nothing about her that makes me like her in any way, unless you count the even-more-horrible things that she hasn’t said. So feel free to take this post with a grain of salt. (Or, if you think watching me attack her sources will irritate you, please just skip this post. I won’t be offended; that’s reasonable.)

Anyway. Submitted for your consideration.

There’s a lot wrong with this, actually. (She weirdly cites her sources as the New York Times and the Times of London, but then claims that the media won’t cover the story, for example.) My gripe is going to be short and simple: she’s evidently basing her comments on studies by physicists. Here’s the out-of-expertise flaw: physicists know a lot about radiation as such. It’s what a lot of us do. A lot of us even know a good bit about radiation safety and radiation illnesses. But we’re not (at least the overwhelming majority of us) in the business of researching the health effects of radiation on humans. We might, I suppose, be involved in such a study by sharing our expertise on the physics of the radiation. But if you want to study the epidemiology of radiation sickness, you want a physician (or other medical researcher) or a biologist. Health and medicine are biological fields; physicists generally have little real expertise there. I’ll grant you that the average physicist most likely knows more biology than the average layman; also, some of us are biophysicists and may understand this sort of thing well enough to approach genuine expertise. But in general? Don’t trust a physicist regarding medicine.

(I’d like to add that I know of several examples of scientists famously going out of their way to make pronouncements well outside their fields on topics of public interest. Most of them are physicists. I’m not sure if that’s a selection bias at play or if it’s really the case that we’re more prone to over-estimating our expertise.)

posted by John Weiss at 22:59  

Monday, March 14, 2011

Apples and Oranges

It’s easy to lie with statistics. This is a well-worn truth (or at least something generally believed to be true). One popular way to do it is to compare two different things. Ideally, you arrange to compare two things that look similar, even if they really aren’t.

Take, for example, the recent debates over public-sector unions in Wisconsin. Without judging the merits of unions (such judgments are not our purpose on this blog), some of the statistics being thrown around were highly suspect. For example, some people were quoting data saying that teachers are paid more (on average) than the average person in the country. This is probably true, but it’s not a very useful comparison because the populations in question don’t match very well. Teachers are almost certainly more educated than the average employed person in the US, for a start. Expecting them to make no more than someone working a job that requires only a high-school diploma is unrealistic. (This isn’t to denigrate those jobs by any means; they’re very important and I’m thankful that people do them. But the reality is that they pay less than jobs that require more training, at least under our current system.) Other factors to consider are things like time in the current profession. (Are teachers more or less likely to have more experience than the average worker? I honestly don’t know, but it surely matters for comparisons of pay.) To make a valid comparison, you need to control for as many of these factors as you can. In general, you can’t control for every single factor, but to blithely compare two very different populations and try to draw conclusions from them is reckless at best. At worst, it’s intentionally misleading.
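To make that concrete, here is a minimal sketch (in Python, with entirely made-up salary and education numbers; none of this is real data) showing how a raw overall average and an education-matched comparison can point in opposite directions:

    # Made-up average salaries, broken down by education level.
    teachers = {"bachelors": 45000, "masters": 55000}
    private_sector = {"high_school": 35000, "bachelors": 48000, "masters": 60000}

    # Made-up share of each group holding each credential.
    teacher_mix = {"bachelors": 0.5, "masters": 0.5}
    private_mix = {"high_school": 0.6, "bachelors": 0.3, "masters": 0.1}

    def overall_average(salaries, mix):
        """Weighted average salary across the whole group."""
        return sum(salaries[level] * share for level, share in mix.items())

    # Naive comparison: the teachers look better paid overall...
    print(overall_average(teachers, teacher_mix))          # 50000.0
    print(overall_average(private_sector, private_mix))    # 41400.0

    # ...but matched by education level, these hypothetical teachers
    # earn less at every level.
    for level in teachers:
        print(level, teachers[level] - private_sector[level])   # both gaps are negative

The overall averages favor the teachers only because the education mix differs between the two groups; comparing like with like reverses the picture. That, in miniature, is the apples-and-oranges problem.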

Incidentally, attempts to control for these factors suggest that teachers make around 5% less than private-sector workers with comparable backgrounds. You’re welcome to question the source of the study; it doesn’t appear to be as neutral as I’d like when weighing a policy decision. And you’re also welcome to feel that 5% isn’t enough of a difference to worry about. But it suggests that teachers are not as overpaid as commentators have claimed, once you compare them to a more equivalent sample.

Another example I heard recently was on The Daily Show. Rand Paul was on, and he claimed that the richest 1% of Americans pay 30% of our total income taxes and are therefore doing more than their fair share. Seems pretty striking, doesn’t it? One percent of us paying 30%? Wait a minute, though: if they’re the richest 1%, they should surely be paying more than the average, since they are, by definition, richer than average. Even with a flat tax rate (rather than the progressive rates we nominally have), that’s to be expected. So the question is: how much do the richest 1% own? The answer, apparently, is around 35% of the wealth in this country. Oh, dear. That means they’re underpaying their taxes relative to you and me. (I’ll assume that you’re in the poorest 99%, like I am.) In fact, it means that our actual, effective tax structure is regressive: the rich appear to be paying lower rates than the poor. (There are a number of reasons to think this is true just based on various loopholes in the tax code, of course. For one thing, capital gains are taxed at a lower rate than ordinary income.)
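Here’s the same argument as bare arithmetic, using the rough figures quoted above and comparing income-tax share to wealth share exactly as I just did (which is itself only an approximation of fairness):

    # Rough figures from the discussion above; approximate, not precise data.
    tax_share = 0.30      # share of total income taxes paid by the richest 1%
    wealth_share = 0.35   # share of total wealth held by the richest 1%

    # If tax payments were simply proportional to wealth held, these two
    # shares would match. The ratios show how far off we are.
    print(tax_share / wealth_share)                  # ~0.86 for the top 1%
    print((1 - tax_share) / (1 - wealth_share))      # ~1.08 for the other 99%

    # A ratio below 1 for the top 1% and above 1 for everyone else is what
    # I mean by "regressive relative to wealth": the rich pay a smaller share
    # of the taxes than their share of what's owned.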

This is not meant to judge the tax rates or who should pay how much. That debate ultimately requires judgments that go beyond simple facts into philosophy and ethics. But to get to that point, we need data that tells us what’s really going on. Tossing out misleading facts may help you win the argument, but it doesn’t really help craft a truly informed policy. Part of telling the truth means comparing apples to apples. Only then can people judge your case for themselves, and that should be the goal.

posted by John Weiss at 23:00  

Tuesday, November 23, 2010

Context-Free Stats in the Wild

Here’s an example of context-free statistics that I came across a few days after my last post.

Without commenting on the underlying statistics or political conclusions (really, even if I wanted to, there’s not enough information for me to fairly do so, I think), consider what we’re told here:

For example, Marissa Mayer, known as “the face of Google,” gave $30,400 to the Democratic Congressional Campaign Committee in 2009.

OK… so what? Do we know who else she donated to? In the case of political donations, it’s certainly not uncommon for the wealthy to donate to more than one party, so knowing only that she donated to this campaign doesn’t tell us much even about this one individual. And that last word is also important: it’s just one individual. What about others?

In fact, of the top 10 contributions made by Google in 2009, only one — by CEO Eric Schmidt — was to the Republican National Committee.

On the surface, this seems damning. Maybe it is. But I can’t tell! To whom did the other nine contributions go? All Democrats? The DNC? Or was this the only contribution to the RNC while the others were distributed among various candidates (possibly from both parties)? It’s entirely possible that the implied meaning of this statistic really does represent the underlying data set, but I can’t tell from this stat. (It definitely has an implied meaning, though.)

It gets weird from here.

Steve Ballmer donated $5,000 to the Democratic Congressional Campaign Committee as well, but other Microsoft donations show an even split between Democrats and Republicans.

So… why single out that one contribution? (It could have been the largest or just the one from the most prominent Microsoft exec, but there’s no explanation.)

And then,

However, according to data collected by Consumer Watchdog, a consumer and advocacy group, Google has made more financial contributions to Republicans lately than to Democrats. The company has contributed 55 percent to Republicans and 45 percent to Democrats.

Wait, what? OK, just for the record, the article is contradicting its major claim here by suggesting the opposite of what the headline proclaims. But I can’t even be sure of that, to be honest: has Google made more contributions in terms of number or in dollar amounts? And are we talking about Google itself or Google employees and execs? (There’s a difference, although not necessarily from the perspective of how the money influences policy makers.)

From here, the article mostly devolves into a lot of interpretation from pundits. (Few of the claims are supported and there are a few more context-free stats, like how much Steve Jobs has given to Democrats, sans info about whether/how much he has given to Republicans.)

The sad thing is that if you follow the link to Adam Bonica’s site, he seems to have a good analysis going on. (I’m not an expert in political science, so I can’t judge terribly well. Some of the aspects of the graphs are worrisome, but we’re in very different fields with very different statistical norms.) But you can’t really tell from this story because the statistics aren’t given context.

posted by John Weiss at 19:15  

Friday, August 20, 2010

Useless Statistic, Caught in the Wild

Schweyen and city traffic engineer Gary Shannon said a traffic signal is no panacea. A city study of pedestrian accidents between 2002 and 2007 recorded only one accident at Broadway and Third Street, in 2005, Shannon said. Several signalized intersections had many more pedestrian accidents in the study period — including six at South Broadway and Fourth Street and five at South Broadway and Second Street, Shannon said.

— From a Rochester Post-Bulletin newspaper article

Presumably, this paragraph is saying that a signal may not be needed at Broadway and Third Street because some other intersections that already have signals have had more accidents than this one. But hold up: does that matter? Is this statistic even meaningful?

An intersection can have a high rate of accidents for a variety of reasons, ranging from usage, to bad lines of sight, to user error. A traffic light can help with some problems, but a very busy intersection will still have accidents.

In fact, this is a sort of false dichotomy. The choice being made isn’t where to spend money among these specific intersections; it’s simply whether to add a light at one of them. Whether other intersections that already have lights see more accidents or not isn’t really a useful datum.

What we really ought to be told is how much the accident rate improved (we hope it improved) at those other intersections after the signals were added, and how much adding one on Third Street is projected to help. Or, comparing apples to apples, which other problem intersections that same money might be spent on to reduce accidents (either by adding lights or by upgrading them).
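For what it’s worth, here’s a sketch (with completely made-up counts) of the kind of before-and-after comparison I mean:

    # Completely made-up counts for one signalized intersection.
    years_before, accidents_before = 5, 10   # before the signal went in
    years_after, accidents_after = 5, 4      # after the signal went in

    rate_before = accidents_before / years_before   # 2.0 per year
    rate_after = accidents_after / years_after      # 0.8 per year

    # This change in rate, not a raw accident count at some other
    # intersection, is the number that tells you whether a new signal
    # is likely to help.
    reduction = (rate_before - rate_after) / rate_before
    print(f"accident rate cut by {reduction:.0%}")   # 60%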

posted by John Weiss at 13:23  

Tuesday, June 8, 2010

Weasel Words, Caught in the Wild

But like it or not, lots and lots and lots of Americans need large vehicles for their jobs, their families, and their lives.

— From an article on gas mileage in cars

Here’s a great example of weasel words in action. “Lots and lots”? How many is that? Or, a better question: what percentage of Americans? Sure, a million vehicles (to pick a number) is a lot by almost anyone’s standards, but that’s less than 1% of Americans. So, since the author is arguing the case for people needing big cars, what is the percentage, and how does it compare to the number of large vehicles actually out there? I don’t know the answer and, in fact, I don’t even know how to adequately quantify “need”. I’ll bet the author doesn’t, either, but that’s not stopping him from using this would-be datum to make a point.
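Just to show how little work “lots and lots” does, take the million-vehicle number from above and turn it into a fraction of the country (the population figure is a round, roughly-2010 approximation):

    vehicles = 1_000_000        # the "lots and lots" number picked above
    population = 310_000_000    # roughly the 2010 US population

    print(f"{vehicles / population:.1%} of Americans")   # about 0.3%

A big absolute number can still be a tiny percentage, which is exactly why the percentage is the figure worth asking for.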

posted by John Weiss at 21:25  
