Build Your Own Baloney Detector

A tool-kit for avoiding being fooled

Friday, June 7, 2013

Confirms My Biases

I’ve noticed a lot of bogus or satirical stories getting picked up by the media or by social media lately. It might just be my own perception, of course, but things seem to be on the uptick. Especially the satirical stories. Perhaps it’s because there are more and more places online to find satirical news. Many people know not to believe The Onion, but other sites are harder to recognize.

Here’s a rule I try to follow: if a story seems to confirm my political/ethical/musical/culinary beliefs, be more skeptical. Especially if it seems like it’s a little too on the nose. It seems unlikely that all of my political opposites are horrible people who do really crass or cruel things, so any story suggesting that this is the case probably needs more caution. Then I trace back to the source and try to figure out if it’s a legit media source. Often, the site is either clearly a satire site or clearly a highly biased site. Either way, not good.

(Why should you check before sharing these kinds of stories? Apart from wanting to be right for its own sake, you don’t want to be the person who spreads crazy untruths. If you really want to deliver a blow to your ideological foes, you want your best shot to really count. So make sure it’s accurate.)

posted by John Weiss at 18:00  

Thursday, February 7, 2013

Numbers Not Provided (And They Really Should Be)

I was listening to NPR on my commute earlier this week when they ran a story about FMLA leave. As good journalists, they got different views. One view (against the act) was that “it gets abused,” followed by a statistic: the biggest FMLA leave day of the year is the day after the Super Bowl. Now, that statistic is useful and interesting (although I’m curious how much more common FMLA leave is on that day), but what’s really lacking is any sense of how often this leave is abused. The mere existence of abuse isn’t interesting; it would be very newsworthy indeed if you could find any such law that was never abused. What’s missing is how big a problem the abuse is. Surely most of us would agree that a single case of abuse per year (almost certainly too low, of course) is essentially irrelevant to policy around the law. On the other hand, if 80% of all leaves taken were found to be fraudulent, most would likely agree that that’s too many and that the law needs to be changed. The real number is almost certainly well between those extremes. But without it, it’s difficult to know what to make of this case.

So then we ask: why were numbers not provided? It’s probably not the case that the speaker is hiding them (although in other situations, that might seem more likely than here). Does he lack the statistics we need? Then why isn’t he trying to find them, perhaps with research? If it’s important enough to be speaking out for a policy change, it seems like it’s important enough to get the numbers.
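To make the point concrete, here’s a minimal sketch (with entirely invented numbers, since the story provided none) showing that the same headline anecdote is compatible with abuse being negligible or rampant:

```python
# Hypothetical figures: the story gave none, which is the whole problem.
total_leaves_per_year = 14_000_000  # invented count of FMLA leaves nationally

for abuse_rate in (0.0001, 0.01, 0.10, 0.80):
    abused = total_leaves_per_year * abuse_rate
    print(f"abuse rate {abuse_rate:>7.2%}: ~{abused:,.0f} fraudulent leaves/year")

# A spike the day after the Super Bowl is consistent with any of these rates.
# Without the overall rate, the anecdote can't tell us anything about policy.
```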

posted by John Weiss at 12:39  

Friday, December 14, 2012

Are These the Right Statistics?

Not all statistics are created equal. And not all statistics are right for analyzing a given problem. I’m reminded of that today, in the wake of the latest mass shooting. While I have no intention of weighing in on the gun control debate (seriously, I don’t know the right statistics to say much, so I’d be dishonest to try), I do want to warn everyone about what statistics we pay attention to in the coming days and weeks. In particular, I’ve already seen a lot of statistics about violent crime in this country and how it’s down. Which is true, of course. But it’s not necessarily the right set of statistics for mass-shooting prevention. Those statistics include a majority of things that are not mass shootings: robberies, fights, domestic violence, and so forth. And while I can’t say for certain that mass shootings have different root causes and enabling factors than these crimes, I suspect that they do. (If I’m wrong, I’d like to see an analysis showing that we can lump them all together.) That being the case, looking at bulk statistics in which mass shootings are a small minority probably won’t give us a lot of insight into these shootings.

So what should you do? Ask yourself whether you think that the data presented are what you really need to reach the conclusions suggested and, if you can, complain if not.

posted by John Weiss at 19:33  

Wednesday, December 12, 2012

I’ve Discovered Something Revolutionary!

One of the biggest flags of someone talking nonsense is when they claim to have discovered something revolutionary that no one else has found before. Of course, every new thing has to be discovered at some point, but it’s always worth asking yourself whether the person who found it was who you would expect.

For example, many crackpots will claim to have discovered a revolutionary new scientific theory, in spite of having little or no training as a scientist. This doesn’t mean that they might not have an insight, but it sure is a serious flag. In any given field of science, many scientists are spending much of their lives (more than the 40 hours a week that they nominally work, in fact) thinking about their subject. They certainly can’t have every possible insight or find every useful way of looking at the universe, but with all of that time, odds are in favor of any major discoveries coming from that community. So when the revolutionary discovery comes from someone outside the field, be wary.

It’s also worth remembering that most scientific and engineering breakthroughs really aren’t revolutionary. If you think carefully about it, most things are incremental: small steps forward add up to major progress, but it isn’t one person or team making the advance in one fell swoop. It’s a community of people over a long period of time. And even when there’s an apparent breakthrough, it’s often been anticipated by other teams, who were usually competing for the same objective. Most major ideas in science and technology come out of their eras: people were already thinking about the things that led them to those ideas. It’s seldom just one person who has the entire idea by themselves.

For example, Newton may have invented calculus, but so, it appears, did Leibniz. Newton’s laws of motion were anticipated in Galileo’s work, although perhaps less clearly and certainly with less result. Newton’s law of universal gravitation was being considered by contemporaries like Hooke, Wren, and Halley, although they couldn’t show that it yielded the right planetary motion. Most parts of Einstein’s theory of Special Relativity were being tossed around by other physicists before he published. (Einstein himself was not an outsider, either. He was a patent clerk, but he was also a physicist who simply couldn’t find another job at the time.)

So when someone claims to have discovered something revolutionary, be skeptical. Be skeptical when it’s an abstract discovery that asks nothing from you beyond your attention, and be extra skeptical if they want money or something else.

posted by John Weiss at 14:41  

Thursday, August 9, 2012

The Illusion of Balance 2: Are Both Sides Equally Valid?

The Illusion of Balance problem has another facet: the notion that there are two sides to every issue and that both sides are equally valid. This usually manifests in the news as interviewing people on “both” sides of a given issue and giving them equal time and weight. Doing so implies that each side is just as valid as the other. This is not necessarily the case.

This illusion of balance stems from the fact that most of our political and social debates do have two sides, if only by fiat. (It often seems like there might be more than two sides, doesn’t it? But we have a two-party system and two sides generally seems to make for a tidier debate in any case.) Fair enough. If we’re discussing, say, immigration reform, getting a Democrat and a Republican to discuss their parties’ views on the issue is just good reporting.

The problem is that sometimes there really aren’t two sides (or, rather, the two sides aren’t equally valid). If, for example, an astronomer were being interviewed and the shape of the Earth came up, you’d be appalled if the media interviewed a flat-earther for a contrary view. It’d be daft. The shape of the Earth isn’t really in doubt, and even giving equal time (and thus, implicitly, equal weight) to the counter-argument is at best bad practice and at worst dishonest.

So that was an extreme example, but this comes up all the time in less extreme cases. Consider the perpetual “debate” the country seems to have over teaching evolution. There’s little scientific debate about whether species evolve in time; the evidence for that is much too compelling for anything short of extraordinary counter-evidence to reopen that issue. And yet the news media hardly ever seems to interview a biologist without getting someone from the Discovery Institute or some other group that’s not interested in the science. The same is true with Anthropogenic Global Warming, the connection between tobacco and lung cancer, etc.

What do all of these have in common? They’re questions of science. A lot of scientific issues are unsettled, and honesty requires talking to both sides. (Take the issue of how much water might have been on Mars at some point in the past. It’s not a settled issue, and a good story on it would interview several people with different views.) But listen to the scientists involved: if the actual researchers are overwhelmingly saying the same thing, then odds are any “debate” you see is manufactured. This is doubly true if some other group has something to gain from there being a debate at all. (In the case of Evolution, the Creationists have made it clear that they want Christianity taught in public schools. In the case of Global Warming… well, you know who has an interest in not admitting that the science there is pretty much decided.)

posted by John Weiss at 16:46  

Monday, July 30, 2012

An Interesting Example of a Context-Free Statistic

Here’s an article about an out-of-court settlement between a firm that was collecting payments on behalf of a hospital system and the State of Minnesota. The details aren’t of interest here, except this: the firm is paying $2.5 million to settle the suit. Got that?

So, I have a vague idea of how much $2.5 million is. You most likely do, too. But it’s still a low-context (or context-free) statistic. Why? Because we don’t know where the money will go, first of all. Will it go to pay back people who were wrongfully harassed? To pay people who were wrongfully charged money that they then paid? Or is this a punitive payment to punish the firm for its actions? In any of these cases, we need more information to assess how much $2.5 million really is.

Consider: if we’re in the first case, then I’d want to know how many people will get the money and how the average payment compares to other such payments. In the second case, how does the total compare to how much people were wrongly charged? If $2.5 million is much less than that figure, this payment is symbolic at best. If it’s a lot more, then clearly, even with interest, this money is for more than just paying back those wronged.

And in the final case, I’d really want to know what this firm’s profits were like (at least in Minnesota). If $2.5 million is compared to $1 billion in profits, well, that seems like a very different punishment than $2.5 million compared to $10 million in profits, doesn’t it?

I can’t think of a case where I don’t need to know what this money is for and how it compares to some other related figure. We need context!
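As a quick illustration, here’s a sketch (with hypothetical baselines, since the article supplies none) of how differently the same $2.5 million reads against different reference figures:

```python
settlement = 2_500_000

# Hypothetical baselines; the article gives none of these figures.
scenarios = {
    "restitution vs. $40M wrongly charged": 40_000_000,
    "penalty vs. $1B in profits": 1_000_000_000,
    "penalty vs. $10M in profits": 10_000_000,
}

for label, baseline in scenarios.items():
    print(f"{label}: settlement is {settlement / baseline:.2%} of baseline")

# The raw dollar figure is identical in every case; only the ratio to some
# relevant baseline tells us whether the settlement is large or trivial.
```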

posted by John Weiss at 18:17  

Sunday, July 22, 2012

The Illusion of Balance (Part 1): Did they Check these Facts?

One of the ideals of reporting has long been “balance”, reporting the facts and giving the audience or reader a chance to decide the truth for themselves. This is a laudable ideal, of course, except that sometimes the pursuit of the appearance of balance (and fairness) gets in the way of actual careful reporting of the facts. I’ve seen this happen in two ways, so I’m covering it in two posts. Today, I’ll look at the problem with treating everyone’s facts equally. In my next post, we’ll look at the problem with giving equal time to all sides.

News media (and I’m not saying “reporters” because this also involves their editors and everyone else up and down the line) can choose to report facts in a few ways. A book I was reading recently (The Inquisition of Climate Science by James Lawrence Powell) suggested that this is best viewed as a continuum. I’m going to argue that there are at least two dimensions. The first is neutrality. A story can take a strong stance or it can be entirely neutral. On the former extreme, you have editorials, which aren’t usually considered news pieces per se. (Although in some cases, they seem to get slipped into the news without comment or label.) We tend to think that news stories should be on the neutral extreme, and with good reason. We don’t want them too neutral, however. We want reporters to make logical inferences from the facts they collect, and to some degree, taking a stand on some issues is perfectly warranted. (“Terrorist attacks are bad” is a viewpoint that hardly anyone would object to, for example.) This is the continuum most people think about when they think of news balance. It’s not the one I mostly want to talk about, though.

The dimension I’m interested in is the trust dimension. Reporters interview experts, witnesses, and parties concerned with stories on a daily basis. They also read countless press releases and studies. They do these things in order to report the facts or views that they collect in their stories. But are their sources accurate? There’s a fact-checking continuum, too. On the one extreme, you can fact-check nothing at all and just report statements (perhaps with attribution to save yourself from lawsuits). On the other, you can fact-check every single thing. The latter is clearly not economically feasible, nor is it reasonable. When the family of an accident victim reports their reaction, there’s no need to verify the relationship (a little trust is in order, in general) or to check on their credibility. In fact, doing so would probably cause unnecessary added insult and injury to already miserable people. So we can’t fact-check everything.

Sadly, it often seems that the news media has gone the other way lately and fact-checks nothing at all, simply reporting whatever either political party claims is true. The problems with this extreme are obvious when we think about them, so why does it happen? In part, because of time and money. Media outlets are on limited budgets and are being squeezed all the time. But there’s also an illusion of balance at play: to fact-check a politician might seem like taking sides against them. If the media appears to overwhelmingly check on one side or the other, that is seen as bias. So I wonder if the media also avoids checking on politicians’ facts for fear of seeming hostile to one side or the other. (This is also true of areas other than politics. When reporting a medical breakthrough, a good reporter will contact other experts in the field and get their views of the discovery, for example.)

Of course, fact-checking isn’t hostility; it’s the media’s job. I take it as a warning sign when I read a fact attributed to someone, especially a politician or pundit, without having been checked.

posted by John Weiss at 10:45  

Thursday, June 28, 2012

Headlines that Disagree with their Stories

I recently read an article about workers using medical leave as a way of avoiding performance improvement guides. At least, that’s what the headline said, and admittedly the first few paragraphs said it, too, though they covered only one specific case. But as I read down the article, I was surprised to see that when it presented the hard data on medical leave and why people have been taking it, the reporter admitted that no one knows why the leave is taken. In other words: there is no real data, other than anecdotes, to support the claim.

This happens not infrequently (weasel words, I know) in journalism. Headlines aren’t necessarily written by the reporter. An editor may well slap a title on the story without understanding it well enough to summarize it accurately, or may not even care about accuracy as much as getting attention. In fact, even an accurate headline is often rather hyperbolic about what the story actually contains, probably for this reason.

So what should you do? Read the entire story, and don’t trust the headlines.

posted by John Weiss at 16:21  

Monday, March 21, 2011

Out of Expertise, Thanks to Ann Coulter

To begin with, I’d like to state that I’m no fan of Ann Coulter. In fact, I’ll be even more blunt: I know of nothing about her that makes me like her in any way, unless you count the even-more-horrible things that she hasn’t said. So feel free to take this post with a grain of salt. (Or, if you think watching me attack her sources will irritate you, please just skip this post. I won’t be offended; it’s reasonable.)

Anyway. Submitted for your consideration.

There’s a lot wrong with this, actually. (She weirdly cites her sources as the New York Times and Times of London, but then claims that the media won’t cover the story, for example.) My gripe is going to be short and simple: she’s evidently basing her comments on studies by physicists. Here’s an out-of-expertise flaw: physicists know a lot about radiation, as such. It’s what a lot of us do. A lot of us even know a good bit about radiation safety and illnesses. But we’re not (at least the overwhelming majority of us) in the business of researching the health effects of radiation on humans. We might, I suppose, be involved in such a study by sharing our expertise on the radiation’s physics. But if you want to study the epidemiology of radiation sickness, you want a physician (or other medical researcher) or a biologist. Health and medicine are biological fields; physicists generally have little real expertise there. I’ll grant that the average physicist most likely knows more biology than the average layman, and some of us are biophysicists who may know this sort of thing to the point of expertise. But in general? Don’t trust a physicist regarding medicine.

(I’d like to add that I know of several examples of scientists famously going out of their way to make pronouncements well outside their fields on topics of public interest. Most of them are physicists. I’m not sure if that’s selection bias at play or if it’s really the case that we’re more prone to overestimating our expertise.)

posted by John Weiss at 22:59  

Monday, March 14, 2011

Apples and Oranges

It’s easy to lie with statistics. This is a well-worn truth (or at least something that is generally believed to be true). One popular way to do this is to compare two different things. Ideally, you try to arrange to compare two things that look similar, even if they really aren’t.

Take, for example, the recent public-union debates in Wisconsin. Without judging the merits of unions (such judgments are not the purpose of this blog), some of the statistics being thrown around were highly suspect. For example, some people were quoting data showing that teachers are paid more (on average) than the average worker in the country. This is probably true, but it’s not a very useful comparison because the populations in question don’t match very well. Teachers are almost certainly more educated than the average employed individual in the US, for a start. Expecting them to make no more than someone working a job that requires only a high-school diploma is unrealistic. (This isn’t to denigrate those jobs by any means; they’re very important and I’m thankful that people do them. But the reality is that they pay less than jobs that require more training, at least under our current system.) Other factors to consider are things like time in the profession. (Are teachers more or less likely to have more experience than the average worker? I honestly don’t know, but it surely matters for comparisons of pay.) To make a valid comparison, you need to control for as many of these factors as you can. In general, you can’t control for every single factor, but to blithely compare two very different populations and try to infer conclusions is reckless at best. At worst, it’s intentionally misleading.
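To see why controlling matters, here’s a toy sketch (all numbers invented) in which the raw comparison and the like-for-like comparison point in opposite directions:

```python
# Invented average salaries, grouped by education level.
teacher_pay = 52_000        # teachers (nearly all hold degrees)
degree_holder_pay = 58_000  # non-teachers with degrees
hs_diploma_pay = 34_000     # non-teachers with only a high-school diploma
degree_share = 0.4          # fraction of non-teachers holding degrees

# Raw comparison: teachers vs. *all* other workers, education levels mixed.
others_avg = degree_share * degree_holder_pay + (1 - degree_share) * hs_diploma_pay
print(f"raw:     teachers {teacher_pay:,} vs. others {others_avg:,.0f}")
# -> 52,000 vs. 43,600: teachers look overpaid

# Like-for-like: teachers vs. other degree-holders only.
print(f"matched: teachers {teacher_pay:,} vs. degree-holders {degree_holder_pay:,}")
# -> 52,000 vs. 58,000: teachers look underpaid once education is held constant
```

Same data, opposite conclusions; the only difference is whether the comparison population actually matches.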

Incidentally, attempts to control for these factors suggest that teachers make around 5% less than private-sector workers with comparable backgrounds. You’re welcome to question the source of the study; it doesn’t appear to be as neutral as I’d like when making a policy decision. And you’re also welcome to feel that 5% isn’t enough of a difference to worry about. But this suggests that teachers are not as overpaid as commentators have claimed, once you compare them to a more equivalent sample.

Another example I heard recently was on The Daily Show. Rand Paul was on, and he claimed that the richest 1% of Americans pay 30% of our total income taxes and therefore are doing more than their fair share. Seems pretty striking, doesn’t it? One percent of us paying 30%? Wait a minute, though: if they’re the richest 1%, they should surely be paying more than the average, since they are, by definition, richer than average. Even with a flat tax rate (rather than the progressive rates we nominally have), that’s to be expected. So the question is: how much do the richest 1% own? The answer, apparently, is around 35% of the wealth in this country. Oh, dear. That means they’re underpaying their taxes relative to you and me. (I’ll assume that you’re in the poorest 99%, like I am.) In fact, it means that our actual, effective tax structure is regressive: the rich appear to be paying lower rates than the poor. (There are a number of reasons to think this is true just from various loopholes in the tax code, of course. For one thing, capital gains are taxed at a lower rate than ordinary income.)
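Here’s that arithmetic as a sketch, using the two figures from the segment and following the comparison above of tax share to wealth share (share of income would arguably be an even better baseline, but the logic is the same):

```python
# Figures cited above: the top 1% pay ~30% of income taxes
# and hold ~35% of the wealth.
tax_share_top1 = 0.30
wealth_share_top1 = 0.35

# If taxes tracked holdings exactly (a flat rate on what each group has),
# each group's share of taxes would equal its share of wealth.
ratio_top1 = tax_share_top1 / wealth_share_top1              # ~0.86
ratio_rest = (1 - tax_share_top1) / (1 - wealth_share_top1)  # ~1.08

print(f"top 1% pay {ratio_top1:.2f}x their proportional share")
print(f"bottom 99% pay {ratio_rest:.2f}x their proportional share")
# "30% of all taxes" sounds enormous until you ask: 30% compared to what?
```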

This is not meant to judge the tax rates or who should pay how much. That debate ultimately requires judgments that go beyond simple facts into philosophy and ethics. But to get to that point, we need data that tells us what’s really going on. Tossing out misleading facts may help you win the argument, but it doesn’t really help craft a truly informed policy. Part of telling the truth means comparing apples to apples. Only then can people judge your case for themselves, and that should be the goal.

posted by John Weiss at 23:00  