Build Your Own Baloney Detector

A tool-kit for avoiding being fooled

Thursday, February 7, 2013

Numbers Not Provided (And They Really Should Be)

I was listening to NPR on my commute earlier this week when they ran a story about FMLA leave. As good journalists, they got different views. One view (against the act) was that “it gets abused,” followed by a statistic that the day with the most FMLA leave each year is the day after the Super Bowl. Now, that statistic is useful and interesting (although I’m curious how much more common FMLA leave is on that day), but what’s really lacking is any sense of how often the leave is abused. The mere existence of abuse isn’t interesting; it would be very newsworthy indeed if you could find any such law that was never abused. What’s missing is how big a problem the abuse is.

Surely most of us would agree that a single case of abuse per year (almost certainly too low, of course) is essentially irrelevant to policy around the law. On the other hand, if 80% of all leaves taken were found to be fraudulent, most would likely agree that’s too many and that the law needs to be changed. The real number is almost certainly somewhere between those extremes. But without it, it’s difficult to know what to make of this case.

So then we ask: why were numbers not provided? It’s probably not that the speaker is hiding them (although in other situations, that might seem more likely). Does he lack the statistics we need? Then why isn’t he trying to find them, perhaps with research? If the issue is important enough to speak out for a policy change, it’s important enough to get the numbers.

posted by John Weiss at 12:39  

Wednesday, December 12, 2012

I’ve Discovered Something Revolutionary!

One of the biggest flags of someone talking nonsense is when they claim to have discovered something revolutionary that no one else has found before. Of course, every new thing has to be discovered at some point, but it’s always worth asking yourself whether the person who found it was who you would expect.

For example, many crackpots will claim to have discovered a revolutionary new scientific theory, in spite of having little or no training as a scientist. This doesn’t mean that they might not have an insight, but it sure is a serious flag. In any given field of science, many scientists are spending much of their lives (more than the 40 hours a week that they nominally work, in fact) thinking about their subject. They certainly can’t have every possible insight or find every useful way of looking at the universe, but with all of that time, odds are in favor of any major discoveries coming from that community. So when the revolutionary discovery comes from someone outside the field, be wary.

It’s also worth remembering that most scientific and engineering breakthroughs really aren’t revolutionary. If you think carefully about it, most progress is incremental: small steps forward add up to major advances, but it isn’t just one person or team making the advancement in one fell swoop. It’s a community of people over a long period of time. And even when there’s an apparent breakthrough, it’s often been anticipated by other teams, who were usually competing for the same objective. Most major ideas in science and technology come out of their eras, when people were thinking about the things that led them to those ideas. It’s seldom just one person who has the entire idea by themselves.

For example, Newton may have invented the calculus, but so, it appears, did Leibniz. Newton’s laws of motion were distinctly discussed in Galileo’s work, although perhaps less clearly and certainly with less result. Newton’s law of universal gravitation was being considered by contemporaries like Hooke, Wren, and Halley, although they couldn’t show that it yielded the right planetary motions. Most parts of Einstein’s theory of special relativity were being tossed around by other physicists before he published. (Einstein himself was not an outsider, either. He was a patent clerk, but he was also a trained physicist who simply couldn’t find an academic job at the time.)

So when someone claims to have discovered something revolutionary, be skeptical. Be skeptical when it’s an abstract discovery that asks for nothing from you — apart from your attention — and be extra skeptical if they want money or something else.

posted by John Weiss at 14:41  

Monday, March 14, 2011

Apples and Oranges

It’s easy to lie with statistics. This is a well-worn truth (or at least something that is generally believed to be true). One popular way to do this is to compare two different things. Ideally, you try to arrange to compare two things that look similar, even if they really aren’t.

Take, for example, the recent debates over public unions in Wisconsin. Without judging the merits of unions (such judgments are not the purpose of this blog), some of the statistics being thrown around were highly suspect. For example, some people quoted data showing that teachers are paid more, on average, than the average worker in the country. This is probably true, but it’s not a very useful comparison because the populations in question don’t match well. Teachers are almost certainly more educated than the average employed American, for a start. Expecting them to make no more than someone working a job that requires only a high-school diploma is unrealistic. (This isn’t to denigrate those jobs by any means; they’re very important and I’m thankful that people do them. But the reality is that they pay less than jobs that require more training, at least under our current system.) Other factors to consider include things like time in the profession. (Are teachers more or less likely to have more experience than the average worker? I honestly don’t know, but it surely matters for comparisons of pay.) To make a valid comparison, you need to control for as many of these factors as you can. In general, you can’t control for every single factor, but to just blithely compare two very different populations and attempt to infer conclusions is reckless at best. At worst, it’s intentionally misleading.

Incidentally, attempts to control for these factors suggest that teachers make around 5% less than private-sector workers with comparable backgrounds. You’re welcome to question the source of the study; it doesn’t appear to be as neutral as I’d like to see when making a policy decision. And you’re also welcome to feel that 5% isn’t enough of a difference to worry about. But it suggests that teachers are not as overpaid as commentators have claimed, once you compare them to a more equivalent sample.

Another example I heard recently was on The Daily Show. Rand Paul was on, and he claimed that the richest 1% of Americans pay 30% of our total income taxes and are therefore doing more than their fair share. Seems pretty striking, doesn’t it? One percent of us paying 30%? Wait a minute, though: if they’re the richest 1%, they should surely be paying more than average, since they are, by definition, richer than average. Even with a flat tax rate (rather than the progressive rates we nominally have), that’s to be expected. So the question is: how much do the richest 1% own? The answer, apparently, is around 35% of the wealth in this country. Oh, dear. That means they’re underpaying their taxes relative to you and me. (I’ll assume that you’re in the poorest 99%, like I am.) In fact, that means our actual, effective tax structure is regressive: the rich appear to be paying lower rates than the poor. (There are a number of reasons to think this is true just based on various loopholes in the tax code. For one thing, capital gains are taxed at a lower rate than ordinary income.)
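
The arithmetic behind that comparison is simple enough to check yourself. Here’s a minimal Python sketch using only the round numbers quoted in the segment. Note that dividing tax share by wealth share is just a crude proxy for an effective rate, since income taxes are levied on income rather than wealth, so treat this as illustration, not measurement:

```python
# Round figures from the interview: the richest 1% pay ~30% of all
# income taxes and hold ~35% of the wealth. (Both are approximate.)
top_tax_share = 0.30
top_wealth_share = 0.35

# Taxes paid per dollar of wealth, as a crude proxy for an effective rate.
top_proxy_rate = top_tax_share / top_wealth_share               # ~0.86
rest_proxy_rate = (1 - top_tax_share) / (1 - top_wealth_share)  # ~1.08

# Relative to their wealth, the top 1% pay *less* than everyone else.
print(top_proxy_rate < rest_proxy_rate)  # True
```

The point is not the precise ratio but the direction of the comparison: a group holding 35% of the wealth and paying 30% of the taxes is paying less, per dollar of wealth, than the rest of us.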

This is not meant to judge the tax rates or who should pay how much. That debate ultimately requires judgments that go beyond simple facts into philosophy and ethics. But to get to that point, we need data that tells us what’s really going on. Tossing out misleading facts may help you win the argument, but it doesn’t really help craft a truly informed policy. Part of telling the truth means comparing apples to apples. Only then can people judge your case for themselves, and that should be the goal.

posted by John Weiss at 23:00  

Wednesday, December 29, 2010

Speaking out of Your Area of Expertise

I’m pleased to say that while this trap is seductive, it’s usually easy to guard against if you watch out for it.

What is this trap? It’s simply that just because someone is an expert in one field, it doesn’t follow that they’re an expert in another (even related) field. For example, I’m an astrophysicist by training. I study Saturn’s rings. While I can’t claim my pronouncements on that topic are authoritative, I’d hope you might listen carefully to my thoughts on it. But as you get farther from that topic, my expertise wanes. If I explained the geology of Venus, you might want to take it with a grain of salt. If I gave you medical advice, you’d be well-advised to take it with a heaping spoonful of salt. Now, there are other areas in which I have at least minor expertise, but you can’t know what they are from my professional degree alone. For example, I think I do know more than average about migraines, but that has nothing to do with my degree, so I’d entirely support you checking before taking what I tell you about them as valid.

Does this really happen? Yes! I just finished reading Merchants of Doubt (which I recommend to everyone, everywhere). One common theme in the book is scientists holding forth on issues from the medical dangers of cigarette smoke to the environmental dangers of ozone depletion and global warming. What’s interesting about these scientists is that nearly all of them were (and are) speaking well outside their areas of expertise. Usually, it was physicists (solid-state or nuclear) talking about the medical effects of smoke or the effects of CO2 on climate. While it’s possible that they did, in fact, do the work needed to become experts in those areas, it’s by no means assured. (I know of no reason to think that they did and, in fact, their pronouncements on the issues suggest that they didn’t. Or that they were willfully misleading, which is even more depressing.) And yet they were (and still are) taken very seriously by Congress, the White House, and the press. People see “scientist” and assume that our fields are interchangeable. They’re not.

Similarly, an engineer is not qualified to diagnose your sore arm. Even a dentist, a medical professional, is probably well outside her qualifications here. (In fact, a lot of MDs are probably unqualified to give a solid diagnosis. A skin specialist, for example, is probably out of his depth with joint problems, and you’d be better off talking to your GP/family-medicine doctor/internist or a specialist in joints.)

So what’s the solution? Check the qualifications of the speaker and think carefully about whether or not they match the claims being made. The internet has made this a lot easier; besides, people who tout their qualifications generally tell you what you need to know in the process. And if you’re not sure what an impressive-sounding qualification means, check. Not everyone practicing in a medical field is well-versed in biochemistry, for example, so not everyone with a degree in a medical field should necessarily be trusted when they claim to have a new theory about some aspect of biochemistry. (They might be right, of course, but don’t rely on it! Check!)

posted by John Weiss at 17:51  

Sunday, October 31, 2010

Context-Free Statistics

I’ve been noticing this more of late, although I realize it’s been with us for quite a long time. You’ve seen it, too. A news report or an opinion piece will quote some statistic, whether it’s the percentage of Daily Show viewers who lean left or how many cubic kilometers of ice sheet we’ve lost this year. But what it won’t give you is any context in which to interpret that statistic.

Take the ice sheets: if I told you that Antarctica loses 100 cubic kilometers of ice annually, what would you make of that number? Large? Small? Cause for worry? Or just an interesting datum? Honestly, by itself it’s impossible to tell. What you need to know is, for example, how many cubic kilometers of ice Antarctica holds. Or how much that melt will raise sea levels. Or whatever other context will let you interpret the number appropriately for the story at hand. By itself, unless you’re an expert in the field or have a particularly good sense of how large ice sheets are (or, at least, how large a cubic kilometer really is), the author might as well not have given you the number at all.
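
To make that concrete, here is a quick Python sketch of the kind of context that turns the bare number into something interpretable. The 100 km³ figure is the hypothetical one from above; the total-volume and conversion figures are rough approximate values I’m supplying for illustration, not numbers from any news story:

```python
# Hypothetical loss rate from the text, plus rough context figures
# (approximate values, supplied here for illustration only).
annual_loss_km3 = 100.0
antarctic_ice_km3 = 26.5e6    # total Antarctic ice volume, very roughly
gt_per_km3_ice = 0.92         # glacial ice: ~0.92 gigatonnes per km^3
gt_per_mm_sea_level = 360.0   # ~360 Gt of melt raises sea level ~1 mm

fraction_per_year = annual_loss_km3 / antarctic_ice_km3
mm_per_year = annual_loss_km3 * gt_per_km3_ice / gt_per_mm_sea_level

print(f"{fraction_per_year:.6%} of the ice sheet per year")
print(f"about {mm_per_year:.2f} mm of sea-level rise per year")
```

Either framing — a tiny fraction of the total sheet, or a fraction of a millimeter of sea level per year — gives a reader something to hang the raw number on, which is exactly what the bare statistic lacks.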

Another example, taken from news of yesterday’s “Rally to Restore Sanity and/or Fear”: some of the media coverage gave the breakdown of Daily Show fans’ political leanings. It was something like 40% liberal, 38% independent, and 19% conservative. So why is this a problem? Well, it’s subtler than the last example, since we all know what 40% is, but ask yourself: what is the point of these stats? If the only point is to know what Stewart’s audience thinks politically, they’re fine as they are. But if the author is trying (perhaps surreptitiously) to suggest that Stewart and/or his audience is more liberal than normal, we need more information: specifically, how representative is this of the demographic the audience is drawn from? Other studies have shown that the Daily Show audience is younger than average, so you can’t fairly compare their politics or other habits with those of the entire adult population. You need to know how they compare with the background population they more specifically belong to. (Similarly, you never see anyone compare the outcomes of a political survey like this with worldwide leanings, because it’s not really helpful to know whether an American sub-population is more or less conservative than China or South Africa.)

I suspect that oftentimes reporters fall into this trap unintentionally, because they’re not necessarily well trained in the meaning of numerical data and how to interpret it. But I also suspect (yes, this is me ascribing motivations; take from it what you will) that some of the time it’s done to gloss over inconvenient context and use numbers for pure shock and awe.

posted by John Weiss at 14:05  

Saturday, October 16, 2010

Free Lunches

“There’s no such thing as a free lunch,” goes an old saying. I’m not sure I can agree that it’s strictly true, but it’s still wise advice. We seldom get something for nothing, but people often let themselves be suckered into thinking that someone, often an anonymous stranger, really is giving them free stuff. It’s true that humans can be and often are altruistic and help each other, but that seldom occurs between strangers or between businesses and people.

The “Free Lunch” flag takes many forms, some of which are kind of subtle, some not so much. For example, there are sites that let you play “free” games on the internet. Now, granted, some of these are amateur games that really are being shared free of charge. Mostly, though, something is driving the business model. Often, it’s advertising for either the games’ creator or a third party. This may be an acceptable price to pay to you, in which case: have at it. But remember that there is a price.

More subtle examples of free lunches abound, however. Consider customer loyalty cards. You (often) get discounts for using them, but is that really something for nothing? Nope. Generally, stores are collecting data on your shopping in exchange for the discounts. Again, it might be worth it (the data is usually used only in aggregate; they don’t care about your purchasing patterns in particular), but it’s a price nonetheless.

In the end, the free-lunch problem doesn’t mean that we should refuse offers that look good. But it does mean that anytime we hear about something that sounds too good to be true, we should think about the details and what the (often hidden) costs might be.

posted by John Weiss at 18:33  

Tuesday, June 8, 2010

Weasel Words

When we’re writing or — especially — speaking, it’s much easier to avoid looking up the actual statistics behind what we’re saying. Instead, we often (see how I’m not quoting a frequency?) just use words like “a lot”, “often”, “most”, “many”, and so forth. By itself, this isn’t a bad thing: we’d never have enough time in the day to constantly look up all of the statistics we’d need. But these words do cause problems.

You see, it’s easy to abuse these words as weasel words: phrases we throw in not to simplify our lives but to give a mistaken impression while maintaining deniability. If I say something like, “a lot of people want X,” you would probably walk away thinking there’s a large percentage of people who want it. But really, what does “a lot” mean? More than 5? 10? 100? 1000? A thousand people is, by most standards, “a lot”, after all. I wouldn’t want that many in my class, for example. But for national politics, it’s a tiny number. Of course, if you called me on my statement, I could just hide behind the ambiguity of the phrase “a lot”.
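
The ambiguity is easy to see with a little arithmetic. Here’s a toy Python comparison; the adult-population figure is a rough assumed value, not something from the post:

```python
# The same raw count reads as "a lot" or "almost nobody" depending
# entirely on the baseline you divide it by.
people = 1000
class_size = 30           # a typical college class, roughly
us_adults = 250_000_000   # rough assumed count of US adults

print(people / class_size)          # over 33 classrooms' worth: "a lot"
print(f"{people / us_adults:.5%}")  # a vanishing sliver of the electorate
```

A quantifier without a baseline is exactly what lets the weasel wording work.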

I’m not saying you should question the exact stats every time you encounter these phrases. But certainly when reading (or listening to) formal communication, be it a corporate memo or an op-ed, it’s worth asking what the values actually are and why the writer/speaker didn’t give them to you. Did he not know/couldn’t be bothered to check? Did she want to hide them? Or did the stats just not exist? (In which case, how appropriate is the quantifier at all?)

posted by John Weiss at 21:14  

Saturday, February 20, 2010

Argument by Innuendo

I’m adding this one because I just heard it on the radio yesterday. You actually hear this sort of thing a lot, but what really disturbed me was that I heard this argument on NPR during All Things Considered.

What is “Argument by Innuendo”? It’s an argument made by vaguely referencing some perceived lack of reliability in someone or some group. It’s generally not stated outright, probably because that might tip you off.

Here’s an example: the speaker on NPR, arguing that the federal government shouldn’t regulate something (never mind what), said that it shouldn’t “because they’re politicians.” That isn’t an argument. What about politicians makes them unsuited to regulate? There may, in fact, be reasons, and those reasons may make a valid (or even convincing) argument. But no such case is presented. We’re simply left to assume that our prejudices against politicians make them unsuitable to regulate this.

What makes this argument particularly nasty is that it plays on prejudices most of us have. Most of us don’t trust politicians. Or lawyers. Or (insert political party here). And, of course, it doesn’t state the prejudice outright, so we can each fill in our own personal interpretation. It therefore goes with the grain of our views and subtly wins us over without ever making a case. Do not be fooled!

posted by John Weiss at 16:21  

Sunday, September 13, 2009

Conspiracy Theories

One of the most common examples of baloney is the conspiracy theory. A conspiracy theory is a claim that someone or some group of people is trying to effect change, hide the truth, or spread a falsehood, usually maliciously. (The latter point may seem silly, but there are all kinds of benevolent reasons to hide the truth from people, starting with birthday surprises.)

The trouble here is that conspiracies do happen. They happen all the time, in fact. History is full of examples: Caesar’s assassination. The Pazzi conspiracy to assassinate the Medici brothers. Guy Fawkes. The conspiracy that put Jane Grey on the throne of England for about nine days. There are scores more. Read about the reign of almost any king or queen you care to name (certainly before the modern era) and you’ll find that no matter how popular and capable they were, there were conspiracies against them or their governments.

But here’s the key thing about this abundance of conspiracies: you’ve probably never heard of more than a few of them. Why is that so important? You’ve never heard of most of them because they didn’t succeed. A quick (admittedly unscientific) look at conspiracies in history suggests that the vast majority fail. Often, they fail before the conspirators even have a chance to try their plan.

Why do conspiracies generally fail? Because most conspiracies are built on two competing demands: secrecy and resources. Conspiracies are, by their nature, clandestine. To work, they almost always need to surprise their targets, because the target is usually someone in a position of greater power, and surprise is needed to level the field.

Unfortunately, conspiracies also need resources, which range from time to plan and make arrangements, to materials, to people in the right places to make things happen. And here’s where the intrinsic tension in conspiracies comes in: all of these resources are at odds with the secrecy requirement. Gathering materials draws notice. The more people you involve, the more likely someone will leak the information. And the longer the secret must be kept, the more chances there are for leaks or discoveries. So every conspiracy has to balance its need for surprise against its need for resources. Getting the balance right (and since each situation is unique, each has its own balance) is difficult. Most conspiracies either fail to gather enough resources to succeed or are found out before they can move effectively.

OK, so most conspiracies fail. But they don’t all fail, right? Sure. But when you hear about a supposed conspiracy, ask yourself a few questions:

Is it likely that the would-be conspirators have the resources to pull this off?
Can the secret be kept for as long as is claimed?
Most conspiracy theories that have gained traction claim conspiracies lasting decades.
Are the people who would be able to check on the conspiracy and have the motive to do so the ones claiming that the conspiracy exists?
Conspiracy theorists often overlook that the people with the most to gain from exposing the conspiracy (and are most able to expose it) are not doing so. If the US never landed on the Moon, why didn’t the Russians or the Chinese point it out?
Is it likely that no one on the inside is talking?
Tens (if not hundreds) of thousands of people were involved in the Apollo missions, many of whom would have known if the missions were fake. What are the chances that none of them has spoken out? Bear in mind that the United States can’t even keep its nuclear secrets (under higher security than NASA musters) secret for very long.
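
That last question lends itself to a back-of-the-envelope model. The sketch below assumes each insider independently has some small chance per year of leaking; the particular numbers are made up for illustration, but the point is how fast the odds of total silence collapse as insiders and years multiply:

```python
def p_total_silence(n_insiders: int, p_leak_per_year: float, years: int) -> float:
    """Probability that no one ever leaks, assuming each insider
    independently has a fixed chance of leaking in any given year."""
    return (1 - p_leak_per_year) ** (n_insiders * years)

# Ten insiders keeping quiet for one year: silence is quite plausible.
print(p_total_silence(10, 0.001, 1))     # ~0.99

# Thousands of Apollo-scale insiders over four decades: essentially zero.
print(p_total_silence(1000, 0.001, 40))  # ~4e-18
```

The independence assumption is generous to the conspiracy (real leaks cascade), so the true odds of decades-long silence are, if anything, worse than this toy model suggests.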

So while conspiracies happen, if someone tells you that one is happening and succeeding, that’s a warning flag and it’s time to ask more questions.

posted by John Weiss at 10:31  

Friday, July 10, 2009

Prominent Use of Title

I’m sure we’re all aware that one fairly common logical fallacy is the argument from authority: the assumption that because some person in authority says something, it must be true. We’ve all encountered this in its obvious form, especially as children. (“My dad says….”) But there are more subtle ways to exercise this technique without giving it away. Some of these are even legitimate things to do. Use of a title is one such example.

By “use of title” I mean titles like “Doctor” or “Professor”. Titles that carry weight with most people, titles that make you suspect that the person at least has some experience or a clue as to what they’re talking about. (Of course, many would argue that this just means that the general public hasn’t met enough PhDs to realize that we don’t have the faintest notion most of the time.) It’s fair to be proud of these titles. They take a lot of work and at least some talent to earn, after all. But they can also be abused. I’m actually a little bit guilty of this myself: when I’m writing annoyed letters to customer support, I’ll usually select “Dr” just in case that gives my complaints more weight. (Forgive me?)

More pernicious is when people use “Dr.” or “PhD” to introduce themselves before selling you on an idea or, worse still, a product. I first noticed this pattern when reading a book on health that I’ll discuss later. One thing that immediately concerned me was that the author’s name on the cover read, “XXXX XXXXXXX, PhD”. I hadn’t really thought about including the “PhD” before then. I realized suddenly that what bothered me was the fact that usually people don’t include the title in such works. For a popular work, it doesn’t really matter if the author has a doctorate or not, just that they know enough to explain the material clearly. For scholarly work, it’s pretty much assumed that the author has an advanced degree. So really, any time I see “Dr” or “PhD” right in the by-line, I start to wonder why the author is touting that.

If you’re inclined to delve further, check what the degree or title actually applies to. It’s pretty common for people to point to their PhD and stay quiet about what it’s in. But if you think about it, a PhD in astrophysics (say) is no more qualified to make medical claims than a car mechanic or a grocery clerk. So it is quite important to know what the degree is in. (Or what kind of professor someone is. Or even what field of medicine an MD specializes in.)

Of course, it’s true that an advanced degree in the relevant discipline (or being a professor in the field, or a doctor of that specialty, or whatever) means you probably should give at least some extra attention to what the person is saying. They’ve most likely learned more about the topic than you have, after all, thanks to years of dedicated study. But that doesn’t carry over to other fields. Titles like “doctor”, “professor”, or “PhD” indicate a high degree of specialization in a topic and it’s exactly this that means that we need to ensure that the specialty matches the claims.

In the end, titles are dangerous, and you’re probably better off regarding people who use theirs with caution, at least until you can verify that it’s the right title for the claim. And remember: when dealing with factual claims, there are no authorities, only experts. And experts, even in their own disciplines, are wrong every day.

posted by John Weiss at 21:44  
