Here be dragons: the hidden dangers of suggestive correlations

I know that we both agree on this point: correlation does not mean causation. It’s an adage that’s easy to remind ourselves of whenever we see spurious correlations (as this pretty awesome site demonstrates).
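
If you want a feel for just how little a strong correlation can mean, here’s a quick sketch (in Python with NumPy; the 200-step walks, the 1,000 trials, and the 0.5 cutoff are arbitrary illustrative choices, not anything canonical): pairs of completely independent random walks end up sizably correlated just by chance, surprisingly often.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many pairs of independent random walks. By construction,
# neither series in a pair has anything to do with the other -- yet
# trending data correlate by accident all the time.
n_trials, n_steps = 1000, 200
big_r = 0
for _ in range(n_trials):
    x = np.cumsum(rng.normal(size=n_steps))  # random walk 1
    y = np.cumsum(rng.normal(size=n_steps))  # random walk 2
    r = np.corrcoef(x, y)[0, 1]
    if abs(r) > 0.5:
        big_r += 1

print(f"{big_r / n_trials:.0%} of independent pairs had |r| > 0.5")
```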

But how about suggestive correlations? Those times when we can make a narrative about our data, when we can effortlessly turn that correlation into a causation?

Well, that’s a completely different ball game.

It is just so tempting to take that next step to causation. It is just such an easy rabbit hole to fall into. Trust me, we’ve all done it.

Many thanks to xkcd.com for providing one of my favorite examples of the dangers of suggestive correlations!

In thinking about the correlation-causation trap, I often think of Pacific cargo cults of World War II. Observing the arrival of vast quantities of goods on airstrips, the indigenous people began to worship the soldiers as gods who would provide them with cargo in the future. When the war ended and the cargo stopped arriving, the islanders started mimicking the behavior of the soldiers—performing parade ground drills, waving landing signals on the runways, even building life-size replicas of airplanes out of straw. All in the hopes of getting the cargo to arrive once again.

In other words, the cult fell into the classic correlation-causation fallacy. The behaviors of the soldiers (parade ground drills, landing signals, airplanes) correlated with the arrival of cargo, and so it was a short hop to think that the behaviors, being pleasing to the gods, caused the arrival of cargo.

Yeah, yeah, yeah, I hear you say, but it’s not like WE would ever build life-size straw replicas of airplanes.[1] That’s just absurd.

But here’s the thing: if you don’t know that your causal relationship is built on flawed interpretations, you don’t think you’re building an airplane out of straw. No, you think you’re building an actual airplane and that waving landing signals is directing the arrival of actual cargo!

It’s easy to diagnose a cargo cult from the outside; it’s nearly impossible to do so from the inside.

Okay, but so what? If a relationship is “only” correlational but looks likely to be causal, what’s the harm in proceeding as if we had already proven the causality? Does actually proving that two things are causally linked really matter if we have good correlations?

In short: there can be a lot of harm in this. Flawed causality isn’t just some logical gimmick, and pointing it out isn’t just a party trick to break out at summer BBQs.

Flawed causality can impact medical decisions, like the now-famous case of thinking that hormone replacement therapy (HRT) decreased the risk of coronary heart disease. This conclusion was driven both by correlations in epidemiological studies and by a plausible narrative that seemed to explain the data. However, subsequent studies (with proper controls and patient randomization) demonstrated that HRT increased the risk of coronary heart disease.

Scientists began proposing the benefits of HRT in the early 1990s. And the randomized controlled trials? Those came out a decade later, in 2002. A full decade of medical treatments based on that old-as-time correlation-causation trap.
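
To make the trap concrete, here’s a minimal simulation of how a confounder can pull this off (in Python with NumPy; the “health consciousness” variable and the logistic probabilities are invented for illustration, not drawn from the actual HRT studies). A lurking factor drives both who takes a treatment and the outcome, so a naive observational comparison “finds” a benefit even though the treatment does nothing—and randomization makes the effect vanish.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical confounder: overall "health consciousness" (diet,
# exercise, access to care). It drives BOTH who takes the treatment
# and the outcome -- the treatment itself has zero effect here.
health = rng.normal(size=n)
takes_treatment = rng.random(n) < 1 / (1 + np.exp(-health))  # healthier people opt in more
heart_disease = rng.random(n) < 1 / (1 + np.exp(health))     # healthier people get sick less

# A naive observational comparison "finds" a protective effect:
risk_treated = heart_disease[takes_treatment].mean()
risk_untreated = heart_disease[~takes_treatment].mean()
print(f"risk if treated:   {risk_treated:.3f}")
print(f"risk if untreated: {risk_untreated:.3f}")

# Randomization breaks the link between the confounder and treatment,
# and the apparent effect disappears:
random_assignment = rng.random(n) < 0.5
effect = heart_disease[random_assignment].mean() - heart_disease[~random_assignment].mean()
print(f"randomized 'effect': {effect:+.4f}")
```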

Flawed causality can affect every aspect of our lives. Hiring decisions. Firing decisions. Promotion criteria. Investment strategies. Teaching practices. Business goals. Airport screenings. Stop-and-frisk practices. Public policy.

Until we understand the causality of a situation, we’re in unknown, treacherous waters. Because, despite best intentions, all of us (even “the professionals”) are in serious danger of chasing the symptoms rather than the cause.

That’s the real trouble with correlations: when we should be most on our guard is exactly when we are least inclined to be.


[1] If you’re into this sort of thing, you might even be inclined to call this a “straw airplane” argument.
