Observational studies and causation

There’s a problem with observational studies.

Let’s say you tell people to do something, like eating less red meat, in the hope of changing their behavior. You end up with some people who totally avoid red meat, some people who reduce the amount of red meat they eat, and some people who just ignore you.

Then you come back 10 or 20 years later, do an observational study comparing how much red meat people eat with how healthy they are, and, lo and behold, you find that the people who eat less red meat are healthier.

So, you publish your study and a bunch of other people publish their studies.

Unfortunately, there’s a problem: the act of telling people what to do is messing with your results. The people who listened to your advice to give up red meat are, in a myriad of ways, fundamentally more interested in their health than those who didn’t listen. Those differences are known as “confounders”, and studies use statistical techniques to reduce their impact on the results, but they can never get rid of all of them. Which leaves us with a problem: we don’t know how big the residual confounding is compared with any real effect we might be seeing.
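To make the residual-confounding point concrete, here is a minimal simulation sketch in Python. All the numbers and the “health consciousness” variable are made up for illustration: even when red meat has no effect at all on the outcome, adjusting for only a noisy proxy of the confounder still leaves an apparent association.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical unmeasured confounder: how health-conscious a person is.
health = rng.normal(size=n)

# Health-conscious people eat less red meat (made-up effect sizes).
red_meat = -0.8 * health + rng.normal(size=n)

# Mortality risk depends ONLY on health consciousness, not on red meat,
# so the true causal effect of red meat here is exactly zero.
mortality = -1.0 * health + rng.normal(size=n)

# The study can only adjust for a noisy proxy of the confounder
# (say, a lifestyle questionnaire), so the adjustment is imperfect.
proxy = health + rng.normal(size=n)

# Ordinary least squares: mortality ~ intercept + red_meat + proxy.
X = np.column_stack([np.ones(n), red_meat, proxy])
coef, *_ = np.linalg.lstsq(X, mortality, rcond=None)

# coef[1] (red meat) comes out clearly positive even though the true
# effect is zero; that leftover association is residual confounding.
print(f"adjusted red-meat coefficient: {coef[1]:.3f}")
```

The adjustment shrinks the spurious association but cannot remove it, because the proxy only partly captures the real confounder. That is exactly the situation the studies are in.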

Residual confounding is why those studies can never show causality; if you look at the studies themselves, they will say there is an association between red meat consumption and increased mortality.

But in the press releases from the research groups or universities, causality is often assumed.



So, what do you think?