Correlation and Causality

The Misconception

The correlation between two variables, X and Y, never reveals anything about a possible causal relationship between the two variables. Simply stated: correlation ≠ cause.


Evidence That This Misconception Exists

The following statements come from three different online blogs:

"Correlation NEVER implies causation!"

"The fact that correlation never implies causation, which may be lost on many people, is extremely important."

"In the words of my college psychology professor, correlation never, never, never, never, never, never, never, never, never, never, never, never, never, never, never, never, equals causation. NEVER!"

Why This Misconception Is Dangerous

The oft-heard admonition against inferring cause from correlation is dangerous for two reasons. First, this warning implies that causal arguments are legitimate or not depending on the kind of statistical tool(s) used to analyze data. That notion of what’s needed to illuminate causal forces totally disregards the importance of research design. Statistical procedures can be helpful in studies that investigate cause; however, researchers concerned with causality must take care to reduce or eliminate alternative hypotheses for any causal connections suggested by their data.

The warning that correlation ≠ cause is also dangerous because it functions to keep the logical and mathematical equivalence of certain statistical procedures hidden from view. Data can sometimes be analyzed in different ways and yet produce the exact same results. It’s important for researchers (and for the readers of their research reports) to know about these equivalencies. Otherwise, they will think that different analyses are accomplishing different objectives, when in fact those different analyses are doing exactly the same thing.


Undoing the Misconception

To show that correlation can speak to the issue of causality, let’s consider a hypothetical medical investigation. This study involves a new drug that’s being tested to see if it can relieve headache pain. We’ll first consider the setup of the study. Then, we’ll look at two different ways the study’s data could be analyzed so as to justify a cause-and-effect claim.

Suppose we randomly assign 100 people with headaches to two groups: a treatment group (n = 50) and a placebo group (n = 50). Those in the first group are given a pill that contains a new drug that supposedly reduces headache pain, while those in the placebo group are given a look-alike pill that contains no medicine. Let's further assume that this is a double-blind clinical trial, that we ask people to rate their headache pain on a 0-to-20 scale one hour after taking their pills, and that we then statistically compare the mean ratings of the two groups. Finally, assume there are no plausible reasons why the two group means might be different (or similar), other than the possibility that the “active” pills do (or don’t) have a differential impact compared to the placebo pills.

Many researchers would choose to analyze the data from this headache study with an independent-samples t-test, so let’s imagine that this is how the data are treated statistically. Let’s also imagine that the t-test’s result is statistically significant, with rated headache pain being lower, on average, for those who took the active pills than for those who took the placebo pills. Given that the t-test showed a statistically significant difference between the two group means, the researchers could legitimately claim that the drug in the active pills most likely had a causal impact on the headache ratings.
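To make the t-test step concrete, here is a minimal sketch of a pooled-variance independent-samples t-test in Python. The pain ratings below are invented purely for illustration (and use far fewer people than the 50-per-group study described above); they are not data from any actual trial.

```python
import math

# Hypothetical pain ratings (0-to-20 scale) one hour after the pill;
# these numbers are invented purely for illustration.
active = [4, 6, 5, 3, 7, 5, 4, 6, 2, 5]
placebo = [9, 11, 8, 10, 12, 9, 7, 10, 11, 8]

def pooled_t(a, b):
    """Independent-samples t statistic with a pooled variance estimate."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((v - ma) ** 2 for v in a)
    ssb = sum((v - mb) ** 2 for v in b)
    sp2 = (ssa + ssb) / (na + nb - 2)        # pooled variance
    se = math.sqrt(sp2 * (1 / na + 1 / nb))  # standard error of the mean difference
    return (ma - mb) / se

t = pooled_t(active, placebo)
print(f"t = {t:.3f} on {len(active) + len(placebo) - 2} df")
```

A large negative t here would reflect lower average pain in the active-pill group, which is the pattern the hypothetical study assumes.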

What’s important to realize is that the data of this hypothetical study could be examined using a correlation coefficient. If this had been done, the X variable would correspond to group membership (with a 1 or 2 assigned to each person to indicate whether an active pill or a placebo pill was swallowed), while the Y variable would correspond to headache pain level, as indicated by the self-ratings collected one hour after the pills were taken. Stated differently, X would be the independent variable while Y would be the dependent variable.
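That correlational coding can be sketched just as easily. Using the same invented ratings as a stand-in for real data, group membership becomes the X variable (1 = active pill, 2 = placebo pill) and Pearson's r is computed directly on the (X, Y) pairs.

```python
import math

# Invented ratings for illustration; X codes group membership
# (1 = active pill, 2 = placebo pill) and Y is the pain rating.
x = [1] * 10 + [2] * 10
y = [4, 6, 5, 3, 7, 5, 4, 6, 2, 5,
     9, 11, 8, 10, 12, 9, 7, 10, 11, 8]

def pearson_r(a, b):
    """Ordinary Pearson product-moment correlation."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

r = pearson_r(x, y)
print(f"r = {r:.3f}")
```

Nothing special is needed to handle the dichotomous X variable: the ordinary Pearson formula applies as-is.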

The data for this study might look like this:

[Table of raw scores: X = group membership (1 = active pill, 2 = placebo pill); Y = headache pain rating one hour after taking the pill]
Had the data from this hypothetical study been correlated, the results would have been identical to those of the t-test. (See Note 1.) In other words, if the t-test indicated that a statistically significant difference existed between the two sample means, then the correlation would be statistically significant as well. (Similarly, if the t-test turned out to be nonsignificant, so too would the test of the correlation coefficient yield a nonsignificant result.)

The t-test and the test of the correlation coefficient would produce identical results because these two statistical procedures are mathematically equivalent. This equivalence shows up in the data-based p-level that’s examined to see if the independent variable (active versus placebo pill) has a statistically significant connection to the dependent variable (headache pain one hour after taking the pill). Whether the data are analyzed via an independent-samples t-test or a test on r, the p-level is the same.
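This equivalence is easy to verify numerically. The sketch below, again using invented ratings, computes t directly from the two groups and then recovers it from r via the identity t = r·√(n − 2)/√(1 − r²), which is the test statistic for a correlation coefficient.

```python
import math

# Invented illustration: X = group code (1 = active, 2 = placebo),
# Y = pain rating one hour after taking the pill.
x = [1] * 10 + [2] * 10
y = [4, 6, 5, 3, 7, 5, 4, 6, 2, 5,
     9, 11, 8, 10, 12, 9, 7, 10, 11, 8]

def pooled_t(a, b):
    """Independent-samples t with a pooled variance estimate."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    sp2 = (sum((v - ma) ** 2 for v in a) +
           sum((v - mb) ** 2 for v in b)) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

def pearson_r(a, b):
    """Ordinary Pearson product-moment correlation."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    return cov / math.sqrt(sum((u - ma) ** 2 for u in a) *
                           sum((v - mb) ** 2 for v in b))

t_direct = pooled_t(y[:10], y[10:])
r = pearson_r(x, y)
t_from_r = r * math.sqrt(len(x) - 2) / math.sqrt(1 - r ** 2)

# Apart from sign (which depends only on how the groups are coded),
# the two routes yield the same t on the same df, hence the same p-level.
print(abs(t_direct), t_from_r)
```

Because both routes produce the same t on the same degrees of freedom, any p-level computed from one is necessarily the p-level of the other.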

Later in this book, we will consider misconceptions concerning p-levels. Those misconceptions notwithstanding, it is still the case that our hypothetical study would yield the same p regardless of whether the two group means are compared or the headache ratings are correlated with group membership. The first of these statistical procedures would produce findings considered by researchers to address the causal impact of the new drug on headache relief. Because the correlational analysis is identical to the t-test (and because the study involved a manipulated independent variable and no plausible threats to internal validity), we are justified here in saying that the correlation coefficient, r, speaks to the issue of cause and effect. (See Note 2.)


Internet Assignment

Would you like to see some convincing evidence that a correlation coefficient can speak to the issue of cause and effect? Would you like the evidence to be connected to a small set of raw scores that you can analyze using an Internet-based interactive Java applet? If so, you can easily generate and examine such evidence.

To locate the scores you will be analyzing, visit this book’s companion Web site. Once there, open the folder for Chapter 3 and click on the link called “Correlation and Causality.” There you will find the data along with detailed instructions (prepared by this book’s author) on how to enter a small set of raw scores into an online Java applet and how to get the applet to analyze the data in two different ways. You will also be given a link to that Java applet. The results of the two analyses you perform may surprise you!

Note 1:  The resulting correlation could be referred to as Pearson’s r or as the point-biserial correlation, rpb. Both yield identical results, for the formula used to obtain rpb is nothing more than a simplification of r in the situation where one of the two variables is dichotomous.
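The simplification mentioned in Note 1 can itself be checked numerically. One common shortcut formula is rpb = ((M1 − M0)/s) · √(pq), where M1 and M0 are the Y means of the two groups, p and q are the proportions of cases in each group, and s is the standard deviation of all Y scores computed with n (not n − 1) in the denominator. The sketch below, using invented ratings, compares that shortcut to the full Pearson formula.

```python
import math

# Invented ratings for illustration; X is coded 0 = active, 1 = placebo.
active = [4, 6, 5, 3, 7, 5, 4, 6, 2, 5]
placebo = [9, 11, 8, 10, 12, 9, 7, 10, 11, 8]
y = active + placebo
x = [0] * len(active) + [1] * len(placebo)
n = len(y)

# Point-biserial shortcut: r_pb = ((M1 - M0) / s_n) * sqrt(p * q),
# where s_n uses n (not n - 1) in the denominator.
m1 = sum(placebo) / len(placebo)   # mean Y for the group coded 1
m0 = sum(active) / len(active)     # mean Y for the group coded 0
p = len(placebo) / n
q = 1 - p
my = sum(y) / n
s_n = math.sqrt(sum((v - my) ** 2 for v in y) / n)
r_formula = (m1 - m0) / s_n * math.sqrt(p * q)

# Full Pearson r computed on the same (X, Y) pairs.
mx = sum(x) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
sx = math.sqrt(sum((a - mx) ** 2 for a in x))
sy = math.sqrt(sum((b - my) ** 2 for b in y))
r_full = cov / (sx * sy)

print(f"shortcut r_pb = {r_formula:.4f}, Pearson r = {r_full:.4f}")
```

The two values agree to machine precision, confirming that rpb is simply Pearson's r applied to a dichotomous X variable.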

Note 2: A study’s internal validity is considered to be high if no confounding variables exist that might make (1) inert treatments appear to be potent or (2) potent treatments appear to be inert. When random assignment is used to form comparison groups, many (but not all) potential threats to internal validity vanish.