Echoing the sentiment of Robert Coe’s 2002 paper, “It’s the Effect Size, Stupid,” one of my former Stats professors always commented, “You call that significant?” when reported results did not include an effect size (regardless of the P value). As Coe and many others have argued for years, reporting effect size with a margin of error has many advantages over tests of statistical significance alone.
Remember that the P value, which is reported as statistical significance, is the probability of observing a difference at least as large as the one found, assuming there is no true difference between the two groups. Keep in mind that the P value depends upon both effect size and sample size. Sometimes, a statistically significant result may mean only that a huge sample was used. Statistical significance tells us only whether there is a difference between two groups. If reported in the absence of effect size, it does not provide adequate information for the consumer to completely assess and understand the results.
So, how does effect size complete the story? Effect size is a scale-free index that is independent of sample size. It is a simple way to quantify the size of the difference between two groups. Statistical significance tells us whether the difference between two groups is likely due to chance, and effect size tells us the magnitude of the difference. Depending on the type of comparison made, effect size is estimated with different indices. The two main indices used are (1) differences between the means of two groups and (2) measures of the associations or correlations between variables. Many online effect size calculators, with explanations of their use, are readily available.
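The first index mentioned, the standardized difference between two group means, is commonly reported as Cohen’s d. A minimal sketch of the calculation (the sample data below are hypothetical):

```python
# Cohen's d: standardized difference between two independent group means,
# scaled by the pooled standard deviation. Sample data are hypothetical.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) for two independent groups."""
    na, nb = len(group_a), len(group_b)
    # Pooled SD weights each group's variance by its degrees of freedom.
    pooled_sd = (((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

treatment = [23, 25, 28, 30, 31, 27]
control = [20, 22, 24, 25, 23, 21]
print(round(cohens_d(treatment, control), 2))  # → 1.93
```

Because d is expressed in standard deviation units, it can be compared across studies that used different measurement scales, which is exactly what a raw mean difference or a P value cannot offer.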
Reporting effect sizes not only assists readers in understanding the magnitude of the differences reported but also contributes essential information to any field of study. When you include effect sizes in results, you contribute to the planning of future studies as well as to the analysis and reporting of research results. How so? Remember that an accurate effect size estimate is needed when conducting:
- sample size estimation
- power analysis
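To see why, here is a minimal sketch (with hypothetical parameter choices) of how an assumed effect size drives sample size estimation for a two-sided, two-sample comparison, using the standard normal approximation n per group ≈ 2·((z₁₋α/₂ + z₁₋β) / d)²:

```python
# Approximate sample size per group needed to detect a standardized
# effect size d at significance level alpha with the desired power.
# Uses the normal approximation for a two-sided, two-sample test.
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Sample size per group to detect effect size d (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided test
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Smaller assumed effects require much larger samples
# (0.2, 0.5, 0.8 are Cohen's conventional small/medium/large benchmarks):
for d in (0.2, 0.5, 0.8):
    print(d, n_per_group(d))
```

Note how the required sample size grows with the inverse square of the effect size: halving the assumed effect roughly quadruples the sample needed, which is why an honest effect size estimate from prior studies is so valuable when planning new ones.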
We’ll leave the final word about the “significance” of effect size with Jacob Cohen, from “Things I have learned (so far)” (Am Psychol. 1990;45:1304–1312):
“The primary product of a research inquiry is one or more measures of effect size, not P values.”