Compute effect size indices for standardized differences: Cohen's d,
Hedges' g and Glass's delta. (These functions return the population
estimate.)
Both Cohen's d and Hedges' g are estimates of the standardized
difference between the means of two populations. Hedges' g applies a bias
correction to Cohen's d for small sample sizes; for sample sizes > 20, the
results for both statistics are roughly equivalent. Glass's delta is
appropriate when the standard deviations differ substantially between
the populations, as it uses only the second group's standard deviation.
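To make the relationship between the three indices concrete, here is a rough hand-rolled sketch (not the package's internal code) using the mtcars data from the examples below; the approximate Hedges correction factor 1 - 3/(4 * df - 1) is assumed:

x <- mtcars$mpg[mtcars$am == 0]
y <- mtcars$mpg[mtcars$am == 1]
n1 <- length(x); n2 <- length(y)

# pooled (Bessel-corrected) standard deviation
sd_pool <- sqrt(((n1 - 1) * var(x) + (n2 - 1) * var(y)) / (n1 + n2 - 2))

d     <- (mean(x) - mean(y)) / sd_pool              # Cohen's d
g     <- d * (1 - 3 / (4 * (n1 + n2 - 2) - 1))      # Hedges' g (bias-corrected d)
delta <- (mean(x) - mean(y)) / sd(y)                # Glass's delta (second group's SD only)

These should reproduce, up to rounding, the values printed by cohens_d(mpg ~ am, data = mtcars), hedges_g(mpg ~ am, data = mtcars), and glass_delta(mpg ~ am, data = mtcars) in the examples below.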
cohens_d(
  x,
  y = NULL,
  data = NULL,
  pooled_sd = TRUE,
  mu = 0,
  paired = FALSE,
  ci = 0.95,
  verbose = TRUE,
  ...,
  correction
)

hedges_g(
  x,
  y = NULL,
  data = NULL,
  correction = 1,
  pooled_sd = TRUE,
  mu = 0,
  paired = FALSE,
  ci = 0.95,
  verbose = TRUE,
  ...
)

glass_delta(
  x,
  y = NULL,
  data = NULL,
  mu = 0,
  ci = 0.95,
  iterations = 200,
  verbose = TRUE,
  ...,
  correction
)
x          | A formula, a numeric vector, or a character name of one in data.
y          | A numeric vector, a grouping (character / factor) vector, or a character name of one in data.
data       | An optional data frame containing the variables.
pooled_sd  | If TRUE (default), a pooled standard deviation (assuming equal variances) is used when standardizing; if FALSE, an un-pooled standard deviation is used.
mu         | A number indicating the true value of the mean (or difference in means if you are performing a two-sample test).
paired     | If TRUE, the values of x and y are treated as paired.
ci         | Confidence Interval (CI) level.
verbose    | Toggle warnings and messages on or off.
...        | Arguments passed to or from other methods.
correction | Type of small sample bias correction to apply to produce Hedges' g. Can be 1 for Hedges and Olkin's correction (default), or 2 for Hunter and Schmidt's correction.
iterations | The number of bootstrap replicates used for computing confidence intervals for Glass' delta.
A data frame with the effect size (Cohens_d, Hedges_g, or Glass_delta) and their CIs (CI_low and CI_high).
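As a small usage sketch, since the result is a data frame, the estimate and its CI bounds can be pulled out by the column names documented above:

d <- cohens_d(mpg ~ am, data = mtcars)
d$Cohens_d              # the point estimate
c(d$CI_low, d$CI_high)  # the CI bounds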
Confidence Intervals for Glass' delta are estimated using the bootstrap method.
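Because these CIs are bootstrapped, a larger number of iterations gives a more stable (but slower) interval; this illustrative call raises it from the default of 200:

glass_delta(mpg ~ am, data = mtcars, iterations = 1000)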
The indices here give the estimated population standardized difference. Some statistical packages give the sample estimate instead (without applying Bessel's correction).
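As an illustration only (not the package's internals), the two flavours differ in the denominator used for the standard deviation:

v <- mtcars$mpg
sd_population <- sd(v)                        # divides by n - 1 (Bessel's correction), used here
sd_sample     <- sqrt(mean((v - mean(v))^2))  # divides by n, reported by some other software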
Unless stated otherwise, confidence intervals are estimated using the
noncentrality parameter method: this method searches for the noncentrality
parameters (ncps) of the noncentral t-, F-, or chi-squared distribution that
give the desired tail probabilities, and then converts these ncps to the
corresponding effect sizes. (See the full effectsize-CIs documentation for
more.)
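A minimal sketch of this idea for a two-sample Cohen's d (the helper ncp_ci_d() is hypothetical, not effectsize's internal implementation): search for the ncps of the noncentral t-distribution that place the desired tail probabilities at the observed t statistic, then rescale them to the d scale.

ncp_ci_d <- function(t_obs, n1, n2, ci = 0.95) {
  df <- n1 + n2 - 2
  alpha <- 1 - ci
  bound <- abs(t_obs) + 20  # crude search range for uniroot()
  # ncp whose upper tail probability at t_obs is alpha/2 -> lower bound
  ncp_low <- uniroot(
    function(ncp) pt(t_obs, df, ncp = ncp) - (1 - alpha / 2),
    interval = c(-bound, bound)
  )$root
  # ncp whose lower tail probability at t_obs is alpha/2 -> upper bound
  ncp_high <- uniroot(
    function(ncp) pt(t_obs, df, ncp = ncp) - alpha / 2,
    interval = c(-bound, bound)
  )$root
  # convert the ncps back to the effect-size (d) scale
  c(CI_low = ncp_low, CI_high = ncp_high) * sqrt(1 / n1 + 1 / n2)
}

Applied to the two-sample example below (t is about -4.1, with group sizes 19 and 13), this should give an interval close to the one printed by cohens_d(mpg ~ am, data = mtcars).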
Keep in mind that ncp confidence intervals are inverted significance tests,
and only inform us about which values are not significantly different from
our sample estimate. (They do not inform us about which values are
plausible, likely, or compatible with our data.) Thus, when CIs contain the
value 0, this should not be taken to mean that a null effect size is
supported by the data; instead it merely reflects a non-significant test
statistic - i.e., the p-value is greater than alpha (Morey et al., 2016).
For positive-only effect sizes (Eta squared, Cramer's V, etc.; effect sizes
associated with chi-squared and F distributions), this also applies to cases
where the lower bound of the CI is equal to 0. Even more care should be taken
when the upper bound is equal to 0 - this occurs when the p-value is greater
than 1 - alpha/2, making the upper bound unestimable, so it is arbitrarily
set to 0 (Steiger, 2004). For example:
eta_squared(aov(mpg ~ factor(gear) + factor(cyl), mtcars[1:7, ]))
## Parameter    | Eta2 (partial) |       90% CI
## --------------------------------------------
## factor(gear) |           0.58 | [0.00, 0.84]
## factor(cyl)  |           0.46 | [0.00, 0.78]
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd Ed.). New York: Routledge.
Hedges, L. V. & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando, FL: Academic Press.
Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings. Sage.
McGrath, R. E., & Meyer, G. J. (2006). When effect sizes disagree: the case of r and d. Psychological methods, 11(4), 386.
d_to_common_language(), sd_pooled()

Other effect size indices: effectsize(), eta_squared(), phi(), rank_biserial(), standardize_parameters()
# two-sample tests -----------------------
# using formula interface
cohens_d(mpg ~ am, data = mtcars)
#> Cohen's d |         95% CI
#> --------------------------
#>     -1.48 | [-2.27, -0.67]
#>
#> - Estimated using pooled SD.

cohens_d(mpg ~ am, data = mtcars, pooled_sd = FALSE)
#> Cohen's d |         95% CI
#> --------------------------
#>     -1.41 | [-2.17, -0.51]
#>
#> - Estimated using un-pooled SD.

cohens_d(mpg ~ am, data = mtcars, mu = -5)
#> Cohen's d |        95% CI
#> -------------------------
#>     -0.46 | [-1.17, 0.26]
#>
#> - Deviation from a difference of -5.
#> - Estimated using pooled SD.

hedges_g(mpg ~ am, data = mtcars)
#> Hedges' g |         95% CI
#> --------------------------
#>     -1.44 | [-2.21, -0.65]
#>
#> - Estimated using pooled SD.
#> - Bias corrected using Hedges and Olkin's method.

#>
#> Glass' delta |         95% CI
#> -----------------------------
#>        -1.17 | [-2.14, -0.70]

#> Cohen's d |         95% CI
#> --------------------------
#>     -1.48 | [-2.27, -0.67]
#>
#> - Estimated using pooled SD.
#>
#> # Common Language Effect Sizes
#>
#> Cohen's U3 | Overlap | Probability of superiority
#> -------------------------------------------------
#>      6.97% |  45.99% |                      14.80%

# other acceptable ways to specify arguments
cohens_d(sleep$extra, sleep$group)
#> Cohen's d |        95% CI
#> -------------------------
#>     -0.83 | [-1.74, 0.10]
#>
#> - Estimated using pooled SD.

hedges_g("extra", "group", data = sleep)
#> Hedges' g |        95% CI
#> -------------------------
#>     -0.80 | [-1.67, 0.09]
#>
#> - Estimated using pooled SD.
#> - Bias corrected using Hedges and Olkin's method.

cohens_d(sleep$extra[sleep$group == 1],
         sleep$extra[sleep$group == 2],
         paired = TRUE)
#> Cohen's d |         95% CI
#> --------------------------
#>     -1.28 | [-2.23, -0.44]

# one-sample tests -----------------------
cohens_d("wt", data = mtcars, mu = 3)
#> Cohen's d |        95% CI
#> -------------------------
#>      0.22 | [-0.13, 0.58]
#>
#> - Deviation from a difference of 3.

hedges_g("wt", data = mtcars, mu = 3)
#> Hedges' g |        95% CI
#> -------------------------
#>      0.22 | [-0.13, 0.57]
#>
#> - Deviation from a difference of 3.
#> - Bias corrected using Hedges and Olkin's method.

#> [1] "small"
#> (Rules: cohen1988)
#>

#> $`Cohen's U3`
#> [1] 0.6554217
#>
#> $Overlap
#> [1] 0.8414806
#>
#> $`Probability of superiority`
#> [1] 0.6113513
#>

#> [1] "small"
#> (Rules: sawilowsky2009)
#>

#> [1] "small"
#> (Rules: gignac2016)
#>