Compute Confidence/Credible/Compatibility Intervals (CI) for Bayesian and frequentist models.

```r
ci(x, ...)

# S3 method for numeric
ci(x, ci = 0.89, method = "ETI", verbose = TRUE, ...)

# S3 method for data.frame
ci(x, ci = 0.89, method = "ETI", verbose = TRUE, ...)

# S3 method for emmGrid
ci(x, ci = 0.89, method = "ETI", verbose = TRUE, ...)

# S3 method for sim.merMod
ci(x, ci = 0.89, method = "ETI", effects = c("fixed", "random", "all"),
   parameters = NULL, verbose = TRUE, ...)

# S3 method for sim
ci(x, ci = 0.89, method = "ETI", parameters = NULL, verbose = TRUE, ...)

# S3 method for stanreg
ci(x, ci = 0.89, method = "ETI", effects = c("fixed", "random", "all"),
   parameters = NULL, verbose = TRUE, ...)

# S3 method for brmsfit
ci(x, ci = 0.89, method = "ETI", effects = c("fixed", "random", "all"),
   component = c("conditional", "zi", "zero_inflated", "all"),
   parameters = NULL, verbose = TRUE, ...)

# S3 method for BFBayesFactor
ci(x, ci = 0.89, method = "ETI", verbose = TRUE, ...)

# S3 method for MCMCglmm
ci(x, ci = 0.89, method = "ETI", verbose = TRUE, ...)
```
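As a quick sketch of the numeric method (assuming the `bayestestR` package is installed; the vector `draws` below is a stand-in for real posterior draws), the default call returns an 89% equal-tailed interval:

```r
library(bayestestR)

set.seed(123)
draws <- rnorm(1000)  # stand-in for a vector of posterior draws

ci(draws)                  # 89% ETI by default
ci(draws, ci = 0.95)       # change the interval level
ci(draws, method = "HDI")  # highest density instead of equal-tailed
```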

| Argument | Description |
|---|---|
| `x` | A vector representing a posterior distribution, a data frame of posterior draws, or a supported model object (see the methods listed in the usage section). |
| `ci` | Value or vector of probability of the CI (between 0 and 1) to be estimated. Defaults to `0.89` (89%). |
| `method` | Method for computing the interval. `"ETI"` (equal-tailed interval) is the default; `"HDI"` (highest density interval) is also supported. |
| `verbose` | Toggle off warnings. |
| `effects` | Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated. |
| `parameters` | Regular expression pattern that describes the parameters that should be returned. Meta-parameters are filtered by default. |
| `component` | Should results for all parameters, parameters for the conditional model, or the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms models. |
| `...` | Currently not used. |

A data frame with the following columns:

- `Parameter`: The model parameter(s), if `x` is a model object. If `x` is a vector, this column is missing.
- `CI`: The probability of the credible interval.
- `CI_low`, `CI_high`: The lower and upper credible interval limits for the parameters.
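To make the return structure concrete, here is a small sketch (assuming `bayestestR` is installed). It also illustrates what "equal-tailed" means: an 89% ETI leaves 5.5% of the draws in each tail, so for a plain vector its bounds are simply the 5.5% and 94.5% quantiles.

```r
library(bayestestR)

set.seed(42)
posterior <- rnorm(1000)  # stand-in for posterior draws

out <- ci(posterior, ci = 0.89, method = "ETI")
names(out)  # CI, CI_low, CI_high (no Parameter column for a plain vector)

# The ETI bounds should match the corresponding sample quantiles:
quantile(posterior, probs = c(0.055, 0.945))
```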

When it comes to interpretation, we recommend thinking of the CI in terms of an "uncertainty" or "compatibility" interval, the latter being defined as “Given any value in the interval and the background assumptions, the data should not seem very surprising” (Gelman & Greenland 2019).

Gelman A, Greenland S. Are confidence intervals better termed "uncertainty intervals"? BMJ 2019;366:l5381. doi: 10.1136/bmj.l5381

```r
library(bayestestR)

# Numeric vector of posterior draws
posterior <- rnorm(1000)

ci(posterior, method = "ETI")
#> # Equal-Tailed Interval
#>
#> 89% ETI
#> [-1.55, 1.57]

ci(posterior, method = "HDI")
#> # Highest Density Interval
#>
#> 89% HDI
#> [-1.36, 1.69]

# Data frame of posterior draws, with several interval levels at once
df <- data.frame(replicate(4, rnorm(100)))

ci(df, method = "ETI", ci = c(0.80, 0.89, 0.95))
#> # Equal-Tailed Intervals
#>
#> Parameter       80% ETI
#> X1        [-1.38, 1.14]
#> X2        [-1.27, 1.28]
#> X3        [-1.27, 1.24]
#> X4        [-1.09, 1.43]
#>
#> Parameter       89% ETI
#> X1        [-1.66, 1.36]
#> X2        [-1.43, 1.64]
#> X3        [-1.43, 1.58]
#> X4        [-1.56, 1.53]
#>
#> Parameter       95% ETI
#> X1        [-1.79, 1.60]
#> X2        [-1.81, 2.04]
#> X3        [-1.70, 1.76]
#> X4        [-1.75, 2.13]

ci(df, method = "HDI", ci = c(0.80, 0.89, 0.95))
#> # Highest Density Intervals
#>
#> Parameter       80% HDI
#> X1        [-1.41, 1.07]
#> X2        [-1.37, 1.11]
#> X3        [-1.31, 1.15]
#> X4        [-0.74, 1.53]
#>
#> Parameter       89% HDI
#> X1        [-1.41, 1.59]
#> X2        [-1.54, 1.51]
#> X3        [-1.48, 1.49]
#> X4        [-1.46, 1.61]
#>
#> Parameter       95% HDI
#> X1        [-1.74, 1.63]
#> X2        [-2.11, 1.86]
#> X3        [-1.56, 1.92]
#> X4        [-1.98, 1.61]

# rstanarm model (a deliberately short demo run; Stan's sampling progress
# and its R-hat/ESS convergence warnings are omitted here)
library(rstanarm)
model <- stan_glm(mpg ~ wt, data = mtcars, chains = 2, iter = 200)

ci(model, method = "ETI", ci = c(0.80, 0.89))
#> # Equal-Tailed Intervals
#>
#> Parameter          80% ETI
#> (Intercept) [34.79, 39.73]
#> wt          [-6.03, -4.58]
#>
#> Parameter          89% ETI
#> (Intercept) [34.38, 40.26]
#> wt          [-6.14, -4.45]

ci(model, method = "HDI", ci = c(0.80, 0.89))
#> # Highest Density Intervals
#>
#> Parameter          80% HDI
#> (Intercept) [34.35, 39.07]
#> wt          [-6.14, -4.70]
#>
#> Parameter          89% HDI
#> (Intercept) [34.24, 40.02]
#> wt          [-6.15, -4.45]

if (FALSE) {
  library(brms)
  model <- brms::brm(mpg ~ wt + cyl, data = mtcars)
  ci(model, method = "ETI")
  ci(model, method = "HDI")

  library(BayesFactor)
  bf <- ttestBF(x = rnorm(100, 1, 1))
  ci(bf, method = "ETI")
  ci(bf, method = "HDI")

  library(emmeans)
  model <- emtrends(model, ~1, "wt")
  ci(model, method = "ETI")
  ci(model, method = "HDI")
}
```