Compute indices relevant to describe and characterize the posterior distributions.
Usage
describe_posterior(posteriors, ...)

# S3 method for numeric
describe_posterior(
  posteriors,
  centrality = "median",
  dispersion = FALSE,
  ci = 0.95,
  ci_method = "eti",
  test = c("p_direction", "rope"),
  rope_range = "default",
  rope_ci = 0.95,
  keep_iterations = FALSE,
  bf_prior = NULL,
  BF = 1,
  ...
)

# S3 method for stanreg
describe_posterior(
  posteriors,
  centrality = "median",
  dispersion = FALSE,
  ci = 0.95,
  ci_method = "eti",
  test = c("p_direction", "rope"),
  rope_range = "default",
  rope_ci = 0.95,
  keep_iterations = FALSE,
  bf_prior = NULL,
  diagnostic = c("ESS", "Rhat"),
  priors = FALSE,
  effects = c("fixed", "random", "all"),
  component = c("location", "all", "conditional", "smooth_terms", "sigma",
    "distributional", "auxiliary"),
  parameters = NULL,
  BF = 1,
  ...
)

# S3 method for brmsfit
describe_posterior(
  posteriors,
  centrality = "median",
  dispersion = FALSE,
  ci = 0.95,
  ci_method = "eti",
  test = c("p_direction", "rope"),
  rope_range = "default",
  rope_ci = 0.95,
  keep_iterations = FALSE,
  bf_prior = NULL,
  diagnostic = c("ESS", "Rhat"),
  effects = c("fixed", "random", "all"),
  component = c("conditional", "zi", "zero_inflated", "all", "location",
    "distributional", "auxiliary"),
  parameters = NULL,
  BF = 1,
  priors = FALSE,
  ...
)
Arguments
- posteriors
A vector, data frame or model of posterior draws. bayestestR supports a wide range of models (see methods("describe_posterior")), and not all of them are documented in the 'Usage' section, because the arguments of the methods for other classes mostly resemble those of the .numeric method.
- ...
Additional arguments to be passed to or from methods.
- centrality
The point-estimates (centrality indices) to compute. Character (vector) or list with one or more of these options: "median", "mean", "MAP" or "all".
- dispersion
Logical, if TRUE, computes indices of dispersion related to the estimate(s) (SD and MAD for mean and median, respectively).
- ci
Value or vector of probability of the CI (between 0 and 1) to be estimated. Defaults to 0.95 (95%).
- ci_method
The type of index used for the credible interval. Can be "ETI" (default, see eti()), "HDI" (see hdi()), "BCI" (see bci()), "SPI" (see spi()), or "SI" (see si()).
- test
The indices of effect existence to compute. Character (vector) or list with one or more of these options: "p_direction" (or "pd"), "rope", "p_map", "equivalence_test" (or "equitest"), "bayesfactor" (or "bf"), or "all" to compute all tests. For each test, the corresponding bayestestR function is called (e.g. rope() or p_direction()) and its results are included in the summary output. A combined sketch follows this argument list.
- rope_range
ROPE's lower and upper bounds. Should be a vector of two values (e.g., c(-0.1, 0.1)) or "default". If "default", the bounds are set to x ± 0.1 * SD(response).
- rope_ci
The credible interval (CI) probability, corresponding to the proportion of HDI, to use for the percentage in ROPE.
- keep_iterations
If TRUE, will keep all iterations (draws) of bootstrapped or Bayesian models. They will be added as additional columns named iter_1, iter_2, ... . You can reshape them to a long format by running reshape_iterations().
- bf_prior
Distribution representing a prior for the computation of Bayes factors / SI. Used if the input is a posterior, otherwise (in the case of models) ignored.
- BF
The amount of support required to be included in the support interval.
- diagnostic
Diagnostic metrics to compute. Character (vector) or list with one or more of these options: "ESS", "Rhat", "MCSE" or "all".
- priors
Add the prior used for each parameter.
- effects
Should results for fixed effects, random effects or both be returned? Only applies to mixed models. May be abbreviated.
- component
Should results for all parameters, parameters for the conditional model, or parameters for the zero-inflated part of the model be returned? May be abbreviated. Only applies to brms models.
- parameters
Regular expression pattern that describes the parameters that should be returned. Meta-parameters (like lp__ or prior_) are filtered by default, so only parameters that typically appear in the summary() are returned. Use parameters to select specific parameters for the output.
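The interval type, the tests and the ROPE bounds can be combined freely. Below is a minimal sketch for a plain vector of draws (the numeric method); the specific values are arbitrary, and distribution_normal() is just one convenient way to build a prior vector for the "si" interval or the "bf" test.

library(bayestestR)
x <- rnorm(1000)

# HDI interval, a reduced set of tests, and a wider (arbitrary) ROPE
describe_posterior(
  x,
  ci_method = "hdi",
  test = c("p_direction", "rope"),
  rope_range = c(-1, 1)
)

# The "si" interval and the "bf" test require a prior, passed via 'bf_prior'
describe_posterior(
  x,
  ci_method = "si",
  test = "bf",
  bf_prior = distribution_normal(1000, 0, 1)
)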
Details
One or more components of point estimates (like posterior mean or median), intervals and tests can be omitted from the summary output by setting the related argument to NULL. For example, test = NULL and centrality = NULL would only return the HDI (or CI).
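For instance, a minimal sketch with a plain vector of draws:

library(bayestestR)
x <- rnorm(1000)

# Suppress point estimates and tests; only the 95% HDI remains
describe_posterior(x, centrality = NULL, test = NULL, ci_method = "hdi")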
References
Makowski, D., Ben-Shachar, M. S., Chen, S. H. A., and Lüdecke, D. (2019). Indices of Effect Existence and Significance in the Bayesian Framework. Frontiers in Psychology 2019;10:2767. doi:10.3389/fpsyg.2019.02767
Examples
library(bayestestR)
if (require("logspline")) {
x <- rnorm(1000)
describe_posterior(x)
describe_posterior(x, centrality = "all", dispersion = TRUE, test = "all")
describe_posterior(x, ci = c(0.80, 0.90))
df <- data.frame(replicate(4, rnorm(100)))
describe_posterior(df)
describe_posterior(df, centrality = "all", dispersion = TRUE, test = "all")
describe_posterior(df, ci = c(0.80, 0.90))
df <- data.frame(replicate(4, rnorm(20)))
head(reshape_iterations(describe_posterior(df, keep_iterations = TRUE)))
}
#> Warning: Could not estimate a good default ROPE range. Using 'c(-0.1, 0.1)'.
#> Warning: Could not estimate a good default ROPE range. Using 'c(-0.1, 0.1)'.
#> Warning: Prior not specified! Please specify a prior (in the form 'prior =
#> distribution_normal(1000, 0, 1)') to get meaningful results.
#> Warning: Bayes factors might not be precise.
#> For precise Bayes factors, sampling at least 40,000 posterior samples is
#> recommended.
#> Warning: Could not estimate a good default ROPE range. Using 'c(-0.1, 0.1)'.
#> Warning: Prior not specified! Please specify priors (with column order matching
#> 'posterior') to get meaningful results.
#> Warning: Bayes factors might not be precise.
#> For precise Bayes factors, sampling at least 40,000 posterior samples is
#> recommended.
#> Parameter Median CI CI_low CI_high pd ROPE_CI ROPE_low ROPE_high
#> 1 X1 -0.6515244 0.95 -1.630517 0.2687579 0.70 0.95 -0.1 0.1
#> 2 X2 -0.1329377 0.95 -1.176432 1.0924115 0.60 0.95 -0.1 0.1
#> 3 X3 -0.3181801 0.95 -1.612736 1.2170517 0.70 0.95 -0.1 0.1
#> 4 X4 0.1420190 0.95 -1.120196 1.3832350 0.55 0.95 -0.1 0.1
#> 5 X1 -0.6515244 0.95 -1.630517 0.2687579 0.70 0.95 -0.1 0.1
#> 6 X2 -0.1329377 0.95 -1.176432 1.0924115 0.60 0.95 -0.1 0.1
#> ROPE_Percentage iter_index iter_group iter_value
#> 1 0.11111111 1 1 0.1401798
#> 2 0.11111111 2 1 -0.5956548
#> 3 0.05555556 3 1 0.9770296
#> 4 0.16666667 4 1 0.3966213
#> 5 0.11111111 1 2 -0.8382385
#> 6 0.11111111 2 2 -0.8799540
# \dontrun{
# rstanarm models
# -----------------------------------------------
if (require("rstanarm") && require("emmeans")) {
model <- stan_glm(mpg ~ wt + gear, data = mtcars, chains = 2, iter = 200, refresh = 0)
describe_posterior(model)
describe_posterior(model, centrality = "all", dispersion = TRUE, test = "all")
describe_posterior(model, ci = c(0.80, 0.90))
# emmeans estimates
# -----------------------------------------------
describe_posterior(emtrends(model, ~1, "wt"))
}
#> Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
#> Running the chains for more iterations may help. See
#> https://mc-stan.org/misc/warnings.html#bulk-ess
#> Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
#> Running the chains for more iterations may help. See
#> https://mc-stan.org/misc/warnings.html#tail-ess
#> Warning: Bayes factors might not be precise.
#> For precise Bayes factors, sampling at least 40,000 posterior samples is
#> recommended.
#> Summary of Posterior Distribution
#>
#> Parameter | Median | 95% CI | pd | ROPE | % in ROPE
#> ----------------------------------------------------------------------
#> overall | -5.43 | [-7.02, -4.02] | 100% | [-0.10, 0.10] | 0%
# brms models
# -----------------------------------------------
if (require("brms")) {
model <- brms::brm(mpg ~ wt + cyl, data = mtcars)
describe_posterior(model)
describe_posterior(model, ci = c(0.80, 0.90))
}
#> Compiling Stan program...
#> Start sampling
#>
#> SAMPLING FOR MODEL '2d19b3a372313df641edf05db5e9f303' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 1.3e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 0.031012 seconds (Warm-up)
#> Chain 1: 0.026926 seconds (Sampling)
#> Chain 1: 0.057938 seconds (Total)
#> Chain 1:
#>
#> SAMPLING FOR MODEL '2d19b3a372313df641edf05db5e9f303' NOW (CHAIN 2).
#> Chain 2:
#> Chain 2: Gradient evaluation took 8e-06 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2:
#> Chain 2:
#> Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 2:
#> Chain 2: Elapsed Time: 0.031231 seconds (Warm-up)
#> Chain 2: 0.034426 seconds (Sampling)
#> Chain 2: 0.065657 seconds (Total)
#> Chain 2:
#>
#> SAMPLING FOR MODEL '2d19b3a372313df641edf05db5e9f303' NOW (CHAIN 3).
#> Chain 3:
#> Chain 3: Gradient evaluation took 9e-06 seconds
#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
#> Chain 3: Adjust your expectations accordingly!
#> Chain 3:
#> Chain 3:
#> Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 3:
#> Chain 3: Elapsed Time: 0.035164 seconds (Warm-up)
#> Chain 3: 0.026728 seconds (Sampling)
#> Chain 3: 0.061892 seconds (Total)
#> Chain 3:
#>
#> SAMPLING FOR MODEL '2d19b3a372313df641edf05db5e9f303' NOW (CHAIN 4).
#> Chain 4:
#> Chain 4: Gradient evaluation took 9e-06 seconds
#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
#> Chain 4: Adjust your expectations accordingly!
#> Chain 4:
#> Chain 4:
#> Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 4:
#> Chain 4: Elapsed Time: 0.032822 seconds (Warm-up)
#> Chain 4: 0.029802 seconds (Sampling)
#> Chain 4: 0.062624 seconds (Total)
#> Chain 4:
#> Summary of Posterior Distribution
#>
#> Parameter | Median | 80% CI | 90% CI | pd | ROPE | % in ROPE | Rhat | ESS
#> -------------------------------------------------------------------------------------------------------------
#> (Intercept) | 39.64 | [37.40, 41.95] | [36.73, 42.55] | 100% | [-0.60, 0.60] | 0% | 1.000 | 5744.00
#> wt | -3.17 | [-4.20, -2.13] | [-4.49, -1.80] | 100% | [-0.60, 0.60] | 0% | 1.001 | 2180.00
#> cyl | -1.52 | [-2.08, -0.96] | [-2.23, -0.79] | 99.92% | [-0.60, 0.60] | 0% | 1.000 | 2202.00
# BayesFactor objects
# -----------------------------------------------
if (require("BayesFactor")) {
bf <- ttestBF(x = rnorm(100, 1, 1))
describe_posterior(bf)
describe_posterior(bf, centrality = "all", dispersion = TRUE, test = "all")
describe_posterior(bf, ci = c(0.80, 0.90))
}
#> Summary of Posterior Distribution
#>
#> Parameter | Median | 80% CI | 90% CI | pd | ROPE | % in ROPE | BF | Prior
#> --------------------------------------------------------------------------------------------------------------------
#> Difference | 0.85 | [0.71, 0.99] | [0.68, 1.03] | 100% | [-0.10, 0.10] | 0% | 1.36e+10 | Cauchy (0 +- 0.71)
# }