Become a Bayesian master you will
Existing R packages allow users to easily fit a large variety of models and extract and visualize the posterior draws. However, most of these packages only return a limited set of indices (e.g., point-estimates and CIs). bayestestR provides a comprehensive and consistent set of functions to analyze and describe posterior distributions generated by a variety of model objects, including those from popular modeling packages such as rstanarm, brms or BayesFactor.
You can reference the package and its documentation as follows:
Makowski, D., Ben-Shachar, M. S., & Lüdecke, D. (2019). bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework. Journal of Open Source Software, 4(40), 1541. https://doi.org/10.21105/joss.01541
Click on the buttons above to access the package documentation and the easystats blog, and check out these vignettes:
In the Bayesian framework, parameters are estimated in a probabilistic fashion as distributions. These distributions can be summarised and described by reporting 4 types of indices:
- Centrality: mean(), median() or map_estimate() for an estimation of the mode. point_estimate() can be used to get them all at once and can be run directly on models.
- Uncertainty: hdi() for Highest Density Intervals (HDI) or eti() for Equal-Tailed Intervals (ETI), used as Credible Intervals (CIs); both are described below.
- Effect existence: p_direction() for a Bayesian equivalent of the frequentist p-value (see Makowski et al., 2019), p_pointnull(), which represents the odds of the null hypothesis (h0 = 0) compared to the most likely hypothesis (the MAP), or bf_pointnull() for a classic Bayes Factor (BF) assessing the likelihood of effect presence against its absence (h0 = 0).
- Effect significance: p_rope(), the probability of the effect falling inside a Region of Practical Equivalence (ROPE), bf_rope(), which computes a Bayes factor against the null as defined by a region (the ROPE), and p_significance(), which combines a region of equivalence with the probability of direction.
describe_posterior() is the master function with which you can compute all of the indices cited below at once:
describe_posterior(
rnorm(10000),
centrality = "median",
test = c("p_direction", "p_significance")
)
## # Description of Posterior Distributions
##
## Parameter | Median | 89% CI | pd | ps
## --------------------------------------------------------
## Posterior | 4.510e-05 | [-1.654, 1.529] | 50.00% | 0.463
describe_posterior()
works for many objects, including more complex brmsfit models. For better readability, the output is separated by model components:
library(brms)

zinb <- read.csv("http://stats.idre.ucla.edu/stat/data/fish.csv")
set.seed(123)
model <- brm(
bf(
count ~ child + camper + (1 | persons),
zi ~ child + camper + (1 | persons)
),
data = zinb,
family = zero_inflated_poisson(),
chains = 1,
iter = 500
)
describe_posterior(
model,
effects = "all",
component = "all",
test = c("p_direction", "p_significance"),
centrality = "all"
)
## # Description of Posterior Distributions
##
## # Fixed Effects (Conditional Model)
##
## Parameter | Median | Mean | MAP | 89% CI | pd | ps | Rhat | ESS
## -------------------------------------------------------------------------------------------
## Intercept | 0.962 | 0.963 | 0.964 | [-0.341, 2.293] | 90.00% | 0.884 | 1.011 | 109.680
## child | -1.159 | -1.156 | -1.155 | [-1.320, -0.995] | 100.00% | 1.000 | 0.996 | 278.161
## camper | 0.725 | 0.722 | 0.729 | [ 0.568, 0.870] | 100.00% | 1.000 | 0.996 | 271.166
##
## # Fixed Effects (Zero-Inflated Model)
##
## Parameter | Median | Mean | MAP | 89% CI | pd | ps | Rhat | ESS
## -------------------------------------------------------------------------------------------
## Intercept | -0.480 | -0.507 | -0.220 | [-1.694, 0.427] | 78.00% | 0.732 | 0.997 | 137.695
## child | 1.850 | 1.863 | 1.813 | [ 1.371, 2.468] | 100.00% | 1.000 | 0.996 | 302.735
## camper | -0.883 | -0.860 | -0.988 | [-1.474, -0.225] | 98.40% | 0.964 | 0.996 | 292.499
##
## # Random Effects (Conditional Model)
##
## Parameter | Median | Mean | MAP | 89% CI | pd | ps | Rhat | ESS
## --------------------------------------------------------------------------------------------------------
## persons 1 | -0.990 | -1.013 | -0.841 | [-2.250, 0.384] | 92.00% | 0.904 | 1.007 | 105.679
## persons 2 | -0.005 | -0.035 | 0.026 | [-1.642, 0.950] | 50.00% | 0.452 | 1.013 | 108.942
## persons 3 | 0.693 | 0.662 | 0.686 | [-0.581, 2.078] | 79.60% | 0.776 | 1.010 | 113.791
## persons 4 | 1.574 | 1.560 | 1.561 | [ 0.094, 2.720] | 96.80% | 0.960 | 1.009 | 113.676
## SD persons (Intercept) | 1.422 | 1.584 | 1.067 | [ 0.663, 2.488] | 100.00% | 1.000 | 1.010 | 126.414
##
## # Random Effects (Zero-Inflated Model)
##
## Parameter | Median | Mean | MAP | 89% CI | pd | ps | Rhat | ESS
## --------------------------------------------------------------------------------------------------------
## persons 1 | 1.096 | 1.109 | 1.076 | [-0.064, 2.067] | 94.80% | 0.932 | 0.997 | 166.262
## persons 2 | 0.183 | 0.176 | 0.219 | [-0.826, 1.118] | 63.20% | 0.544 | 0.996 | 153.965
## persons 3 | -0.301 | -0.309 | -0.539 | [-1.442, 0.572] | 64.00% | 0.588 | 0.997 | 153.793
## persons 4 | -1.452 | -1.465 | -1.440 | [-2.438, -0.090] | 98.00% | 0.972 | 1.000 | 189.301
## SD persons (Intercept) | 1.303 | 1.494 | 0.989 | [ 0.614, 2.668] | 100.00% | 1.000 | 0.996 | 129.124
bayestestR also includes many other features useful for your Bayesian analyses. Here are some more examples:
library(bayestestR)
posterior <- distribution_gamma(10000, 1.5) # Generate a skewed distribution
centrality <- point_estimate(posterior) # Get indices of centrality
centrality
## # Point Estimates
##
## Median | Mean | MAP
## --------------------
## 1.18 | 1.50 | 0.51
As with other easystats packages, plot()
methods are available from the see package for many functions:
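For instance, a minimal sketch visualizing the point estimates computed above (assuming the see package is installed):

library(see)

plot(centrality)  # draws the posterior with the median, mean and MAP marked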
While the median and the mean are available through base R functions, map_estimate()
in bayestestR can be used to directly find the Highest Maximum A Posteriori (MAP) estimate of a posterior, i.e., the value associated with the highest probability density (the “peak” of the posterior distribution). In other words, it is an estimation of the mode for continuous parameters.
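As a quick sketch, running map_estimate() on the same skewed gamma distribution simulated above should return a value close to the MAP column of the point_estimate() output:

posterior <- distribution_gamma(10000, 1.5)  # same skewed distribution as above
map_estimate(posterior)  # should be close to the 0.51 in the MAP column above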
hdi()
computes the Highest Density Interval (HDI) of a posterior distribution, i.e., the interval within which all points have a higher probability density than points outside of it. The HDI can be used in the context of Bayesian posterior characterization as a Credible Interval (CI).
Unlike equal-tailed intervals (see eti()
) that typically exclude 2.5% from each tail of the distribution, the HDI is not equal-tailed and therefore always includes the mode(s) of posterior distributions.
By default, hdi()
returns the 89% intervals (ci = 0.89
), deemed to be more stable than, for instance, 95% intervals. An effective sample size of at least 10,000 is recommended when computing 95% intervals (Kruschke, 2015). Moreover, 89 indicates the arbitrariness of interval limits: its only remarkable property is being the highest prime number that does not exceed the already unstable 95% threshold (McElreath, 2018).
posterior <- distribution_chisquared(10000, 4)
hdi(posterior, ci = .89)
## # Highest Density Interval
##
## 89% HDI
## ------------
## [0.18, 7.63]
eti(posterior, ci = .89)
## # Equal-Tailed Interval
##
## 89% ETI
## ------------
## [0.75, 9.25]
p_direction()
computes the Probability of Direction (pd, also known as the Maximum Probability of Effect - MPE). It varies between 50% and 100% (i.e., 0.5
and 1
) and can be interpreted as the probability (expressed in percentage) that a parameter (described by its posterior distribution) is strictly positive or negative (whichever is the most probable). It is mathematically defined as the proportion of the posterior distribution that is of the median’s sign. Although differently expressed, this index is fairly similar (i.e., is strongly correlated) to the frequentist p-value.
Relationship with the p-value: In most cases, it seems that the pd corresponds to the frequentist one-sided p-value through the formula p-value = 1 - pd/100, and to the two-sided p-value (the most commonly reported) through the formula p-value = 2 * (1 - pd/100). Thus, a pd of 95%, 97.5%, 99.5% and 99.95% corresponds approximately to a two-sided p-value of, respectively, .1, .05, .01 and .001. See the reporting guidelines.
posterior <- distribution_normal(10000, 0.4, 0.2)
p_direction(posterior)
## pd = 97.73%
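Plugging the pd obtained above into the two-sided formula recovers the familiar p-value scale:

pd <- 97.73          # the pd (in %) computed above
2 * (1 - pd / 100)   # approximate two-sided p-value: 0.0454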
rope()
computes the proportion (in percentage) of the HDI (by default, the 89% HDI) of a posterior distribution that lies within a region of practical equivalence.
Statistically, the probability of a posterior distribution being different from 0 does not make much sense (the probability of it being different from a single point being infinite). Therefore, the idea underlying the ROPE is to let the user define an area around the null value enclosing values that are equivalent to the null value for practical purposes (Kruschke & Liddell, 2018; Kruschke, 2018).
Kruschke suggests that such a null value could be set, by default, to the -0.1 to 0.1 range of a standardized parameter (a negligible effect size according to Cohen, 1988). This could be generalized: for instance, for linear models, the ROPE could be set as 0 +/- .1 * sd(y). This ROPE range can be automatically computed for models using the rope_range() function.
Kruschke suggests using the proportion of the 95% (or 90%, considered more stable) HDI that falls within the ROPE as an index for “null-hypothesis” testing (as understood under the Bayesian framework; see equivalence_test(), illustrated after the example below).
posterior <- distribution_normal(10000, 0.4, 0.2)
rope(posterior, range = c(-0.1, 0.1))
## # Proportion of samples inside the ROPE [-0.10, 0.10]:
##
## inside ROPE
## -----------
## 1.33 %
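The related equivalence_test() builds on this proportion and implements the HDI+ROPE decision rule (Kruschke, 2018): the null is rejected if the HDI falls completely outside the ROPE, accepted if it falls completely inside, and undecided otherwise. A minimal sketch with the same posterior and range:

posterior <- distribution_normal(10000, 0.4, 0.2)
equivalence_test(posterior, range = c(-0.1, 0.1))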
bayesfactor_parameters()
computes Bayes factors against the null (either a point or an interval), based on prior and posterior samples of a single parameter. This Bayes factor indicates the degree by which the mass of the posterior distribution has shifted further away from or closer to the null value(s) (relative to the prior distribution), thus indicating whether the null value has become less or more likely given the observed data.
When the null is an interval, the Bayes factor is computed by comparing the prior and posterior odds of the parameter falling within or outside the null; when the null is a point, a Savage-Dickey density ratio is computed, which is also an approximation of a Bayes factor comparing the marginal likelihoods of the model against a model in which the tested parameter has been restricted to the point null (Wagenmakers, Lodewyckx, Kuriyal, & Grasman, 2010).
prior <- distribution_normal(10000, mean = 0, sd = 1)
posterior <- distribution_normal(10000, mean = 1, sd = 0.7)
bayesfactor_parameters(posterior, prior, direction = "two-sided", null = 0)
## # Bayes Factor (Savage-Dickey density ratio)
##
## BF
## -----
## 1.946
##
## * Evidence Against The Null: [0]
The lollipops represent the density of a point-null on the prior distribution (the blue lollipop on the dotted distribution) and on the posterior distribution (the red lollipop on the yellow distribution). The ratio between the two, the Savage-Dickey ratio, indicates the degree by which the mass of the parameter distribution has shifted away from or closer to the null.
For more info, see the Bayes factors vignette.
rope_range()
: This function attempts to automatically find suitable “default” values for the Region of Practical Equivalence (ROPE). Kruschke (2018) suggests that such a null value could be set, by default, to a range from -0.1 to 0.1 of a standardized parameter (a negligible effect size according to Cohen, 1988), which can be generalised for linear models to -0.1 * sd(y), 0.1 * sd(y). For logistic models, the -0.1 to 0.1 standardized range can be converted to the log odds scale through the formula pi/sqrt(3), resulting in a range of roughly -0.18 to 0.18.
rope_range(model)
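As a quick check of the logistic default described above, rescaling the -0.1 to 0.1 standardized range onto the log odds scale by hand gives:

c(-0.1, 0.1) * pi / sqrt(3)  # approximately -0.18 and 0.18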
estimate_density()
: This function is a wrapper around different methods of density estimation. By default, it uses base R's density() function, but with a different smoothing bandwidth ("SJ") than that function's legacy default ("nrd0"). However, Deng & Wickham suggest that method = "KernSmooth" is the fastest and the most accurate.
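For instance, a minimal sketch (estimate_density() returns a data frame of x values with their estimated density y; head() just truncates it):

posterior <- distribution_normal(1000)
head(estimate_density(posterior, method = "KernSmooth"))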
distribution()
: Generate a sample of size n with near-perfect distributions.
distribution(n = 10)
## [1] -1.28 -0.88 -0.59 -0.34 -0.11 0.11 0.34 0.59 0.88 1.28
density_at()
: Compute the density at a given point of a distribution.
density_at(rnorm(1000, 1, 1), 1)
## [1] 0.39
Kruschke, J. K. (2015). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan (2nd ed.). Amsterdam: Elsevier, Academic Press.
Kruschke, J. K. (2018). Rejecting or accepting parameter values in Bayesian estimation. Advances in Methods and Practices in Psychological Science, 1(2), 270–280. https://doi.org/10.1177/2515245918771304
Kruschke, J. K., & Liddell, T. M. (2018). The Bayesian new statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective. Psychonomic Bulletin & Review, 25(1), 178–206. https://doi.org/10.3758/s13423-016-1221-4
McElreath, R. (2018). Statistical rethinking. https://doi.org/10.1201/9781315372495
Wagenmakers, E.-J., Lodewyckx, T., Kuriyal, H., & Grasman, R. (2010). Bayesian hypothesis testing for psychologists: A tutorial on the Savage-Dickey method. Cognitive Psychology, 60(3), 158–189. https://doi.org/10.1016/j.cogpsych.2009.12.001