Reasons to prefer this approach are **reliability**, **accuracy** (in noisy data and small samples), the possibility of introducing **prior knowledge** into the analysis and, critically, the **intuitiveness of the results** and their **straightforward interpretation** (Andrews and Baguley 2013; Etz and Vandekerckhove 2016; Kruschke 2010; Kruschke, Aguinis, and Joo 2012; Wagenmakers et al. 2018).

In general, the frequentist approach has been associated with a focus on null hypothesis testing, and the misuse of *p* values has been shown to critically contribute to the reproducibility crisis of psychological science (Chambers et al. 2014; Szucs and Ioannidis 2016). There is a general agreement that a wider adoption of the Bayesian approach is one way of overcoming these issues (Benjamin et al. 2018; Etz and Vandekerckhove 2016).

Now that we agree that the Bayesian framework is the right way to go, you might wonder *what* the Bayesian framework actually is.

**What’s all the fuss about?**

Adopting the Bayesian framework is more of a shift in the paradigm than a change in the methodology. Indeed, all the common statistical procedures (t-tests, correlations, ANOVAs, regressions, …) can be achieved using the Bayesian framework. One of the core differences is that in the **frequentist view** (the “classic” statistics, with *p* and *t* values, as well as some weird *degrees of freedom*), **the effects are fixed** (but unknown) and **data are random**. On the contrary, the Bayesian inference process computes the **probability** of different effects *given the observed data*. Instead of a single estimated value of the “true effect”, this probabilistic approach gives a distribution of values, called the **“posterior” distribution**.

The uncertainty of a Bayesian estimate can be summarized, for instance, by the **median** of the distribution, together with a range of values of the posterior distribution that includes the 95% most probable values (the 95% **Credible Interval**). To illustrate the difference in interpretation, the Bayesian framework allows one to say “given the observed data, the effect has a 95% probability of falling within this range”, while the less intuitive frequentist alternative would be “when repeatedly computing confidence intervals from data of this sort, there is a 95% probability that the effect falls within a given range”.

In other words, omitting the maths behind it, we can say that:

- The frequentist bloke tries to estimate “the **real effect**”. For instance, the “real” value of the correlation between *x* and *y*. Hence, frequentist models return a “**point-estimate**” (*i.e.*, a single value) of the “real” correlation (*e.g.*, r = 0.42) estimated under a number of obscure assumptions (at a minimum, considering that the data is sampled at random from a “parent”, usually normal, distribution).
- **The Bayesian master assumes no such thing**. The data are what they are. Based on this observed data (and a **prior** belief about the result), the Bayesian sampling algorithm (sometimes referred to as **MCMC** sampling) returns a probability distribution (called **the posterior**) of the effect that is compatible with the observed data. For the correlation between *x* and *y*, it will return a distribution that says, for example, “the most probable effect is 0.42, but this data is also compatible with correlations of 0.12 and 0.74”.
- To characterize our effects, **no need of *p* values** or other cryptic indices. We simply describe the posterior distribution of the effect. For example, we can report the median, the 90% *Credible* Interval or other indices.
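To make that last point concrete, here is a minimal sketch in base R, using simulated draws in place of a real MCMC posterior (the numbers are illustrative, not taken from any actual model):

```r
set.seed(123)

# Hypothetical draws standing in for the MCMC samples of a correlation's posterior
posterior <- rnorm(4000, mean = 0.42, sd = 0.15)

# Describing the posterior: its median and an equal-tailed 90% Credible Interval
median(posterior)
quantile(posterior, probs = c(0.05, 0.95))
```

With real models, packages such as `bayestestR` compute these (and richer) indices for you.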

*Note: Although the very purpose of this package is to advocate for the use of Bayesian statistics, please note that there are serious arguments supporting frequentist indices (see for instance this thread). As always, the world is not black and white (p < .001).*

**So… how does it work?**

You can install `bayestestR` along with the whole **easystats** suite by running the following:

```
install.packages("devtools")
devtools::install_github("easystats/easystats")
```

Let’s also install and load the `rstanarm` package, which allows fitting Bayesian models, as well as `bayestestR`, to describe them.

```
install.packages("rstanarm")
install.packages("bayestestR")
library(rstanarm)
library(bayestestR)
```

Let’s start by fitting a simple frequentist linear regression (the `lm()` function stands for *linear model*) between two numeric variables, `Sepal.Length` and `Petal.Length`, from the famous `iris` dataset, included by default in R.

```
model <- lm(Sepal.Length ~ Petal.Length, data = iris)
summary(model)
```

```
Call:
lm(formula = Sepal.Length ~ Petal.Length, data = iris)

Residuals:
    Min      1Q  Median      3Q     Max
-1.2468 -0.2966 -0.0152  0.2768  1.0027

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)    4.3066     0.0784    54.9   <2e-16 ***
Petal.Length   0.4089     0.0189    21.6   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.41 on 148 degrees of freedom
Multiple R-squared:  0.76, Adjusted R-squared:  0.758
F-statistic: 469 on 1 and 148 DF,  p-value: <2e-16
```

This analysis suggests that there is a **significant** (*whatever that means*) and **positive** (with a coefficient of `0.41`) linear relationship between the two variables.

*Fitting and interpreting frequentist models is so easy that it is obvious that people use it instead of the Bayesian framework… right?*

**Not anymore.**
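Assuming `rstanarm` and `bayestestR` are installed and loaded (as above), a sketch of the two calls that could produce a summary like the table below is (priors and the 89% CI shown are the packages’ defaults):

```r
# Fit the Bayesian version of the model: stan_glm() instead of lm()
# (runs MCMC sampling under rstanarm's default priors)
model <- stan_glm(Sepal.Length ~ Petal.Length, data = iris)

# Describe the posterior distribution of each parameter
describe_posterior(model)
```

Note that, because of the random sampling, your numbers may differ slightly from the table below.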

Parameter | Median | CI | CI_low | CI_high | pd | ROPE_CI | ROPE_low | ROPE_high | ROPE_Percentage | ESS | Rhat | Prior_Distribution | Prior_Location | Prior_Scale |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
(Intercept) | 4.31 | 89 | 4.18 | 4.43 | 100 | 89 | -0.08 | 0.08 | 0 | 4056 | 1 | normal | 0 | 8.3 |
Petal.Length | 0.41 | 89 | 0.38 | 0.44 | 100 | 89 | -0.08 | 0.08 | 0 | 4311 | 1 | normal | 0 | 1.2 |

**That’s it!** You fitted a Bayesian version of the model by simply using `stan_glm()` instead of `lm()` and described the posterior distributions of the parameters. The conclusions we can draw, for this example, are very similar. The effect (*the median of the effect’s posterior distribution*) is about `0.41`, and it can also be considered *significant* in the Bayesian sense (more on that later).
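As a teaser of what “significant in the Bayesian sense” can mean: the `pd` column in the table above is the *probability of direction*, the proportion of the posterior sharing the sign of its median. A base R sketch on simulated draws (illustrative numbers, not the actual model’s samples):

```r
set.seed(42)

# Hypothetical posterior draws for the Petal.Length coefficient
posterior <- rnorm(4000, mean = 0.41, sd = 0.015)

# Probability of direction: share of draws with the same sign as the median
pd <- max(mean(posterior > 0), mean(posterior < 0))
pd  # here essentially 1, i.e. 100%: the effect is almost certainly positive
```

A `pd` of 100, as in the table, means every posterior draw of the coefficient was positive.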

**So, ready to learn more?** Check out the **next tutorial**!

Andrews, Mark, and Thom Baguley. 2013. “Prior Approval: The Growth of Bayesian Methods in Psychology.” *British Journal of Mathematical and Statistical Psychology* 66 (1): 1–7.

Benjamin, Daniel J, James O Berger, Magnus Johannesson, Brian A Nosek, E-J Wagenmakers, Richard Berk, Kenneth A Bollen, et al. 2018. “Redefine Statistical Significance.” *Nature Human Behaviour* 2 (1): 6.

Chambers, Christopher D, Eva Feredoes, Suresh Daniel Muthukumaraswamy, and Peter Etchells. 2014. “Instead of ’Playing the Game’ It Is Time to Change the Rules: Registered Reports at Aims Neuroscience and Beyond.” *AIMS Neuroscience* 1 (1): 4–17.

Etz, Alexander, and Joachim Vandekerckhove. 2016. “A Bayesian Perspective on the Reproducibility Project: Psychology.” *PloS One* 11 (2): e0149794.

Kruschke, John K. 2010. “What to Believe: Bayesian Methods for Data Analysis.” *Trends in Cognitive Sciences* 14 (7): 293–300.

Kruschke, John K, Herman Aguinis, and Harry Joo. 2012. “The Time Has Come: Bayesian Methods for Data Analysis in the Organizational Sciences.” *Organizational Research Methods* 15 (4): 722–52.

Szucs, Denes, and John PA Ioannidis. 2016. “Empirical Assessment of Published Effect Sizes and Power in the Recent Cognitive Neuroscience and Psychology Literature.” *BioRxiv*, 071530.

Wagenmakers, Eric-Jan, Maarten Marsman, Tahira Jamil, Alexander Ly, Josine Verhagen, Jonathon Love, Ravi Selker, et al. 2018. “Bayesian Inference for Psychology. Part I: Theoretical Advantages and Practical Ramifications.” *Psychonomic Bulletin & Review* 25 (1): 35–57.