The model_parameters() function (also accessible via the shortcut parameters()) can also be used to compute standardized model parameters via the standardize argument. Recall that standardizing a variable (z-scoring), i.e. centering and scaling, means expressing it in standard-deviation units (mean = 0, SD = 1): the mean is subtracted from each value and the result is divided by the standard deviation. Standardization can help avoid multicollinearity issues when more complex terms (for instance, polynomials) are included in the model.
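As a quick illustration of z-scoring, base R's scale() does exactly this subtraction and division (a minimal sketch using a column of the iris data, not part of the models below):

```r
# z-scoring with base R: scale() subtracts the mean and divides by the SD,
# so the result has mean 0 and SD 1
x <- iris$Sepal.Length
z <- as.numeric(scale(x))

all.equal(z, (x - mean(x)) / sd(x))  # TRUE
round(mean(z), 10)                   # 0
sd(z)                                # 1
```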

There are different methods of standardizing model parameters (see also ?effectsize::standardize_parameters):

  • "refit"
  • "posthoc"
  • "smart"
  • "basic"

If you are interested in more statistical and technical details, and in how the standardization methods relate to different (standardized) effect size measures, read the corresponding vignette from the effectsize package, from which this functionality comes.

Standardization by re-fitting the model

standardize = "refit" is based on a complete model refit with a standardized version of the data. Hence, this method is equivalent to standardizing the variables before fitting the model. It is the most accurate method (Neter et al., 1989), but also the most computationally expensive, especially for heavy models such as Bayesian models. This method is particularly recommended for complex models that include interactions or transformations (e.g., polynomial or spline terms).

When standardize = "refit", model_parameters() internally calls effectsize::standardize() to standardize the data that was used to fit the model, and then updates the model with the standardized data. Note that effectsize::standardize() tries to detect which variables should be standardized and which should not. For instance, having log(x) in the model formula excludes x from standardization, because standardizing x would produce negative values, for which log(x) is not defined. Factors and dates are not standardized either. Response variables are standardized, if appropriate.

library(parameters)

# create a random grouping factor (seed set for reproducibility)
set.seed(123)
iris$grp <- as.factor(sample(1:3, nrow(iris), replace = TRUE))

# fit example model
model <- lme4::lmer(
  Sepal.Length ~ Species * Sepal.Width + Petal.Length + (1 | grp),
  data = iris
)

# classic model parameters
model_parameters(model)

# standardized model parameters
model_parameters(model, standardize = "refit")

The second output is identical to the following:

# standardize continuous variables manually
model2 <- lme4::lmer(
  scale(Sepal.Length) ~ Species * scale(Sepal.Width) + scale(Petal.Length) + (1 | grp),
  data = iris
)
model_parameters(model2)

Post-hoc standardization

standardize = "posthoc" aims at emulating the results obtained by "refit" without refitting the model. The coefficients are divided by the standard deviation of the outcome (which becomes their expression unit). Then, the coefficients related to numeric variables are additionally multiplied by the standard deviation of the related terms, so that they correspond to changes of 1 SD of the predictor (e.g., “a change in 1 SD of x is related to a change of 0.24 of the SD of y”). This does not apply to binary variables or factors, so the coefficients are still related to changes in levels.
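The recipe above can be sketched manually for a simple linear model (a plain lm on the built-in mtcars data, used here only for illustration). For OLS models without factors or interactions, dividing a slope by SD(y) and multiplying by SD(x) reproduces exactly what refitting on scaled data would give:

```r
# post-hoc recipe: b_std = b * SD(x) / SD(y)
m <- lm(mpg ~ wt + hp, data = mtcars)

b_wt <- coef(m)["wt"]
b_wt_std <- b_wt * sd(mtcars$wt) / sd(mtcars$mpg)

# for this simple case the result matches a refit on scaled data
m_std <- lm(scale(mpg) ~ scale(wt) + scale(hp), data = mtcars)
all.equal(unname(b_wt_std), unname(coef(m_std)["scale(wt)"]))  # TRUE
```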

This method is less accurate than refitting and tends to give aberrant results when interactions are specified. However, this method of standardization is the "classic" result returned by many statistical packages when standardized coefficients are requested.

When standardize = "posthoc", model_parameters() internally calls effectsize::standardize_parameters(method = "posthoc"). Test statistics and p-values are not affected, i.e. they are the same as if no standardization were applied.

model_parameters(model, standardize = "posthoc")

standardize = "basic" also applies post-hoc standardization; however, factors are treated as numeric, which means that the coefficients of factor levels (converted to integers) and of binary predictors are additionally scaled by the standard deviation of the corresponding column of the model matrix.

model_parameters(model, standardize = "basic")
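To see what scaling by a model-matrix column amounts to, here is a minimal base-R sketch on a simpler model (a plain lm on iris, not the mixed model above); this illustrates the idea of treating a 0/1 dummy as numeric, not the exact internals of effectsize:

```r
# "basic"-style scaling of a factor-level dummy: the 0/1 column of the
# model matrix is treated as numeric, so the coefficient is also multiplied
# by the SD of that dummy column (in addition to dividing by SD(y))
m <- lm(Sepal.Length ~ Species, data = iris)
mm <- model.matrix(m)

b <- coef(m)["Speciesversicolor"]
b_std <- unname(b * sd(mm[, "Speciesversicolor"]) / sd(iris$Sepal.Length))
```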

Compare the two outputs above and notice how coefficient estimates, standard errors, confidence intervals, and p-values change for the main-effect and interaction terms involving the Species variable, the only factor in our model.

This method is the one implemented by default in other software packages, such as lm.beta::lm.beta():

model3 <- lm(Sepal.Length ~ Species * Sepal.Width + Petal.Length, data = iris)
mp <- model_parameters(model3, standardize = "basic")
out <- lm.beta::lm.beta(model3)

data.frame(model_parameters = mp$Std_Coefficient, lm.beta = coef(out))

Smart standardization

standardize = "smart" is similar to standardize = "posthoc" in that it does not involve refitting the model. The difference is that the SD of the response is computed on the relevant subset of the data. For instance, if a factor with three levels A (the intercept), B and C is entered as a predictor, the effect corresponding to B versus A will be scaled by the SD of the response at the intercept level only. As a result, the coefficients for effects of factors are similar to Glass' delta.

model_parameters(model, standardize = "smart")
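The Glass'-delta-like scaling can be sketched manually for a simple model (again a plain lm on iris for illustration; this shows the idea, not the exact internal implementation):

```r
# "smart"-style scaling of a factor effect: divide the coefficient by the SD
# of the response within the reference (intercept) level only
m <- lm(Sepal.Length ~ Species, data = iris)

b <- coef(m)["Speciesversicolor"]
sd_ref <- sd(iris$Sepal.Length[iris$Species == "setosa"])
b_std <- unname(b / sd_ref)
```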