is_converged() provides an alternative convergence
test for merMod objects.
Usage
is_converged(x, tolerance = 0.001, ...)
# S3 method for class 'merMod'
is_converged(x, tolerance = 0.001, verbose = TRUE, ...)
Value
TRUE if convergence is fine and FALSE if convergence is
suspicious. Additionally, the convergence value is returned as an attribute.
For merMod models, if the model is singular, convergence is determined by
the optimizer's convergence code. For non-singular models where derivatives
are unavailable, FALSE is returned and a message is printed to indicate
that convergence cannot be assessed through the usual gradient-based checks.
Convergence and log-likelihood
Convergence problems typically arise when the model hasn't converged to a solution where the log-likelihood has a true maximum. This may result in unreliable and overly complex (or non-estimable) estimates and standard errors.
Inspect model convergence
lme4 performs a convergence check (see ?lme4::convergence); however, as
discussed here and suggested by
one of the lme4 authors in this comment,
this check can be too strict. is_converged() (and its wrapper function,
performance::check_convergence()) thus provides an alternative convergence
test for merMod objects.
Resolving convergence issues
Convergence issues are not easy to diagnose. The help page on
?lme4::convergence provides most of the current advice about how to resolve
convergence issues. In general, convergence issues may be addressed by one or
more of the following strategies: 1. rescale continuous predictors; 2. try a
different optimizer; 3. increase the number of iterations; or, if everything
else fails, 4. simplify the model. Another clue might be large parameter
values: estimates (on the scale of the linear predictor) larger than 10
in a (non-identity link) generalized linear model might indicate complete
separation, which can be addressed by regularization, e.g. penalized
regression or Bayesian regression with appropriate priors on the fixed
effects.
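The strategies above can be sketched as follows. This is an illustrative
example, not part of the package's documentation: it uses the sleepstudy data
shipped with lme4, and the optimizer choice ("bobyqa") and iteration limit
are arbitrary assumptions chosen for demonstration.

```r
library(lme4)

data(sleepstudy)
m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)

# 1. Rescale continuous predictors
sleepstudy$Days_z <- as.numeric(scale(sleepstudy$Days))
m1 <- lmer(Reaction ~ Days_z + (1 + Days_z | Subject), data = sleepstudy)

# 2./3. Try a different optimizer and allow more iterations
m2 <- update(m, control = lmerControl(
  optimizer = "bobyqa",
  optCtrl = list(maxfun = 2e5)
))

# 4. If everything else fails, simplify the model,
#    e.g. drop the random slope
m3 <- update(m, . ~ Days + (1 | Subject))
```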
Convergence versus Singularity
Note the difference in meaning between singularity and convergence: singularity indicates an issue with the "true" best estimate, i.e. whether the maximum likelihood estimation for the variance-covariance matrix of the random effects is positive definite or only semi-definite. Convergence is a question of whether we can assume that the numerical optimization has worked correctly or not. A convergence failure means the optimizer (the algorithm) could not find a stable solution (Bates et al. 2015).
For singular models (see ?lme4::isSingular), convergence is determined
based on the optimizer's convergence code. If the optimizer reports
successful convergence (convergence code 0) for a singular model,
is_converged() returns TRUE. For non-singular models, in cases where the
gradient and Hessian are not available, is_converged() returns FALSE and
prints a message to indicate that convergence cannot be assessed through the
usual gradient-based checks. Note that performance::check_convergence() is
a wrapper around insight::is_converged().
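To make the distinction concrete, here is a short sketch checking both
properties on one model. The model below is only illustrative (a well-behaved
random-slope fit on lme4's sleepstudy data), not taken from the package's
documented examples.

```r
library(lme4)
library(insight)

data(sleepstudy)
m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)

# Singularity: is the estimated random-effects variance-covariance
# matrix on the boundary of the parameter space?
isSingular(m)

# Convergence: did the numerical optimization work correctly?
is_converged(m)

# The convergence value is attached as an attribute
attr(is_converged(m), "gradient")
```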
References
Bates, D., Mächler, M., Bolker, B., and Walker, S. (2015). Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software, 67(1), 1-48. doi:10.18637/jss.v067.i01
Examples
library(lme4)
data(cbpp)
set.seed(1)
cbpp$x <- rnorm(nrow(cbpp))
cbpp$x2 <- runif(nrow(cbpp))
model <- glmer(
  cbind(incidence, size - incidence) ~ period + x + x2 + (1 + x | herd),
  data = cbpp,
  family = binomial()
)
#> boundary (singular) fit: see help('isSingular')
is_converged(model)
#> [1] TRUE
#> attr(,"gradient")
#> [1] NA
# \donttest{
library(glmmTMB)
model <- glmmTMB(
  Sepal.Length ~ poly(Petal.Width, 4) * poly(Petal.Length, 4) +
    (1 + poly(Petal.Width, 4) | Species),
  data = iris
)
#> Warning: Model convergence problem; non-positive-definite Hessian matrix. See vignette('troubleshooting')
#> Warning: Model convergence problem; false convergence (8). See vignette('troubleshooting'), help('diagnose')
is_converged(model)
#> [1] FALSE
# }
