describe_posterior() triggers model recompiling & sampling #997
This seems to happen when sigma is extracted from the model. Reprex:

```r
library(brms)

m <- brm(
  formula = bf(
    hp ~ mpg + (1 | am / cyl) + (1 + mpg | cyl),
    shape ~ 1 + (1 | cyl)
  ),
  data = mtcars,
  family = negbinomial(),
  save_pars = save_pars(group = TRUE, latent = FALSE, all = TRUE),
  init = "random",
  chains = 2, iter = 1000, warmup = 500, thin = 1,
  cores = 2,
  normalize = TRUE, algorithm = "sampling",
  seed = 1234
)

insight::get_sigma(m)
#> The desired updates require recompiling the model
#> Compiling Stan program...
```
This is happening here, in lines 81 to 94 at 3ae8f48.

Refitting a model, especially an MCMC one, can be very costly. This should probably almost never happen without the user requesting it.
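(For context, a sketch of the kind of call that forces the recompile, assuming `m` is the brmsfit from the reprex above; this is not insight's actual code:)

```r
# Updating a brmsfit with a different formula cannot reuse the already
# compiled Stan model, so brms recompiles and samples again from scratch.
m0 <- update(m, formula. = hp ~ 1, refresh = 0)
#> The desired updates require recompiling the model
#> Compiling Stan program...
```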
What would you suggest returning instead?
I don't know enough about what's going on here; I just tracked this down. What is this function trying to do that it (sometimes?) needs an empty model?
@mattansb why is the rope range even getting called here?
We define the rope range depending on 0.1 * sigma for some distributions. If we can't extract sigma via `sigma()`, i.e. if sigma is not available, we fall back to other ways of getting it, which is what ends up refitting a null model here.
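(Roughly, as a simplified sketch of that idea, not bayestestR's exact implementation:)

```r
# Default ROPE for roughly-Gaussian outcomes: +/- 0.1 * residual SD.
# For the model above, this is the call that ends up triggering the refit.
s <- insight::get_sigma(m)
rope <- c(-0.1, 0.1) * as.numeric(s)
```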
For that model:

```r
sigma(m)
#> numeric(0)
```
@JWiley it shouldn't. See potential fix in easystats/bayestestR#695
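(A guard of roughly this shape would avoid the costly refit; this is an assumption about the general direction, not the actual patch in that PR:)

```r
# If sigma() cannot provide a residual SD cheaply, fall back to a fixed
# default ROPE instead of refitting a null model (assumed behaviour).
s <- tryCatch(sigma(m), error = function(e) numeric(0))
if (length(s) == 0) {
  rope <- c(-0.1, 0.1)
} else {
  rope <- c(-0.1, 0.1) * s
}
```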
@strengejacke why does this need a null model at all?
I'm not particularly up-to-date in this space, but from what I last read about estimating variances for GLMMs, used for effect sizes, R², and the like, you can define R² as a ratio of variance components. I don't recall the exact equation around the term with lambda, some ratios I think. There are probably some errors in what I've just said, but that is the gist I recall: when trying to get effect sizes for models like this, a null (intercept-only) model is needed for part of the variance calculation.

What I'm guessing is that, since it is the null model that gets recompiled and run rather than the full model, that explains why it only took 10 minutes, not 6 hours like my full model when it did recompile and run. I was very confused, because I thought it was running the same model, and as slow as it was, it was still way faster than my models.

If I'm on track with that, there are some things from a user perspective that would be nice.
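(For reference, the kind of decomposition described above. This is a reconstruction of the standard Nakagawa-style marginal R² for GLMMs, not the exact equation from the comment:)

$$
R^2_{\text{marginal}} = \frac{\sigma^2_f}{\sigma^2_f + \sum_l \sigma^2_l + \sigma^2_d},
\qquad
\sigma^2_d \approx \ln\!\left(1 + \frac{1}{\lambda} + \frac{1}{\theta}\right)
$$

where $\sigma^2_f$ is the fixed-effects variance, the $\sigma^2_l$ are the random-effect variances, and $\sigma^2_d$ is the distribution-specific (observation-level) variance, here for a negative binomial with log link. Since $\lambda$ is typically estimated from an intercept-only model, a null model has to be fitted somewhere along the way.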
The easy solution is to add a check for this. And @JWiley is right with his explanation of why we need the null model: it's required in the sub-function that computes the rope range.
Can we close this, since it's fixed in bayestestR? |
Let's keep this open. We should think of a better option than re-fitting a null model, especially a Bayesian null model, since "recycling" the priors from the full model to the null model might not make any sense.
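(One possible direction, purely a sketch of an idea rather than an implemented or vetted solution: approximate the quantity that currently comes from the null model directly from the data, instead of refitting.)

```r
# Approximate lambda from the observed response rather than from the
# intercept of a refitted null model (assumption: this would be good
# enough for a default ROPE; it is not current insight/bayestestR code).
y <- insight::get_response(m)
lambda_hat <- mean(y)
```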
Discussed in easystats/bayestestR#693
Originally posted by JWiley January 27, 2025
I am experiencing some behaviour that I think is unexpected, but perhaps I am missing something.
I fit some models in brms and save them. Later I read them back in and call describe_posterior(), mostly because I want to report 99% CIs, which is not the default calculated in brms. This is the code:
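(A minimal sketch of the kind of call meant here; the object and file names are assumptions:)

```r
# Read a previously saved brms fit back in and summarise it with 99% CIs.
m <- readRDS("fit.rds")
bayestestR::describe_posterior(m, ci = 0.99)
```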
The part that is unexpected to me is that this seems to trigger recompiling and resampling the model, which really slows it all down.
It's fairly easy to sidestep this, given that all I want that's not already in the saved model results is the 99% CIs and maybe the probability of direction, but I just expected this call to describe_posterior() to be virtually "free" computationally. As far as I know, I'm not trying to calculate Bayes factors or anything where you'd need prior draws, which is the only thing I can think of that's not already drawn and saved in the model.

Any thoughts or insight would be welcome.
I'm not entirely sure what exactly in the model is needed to trigger this, so my best guess is that it has something to do with a random slope plus the negative binomial distribution; it may also be related to correlations amongst the random effects. With most models this behaviour is not exhibited, which is why I thought it may be unexpected, but it also may be expected and I just don't understand what is happening, or what needs to happen, for the posterior summaries that are calculated.
Reproducible example: