If you know simulation-based calibration checking (SBC), you will enjoy our new paper "Posterior SBC: Simulation-Based Calibration Checking Conditional on Data" with Teemu Säilynoja, @marvinschmitt.com and @paulbuerkner.com
https://arxiv.org/abs/2502.03279 1/5
For example, for hierarchical models, MCMC can have problems with either the centered or the non-centered parameterization, depending on the data. For a given parameterization, prior SBC averages over both failing and non-failing inference. Posterior SBC focuses on the posterior conditional on the observed data, and can thus assess which parameterization works better for that specific data. 3/5
The original SBC checks whether the inference works for all possible data sets generated using the model and parameter draws from the prior. Priors are usually wider than posteriors and may contain regions where the computation fails. Illustration: Regions 1 and 3 exhibit bias in opposite directions, while inference is well calibrated within region 2. Prior SBC will not suggest calibration issues, while posterior SBC can assess inference for a posterior contained in one of the regions. 2/5
@MarvinSchmitt started collaborating on this while visiting Aalto University as an @ELLISforEurope PhD student. The ELLIS PhD student program has been great for increasing research visits and collaboration! 5/5
We illustrate with hierarchical normal and Lotka-Volterra models using MCMC, and with a drift diffusion model using amortized Bayesian inference. Posterior SBC is especially useful for amortized inference, as the repeated inference has negligible cost. 4/5
Postdoc and doctoral student positions in developing Bayesian methods! The positions are funded by the Finnish Center for Artificial Intelligence (FCAI), and there are many other topics, too, but if you specify me as the preferred supervisor, then it's going to be Bayesian methods. See more at https://fcai.fi/winter-2025-researcher-positions-in-ai-and-machine-learning
Spectacular sunset a few days ago
Call for StanCon 2025+ https://discourse.mc-stan.org/t/call-for-stancon-2025/37171
StanCons have been the best conferences I have ever attended, and they are suitable also for non-Stan people
[#]Bayesian #Stan #StanCon
My StanCon 2024 talk titled "Pareto-k diagnostic and sample size needed for CLT to hold" (the title is an approximation, but a more accurate title would have been too long) https://www.youtube.com/watch?v=12OMXQFbW6I&list=PLCrWEzJgSUqzNzh6mjWsWUu-lSK59VXP6&index=32
[#]Bayesian
The latest brms CRAN release added support for priorsense for easy prior and likelihood sensitivity analysis https://doi.org/10.1007/s11222-023-10366-5
```
> fit |> powerscale_plot_dens(variable='b_doseg', help_text=FALSE) +
    labs(x='Dose (g) coefficient', y=NULL)
> powerscale_sensitivity(fit, variable='b_doseg')
Sensitivity based on cjs_dist:
  variable prior likelihood           diagnosis
   b_doseg 0.236      0.219 prior-data conflict
```
[#]Bayesian #rstats
The most recent brms CRAN version added support for loo_epred() and moment matching for LOO-CV predictions, which makes it easy to make, for example, predictive probability calibration plots using the LOO-CV predictions
```
rd <- reliabilitydiag(EMOS = loo_epred(fit), y = df$y)
autoplot(rd) +
  labs(x = "Predicted (LOO)", y = "Conditional event probabilities")
```
[#]Bayesian #rstats
Somehow I had missed, noticed, forgot, and now remembered again that the brms CRAN version supports Stan's Pathfinder algorithm https://jmlr.org/papers/v23/21-0889.html when using the cmdstanr backend
[#]Bayesian
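For context, a minimal sketch of how this could look (the model and data here are hypothetical; the `algorithm = "pathfinder"` option requires sufficiently recent brms and cmdstanr versions):

```r
library(brms)

# Hypothetical toy data for illustration
df <- data.frame(x = rnorm(100))
df$y <- 1 + 2 * df$x + rnorm(100)

# Fit with Stan's Pathfinder instead of MCMC sampling;
# this requires the cmdstanr backend.
fit <- brm(
  y ~ x, data = df,
  backend = "cmdstanr",
  algorithm = "pathfinder"
)
```

Pathfinder draws are approximate, so they are best suited for quick initial checks or for initializing MCMC, not for final inference.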
I'm looking for a post-doc and a doctoral student to join my group at Aalto to work on Bayesian workflow, cross-validation, model checking, projection predictive model selection, inference diagnostics, priors, and survival analysis (I have plenty of research ideas; pick any combination you like)
[#]Bayesian
Arr! Today is International Talk Like a Pirate Day, and the pirates' favorite prior is The ARR2 prior: flexible predictive prior definition for Bayesian auto-regressions https://arxiv.org/abs/2405.19920
[#]Bayesian
David Kohns talking at StanCon about our paper "The ARR2 prior: flexible predictive prior definition for Bayesian auto-regressions" https://arxiv.org/abs/2405.19920
Anna Riha at StanCon talking about our paper "Supporting Bayesian modelling workflows with iterative filtering for multiverse analysis" https://arxiv.org/abs/2404.01688
StanCon 2nd day opened by @vianey's keynote "From the Depths to the Stars: How Modeling Shark Movements Illuminates Star Behavior"
It's amazing how all 12 speakers today at StanCon were so accurate with their timing, and everyone got a full 5 minutes of questions!
Profile: avehtari@bayes.club