Modelling non-Gaussian data
We will return to running generalised mixed-effects models (glmer) in rstanarm, as we did in the first practical, this time considering non-Gaussian-distributed data, specified with the family argument.
Example 1: Roaches
First we examine a dataset with a count response variable, roaches (Gelman & Hill 2007), which is available in the rstanarm package and for which there is a vignette for the glm analysis. We will adapt this for our exercise to make it a hierarchical model using glmer.
The dataset is intended to explore the efficacy of roach pest-management treatment. Our response \(y_{i}\) is the count of the number of roaches caught in traps, in apartments \(i\). Either a treatment (1) or control (0) was applied to each apartment, and the variable roach1 gives the number of roaches caught pre-treatment. There is also a variable indicating whether the apartment is in a building restricted to elderly residents: senior. Because the number of days for which the roach traps were used is not the same for all apartments in the sample (i.e. there were different levels of opportunity to catch the roaches), we also include that as an 'exposure' term, and it can be specified using the offset argument to stan_glm (and likewise stan_glmer).
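Concretely, the offset enters the linear predictor with a coefficient fixed at 1, so the Poisson model describes a rate of captures per unit of trap exposure. Written out for the predictors we use below:

\[
\log \mathbb{E}[y_i] = \log(\text{exposure2}_i) + \beta_0 + \beta_1\,\text{roach1}_i + \beta_2\,\text{treatment}_i + \beta_3\,\text{senior}_i
\]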
Following the vignette example, we rescale the roach1 variable. We additionally generate and add a new grouping variable, loc, to denote clustered locations of the apartments, so that we can demonstrate a hierarchical model. Spatial clustering of apartments could have an effect if roach population sizes vary over space (e.g. due to habitat or breeding-success factors), thereby affecting the number of roaches caught beyond the experimental treatments we are trying to observe.
library(ggplot2)
library(bayesplot) # you may need to install this one!
This is bayesplot version 1.10.0
- Online documentation and vignettes at mc-stan.org/bayesplot
- bayesplot theme set to bayesplot::theme_default()
* Does _not_ affect other ggplot2 plots
* See ?bayesplot_theme_set for details on theme setting
library(rstanarm)
Loading required package: Rcpp
This is rstanarm version 2.26.1
- See https://mc-stan.org/rstanarm/articles/priors for changes to default priors!
- Default priors may change, so it's safest to specify priors, even if equivalent to the defaults.
- For execution on a local, multicore CPU with excess RAM we recommend calling
options(mc.cores = parallel::detectCores())
data(roaches)
head(roaches)
y roach1 treatment senior exposure2
1 153 308.00 1 0 0.800000
2 127 331.25 1 0 0.600000
3 7 1.67 1 0 1.000000
4 7 3.00 1 0 1.000000
5 0 2.00 1 0 1.142857
6 0 0.00 1 0 1.000000
# Rescale
roaches$roach1 <- roaches$roach1 / 100
# Randomly assign 'location' number as a new grouping term
set.seed(2) # to ensure we get the same numbers each time
loc <- sample(1:12, size = 262, replace = TRUE)
loc
[1] 5 6 6 8 1 1 12 9 2 11 1 3 6 2 3 7 8 7 1 6 9 4 11 6 9
[26] 8 6 3 9 7 8 6 2 7 2 3 4 3 1 7 9 1 2 8 4 5 12 2 5 6
[51] 7 2 6 12 4 12 4 9 10 2 6 6 3 3 8 6 1 5 6 1 10 9 5 8 4
[76] 1 1 5 11 9 10 5 11 1 12 9 2 9 7 12 4 1 4 6 8 9 6 7 12 7
[101] 3 5 4 5 9 5 10 3 11 1 7 9 11 5 3 3 6 1 12 2 10 4 11 9 1
[126] 7 3 5 8 4 12 2 8 1 5 4 7 8 3 8 10 1 11 1 4 10 2 4 2 12
[151] 6 6 12 3 5 11 5 2 2 6 3 5 2 4 11 2 7 2 5 8 12 12 11 11 3
[176] 6 9 8 6 9 6 7 11 6 8 6 11 2 12 3 3 11 6 10 7 2 8 7 4 1
[201] 4 10 3 1 9 1 4 8 10 10 11 2 11 5 3 4 11 5 7 4 8 10 8 6 2
[226] 12 4 6 6 2 9 3 12 10 5 4 5 12 6 10 4 9 6 9 7 10 4 5 8 7
[251] 10 6 3 10 3 10 7 5 9 1 5 12
roaches_new <- cbind(roaches, loc)
head(roaches_new)
y roach1 treatment senior exposure2 loc
1 153 3.0800 1 0 0.800000 5
2 127 3.3125 1 0 0.600000 6
3 7 0.0167 1 0 1.000000 6
4 7 0.0300 1 0 1.000000 8
5 0 0.0200 1 0 1.142857 1
6 0 0.0000 1 0 1.000000 1
First, let’s run the models using the function glm (without the grouping term) and glmer (with the grouping term loc). Notice that, in comparison to the first practical, we have specified the Poisson distribution using a log link function, family = poisson(link = "log"), one of the families suitable for count data, but one which makes the quite stringent assumption that the mean and variance are equal.
As with the example in the first practical, we can choose how to specify the random effects. Do we expect location to just shift the intercepts of the relationships between pre-treatment and post-treatment roach counts, or could the slopes of those relationships also vary across locations? We produce two versions of the glmer model. Other combinations are possible, for example treatment efficacy varying across locations.
library(lme4)
Loading required package: Matrix
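Before fitting, we can make a quick, informal check of the equal mean-variance assumption on the raw counts (using the roaches data loaded above; a rough illustration, not a formal test, since it does not condition on the predictors):

```
# Informal check of the Poisson equal mean-variance assumption
# (uses the roaches data loaded above)
mean(roaches$y)
var(roaches$y)
# The variance is far larger than the mean, a sign of overdispersion
# relative to the Poisson assumption
```

This raw comparison already hints that a Poisson model may struggle, something we revisit with posterior predictive checks below.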
# Estimate original model
glm1 <- glm(y ~ roach1 + treatment + senior, offset = log(exposure2),
            data = roaches_new, family = poisson(link = "log"))
summary(glm1)
Call:
glm(formula = y ~ roach1 + treatment + senior, family = poisson(link = "log"),
data = roaches_new, offset = log(exposure2))
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 3.089246 0.021234 145.49 <2e-16 ***
roach1 0.698289 0.008874 78.69 <2e-16 ***
treatment -0.516726 0.024739 -20.89 <2e-16 ***
senior -0.379875 0.033418 -11.37 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 16954 on 261 degrees of freedom
Residual deviance: 11429 on 258 degrees of freedom
AIC: 12192
Number of Fisher Scoring iterations: 6
# Estimate candidate mixed-effects models
glm2 <- glmer(y ~ roach1 + treatment + senior + (1 | loc), offset = log(exposure2),
              data = roaches_new, family = poisson(link = "log"))
summary(glm2)
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: poisson ( log )
Formula: y ~ roach1 + treatment + senior + (1 | loc)
Data: roaches_new
Offset: log(exposure2)
AIC BIC logLik deviance df.resid
11646.4 11664.2 -5818.2 11636.4 257
Scaled residuals:
Min 1Q Median 3Q Max
-11.724 -3.885 -2.556 0.106 45.003
Random effects:
Groups Name Variance Std.Dev.
loc (Intercept) 0.1192 0.3453
Number of obs: 262, groups: loc, 12
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 3.024587 0.102076 29.63 <2e-16 ***
roach1 0.747000 0.009907 75.40 <2e-16 ***
treatment -0.534992 0.025625 -20.88 <2e-16 ***
senior -0.456364 0.034395 -13.27 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Correlation of Fixed Effects:
(Intr) roach1 trtmnt
roach1 -0.120
treatment -0.106 -0.050
senior -0.061 0.169 -0.124
ranef(glm2)
$loc
(Intercept)
1 -0.228774671
2 -0.010197753
3 0.003867704
4 0.410392656
5 0.262560697
6 -0.004889565
7 -0.424295619
8 0.279652319
9 -0.720059832
10 0.397487783
11 -0.275254019
12 0.321418322
with conditional variances for "loc"
glm3 <- glmer(y ~ roach1 + treatment + senior + (roach1 | loc), offset = log(exposure2),
              data = roaches_new, family = poisson(link = "log"))
summary(glm3)
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: poisson ( log )
Formula: y ~ roach1 + treatment + senior + (roach1 | loc)
Data: roaches_new
Offset: log(exposure2)
AIC BIC logLik deviance df.resid
11097.0 11122.0 -5541.5 11083.0 255
Scaled residuals:
Min 1Q Median 3Q Max
-10.591 -3.662 -2.624 0.753 44.072
Random effects:
Groups Name Variance Std.Dev. Corr
loc (Intercept) 0.1211 0.3480
roach1 0.0604 0.2458 -0.58
Number of obs: 262, groups: loc, 12
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.91617 0.10347 28.18 <2e-16 ***
roach1 0.82681 0.07215 11.46 <2e-16 ***
treatment -0.48798 0.02823 -17.29 <2e-16 ***
senior -0.38332 0.03602 -10.64 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Correlation of Fixed Effects:
(Intr) roach1 trtmnt
roach1 -0.574
treatment -0.124 -0.006
senior -0.063 0.016 -0.124
ranef(glm3)
$loc
(Intercept) roach1
1 -0.38402768 0.16364324
2 0.12792800 -0.11872988
3 -0.13448053 0.09682039
4 0.61808312 -0.33536780
5 0.16989974 0.06698500
6 0.15750258 -0.12988557
7 0.31699952 -0.39349310
8 0.35363146 -0.08569081
9 -0.48875948 -0.20979175
10 0.03440598 0.33279303
11 -0.49886039 0.24391234
12 -0.25280461 0.37264568
with conditional variances for "loc"
For simplicity, let’s proceed with model 2, this time using stan_glmer. The code below just uses the default settings for chains (4) and iterations (2000) because we have not explicitly stated them. What information could be used to help inform our priors?
# Estimate Bayesian version with stan_glmer
stan_glm2 <- stan_glmer(y ~ roach1 + treatment + senior + (1 | loc), offset = log(exposure2),
                        data = roaches_new, family = poisson(link = "log"),
                        prior = normal(0, 2.5),
                        prior_intercept = normal(0, 5),
                        seed = 12345)
SAMPLING FOR MODEL 'count' NOW (CHAIN 1).
Chain 1:
Chain 1: Gradient evaluation took 0.0001 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1:
Chain 1:
Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
Chain 1:
Chain 1: Elapsed Time: 2.999 seconds (Warm-up)
Chain 1: 2.063 seconds (Sampling)
Chain 1: 5.062 seconds (Total)
Chain 1:
SAMPLING FOR MODEL 'count' NOW (CHAIN 2).
Chain 2:
Chain 2: Gradient evaluation took 4.6e-05 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.46 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2:
Chain 2:
Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup)
Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup)
Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup)
Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
Chain 2:
Chain 2: Elapsed Time: 3.102 seconds (Warm-up)
Chain 2: 1.901 seconds (Sampling)
Chain 2: 5.003 seconds (Total)
Chain 2:
SAMPLING FOR MODEL 'count' NOW (CHAIN 3).
Chain 3:
Chain 3: Gradient evaluation took 4.5e-05 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.45 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3:
Chain 3:
Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup)
Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup)
Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup)
Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup)
Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
Chain 3:
Chain 3: Elapsed Time: 2.554 seconds (Warm-up)
Chain 3: 1.968 seconds (Sampling)
Chain 3: 4.522 seconds (Total)
Chain 3:
SAMPLING FOR MODEL 'count' NOW (CHAIN 4).
Chain 4:
Chain 4: Gradient evaluation took 4.6e-05 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.46 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4:
Chain 4:
Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup)
Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup)
Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup)
Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup)
Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
Chain 4:
Chain 4: Elapsed Time: 2.385 seconds (Warm-up)
Chain 4: 2.054 seconds (Sampling)
Chain 4: 4.439 seconds (Total)
Chain 4:
prior_summary(stan_glm2)
Priors for model 'stan_glm2'
------
Intercept (after predictors centered)
~ normal(location = 0, scale = 5)
Coefficients
~ normal(location = [0,0,0], scale = [2.5,2.5,2.5])
Covariance
~ decov(reg. = 1, conc. = 1, shape = 1, scale = 1)
------
See help('prior_summary.stanreg') for more details
Interpreting the model
coef(glm2)
$loc
(Intercept) roach1 treatment senior
1 2.795812 0.7470005 -0.5349923 -0.4563642
2 3.014389 0.7470005 -0.5349923 -0.4563642
3 3.028454 0.7470005 -0.5349923 -0.4563642
4 3.434979 0.7470005 -0.5349923 -0.4563642
5 3.287147 0.7470005 -0.5349923 -0.4563642
6 3.019697 0.7470005 -0.5349923 -0.4563642
7 2.600291 0.7470005 -0.5349923 -0.4563642
8 3.304239 0.7470005 -0.5349923 -0.4563642
9 2.304527 0.7470005 -0.5349923 -0.4563642
10 3.422074 0.7470005 -0.5349923 -0.4563642
11 2.749332 0.7470005 -0.5349923 -0.4563642
12 3.346005 0.7470005 -0.5349923 -0.4563642
attr(,"class")
[1] "coef.mer"
coef(stan_glm2)
$loc
(Intercept) roach1 treatment senior
1 2.794187 0.7469251 -0.5354725 -0.4567546
2 3.012590 0.7469251 -0.5354725 -0.4567546
3 3.027114 0.7469251 -0.5354725 -0.4567546
4 3.434737 0.7469251 -0.5354725 -0.4567546
5 3.287452 0.7469251 -0.5354725 -0.4567546
6 3.020929 0.7469251 -0.5354725 -0.4567546
7 2.596957 0.7469251 -0.5354725 -0.4567546
8 3.302607 0.7469251 -0.5354725 -0.4567546
9 2.302270 0.7469251 -0.5354725 -0.4567546
10 3.420861 0.7469251 -0.5354725 -0.4567546
11 2.747876 0.7469251 -0.5354725 -0.4567546
12 3.345756 0.7469251 -0.5354725 -0.4567546
attr(,"class")
[1] "coef.mer"
print(stan_glm2) # for further information on interpretation: ?print.stanreg
stan_glmer
family: poisson [log]
formula: y ~ roach1 + treatment + senior + (1 | loc)
observations: 262
------
Median MAD_SD
(Intercept) 3.0 0.1
roach1 0.7 0.0
treatment -0.5 0.0
senior -0.5 0.0
Error terms:
Groups Name Std.Dev.
loc (Intercept) 0.41
Num. levels: loc 12
------
* For help interpreting the printed output see ?print.stanreg
* For info on the priors used see ?prior_summary.stanreg
summary(stan_glm2, digits = 5)
Model Info:
function: stan_glmer
family: poisson [log]
formula: y ~ roach1 + treatment + senior + (1 | loc)
algorithm: sampling
sample: 4000 (posterior sample size)
priors: see help('prior_summary')
observations: 262
groups: loc (12)
Estimates:
mean sd 10% 50% 90%
(Intercept) 3.02173 0.12043 2.86954 3.02512 3.16624
roach1 0.74705 0.00997 0.73422 0.74693 0.75981
treatment -0.53525 0.02552 -0.56787 -0.53547 -0.50242
senior -0.45665 0.03422 -0.50108 -0.45675 -0.41292
b[(Intercept) loc:1] -0.22873 0.12850 -0.38917 -0.23093 -0.06586
b[(Intercept) loc:2] -0.00838 0.12446 -0.16153 -0.01253 0.14689
b[(Intercept) loc:3] 0.00585 0.12465 -0.14655 0.00200 0.16083
b[(Intercept) loc:4] 0.41307 0.12396 0.25913 0.40962 0.57061
b[(Intercept) loc:5] 0.26494 0.12476 0.11129 0.26234 0.42183
b[(Intercept) loc:6] -0.00232 0.12406 -0.15581 -0.00419 0.15298
b[(Intercept) loc:7] -0.42440 0.12765 -0.58091 -0.42816 -0.26659
b[(Intercept) loc:8] 0.28250 0.12571 0.12889 0.27749 0.43998
b[(Intercept) loc:9] -0.72250 0.13294 -0.88592 -0.72285 -0.55357
b[(Intercept) loc:10] 0.39948 0.12621 0.24870 0.39574 0.55630
b[(Intercept) loc:11] -0.27605 0.13146 -0.43677 -0.27724 -0.11032
b[(Intercept) loc:12] 0.32339 0.12386 0.16967 0.32064 0.47884
Sigma[loc:(Intercept),(Intercept)] 0.16750 0.09280 0.08105 0.14436 0.27775
Fit Diagnostics:
mean sd 10% 50% 90%
mean_PPD 25.63888 0.43514 25.09160 25.62977 26.21031
The mean_ppd is the sample average posterior predictive distribution of the outcome variable (for details see help('summary.stanreg')).
MCMC diagnostics
mcse Rhat n_eff
(Intercept) 0.00507 1.01114 564
roach1 0.00017 1.00086 3562
treatment 0.00048 0.99964 2773
senior 0.00061 0.99938 3129
b[(Intercept) loc:1] 0.00512 1.01046 630
b[(Intercept) loc:2] 0.00500 1.00979 619
b[(Intercept) loc:3] 0.00507 1.01047 604
b[(Intercept) loc:4] 0.00496 1.01022 624
b[(Intercept) loc:5] 0.00500 1.00942 624
b[(Intercept) loc:6] 0.00501 1.01017 614
b[(Intercept) loc:7] 0.00508 1.01038 633
b[(Intercept) loc:8] 0.00496 1.00920 643
b[(Intercept) loc:9] 0.00524 1.00841 643
b[(Intercept) loc:10] 0.00501 1.00960 634
b[(Intercept) loc:11] 0.00517 1.00904 646
b[(Intercept) loc:12] 0.00495 1.00986 626
Sigma[loc:(Intercept),(Intercept)] 0.00331 1.00562 784
mean_PPD 0.00658 1.00001 4369
log-posterior 0.14613 1.00328 758
For each parameter, mcse is Monte Carlo standard error, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence Rhat=1).
plot(stan_glm2, "trace") # nice hairy caterpillars!
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
yrep <- posterior_predict(stan_glm2)
# yrep will be an S x N matrix, where S is the size of the posterior sample and
# N is the number of data points. Each row of yrep represents a full dataset
# generated from the posterior predictive distribution.
dim(yrep)
pp_check(stan_glm2) # this does not look great! Can you describe what is happening here?
prop_zero <- function(y) mean(y == 0)
(prop_zero_test1 <- pp_check(stan_glm2, plotfun = "stat", stat = "prop_zero", binwidth = .005))
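The same test statistic can also be computed by hand from yrep, which makes explicit what pp_check is summarising here (a sketch using the yrep matrix and prop_zero function defined above):

```
# Proportion of zeros in each posterior predictive replicate
prop_zero_rep <- apply(yrep, 1, prop_zero)
summary(prop_zero_rep)
# Observed proportion of zeros, for comparison
prop_zero(roaches_new$y)
```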
The proportion of zeros computed from the sample y is the dark blue vertical line (>35% zeros) and the light blue bars are those from the replicated datasets from the model. What do we think of our model?
We should consider a model that more accurately accounts for the large proportion of zeros in the data. One option is to use a negative binomial distribution for the data, which is often used for zero-inflated or overdispersed count data. It is more flexible than the Poisson because the (conditional) mean and variance of \(y\) can differ. We update the model with the new family, as follows:
stan_glm_nb <- update(stan_glm2, family = neg_binomial_2)
SAMPLING FOR MODEL 'count' NOW (CHAIN 1).
Chain 1:
Chain 1: Gradient evaluation took 0.000108 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.08 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1:
Chain 1:
Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
Chain 1:
Chain 1: Elapsed Time: 2.306 seconds (Warm-up)
Chain 1: 2.605 seconds (Sampling)
Chain 1: 4.911 seconds (Total)
Chain 1:
SAMPLING FOR MODEL 'count' NOW (CHAIN 2).
Chain 2:
Chain 2: Gradient evaluation took 0.0001 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2:
Chain 2:
Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup)
Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup)
Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup)
Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
Chain 2:
Chain 2: Elapsed Time: 2.415 seconds (Warm-up)
Chain 2: 2.365 seconds (Sampling)
Chain 2: 4.78 seconds (Total)
Chain 2:
SAMPLING FOR MODEL 'count' NOW (CHAIN 3).
Chain 3:
Chain 3: Gradient evaluation took 8.5e-05 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.85 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3:
Chain 3:
Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup)
Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup)
Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup)
Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup)
Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
Chain 3:
Chain 3: Elapsed Time: 2.4 seconds (Warm-up)
Chain 3: 1.535 seconds (Sampling)
Chain 3: 3.935 seconds (Total)
Chain 3:
SAMPLING FOR MODEL 'count' NOW (CHAIN 4).
Chain 4:
Chain 4: Gradient evaluation took 8.6e-05 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.86 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4:
Chain 4:
Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup)
Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup)
Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup)
Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup)
Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
Chain 4:
Chain 4: Elapsed Time: 2.519 seconds (Warm-up)
Chain 4: 2.553 seconds (Sampling)
Chain 4: 5.072 seconds (Total)
Chain 4:
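As an aside, the extra flexibility of the negative binomial can be seen by simulation: in R's rnbinom(mu, size) parameterisation the variance is approximately mu + mu^2/size, so it can greatly exceed the mean. The specific numbers below are illustrative only:

```
# Simulate negative binomial counts with mean 25 and a small dispersion parameter
set.seed(1)
sim <- rnbinom(1e5, mu = 25, size = 0.5)
mean(sim) # close to mu = 25
var(sim)  # roughly mu + mu^2/size = 25 + 625/0.5 = 1275, far above the mean
```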
prop_zero_test2 <- pp_check(stan_glm_nb, plotfun = "stat", stat = "prop_zero",
                            binwidth = 0.01)
prior_summary(stan_glm_nb)
Priors for model 'stan_glm_nb'
------
Intercept (after predictors centered)
~ normal(location = 0, scale = 5)
Coefficients
~ normal(location = [0,0,0], scale = [2.5,2.5,2.5])
Auxiliary (reciprocal_dispersion)
~ exponential(rate = 1)
Covariance
~ decov(reg. = 1, conc. = 1, shape = 1, scale = 1)
------
See help('prior_summary.stanreg') for more details
$loc
(Intercept) roach1 treatment senior
1 2.844233 1.311689 -0.7989194 -0.4060952
2 2.843958 1.311689 -0.7989194 -0.4060952
3 2.826025 1.311689 -0.7989194 -0.4060952
4 2.980595 1.311689 -0.7989194 -0.4060952
5 2.877064 1.311689 -0.7989194 -0.4060952
6 2.859595 1.311689 -0.7989194 -0.4060952
7 2.844754 1.311689 -0.7989194 -0.4060952
8 2.868821 1.311689 -0.7989194 -0.4060952
9 2.802926 1.311689 -0.7989194 -0.4060952
10 2.869418 1.311689 -0.7989194 -0.4060952
11 2.817917 1.311689 -0.7989194 -0.4060952
12 2.855540 1.311689 -0.7989194 -0.4060952
attr(,"class")
[1] "coef.mer"
# Show graphs for Poisson and negative binomial side by side - we use this function from the bayesplot package
bayesplot_grid(prop_zero_test1 + ggtitle("Poisson"),
               prop_zero_test2 + ggtitle("Negative Binomial"),
               grid_args = list(ncol = 2))
We can see clearly that the updated model is doing a better job of capturing the proportion of zeros in the data.
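To go beyond this graphical check, the two models could also be compared formally, for example with approximate leave-one-out cross-validation (a sketch using loo(), which rstanarm provides via the loo package):

```
# Compare the Poisson and negative binomial models with approximate LOO-CV
loo_pois <- loo(stan_glm2)
loo_nb   <- loo(stan_glm_nb)
loo_compare(loo_pois, loo_nb) # the model in the top row is preferred
```

Note that loo() may warn about high Pareto k values for the Poisson model, which is itself a sign of misfit.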
Example 2: Climbing expeditions (…or other success/survival outcomes)
For this part of the practical we make use of an example from Johnson, Ott and Dogucu (2021), which presents a subset of The Himalayan Database (2020), https://www.himalayandatabase.com/ .
The climbers.csv file contains the outcomes (success in reaching the destination) from 200 climbing expeditions. Each row represents a climber (member_id) and their individual success outcome (true or false) for a given expedition_id. The other predictors of outcomes include climber age, season, expedition_role and oxygen_used. expedition_id groups members across the dataset, because each expedition has multiple climbers. This grouping variable can be considered structure within the dataset, because members within a team share similar conditions (same weather, same destination, same instructors).
Let’s read in the data and visually explore.
library(bayesrules)
climbers <- read.csv(url("https://raw.githubusercontent.com/NERC-CEH/beem_data/main/climbers.csv"))
head(climbers)
X expedition_id member_id success year season age expedition_role
1 1 AMAD81101 AMAD81101-03 TRUE 1981 Spring 28 Climber
2 2 AMAD81101 AMAD81101-04 TRUE 1981 Spring 27 Exp Doctor
3 3 AMAD81101 AMAD81101-02 TRUE 1981 Spring 35 Deputy Leader
4 4 AMAD81101 AMAD81101-05 TRUE 1981 Spring 37 Climber
5 5 AMAD81101 AMAD81101-06 TRUE 1981 Spring 43 Climber
6 6 AMAD81101 AMAD81101-07 FALSE 1981 Spring 38 Climber
oxygen_used
1 FALSE
2 FALSE
3 FALSE
4 FALSE
5 FALSE
6 FALSE
library(tidyverse)
-- Attaching core tidyverse packages ------------------------ tidyverse 2.0.0 --
v dplyr 1.1.4 v readr 2.1.4
v forcats 1.0.0 v stringr 1.5.1
v lubridate 1.9.3 v tibble 3.2.1
v purrr 1.0.2 v tidyr 1.3.0
-- Conflicts ------------------------------------------ tidyverse_conflicts() --
x tidyr::expand() masks Matrix::expand()
x dplyr::filter() masks stats::filter()
x dplyr::lag() masks stats::lag()
x tidyr::pack() masks Matrix::pack()
x tidyr::unpack() masks Matrix::unpack()
i Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
expedition_success <- climbers %>%
  group_by(expedition_id) %>%
  summarise(success_rate = mean(success))
ggplot(expedition_success, aes(x = success_rate)) +
  geom_histogram(color = "white")
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
What type of distribution would be appropriate for modelling the outcomes?
climb_stan_glm1 <- stan_glmer(
  success ~ age + oxygen_used + (1 | expedition_id),
  data = climbers, family = binomial,
  prior_intercept = normal(0, 2.5, autoscale = TRUE),
  prior = normal(0, 2.5, autoscale = TRUE),
  prior_covariance = decov(reg = 1, conc = 1, shape = 1, scale = 1),
  chains = 4, iter = 2500, seed = 84735
)
SAMPLING FOR MODEL 'bernoulli' NOW (CHAIN 1).
Chain 1:
Chain 1: Gradient evaluation took 0.000542 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 5.42 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1:
Chain 1:
Chain 1: Iteration: 1 / 2500 [ 0%] (Warmup)
Chain 1: Iteration: 250 / 2500 [ 10%] (Warmup)
Chain 1: Iteration: 500 / 2500 [ 20%] (Warmup)
Chain 1: Iteration: 750 / 2500 [ 30%] (Warmup)
Chain 1: Iteration: 1000 / 2500 [ 40%] (Warmup)
Chain 1: Iteration: 1250 / 2500 [ 50%] (Warmup)
Chain 1: Iteration: 1251 / 2500 [ 50%] (Sampling)
Chain 1: Iteration: 1500 / 2500 [ 60%] (Sampling)
Chain 1: Iteration: 1750 / 2500 [ 70%] (Sampling)
Chain 1: Iteration: 2000 / 2500 [ 80%] (Sampling)
Chain 1: Iteration: 2250 / 2500 [ 90%] (Sampling)
Chain 1: Iteration: 2500 / 2500 [100%] (Sampling)
Chain 1:
Chain 1: Elapsed Time: 27.839 seconds (Warm-up)
Chain 1: 20.689 seconds (Sampling)
Chain 1: 48.528 seconds (Total)
Chain 1:
SAMPLING FOR MODEL 'bernoulli' NOW (CHAIN 2).
Chain 2:
Chain 2: Gradient evaluation took 0.000521 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 5.21 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2:
Chain 2:
Chain 2: Iteration: 1 / 2500 [ 0%] (Warmup)
Chain 2: Iteration: 250 / 2500 [ 10%] (Warmup)
Chain 2: Iteration: 500 / 2500 [ 20%] (Warmup)
Chain 2: Iteration: 750 / 2500 [ 30%] (Warmup)
Chain 2: Iteration: 1000 / 2500 [ 40%] (Warmup)
Chain 2: Iteration: 1250 / 2500 [ 50%] (Warmup)
Chain 2: Iteration: 1251 / 2500 [ 50%] (Sampling)
Chain 2: Iteration: 1500 / 2500 [ 60%] (Sampling)
Chain 2: Iteration: 1750 / 2500 [ 70%] (Sampling)
Chain 2: Iteration: 2000 / 2500 [ 80%] (Sampling)
Chain 2: Iteration: 2250 / 2500 [ 90%] (Sampling)
Chain 2: Iteration: 2500 / 2500 [100%] (Sampling)
Chain 2:
Chain 2: Elapsed Time: 29.241 seconds (Warm-up)
Chain 2: 21.082 seconds (Sampling)
Chain 2: 50.323 seconds (Total)
Chain 2:
SAMPLING FOR MODEL 'bernoulli' NOW (CHAIN 3).
Chain 3:
Chain 3: Gradient evaluation took 0.000561 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 5.61 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3:
Chain 3:
Chain 3: Iteration: 1 / 2500 [ 0%] (Warmup)
Chain 3: Iteration: 250 / 2500 [ 10%] (Warmup)
Chain 3: Iteration: 500 / 2500 [ 20%] (Warmup)
Chain 3: Iteration: 750 / 2500 [ 30%] (Warmup)
Chain 3: Iteration: 1000 / 2500 [ 40%] (Warmup)
Chain 3: Iteration: 1250 / 2500 [ 50%] (Warmup)
Chain 3: Iteration: 1251 / 2500 [ 50%] (Sampling)
Chain 3: Iteration: 1500 / 2500 [ 60%] (Sampling)
Chain 3: Iteration: 1750 / 2500 [ 70%] (Sampling)
Chain 3: Iteration: 2000 / 2500 [ 80%] (Sampling)
Chain 3: Iteration: 2250 / 2500 [ 90%] (Sampling)
Chain 3: Iteration: 2500 / 2500 [100%] (Sampling)
Chain 3:
Chain 3: Elapsed Time: 29.441 seconds (Warm-up)
Chain 3: 19.956 seconds (Sampling)
Chain 3: 49.397 seconds (Total)
Chain 3:
SAMPLING FOR MODEL 'bernoulli' NOW (CHAIN 4).
Chain 4:
Chain 4: Gradient evaluation took 0.000503 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 5.03 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4:
Chain 4:
Chain 4: Iteration: 1 / 2500 [ 0%] (Warmup)
Chain 4: Iteration: 250 / 2500 [ 10%] (Warmup)
Chain 4: Iteration: 500 / 2500 [ 20%] (Warmup)
Chain 4: Iteration: 750 / 2500 [ 30%] (Warmup)
Chain 4: Iteration: 1000 / 2500 [ 40%] (Warmup)
Chain 4: Iteration: 1250 / 2500 [ 50%] (Warmup)
Chain 4: Iteration: 1251 / 2500 [ 50%] (Sampling)
Chain 4: Iteration: 1500 / 2500 [ 60%] (Sampling)
Chain 4: Iteration: 1750 / 2500 [ 70%] (Sampling)
Chain 4: Iteration: 2000 / 2500 [ 80%] (Sampling)
Chain 4: Iteration: 2250 / 2500 [ 90%] (Sampling)
Chain 4: Iteration: 2500 / 2500 [100%] (Sampling)
Chain 4:
Chain 4: Elapsed Time: 27.457 seconds (Warm-up)
Chain 4: 20.315 seconds (Sampling)
Chain 4: 47.772 seconds (Total)
Chain 4:
Let’s check our model. Some of the outputs are quite long because our population of random effects is quite large.
library(bayesplot)
rhat(climb_stan_glm1) # see ?rhat for further details
(Intercept)
1.0018448
age
1.0000334
oxygen_usedTRUE
0.9997087
b[(Intercept) expedition_id:AMAD03107]
1.0003086
b[(Intercept) expedition_id:AMAD03327]
1.0005068
b[(Intercept) expedition_id:AMAD05338]
1.0002458
... (output truncated: one Rhat value per expedition_id random intercept, 200 in total, all lying between 0.999 and 1.004)
Sigma[expedition_id:(Intercept),(Intercept)]
1.0035035
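Rather than scanning hundreds of printed Rhat values, bayesplot can display them all in a single plot. A minimal sketch, assuming the fitted model object is `climb_stan_glm1` as above:

```r
library(bayesplot)

# Collect all Rhat values into a named vector and plot them; points are
# coloured by how far each value lies from 1, so any parameter with
# poor mixing stands out immediately
r <- rhat(climb_stan_glm1)
mcmc_rhat(r)
```

With a model this size, the plot view makes it much easier to confirm that every Rhat is close to 1 than reading the full printed vector.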
stan_glmer
family: binomial [logit]
formula: success ~ age + oxygen_used + (1 | expedition_id)
observations: 2076
------
Median MAD_SD
(Intercept) -1.4 0.5
age 0.0 0.0
oxygen_usedTRUE 5.8 0.5
Error terms:
Groups Name Std.Dev.
expedition_id (Intercept) 3.6
Num. levels: expedition_id 200
------
* For help interpreting the printed output see ?print.stanreg
* For info on the priors used see ?prior_summary.stanreg
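Because this is a binomial model with a logit link, the fixed-effect medians in the printed output are on the log-odds scale. Exponentiating them gives odds ratios, which are often easier to interpret; a sketch, assuming the model object `climb_stan_glm1`:

```r
# Posterior medians of the fixed effects are log-odds; exponentiating
# gives odds ratios. For example, oxygen_usedTRUE has a median of about
# 5.8, and exp(5.8) is roughly 330 -- a very large increase in the odds
# of summit success when supplemental oxygen is used
exp(fixef(climb_stan_glm1))
```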
summary(climb_stan_glm1, digits = 5)
Model Info:
function: stan_glmer
family: binomial [logit]
formula: success ~ age + oxygen_used + (1 | expedition_id)
algorithm: sampling
sample: 5000 (posterior sample size)
priors: see help('prior_summary')
observations: 2076
groups: expedition_id (200)
Estimates:
                                               mean      sd        10%       50%       90%
(Intercept)                                  -1.42373   0.48891  -2.05685  -1.41182  -0.80056
age                                          -0.04743   0.00927  -0.05911  -0.04751  -0.03554
oxygen_usedTRUE                               5.81505   0.48623   5.20939   5.79456   6.45892
b[(Intercept) expedition_id:AMAD03107]        2.56631   1.09331   1.17037   2.59309   3.90074
b[(Intercept) expedition_id:AMAD03327]        6.09101   1.91442   3.92033   5.81565   8.58628
b[(Intercept) expedition_id:AMAD05338]        4.10843   0.74724   3.15691   4.09092   5.07189
... (output truncated: one row per expedition_id random intercept, 200 in total)
Sigma[expedition_id:(Intercept),(Intercept)] 13.23021   2.53061  10.22658  12.93519  16.62386
Fit Diagnostics:
mean sd 10% 50% 90%
mean_PPD 0.38891 0.00824 0.37813 0.38921 0.39933
The mean_ppd is the sample average posterior predictive distribution of the outcome variable (for details see help('summary.stanreg')).
MCMC diagnostics
mcse Rhat n_eff
(Intercept) 0.01120 1.00184 1904
age 0.00012 1.00003 6092
oxygen_usedTRUE 0.00997 0.99971 2378
b[(Intercept) expedition_id:AMAD03107]       0.01631 1.00031 4494
b[(Intercept) expedition_id:AMAD03327]       0.02709 1.00051 4995
 ... (rows for the remaining expedition-level intercepts omitted) ...
b[(Intercept) expedition_id:TUKU16301]       0.02745 1.00002 3712
Sigma[expedition_id:(Intercept),(Intercept)] 0.06973 1.00350 1317
mean_PPD 0.00012 1.00015 5050
log-posterior 0.42548 1.00512 976
For each parameter, mcse is Monte Carlo standard error, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence Rhat=1).
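Rhat compares the between-chain variance of the draws with the within-chain variance; once the chains have mixed, the two agree and Rhat approaches 1. As a self-contained toy sketch of the split-Rhat idea (well-mixed simulated draws, a simplified version of the calculation rather than rstanarm's exact implementation):

```r
# Simulate 4 well-mixed chains of 1000 draws each from the same distribution
set.seed(1)
chains <- matrix(rnorm(4000), nrow = 1000, ncol = 4)

# Split each chain in half, giving 8 half-chains of 500 draws
split_chains <- cbind(chains[1:500, ], chains[501:1000, ])
n <- nrow(split_chains)

# Between-chain variance (B) and mean within-chain variance (W)
chain_means <- colMeans(split_chains)
B <- n * var(chain_means)
W <- mean(apply(split_chains, 2, var))

# Pooled variance estimate and the potential scale reduction factor
var_plus <- ((n - 1) / n) * W + B / n
rhat <- sqrt(var_plus / W)
rhat  # close to 1 for well-mixed chains
```

If the chains had not converged to the same distribution, B would dominate W and Rhat would exceed 1, which is why values much above 1 in the table above would be a warning sign.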
# Optional visual convergence checks:
# plot(climb_stan_glm1, "trace")
# plot(climb_stan_glm1, "hist")
Now we can conduct our posterior predictive check. Since the response is binary, a natural test statistic is the success rate: does the model reproduce the observed proportion of successes?
# Define a function for the test statistic: the success rate
success_rate <- function(x) mean(x == 1)

# Posterior predictive check: observed vs replicated success rates
pp_check(climb_stan_glm1,
         plotfun = "stat", stat = "success_rate") +
  xlab("success rate")
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
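The `stat` check compares the observed success rate against the same statistic computed on each posterior predictive draw. A self-contained toy sketch of that logic, using a simple conjugate Beta-Binomial model on simulated data (not the climbers dataset or rstanarm's machinery):

```r
# Toy observed binary outcomes with true success probability 0.4
set.seed(42)
y <- rbinom(200, 1, 0.4)

# Posterior draws of the success probability under a Beta(1, 1) prior
# (conjugate update: Beta(1 + successes, 1 + failures))
p_draws <- rbeta(1000, 1 + sum(y), 1 + sum(1 - y))

# One replicated dataset per posterior draw, then the statistic per replicate
yrep <- sapply(p_draws, function(p) rbinom(200, 1, p))  # 200 x 1000
rep_rates <- colMeans(yrep)

# A well-calibrated model places the observed rate inside the bulk of
# the replicated distribution of the statistic
obs_rate <- mean(y)
quantile(rep_rates, c(0.05, 0.95))
```

If the vertical line for the observed statistic sits in the tails of the replicated histogram in the `pp_check` plot, that is evidence the model is failing to capture that feature of the data.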