### Libraries
library("tidyverse")
library("ggplot2")
library("patchwork")
library("lubridate")
library("kableExtra")
library("gtsummary")
library("lubridate")
library("equatiomatic")
library("ggdag")
library("brms")
library("rstan")
library("rstanarm")
library("bayesplot")
library("easystats")
library("kableExtra")
library("broom")
library("tidybayes")
library("bmlm")
if (!require(tidyLPA)) {
install.packages("tidyLPA")
}
# rstan options
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores ())
theme_set(theme_classic())
The goal of this seminar is to develop the following skills:

- modelling ordinal responses
- modelling monotonic predictors
- distributional models
- group-level (random) intercepts
- multivariate responses
- mediation

### Ordinal responses

An ordinal model here is a cumulative logistic model. Instead of a binary outcome, we assume there are \(n > 2\) ordered outcomes. Consider the HLTH.Fatigue indicator from the NZAVS jitter dataset. The item reads:
“During the past 30 days have you felt fatigue…”
and the responses are:
“None Of The Time,”
“A Little Of The Time,”
“Some Of The Time,”
“Most Of The Time,”
“All Of The Time.”
A priori, we might think that Fatigue is better approached as an ordered categorical indicator rather than as a continuous indicator:
hist(nzl$HLTH.Fatigue)
We can write the model equation for an ordinal (or cumulative logistic) model of Fatigue as follows:
\[\begin{align} y_{i} \sim \text{Ordered}(p_i) \\ \text{CumLogit}(p_i) = \alpha^c + \beta x_i \\ \alpha^c \sim \text{Normal}(0,10) \\ \beta \sim \text{Normal}(0,1) \end{align}\]
For an ordinal response model, where there are \(R\) responses for our outcome indicator, there are \(C = R-1, c = 1\dots C\) thresholds or “cutpoints.”
As with binary logistic regression, the residual standard deviation of a cumulative logistic model is not identified, so it is fixed to 1. \(\alpha^c\) denotes the \(c^{th}\) intercept estimated in the model; with five response levels there are four thresholds, hence four intercepts are estimated. \(\beta\) is the estimate for the \(X\) parameter measured on \(i \dots N\) individuals, in this case the time categories of the Covid_Timeline indicator.
m1 <- brm(
  bf(HLTH.Fatigue_int ~ Covid_Timeline_cr),
  family = cumulative(link = "logit"),
  data = nzl,
  file = here::here("models", "ordinal_fatigue"),
  silent = FALSE
)
par_m1 <- parameters::model_parameters(m1)
par_m1
# Fixed effects
Parameter | Median | 89% CI | pd | % in ROPE | Rhat | ESS
----------------------------------------------------------------------------------------------
Intercept[1] | -1.87 | [-1.94, -1.80] | 100% | 0% | 1.000 | 3136.00
Intercept[2] | -0.07 | [-0.12, -0.02] | 99.00% | 99.98% | 1.000 | 4351.00
Intercept[3] | 1.33 | [ 1.27, 1.39] | 100% | 0% | 1.000 | 4529.00
Intercept[4] | 3.02 | [ 2.91, 3.12] | 100% | 0% | 0.999 | 4437.00
Covid_Timeline_crJanFeb | 0.08 | [-0.06, 0.22] | 81.92% | 86.88% | 0.999 | 5554.00
Covid_Timeline_crEarlyMarch | 0.04 | [-0.10, 0.19] | 67.50% | 93.50% | 1.000 | 5763.00
Covid_Timeline_crLockdown | -0.26 | [-0.43, -0.08] | 99.12% | 22.82% | 0.999 | 4896.00
Covid_Timeline_crPostLockdown | -0.10 | [-0.22, 0.01] | 92.50% | 84.38% | 1.000 | 4559.00
plot(par_m1)
Here we can see that the log-odds of Fatigue dropped during Lockdown. But how exactly did Fatigue drop?
As with logistic regression, the coefficients here are not obvious. To interpret results, we should graph the predicted outcomes at specific values.
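Before graphing, we can convert the cutpoints to the probability scale by hand. A minimal sketch using the posterior median intercepts from the table above (assuming PreCOVID is the reference category):

# cumulative logit cutpoints -> baseline response probabilities
cutpoints <- c(-1.87, -0.07, 1.33, 3.02)
cum_p <- plogis(cutpoints)      # P(response <= c) at the baseline
probs <- diff(c(0, cum_p, 1))   # P(response == c)
round(probs, 2)                 # ~ .13, .35, .31, .16, .05 across the five categories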
To understand what is happening, we plot the probability of each response category by setting categorical = TRUE. Following McElreath's Statistical Rethinking, we will predict outcomes using the .89 probability credible interval:
plot(
  conditional_effects(
    m1,
    categorical = TRUE,
    prob = 0.89,
    re_formula = NA
  ),
  points = TRUE,  # note: data points cannot be displayed when categorical = TRUE
  point_args = list(
    alpha = 0.1,
    width = .1,
    size = .2
  )
)
Here we see the separation occurring during Lockdown as compared with the baseline PreCOVID.
This is another method for graphing, where the individual facets denote our expected response thresholds:
Finally, for the purposes of exploration, we can obtain an average response using the brms conditional_effects function with the setting categorical = FALSE:
plot(
  conditional_effects(
    m1,
    categorical = FALSE,
    prob = 0.89,
    re_formula = NA
  ),
  points = TRUE,
  point_args = list(
    alpha = 0.1,
    width = .1,
    size = .2
  )
)
Reference: https://psyarxiv.com/9qkhj/; see also the brms documentation.
### Monotonic predictors

The brms documentation explains:

A predictor, which we want to model as monotonic (i.e., having a monotonically increasing or decreasing relationship with the response), must either be integer valued or an ordered factor. As opposed to a continuous predictor, predictor categories (or integers) are not assumed to be equidistant with respect to their effect on the response variable. Instead, the distance between adjacent predictor categories (or integers) is estimated from the data and may vary across categories.
For an ordinal predictor with \(R\) categories, there are \(D = R-1\) distances between adjacent categories to estimate. brms models the effect with a single scale parameter \(b\) and a simplex \(\zeta\) (\(\zeta_c \geq 0\), \(\sum_{c=1}^{D}\zeta_c = 1\)) that allocates the overall effect across the \(D\) steps:

\[g(y_i) = \alpha + b\, D \sum_{c = 1}^{x_i} \zeta_c\]
Suppose we wanted to assess whether fatigue predicts psychological distress. We’ll use a smaller sub-sample of the NZAVS jitter dataset to speed up computing time:
First we estimate a model without a monotonic effect:
# ordinary predictor
mo_0 <- brm(
  bf(KESSLER6sum ~ HLTH.Fatigue,
     family = "poisson"),
  data = sub_nzl,
  file = here::here("models", "monotonic_0"),
  silent = FALSE
)

# table
par_mo_0 <- parameters::model_parameters(mo_0)
par_mo_0
# Fixed effects
Parameter | Median | 89% CI | pd | % in ROPE | Rhat | ESS
-------------------------------------------------------------------------
(Intercept) | 0.76 | [0.67, 0.86] | 100% | 0% | 1.001 | 1640.00
HLTH.Fatigue | 0.45 | [0.42, 0.49] | 100% | 0% | 1.000 | 1895.00
The expected increase in distress from a one-unit increase in Fatigue is .45 units of Kessler-6 distress on the log scale. To translate this expectation to the data scale, recall that we must exponentiate. For example, the expected distress count at one unit of fatigue (using rounded coefficients) is:
exp(.75 + .45)
[1] 3.320117
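Because this is a Poisson model, the slope also has a multiplicative interpretation on the data scale; a quick check using the posterior median from the table above:

exp(.45)  # ~ 1.57: each unit of fatigue multiplies expected distress by about 1.57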
We plot the entire range of the response scale:
p_mo_0 <- plot(
conditional_effects(mo_0),
points = TRUE,
point_args = list(alpha = 0.1,
width = .2),
ask = FALSE
)[[1]]
We next model the relationship of Fatigue to Distress by treating Fatigue as a monotonic predictor, using the mo() syntax, as follows:
mo_1 <- brm(
bf(KESSLER6sum ~ mo(HLTH.Fatigue),
family = "poisson"),
data = sub_nzl,
file = here::here("models", "monotonic_1")
)
# table
par_mo_1 <- parameters::model_parameters(mo_1)
par_mo_1
# Fixed effects
Parameter | Median | 89% CI | pd | % in ROPE | Rhat | ESS
---------------------------------------------------------------------------
(Intercept) | 0.84 | [0.67, 1.01] | 100% | 0% | 1.004 | 1179.00
moHLTH.Fatigue | 0.44 | [0.38, 0.49] | 100% | 0% | 1.004 | 1253.00
# Fixed effects (monotonic effects)
Parameter | Median | 89% CI | pd | % in ROPE | Rhat | ESS
-----------------------------------------------------------------------------
HLTH.Fatigue1[1] | 0.19 | [0.09, 0.28] | 100% | 7.90% | 1.003 | 1248.00
HLTH.Fatigue1[2] | 0.29 | [0.21, 0.36] | 100% | 0% | 1.002 | 1691.00
HLTH.Fatigue1[3] | 0.24 | [0.17, 0.31] | 100% | 0.03% | 1.000 | 2338.00
HLTH.Fatigue1[4] | 0.28 | [0.20, 0.35] | 100% | 0% | 1.001 | 2441.00
Note that we have four additional parameters for Fatigue. What do these mean? They are the elements of the simplex \(\zeta\): each estimates the proportion of the overall effect contributed by one step between adjacent response categories, and together they sum to one.
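To see how the pieces combine, here is a minimal sketch using the posterior medians from the table above; it mirrors how brms composes mo() terms from the scale parameter and the simplex:

# effect of fatigue category x = b * D * sum(zeta[1:x])
b <- 0.44                          # average step effect (log scale)
zeta <- c(0.19, 0.29, 0.24, 0.28)  # simplex: share of the effect per step
D <- 4                             # number of steps (5 categories - 1)
round(b * D * cumsum(zeta), 2)     # ~ 0.33, 0.84, 1.27, 1.76 at fatigue = 1..4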
Graphing the results:
p_mo_1 <- plot(
conditional_effects(mo_1),
points = TRUE,
point_args = list(alpha = 0.1,
width = .2)
)[[1]]
The monotonic predictor here doesn't appear to make much of a difference.
We can formally compare the two models:
compare_mo_0 <- add_criterion(mo_0, "loo")
compare_mo_1 <- add_criterion(mo_1, "loo")
compare_mo <-
loo_compare(compare_mo_0, compare_mo_1, criterion = "loo")
compare_mo
elpd_diff se_diff
compare_mo_0 0.0 0.0
compare_mo_1 -4.1 1.7
No real difference.
We can also visually inspect the posterior predictions; they do not look too different:
Without monotonic predictors
brms::pp_check(mo_0)
With monotonic predictors
brms::pp_check(mo_1)
Again, no real difference. However, our model isn’t fitting well. What would you do to improve the model fit?
There are no default strategies. Modelling requires a combination of reasoning and art.
### Distributional models

A distributional model predicts the variance as well as the mean of a response. Paul Bürkner, the author of the brms package, offers the following simulated data for a distributional model (see the explanation here: https://cran.r-project.org/web/packages/brms/vignettes/brms_distreg.html):
# simulate two groups; note: no seed is set, so draws will vary
group <- rep(c("treat", "placebo"), each = 30)
symptom_post <-
  c(rnorm(30, mean = 1, sd = 2), rnorm(30, mean = 0, sd = 1))
dat1 <- data.frame(group, symptom_post)
head(dat1)
group symptom_post
1 treat -5.621357
2 treat 2.076932
3 treat 2.784972
4 treat 1.978924
5 treat -1.392703
6 treat 1.925476
Fitting the model we have:
fit1 <- brm(bf(symptom_post ~ group,
sigma ~ group),
data = dat1,
file = here::here("models", "distributional"),
family = gaussian())
par_fit1 <- parameters::model_parameters(fit1)
par_fit1
# Fixed effects
Parameter | Median | 89% CI | pd | % in ROPE | Rhat | ESS
---------------------------------------------------------------------------------
(Intercept) | -0.21 | [-0.38, -0.03] | 96.97% | 35.80% | 1.000 | 4801.00
sigma_Intercept | -0.51 | [-0.72, -0.31] | 100% | 0.73% | 1.000 | 2927.00
grouptreat | 1.50 | [ 0.88, 2.12] | 99.98% | 0.07% | 1.000 | 2219.00
sigma_grouptreat | 1.21 | [ 0.91, 1.49] | 100% | 0% | 1.000 | 2844.00
# Fixed effects sigma
Parameter | Median | 89% CI | pd | % in ROPE | Rhat | ESS
-------------------------------------------------------------------------------
sigma_Intercept | -0.51 | [-0.72, -0.31] | 100% | 0.73% | 1.000 | 2927.00
sigma_grouptreat | 1.21 | [ 0.91, 1.49] | 100% | 0% | 1.000 | 2844.00
# coefficient plot
plot(par_fit1)
Here we have the prediction plot:
plot(conditional_effects(fit1), points = TRUE)
We can see the larger variance in the treatment group.
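Because sigma is modelled on the log scale in a distributional model, we can recover each group's residual standard deviation by exponentiating the sigma coefficients from the table above:

exp(-0.51)         # placebo sd  ~ 0.60
exp(-0.51 + 1.21)  # treatment sd ~ 2.01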
When might this model apply?
Suppose we have two groups of patients: One group receives a treatment (e.g., an antidepressive drug) and another group receives placebo. Since the treatment may not work equally well for all patients, the symptom variance of the treatment group may be larger than the symptom variance of the placebo group after some weeks of treatment. (Paul Bürkner, brms vignette)
### Group-level intercepts

Typically our data have clustering. Often this clustering arises from dependencies owing to unmeasured grouping factors, such as students within schools, or repeated measures within students.
However, recall a fundamental assumption of regression:
The simple regression model assumes that the errors from the prediction line are independent, an assumption that is violated in time series, spatial, and multilevel settings (Gelman, Hill, and Vehtari 2020, 153)
Let’s look at environmental concern as predicted by religious group membership:
# 506 weeks between 30 June 2009 and 15 March 2019
nzl_11 <- nzl %>%
  dplyr::filter(Wave == 2019)
ri_relgroups <- brm(
bf(Env.ClimateChgConcern ~
(1 | Religious_Group)),
data = nzl_11,
file = here::here("models", "ri_relgroups"),
silent = FALSE
)
summary(ri_relgroups)
Family: gaussian
Links: mu = identity; sigma = identity
Formula: Env.ClimateChgConcern ~ (1 | Religious_Group)
Data: nzl_11 (Number of observations: 5618)
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup samples = 4000
Group-Level Effects:
~Religious_Group (Number of levels: 71)
Estimate Est.Error l-95% CI u-95% CI Rhat
sd(Intercept) 0.70 0.17 0.40 1.04 1.00
Bulk_ESS Tail_ESS
sd(Intercept) 845 1064
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Rhat
Intercept 5.07 0.13 4.82 5.34 1.00
Bulk_ESS Tail_ESS
Intercept 881 1404
Family Specific Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
sigma 1.71 0.02 1.68 1.74 1.00 6911
Tail_ESS
sigma 2903
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
The model can be written as:

\[\begin{align} y_{ig} \sim \text{Normal}(\mu_{ig}, \sigma) \\ \mu_{ig} = \alpha_0 + \alpha_g \\ \alpha_0 \sim \text{Normal}(3.5,10) \\ \alpha_g \sim \text{Normal}(0,\sigma_g) \\ \sigma_g \sim \text{Exp}(1) \\ \sigma \sim \text{Exp}(1) \end{align}\]
We can calculate the intraclass correlation coefficient for religious groups, which is the ratio of the group variance to the total variance, in this case:
\[\frac{\sigma^2_{\text{religious groups}}}{\sigma^2_{\text{religious groups}} + \sigma^2_{\text{individuals}}}\]
performance::icc(ri_relgroups)
# Intraclass Correlation Coefficient
Adjusted ICC: 0.143
Conditional ICC: 0.143
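We can verify this by hand from the posterior estimates above (sd(Intercept) = 0.70, sigma = 1.71):

0.70^2 / (0.70^2 + 1.71^2)  # ~ 0.143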
The authors of the performance package write (see the performance package documentation):
While the adjusted ICC only relates to the random effects, the conditional ICC also takes the fixed effects variances into account (see Nakagawa et al. 2017). Typically, the adjusted ICC is of interest when the analysis of random effects is of interest.
We can plot the group-level variances:
par_ri_relgroups <- parameters::model_parameters(ri_relgroups,
                                                 effects = "random")
plot(par_ri_relgroups, sort = TRUE)
Reference: https://m-clark.github.io/mixed-models-with-R/
With longitudinal data, we have repeated measures within individuals, which leads to clustering. We can adjust for this clustering by including a group-level intercept for each individual.
Let’s return to the Fatigue ~ Covid_Timeline model.
\[\begin{align} y_{ij} \sim \text{Ordered}(\mu_{ij}) \\ \text{CumLogit}(\mu_{ij}) = \alpha^c + \beta x_i \\ \alpha^c = \alpha_{0}^c + \alpha_j \\ \alpha_{0}^c \sim \text{Normal}(0,10) \\ \alpha_j \sim \text{Normal}(0,\sigma_j) \\ \sigma_j \sim \text{Exp}(1) \\ \beta \sim \text{Normal}(0,1) \end{align}\]
m1_l <- brm(
bf(HLTH.Fatigue_int ~
Covid_Timeline + (1|Id),
family = cumulative(link = "logit")),
data = nzl,
file = here::here("models", "ordinal_fatigue_longitudinal"),
silent = FALSE
)
summary(m1_l)
Family: cumulative
Links: mu = logit; disc = identity
Formula: HLTH.Fatigue_int ~ Covid_Timeline + (1 | Id)
Data: nzl (Number of observations: 11736)
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup samples = 4000
Group-Level Effects:
~Id (Number of levels: 5934)
Estimate Est.Error l-95% CI u-95% CI Rhat
sd(Intercept) 2.48 0.05 2.38 2.58 1.01
Bulk_ESS Tail_ESS
sd(Intercept) 870 2060
Population-Level Effects:
Estimate Est.Error l-95% CI
Intercept[1] -3.06 0.06 -3.18
Intercept[2] -0.02 0.05 -0.11
Intercept[3] 2.65 0.06 2.53
Intercept[4] 5.32 0.09 5.14
Covid_TimelinePreCOVID 0.23 0.04 0.15
Covid_TimelineJanFeb 0.27 0.11 0.05
Covid_TimelineEarlyMarch 0.19 0.12 -0.04
Covid_TimelineLockdown -0.38 0.15 -0.68
Covid_TimelinePostLockdown -0.06 0.10 -0.25
u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept[1] -2.95 1.00 2158 2852
Intercept[2] 0.07 1.00 2688 3297
Intercept[3] 2.76 1.00 2192 3020
Intercept[4] 5.49 1.00 2001 2816
Covid_TimelinePreCOVID 0.32 1.00 7122 3181
Covid_TimelineJanFeb 0.49 1.00 4779 3284
Covid_TimelineEarlyMarch 0.43 1.00 4527 3187
Covid_TimelineLockdown -0.08 1.00 4456 3136
Covid_TimelinePostLockdown 0.13 1.00 4723 3491
Family Specific Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
disc 1.00 0.00 1.00 1.00 1.00 4000
Tail_ESS
disc 4000
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
We graph the results:
plot(
  conditional_effects(
    m1_l,
    # spaghetti = TRUE,
    # nsamples = 100,
    categorical = TRUE,  # plot each response category separately
    prob = 0.89,
    re_formula = NA
  ),
  points = TRUE,
  point_args = list(alpha = 0.1,
                    width = .02)
)
### Multivariate responses

We can model several responses jointly, allowing their residuals to correlate. Here we predict warmth towards immigrants and warmth towards Muslims from (log) hours of news exposure. The model can be written as:

\[\begin{align} \begin{bmatrix} y^{1}_{i} \\ y^{2}_{i} \end{bmatrix} \sim \text{MVNormal}\left(\begin{bmatrix} \mu^{1}_{i} \\ \mu^{2}_{i} \end{bmatrix}, \boldsymbol{\Sigma}\right) \\ \mu^{n}_{i} = \alpha^{n} + \beta^{n} x_{i} \\ \alpha^{n} \sim \text{Normal}(3.5,10) \\ \beta^{n} \sim \text{Normal}(0,1) \\ \boldsymbol{\Sigma} = \begin{pmatrix} \sigma^{1} & 0 \\ 0 & \sigma^{2} \end{pmatrix} \boldsymbol{R} \begin{pmatrix} \sigma^{1} & 0 \\ 0 & \sigma^{2} \end{pmatrix} \\ \sigma^{n} \sim \text{Exp}(1) \\ \boldsymbol{R} \sim \text{LKJcorr}(2) \end{align}\]
f_mv <- brm(
  mvbind(Warm.Immigrants, Warm.Muslims) ~ log(Hours.News + 1),
  data = sub_nzl,
  file = here::here("models", "multivariate_warmth")
)
summary(f_mv)
Family: MV(gaussian, gaussian)
Links: mu = identity; sigma = identity
mu = identity; sigma = identity
Formula: Warm.Immigrants ~ log(Hours.News + 1)
Warm.Muslims ~ log(Hours.News + 1)
Data: sub_nzl (Number of observations: 293)
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup samples = 4000
Population-Level Effects:
Estimate Est.Error l-95% CI
WarmImmigrants_Intercept 4.54 0.16 4.23
WarmMuslims_Intercept 4.30 0.18 3.95
WarmImmigrants_logHours.NewsP1 0.08 0.10 -0.11
WarmMuslims_logHours.NewsP1 0.03 0.12 -0.20
u-95% CI Rhat Bulk_ESS
WarmImmigrants_Intercept 4.84 1.00 3094
WarmMuslims_Intercept 4.66 1.00 3101
WarmImmigrants_logHours.NewsP1 0.28 1.00 3046
WarmMuslims_logHours.NewsP1 0.26 1.00 3046
Tail_ESS
WarmImmigrants_Intercept 2693
WarmMuslims_Intercept 2861
WarmImmigrants_logHours.NewsP1 2395
WarmMuslims_logHours.NewsP1 2622
Family Specific Parameters:
Estimate Est.Error l-95% CI u-95% CI
sigma_WarmImmigrants 1.24 0.05 1.14 1.34
sigma_WarmMuslims 1.46 0.06 1.35 1.58
Rhat Bulk_ESS Tail_ESS
sigma_WarmImmigrants 1.00 2980 2955
sigma_WarmMuslims 1.00 3086 2924
Residual Correlations:
Estimate Est.Error
rescor(WarmImmigrants,WarmMuslims) 0.77 0.02
l-95% CI u-95% CI Rhat
rescor(WarmImmigrants,WarmMuslims) 0.72 0.81 1.00
Bulk_ESS Tail_ESS
rescor(WarmImmigrants,WarmMuslims) 3547 2954
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
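Note the strong residual correlation between the two warmth responses. From the summary estimates we can recover the implied residual covariance:

# residual covariance = rescor * sigma_1 * sigma_2
0.77 * 1.24 * 1.46  # ~ 1.39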
Coefficient plot:
brms::mcmc_plot(f_mv,
type = "areas",
prob = .89)
### Mediation

Suppose we want to ask whether the effect of the pandemic timeline on psychological distress is mediated by fatigue. We can use the bmlm package to draw the path diagram:

bmlm::mlm_path_plot(xlab = "Condition\n(X)",
                    mlab = "Mediator\n(M)",
                    ylab = "Distress\n(Y)")
First we prepare a data frame containing the exposure (Covid_Timeline), the mediator (fatigue), the outcome (Kessler-6 distress), and participant ids:
x <- nzl$Covid_Timeline
mF <- nzl$HLTH.Fatigue_int
y <- nzl$KESSLER6sum
id <- nzl$Id
datF <- data.frame(x, mF, y, id)

We then specify the mediator and outcome models; the (1 | c | id) syntax estimates group-level intercepts for each submodel and allows them to correlate across submodels:
path_m <- bf(
mF ~ x + (1 | c | id)
)
path_y <- bf(
y ~ x + mF + (1 | c | id)
)
f1 <- brm(
path_m + path_y + set_rescor(FALSE),
data = datF,
file = here::here("models/mediation-k6-covid-fatigue")
)
summary(f1)
Family: MV(gaussian, gaussian)
Links: mu = identity; sigma = identity
mu = identity; sigma = identity
Formula: mF ~ x + (1 | id)
y ~ x + mF + (1 | c | id)
Data: datF (Number of observations: 11735)
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup samples = 4000
Group-Level Effects:
~id (Number of levels: 5933)
Estimate Est.Error l-95% CI u-95% CI Rhat
sd(mF_Intercept) 0.81 0.01 0.79 0.83 1.00
sd(y_Intercept) 2.52 0.03 2.46 2.59 1.01
Bulk_ESS Tail_ESS
sd(mF_Intercept) 1271 2049
sd(y_Intercept) 1000 1890
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Rhat
mF_Intercept 2.58 0.01 2.55 2.60 1.01
y_Intercept 0.69 0.09 0.52 0.87 1.00
mF_xPreCOVID 0.08 0.01 0.05 0.11 1.00
mF_xJanFeb 0.09 0.04 0.02 0.17 1.00
mF_xEarlyMarch 0.06 0.04 -0.02 0.14 1.00
mF_xLockdown -0.12 0.05 -0.22 -0.02 1.00
mF_xPostLockdown -0.02 0.03 -0.08 0.04 1.00
y_xPreCOVID -0.04 0.05 -0.13 0.05 1.00
y_xJanFeb -0.19 0.12 -0.43 0.05 1.00
y_xEarlyMarch 0.23 0.12 -0.01 0.48 1.00
y_xLockdown 0.55 0.16 0.24 0.88 1.00
y_xPostLockdown 0.15 0.10 -0.04 0.34 1.00
y_mF 1.71 0.03 1.65 1.77 1.00
Bulk_ESS Tail_ESS
mF_Intercept 1654 2355
y_Intercept 1638 2484
mF_xPreCOVID 5789 3093
mF_xJanFeb 3596 3181
mF_xEarlyMarch 3688 2978
mF_xLockdown 3466 2915
mF_xPostLockdown 3998 3049
y_xPreCOVID 4813 3216
y_xJanFeb 3616 3138
y_xEarlyMarch 3418 3080
y_xLockdown 3601 3083
y_xPostLockdown 3569 3198
y_mF 1662 2226
Family Specific Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
sigma_mF 0.67 0.01 0.66 0.68 1.00 1521
sigma_y 2.10 0.02 2.06 2.14 1.00 1612
Tail_ESS
sigma_mF 2310
sigma_y 2807
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
From the posterior samples we can compute the direct, mediated, and total effects of Lockdown:

post1F <- brms::posterior_samples(f1)
post_marF <- post1F %>%
  transmute(
    a = b_mF_xLockdown,   # Lockdown -> Fatigue
    b = b_y_mF,           # Fatigue -> Distress
    cp = b_y_xLockdown,   # direct effect of Lockdown on Distress
    me = a * b,           # mediated (indirect) effect
    c = cp + me           # total effect
    # pme = me / c        # proportion mediated
  )
# posterior_summary(post_marF)
mcmc_intervals(post_marF)
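Reading the posterior medians off summary(f1), a back-of-the-envelope check of these quantities:

a <- -0.12   # Lockdown -> Fatigue
b <- 1.71    # Fatigue  -> Distress
cp <- 0.55   # Lockdown -> Distress (direct)
a * b        # indirect effect ~ -0.21
cp + a * b   # total effect    ~  0.34

We can also test these quantities directly with brms::hypothesis():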
h1 <- c(
  a = "mF_xLockdown = 0",
  b = "y_mF = 0",
  cp = "y_xLockdown = 0",
  me = "mF_xLockdown * y_mF = 0",
  c = "mF_xLockdown * y_mF + y_xLockdown = 0"
)
plot(
hypothesis(f1, h1)
)
### Latent profile analysis

Finally, a brief taster: latent profile analysis identifies distinct profiles, or clusters, of respondents (to be continued).
library(tidyLPA)

out <- nzl %>%
  dplyr::select(Hours.Internet,
                Hours.Exercise,
                Hours.Work) %>%
  dplyr::mutate_all(., scale)

out %>%
  single_imputation() %>%
  tidyLPA::estimate_profiles(3) %>%
  plot_profiles(add_line = TRUE)
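To choose the number of profiles, one might compare solutions on information criteria; a sketch, assuming the same pipeline as above:

out %>%
  single_imputation() %>%
  tidyLPA::estimate_profiles(1:4) %>%     # fit 1- to 4-profile solutions
  tidyLPA::compare_solutions(statistics = c("AIC", "BIC"))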