---
title: "Reporting Guide"
date: "2025-08-12"
bibliography: /Users/joseph/GIT/templates/bib/references.bib
editor_options:
  chunk_output_type: console
execute:
  warning: false
  error: false
  message: false
format:
  html:
    code-overflow: scroll
    highlight-style: kate
    code-tools:
      source: true
      toggle: false
    html-math-method: katex
    cap-location: margin
---
```{r}
#| echo: FALSE
#| warning: FALSE
# reference-location: margin
# citation-location: margin
# NOTE: the local source() calls below are commented out; JB uses them when working without wifi
# source("/Users/joseph/GIT/templates/functions/libs2.R")
# source("/Users/joseph/GIT/templates/functions/funs.R")
# ALERT: uncomment the source() call below to download the functions from JB's GitHub
# source(
# "https://raw.githubusercontent.com/go-bayes/templates/main/functions/experimental_funs.R"
# )
# fonts for graphs (extrafont); tinytex supports pdf rendering
library("tinytex")
library("extrafont")
loadfonts(device = "all")
```
::: {.callout-note}
## Readings for Workshop
- Barrett M (2023). _ggdag: Analyze and Create Elegant Directed Acyclic Graphs_. R package version 0.2.7.9000, <https://github.com/malcolmbarrett/ggdag>
- "An Introduction to Directed Acyclic Graphs", <https://r-causal.github.io/ggdag/articles/intro-to-dags.html>
- "Common Structures of Bias", <https://r-causal.github.io/ggdag/articles/bias-structures.html>
:::
::: {.callout-important}
## Key Concepts
- **Confounding**
- **Causal Directed Acyclic Graph**
- **Five Elementary Causal Structures**
- **d-separation**
- **Back door path**
- **Conditioning**
- **Fork bias**
- **Collider bias**
- **Mediator bias**
- **Four Rules of Confounding Control**
:::
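
To make these structures concrete, the sketch below builds a toy DAG with `ggdag` (the variable names `A`, `Y`, `L`, `M`, and `S` are illustrative placeholders, not drawn from any particular study) and asks which variables close the back-door paths from exposure to outcome.

```{r}
#| eval: false
# illustrative only: fork (L -> A, L -> Y), mediator (A -> M -> Y), collider (A -> S <- Y)
library(ggdag)
library(ggplot2)

toy_dag <- dagify(
  Y ~ A + L + M,
  M ~ A,
  A ~ L,
  S ~ A + Y,            # S is a collider; conditioning on it opens a biasing path
  exposure = "A",
  outcome  = "Y",
  labels = c(A = "exposure", Y = "outcome", L = "confounder (fork)",
             M = "mediator", S = "collider")
)

# draw the DAG
ggdag(toy_dag, text = TRUE) + theme_dag()

# which adjustment sets satisfy the back-door criterion (d-separation)?
ggdag_adjustment_set(toy_dag, exposure = "A", outcome = "Y")
```
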
# A Practical Checklist for Your Sparrc Study
1. **State a well-defined treatment.**
Specify the hypothetical intervention precisely enough that every member of the target population could, in principle, receive it. For example, 'weight loss' is too vague—people lose weight via exercise, diet, depression, cancer, amputation, and more [@hernan2024WHATIF]. A clearer intervention is: *“engage in vigorous physical activity for ≥30 minutes per day”* [@hernan2008aObservationalStudiesAnalysedLike]. Precision here underwrites consistency (see step 5) and interpretability downstream.
2. **State a well-defined outcome.**
Define the outcome so the causal contrast is meaningful and temporally anchored. ‘Well-being’ is underspecified; *'psychological distress one year post-intervention measured with the Kessler-6'* is interpretable and reproducible [@kessler2002]. Include timing, scale, and instrument.
3. **Clarify the target population.**
Say exactly *who* you aim to inform. Eligibility rules define the **source population**, but sampling and participation can yield a **study population** with a different distribution of effect modifiers [@dahabreh2019; @dahabreh2019generalizing; @stuart2018generalizability; @bulbulia2024wierd]. If you intend to generalise beyond the source population (transport), articulate the additional conditions and knowledge required [@deffner2022; @bareinboim2013general; @westreich2017transportability; @dahabreh2019; @pearl2022external].
4. **Evaluate whether treatment groups are exchangeable given measured covariates.**
Make the case that potential outcomes are independent of treatment conditional on covariates, i.e., $Y(a)\coprod A\mid X$ [@neal2020introduction; @morgan2014; @angrist2009mostly; @hernan2024WHATIF]. Use design and diagnostics (design diagrams/DAGs, subject-matter arguments, pre-treatment covariate balance, overlap checks); a minimal balance-check sketch follows this checklist. If exchangeability is doubtful, redesign (e.g., stronger measurement, alternative identification strategies) rather than rely solely on modelling.
5. **Ensure treatments to be compared satisfy causal consistency.**
Consistency requires that, for units receiving a treatment *version* compatible with level $a$, the observed outcome equals $Y(a)$; it also presumes well-defined versions and no interference between units [@vanderweele2013; @hernan2024WHATIF]. When multiple versions exist, either refine the intervention so versions are irrelevant to $Y(a)$, or condition on version-defining covariates in ways that preserve your estimand.
6. **Check the positivity (overlap) assumption.**
Each treatment level must occur with non-zero probability at every covariate profile needed for exchangeability—and, when versions exist, for consistency as well [@westreich2010]. Diagnose limited overlap (propensity score distributions, extreme weights), and consider design-stage remedies (trimming, restriction, adaptive sampling) before estimation; an overlap-diagnostic sketch follows this checklist.
7. **Ensure measurement aligns with the scientific question.**
Verify that constructs are captured by instruments whose error structures won’t distort the causal contrast of interest. Be explicit about likely forms of measurement error (classical, Berkson, differential, misclassification) and their structural implications for bias [@hernan2024WHATIF; @vanderweele2012MEASUREMENT; @bulbulia2024wierd]. Where feasible, incorporate validation studies, multiple indicators, or calibration models.
8. **Preserve representativeness from start to finish.**
End-of-study analyses should reflect the target population’s distribution of effect modifiers. Differential attrition, non-response, or measurement processes tied to treatment and outcomes can induce selection bias in the presence of true effects [@hernan2017SELECTIONWITHOUTCOLLIDER; @hernan2017per; @hernan2004STRUCTURAL; @bulbulia2024wierd]. Plan and justify strategies such as inverse probability weighting for censoring, multiple imputation under defensible mechanisms, sensitivity analyses for missing not at random, and careful timing of measurements; a censoring-weight sketch follows this checklist.
9. **Document the reasoning that supports steps 1–8.**
Make assumptions, disagreements, and judgement calls legible: register or otherwise time-stamp your analytic plan; include identification arguments (e.g., DAGs), code, and data where possible; report robustness and sensitivity analyses; and explain decisions about design restrictions, modelling choices, and transportability [@ogburn2021]. Transparent reasoning is a scientific result in its own right.
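
For step 4, a covariate-balance check in the spirit of this checklist might look like the following sketch. It uses the `WeightIt` and `cobalt` packages; `df`, the exposure `a`, and the baseline covariates are placeholder names, and the choice of `method = "ebal"` and `estimand = "ATE"` is illustrative rather than prescriptive.

```{r}
#| eval: false
# sketch only: assess pre-treatment covariate balance before and after weighting
library(WeightIt)
library(cobalt)

w_fit <- weightit(
  a ~ age + male + education + baseline_outcome,  # placeholder covariates
  data     = df,
  method   = "ebal",      # entropy balancing; "ps" or "energy" are alternatives
  estimand = "ATE"
)

# standardised mean differences (and variance ratios) before vs after weighting
bal.tab(w_fit, stats = c("m", "v"), thresholds = c(m = 0.1))

# Love plot of covariate balance on the exposure
love.plot(w_fit, binary = "std", thresholds = c(m = 0.1))
```
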
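
For step 6, overlap can be inspected directly from an explicit propensity-score model. The sketch below again uses placeholder names (`df`, `a`, and the covariates) and rule-of-thumb cut-offs that would need to be justified in advance.

```{r}
#| eval: false
# sketch only: diagnose limited overlap and extreme weights
library(ggplot2)

ps_fit <- glm(a ~ age + male + education + baseline_outcome,
              data = df, family = binomial())
df$ps  <- predict(ps_fit, type = "response")

# propensity score distributions by exposure group
ggplot(df, aes(x = ps, fill = factor(a))) +
  geom_density(alpha = 0.4) +
  labs(x = "estimated propensity score", fill = "exposure")

# extreme inverse-probability weights signal positivity problems
ipw <- ifelse(df$a == 1, 1 / df$ps, 1 / (1 - df$ps))
summary(ipw)
mean(df$ps < 0.05 | df$ps > 0.95)   # share of observations near the boundaries
```
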
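
For step 8, one common design combines the treatment weights above with inverse probability of censoring weights. The sketch below assumes a retention indicator `observed_t2` (1 if the participant was observed at the outcome wave) and reuses `w_fit` from the balance sketch; all names are placeholders, and multiple imputation is an alternative or complement under a defensible missingness mechanism.

```{r}
#| eval: false
# sketch only: inverse probability of censoring weights for loss to follow-up
cens_fit <- glm(
  observed_t2 ~ a + age + male + education + baseline_outcome,  # placeholders
  data   = df,
  family = binomial()
)

p_obs <- predict(cens_fit, type = "response")
ipcw  <- ifelse(df$observed_t2 == 1, 1 / p_obs, 0)

# combined weight: treatment weight * censoring weight (stabilise/trim as pre-specified)
df$w_comb <- w_fit$weights * ipcw

# multiple imputation alternative (e.g. with mice):
# library(mice); imp <- mice(df, m = 10)
```
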
## Projects with Applied Interests
1. **Translate effects into absolute, population-level impact.**
Report absolute risk differences for the target population. Always show the baseline so deltas are interpretable.
2. **Show heterogeneity and targetability.**
Where possible, identify who is affected, who is unaffected, and who may be harmed. If appropriate, provide a simple, auditable policy rule for targeting (and a plain-language rationale); see the causal-forest sketch after this list.
3. **Express uncertainty in decision terms.**
Go beyond confidence intervals: report probabilities that an option is optimal, expected net benefit, expected regret, and, when helpful, the value of additional information. Use simulation to make uncertainty tangible; see the simulation sketch after this list.
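
For point 2, one way to probe heterogeneity is a causal forest. The sketch below uses the `grf` package with placeholder objects (`df`, covariates, a binary exposure `a`, and outcome `y`); any targeting rule derived this way should be pre-specified and auditable.

```{r}
#| eval: false
# sketch only: conditional average treatment effects with a causal forest
library(grf)

X  <- model.matrix(~ age + male + education + baseline_outcome - 1, data = df)
cf <- causal_forest(X, Y = df$y, W = df$a)

tau_hat <- predict(cf)$predictions   # individual-level CATE estimates
average_treatment_effect(cf)         # ATE with a standard error

# crude look at who benefits and who does not
summary(tau_hat)
best_linear_projection(cf, X[, c("age", "education")])
```
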
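
For points 1 and 3, simulation draws of the causal contrast (for example from the `clarify` package or a bootstrap) can be summarised directly in decision terms. The sketch below assumes a vector `draws` of simulated risk differences; the numbers are placeholders used only to show the calculations.

```{r}
#| eval: false
# sketch only: decision-oriented summaries from simulation draws of a risk difference
set.seed(2025)
draws <- rnorm(4000, mean = 0.03, sd = 0.015)   # placeholder draws of E[Y(a)] - E[Y(a')]

baseline_risk <- 0.20                  # always report the baseline so deltas are interpretable
p_benefit     <- mean(draws > 0)       # probability the intervention is the better option
exp_benefit   <- mean(pmax(draws, 0))  # expected net benefit (outcome units)
exp_regret    <- mean(pmax(-draws, 0)) # expected regret of intervening

c(baseline_risk    = baseline_risk,
  p_benefit        = p_benefit,
  expected_benefit = exp_benefit,
  expected_regret  = exp_regret)
```
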
<!-- ### 1 Nine‑Step Causal Inference Workflow -->
<!-- | Step | Requirement | Key Question(s) | Example Good vs. Poor Definition | Core Assumption | -->
<!-- |------|-------------|-----------------|----------------------------------|-----------------| -->
<!-- | **1** | **State a Well-Defined Treatment** | What specific hypothetical intervention will be applied to all population members? | ✅ "Weight loss by at least 30 minutes of vigorous exercise each day" <br> ❌ "Weight loss" (many pathways: diet, exercise, illness, surgery) | **Consistency** | -->
<!-- | **2** | **State a Well-Defined Outcome** | What specific outcome measure will be assessed, when, and how? | ✅ "Psychological distress measured one year after intervention using Kessler-6 scale" <br> ❌ "Well-being" (vague construct) | **Interpretability** | -->
<!-- | **3** | **Clarify the Target Population** | To whom will results generalise? How does the source population relate to the target? | ✅ "New Zealand adults aged 18+ as represented in NZAVS weighted by census demographics" <br> ❌ "People in general" | **Generalisability** | -->
<!-- | **4** | **Evaluate Exchangeability** | Are treatment groups independent of potential outcomes conditional on measured covariates? | Can we achieve conditional independence through measured confounders? Consider unmeasured confounding. | **Conditional Exchangeability** | -->
<!-- | **5** | **Ensure Causal Consistency** | Are treatment versions independent of potential outcomes conditional on covariates? | Do different ways of receiving "treatment" yield the same potential outcomes? | **Consistency** | -->
<!-- | **6** | **Check Positivity** | Is there non-zero probability of each treatment level at every covariate combination? | Examine overlap in treatment assignment across the full covariate space. | **Positivity** | -->
<!-- | **7** | **Evaluate Measurement Validity** | Do measures relate to scientific questions? What are sources of measurement error bias? | Consider structural features of measurement error and their impact on causal inference. | **Measurement Validity** | -->
<!-- | **8** | **Address Attrition and Selection** | Does the final study group represent the target population throughout the study period? | Develop strategies for attrition, non-response, and measurement error bias. | **Representativeness** | -->
<!-- | **9** | **Document Transparently** | Have all reasoning, evidence, and decisions for steps 1-8 been clearly communicated? | Provide thorough documentation including causal assumptions and disagreements. | **Transparency** | -->
<!-- ## STEP 1: Ask Your Causal Question -->
<!-- - **State your question:** is my question clearly stated? If not, state it. -->
<!-- - **Relevance:** have I explained its importance? If not, explain. -->
<!-- - **Ethics** how might this question affect people? How might not investigating this question affect people? -->
<!-- - **Causality:** Is my question causal? If not, refine your question. -->
<!-- - **Heterogeneous Treatment Effects**: Do I want to examine who responds differently (CATE)? -->
<!-- - **Subgroup analysis:** does my question involve a subgroup (e.g., cultural group)? If not, develop a subgroup analysis question. -->
<!-- - **Explain the Framework:** can I explain the causal inference framework and convey the gist to non-specialists? -->
<!-- #### Determine Data Requirements -->
<!-- - **Data types:** is treatment assignment in my data randomised? -->
<!-- - **Time-series data:** are my data time-series? If not, reconsider your causal question. -->
<!-- - **Data waves:** do I have at least three waves of data? If not, beware of confounding control issues. -->
<!-- - **Data source:** what are the biases in collection/reporting -->
<!-- #### Determine the Outcome -->
<!-- - **Outcome variable:** is the outcome variable *Y* defined? If not, define it. -->
<!-- - **Multiple outcomes:** are there multiple outcomes? If yes, explain and define them. -->
<!-- - **Outcome relevance:** can I explain how the outcome variable/s relate to my question? If not, clarify. -->
<!-- - **Outcome type:** is my outcome binary and rare? If yes, consider logistic regression. If my outcome is continuous, consider z-transforming it or categorising it (consult an expert). -->
<!-- - **Outcome timing:** does the outcome appear after the exposure? It should. -->
<!-- #### Determine the Exposure -->
<!-- - **Exposure variable:** is the exposure variable *A* defined? If not, define it. -->
<!-- - **Multiple exposures:** are there multiple exposures? If yes, reassess; if only one exposure, proceed. -->
<!-- - **Exposure relevance:** can I explain how the exposure variable relates to my question? If not, clarify. -->
<!-- - **Positivity:** can we intervene on the exposure at all levels of the covariates? We should be able to. -->
<!-- - **Consistency:** can I interpret what it means to intervene on the exposure? -->
<!-- - **Exchangeability:** are different versions of the exposure conditionally exchangeable given measured baseline confounders? They should be. -->
<!-- - **Exposure type:** is the exposure binary or continuous? -->
<!-- - **Shift intervention**: Am I contrasting static interventions or modified treatment policies? -->
<!-- - **Exposure timing:** Does the exposure appear before the outcome? It should. -->
<!-- #### Account for Confounders -->
<!-- - **Baseline confounders:** Have I defined my baseline confounders *L*? I should have. -->
<!-- - **Justification:** Can I explain how the baseline confounders could affect both *A* and *Y*? I should be able to. -->
<!-- - **Timing:** Are the baseline confounders measured before the exposure? They should be. -->
<!-- - **Inclusion:** Are the baseline measure of the exposure and the baseline outcome included in the set of baseline confounders? They should be. -->
<!-- - **Sufficiency:** Are the baseline confounders sufficient to ensure balance on the exposure, such that *A* is independent of *Y* given *L*? If not, plan a sensitivity analysis. -->
<!-- - **Confounder type:** Are the confounders continuous or binary? If so, consider converting them to z-scores. If they are categorical with three or more levels, do not convert them to z-scores. -->
<!-- #### Draw a Causal Diagram with Unmeasured Confounders -->
<!-- - **Unmeasured confounders:** Does previous science suggest the presence of unmeasured confounders? If not, expand your understanding. -->
<!-- - **Causal diagram:** Have I drawn a causal diagram (DAG) to highlight both measured and unmeasured sources of confounding? -->
<!-- - **Measurement error:** Have I described potential biases from measurement errors? If not, we'll discuss later. -->
<!-- - **Temporal order:** Does my DAG have time indicators to ensure correct temporal order? It should. -->
<!-- - **Time consistency:** Is my DAG organized so that time follows in a consistent direction? It should. -->
<!-- #### Identify the Estimand -->
<!-- - ATE or CATE or both? -->
<!-- #### Understanding Source and Target Populations -->
<!-- - **Populations identified:** Have I explained how my sample relates to my target populations? I should have. -->
<!-- - **Generalisability and transportability:** Have I considered whether my results generalise to different populations? I should have. -->
<!-- #### Set Eligibility Criteria -->
<!-- - **Criteria stated:** Have I stated the eligibility criteria for the study? I should have. -->
<!-- #### Describe Sample Characteristics -->
<!-- - **Descriptive statistics:** have I provided descriptive statistics for demographic information taken at baseline? I should have. -->
<!-- - **Exposure change:** Have I demonstrated the magnitudes of change in the exposure from baseline to the exposure interval? I should have. -->
<!-- - **References:** Have I included references for more information about the sample? I should have. -->
<!-- #### Addressing Missing Data -->
<!-- - **Missing data checks:** Have I checked for missing data? I should have. -->
<!-- - **Missing data plan:** If there is missing data, have I described how I will address it? I should have. -->
<!-- #### Selecting the Model Approach: If Not Using Machine Learning -->
<!-- - **Approach decision:** Have I decided on using G-computation, IPTW, or Doubly-Robust Estimation? I should have. -->
<!-- - **Interactions:** If not using machine learning, have I included the interaction of the exposure and baseline covariates? I should have. -->
<!-- - **Big data:** If I have a large data set, should I include the interaction of the exposure, group, and baseline confounders? I should consider it. -->
<!-- - **Model specification:** have I double-checked the model specification? I should. -->
<!-- - **Outcome assessment:** If the outcome is rare and binary, have I specified logistic regression? If it's continuous, have I considered converting it to z-scores? -->
<!-- - **Sensitivity analysis:** am I planning a sensitivity analysis using simulation? If yes, describe it (e.g. E-values.) -->
<!-- #### Machine Learning -->
<!-- Have I explained how causal forests work -- also the workshop R package -->
<!-- ### Clarify unmeasured pre-treatment covariates -->
<!-- Let **U** denote unmeasured pre-treatment covariates that may bias the statistical association between *A* and *Y* independently of the measured covariates. -->
<!-- #### Consider: -->
<!-- - To affect *Y* and *A*, *U* must occur before *A*. -->
<!-- - It is useful to draw a causal diagram to illustrate all potential sources of bias. -->
<!-- - Causal diagrams are qualitative tools that require specialist expertise. We cannot typically obtain a causal graph from the data. -->
<!-- - A causal diagram should include only as much information as is required to assess confounding. See @fig-dag-outcomewide for an example. -->
<!-- - Because we cannot ensure the absence of unmeasured confounders in observational settings, it is vital to conduct sensitivity analyses of the results. For sensitivity analyses, we use E-values. -->
<!-- ### Choose the scale for a causal contrast -->
<!-- Difference/ Risk ratio? -->
<!-- #### Consider: -->
<!-- - In the causal inference literature, the concept we use to make sense of stratum specific comparisons is called "effect modification." -->
<!-- - By inferring effects within strata, we may evaluate whether the effects of different exposures or treatments on some well-defined outcome (measured in some well-defined time-period after the exposure) differ depending on group measurement. -->
<!-- - The logic of effect modification differs slightly from that of interaction. -->
<!-- #### Aside: extensions -->
<!-- For continuous exposures, we must stipulate the level of contrast for the exposure (e.g. weekly versus monthly church attendance): -->
<!-- $$ATE_{a,a'} = E[Y(a) - Y(a')]$$ -->
<!-- This essentially denotes an average treatment effect comparing the outcome under treatment level $A$ to the outcome under treatment level $A'$. Likewise: -->
<!-- $$ATE_{a/a'} = \frac{E[Y(a)]}{E[Y(a')]}$$ -->
<!-- This defines the contrast of $A$ and $A'$ on a ratio scale. -->
<!-- #### Describe the population(s) for whom the intended study is meant to generalise by distinguishing between source and target populations. -->
<!-- Consider the following concepts: -->
<!-- - **Source population**: a source population is where we gather our data for a study. We pull our specific sample from this group. It needs to mirror the broader group for our conclusions to be valid and widely applicable. -->
<!-- - **Target population**: the target population is the larger group we aim to apply our study's results to. It could be defined by location, demographics, or specific conditions. The closer the source matches the target in ways that are relevant to our causal questions, the stronger our causal inferences about the target population will be. -->
<!-- - **Generalisability** refers to the ability to apply the causal effects estimated from a sample to the population it was drawn from. In simpler terms, it deals with the extrapolation of causal knowledge from a sample to the broader population. This concept is also called "external validity". -->
<!-- $$\text{Generalisability} = PATE \approx ATE_{\text{sample}}$$ -->
<!-- - **Transportability** refers to the ability to extrapolate causal effects learned from a source population to a target population when certain conditions are met. It deals with the transfer of causal knowledge across different settings or populations. -->
<!-- $$\text{Transportability} = ATE_{\text{target}} \approx f(ATE_{\text{source}}, T)$$ -->
<!-- where $f$ is a function and $T$ encodes the knowledge needed to map results from the source population to the target population. To achieve transportability, we need information about the source and target populations and an understanding of how the relationships between treatment, outcome, and covariates differ between the populations. Assessing transportability requires scientific knowledge. -->
<!-- ### Summary Step 1: Consider how much we need to do when asking a causal question! -->
<!-- We discover that asking a causal question is a multifaceted task. It demands careful definition of the outcome, including its timing, the exposure, and covariates. It also requires selecting the appropriate scale for causal contrast, controlling for confounding, and potentially adjusting for sample weights or stratification. Finally, when asking a causal question, we must consider for whom the results apply. Only after following these steps can we then ask: "How may we answer this causal question?" -->
<!-- ## STEP 2: Answer Your Question -->
<!-- #### Obtain longitudinal data -->
<!-- Note that causal inference from observational data turns on the appropriate temporal ordering of the key variables involved in the study. -->
<!-- Recall that we have defined: -->
<!-- - **A**: Our exposure or treatment variable, denoted as **A**. Here we consider the example of 'Church attendance'. -->
<!-- - **Y**: The outcome variable we are interested in, represented by **Y**, is psychological distress. We operationalise this variable through the 'Kessler-6' distress scale. -->
<!-- - **L**: The confounding variables, collectively referred to as **L**, represent factors that can independently influence both **A** and **Y**. For example, socio-economic status could be a confounder that impacts both the likelihood of church attendance and the levels of psychological distress. -->
<!-- Given the importance of temporal ordering, we must now define time: -->
<!-- - **t** $\in$ **T**: let $t$ denote a measurement interval within a multiwave panel study with **T** measurement intervals, -->
<!-- where $t_{\text{exposure}}$ denotes the measurement interval for the exposure. Longitudinal data collection gives us the ability to establish a causal model such that: -->
<!-- $$t_{confounders} < t_{exposure}< t_{outcome}$$ -->
<!-- To minimise the possibility of time-varying confounding and obtain the clearest effect estimates, we should acquire the most recent values of $\mathbf{L}$ preceding $A$ and the latest values of $A$ before $Y$. -->
<!-- Note that in @fig-dag-outcomewide we use the prefixes "t0, t1, and t2" to denote temporal ordering. We include in the set of baseline confounders the pre-exposure measurements of *A* and *Y*. This allows for more substantial confounding control. For an unmeasured confounder to affect both the exposure and the outcome, it would need to do so independently of the pre-exposure confounders. Additionally, including the baseline exposure gives us an effect estimate for the incidence exposure, rather than the prevalence exposure. This helps us to assess the expected change in the outcome were we to initiate a change in the exposure. -->
<!-- ### Include the measured exposure with baseline covariates -->
<!-- Controlling for prior exposure enables the interpretation of the effect estimate as a change in the exposure in a manner akin to a randomised trial. We propose that the effect estimate with prior control for the exposure estimates the "incidence exposure" rather than the "prevalence exposure" [@danaei2012]. It is crucial to estimate the incidence exposure because if the effects of an exposure are harmful in the short term such that these effects are not subsequently measured, a failure to adjust for prior exposure will yield the illusion that the exposure is beneficial. Furthermore, this approach aids in controlling for unmeasured confounding. For such a confounder to explain away the observed exposure-outcome association, it would need to do so independently of the prior level of the exposure and outcome. -->
<!-- ### State the eligibility criteria for participation -->
<!-- This step is invaluable for assessing whether we are answering the causal question that we have asked. -->
<!-- #### Consider: -->
<!-- - Generalisability: we cannot evaluate inferences to a target group from the source population if we do not describe the source population -->
<!-- - Eligibility criteria will help us to assess whether we have correctly evaluated potential measurement bias/error in our instruments. For example, the New Zealand Attitudes and Values Study is a national probability study of New Zealanders. Individuals were randomly selected from the country's electoral roll. From these invitations, the response rate was typically less than 15%. How might this process of recruitment affect the generalisability and transportability of our results? -->
<!-- - Aside: discuss per protocol effects/ intention to treat effects -->
<!-- ### Determine how missing data will be handled -->
<!-- - As we consider in this workshop, loss to follow up and non-response opens sources for bias. We must develop a strategy for handling missing data. -->
<!-- ### State a statistical model -->
<!-- The models you considered on Day 1 of this workshop are G-computation, Inverse Probability of Treatment Weighting, and Doubly-Robust estimation. -->
<!-- ### Reporting -->
<!-- Consider the following ideas about how to report your model: -->
<!-- - **Estimator**: Doubly robust where possible. -->
<!-- - **Propensity Score Reporting:** Detail the process of propensity score derivation, including the model used and any variable transformations. -->
<!-- - **WeightIt Package:** Explicitly mention the use of the 'WeightIt' package in R, including any specific options or parameters used in the propensity score estimation process. -->
<!-- - **Method Variations:** Report if different methods were used to obtain propensity scores, and the reasons behind the choice of methods such as 'ebal', 'energy', and 'ps'. -->
<!-- - **Continuous Exposures:** Highlight that for continuous exposures, only the 'energy' option was used for propensity score estimation. -->
<!-- - **Binary Exposure** Justify your cutpoints by referencing theory/existing knowledge *in advance* of conducting the analysis. -->
<!-- - **Subgroup Estimation:** Confirm that the propensity scores for subgroups were estimated separately, and discuss how the weights were subsequently combined with the original data. -->
<!-- - **Covariate Balance:** Include a Love plot to visually represent covariate balance on the exposure both before and after weighting. -->
<!-- - **Weighting Algorithm Statistics:** Report the statistics for the weighting algorithms as provided by the WeightIt package, including any measures of balance or fit. -->
<!-- - **Outcome Regression Model:** Clearly report the type of regression model used to estimate outcome model coefficients (e.g., linear regression, Poisson, binomial), and mention if the exposure was interacted with the baseline covariates. Do not report model coefficients as these have no interpretation. -->
<!-- - **Subgroup Interaction:** Address whether the subgroup was included separately as an interaction in the outcome model, and if the model successfully converged. -->
<!-- - **Machine Learning Using `lmtp`** If using the `lmtp` package, do a stratified analysis. (see today's lab) -->
<!-- - **Model coefficients:** note that the model coefficients should not be interpreted, as they are not meaningful in this context. -->
<!-- - **Confidence intervals and standard errors:** Describe the methods used to derive confidence intervals and standard errors, noting the use of the 'clarify' package in R for simulation based inference. -->
<!-- ### Example of how to report a doubly robust method in your report -->
<!-- The Doubly Robust Estimation method for Subgroup Analysis Estimator is a sophisticated tool combining features of both IPTW and G-computation methods, providing unbiased estimates if either the propensity score or outcome model is correctly specified. The process involves five main steps: -->
<!-- **Step 1** involves the estimation of the propensity score, a measure of the conditional probability of exposure given the covariates and the subgroup indicator. This score is calculated using statistical models such as logistic regression, with the model choice depending on the nature of the data and exposure. Weights for each individual are then calculated using this propensity score. These weights depend on the exposure status and are computed differently for exposed and unexposed individuals. The estimation of propensity scores is performed separately within each subgroup stratum. -->
<!-- **Step 2** focuses on fitting a weighted outcome model, making use of the previously calculated weights from the propensity scores. This model estimates the outcome conditional on exposure, covariates, and subgroup, integrating the weights into the estimation process. Unlike in propensity score model estimation, covariates are included as variables in the outcome model. This inclusion makes the method doubly robust - providing a consistent effect estimate if either the propensity score or the outcome model is correctly specified, thereby reducing the assumption of correct model specification. -->
<!-- **Step 3** entails the simulation of potential outcomes for each individual in each subgroup. These hypothetical scenarios assume universal exposure to the intervention within each subgroup, regardless of actual exposure levels. The expectation of potential outcomes is calculated for each individual in each subgroup, using individual-specific weights. These scenarios are performed for both the current and alternative interventions. -->
<!-- **Step 4** is the estimation of the average causal effect for each subgroup, achieved by comparing the computed expected values of potential outcomes under each intervention level. The difference represents the average causal effect of changing the exposure within each subgroup. -->
<!-- **Step 5** involves comparing differences in causal effects across groups by calculating the differences in the estimated causal effects between different subgroups. Confidence intervals and standard errors for these calculations are determined using simulation-based inference methods [@greifer2023]. This step allows for a comprehensive comparison of the impact of different interventions across various subgroups, while incorporating uncertainty. -->
<!-- ### Inference -->
<!-- Consider the following ideas about what to discuss in one's findings. The order of exposition might be different. -->
<!-- 1. **Summary of results**: What did you find? -->
<!-- 2. **Interpretation of E-values:** Interpret the E-values used for sensitivity analysis. State what they represent in terms of the robustness of the findings to potential unmeasured confounding. -->
<!-- 3. **Causal Effect Interpretation:** What is the interest of the effect, if any, if an effect was observed? Interpret the average causal effect of changing the exposure level within each subgroup, and discuss its relevance to the research question. -->
<!-- 4. **Comparison of Subgroups:** Discuss how differences in causal effect estimates between different subgroups, if observed, or if not observed, contribute to the overall findings of the study. -->
<!-- 5. **Uncertainty and Confidence Intervals:** Consider the uncertainty around the estimated causal effects, and interpret the confidence intervals to understand the precision of the estimates. -->
<!-- 6. **Generalisability and Transportability:** Reflect on the generalisability of the study results to other contexts or populations. Discuss any factors that might influence the transportability of the causal effects found in the study. (Again see lecture 9.) -->
<!-- 7. **Assumptions and Limitations:** Reflect on the assumptions made during the study and identify any limitations in the methodology that could affect the interpretation of results. State that the implications of different intervention levels on potential outcomes are not analysed. -->
<!-- 8. **Theoretical Relevance**: How are these findings relevant to existing theories? -->
<!-- 9. **Replication and Future Research:** Consider how the study could be replicated or expanded upon in future research, and how the findings contribute to the existing body of knowledge in the field. -->
<!-- 10. **Real-world Implications:** Discuss the real-world implications of the findings, and how they could be applied in policy, practice, or further research. -->
<!-- ## Appendix A: Details of Estimation Approaches -->
<!-- ### G-computation for Subgroup Analysis Estimator -->
<!-- **Step 1:** Estimate the outcome model. Fit a model for the outcome $Y$, conditional on the exposure $A$, the covariates $L$, and subgroup indicator $G$. This model can be a linear regression, logistic regression, or another statistical model. The goal is to capture the relationship between the outcome, exposure, confounders, and subgroups. -->
<!-- $$ \hat{E}(Y|A,L,G) = f_Y(A,L,G; \theta_Y) $$ -->
<!-- This equation represents the expected value of the outcome $Y$ given the exposure $A$, covariates $L$, and subgroup $G$, as modelled by the function $f_Y$ with parameters $\theta_Y$. This formulation allows for the prediction of the average outcome $Y$ given certain values of $A$, $L$, and $G$. -->
<!-- **Step 2:** Simulate potential outcomes. For each individual in each subgroup, predict their potential outcome under the intervention $A=a$ using the estimated outcome model: -->
<!-- $$\hat{E}[Y(a)|G=g] = \frac{1}{N_g} \sum_{i:G_i=g} \hat{E}[Y|A=a, L=L_i, G=g; \hat{\theta}_Y]$$ -->
<!-- We also predict the potential outcome for everyone in each subgroup under the causal contrast, setting the intervention for everyone in that group to $A=a'$: -->
<!-- $$\hat{E}[Y(a')|G=g] = \frac{1}{N_g} \sum_{i:G_i=g} \hat{E}[Y|A=a', L=L_i, G=g; \hat{\theta}_Y]$$ -->
<!-- In these equations, $Y$ represents the potential outcome, $A$ is the intervention, $L$ are the covariates, $G=g$ represents the subgroup, and $\theta_Y$ are the parameters of the outcome model. -->
<!-- **Step 3:** Calculate the estimated difference for each subgroup $g$: -->
<!-- $$\hat{\delta}_g = \hat{E}[Y(a)|G=g] - \hat{E}[Y(a')|G=g]$$ -->
<!-- This difference $\hat{\delta}_g$ represents the average causal effect of changing the exposure from level $a'$ to level $a$ within each subgroup $g$. -->
<!-- We use simulation-based inference methods to compute standard errors and confidence intervals [@greifer2023]. -->
<!-- **Step 4:** Compare differences in causal effects by subgroups: -->
<!-- $$\hat{\gamma} = \hat{\delta}_g - \hat{\delta}_{g'}$$ -->
<!-- where, -->
<!-- $$\hat{\gamma} = \overbrace{\big( \hat{E}[Y(a)|G=g] - \hat{E}[Y(a')|G=g] \big)}^{\hat{\delta}_g} - \overbrace{\big( \hat{E}[Y(a)|G=g'] - \hat{E}[Y(a')|G=g'] \big)}^{\hat{\delta}_{g'}}$$ -->
<!-- This difference $\hat{\gamma}$ represents the difference in the average causal effects between the subgroups $g$ and $g'$. It measures the difference in effect of the exposure $A$ within subgroup $G$ on the outcome $Y$.[^note_care] -->
<!-- [^note_care]: $A$ and $G$ on $Y$ might not be additive. We assume that the potential confounders $L$ are sufficient to control for confounding. See Appendix -->
<!-- We again use simulation-based inference methods to compute standard errors and confidence intervals [@greifer2023]. -->
<!-- ### Inverse Probability of Treatment Weighting (IPTW) for Subgroup Analysis Estimator -->
<!-- **Step 1:** Estimate the propensity score. The propensity score $e(L, G)$ is the conditional probability of the exposure $A = 1$, given the covariates $L$ and subgroup indicator $G$. This can be modeled using logistic regression or other suitable methods, depending on the nature of the data and the exposure. -->
<!-- $$\hat{e} = P(A = 1 | L, G) = f_A(L, G; \theta_A)$$ -->
<!-- Here, $f_A(L, G; \theta_A)$ is a function (statistical model) that estimates the probability of the exposure $A = 1$ given covariates $L$ and subgroup $G$. Then, we calculate the weights for each individual, denoted as $v$, using the estimated propensity score: -->
<!-- $$ -->
<!-- v = -->
<!-- \begin{cases} -->
<!-- \frac{1}{\hat{e}} & \text{if } A = 1 \\ -->
<!-- \frac{1}{1-\hat{e}} & \text{if } A = 0 -->
<!-- \end{cases} -->
<!-- $$ -->
<!-- **Step 2:** Fit a weighted outcome model. Using the weights calculated from the estimated propensity scores, fit a model for the outcome $Y$, conditional on the exposure $A$ and subgroup $G$. This can be represented as: -->
<!-- $$ \hat{E}(Y|A, G; V) = f_Y(A, G ; \theta_Y, V) $$ -->
<!-- In this model, $f_Y$ is a function (such as a weighted regression model) with parameters $\theta_Y$. -->
<!-- **Step 3:** Simulate potential outcomes. For each individual in each subgroup, simulate their potential outcome under the hypothetical scenario where everyone in the subgroup is exposed to the intervention $A=a$ regardless of their actual exposure level: -->
<!-- $$\hat{E}(Y(a)|G=g) = \hat{E}[Y|A=a,G=g; \hat{\theta}_Y, v]$$ -->
<!-- And also under the hypothetical scenario where everyone is exposed to intervention $A=a'$: -->
<!-- $$\hat{E}(Y(a')|G=g) = \hat{E}[Y|A=a',G=g; \hat{\theta}_Y, v]$$ -->
<!-- **Step 4:** Estimate the average causal effect for each subgroup as the difference in the predicted outcomes: -->
<!-- $$\hat{\delta}_g = \hat{E}[Y(a)|G=g] - \hat{E}[Y(a')|G=g]$$ -->
<!-- The estimated difference $\hat{\delta}_g$ represents the average causal effect within group $g$. -->
<!-- **Step 5:** Compare differences in causal effects by groups. Compute the differences in the estimated causal effects between different subgroups: -->
<!-- $$\hat{\gamma} = \hat{\delta}_g - \hat{\delta}_{g'}$$ -->
<!-- where, -->
<!-- $$\hat{\gamma} = \overbrace{\big( \hat{E}[Y(a)|G=g] - \hat{E}[Y(a')|G=g] \big)}^{\hat{\delta}_g} - \overbrace{\big( \hat{E}[Y(a)|G=g'] - \hat{E}[Y(a')|G=g'] \big)}^{\hat{\delta}_{g'}}$$ -->
<!-- This $\hat{\gamma}$ represents the difference in the average causal effects between the subgroups $g$ and $g'$. -->
<!-- We again use simulation-based inference methods to compute standard errors and confidence intervals [@greifer2023]. -->
<!-- ### Doubly Robust Estimation for Subgroup Analysis Estimator -->
<!-- Doubly Robust Estimation is a powerful technique that combines the strengths of both the IPTW and G-computation methods. It uses both the propensity score model and the outcome model, which makes it doubly robust: it produces unbiased estimates if either one of the models is correctly specified. -->
<!-- **Step 1** Estimate the propensity score. The propensity score $\hat{e}(L, G)$ is the conditional probability of the exposure $A = 1$, given the covariates $L$ and subgroup indicator $G$. This can be modeled using logistic regression or other suitable methods, depending on the nature of the data and the exposure. -->
<!-- $$\hat{e} = P(A = 1 | L, G) = f_A(L, G; \theta_A)$$ -->
<!-- Here, $f_A(L, G; \theta_A)$ is a function (statistical model) that estimates the probability of the exposure $A = 1$ given covariates $L$ and subgroup $G$. Then, we calculate the weights for each individual, denoted as $v$, using the estimated propensity score: -->
<!-- $$ -->
<!-- v = -->
<!-- \begin{cases} -->
<!-- \frac{1}{\hat{e}} & \text{if } A = 1 \\ -->
<!-- \frac{1}{1-\hat{e}} & \text{if } A = 0 -->
<!-- \end{cases} -->
<!-- $$ -->
<!-- **Step 2** Fit a weighted outcome model. Using the weights calculated from the estimated propensity scores, fit a model for the outcome $Y$, conditional on the exposure $A$, covariates $L$, and subgroup $G$. -->
<!-- $$ \hat{E}(Y|A, L, G; V) = f_Y(A, L, G ; \theta_Y, V) $$ -->
<!-- **Step 3** For each individual in each subgroup, simulate their potential outcome under the hypothetical scenario where everyone in the subgroup is exposed to the intervention $A=a$ regardless of their actual exposure level: -->
<!-- $$\hat{E}(Y(a)|G=g) = \hat{E}[Y|A=a,G=g; L,\hat{\theta}_Y, v]$$ -->
<!-- And also under the hypothetical scenario where everyone in each subgroup is exposed to intervention $A=a'$: -->
<!-- $$\hat{E}(Y(a')|G=g) = \hat{E}[Y|A=a',G=g; L; \hat{\theta}_Y, v]$$ -->
<!-- **Step 4** Estimate the average causal effect for each subgroup. Compute the estimated expected value of the potential outcomes under each intervention level for each subgroup: -->
<!-- $$\hat{\delta}_g = \hat{E}[Y(a)|G=g] - \hat{E}[Y(a')|G=g]$$ -->
<!-- The estimated difference $\hat{\delta}_g$ represents the average causal effect of changing the exposure from level $a'$ to level $a$ within each subgroup. -->
<!-- **Step 5** Compare differences in causal effects by groups. Compute the differences in the estimated causal effects between different subgroups: -->
<!-- $$\hat{\gamma} = \hat{\delta}_g - \hat{\delta}_{g'}$$ -->
<!-- where, -->
<!-- $$\hat{\gamma} = \overbrace{\big( \hat{E}[Y(a)|G=g] - \hat{E}[Y(a')|G=g] \big)}^{\hat{\delta}_g} - \overbrace{\big( \hat{E}[Y(a)|G=g'] - \hat{E}[Y(a')|G=g'] \big)}^{\hat{\delta}_{g'}}$$ -->
<!-- We again use simulation-based inference methods to compute standard errors and confidence intervals [@greifer2023]. -->
<!-- ## Appendix B: G-computation for Subgroup Analysis Estimator with Non-Additive Effects -->
<!-- **Step 1:** Estimate the outcome model. Fit a model for the outcome $Y$, conditional on the exposure $A$, the covariates $L$, subgroup indicator $G$, and interactions between $A$ and $G$. This model can be a linear regression, logistic regression, or another statistical model. The goal is to capture the relationship between the outcome, exposure, confounders, subgroups, and their interactions. -->
<!-- $$ \hat{E}(Y|A,L,G,AG) = f_Y(A,L,G,AG; \theta_Y) $$ -->
<!-- This equation represents the expected value of the outcome $Y$ given the exposure $A$, covariates $L$, subgroup $G$, and interaction term $AG$, as modeled by the function $f_Y$ with parameters $\theta_Y$. -->
<!-- **Step 2:** Simulate potential outcomes. For each individual in each subgroup, predict their potential outcome under the intervention $A=a$ using the estimated outcome model: -->
<!-- $$\hat{E}(Y(a)|G=g) = \hat{E}[Y|A=a,L,G=g,AG=ag; \hat{\theta}_Y]$$ -->
<!-- We also predict the potential outcome for everyone in each subgroup under the causal contrast, setting the intervention for everyone in that group to $A=a'$: -->
<!-- $$\hat{E}(Y(a')|G=g) = \hat{E}[Y|A=a',L,G=g,AG=a'g; \hat{\theta}_Y]$$ -->
<!-- **Step 3:** Calculate the estimated difference for each subgroup $g$: -->
<!-- $$\hat{\delta}_g = \hat{E}[Y(a)|G=g] - \hat{E}[Y(a')|G=g]$$ -->
<!-- **Step 4:** Compare differences in causal effects by subgroups: -->
<!-- $$\hat{\gamma} = \hat{\delta}_g - \hat{\delta}_{g'}$$ -->
<!-- where, -->
<!-- $$\hat{\gamma} = \overbrace{\big( \hat{E}[Y(a)|G=g] - \hat{E}[Y(a')|G=g] \big)}^{\hat{\delta}_g} - \overbrace{\big( \hat{E}[Y(a)|G=g'] - \hat{E}[Y(a')|G=g'] \big)}^{\hat{\delta}_{g'}}$$ -->
<!-- This difference $\hat{\gamma}$ represents the difference in the average causal effects between the subgroups $g$ and $g'$, taking into account the interaction effect of the exposure $A$ and the subgroup $G$ on the outcome $Y$. -->
<!-- Note that the interaction term $AG$ (or $ag$ and $a'g$ in the potential outcomes) stands for the interaction between the exposure level and the subgroup. This term is necessary to accommodate the non-additive effects in the model. As before, we must ensure that potential confounders $L$ are sufficient to control for confounding. -->
<!-- ## Appendix C: Doubly Robust Estimation for Subgroup Analysis Estimator with Interaction -->
<!-- Again, Doubly Robust Estimation combines the strengths of both the IPTW and G-computation methods. It uses both the propensity score model and the outcome model, which makes it doubly robust: it produces unbiased estimates if either one of the models is correctly specified. -->
<!-- **Step 1** Estimate the propensity score. The propensity score $e(L, G)$ is the conditional probability of the exposure $A = 1$, given the covariates $L$ and subgroup indicator $G$. This can be modeled using logistic regression or other suitable methods, depending on the nature of the data and the exposure. -->
<!-- $$e = P(A = 1 | L, G) = f_A(L, G; \theta_A)$$ -->
<!-- Here, $f_A(L, G; \theta_A)$ is a function (statistical model) that estimates the probability of the exposure $A = 1$ given covariates $L$ and subgroup $G$. Then, we calculate the weights for each individual, denoted as $v$, using the estimated propensity score: -->
<!-- $$ -->
<!-- v = -->
<!-- \begin{cases} -->
<!-- \frac{1}{\hat{e}} & \text{if } A = 1 \\ -->
<!-- \frac{1}{1-\hat{e}} & \text{if } A = 0 -->
<!-- \end{cases} -->
<!-- $$ -->
<!-- **Step 2** Fit a weighted outcome model. Using the weights calculated from the estimated propensity scores, fit a model for the outcome $Y$, conditional on the exposure $A$, covariates $L$, subgroup $G$ and the interaction between $A$ and $G$. -->
<!-- $$ \hat{E}(Y|A, L, G, AG; V) = f_Y(A, L, G, AG ; \theta_Y, V) $$ -->
<!-- **Step 3** For each individual in each subgroup, simulate their potential outcome under the hypothetical scenario where everyone in the subgroup is exposed to the intervention $A=a$ regardless of their actual exposure level: -->
<!-- $$\hat{E}(Y(a)|G=g) = \hat{E}[Y|A=a,G=g, AG=ag; L,\hat{\theta}_Y, v]$$ -->
<!-- And also under the hypothetical scenario where everyone in each subgroup is exposed to intervention $A=a'$: -->
<!-- $$\hat{E}(Y(a')|G=g) = \hat{E}[Y|A=a',G=g, AG=a'g; L; \hat{\theta}_Y, v]$$ -->
<!-- **Step 4** Estimate the average causal effect for each subgroup. Compute the estimated expected value of the potential outcomes under each intervention level for each subgroup: -->
<!-- $$\hat{\delta}_g = \hat{E}[Y(a)|G=g] - \hat{E}[Y(a')|G=g]$$ -->
<!-- The estimated difference $\hat{\delta}_g$ represents the average causal effect of changing the exposure from level $a'$ to level $a$ within each subgroup. -->
<!-- **Step 5** Compare differences in causal effects by groups. Compute the differences in the estimated causal effects between different subgroups: -->
<!-- $$\hat{\gamma} = \hat{\delta}_g - \hat{\delta}_{g'}$$ -->
<!-- where, -->
<!-- $$\hat{\gamma} = \overbrace{\big( \hat{E}[Y(a)|G=g] - \hat{E}[Y(a')|G=g] \big)}^{\hat{\delta}_g} - \overbrace{\big( \hat{E}[Y(a)|G=g'] - \hat{E}[Y(a')|G=g'] \big)}^{\hat{\delta}_{g'}}$$ -->
<!-- We again use simulation-based inference methods to compute standard errors and confidence intervals [@greifer2023]. -->
<!-- ## Appendix D: Marginal Structural Models for Estimating Population Average Treatment Effect with Interaction (Doubly Robust) -->
<!-- Sometimes we will only wish to estimate a marginal effect. In that case, proceed as follows. -->
<!-- **Step 1** Estimate the propensity score. The propensity score $e(L)$ is the conditional probability of the exposure $A = 1$, given the covariates $L$ which contains the subgroup $G$. This can be modelled using logistic regression or other functions as described in @greifer2023 -->
<!-- $$\hat{e} = P(A = 1 | L) = f_A(L; \theta_A)$$ -->
<!-- Here, $f_A(L; \theta_A)$ is a function (a statistical model) that estimates the probability of the exposure $A = 1$ given covariates $L$. Then, we calculate the weights for each individual, denoted as $v$, using the estimated propensity score: -->
<!-- $$ -->
<!-- v = -->
<!-- \begin{cases} -->
<!-- \frac{1}{\hat{e}} & \text{if } A = 1 \\ -->
<!-- \frac{1}{1-\hat{e}} & \text{if } A = 0 -->
<!-- \end{cases} -->
<!-- $$ -->
<!-- **Step 2** Fit a weighted outcome model. Using the weights calculated from the estimated propensity scores, fit a model for the outcome $Y$, conditional on the exposure $A$ and covariates $L$. -->
<!-- $$ \hat{E}(Y|A, L; V) = f_Y(A, L; \theta_Y, V) $$ -->
<!-- This model should include terms for both the main effects of $A$ and $L$ and their interaction $AL$. -->
<!-- **Step 3** For the entire population, simulate the potential outcome under the hypothetical scenario where everyone is exposed to the intervention $A=a$ regardless of their actual exposure level: -->
<!-- $$\hat{E}(Y(a)) = \hat{E}[Y|A=a; L,\hat{\theta}_Y, v]$$ -->
<!-- And also under the hypothetical scenario where everyone is exposed to intervention $A=a'$: -->
<!-- $$\hat{E}(Y(a')) = \hat{E}[Y|A=a'; L; \hat{\theta}_Y, v]$$ -->
<!-- **Step 4** Estimate the average causal effect for the entire population. Compute the estimated expected value of the potential outcomes under each intervention level for the entire population: -->
<!-- $$\hat{\delta} = \hat{E}[Y(a)] - \hat{E}[Y(a')]$$ -->
<!-- The estimated difference $\hat{\delta}$ represents the average causal effect of changing the exposure from level $a'$ to level $a$ in the entire population. -->
<!-- We again use simulation-based inference methods to compute standard errors and confidence intervals [@greifer2023]. -->
<!-- ### Machine Learning -->
<!-- Example from https://osf.io/cnphs -->
<!-- > We perform statistical estimation using semi-parametric Targeted -->
<!-- Learning, specifically a Targeted Minimum Loss-based Estimation (TMLE) -->
<!-- estimator. TMLE is a robust method that combines machine learning -->
<!-- techniques with traditional statistical models to estimate causal -->
<!-- effects while providing valid statistical uncertainty measures for these -->
<!-- estimates [@van2012targeted; @van2014targeted]. -->
<!-- > TMLE operates through a two-step process that involves modelling both -->
<!-- the outcome and treatment (exposure). Initially, TMLE employs machine -->
<!-- learning algorithms to flexibly model the relationship between -->
<!-- treatments, covariates, and outcomes. This flexibility allows TMLE to -->
<!-- account for complex, high-dimensional covariate spaces -->
<!-- \emph{efficiently} without imposing restrictive model assumptions -->
<!-- [@vanderlaan2011; @vanderlaan2018]. The outcome of this step is a set -->
<!-- of initial estimates for these relationships. -->
<!-- > The second step of TMLE involves ``targeting'' these initial estimates -->
<!-- by incorporating information about the observed data distribution to -->
<!-- improve the accuracy of the causal effect estimate. TMLE achieves this -->
<!-- precision through an iterative updating process, which adjusts the -->
<!-- initial estimates towards the true causal effect. This updating process -->
<!-- is guided by the efficient influence function, ensuring that the final -->
<!-- TMLE estimate is as close as possible, given the measures and data, to -->
<!-- the targeted causal effect while still being robust to -->
<!-- model-misspecification in either the outcome or the treatment model -->
<!-- [@van2014discussion]. -->
<!-- > Again, a central feature of TMLE is its double-robustness property. If -->
<!-- either the treatment model or the outcome model is correctly specified, -->
<!-- the TMLE estimator will consistently estimate the causal effect. -->
<!-- Additionally, we used cross-validation to avoid over-fitting, following -->
<!-- the pre-stated protocols in Bulbulia [@bulbulia2024PRACTICAL]. The integration of TMLE -->
<!-- and machine learning technologies reduces the dependence on restrictive -->
<!-- modelling assumptions and introduces an additional layer of robustness. -->
<!-- For further details of the specific targeted learning strategy we -->
<!-- favour, see [@hoffman2022; @hoffman2023]. We perform estimation using the -->
<!-- \texttt{lmtp} package [@williams2021]. We used the \texttt{superlearner} library for semi-parametric estimation with the predefined libraries \texttt{SL.ranger}, -->
<!-- \texttt{SL.glmnet}, and \texttt{SL.xgboost} [@xgboost2023; @polley2023; @Ranger2017]. We created graphs, tables and output reports using the \texttt{margot} package -->
<!-- [@margot2024]. -->
<!-- ### Sensitivity Analysis Using the E-value -->
<!-- > To assess the sensitivity of results to unmeasured confounding, we -->
<!-- report VanderWeele and Ding's ``E-value'' in all analyses -->
<!-- [@vanderweele2017]. The E-value quantifies the minimum strength of association (on the risk ratio scale) that an unmeasured confounder would need to have with both the exposure -->
<!-- and the outcome (after considering the measured covariates) to explain -->
<!-- away the observed exposure-outcome association -->
<!-- [@linden2020EVALUE; @vanderweele2020]. To -->
<!-- evaluate the strength of evidence, we use the bound of the E-value 95\% -->
<!-- confidence interval closest to 1. -->
### Packages
```{r}
report::cite_packages()
```
---