I am performing internal-external validation: I hold out one study centre in turn and evaluate the performance of a logistic regression model trained on the other six centres. The study has three time points, so this process is repeated to train a separate model for each of the three time points.
The logistic regression models were fitted with glm(), and the AUC and its variance were obtained with the pROC package:
Collated_Results[16,3] <- var(GST_roc_1, method = "bootstrap", boot.n = 1000)
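For context, the validation loop looks roughly like this (a minimal sketch; the data frame and variable names `dat`, `outcome`, `x1`, `x2` are placeholders, and the per-timepoint loop is omitted):

```r
# Leave-one-centre-out validation: train on six centres, test on the
# held-out one, record AUC and its bootstrap variance via pROC.
library(pROC)

results <- data.frame()
for (held_out in unique(dat$Centre)) {
  train <- subset(dat, Centre != held_out)
  test  <- subset(dat, Centre == held_out)

  fit  <- glm(outcome ~ x1 + x2, family = binomial, data = train)
  pred <- predict(fit, newdata = test, type = "response")

  roc_obj <- roc(test$outcome, pred, quiet = TRUE)
  results <- rbind(results, data.frame(
    Centre   = held_out,
    AUC      = as.numeric(auc(roc_obj)),
    Variance = var(roc_obj, method = "bootstrap", boot.n = 1000)
  ))
}
```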
This yields the following summary table (Collated_Results, truncated for this example):
| Centre | Timepoint | AUC | Variance |
|---|---|---|---|
| 1 | 1 | 0.73 | 0.016 |
| 1 | 2 | 0.85 | 0.004 |
| 1 | 3 | 0.78 | 0.0056 |
| 2 | 1 | 0.69 | 0.0028 |
| 2 | 2 | 0.77 | 0.003 |
| 2 | 3 | 0.96 | 0.0016 |
Using the rma.mv function from the metafor package, I fit a mixed-effects meta-regression with random effects for timepoint nested within study centre:
full.model <- rma.mv(yi = AUC,
                     V = Variance,
                     slab = Centre,
                     data = Collated_Results,
                     random = ~ 1 | Centre/Timepoint,
                     test = "knha",
                     method = "REML", dfs = "contain")
The model fits correctly and gives the following pooled AUC estimate:
Model Results:
estimate se tval df pval ci.lb ci.ub
0.8507 0.0270 31.4587 5 <.0001 0.7812 0.9202 ***
The problem is that when I call forest(full.model), the 95% confidence intervals for some timepoints exceed 1 (e.g. an upper bound of 1.03), which does not make sense for the AUC, a metric bounded by [0, 1].
I have tried the valmeta function (from the metamisc package), which is specifically designed for meta-analysis of the c-statistic/AUC, and the issue doesn't occur there. However, valmeta doesn't allow the more complex multi-level random-effects structure.
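For reference, the valmeta call was along these lines (a sketch assuming metamisc's valmeta interface, with the standard error taken as the square root of the bootstrap variance from the table above):

```r
# valmeta pools the c-statistic on the logit scale by default, so the
# back-transformed confidence intervals stay within (0, 1).
library(metamisc)

vm <- valmeta(measure  = "cstat",
              cstat    = Collated_Results$AUC,
              cstat.se = sqrt(Collated_Results$Variance),
              slab     = Collated_Results$Centre)
```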
Does anyone know how to use rma.mv correctly for the AUC, so that the confidence intervals respect the bounds of this metric?
Thank you in advance,
Ben.