On meta-analytic models and the effect of hydroxychloroquine use in COVID-19

Random-effect models and Hartung–Knapp adjustment

In ref. 2, it is stated that “In our protocol, we prespecified a random-effects model of the Hartung–Knapp–Sidik–Jonkman (HKSJ) approach, in order to provide more equality of weights between trials with moderate to large size (than, e.g., the DerSimonian–Laird approach).” This is a rather surprising justification because, by construction, the weights in the two approaches are exactly the same, meaning that the point estimates of the meta-analyses always agree and only the confidence intervals and p-values can differ.

In either case the average effect is given by

$$\hat{\mu }=\frac{{\sum}_{i}{w}_{i}{x}_{i}}{{\sum}_{i}{w}_{i}}\,\,,$$

(1)

where xi is the estimate of study i and wi is its weight, given by the inverse of its variance, i.e. \({w}_{i}=1/{s}_{i}^{2}\). It is assumed that xi is normally distributed, i.e. \({x}_{i} \sim {{\mathcal{N}}}(\mu,{s}_{i}^{2})\), with \({s}_{i}^{2}={\sigma }_{i}^{2}+{\tau }^{2}\), where \({\sigma }_{i}^{2}\) is the within-study sampling variance and τ2 is the between-study variance to be estimated.
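As a minimal sketch of Eq. (1), the pooled estimate can be computed directly from the weights; the study estimates and variances below are made-up illustrative numbers, not data from ref. 2.

```python
# Inverse-variance weighted average of Eq. (1).
# All numbers are hypothetical, for illustration only.
x = [0.10, 0.25, 0.05]        # study estimates x_i (e.g. log odds ratios)
sigma2 = [0.04, 0.09, 0.01]   # within-study variances sigma_i^2
tau2 = 0.02                   # assumed between-study variance tau^2

# w_i = 1 / s_i^2 with s_i^2 = sigma_i^2 + tau^2
w = [1.0 / (s2 + tau2) for s2 in sigma2]
mu_hat = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)  # Eq. (1)
```

Because the weights are identical in the DL and HKSJ approaches, this point estimate is the same in both.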

The standard way (“DL” meta-analysis) of calculating confidence intervals for μ treats the weights as known parameters and uses a normal distribution. One can estimate the variance of \(\hat{\mu }\) as \(\hat{{{\rm{var}}}}(\hat{\mu })={({\sum}_{i}{w}_{i})}^{-1}\) and calculate confidence intervals

$$\hat{\mu }\pm {z}_{\alpha /2}\sqrt{\hat{{{\rm{var}}}}(\hat{\mu })}\,\,,$$

(2)

where α is the significance threshold and z are the quantiles of the normal distribution. In reality, there is uncertainty in these parameters, because one uses point estimates of the variances rather than their true values, which can lead to an increase in Type I error in many scenarios.
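The standard interval of Eq. (2) can be sketched as follows, again with made-up study estimates and weights (not data from ref. 2), using only the Python standard library:

```python
import math
from statistics import NormalDist

# Standard ("DL-style") interval of Eq. (2), treating the weights as known.
# All numbers are hypothetical, for illustration only.
x = [0.10, 0.25, 0.05]                     # study estimates x_i
w = [1 / 0.06, 1 / 0.11, 1 / 0.03]         # w_i = 1 / (sigma_i^2 + tau^2)

mu_hat = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)   # Eq. (1)
var_hat = 1.0 / sum(w)                     # var(mu_hat) = (sum_i w_i)^-1

alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)    # z_{alpha/2}, about 1.96 for 95%
ci = (mu_hat - z * math.sqrt(var_hat),
      mu_hat + z * math.sqrt(var_hat))     # Eq. (2)
```

Treating `w` as known is exactly the simplification criticized in the text: the interval ignores the uncertainty in the estimated variances.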

The Hartung–Knapp adjustment (ref. 10, also proposed by Sidik and Jonkman in ref. 11) improves the situation, generally giving confidence intervals with better coverage properties. The approach differs from the standard one in two ways: a rescaled variance is used (\({\hat{{{\rm{var}}}}}_{{{\rm{HK}}}}(\hat{\mu })=q\hat{{{\rm{var}}}}(\hat{\mu })\)), and a Student's t distribution is assumed instead of a normal one. Within the Hartung–Knapp approach, the confidence interval for a meta-analysis of k studies is given by

$$\hat{\mu }\pm {t}_{k-1,\alpha /2}\sqrt{q\hat{{{\rm{var}}}}(\hat{\mu })}\,\,,$$

(3)

where tk−1,α/2 is the quantile of the Student's t distribution with k − 1 degrees of freedom. The p-value can be evaluated from the approximate pivot \((\hat{\mu }-\mu )/\sqrt{{\hat{{{\rm{var}}}}}_{{{\rm{HK}}}}(\hat{\mu })} \sim {t}_{k-1}\). The scale factor q is given by

$$q=\frac{1}{k-1}\sum\limits_{i}{w}_{i}{\left({x}_{i}-\hat{\mu }\right)}^{2}\,\,.$$

(4)

Using q as calculated from Eq. (4) is not always free of difficulties, especially if there is great variability in study sizes. This is because small studies contribute to q on an equal footing with large ones, which tends to dilute the signal of the latter. In cases where \({\hat{\tau }}^{2}\) is estimated to be negative under either the DL or PM scheme, q derived from Eq. (4) with \({\hat{\tau }}^{2}\) truncated to zero will be less than unity and possibly arbitrarily small. Examining Eq. (3), it is clear that this can yield an unnaturally small variance, with an undesired increase in Type I error as a consequence. This is confirmed by simulation studies, which found overly short confidence intervals for this scheme in most scenarios of practical relevance12, even though the model is exact in certain special limits (e.g. when all studies have exactly the same variance). For studies with few events, which are numerous in the HCQ meta-analysis, the assumption of a normal distribution for the log odds ratio is violated, which skews the distribution of q towards small values and degrades the coverage properties of the HKSJ model.

Several fixes have been proposed. In ref. 12, it was shown that simply substituting q with \({q}^{\prime}={{\rm{Max}}}(1,q)\) in Eq. (3) (i.e. truncating q in the same way as \({\hat{\tau }}^{2}\)) gives satisfactory results, albeit with a loss of power in some scenarios. Another possibility would be to present both the standard and HKSJ confidence intervals and take the wider of the two as the main result.
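A sketch of the HKSJ interval of Eqs. (3) and (4) with the truncated scale factor is shown below, again with made-up study estimates and weights (not data from ref. 2). To stay dependency-free, the t quantile for k = 3 studies (2 degrees of freedom, two-sided 95% level) is hardcoded to its tabulated value of about 4.303; in practice it would come from a statistics library.

```python
import math

# Hartung–Knapp interval (Eq. 3) with the truncated scale factor
# q' = max(1, q) of ref. 12. All numbers are hypothetical illustrations.
x = [0.10, 0.25, 0.05]                  # study estimates x_i
w = [1 / 0.06, 1 / 0.11, 1 / 0.03]      # inverse-variance weights w_i
k = len(x)

mu_hat = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)   # Eq. (1)
var_hat = 1.0 / sum(w)                  # standard variance of mu_hat

# Scale factor of Eq. (4); each study contributes one term regardless of size
q = sum(wi * (xi - mu_hat) ** 2 for wi, xi in zip(w, x)) / (k - 1)
q_trunc = max(1.0, q)                   # modified HKSJ truncation of ref. 12

t_quantile = 4.30265                    # t_{k-1, alpha/2} for k = 3, alpha = 0.05
half_width = t_quantile * math.sqrt(q_trunc * var_hat)
ci_hksj = (mu_hat - half_width, mu_hat + half_width)     # Eq. (3)
```

In this toy example q is well below unity, so without the truncation the untruncated HKSJ interval would shrink by a factor of \(\sqrt{q}\), illustrating how small q values can produce deceptively tight intervals.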

HCQ effect in meta-analysis

Inspecting the cumulative meta-analyses of ref. 2 (Fig. 3 of Axfors et al.) reveals some startling results. It is rather surprising that adding a single small study (with at least one event) to the RECOVERY trial substantially reduces the confidence interval (1.01–1.20), even though this trial (NO COVID-19) carries negligible weight and adds only one event per arm. In Fig. 1a, we reproduce Fig. 3b of Axfors et al. to emphasize an even more striking result for the subgroup of published studies (which constitutes more than 90% of the total weight). One can see that adding one more small trial (COVID-PEP, adding only one event per arm) leads to an exceptionally short 95% confidence interval (1.08–1.13), despite virtually no information being added. These results are conceptually problematic, because one would normally require a much larger number of events to obtain such tight confidence intervals.

Fig. 1: Forest plots for the meta-analyses.

a Cumulative meta-analysis (in chronological order) for published studies, using the Hartung–Knapp method as in ref. 2. The total number of events added by each study is also highlighted. Studies given zero weight are not shown on the plot. b Meta-analysis of HCQ effect on COVID-19 hospitalization, using the same data as in ref. 9.

In Table 1, we report the calculated scale factors q for several subgroups of studies that were associated with a significant difference in Axfors et al., alongside the calculated p-values for both q < 1 (calculated from Eq. (4)) and q = 1. In all cases, the point estimate is close to OR ≈ 1.11. As can be seen, the smaller p-values originate from the small scale factor rather than from the accumulated totality of evidence. The modified Hartung–Knapp approach with truncated q therefore gives results that are similar to other meta-analyses3,4,5. The 95% confidence interval for all studies is (0.97–1.26, p = 0.11), which is also similar to that of the fixed-effect model (0.98–1.25, p = 0.09).

Table 1 Calculated scale factors and two-sided p-values (for both untruncated and modified HKSJ) for various sets of studies from ref. 2

HCQ effect in COVID-19 outpatients

It is also instructive to investigate the effect of meta-analytic choices on the study of HCQ use in COVID-19 outpatients with uncomplicated disease. Given that HCQ was repurposed as an antiviral drug, there was probably a higher a priori chance of observing a benefit in early disease than in hospitalized patients. The largest phase 3 trial reported numerically fewer hospitalizations in the HCQ group (RR, 0.77, 95% CI, 0.52–1.12, p = 0.16)9. The paper also contained a meta-analysis pooling the results with other randomized trials in the same population, which gave (RR, 0.77, 95% CI, 0.57–1.04, p = 0.09). All but one small trial were double blind.

In Fig. 1b, we reproduce the meta-analysis using the same data as in ref. 9, testing both the standard approach (as done in the original paper) and the Hartung–Knapp one. The result is again a small scale factor (q = 0.28), so that the HKSJ model leads to a shorter confidence interval (95% CI, 0.62–0.95, p = 0.03) and a significant association, but this time favouring HCQ use. Therefore, if one accepts the demonstration of harm in inpatients, one should in principle also accept the demonstration of a benefit of HCQ as an early outpatient therapy.
