Indirect comparisons of competing interventions / Glenny, A. M.; Altman, D. G.; Song, F.; Sakarovitch, C.; Deeks, J. J.; D'Amico, Roberto; Bradburn, M.; Eastwood, A. J.; International Stroke Trial Collaborative Group. - Electronic. - 9:26(2005), pp. 1-134. [10.3310/hta9260]

Indirect comparisons of competing interventions

D'Amico, Roberto

Abstract

OBJECTIVES: To survey the frequency of use of indirect comparisons in systematic reviews and to evaluate the methods used in their analysis and interpretation; also to identify alternative statistical approaches for the analysis of indirect comparisons, to assess the properties of different statistical methods used for performing indirect comparisons, and to compare direct and indirect estimates of the same effects within reviews.

DATA SOURCES: Electronic databases.

REVIEW METHODS: The Database of Abstracts of Reviews of Effects (DARE) was searched for systematic reviews involving meta-analysis of randomised controlled trials (RCTs) that reported both direct and indirect comparisons, or indirect comparisons alone. A systematic review of MEDLINE and other databases was carried out to identify published methods for analysing indirect comparisons. Study designs were created using data from the International Stroke Trial. Random samples of patients receiving aspirin, heparin or placebo in 16 centres were used to create meta-analyses, with half of the trials comparing aspirin with placebo and half comparing heparin with placebo. Methods for indirect comparisons were used to estimate the contrast between aspirin and heparin. The whole process was repeated 1000 times and the results were compared with direct comparisons and with theoretical results. Further detailed case studies comparing the results of direct and indirect comparisons of the same effects were undertaken.

RESULTS: Of the reviews identified through DARE, 31/327 (9.5%) included indirect comparisons. A further five reviews including indirect comparisons were identified through electronic searching. Few reviews carried out a formal analysis, and some based their analysis on the naive addition of data from the treatment arms of interest. Few methodological papers were identified. Some valid approaches for aggregate data that could be applied using standard software were found: the adjusted indirect comparison, meta-regression and, for binary data only, multiple logistic regression (fixed effect models only). Simulation studies showed that the naive method is liable to bias and also produces over-precise answers. Several methods provide correct answers if strong but unverifiable assumptions are fulfilled. Four times as many similarly sized trials are needed for the indirect approach to have the same power as directly randomised comparisons. Detailed case studies comparing direct and indirect comparisons of the same effect show considerable statistical discrepancies, but the direction of such discrepancy is unpredictable.

CONCLUSIONS: Direct evidence from good-quality RCTs should be used wherever possible. In the absence of such evidence, it may be necessary to look for indirect comparisons from RCTs, although the results may be susceptible to bias. When making indirect comparisons within a systematic review, an adjusted indirect comparison method should ideally be used, employing a random effects model. If both direct and indirect comparisons are possible within a review, it is recommended that these be analysed separately before considering whether to pool the data. There is a need to evaluate methods for the analysis of indirect comparisons for continuous data, and for empirical research into how different methods of indirect comparison perform when there is a large treatment effect. Further study is needed into when it is appropriate to use indirect comparisons and when to combine direct and indirect comparisons. Research into how evidence from indirect comparisons compares with that from non-randomised studies may also be warranted. Investigations using individual patient data from a meta-analysis of several RCTs using different protocols, and an evaluation of the impact of choosing different binary effect measures for the inverse variance method, would also be useful.
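As an illustration of the adjusted indirect comparison referred to in the results and conclusions, the sketch below shows the calculation for binary data: the indirect log odds ratio for treatment A versus treatment B via a common comparator C is the difference of the two direct log odds ratios, with their variances added. This is a minimal, hypothetical sketch rather than code from the report, and the 2x2 counts are made up solely to demonstrate the arithmetic.

```python
import math

def log_odds_ratio(events_trt, n_trt, events_ctl, n_ctl):
    """Log odds ratio and its variance from 2x2 counts (no zero-cell correction)."""
    a, b = events_trt, n_trt - events_trt
    c, d = events_ctl, n_ctl - events_ctl
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d
    return log_or, var

def adjusted_indirect_comparison(log_or_ac, var_ac, log_or_bc, var_bc):
    """Adjusted indirect comparison of A vs B via the common comparator C."""
    log_or_ab = log_or_ac - log_or_bc   # difference of the two direct contrasts
    var_ab = var_ac + var_bc            # variances add, so the estimate is less precise
    se_ab = math.sqrt(var_ab)
    ci = (log_or_ab - 1.96 * se_ab, log_or_ab + 1.96 * se_ab)
    return log_or_ab, se_ab, ci

# Made-up pooled summaries, purely illustrative:
# A vs C = aspirin vs placebo, B vs C = heparin vs placebo.
log_or_ac, var_ac = log_odds_ratio(events_trt=120, n_trt=1000, events_ctl=150, n_ctl=1000)
log_or_bc, var_bc = log_odds_ratio(events_trt=140, n_trt=1000, events_ctl=150, n_ctl=1000)

log_or_ab, se_ab, ci = adjusted_indirect_comparison(log_or_ac, var_ac, log_or_bc, var_bc)
print(f"indirect log OR (aspirin vs heparin) = {log_or_ab:.3f}, "
      f"95% CI {ci[0]:.3f} to {ci[1]:.3f}")

# Because the two variances add, matching the precision of a direct comparison
# requires roughly four times as many similarly sized trials in total
# (twice as many for each of the two contributing comparisons).
```

In practice the two direct estimates would come from meta-analyses (preferably random effects, as the report recommends) of the A-versus-C and B-versus-C trials, rather than from single 2x2 tables as in this toy example.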
2005
Health Technology Assessment
NIHR Journals Library

Use this identifier to cite or link to this item: https://hdl.handle.net/11380/608190
Citations
  • PMC: 0
  • Scopus: 518
  • Web of Science: 465