Tuesday, September 25, 2012

Overestimates, underestimates, and things unseen - Sifting through pharmaceutical studies

The Gist:  Initial literature regarding a drug generally overestimates the benefit of the pharmaceutical while underestimating the risks of the intervention.  Listen to Dr. David Newman's excellent discussion of this topic on EM: RAP's June 2012 episode, which uses heparin as an example (note: paid subscription required).  The Cochrane reviews on major topics are excellent for analyzing the benefit of an intervention, although these are subject to problems as well (they are not as good for harms as for benefits, since they generally don't include comprehensive post-marketing data).

It's intuitive that early data on interventions demonstrate the most benefit.
  • Publication bias.  Journals tend to publish positive and statistically significant results (even if not clinically significant).  A Cochrane analysis demonstrated that only 63% of results from abstracts are published in full, and that papers had increased odds of publication if they demonstrated a positive outcome (1).  It is well known that much trial data remains unpublished and is thus unavailable to clinicians and individuals compiling meta-analyses.  In fact, many groups and individuals are fighting for access to this data, citing moral and ethical issues.
    • This BMJ study demonstrated that the conclusions of a meta-analysis can, in fact, be altered by the inclusion of unpublished data (2).
    • Mark Crislip of "Gobbets O' Pus" and "Persiflager's Puscast" fame frequently offers the following analogy (rephrased) with regard to meta-analyses: "if you put together a bunch of cowpies, you don't get gold; you still just have a load of poo."  This serves as a reminder that even systematic reviews and meta-analyses are fallible.
Great TED talk on publication bias by Dr. Ben Goldacre (via The Poison Review)
  • Submission bias.  Studies that are not submitted for publication are far less likely to ever be published.  Would a drug company submit results for publication that reflected poorly on its drug?  Not often.  Additionally, when studies are submitted, the data may be "interpreted" or presented in such a way that a positive outcome, even if it's not the primary outcome, is reported.
  • Industry-funded data.  Following the above point, when an industry funds a study, it often only publishes data that supports the efficacy of its medication.  This paper showed that many of the efficacy trials referenced in applications to the FDA for new drugs are not published within five years, or are published without reference to primary outcomes (3).  Dr. Ryan Radecki has written about this (here) and also frequently sorts through some of this on his excellent blog.
  • Excitement.  Physicians, patients, and innovators want drugs and interventions to work.  We want patients to do well, fight illness, and maximize their quality of life.  This can lead to bias among the studied individuals and those studying the drug.  There's also a component of expectation bias, as we expect that expensive and novel agents will perform, especially if approved by governing authorities.
It also makes sense that later data often demonstrate more harm than initially recognized.
  • Longer follow-up times capture longitudinal risks.  Some risks or adverse effects may not show up in the shorter time frames initially investigated.
  • Data subjected to real-world use of the intervention.  Initial studies investigate the pharmaceutical in demarcated, controlled groups of individuals.  When these products reach the market, physicians utilize them in a broader population base and oftentimes in non-approved settings.  Similarly, patients take these medications alongside their other medications, exposing drug-drug interactions.  Problems with patient compliance also become evident once a medication reaches the market (missing doses, doubling up on medications, issues with reversal, etc.), as these are typically not accounted for in well-controlled studies.
  • FDA withdrawals often demonstrate the ways in which longitudinal data expose the harms of medications.
A recent example is Rivaroxaban in Patients with a Recent Acute Coronary Syndrome.  In this paper, published in the NEJM in January 2012, the authors concluded that the increased bleeding in the rivaroxaban group was acceptable because cardiovascular mortality was reduced in the cohort and there was no increase in fatal bleeds.

Safety Results:
  • Pre-defined safety endpoint:  TIMI major bleeding not related to CABG
    • Rivaroxaban arm 2.1% versus 0.6% in the placebo arm
  • TIMI minor bleeding (1.3% vs. 0.5%, P=0.003)
  • TIMI bleeding requiring medical attention (14.5% vs. 7.5%, P<0.001)
  • Intracranial hemorrhage (0.6% vs. 0.2%, P=0.009)
  • No difference in fatal bleeds
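One way to put these rates in perspective is to convert each difference into an absolute risk increase (ARI) and a number needed to harm (NNH = 1/ARI).  A minimal sketch using the percentages reported above (the helper function and outcome labels are mine, not from the paper):

```python
# Convert reported event rates into absolute risk increase (ARI)
# and number needed to harm (NNH). Rates come from the safety results above.

def ari_and_nnh(treatment_rate, control_rate):
    """Return (ARI, NNH) for a harm that is more common on treatment."""
    ari = treatment_rate - control_rate
    return ari, 1 / ari

# (rivaroxaban rate, placebo rate) for each bleeding outcome
outcomes = {
    "TIMI major bleeding (non-CABG)":       (0.021, 0.006),
    "TIMI minor bleeding":                  (0.013, 0.005),
    "Bleeding requiring medical attention": (0.145, 0.075),
    "Intracranial hemorrhage":              (0.006, 0.002),
}

for name, (rx, placebo) in outcomes.items():
    ari, nnh = ari_and_nnh(rx, placebo)
    print(f"{name}: ARI {ari:.1%}, NNH {nnh:.0f}")
```

For TIMI major bleeding, for example, the 1.5% absolute increase works out to roughly one extra major bleed for every 67 patients treated.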
So there actually is an increase in clinically important bleeding.  Do the benefits actually outweigh this morbidity?  I asked that question of the material and, while sorting through it, had my question answered.

The Archives of Internal Medicine recently published several papers on novel oral anticoagulants (NOACs) — the factor Xa inhibitors such as rivaroxaban and the direct thrombin inhibitors such as dabigatran — following acute coronary syndromes.  This systematic review and meta-analysis demonstrates that the NOACs have a net negative clinical impact when accounting for the bleeding risk (see above) and the minimal reduction in ischemic events.  The authors of this paper also highlight the importance of absolute measures to contextualize relative effects (for more on this, check out this post).  They state that the reduction in ischemic outcomes is reported as a relative risk reduction of 14%, but the absolute reduction is only 1.3%.
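The gap between those two framings is easy to check with arithmetic.  In this sketch, the 14% relative reduction and 1.3% absolute reduction are the figures cited above; the implied baseline rate and number needed to treat (NNT) are back-of-the-envelope derivations of mine, not numbers from the paper:

```python
# A sizeable-sounding relative risk reduction can hide a small absolute
# benefit. RRR and ARR are the figures cited above; the rest is derived.

def nnt(absolute_risk_reduction):
    """Number needed to treat for one additional patient to benefit."""
    return 1 / absolute_risk_reduction

rrr = 0.14   # relative risk reduction in ischemic outcomes (as reported)
arr = 0.013  # absolute risk reduction (as reported)

baseline = arr / rrr  # implied baseline ischemic event rate
print(f"Implied baseline event rate: {baseline:.1%}")
print(f"NNT: {nnt(arr):.0f}")
```

A 1.3% absolute reduction means treating roughly 77 patients to prevent one ischemic event — a useful number to weigh against the NNH figures for bleeding.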

This was a great reminder for me to exercise caution and encourage inquiry when looking at data or becoming excited about interventions.

References:
1. Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database of Systematic Reviews 2007, Issue 2.
2. Hart B, et al. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ 2012;344:d7202.
3. Rising K, Bacchetti P, Bero L. Reporting Bias in Drug Trials Submitted to the Food and Drug Administration: Review of Publication and Presentation. PLoS Med 2008;5(11):e217.
