One commonly used trick in drug trials is to exclude any group that might make the drug look worse, such as those that are more likely to experience side effects. A good recent example of this is the covid vaccine trials, which largely excluded people with autoimmune diseases (more likely to develop an autoimmune disease after vaccination), people with allergies (more likely to have an allergic reaction to the vaccine), and, of course, the elderly (less likely to develop immunity after getting the vaccine, and more likely to become seriously sick from it).
These three groups are all frequently excluded from trials. The exclusion is particularly galling when it comes to the elderly, because they are a large segment of the population, and they are also usually the ones most likely to actually end up using the drugs being tested.
When drug companies have gotten a drug approved, and move on to market the drug, they will studiously avoid mentioning the fact that large segments of the population were excluded from the trials. When drug reps show their flashy powerpoints to gatherings of doctors, say for a new drug to lower blood pressure, they will always present impressive looking graphs of benefit, and they will of course point out how safe their drug was shown to be in the trials. Not once will they mention that the groups of patients the doctors will primarily be prescribing the drug to weren’t even included in the trials.
The doctors will then happily go off and prescribe the drug to multi-morbid 90 year olds, which might explain why prescription drugs are now the third leading cause of death in the western world.
The manipulation of who is included in trials is probably one of the main reasons why findings of side effects always end up being much higher in reality than in clinical trials. It might explain, for example, why muscle pain is a massively common side effect of statins in the real world, while being vanishingly rare in the statin trials (as Dr. Malcolm Kendrick has written about in detail).
A study recently published in the Lancet Healthy Longevity sought to estimate the extent to which drug trials underestimate side effects. It was funded by the UK Medical Research Council and the Wellcome Trust. The study chose as its particular focus people being treated for high blood pressure with a certain class of blood pressure lowering drugs known as RAAS blockers (which includes all drugs with names ending in -pril and all drugs with names ending in -sartan). The advantage with looking at this particular class of drugs is that there are a ton of trials. Every major pharmaceutical company has its own RAAS-blocker. It should therefore be possible to draw relatively broad conclusions about the results – whatever they show, they apply to the entire pharmaceutical industry, not just to a few specific companies. It’s also reasonable to think that the results apply to other classes of drugs too – there’s no reason to think trials of RAAS-blockers have been done differently than trials of other drug classes.
What the study sought to do more specifically was compare the rate of serious adverse events in clinical trials of RAAS-blockers with the rate observed in the real world. A serious adverse event is any event that is potentially life threatening or that results in death, hospitalization or lasting disability. If a trial has been designed in such a way that it is representative of reality, then the rate of serious adverse events in the trial should largely mirror that seen in the real world.
110 trials of RAAS-blockers were identified by the researchers. Of these, 11 were specifically designed to look at older people (i.e. didn’t recruit anyone under the age of 60). The data on serious adverse events from these 110 trials was extracted and compared to real world data on deaths and hospitalizations taken from a UK government funded database of 55,000 people living in Wales, who were being treated with RAAS-blockers. Deaths and hospitalizations are not exactly the same thing as serious adverse events (which as mentioned above also include “life threatening events”, and could for example include someone who is treated in an emergency department after a fall but not admitted to the hospital), but they’re close enough to allow a reasonable comparison.
So, what were the results?
Let’s begin with comparing the trials of older people with the “standard” trials. The relative rate of serious adverse events in the trials of older people was 76% higher than the rate in the standard trials. This shows the importance of including elderly people in drug trials – they are much more likely to experience adverse events of all kinds (including those actually caused by the drug being tested), and excluding them will therefore likely underestimate side effects.
Considering that many of the drugs in common use show marginal benefits at best (statins have, for example, only been shown to prolong life by a few days on average), this is important information. Why? Because a drug that is beneficial, on balance, to a fifty-year-old, who has a fully functioning kidney and liver, and is therefore unlikely to suffer side effects, could easily be harmful, on balance, to an 80-year-old.
That’s why drug studies done on younger people should not be used to guide treatment of older people. No shock there. Everyone already knows that we shouldn’t be extrapolating results from one group to another (even though it happens all the time, as we’ve seen most recently with the covid vaccine trials).
Next we come to the more important, and perhaps more shocking finding.
The real world patients were between 300% and 400% more likely to experience a serious event than the participants in the trials! That is in spite of the fact that the trials, as mentioned above, were using a broader definition of what constituted a serious event. If the trials were representative of reality, then they should have a higher rate of events than is seen in the real world data. Instead they have a rate that is several times lower!
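To make those percentages concrete, here is a back-of-the-envelope calculation. All event counts below are invented for illustration, not taken from the study; the point is just that being “300% more likely” to experience a serious event corresponds to a rate ratio of 4.

```python
# Illustrative arithmetic only: these event counts are made up,
# not taken from the Lancet Healthy Longevity study.

def events_per_100_person_years(events, person_years):
    """Serious-adverse-event rate per 100 person-years of follow-up."""
    return 100 * events / person_years

# Hypothetical trial cohort: 50 serious events over 5,000 person-years.
trial_rate = events_per_100_person_years(50, 5_000)        # 1.0

# Hypothetical real-world cohort: 200 serious events over the same follow-up.
real_world_rate = events_per_100_person_years(200, 5_000)  # 4.0

# A rate ratio of 4 is the same thing as "300% more likely":
rate_ratio = real_world_rate / trial_rate                  # 4.0
percent_more_likely = (rate_ratio - 1) * 100               # 300.0

print(f"trial: {trial_rate}/100py, real world: {real_world_rate}/100py")
print(f"rate ratio: {rate_ratio}, i.e. {percent_more_likely:.0f}% more likely")
```

In other words, a real-world patient population that is “300% to 400% more likely” to have a serious event is experiencing events at four to five times the rate seen in the trials.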
Interestingly, the trials of older people were just as far from the real world results as the trials of younger people. Clearly, doing trials on the elderly is not enough on its own to produce trials that are representative of reality. What’s happening here exactly?
There are three possible explanations, as far as I can see. The first explanation is that the trials are representative of reality, but that the Welsh die and are hospitalized at a rate that is several times higher than people in the countries where the studies were conducted. Many of the trials were conducted in the US, not in Wales. But Wales has a higher life expectancy than the United States, so that seems unlikely. I think we can discount that explanation.
The second explanation is that the trials are unrepresentative in so many different ways that just correcting the age issue doesn’t make a noticeable difference. That’s probably part of the explanation. The average age even in the trials of “older people” was 73, which isn’t very old from my perspective. And those 73 year olds included in the trials were probably at the healthier end of the spectrum.
The third, and more sinister, explanation is that the pharmaceutical companies are hiding serious adverse events… But wait a minute, the trials are randomized and blinded, so the people running the drug trials have no way of knowing if someone experiencing a possible side effect is in the treatment group or the placebo group, right?
Yes, that’s right, so the easiest solution, if you want to avoid finding nasty side effects, is to not report them, regardless of which treatment group the participant is in. That will cut down on total adverse events in both groups, which will make any difference between the groups that does exist smaller in absolute terms, and also less likely to reach the level of statistical significance. Voila – the treatment group and the placebo group end up having similar rates of side effects, and the drug company can conclude that the drug is completely safe.
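The arithmetic of this can be sketched with a toy example. All the numbers below are invented, and the simple two-proportion z-test is my own stand-in for whatever analysis a real trial would use. The drug doubles the rate of a side effect in both scenarios; but when only a fraction of events gets recorded, equally in both arms, the absolute difference shrinks and the test statistic drops below the conventional 1.96 significance cutoff.

```python
import math

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Plain two-proportion z-statistic with a pooled standard error."""
    p_a, p_b = events_a / n_a, events_b / n_b
    p_pool = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

n = 2000  # participants per arm (invented)

# Fully reported: 8% of the drug arm and 4% of the placebo arm
# experience the side effect (160 vs 80 events).
z_full = two_proportion_z(160, n, 80, n)

# Same underlying trial, but only one in eight events is actually
# recorded, equally in both arms: 20 vs 10 events (1% vs 0.5%).
z_under = two_proportion_z(20, n, 10, n)

print(f"fully reported: diff = 4.0 percentage points, z = {z_full:.2f}")
print(f"underreported:  diff = 0.5 percentage points, z = {z_under:.2f}")
# The relative risk (2x) is unchanged, but the absolute gap shrinks and
# z falls below 1.96, so "no significant difference" becomes easy to claim.
```

Note that nothing about the randomization or blinding protects against this: the underreporting is applied evenly to both arms, and the doubling of risk is still there in the data; it has simply been made too small, in absolute terms, to detect.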
Is that what’s happened here? Are the pharmaceutical companies hiding adverse events? Well, it’s very strange that the real world data shows a rate of serious adverse events that is several times higher than is found in the trials. It’s hard to see how that massive difference could be explained in any other way.
So, how big a problem is this?
Big. Very big. It should shake the very foundations of evidence based medicine. If the drug trials and the real world data show such wildly different rates of adverse events, then it really raises the question of how much we can trust the trials at all. It would be perfectly reasonable in this situation to say that all “evidence” produced by pharmaceutical companies is so suspect that it should be dismissed out of hand, and that only independently funded trials should be used as a basis for medical treatment decisions.
The problem with that is that it would mean saying goodbye to most of the trials that form the basis of modern medical treatment, and there is not much to replace them with. This issue could be solved over the longer term through large taxpayer-funded investments in new independent trials. But there’s no quick fix.
The problem is most acute when it comes to the many drugs in common use that only show marginal benefits, such as statins. If the rate of side effects is actually 300% to 400% higher than seen in the trials, then the harms of these drugs could easily outweigh the benefits. In other words, the cost-benefit calculation could shift entirely for many of the most commonly used drugs.
Ok, let’s wrap this up. What can we conclude?
Drug trials do not accurately represent rates of adverse events. It is likely that the true rate of side effects is often many times higher than that seen in drug trials.