Many vocal US vaccine scientists, like Paul Offit, claim that the safety of ALL approved vaccines has been proven beyond doubt. I demonstrate that Paul uses very unreliable evidence to make claims of vaccine safety. I show concrete examples from nutrition and cardiology where similarly weak evidence led us astray. I also show that he cherry-picks only the studies that validate his biases with weak evidence and completely ignores similarly weak, or sometimes much stronger, evidence that goes against his biases. Sometimes, he even misrepresents the studies he cherry-picks. He makes the case for the safety of vaccines based only on very inadequately powered (sized) animal experiments and human observational studies which, although often large, are prone to confounders and P-hacking.

In contrast, I present the harms of some non-live vaccines using evidence from Randomized Controlled Trials (RCTs). In every other field of science, RCTs are considered the gold standard of evidence because randomization lets them separate correlation from causation, but somehow Paul ignores them even when they are available and instead cites much less reliable evidence.

Outline: First, I review the evidence Paul Offit presents to claim that the safety of childhood vaccines is settled science. Although I mainly focus on this paper in which he claims to settle the question, I also discuss some of his more recent writings. Then, I present the evidence, including RCTs and quasi-RCTs, that, in my opinion, provides the strongest case for the harms of some non-live vaccines, e.g. DTP, the flu shot, and the rabies vaccine.

Observational Studies

Most of the effort to prove that vaccines do not cause harm has been focused on showing that the MMR vaccine doesn’t cause autism. This is likely because the first claim of a link between vaccines and autism was made for the MMR vaccine, by Andrew Wakefield. Paul mainly cites large observational studies to claim that MMR does not cause autism. He finds this study to be the most compelling. It is a large retrospective observational study. Since then, other, larger retrospective observational studies have come out, e.g. this one, which many scientists have cited recently to claim that vaccines do not cause autism. Retrospective observational studies are very unreliable evidence because of the likelihood of confounders and P-hacking, as I will explain below. However, other studies – including some RCTs – have shown that similar live-virus vaccines have dramatic benefits on some very important outcomes, like the death rate of children in low-income countries. So I personally think MMR is overall beneficial, and I move on to the evidence Paul cites for the safety of non-live vaccines.

After discussing MMR, Paul considers the issue of thimerosal in vaccines. For the safety of thimerosal-containing vaccines, Paul cites his own previous review paper on vaccines. More recently, in his substack post about the safety of thimerosal, he cites 6 observational studies that he claims show a lack of association between thimerosal-containing vaccines and autism.

My goal here is to look at the best evidence he provides for the safety of a non-live vaccine. 3 of the 6 papers (this, this, and this) look at the change in the incidence of autism when thimerosal was added to or removed from vaccines. That research is of very limited value to me because I think most parents don’t give a shit about whether the safety issues of vaccines come from thimerosal, aluminium, preservatives, or whatever else is in the vaccines: what they want to see is evidence of the safety of the vaccines themselves.

The 3 other studies are observational studies comparing autism between those who got certain vaccines, which contained thimerosal at the time, and those who didn’t get them at all (not even the thimerosal-free version). Below, I look deeper into the largest of those 3 studies, the Verstraeten study. It looked at the records of 3 US HMOs (pseudo-named A, B, C) to compare the rate of neurodevelopmental issues between those who were more exposed to thimerosal via 3 non-live vaccines – DTP (Diphtheria, Tetanus, Pertussis), Hepatitis B, and HiB – and those who were less exposed, i.e. got fewer or no doses of those 3 non-live vaccines. Although they measured exposure to thimerosal rather than exposure to doses of non-live vaccines (which is what I am mainly interested in), the thimerosal exposure was calculated exclusively from doses of these 3 non-live vaccines, so it is a surrogate for exposure to doses of non-live vaccines.

The results of the study weren’t what I was expecting, given that Paul cited this paper as evidence of the safety of vaccines:

In phase I at HMO A, cumulative exposure at 3 months resulted in a significant positive association with tics (relative risk [RR]: 1.89; 95% confidence interval [CI]: 1.05–3.38). At HMO B, increased risks of language delay were found for cumulative exposure at 3 months (RR: 1.13; 95% CI: 1.01–1.27) and 7 months (RR: 1.07; 95% CI: 1.01–1.13). In phase II at HMO C, no significant associations were found. In no analyses were significant increased risks found for autism or attention-deficit disorder.

I thought that perhaps Paul reasons that because the results are inconsistent between different HMOs, it must not be causation. As I will discuss below with many concrete examples from the history of medical/nutrition science, inconsistent and often-negative correlations have often turned out to be causation. But digging deeper, what I found suggests more bias and double standards than faulty reasoning. The sizes of HMOs A, B, and C after applying the exclusion criteria were 13,337, 110,833, and 16,717, respectively. Usually, the larger the dataset, the more reliable the findings, even though for retrospective observational studies, confounders and P-hacking are almost always a reliability problem. Also, a smaller study may not be able to find a small effect: non-live vaccines may be one of several factors causing language delays. We can’t really rule out the possibility that HMOs A and C failed to find the effect seen in HMO B simply because they were at least 5x smaller.

Even though here the paper's authors conclude that the findings are inconsistent, in other cases, when a larger study shows the results they like, they completely run with it and do not conclude inconsistency. For example, the flu vaccine during the first trimester of pregnancy had previously been associated with autism, but when in 2020 a larger study came along showing a lack of association, Paul wrote what appears to be at least a double standard, if not a blatant lie:

The flu vaccine is safe during pregnancy. This study, as well as many others, have consistently shown that flu vaccine is safe.

(In the next section, I will show that RCTs of flu vaccines tell a different story)

In summary, I think Paul should not have used the Verstraeten study to argue for the safety of thimerosal or non-live vaccines. I then looked at the Andrews study, which was the next largest on his list. This was also a retrospective observational study, of 109,863 children in the UK. It compared various developmental outcomes between those who got more doses of DT or DTP vaccines and those who got fewer. Results:

Only in 1 analysis for tics was there some evidence of a higher risk with increasing doses (Cox’s HR: 1.50 per dose at 4 months; 95% confidence interval [CI]: 1.02–2.20). Statistically significant negative associations with increasing doses at 4 months were found for general developmental disorders (HR: 0.87; 95% CI: 0.81–0.93), unspecified developmental delay (HR: 0.80; 95% CI: 0.69–0.92), and attention-deficit disorder (HR: 0.79; 95% CI: 0.64–0.98). For the other disorders, there was no evidence of an association with thimerosal exposure.

Although they found more doses to be associated with tics, just like the previous study found in HMO A, both of those findings could be due to confounders. Interestingly, for attention-deficit disorder, they found a negative association!

Although this may suggest that more doses of the DT/DTP vaccine at that time did not cause attention-deficit disorder (ADD), we need to understand the limitations of the design of this study, a design that is very common in studies used to “prove” the safety of vaccines:

observational studies to prove/disprove causality: pitfalls

It is well known that correlation does not imply causation. What appears less known among vocal vaccine scientists like Paul is that a negative or zero correlation also does not imply a lack of causation, even though that seems obvious to statisticians. The reason for both is the same: confounding variables can shift the observed correlation away from the true causal strength in either direction, and the shift can cross 0 and flip the sign. The only method that obviously works is a Randomized Controlled Trial (RCT), where we toss a coin to determine whether someone gets the vaccine or a placebo. In a large trial, randomization ensures that all confounders are about the same in both the vaccine and placebo groups.

All other methods require making a lot of assumptions which are certainly worthy of questioning. My concern is not theoretical: in the field of medical drugs and nutrition, there are many, many cases where some intervention (e.g. Vitamin E, Hormone Replacement Therapy) was found to be slightly helpful in large observational studies but turned out to be somewhat harmful in large RCTs, which can reliably judge causality. I will summarize those cases and sketch the parallels with the vaccine safety evidence. I will use the Andrews study to illustrate the pitfalls of relying on observational studies (instead of RCTs) for proving safety, as Paul does.

The negative correlation seen above may be due to any of several possibilities (not an exhaustive list):

  1. DTP has nothing to do with ADD
  2. DTP causes a reduction in the chance of ADD, close to the mean HR of 0.79
  3. DTP causes a large reduction in the chance of ADD and some other confounding factor cancels out most of that benefit. A possible confounder could be that people who reject vaccines tend to be “naturalists” and thus also do other beneficial things like rejecting Tylenol (associated with poor neurodevelopment in early life, though this may or may not be causal) or pesticide-laden food. At least some pesticides have been proven to be harmful after decades of use.
  4. DTP causes an increase in the chance of ADD and some other confounding factors more than cancel out the slight increase. There can be many confounding factors, e.g. children who are vaccinated may have mothers who had better access to care and adequate nutrition during pregnancy. Nutrients such as folate and B12 are known to play an important role in neurodevelopment in the first trimester.

Without randomization, it is hard to determine exactly which possibility we are in. To reduce the skew due to confounders, observational studies can “adjust” the analysis for various suspected confounding variables to ensure that those suspected confounders are roughly the same in the comparisons. While this can help, it can introduce its own problems, especially in retrospective observational studies, as we will discuss below. Also, nobody knows all the factors that can cause autism/ADD. In contrast, randomization evens out even the unknown factors in both groups. (There is a quasi-randomized trial for DTP, and it shows safety concerns. We will discuss it below.)
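To make this concrete, here is a minimal simulation – with entirely made-up numbers, purely for illustration – of possibility 4 above: an exposure that truly multiplies risk by 1.3 nonetheless shows a crude relative risk well below 1, because a single strong confounder (think better access to care and nutrition) is correlated with the exposure. Stratifying by the confounder recovers the true harmful effect, but that only works when the confounder is known and measured:

```python
import numpy as np

# Toy simulation (hypothetical numbers): a harmful exposure that *looks*
# protective in the crude analysis because of one strong confounder.
rng = np.random.default_rng(0)
n = 1_000_000

# Confounder C: e.g. a health-conscious household with better care/nutrition.
c = rng.random(n) < 0.5
# The exposure is far more common in C=1 households.
exposed = rng.random(n) < np.where(c, 0.9, 0.2)
# True causal model: exposure multiplies outcome risk by 1.3 (harmful),
# while C cuts the baseline risk from 8% to 1%.
risk = np.where(c, 0.01, 0.08) * np.where(exposed, 1.3, 1.0)
outcome = rng.random(n) < risk

def rr(mask):
    """Relative risk of the outcome, exposed vs unexposed, within mask."""
    return outcome[mask & exposed].mean() / outcome[mask & ~exposed].mean()

print(f"crude RR:      {rr(np.ones(n, bool)):.2f}")  # well below 1: looks protective
print(f"RR within C=1: {rr(c):.2f}")                 # ~1.3: the true harmful effect
print(f"RR within C=0: {rr(~c):.2f}")                # ~1.3
```

In real data, confounders are neither this clean nor fully known, which is exactly why randomization matters: it balances even the confounders nobody has thought to measure.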

How similar quality evidence led us astray: HRT

The safety evidence in the Andrews study looks similar to what the evidence of the safety of Hormone Replacement Therapy (HRT) looked like prior to the first RCT about it: large observational studies consistently showed that HRT was associated with fewer cardiovascular events (e.g. heart attacks, strokes). For example, the Nurses Health Study (NHS) found that “the risk for major coronary events was lower among current users of hormone therapy, including short-term users, compared with never-users (relative risk, 0.61 [95% CI, 0.52 to 0.71])”. The negative correlation of HRT with cardiovascular outcomes in NHS was even stronger than the slightly negative correlation of DTP with ADD in the Andrews study. Yet, when the Women’s Health Initiative (WHI) did the first Randomized Trial to reliably assess the causality, it was found to actually increase the risk of not only heart disease but also breast cancer:

Estimated hazard ratios (HRs) (nominal 95% confidence intervals [CIs]) were as follows: Coronary Heart Disease, 1.29 (1.02-1.63) with 286 cases; breast cancer, 1.26 (1.00-1.59) with 290 cases; stroke, 1.41 (1.07-1.85) with 212 cases; Pulmonary Embolism, 2.13 (1.39-3.25) with 101 cases

If confounders can make something that causes more heart disease appear to be associated with less heart disease, they can surely make something that causes more ADD appear to be associated with less ADD. Had cardiologists been as biased as vaccinologists and declared “settled science” prematurely based on the consistent negative correlation of HRT with heart disease, many women would have lost their lives to heart disease and breast cancer.

How similar quality evidence led us astray: beta-carotene for cancer

For many decades, dietary and supplemental beta-carotene was overall associated with a lower risk of cancer in observational studies, although there were some conflicting studies:

[Figure: observational studies of beta-carotene and cancer risk]

But when the CARET study tested beta-carotene and retinol supplementation for cancer prevention in an RCT, it found them to actually increase the risk of many cancers. (One caveat is that it is possible that the addition of retinol screwed up the benefits of beta-carotene)

Retrospective observational studies: additional pitfalls

Observational studies can be prospective or retrospective. In a prospective study, ideally, the scientists pre-declare exactly what they will measure and which few hypotheses they are testing. Then they start the study and observe the differences as they unfold. In retrospective studies, scientists go back and look through historical records to find the differences. Retrospective observational studies bring their own set of additional pitfalls:

P-hacking

Designing a retrospective observational study requires making many, many choices: which source to collect data from, how to verify the data's accuracy, which subjects to include/exclude, which time period to consider, how to precisely define the variables being measured (e.g. what exactly counts as autism), and which suspected confounding factors to adjust for. It is often very easy to make these choices so as to obtain any conclusion you want: often there are sets of choices that arrive at completely opposite conclusions. In RCTs or prospective observational studies, this can be avoided by pre-specifying and publishing the exact protocol and analysis plan before starting the trial, and not changing it after seeing how things turn out. This does not work for retrospective designs because we are looking into the past: there is usually no reliable way to ensure that the analysis and protocol were never modified after the researchers saw the data, which typically existed even before the researchers conceived the study.
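A small simulation (purely illustrative, with made-up numbers) shows how cheap these false positives are. Even in a null world where the exposure truly does nothing, an analyst who tries 20 defensible-looking combinations of subgroups and exclusion windows will "find" a statistically significant association in a large share of datasets:

```python
import numpy as np
from math import erfc, sqrt

# Toy simulation: exposure has NO effect on the outcome (true RR = 1),
# yet trying 20 analysis choices per dataset routinely yields p < 0.05.
rng = np.random.default_rng(0)

def pvalue(outcome, exposed):
    """Two-sided two-proportion z-test (normal approximation)."""
    p1, p2 = outcome[exposed].mean(), outcome[~exposed].mean()
    p = outcome.mean()
    se = sqrt(p * (1 - p) * (1 / exposed.sum() + 1 / (~exposed).sum()))
    return erfc(abs(p1 - p2) / (se * sqrt(2)))

def one_dataset(n=20_000):
    exposed = rng.random(n) < 0.5
    outcome = rng.random(n) < 0.03        # independent of exposure
    age = rng.integers(0, 10, n)          # a covariate the analyst can slice on
    # 20 analysis choices: age subgroups, plus "exclude children below age k"
    masks = [age == k for k in range(10)] + [age >= k for k in range(10)]
    return min(pvalue(outcome[m], exposed[m]) for m in masks) < 0.05

hits = sum(one_dataset() for _ in range(100))
print(f"null datasets with >=1 'significant' finding: {hits}/100")
```

With 20 analyses and a 5% threshold each, far more than 5% of pure-noise datasets produce at least one "significant" result; only pre-registration (or honest multiplicity correction) closes this loophole.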

For example, in the Andrews study, 4216 children were excluded from the analyses, as shown in the flowchart in the paper. [Figure: the study's exclusion flowchart]

To put this in perspective, there were a total of 230 ADD diagnoses. Were these exclusions biased, even unintentionally, towards the vaccinated or the unvaccinated? Were the exclusion criteria cherry-picked to obtain the conclusion? Were the adjustment variables cherry-picked to obtain the conclusion? These are impossible to know for sure in a retrospective design. For example, they do not give a convincing rationale for excluding children who got the HepB or flu vaccines:

Children were also excluded when they received either hepatitis B or influenza vaccination in the first 6 months of life because such children are likely to be an atypical subgroup

Again, my concerns here are not theoretical. A Cornell professor of nutrition was found to actually encourage his students to indulge in P-hacking. He was eventually fired. But the CDC was luckier when it indulged in P-hacking to show that mask mandates were associated with fewer pediatric Covid-19 cases. It likely had data about mask mandates and pediatric Covid cases for most of the US but only published papers about snippets of the data that supported its policy of mask mandates. For example, when non-CDC public health researchers looked at a CDC paper supporting masking mandates for kids in schools, they found that extending the analysis by just a few months flipped the conclusion. The CDC never updated its paper and continued to use it to promote masking in schools and to censor its critics even after the new data came out.

One way to mitigate P-hacking is to release the full raw data (after pseudonymization), so that others can check whether there are reasonable analyses that come to the opposite conclusion. In the case of the CDC masking study, the open data is what enabled other researchers to spot the P-hacking. But unlike fields like computer science, medical science is in a dismal state when it comes to openness and reproducibility.

animal experiments

Often, animal experiments are easier and/or more ethical to do than human experiments. The most interesting question to ask is whether vaccines, especially non-live vaccines, cause any harm to animals. One could do a large randomized trial and give weight-adjusted doses of the entire vaccine schedule to the experimental group and a saline placebo to the control group. This would provide solid evidence about the safety of human vaccines in animals. But even that hasn’t been done, contrary to what Paul claims. Once it is done, the “only” trouble would be generalizing the animal results to humans.

To argue the safety of the entire vaccine schedule, Paul mainly cites this paper, which gave the full 2008 vaccine schedule to 12 rhesus macaques and measured various biochemical and behavioural outcomes. The paper itself states: “Autism is a childhood neurodevelopmental disorder affecting approximately 1 in 70 children in the United States.” Even people with no formal education in math or probability theory can understand that an experiment on only 12 subjects cannot detect a condition as rare as 1 in 70. The size needs to be several times 70 to detect the effect with P<0.05. So this experiment essentially only tells me that vaccines do not cause autism in almost everyone who receives them, something I already knew. Also, random allocation to the experimental vs placebo group could have reduced the chances of some biases, but in such a small experiment, even randomization cannot rule out confounders.
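A back-of-the-envelope power calculation makes the size problem concrete. (This uses the standard normal-approximation sample-size formula for comparing two proportions; the numbers are illustrative, not taken from the paper.)

```python
from math import ceil

# With only 12 animals and a 1-in-70 condition, the experiment will
# usually contain zero affected subjects at all.
base = 1 / 70
print(f"P(>=1 case among 12 subjects): {1 - (1 - base) ** 12:.0%}")  # ~16%

def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Normal-approximation sample size per arm (two-sided alpha=0.05, 80% power)."""
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * var / (p1 - p2) ** 2)

# Animals per arm needed to detect even a doubling of the 1-in-70 rate:
print(f"per-arm n to detect a doubling: {n_per_arm(base, 2 * base)}")  # 1608
```

So roughly 5 times out of 6, a 12-animal experiment contains no case of the condition in either group, and detecting even a doubling of risk would take on the order of 1,600 animals per arm.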

The paper further says:

No neuronal cellular or protein changes in the cerebellum, hippocampus, or amygdala were observed in animals following the 1990s or 2008 vaccine schedules.

Even this doesn’t tell me much. Nobody knows the exact biochemical mechanisms by which autism/ADD occurs, so implying that we can detect the biochemical changes that cause autism is hubris. The size issue makes this largely irrelevant anyway.

Finally, the paper says:

Analysis of social behavior in juvenile animals indicated that there were no significant differences in negative behaviors between animals in the control and experimental groups.

The behaviours they measured were very subjective: Passive, Explore, Play, Sex, Aggression, Withdrawal, Fear-disturbed, Rock-huddle-self-clasp, Stereotypy. Even if they had a good way to quantify these, there is huge room for P-hacking unless they pre-registered their protocol, which doesn’t seem to be the case. Also, are these even relevant to autism? Another study in the same animal species measured different outcomes, and it did find a statistically significant delay in the acquisition of 3 of the 9 reflexes in the vaccinated group. Data for all reflexes is below:

[Figure: acquisition times for all 9 reflexes in vaccinated vs unvaccinated macaques]

Both animal studies are too small. It is certainly possible that the delays found in the latter experiment were a fluke due to the small size. Larger animal trials should be done, after experts in animal behaviour and growth, scientists on both sides of the vaccine debate, and scientists who specialize in the reliability of medical evidence come together to define good criteria and trial designs for measuring autism or other harms in animals. In the next section, I will describe a large dog RCT showing harms of the rabies vaccine in dogs.

studies/RCTs showing harms of non-live vaccines

Some of the most reliable evidence of harms comes from the Bandim research group, which was responsible for the rollout of vaccines and the meticulous health recording of children in Africa for several decades, going all the way back to when vaccination campaigns first began there. They made many extremely surprising observations about the effects of vaccines: vaccines have a broader impact on the immune system, affecting even many other diseases, both communicable and non-communicable.

Many of these findings have been replicated by other groups around the world, and some have recently led to trials evaluating new ways to treat old diseases like type-1 diabetes. They found that live-virus vaccines train the immune system to fight even diseases unrelated to the target disease. To some extent, this is similar to how a machine learning system trained to detect cars can generalize from the training set to detect cars that are significantly different from those in the training set. It also makes evolutionary sense, as a live-virus vaccine works just like a regular infection: the virus goes into the body and replicates, except that the virus is a variant attenuated so as not to cause disease. Over billions of years, our immune systems have learned to generalize from such “training data” to better fight even other diseases the body encounters in the future.

There is solid proof of the above phenomenon, unlike the unreliable correlational evidence Paul cites. For example, in low-birthweight children (<2500g), the BCG vaccine was found in an RCT to cause a whopping 43%(!) reduction in the infectious disease mortality rate (MRR, 0.57; 95% CI, 0.35–0.93). Tuberculosis, the intended target of BCG, does not cause that many deaths in Denmark, so the BCG vaccine appears to be teaching the newborn’s immune system to better fight other infections too. There is a minor, fully-disclosed concern of P-hacking here, as the original trial protocol (pre-declared here, before the trial) listed the overall death rate as the main outcome, although the death rate due to some specific infections was listed as a secondary outcome. BCG did not reduce non-infectious deaths, so when considering all-cause deaths (combining infectious and non-infectious deaths), the relative reduction is diluted to 30%, with the 95% confidence interval slightly crossing 1 (MRR, 0.70; 95% CI, 0.47–1.04).

In a more recent RCT, the BCG vaccine was given to elderly people, and it increased the time to first post-vaccine infection from a median of 11 weeks in the placebo group to 16 weeks in the experimental group. This trial was also pre-registered (here), and the time to first post-vaccine infection was indeed the primary declared outcome. An unplanned analysis showed: “most of the protection was against respiratory tract infections of probable viral origin (hazard ratio 0.21, p = 0.013)”. In contrast, tuberculosis, the intended target of BCG, is caused by a bacterium.

The above promising research has started to get some attention in the US. Mass General Hospital in Boston is currently running a trial to investigate whether the BCG vaccine can treat type-1 diabetes.

The Bandim research group also found some unexpected harms, but only for non-live vaccines, e.g. IPV, HepB, and DTP. (The virus in the IPV (inactivated polio vaccine) and HepB vaccines has been killed, and the immune system normally does not respond significantly to these dead viruses unless an adjuvant like aluminium is added to kill a “small number” of human cells at the injection site, which makes the immune system mount a strong response. It may be a very clever idea, but human bodies are so complex that even clever ideas usually have unforeseen consequences.) Live-virus vaccines do not need any adjuvant, as the body already mounts a strong immune response to the live, replicating but attenuated virus in the vaccine. The Bandim research group found that non-live vaccines have the exact opposite effect on non-target diseases as live vaccines: although they protect against the target disease, they make children more vulnerable to death from other diseases.

As far as I know, the most solid proof of this phenomenon comes from this paper. It is an accidental quasi-randomized trial enabled by the way the DTP vaccine was first rolled out in Guinea-Bissau in 1981. The vaccine appointments were made based on birthdays. As a result, some kids got the DTP vaccine at 3 months of age and some got it at 5 months of age (and some in between), leaving a small window in which to do a comparison. They found that DTP-vaccinated children died at 5x the rate of not-yet-DTP-vaccinated children (9.98x among girls, 3.93x among boys).

The only possible confounder here is the birthday of a child. It is theoretically possible that parents likely to have healthier babies somehow conceived their babies later during the 2-3 month period exactly 9+3 months before the DTP rollout period. But conception timing is quite random, and it seems extremely likely that the birthdays within the small period were independent of any factor affecting mortality, making this essentially a randomized controlled trial, capable of judging causality, unlike the studies Paul cited.

Another criticism could be that this analysis was done retrospectively, so there is a chance of P-hacking. But the outcome they measured (death rate) is the most important in any medical experiment and is very objective, so there were no complex choices to be made there anyway. Also, their exclusion criteria were extremely straightforward and basically only excluded children with unknown relevant information (e.g. vaccination dates). One could question why they excluded orphans, but there were only 4 anyway. Compare the simplicity of this design to the complex exclusion criteria and subjective evaluation metrics (evaluated in an unblinded manner) of the studies that Paul cited.

At first, the 5x increase in deaths may look extreme. But the same phenomenon (an increase in the death rate, especially in girls) has been observed in many other low-income countries when DTP was rolled out, as summarized by figure 3 in this meta-analysis of observational studies (not RCTs), which I have reproduced below:

[Figure 3 from the meta-analysis: change in death rates after DTP rollout across countries]

In all the countries, most of the increase in deaths happened in girls. You can listen to a 25-minute video lecture in which the lead of the Bandim research group, Peter Aaby, describes his decades of research and the negligent reaction from the WHO. One caveat is that all these studies were in low-income countries, and some aspects of high-income countries (e.g. better medical/urgent care infrastructure, better sanitation, better nutrition, etc.) may be able to prevent some or most of the deaths from the increased susceptibility to infections caused by DTP.

The phenomenon has been observed not just for DTP but also for other non-live vaccines. For example, when all the RCTs of non-live flu vaccines (IIV shots, not FluMist) in pregnant women were combined in a meta-analysis, it found a 2x increased risk of non-influenza infectious adverse events in mothers and a 1.36x increase in infants:

The meta-analysis for maternal all-cause mortality provided a RR of 1.48 (95% CI = 0.52–4.16). The estimates for miscarriage/stillbirth and infant all-cause mortality up to 6 months of age were 1.06 (0.78–1.44) and 1.11 (0.87–1.41), respectively. IIV was associated with a higher risk of non-influenza infectious adverse events, with meta-estimates of 2.01 (1.15–3.50) in women and 1.36 (1.12–1.67) in infants up to 6 months of age. Thus, following a pattern seen for other non-live vaccines, IIV was associated with a higher risk of non-influenza infectious adverse events. To ensure that scarce resources are used well, and no harm is inflicted, further RCTs are warranted.

The flu vaccine was supposed to protect the mother and the infant, but according to the most reliable evidence, the maternal and infant death rate trended in the opposite direction.

The phenomenon has been confirmed even in animals! An owner-blinded RCT of the (non-live) rabies vaccine in dogs found that the rabies vaccine increased the death rate of female dogs 3.09x (95% CI 1.24–7.69). Thankfully, the human rabies vaccine is not given to everyone, only to high-risk individuals.

One clean experiment where the only difference was the vaccine technology (live vs non-live) was this prospective observational study, in which 64 of 314 children were given the oral polio vaccine (a live-virus vaccine) instead of IPV (an inactivated/killed-virus vaccine), and the rates of ear infections were compared. “A significant difference was seen at the age of 6–18 months (IRR = 0.76 [95% CI 0.59–0.94], P = 0.011) and was particularly clear among children, who attended daycare (IRR 0.37 [95% CI 0.19–0.71], P = 0.003).” It is sad that they did not randomize and instead matched the 2 groups on age, gender, and HLA type. Unlike the other live-virus vaccines, the virus in the oral polio vaccine very rarely seems to mutate and regain the ability to cause disease, so it is not clear whether OPV is overall better than IPV.

Of the 3 (non-live) vaccines above for which I showed evidence of harm, only 1 is still in use in the US (the flu shot). DTP was replaced by DTaP in the US (but it is still being used in many low-income countries). DTaP contains only parts of the dead pertussis bacterial cells instead of the whole dead cells in DTP. Whether DTaP is just as bad as DTP remains to be seen. If/when DTaP replaces DTP in low-income countries, the replacement could be done in a randomized and blinded manner to get high-quality evidence comparing DTP vs DTaP, especially if the rollout cannot happen all at once.

In this large retrospective observational study of 883,160 children in Denmark, delaying DTaP doses by 1 or more months was found to be associated with reduced chances of atopic dermatitis in a dose-dependent manner, confirming an earlier smaller Australian study. This study is much larger than the studies Paul cites for the safety of non-live vaccines. But my criticisms of his studies apply to this one as well: possibility of confounders and P-hacking.

When there is such weak evidence of harm, e.g. for DTaP, instead of the rat race of low-reliability studies that dominates American vaccine science/religion, we should do randomized trials to reliably assess whether there is a safety concern. A large randomized trial to measure the reduction in atopic dermatitis from delaying DTaP by a few months should be easy to design.

Conclusion

In summary, although many live vaccines like MV and BCG are very beneficial and have likely saved millions of lives overall, the evidence for the safety of non-live vaccines provided by vocal vaccine advocates like Paul Offit is very unreliable, in ways that have already misled us in fields like nutrition and cardiology. There is much more reliable evidence (RCTs) of the harms of some non-live vaccines. Inspired by the Aaby DTP quasi-RCT, when new vaccines are introduced, the rollout should be randomized at a population level, so that we can detect even the “rare” short-term adverse events (post-mRNA-2nd-dose myocarditis usually happened within a week), if not the long-term ones. For many existing non-live vaccines, we urgently need long-term randomized trials to obtain reliable evidence of safety. Meanwhile, like the rabies vaccine, some non-live vaccines like HepB could be delayed or more narrowly targeted to the population actually at significant risk, e.g. children of parents who have not tested negative for Hepatitis B.

Found something wrong in this article? Please make an issue here or submit a pull request to this file.