Some of you will recall the story a few months ago that cell phones had been linked to a type of brain cancer. It made the rounds for several days and was a popular topic of news blogs and discussion forums. But why hasn't there been any followup? Well, actually there has been, but only in research circles, and, not too surprisingly, it has reversed the previous finding.
It appears that the prior evidence was overstated. Cell phones may yet be demonstrated to increase the risk of certain cancers, but the key point is that no research has reliably demonstrated this.
This is so typical and expected, at least expected by me, that it can serve as a useful lesson in skepticism of media reporting of science stories. It can also serve as an example of the difference between evidence-based medicine and science-based medicine. When the initial story hit, the coverage was very focused on authority figures and agency statements, which is typical of this kind of reporting. For more discussion of this phenomenon, I refer you to my video version of Ben Goldacre's essay on Bad Science journalism.
The World Health Organization put cell phones on the list of possible carcinogens, alongside such things as dry cleaning and artificial sweeteners. The level of evidence was ranked very low, which to a scientist puts it on a sort of global "to-do" list of promising research leads, but it was hardly a definitive identification of risk. That takes verification and a different type of analysis.
Let's go over the types of studies that scientists use to establish correlations. My goal is that the next time you're presented with these kinds of shocking discoveries, you'll be able to determine what kind of evidence it's based on.
The first two are types of epidemiology: retrospective and prospective studies.
1. The retrospective cohort study

In vastly simplified terms: grab 1000 people with disease A and compare them to 1000 people without disease A, then identify factors that may be correlated with a higher risk of the disease. The biggest problem with a retrospective study is what is called self-selection bias. Your experimental, "sick" group has already been diagnosed with the disease, and because they already have it, exposure to whatever the risk factor is has already happened. It's very easy to misidentify confounding factors.
If you look at the 1000 people in the cancer group and find that 721 of them are heavy cell phone users, while only 614 of the non-cancer group are, how strong is that evidence, considering all the other possible reasons for the differences?
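To put a rough number on that kind of gap, here's a back-of-the-envelope sketch in Python, using only the hypothetical 721-versus-614 split above (the odds ratio and two-proportion z-test are standard textbook calculations, not anything from an actual study):

```python
# Back-of-the-envelope look at the hypothetical 2x2 table above:
# 721/1000 heavy cell phone users among the cancer cases vs 614/1000 among controls.
import math

cases_exposed, cases_total = 721, 1000
ctrl_exposed, ctrl_total = 614, 1000

# Odds ratio: odds of heavy phone use among cases divided by odds among controls.
odds_ratio = (cases_exposed / (cases_total - cases_exposed)) / (
    ctrl_exposed / (ctrl_total - ctrl_exposed)
)

# Two-proportion z-test: is the 72.1% vs 61.4% gap bigger than chance alone explains?
p1 = cases_exposed / cases_total
p2 = ctrl_exposed / ctrl_total
pooled = (cases_exposed + ctrl_exposed) / (cases_total + ctrl_total)
se = math.sqrt(pooled * (1 - pooled) * (1 / cases_total + 1 / ctrl_total))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"odds ratio ~ {odds_ratio:.2f}")   # roughly 1.6
print(f"z ~ {z:.1f}, p ~ {p_value:.0e}")  # "significant", but silent about confounders
```

Even though the p-value comes out tiny for a sample of 2000, all that tells you is that the difference probably isn't chance; it says nothing about whether the phones, or some confounder, produced it.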
2. The prospective cohort study

Take 1000 people and follow them for a defined period of time. Document disease outcomes (did the people get brain cancers?) and try to identify which risk factors were most predictive of the outcome. This is the retrospective study run forward in time, in advance of the condition being studied. Did the highest quartile of meat-eaters get more gastric cancer? Did the lowest exercisers develop more diabetes?
The populations are often heterogeneous (and complex), and what looks like a simple association can really be a chain of linked factors: people who use cell phones appear to be at higher risk of neural-derived cancer, when the actual association may be with something about their income, their profession, or some other environmental or genetic factor. It could be a multifactorial linkage where stress and urbanization plus better medical care are responsible for the increase, and cell phone use is really just a proxy.
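To make that proxy problem concrete, here's a toy simulation with entirely made-up rates (the "urban professional" factor and the specific probabilities are my illustration, not data): a third factor drives both phone use and disease, so the crude comparison shows a "phone effect" that vanishes once you stratify on the real cause.

```python
# Toy simulation of a confounded "proxy" association. All rates are invented.
# An unmeasured factor (call it "urban professional") drives BOTH heavy phone use
# and the true disease risk; the phone itself does nothing in this model.
import random

random.seed(42)

def simulate_person():
    urban = random.random() < 0.5                              # hidden third factor
    heavy_phone = random.random() < (0.8 if urban else 0.2)    # phone use tracks it
    disease = random.random() < (0.020 if urban else 0.005)    # risk depends only on it
    return urban, heavy_phone, disease

people = [simulate_person() for _ in range(200_000)]

def disease_rate(group):
    return sum(disease for _, _, disease in group) / len(group)

users = [p for p in people if p[1]]
non_users = [p for p in people if not p[1]]
print(f"crude: users {disease_rate(users):.4f} vs non-users {disease_rate(non_users):.4f}")

# Stratify on the true cause: within each stratum the apparent "phone effect" vanishes.
for label, is_urban in (("urban", True), ("rural", False)):
    u = [p for p in people if p[0] == is_urban and p[1]]
    nu = [p for p in people if p[0] == is_urban and not p[1]]
    print(f"{label}: users {disease_rate(u):.4f} vs non-users {disease_rate(nu):.4f}")
```

In the crude comparison the phone users look roughly twice as likely to be sick, even though the phone does nothing in this model; within each stratum the difference disappears.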
3. The randomized controlled trial

In a randomized controlled trial (RCT), the researcher gets to control a lot more about the comparisons being made. You take humans or non-human animals and randomly assign them so that the populations are evenly mixed, then expose half the group to the experimental factor. That eliminates the self-selection bias and much of the confounding, because the researchers are intervening artificially.
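Here's a minimal sketch of why the randomization step does that work (hypothetical subjects and a made-up "hidden risk" trait, purely for illustration): random assignment balances even traits nobody thought to measure.

```python
# Sketch of why random assignment helps (hypothetical subjects, illustration only).
# Each subject carries a hidden trait nobody measured, e.g. a genetic risk factor.
import random

random.seed(7)

subjects = [{"id": i, "hidden_risk": random.random() < 0.3} for i in range(2000)]

def risk_fraction(arm):
    return sum(s["hidden_risk"] for s in arm) / len(arm)

# Self-selection: suppose the highest-risk people volunteer first, and we naively
# put the first 1000 into the treatment arm. The arms end up badly imbalanced.
subjects.sort(key=lambda s: s["hidden_risk"], reverse=True)
naive_a, naive_b = subjects[:1000], subjects[1000:]
print(f"self-selected arms: {risk_fraction(naive_a):.2f} vs {risk_fraction(naive_b):.2f}")

# Randomization: shuffle before assignment and the hidden trait balances out (~0.3
# in each arm), so any later difference in outcomes can't be pinned on it.
random.shuffle(subjects)
treatment, control = subjects[:1000], subjects[1000:]
print(f"randomized arms:    {risk_fraction(treatment):.2f} vs {risk_fraction(control):.2f}")
```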
This type of study is not always an option for ethical reasons. We choose not to expose humans to excessive risk if we can help it. This is a big improvement over purely epidemiological studies, but it also has some weaknesses. We still have the multifactorial problem, where multiple behaviors can be linked, and it's very difficult to untangle all the variables of genetics, environment, nutrition, case history, and sheer luck. There could still be a crazy mixture of lots of risks. Which is why we need...
4. Mechanistic research

This is the kind of work that I've mostly done. The scientist, guided by less precise observational or epidemiological data, and complementing the experimentally rigorous data from controlled trials, looks for ways that the experimental factor could cause the correlated effect. This is usually done in some format that allows for almost complete control: identical, clonal cells grown in identical flasks, transgenic mice, or sibling animals. Even within this category there are shades of resolution. Molecular biology is usually pretty good at producing sensitive and reproducible results: very small margins of error, and results that can be reproduced by different labs or with different techniques.
We focus on something called method concordance, which is where my username comes from: arriving at the same conclusion by multiple independent lines of inquiry. In the case of cell phones, we'd need to identify a plausible and testable way that the radiation, heat, or emotional cues associated with cell phones can interfere with normal cellular development. We'd then attempt to replicate that finding in the simplified model of genetically identical animals or cell culture.
If the mechanistic outcomes contradict the prospective study, then the smart money is on the more rigorous lab research and less on the epidemiology. Notice how we've gone from the very complex and poorly controlled to the very tightly controlled and rigorous design. That doesn't mean the mechanistic research is better than a prospective study. After all, it would take a very long time to test all the possible hypotheses, and prospective studies provide great clues about what's worth further investigation.
The problem arises when epidemiological data is presented as confirmed fact rather than as the first step in a long process. I also often see a sort of reverse error: mechanistic data showing that some new drug can cure cancer or another disease in a flask of cells, or in some poor mouse with a gutful of injected cancer cells, is touted as a major breakthrough in cancer research before it goes out to a controlled trial and then on to prospective studies.
None of these studies is without merit, and they add up to a total picture of the real underlying causality.
How do we differentiate between evidence-based medicine and science-based medicine?
Evidence-based medicine (EBM) is any medical practice that is based on a preponderance of evidence for its efficacy and safety. It's a pretty broad umbrella, though, and it includes things that we don't understand well, or applications of practices that seem to work but shouldn't. There's nothing inherently wrong with EBM, but it's not on the same footing as science-based medicine.
Science-based medicine is a subclass of EBM. It includes only those practices that we understand to a finer degree: we know that they're effective, but we also understand WHY they're effective. The mechanisms are clear enough that we know how best to use them in the clinic, when they might not be effective, and what changes can be made to improve efficacy and safety.
Key to distinguishing the two is that we have more than just evidence that something causes an improvement: we can evaluate that epidemiological data in light of a plausible mechanism. For example, laetrile, a popular alternative cancer treatment derived from the cyanide-releasing compounds found in peach pits, is well characterized in terms of chemical structure and bioactivity.
It possesses no properties that would make it a good cancer drug. We can predict that with some confidence. However, it has a long and storied history with quack medicine dating to the mid-1920s, and so when evaluated in randomized controlled trials, sometimes it shows evidence of effectiveness and sometimes it doesn't. It depends on the power of the test and who's running it.
Laetrile might be considered evidence-based medicine, albeit on very, very weak evidence. It is not, however, science-based medicine, because there is absolutely nothing about how it acts in the body that would make it effective. Cases like laetrile, chelation therapy for heart disease or autism, or vitamin supplements are sometimes indicated by evidence, but not by any deep understanding of mechanism.
So, returning to the example of cell phones, a big challenge is the lack of mechanism. The radiation given off by modern cell phones carries too little energy to reliably damage DNA or cross-link proteins. There's never been a good, well-controlled study on cells that was able to replicate the effect suggested by prospective or retrospective epidemiological studies.
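For a sense of why, here's a rough order-of-magnitude check (standard physical constants, a nominal 2 GHz carrier, and a ballpark covalent bond energy of a few electron-volts; the exact figures are illustrative assumptions):

```python
# Order-of-magnitude check: energy carried by one photon at cell-phone frequencies
# versus the energy needed to break a covalent bond in DNA (ballpark figures).
PLANCK_H = 6.626e-34      # Planck constant, J*s
JOULES_PER_EV = 1.602e-19

frequency_hz = 2.0e9      # ~2 GHz, a typical cell-phone band (assumed for illustration)
photon_energy_ev = PLANCK_H * frequency_hz / JOULES_PER_EV

covalent_bond_ev = 3.0    # C-C / C-N bonds are on the order of a few electron-volts

print(f"photon energy : {photon_energy_ev:.1e} eV")             # ~1e-5 eV
print(f"bond energy   : {covalent_bond_ev:.1f} eV")
print(f"shortfall     : ~{covalent_bond_ev / photon_energy_ev:.0e}x")
```

On this rough accounting, a single photon at these frequencies falls several hundred thousand times short of bond-breaking energy, which is why a direct DNA-damage mechanism is so hard to argue for.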
This makes scientists a little hesitant to jump behind a single study of a type that is notorious for false associations and lacks rigorous experimental control. Now, there is no process to show that something does NOT cause cancer; that's simply an untestable hypothesis in practical terms. So how does a proper skeptic view the outcomes of these contradictory studies? Simple: with an open mind and a rational estimate of the risks.
We don't have to cast our vote for guilty or not guilty before the trial is finished. Like good jurors, we can wait until all the evidence is in. That's what rational skepticism is all about: withholding judgment until sufficient evidence is presented to accept or reject a given hypothesis. We understand this when we're in the jury box, but forget it the moment we step out of the courtroom.
It's more than a coincidence that so many scientists are skeptics and rationalists. The process becomes a way of life, a way of thinking, and a way of viewing the world. Turning those gears off on the weekends is just something some of us can't do. We care about what's true, and we know a reliable process for determining it.

Credit:
C0nc0rdance