The exercise they were engaged in was what's called a meta-analysis. This is a useful tool in standard medical research, because it pools all the clinical data about a particular medicine or treatment in order to quantify its overall benefit or effect. In theory, the entire process ought to be objective, but in practice it's not. The "rules" of meta-analysis allow researchers to take the quality of the individual trials into account, and that judgment call compromises the objectivity of the process.
The 2005 Swiss study on homeopathy is a case in point. The researchers initially analyzed 110 trials, and found "a beneficial effect", i.e., homeopathy worked. However, they then rejected 102 of these trials as being of inferior quality. Among those rejected were eight trials on upper respiratory tract infection, whose findings were so positive that the authors decided "the results cannot be trusted". Ultimately, therefore, their final meta-analysis was confined to just eight studies, which, unsurprisingly, showed no beneficial effect of homeopathy.
"This was a dubious and biased study," says Dr Peter Fisher, clinical director of the Royal London Homeopathic Hospital. "If they had chosen nine or even seven of the very best trials, they would have got a positive result." That was the headline criticism levelled at the Swiss study, but there were many others-"lack of transparency", "did not follow accepted guidelines", "unacceptable lack of detail", "false conclusions based on false premises" were some of the adverse comments from a wide variety of experts (Lancet, 2005; 366: 2081-6).
The critics' general thrust was that the theoretically dispassionate meta-analysis process had been hijacked by a group of medical researchers with a strong bias against homeopathy from the outset. Indeed, the Swiss authors admitted their prejudice in black and white, commenting that homeopathy seemed "implausible", and that any positive clinical findings could be explained by "bias in the conduct and reporting of trials".
Fortunately, in the last few years, there have been a number of less prejudiced tests of homeopathy, and these offer good evidence that it works.
The first truly comprehensive review of homeopathy was done about 16 years ago by a team of experts at Limburg University in Holland. It was a two-year study, funded by the Dutch government, which wanted an independent assessment of homeopathy's effectiveness.
The researchers unearthed a total of 105 clinical trials satisfying the basic criteria of being "controlled", i.e., in which homeopathy was compared to a placebo (a dummy pill). Of these, 81 trials showed a positive result in homeopathy's favour.
Although the researchers criticized the "low quality" of most of the trials, there were "many exceptions". This enabled them to conclude that "homeopathy can be efficacious", and so is probably justified "as a regular treatment for certain conditions" (BMJ, 1991; 302: 316-23).
Eight years later, seven medical researchers from the University of Munich carried out a very similar exercise, concluding that 89 trials of homeopathy (out of 185) were suitable for analysis. They computed that homeopathy gave a "pooled odds ratio" of 2.45, meaning that, across the trials, the odds of clinical benefit with homeopathy were nearly two and a half times those with placebo.
They concluded with a modestly expressed double negative: "the results are not compatible with the hypothesis that the clinical effects of homeopathy are completely due to placebo". In other words, homeopathy works (Lancet, 1997; 350: 834-43).
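For readers curious how a figure like 2.45 is arrived at, a common approach is fixed-effect inverse-variance pooling: each trial's odds ratio is converted to a log odds ratio, weighted by the inverse of its variance, and the weighted average is converted back. The sketch below uses invented 2x2 counts for three hypothetical trials (not the actual data from the Lancet 1997 paper, whose exact method may differ):

```python
import math

# Hypothetical 2x2 counts for three small trials (NOT the actual
# Lancet 1997 data): (treated improved, treated not improved,
# placebo improved, placebo not improved)
trials = [
    (30, 20, 18, 32),
    (25, 25, 15, 35),
    (40, 10, 28, 22),
]

log_ors, weights = [], []
for a, b, c, d in trials:
    odds_ratio = (a * d) / (b * c)      # odds ratio for one trial
    variance = 1/a + 1/b + 1/c + 1/d    # variance of log(odds ratio)
    log_ors.append(math.log(odds_ratio))
    weights.append(1 / variance)        # inverse-variance weight

# Weighted average on the log scale, then back-transform
pooled_log_or = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
pooled_or = math.exp(pooled_log_or)
print(f"Pooled odds ratio: {pooled_or:.2f}")
```

A pooled odds ratio above 1 favours the treatment; below 1 favours placebo. Larger trials, with smaller variances, automatically get more weight in the average.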