Tag Archives: Randomised controlled trial

Fat Mate: Fat chance lost

I expect New Zealand tax-dollar science spending to be better than what I read in the media this morning. Headlined in the Press was “Blenheim ‘fat mate’ loses 13.5kg in 8 weeks.”  The story was of someone on a trial of a locally produced diet supplement having lost weight.  So far nothing to pique my interest, but then I came across the statements “Satisfax was the result of $12 million research over four years with support from Crown Research Institute Plant & Food Research” (Satisfax is the trademark) and “Huge demand for the trial saw it expanded from 100 “fat mates” to 200.”  The second of the links goes to an October article in the Marlborough Express which includes the statement that “The trial had been approved by the Health Ministry’s health and disability ethics committee and was partially funded by Callaghan Innovation.”

So, your and my taxes are being spent by Callaghan Innovation on a trial of a diet pill whose development received other tax dollars through Plant & Food. Worth a little more investigation.  The trial went through an ethics committee – big tick.  It was also (a little late) registered on the Australian New Zealand Clinical Trials Registry (here) – tick.

BUT, it fails miserably as an efficacy trial.

There is no control group, ie the pill is not compared against a placebo. I can think of no practical reason why there was not a control group taking a placebo (randomised and blinded of course).  Instead, the trial just looks at the average change in weight over eight weeks and tries to establish if this is non-zero.  Given that these people are doing something hoping to lose weight, there may well be an average loss of weight that has nothing to do with the pill. The Press article suggests a biostatistician is going to somehow “account” for the placebo effect (something not mentioned on the trial registration).  I pity the biostatistician, as this involves trying to convince someone that a study run elsewhere with a placebo group, at a different time, under different circumstances could actually serve as a control for this study.

Incredibly, that is not the only major issue.  I read that part way through the trial the publicity was such that there was demand from people to enter the trial, and so the number of participants was doubled from 100 to 200.  Ahhhhhh….. this is a classic introduction of bias and should never have been allowed.  Those extra 100 people are not the same as the first group… they have elevated expectations that may well bias the results.  Furthermore, it is always dangerous to talk about the trial efficacy part way through, as this may influence the behaviour of those already in the trial.   Grrrrr….

In short – a chance lost and a waste of Plant & Food and Callaghan Innovation funding.  There is hope though – a proper randomised controlled trial could still be conducted.  But I won’t be holding my breath.

ps.  The Marlborough Express and Press should be ashamed of such blatant product placement – diet pills on January 2nd are so cliché.  I wonder if it was the reporter or the company who initiated this piece?

What the HRC should have done

The system is broke.  It is no better than a lottery.  The Health Research Council tacitly acknowledged this last year when they introduced a lottery to their grant funding round.  The lottery was for three grants of $150,000 each.  These “Explorer Grants” are available again this year.  The process went thus: HRC announced the grant and requested proposals; proposals were required to meet the simple requirements of being transformative, innovative, exploratory or unconventional, and having potential for major impact; proposals were examined by committees of senior scientists; and all that met the criteria were put in a hat and three winners were drawn out.

116 applications were received, 3 were awarded (2.6%!!!). There were several committees of 4-5 senior scientists. Each committee assessed up to 30 applications.  I’m told it was a couple of days work for each scientist. I’m also told that, not surprisingly given we’ve a damned good science workforce, most proposals met the criteria. WHAT A COLOSSAL WASTE OF TIME AND RESOURCES.

Here is what should have happened:  All proposals should have gone immediately into the hat.  Three should have been drawn out.  Each of these three should have been assessed by a couple of scientists to make sure they met the criteria.  If not, another should have been drawn and assessed.  This would take about a tenth of the time and would enable results to be announced months earlier.
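The draw-then-assess process above can be sketched in a few lines. The `meets_criteria` check here stands in for the couple-of-scientists review and is purely hypothetical:

```python
import random

def draw_then_assess(proposals, n_awards, meets_criteria):
    """Put everything in the hat, then assess only what is drawn,
    drawing again whenever a drawn proposal fails the criteria."""
    pool = list(proposals)
    random.shuffle(pool)               # the hat
    winners = []
    for proposal in pool:              # draw one at a time
        if meets_criteria(proposal):   # the only assessment work done
            winners.append(proposal)
            if len(winners) == n_awards:
                break
    return winners

# 116 applications, three Explorer Grants; here every proposal is
# assumed to meet the criteria, as reviewers reported was mostly true.
applications = [f"application-{i}" for i in range(116)]
selected = draw_then_assess(applications, 3, lambda p: True)
print(selected)
```

The point of the sketch: assessment effort is spent only on the handful of proposals actually drawn, not on all 116.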

Given that the HRC Project grants have only about a 7% success rate, and that the experience of reviewers is that the vast majority of applications are worthy of funding, I think a similar process of randomly drawing and then reviewing would be much more efficient and no less fair.  Indeed, here is the basis of a randomised controlled trial which I may well put as a project proposal to the HRC.

Null Hypothesis:  Projects assessed after random selection perform no differently to those assessed using the current methodology.

Method:  Randomly divide all incoming project applications into two groups.  Group 1 (current assessment methodology): assess as per normal, aiming to assign half the allocated budget.  Group 2 (random assessment methodology): randomly draw 7% of the Group 2 applicants; assess; draw more to cover any which fail to meet the fundability (only) criteria; fund all which meet these criteria, in the order they were drawn, until half the allocated budget is used.
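The Group 2 arm can be sketched as a short simulation. The budget and per-grant cost figures below are purely hypothetical, and the fundability check is a stand-in:

```python
import random

def random_assessment_arm(applications, budget, grant_cost, meets_criteria):
    """Group 2 of the proposed trial: draw applications at random, assess
    fundability only, and fund in draw order until the arm's budget runs out."""
    pool = list(applications)
    random.shuffle(pool)
    funded, spent = [], 0
    for app in pool:
        if spent + grant_cost > budget:
            break
        if meets_criteria(app):        # fundability (only) check
            funded.append(app)
            spent += grant_cost
    return funded

# Hypothetical figures: a $1m half-budget and $150k per grant.
apps = [f"app-{i}" for i in range(200)]
funded = random_assessment_arm(apps, budget=1_000_000, grant_cost=150_000,
                               meets_criteria=lambda a: True)
print(len(funded))  # 6 grants fit the budget at $150k each
```

Because funding stops when the budget is spent, only the drawn-and-funded applications ever need assessment.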

Outcome measures:  I need to do a power calculation and think about the most appropriate measure, but this could be either a blinded assessment of final reports or a metric like difference in numbers of publications.

Let’s hope that lessons are learnt when it comes to the processes used to allocate National Science Challenges funds.

Why you should care about Oregon

I love the way this med student writes. This is a fascinating account of an “accidental” randomised controlled trial of government spending on health (Medicaid in the US).

Faith justified? – a vital tale

Expensive pee or elixir of life?  The two extreme views of multivitamins.  I’ve been taking multivitamins for a number of years now.  I’ve taken them on faith backed by a little evidence.  This week, I think for the first time, a randomised controlled trial has provided high quality evidence that my faith is justified.  More on that in a minute.

Most trials of vitamin supplements to date have tested vitamins in isolation.  The trials were justified by the observation that people with certain diseases lacked specific vitamins, and/or by the scientists’ understanding of biochemical pathways that require the vitamin in question to work well.  This is well and good.  From what I understand, most of these trials have failed to show a clinical difference (ie in health outcomes) (see, eg, my report on the Vitamin D trial in Christchurch).

Vitamins (and trace minerals), of course, do not exist in us in isolation.  They work together with each other and along with all the other chemicals in us with names that only a biochemist could love.  The theory, which I’ve accepted largely on faith, is that vitamin supplementation works best when it is multiple vitamins together.  Studies of multivitamin supplementation have largely been short term or retrospective observational.  That is, scientists have surveyed people on vitamin use and drawn conclusions based on that.  One such study, the Iowa Women’s Health Study (1), caused me to pause and reassess last year when it seemed to indicate that supplementation including copper increased mortality in post-menopausal women. Being neither a woman nor post-menopausal I did not panic.

The prospective randomised controlled trial (RCT) is regarded as a much higher level of evidence than retrospective observational studies.  Published this week in the Journal of the American Medical Association (JAMA) is an RCT of multivitamin supplementation in men (2).  Briefly, 14641 men aged 50+ were enrolled in a trial in 1997 and followed until 2011. Participants were randomly chosen to receive either a multivitamin or a placebo.  Neither the participants nor the people running the study knew which people received placebo and which received multivitamin.  This is known as “double-blind.”  Only a statistician knew, and he or she did not reveal anything until all the data were in.  The primary outcome was to compare the rates of cancer and cardiovascular disease in the two groups.  Secondary outcomes (ie ones that the statistics cannot be so precise about because of the numbers) were the rates of some specific cancers (eg prostate cancer).  Amongst the 14641 men there was a subgroup of about 1300 with a pre-existing history of cancer.

The results:

Men taking multivitamins had a modest reduction in total cancer incidence (HR, 0.92; 95% CI, 0.86–0.998; P = .04)

My interpretation:  Those taking multivitamins were about 8% less likely to get cancer.  The statistics show that they are 95% confident that, amongst all men with the same characteristics as the men in their sample, the true reduction in probability of getting cancer over the 11 year follow-up period is between 0.2% and 14%.
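Those percentages are simply 1 − HR applied to the point estimate and each confidence limit, which is easy to verify:

```python
# Figures from the JAMA abstract: HR 0.92, 95% CI 0.86–0.998
hr, ci_low, ci_high = 0.92, 0.86, 0.998

def pct_reduction(x):
    """A hazard ratio x below 1 corresponds to a (1 - x) risk reduction."""
    return (1 - x) * 100

print(f"point estimate: {pct_reduction(hr):.0f}% reduction")
print(f"95% CI: {pct_reduction(ci_high):.1f}% to {pct_reduction(ci_low):.0f}% reduction")
```

Note the upper confidence limit of 0.998 sits only just below 1, which is why the reduction could plausibly be as small as 0.2%.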

A little frighteningly, whilst major cardiovascular events were mentioned as part of the primary outcomes, they were not reported on!

The strengths of the study are its size, that it is an RCT and double-blind, that it has good length, that all participants who received the multivitamin received the same one and that the multivitamin manufacturer had no role in designing or running the study, or analysing the data.

The weaknesses are that it is all men, all over the age of 50, and all physicians.

So, is my faith justified?  If by that you think I mean “proven”, then think again. “Proof” and “proven” are words that should never be used in the company of good scientists.  Rather, I think there is some more good quality evidence to support the taking of multivitamins – so I shall continue to do so.  I must, though, remain open to evidence of the opposite variety and be aware that, like all studies, there is a probability that the conclusions will not be backed up by future studies.

Of course not all multivitamins are created equal (beware of fillers); they have different compositions and some are less likely to be absorbed than others, so do some homework before you rush out and buy some.

(1)  Mursu J, Robien K, Harnack LJ, Park K, Jacobs DR. Dietary supplements and mortality rate in older women: the Iowa Women’s Health Study. Arch Intern Med 2011;171(18):1625–33.

(2) Gaziano JM, et al. Multivitamins in the Prevention of Cancer in Men: The Physicians’ Health Study II Randomized Controlled Trial. JAMA 2012;308(18):1871–80.

[Conflict of interest:  My wife’s business includes the selling of multivitamin supplements]

Vitamin D: “Silver bullet or fool’s gold?”

Vitamin D has had big raps lately.  We know that low levels of it correlate with higher levels of some diseases, but does taking a supplement help?  An article in the Herald this morning by Martin Johnson nicely outlines a study  being undertaken by Professor Robert Scragg of the University of Auckland.  His is the quote in the title.

Why is there a need for an expensive trial when lots of observational studies show that low levels of Vit D mean you are more likely to get cardiovascular (and other) diseases, and high levels mean you are less likely?  Isn’t it obvious that taking supplements will improve health outcomes?  Sadly, no it isn’t.  Correlation does not mean causation (or, for you latinistas out there, “cum hoc ergo propter hoc” – a close cousin of the “post hoc ergo propter hoc” I learnt from a re-run of West Wing this week).  What this means is that there is more than one reason for the correlation, ie:

  1.  Illnesses occur because Vit D is an essential component in the biochemical pathways that provide a defense against these illnesses (causation), or
  2.  Low Vit D is a consequence of something else that has gone wrong that also causes the diseases (ie Vit D is a “flag” or “marker” for something else).

If 1 is true, then raising Vit D levels may help.  If 2 is true, then raising levels probably won’t help.  For the moment assume 1 is true; then the next question is “does supplementation help?”  Again, most would think “Of course.”  However, it is possible that by bypassing the mechanism by which the body makes its own Vit D (ie beginning with exposure to the sun), the body’s response to the increased Vit D is different.  These, and others, are reasons why a Randomised Controlled Trial (RCT), in which some participants get Vit D and some get placebo (in this case sunflower lecithin), is conducted.  There is some information about the trial in the Herald article; more can be found on the Aust NZ Clinical Trials Registry here.  Briefly, participants (50 to 84 years of age) will receive 1 capsule a month for 4 years.  The incidence rate of fatal and non-fatal cardiovascular disease is the primary outcome. Secondary outcomes include the incidence of respiratory disease and fractures. They need to recruit 5100 people (so get involved!).

Why so many people?  This is because they want to avoid making two mistakes.  They want to know with high certainty that if they see a difference in the rates of cardiovascular disease between the Vit D and placebo groups, it is not a difference that occurred randomly (ie seeing a difference when there really is no difference).  It is most common to accept a 5% chance of seeing a difference by chance (tossing 4 heads in a row is about a 6% chance).  The second mistake is if the trial were to show no difference between the groups, but for this to be a false conclusion (ie not seeing a difference when there really is a difference).  It is common to accept about a 10% chance of this happening.  Notice, I have talked about “difference”, not Vit D being “better” than placebo.  This is very important, because it is possible that Vit D is worse, and scientists must take that possibility into account.  That is why scientists also start with what we call the “null hypothesis” – the presumption, in this case, that there is “no difference” in the rates of cardiovascular disease between those taking Vit D and those taking placebo.
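To see how those two error rates drive the recruitment target, here is the standard normal-approximation sample-size formula for comparing two proportions. The event rates plugged in are hypothetical, not from the trial protocol:

```python
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Approximate participants needed per arm to compare two event
    proportions (two-sided test, normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96: the 5% chance of a spurious difference
    z_beta = z.inv_cdf(power)           # ~1.28: 90% power, ie a 10% chance of missing a real one
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Hypothetical rates: 10% of the placebo group vs 7.5% of the Vit D
# group having a cardiovascular event over the follow-up period.
n = n_per_group(0.10, 0.075)
print(round(n))  # roughly 2680 per arm, so over 5000 in total
```

Smaller plausible differences between the groups shrink the denominator and push the required numbers up fast, which is why trials chasing modest effects need thousands of participants.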

I liked the quote of Prof Scragg in the Herald:

“GPs are very supportive of it and I know they are prescribing it extensively to patients. Hospital specialists are sceptical. Me, I’m in the middle. My heart says I want it to work. My head says I have to keep an open mind.”

I, too, often find myself in the “middle” – hoping with my heart that something works for the good of all, but working with my head so that we don’t end up peddling false hope or worse.