Monthly Archives: June 2013

Cheesecake files: Too little pee

This week’s post is really about the coloured stuff & why too little of it is dangerous.  Note, I say coloured stuff because it ain’t just yellow – check out this Herald article if you don’t believe me (or just admire this beautiful photo).

A rainbow of urine from a hospital lab.
Credit: laboratory scientist Heather West.

Story time

A long time ago, when Greeks wore togas, and not because they couldn’t afford shirts, a chap named Galen* noted that if you didn’t pee you were in big trouble.  It took 1800 more years before the nephrologists and critical care physicians got together to try to decide just how much pee was too little.  This was at some exotic location in 2003 where these medics sat around for a few days talking and drinking (I’m guessing at the latter, but I have good reason to believe…) until they came up with the first consensus definition for Kidney Attack (then called Acute Renal Failure, now called Acute Kidney Injury)1.  It was a brilliant start and has revolutionised our understanding of just how prevalent Kidney Attack is.  It was, though, a consensus rather than a strictly evidence-based definition (that is not to say people didn’t have some evidence for their opinions, but the evidence was not based on systematic scientific discovery).  Since then various research has built up the evidence for or against the definitions they came up with (including some of mine, which pointed out a mathematical error2 and the failings of a recommendation about what to do when you don’t have information about the patient before they enter hospital3).

One way they came up with to define Kidney Attack was as too little pee.  Too little pee was defined as a urine flow rate of less than half a millilitre per kilogram of body weight per hour over six hours (< 0.5 ml/kg/h over 6 h).  Our group’s latest contribution to the literature shows that this is too liberal a definition.

The story of our research is that, as part of a PhD program, Dr Azrina Md Ralib (an anaesthetist from Malaysia) conducted an audit of the pee of all patients entering Christchurch’s ICU over a year.  She did an absolutely fantastic job, because this meant collecting information on how much every patient peed for every hour during the first 48 hours, as well as lots of demographic data etc etc etc.  Probably 60–80,000 data points in all!  She then began to analyse the data.  We decided to compare the urine output data against meaningful clinical outcomes – namely death or the need for emergency dialysis.  We discovered that a flow rate of between 0.3 and 0.5 ml/kg/h for six hours made no difference to the rates of death or dialysis compared with a flow rate greater than 0.5.  Less than 0.3, though, was associated with greater mortality (see figure).  For the clinician this means they can relax a little if the urine output is at 0.4 ml/kg/h.  Importantly, they may not need to give patients as much fluid.  Given that in recent times a phenomenon called “fluid overload” has been associated with poor outcomes, this is good news.

The full paper can be read for free here.

Proportion of mortality or dialysis in each group. Error bars represent 95% confidence intervals. From Ralib et al Crit Care 2013.

———————————————————

*Galen 131-201 CE.  He came up with one of the best quotes ever: “All who drink of this remedy recover in a short time, except those whom it does not help, who all die.”

1.     Bellomo R, Ronco C, Kellum JA, Mehta RL, Palevsky PM, Acute Dialysis Quality Initiative workgroup. Acute renal failure – definition, outcome measures, animal models, fluid therapy and information technology needs: the Second International Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) Group. Crit Care 2004;8(4):R204–12.

2.     Pickering JW, Endre ZH. GFR shot by RIFLE: errors in staging acute kidney injury. Lancet 2009;373(9672):1318–9.

3.     Pickering JW, Endre ZH. Back-calculating baseline creatinine with MDRD misclassifies acute kidney injury in the intensive care unit. Clin J Am Soc Nephro 2010;5(7):1165–73.


Significantly p’d

I may be a pee scientist, but today’s post is brought to you by the letter “P”, not the product.  “P” is something all journalists, all lay readers of science articles, teachers, medical practitioners, and all scientists should know about.  Alas, in my experience many don’t, and as a consequence “P” is abused.  Hence this post.  Even more abused is the word “significant”, often associated with P; more about that later.

P is short for probability.  Stop! – don’t stop reading just because statistics was a bit boring at school; understanding it may be the difference between saving lives and losing them.  If nothing so dramatic, it may at least save you from making a fool of yourself.

P is a probability.  It is normally reported as a fraction (eg 0.03) rather than a percentage (3%).  You will be familiar with it from tossing a coin.  You know there is a 50%, or one half, or 0.5 chance of obtaining a head with any one toss.  If you work out all the possible combinations of two tosses then you will see that there are four possibilities, one of which is two heads in a row.  So the prior (to tossing) probability of two heads in a row is 1 out of 4, or P=0.25.  You will see P in press releases from research institutes, blog posts, abstracts, and research articles – this from today:
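The two-toss arithmetic can be checked by brute force.  A minimal Python sketch (the variable names are mine, not from the post):

```python
from itertools import product

# Enumerate every equally likely outcome of two coin tosses.
outcomes = list(product("HT", repeat=2))  # [('H','H'), ('H','T'), ('T','H'), ('T','T')]

# One of the four outcomes is two heads in a row.
p_two_heads = outcomes.count(("H", "H")) / len(outcomes)
print(p_two_heads)  # 0.25
```

The same enumeration trick works for any small number of tosses: `product("HT", repeat=n)` lists all 2ⁿ equally likely sequences.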

“..there was significant improvement in sexual desire among those on testosterone (P=0.05)” [link]

So, P is easy, but interpreting P depends on the context.  This is hugely important.  What I am going to concentrate on is the typical medical study that is reported.  There is also a lesson for the classroom.

One kind of study reporting a P value is a trial where one group of patients is compared with another.  Usually one group has received an intervention (eg a new drug) and the other receives regular treatment or a placebo (eg a sugar pill).  If the study is done properly, a primary outcome should have been decided beforehand.  The primary outcome must measure something – perhaps the number of deaths in a one-year period, or the mean change in concentration of a particular protein in the blood.  The primary outcome is how what is measured differs between the group getting the new intervention and the group not getting it.  Associated with it is a P value, eg:

“CoQ10 treated patients had significantly lower cardiovascular mortality (p=0.02)” [link]

To interpret the P we must first understand what the study was about and, in particular, understand the “null hypothesis.”  The null hypothesis is simply the idea the study was trying to test (the hypothesis) expressed in a particular way.  In this case, the idea is that CoQ10 may reduce the risk of cardiovascular mortality.  Expressed as a null hypothesis, we don’t assume that it could only decrease rates; we allow for the possibility that it may increase them as well (this does happen in some trials!).  So, we express the hypothesis in a neutral fashion.  Here that would be something like: the risk of cardiovascular death is the same in the population of patients who take CoQ10 as in the population which does not take CoQ10.  If we think about it for a minute, if the proportion of patients who died of a cardiovascular event was exactly the same in the two groups then the risk ratio (the CoQ10 group proportion divided by the non-CoQ10 group proportion) would be exactly 1.  The P value, then, answers the question:

If the null hypothesis were true – the risk of cardiovascular death really is the same in both groups – what is the probability (ie P) that the difference between the measured risk ratio and 1 is as large as was observed, simply by chance?

The “by chance” is because when the patients were selected for the trial there is a chance that they don’t fairly represent the true population of every patient in the world (with whatever condition is being studied), either in their basic characteristics or in their reaction to the treatment.  Because not every patient in the population can be studied, a sample must be taken.  We hope that it is “random” and representative, but it is not always.  For teachers, you may like to do the lesson at the bottom of the page to explain this to children.  Back to our example; some numbers may help.

Suppose we have 1000 patients receiving Drug X and 2000 receiving a placebo.  If, say, 100 patients in the Drug X group die in 1 year, then we say the risk of dying in 1 year is 100/1000, or 0.1 (10%).  If in the placebo group 500 patients die in 1 year, then the risk is 500/2000, or 0.25 (25%).  The risk ratio is 0.1/0.25 = 0.4.  The difference between this and 1 is 0.6.  What is the probability that we arrived at 0.6 simply by chance?  I did the calculation and got P<0.0001.  This means there is less than a 1 in 10,000 chance that this difference arose by chance.  Another way of thinking of this is that if we did the study 10,000 times, and the null hypothesis were true, we’d expect to see a result like the one we saw about once.

What is crucial to realise is that the P value depends on the number of subjects in each group.  If instead of 1000 and 2000 patients we had 10 and 20, and instead of 100 and 500 deaths we had 1 and 5, then the risks and the risk ratio would be the same, but the P value is 0.63, which is very high – a 63% chance that we would observe a difference at least as large as the one we saw when there is really no difference at all.  If studies are reported without P values then at best take them with a grain of salt.  Better, ignore them totally.
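For the curious, one standard way to get a P value for a 2×2 table like this is Fisher’s exact test, which happens to reproduce both numbers quoted above.  A sketch using only the Python standard library (the function name and layout are mine, and the post does not say which test was actually used, so treat this as an illustration rather than the author’s calculation):

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact P for the 2x2 table [[a, b], [c, d]]:
    rows are treatment groups, columns are died/survived.  Sums the
    hypergeometric probabilities of every table with the same margins
    that is no more likely than the observed one."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def p_table(k):  # probability of k deaths in group 1, margins fixed
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    probs = [p_table(k) for k in range(lo, hi + 1)]
    p_obs = p_table(a)
    return sum(p for p in probs if p <= p_obs * (1 + 1e-9))

# The small trial: 1/10 deaths on Drug X vs 5/20 on placebo.
print(round(fisher_exact_p(1, 9, 5, 15), 2))          # 0.63
# The large trial: 100/1000 deaths vs 500/2000.
print(fisher_exact_p(100, 900, 500, 1500) < 0.0001)   # True
```

Note how the same 0.4 risk ratio gives P≈0.63 with 30 patients but P<0.0001 with 3000 – the sample size, not the effect size, is what changed.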

It is also important to realise that within any one study, if they measure lots of things and compare them between two groups, then simply because of random sampling (by chance) some of the P values will be low.  This leads me to my next point…

The myth of significance

You will often see the word “significant” used with respect to studies, for example:

“Researchers found there was a significant increase in brain activity while talking on a hands-free device compared with the control condition.” [Link]

These are wrong interpretations:  “The increase in brain activity while talking on a hands-free device is important,” or “The increase in brain activity while talking on a hands-free device is meaningful.”

“Significant” does not equal “meaningful” in this context.  All it means is that the P value under the null hypothesis is less than 0.05.  If I had my way I’d ban the word significant.  It is simply a lazy habit of researchers to use this shorthand for P<0.05.  It has come about simply because someone somewhere started to do it (and called it “significance testing”) and the sheep have followed.  As I say to my students, “Simply state the P value; that has meaning.”*


_____________________________________________________________

For the teachers

Materials needed:

  • Coins
  • Paper
  • The ability to count and divide

Ask the children what the chances of getting a “Heads” are.  Have a discussion and try and get them to think that there are two possible outcomes each equally probable.

Get each child to toss their coin 4 times and get them to write down whether they got a head or tail each time.

Collate the number of heads in a table like this:

#heads     #children getting this number of heads
0          ?
1          ?
2          ?
3          ?
4          ?

If your classroom size is 24 or larger then you may well have someone with 4 heads or 0 (4 tails) – each child has a 2 in 16 (1 in 8) chance of tossing all heads or all tails, so a class of 24 can expect about three such children.

Ask the children whether they think this is amazing or accidental.

Then, get the children to continue tossing their coins until they get either 4 heads or 4 tails in a row.  Perhaps make it a competition to see how fast they can get there.  They need to continue to write down each head and tail.

You may then get them to add up all their heads and all their tails, and work out the proportions (get them to divide the number of heads by the total number of tosses).  If you like, go one step further and collate all the data.  The proportion of heads should be approaching 0.5.

Discuss the idea that getting 4 heads or 4 tails in a row was simply due to chance (randomness).

For more advanced classes, you may talk about statistics in medicine and in the media.  You may want to use some specific examples about one off trials that appeared to show a difference, but when repeated later it was found to be accidental.
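If you’d rather not hand out coins, the classroom exercise can be simulated.  A quick Python sketch (the class size and number of simulated classrooms are my choices, not from the lesson):

```python
import random

random.seed(42)      # reproducible "classrooms"
CLASS_SIZE = 24
TRIALS = 10_000

def child_gets_run_of_four():
    """One child tosses a coin 4 times; True if all four tosses match."""
    tosses = [random.choice("HT") for _ in range(4)]
    return len(set(tosses)) == 1

# In what fraction of simulated classrooms does at least one
# child toss 4 heads or 4 tails?
hits = sum(
    any(child_gets_run_of_four() for _ in range(CLASS_SIZE))
    for _ in range(TRIALS)
)
print(hits / TRIALS)  # close to 1 - (7/8)**24, i.e. about 0.96
```

So in roughly 96% of classrooms of 24, at least one child gets a run of four purely by chance – the point of the lesson in one number.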

_____________________________________________________________

*For the pedantic: in a controlled trial the number of participants is chosen by pre-specifying a (hopefully) meaningful difference in the outcome between the case and control arms, along with probabilities of Type I (alpha) and Type II (beta) errors.  The alpha is often 0.05.  In this specific situation, if P<0.05, it may be reasonable to talk about a significant difference, because the alpha was pre-specified and used to calculate the number of participants in the study.
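To make the footnote concrete, here is a sketch of the usual normal-approximation sample-size calculation for comparing two proportions.  The formula, the default alpha/power, and the example risks (echoing the 10% vs 25% mortality example earlier) are my choices for illustration; dedicated trial-design software gives slightly different numbers:

```python
from math import ceil

def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate patients needed per arm to detect a difference between
    event proportions p1 and p2.  Defaults: two-sided alpha = 0.05
    (z = 1.96) and 80% power, ie beta = 0.2 (z = 0.8416)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# To detect a drop in 1-year mortality from 25% to 10%:
print(n_per_arm(0.10, 0.25))  # 97
```

Note how the pre-specified alpha and beta feed directly into the trial size – which is why, in this one situation, “significant at alpha” carries real meaning.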

Science New Zealand


I’ve started a new Flipboard magazine called “Science New Zealand.”  Does anyone want to be a co-curator so we can collect news & commentary about NZ science & scientists in one place?

See

http://flip.it/6v48t

A more complete equation

An equation for decision making on public health interventions

Lots of chat from fellow science bloggers (see here, here, here and here) about fluoridation following the recent Hamilton City Council decision. Naturally most of the posts focus on the science and the logic (or otherwise) of the arguments around fluoridation.  I have no knowledge about fluoridation per se and have nothing to add to the science.  What I did think was necessary was to posit an equation which gives the debate, and many others like it (folic acid, vaccinations etc etc etc), a wider context.

My simplistic equation points out that any decision on a public health intervention involves far more than scientists and far more than science.  Obviously, financial costs – ranging from what it may cost to provide an intervention, to the impact of lower ongoing health-associated costs and greater productivity among those benefiting from the intervention – are an essential part of the decision making.  I would love a health economist to weigh in and give us a better idea of what an equation may look like. Questions of rights and responsibilities are harder to quantify, but no less important than the scientific and economic ones.  Indeed, I think they are the most important, as how we deal with them defines who we are as a society.  In the case of folic acid, for example, this means balancing the rights of the unborn child against the rights of the mother and of the rest of society.  While not a complete parallel to the abortion debate, it is familiar territory.  At its heart is how society cares for the most vulnerable, whilst also acknowledging the rights of others to make choices for themselves.

The final part of the equation involves the decision makers; spare some sympathy for the politicians here as they grapple with the complexities of science, economics and ethics.  This year is local body election year, and next year we have a general election.  My challenge is this: if you care enough about these issues to read a blog post, spend a little more time getting to know the candidates and try to figure out whether they are up to making complex decisions on your behalf.  If so, give them your vote.

 

Annual Academic Spam Awards

More annoying than those who boast of the number of unread emails in their inbox are the spammers who contribute to that number.  I’m fortunate to have a university IT department that effectively filters mountains of spam.  Nevertheless, some make it through to my inbox.  In the forlorn hope that I will shame these spammers into disappearing in a puff of smoke I hereby announce my Annual Academic Spam Award winners.

The Robert the Bruce award for persistence.

The Omics Group.

Like Coalgate… they really get in… despite 135 automatic deletes they still sneak through, inviting me to write for journals or participate in conferences on topics I don’t know how to spell, let alone am able to pontificate about.

The Serpent award for the most tempting conference title of the year

BABE-2013… Omics group!

Dear Dr. John W Pickering,

It is my great pleasure to invite you on behalf of organizing committee for the 4th World Congress on Bioavailability and Bioequivalence Pharmaceutical R & D Summit (BABE-2013), to ….

The CIA award for knowing something about me other than my name

Nephro-2012… Omics group!

Dear Dr. John W Pickering,

We are aware of your busy schedule, still would like to contact you again …

Stop spying on me!

The Stating the bleeding obvious award

Team Catalyst, New Delhi

Dear Professional,

Diseases are the major cause of death,…

The Nutter of the year and Supreme winner of the 2013 AASAs

Alex of the Ukraine

Hello.

I found your e-mail address on medical site.
My name is Alex, I am from Ukraine, I am 32 years old man, I do not drink alcohol and do not smoke cigarettes, my blood is O+ and I have a good health. If you need liver transplant I am ready to give part of my liver, but I want to receive a big compensation for that…

If you do not need liver transplant, but you know somebody who need it, please send my message to this person or keep it just in case.

[ email address removed ]

Alex

P.S. This is not a joke and I am not a cheater or scammer.

All that’s left is to add a reference so that you don’t think I am the only one:

Academic Spam: Comic ID 1590
“Piled Higher and Deeper” by Jorge Cham
www.phdcomics.com