Tag Archives: medicine

beyond reasonable doubt: a significant improvement

For the second time in a week I have removed the word “significant” from a draft manuscript written by a colleague of mine in clinical medicine. In Significantly p’d I wrote about the myth of significance – that is, about the ubiquitous use of the term “significant” in the medical literature to mean a specific probability of incorrectly rejecting the hypothesis that two things (eg two treatments) are the same (you may need to read that twice).  What I pointed out was that “significant” does not mean “meaningful.”  Here I want to propose an alternative.  But first, I need to discuss two major problems with the term.

Where common is not specific

In my experience, even in the scientific literature “significant” is commonly read as meaning “important” – by many medically trained people, and sometimes by the authors of the articles themselves.

The tyranny of p<0.05

When the maths wiz Ronald Fisher talked about significance (in an agricultural journal, not a medical one!) he used one in 20 (p<0.05) as an acceptable error rate in agricultural field trials, so that trials did not have to be repeated many times.  That p<0.05 has taken on almost magical proportions (’scuse the pun) in the medical literature is scary and shameful.  I don’t want to delve into all that now.  If you want to, a starting point may be the Nature article here.

My proposal

I propose that in all scientific literature authors replace the term “significant” with the phrase “beyond reasonable doubt”, and that they only be allowed to publish if, in the methods section, they define what p value they have chosen to represent “beyond reasonable doubt” and defend why they chose that value and not another.  “Beyond reasonable doubt” is a term used in the New Zealand judicial system, where those charged with a crime are presumed innocent (the null hypothesis) until proven otherwise.  Perhaps those of us in science could learn something from our lawyer friends.


HRC success in Christchurch

The Health Research Council announced Programme and Project grant recipients.  Here’s the list from the Christchurch campus of the University of Otago in which I get a brief mention :).  If others have abstracts of successful grants they’d like posted on this blog, then please let me know.

*****Update: It’s come to my attention that this announcement sent to Uni Otago staff left off the investigator lists any investigators who are not current University staff.  I’ve added a few I know about below, but there may be others left out of the list, sorry.  ****

Monday, 9 June 2014.

University of Otago, Christchurch researchers have been awarded more than $8 million of Health Research Council 2014 funding. The results were announced by Minister Steven Joyce at 11.30am today.

The funded projects are:

  • HRC Programme Grant to Professor Mark Richards: Heart Failure: markers and management ($4,980,858).
  • HRC Project Grant to Professor David Murdoch: Legionnaires’ disease in New Zealand: improving diagnostics and treatment ($999,467).
  • HRC Project Grant to Dr Ben Hudson: A randomised controlled trial of nortriptyline in knee osteoarthritis ($1,190,921).
  • HRC Project Grant to Professor Tim Anderson: Genetics, brain imaging, and cognitive decline in Parkinson’s disease ($1,178,804).
  • Emerging Researcher First Grant to Dr Tracy Melzer: Imaging markers of imminent cognitive decline in Parkinson’s disease ($149,943).

A summary of each project follows:

HRC Programme Grant to Professor Mark Richards ($4,980,858)

Heart Failure: markers and management

Heart failure (HF) will affect 20% of people now aged 40 years and confers high rates of early readmission and death.  Professor Richards and his team will implement an integrated programme addressing unmet needs in HF including: (1) The IMPERATIVE-HF controlled trial of intensified immediate post-discharge management using special blood tests to individually grade risk and guide intervention with rapid adjustments to treatment to improve outcomes. (2) Testing of candidate kidney damage markers for early warning of this frequent and dangerous complication of HF. (3) Establishing correct sampling times for novel markers for best prediction of early and long term outcomes in HF. (4) Testing our newly discovered markers for early warning of pneumonia complicating HF. (5) Clarification of diagnoses and testing management plans for patients in the Emergency Department with breathlessness or chest pain who do not have clear-cut HF or heart attacks but who nevertheless have elevated blood biomarkers and a poor outlook.

Other investigators are: Prof Vicky Cameron, Prof Richard Troughton, A/Prof Chris Pemberton, A/Prof Miriam Rademaker, A/Prof Chris Frampton, Prof Chris Charles, Dr Leigh Ellmers (Medicine), A/Prof John Pickering, Dr Anna Pilbrow (all University of Otago). Professor Zoltan Endre (University of New South Wales), Dr Martin Than (ED, Christchurch District Health Board), Prof Robert Doughty (University of Auckland), Dr James Pemberton (Cardiology, Auckland District Health Board)

HRC Project Grant to Professor David Murdoch ($999,467)

Legionnaires’ disease in New Zealand: improving diagnostics and treatment

Legionnaires’ disease is a severe type of pneumonia that is under-diagnosed in New Zealand. Special tests are required to make a diagnosis of legionnaires’ disease, but there are no clear guidelines about which patients to test. An enhanced testing system for legionnaires’ disease was developed in Canterbury and has been used there since 2010. The system involves targeted use of the current best test for legionnaires’ disease: PCR (polymerase chain reaction), which detects bacterial DNA. This approach has uncovered many cases of legionnaires’ disease that would otherwise have gone undetected. This study will roll out this same testing strategy across New Zealand for one year in order to measure the national burden of legionnaires’ disease, to improve patient treatment, to identify cost-effective ways to test for legionnaires’ disease in the future, and to create better guidelines for the treatment of pneumonia.

Other investigators: A/Prof Patricia Priest, Prof Stephen Chambers, Dr Ian Sheerin.

HRC Project Grant to Dr Ben Hudson ($1,190,921)

A randomised controlled trial of nortriptyline in knee osteoarthritis

Osteoarthritis (OA) is a very common and painful condition.  Medicines currently available for treating OA pain are not ideal: they are either inadequately effective or cause unpleasant or dangerous side effects. Recent research has shown how the brain processes pain in OA and this has opened up the possibility of using different types of medicines for OA pain.  Nortriptyline (an antidepressant) has been used to treat persistent pain in other conditions, and other antidepressants may reduce pain in knee OA.  It is not known whether nortriptyline is useful in this condition.  We plan to test this effect by randomly allocating participants to treatment with nortriptyline or placebo and to measure changes in their pain before and after a period on the medication.  We hope that this will tell us whether nortriptyline will be helpful.  If it is, then we believe that many people may benefit from taking this medicine.

Other investigators: Prof Les Toop, Prof Lisa Stamp, Dr Jonathan Williman, Prof Gary Hooper, A/Prof Dee Mangin, Ms Bronwyn Thompson

HRC Project Grant to Professor Tim Anderson ($1,178,804)

Genetics, brain imaging, and cognitive decline in Parkinson’s disease

Many people with Parkinson’s are at risk of dementia, but scientists and clinicians have been unable to predict when it will occur. Professor Tim Anderson and his team will do advanced brain scans (MRI and PET), gene testing, and clinical evaluations in 85 Parkinson’s patients who have mild cognitive impairment and are therefore known to be at higher risk, and then determine whether they progress to dementia over the subsequent three years. By identifying characteristics present in the scans and genetic tests of those who develop dementia, compared to those who do not, Professor Anderson and his team can advance understanding of this important issue and establish a useful and reliable tool for researchers and clinicians. It is critical to do this so that, when preventative treatments to protect against dementia become available, they can be targeted at the most appropriate patients, and also to select the right ‘at risk’ Parkinson’s patients for trials of new treatments.

Other investigators are: Prof Martin Kennedy, Dr Tracy Melzer, Dr John Pearson.  Prof. John Dalrymple-Alford (University of Canterbury), Dr Ross Keenan (CDHB, Christchurch Radiology Group), Prof. David Miller (University College London)

HRC Emerging Researcher First Grant to Dr Tracy Melzer ($149,943)

Imaging markers of imminent cognitive decline in Parkinson’s disease.

Most Parkinson’s disease (PD) patients eventually develop dementia, which is the most burdensome aspect of this progressively worsening condition.  Mild cognitive impairments often indicate imminent dementia, but the two to 20 year time course poses a major problem for medical interventions, as brain changes associated with dementia in PD are still poorly understood.  Recent evidence suggests that neurodegenerative diseases such as PD progress along discrete brain networks.  One important network, known as the ‘default mode network’, appears particularly susceptible to neurodegeneration. Dr Melzer and his team will examine this network to determine if its disruption can specify which PD patients are vulnerable to progression to dementia within the next two years. A sophisticated but readily available brain imaging technique, called resting state functional imaging, will be used. These measures will assist in the selection of the most suitable patients for new treatments that may delay or prevent subsequent dementia in this vulnerable population.

Other investigators are: Prof Tim Anderson, Prof. John Dalrymple-Alford (University of Canterbury), Dr Ross Keenan (CDHB, Christchurch Radiology Group), Dr Daniel Myell (NZ Brain Research Institute)

 

A new entity is born: CDaR

Have you ever been told the blood test is positive and the disease in question is shocking – cancer, an STD (but you don’t sleep around!), MS?  Have you ever wondered why some drugs get withdrawn years, and millions of prescriptions, after they were first approved?  Surely you’ve read a headline that coffee is good for you and chocolate bad – or was that chocolate good and coffee bad, or were they both good, or both bad? Probably you’ve read all those headlines.  What does it all mean?  Am I sick or not (I heard some tests falsely give positive results)? Does it matter if I’ve been taking that drug or drinking three cups a day?  The answer to all those questions depends on one thing – clinical data research.  That is, it depends on how we collect the numbers, and what story those numbers are telling us.  Today, I am thrilled to announce that I have my department’s (Department of Medicine, University of Otago Christchurch) endorsement to establish a new group, Clinical Data Research (CDaR), which will focus on the stories numbers in medicine tell us.

Source: Pickering et al http://ccforum.com/content/17/1/R7


My recent expertise, as readers of this blog may have picked up, is in Kidney Attack (or Acute Kidney Injury). My contribution, as someone with a physics background, has been in data analysis and mathematical modelling.  It has been a privilege to have been involved with many discoveries, helping bring to light the stories of the biomarkers of that disease and the results of a unique randomised controlled trial.  Kidney Attack is notoriously difficult to detect and, partly because of that, has no effective treatment.  I’m currently working on the story of the association of Kidney Attack with death following surgery with cardiopulmonary bypass.

I am now looking to take those skills and work with researchers in other medical specialties who generate data and are looking to tell its story (although I will still work on the kidney data!).  I’m particularly keen to engage with more students and pass on some of the data analysis skills I have acquired.  Moves towards open data, as well as the collection of data in large databases, are providing more opportunities to assess the efficacy of health interventions and detect disease risk factors. The prospect of personalised medicine is one of both hope and hype. To sort fact from fantasy in all these areas will require the development of new analytical techniques and careful assessment of evidence. This is what I wish to devote the rest of my career to, and to inspire others along the way.

John Ioannidis, a highly respected epidemiologist, once wrote an essay entitled “Why Most Published Research Findings Are False.”  It is a scary thought that many interventions and diagnostic techniques in medicine may be based on biased studies (usually inadvertently biased!). More data will help reduce the bias, if it is treated nicely.  I promise to do my best to treat my data nicely – after all, it is your health and mine that is at stake.

I posted a few weeks ago my ten commandments of a data culture.  This is the ethos of CDaR.  Below is the lay summary of the new entity.

Group Name:            Clinical Data Research (CDaR)

Department:             Department of Medicine

Institution:                University of Otago Christchurch

Aim: To provide transparent evidence, with the lowest possible risk of bias, of the utility of biomarkers and efficacy of treatments in health or disease.

Lay summary of our aim: We aim to save lives and reduce the burden of disease by applying new ways to collect and analyse clinical data to better diagnose diseases, to predict the course and outcomes of diseases, and to assess how well treatments work.  We do this because we all want the best possible health outcomes for our communities, our families, and ourselves, with the least possible harm done along the way.  We are excited by the new ways scientists, including those at the University of Otago Christchurch, have come up with to measure disease, disease risk, and treatment outcomes. We are also living in an age of unprecedented data generation. To discover both benefits and harm in all this data and to make those discoveries available to all those making clinical decisions requires people dedicated to analysing this data in a transparent and open fashion that exposes both the good and the bad. That is who we want to be and who we want our students to become.

Definition:  A biomarker is any measurable quantity related to disease risk or diagnosis, or disease or health outcomes.

Significantly p’d

I may be a pee scientist, but today is brought to you by the letter “P” not the product.  “P” is something all journalists, all lay readers of science articles, teachers, medical practitioners, and all scientists should know about.  Alas, in my experience many don’t and as a consequence “P” is abused. Hence this post.  Even more abused is the word “significant” often associated with P; more about that later.

P is short for probability.  Stop! – don’t stop reading just because statistics was a bit boring at school; understanding P may be the difference between saving lives and losing them.  If nothing so dramatic, it may save you from making a fool of yourself.

P is a probability.  It is normally reported as a fraction (eg 0.03) rather than a percentage (3%).  You will be familiar with it from tossing a coin.  You know there is a 50%, or one half, or 0.5 chance of obtaining a head with any one toss.  If you work out all the possible combinations of two tosses then you will see that there are four possibilities, one of which is two heads in a row.  So the prior (to tossing) probability of two heads in a row is 1 out of 4, or P=0.25. You will see P in press releases from research institutes, blog posts, abstracts, and research articles – this from today:

“..there was significant improvement in sexual desire among those on  testosterone (P=0.05)” [link]
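As a sanity check on the coin-toss arithmetic above, the four equally likely outcomes of two tosses can be enumerated directly – a trivial sketch in Python:

```python
from itertools import product

# all equally likely outcomes of two coin tosses: HH, HT, TH, TT
outcomes = list(product("HT", repeat=2))
p_two_heads = sum(o == ("H", "H") for o in outcomes) / len(outcomes)
print(p_two_heads)   # 0.25
```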

So, P is easy, but interpreting P depends on the context.  This is hugely important.  What I am going to concentrate on is the typical medical study that is reported.  There is also a lesson for a classroom.

One kind of study reporting a P value is a trial in which one group of patients is compared with another.  Usually one group receives an intervention (eg a new drug) and the other receives regular treatment or a placebo (eg a sugar pill).  If the study is done properly, a primary outcome should have been decided beforehand.  The primary outcome must measure something – perhaps the number of deaths in a one year period, or the mean change in concentration of a particular protein in the blood.  The study then reports how this measure differs between the group getting the new intervention and the group not getting it.  Associated with that difference is a P value, eg:

“CoQ10 treated patients had significantly lower cardiovascular mortality (p=0.02)” [link]

To interpret the P we must first understand what the study was about and, in particular, understand the “null hypothesis.”  The null hypothesis is simply the idea the study was trying to test (the hypothesis) expressed in a particular way.  In this case, the idea is that CoQ10 may reduce the risk of cardiovascular mortality.  Expressed as a null hypothesis we don’t assume that it could only decrease rates; we allow for the possibility that it may increase them as well (this does happen in some trials!).  So, we express the hypothesis in a neutral fashion.  Here that would be something like: the risk of cardiovascular death is the same in the population of patients who take CoQ10 as in the population which does not.  If we think about it for a minute, if the proportion of patients who died of a cardiovascular event was exactly the same in the two groups then the risk ratio (the CoQ10 group proportion divided by the non CoQ10 group proportion) would be exactly 1.  The P value then answers the question:

If the null hypothesis were true – that is, if the risk of cardiovascular death really was the same in both groups – what is the probability (ie P) that the measured risk ratio would differ from 1 by as much as was observed, simply by chance?

The “by chance” is because when the patients were selected for the trial there is a chance that they don’t fairly represent the true population of every patient in the world (with whatever condition is being studied) either in their basic characteristics or their reaction to the treatment. Because not every patient in the population can be studied, a sample must be taken.  We hope that it is “random” and representative, but it is not always.  For teachers, you may like to do the lesson at the bottom of the page to explain this to children.  Back to our example, some numbers may help.

Suppose we have 1000 patients receiving Drug X and 2000 receiving a placebo.  If, say, 100 patients in the Drug X group die in 1 year, then we say the risk of dying in 1 year is 100/1000 or 0.1 (or 10%).  If in the placebo group 500 patients die in 1 year, then the risk is 500/2000 or 0.25 (25%).  The risk ratio is 0.1/0.25 = 0.4.  The difference between this and 1 is 0.6.  What is the probability that we arrived at 0.6 simply by chance?  I did the calculation and got p<0.0001.  This means there is less than a 1 in 10,000 chance that this difference arose by chance.  Another way of thinking of this is that if we did the study 10,000 times, and the null hypothesis were true, we’d expect to see the result we saw about one time.  What is crucial to realise is that the P value depends on the number of subjects in each group.  If instead of 1000 and 2000 we had 10 and 20, and instead of 100 and 500 deaths we had 1 and 5, then the risks and the risk ratio would be the same, but the P value would be 0.63, which is very high (a 63% chance of observing a difference at least as large as the one we saw).  Yet another way of thinking about this is: what is the probability that we will claim there is a difference of at least the size we see, when there is really no difference at all?  If studies are reported without P values then at best take them with a grain of salt.  Better, ignore them totally.
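The two p-values above can be reproduced with Fisher’s exact test. This is a sketch, not a claim about which test any quoted study used; it builds the two-sided p-value from first principles, using log-combinations so the larger table doesn’t overflow:

```python
from math import lgamma, exp

def log_comb(n, k):
    # log of "n choose k", numerically stable for large n
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    a = deaths on drug, b = survivors on drug; c, d likewise for placebo."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    def log_p(k):   # log-probability of a table with k deaths in the drug group
        return log_comb(row1, k) + log_comb(row2, col1 - k) - log_comb(n, col1)
    lp_obs = log_p(a)
    total = 0.0
    for k in range(max(0, col1 - row2), min(row1, col1) + 1):
        if log_p(k) <= lp_obs + 1e-7:   # table at least as extreme as observed
            total += exp(log_p(k))
    return total

print(fisher_exact_two_sided(1, 9, 5, 15))          # ≈ 0.63  (10 v 20 patients)
print(fisher_exact_two_sided(100, 900, 500, 1500))  # < 0.0001 (1000 v 2000)
```

Same risks, same risk ratio of 0.4, wildly different p-values – the sample size does all the work.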

It is also important to realise that within any one study, if lots of things are measured and compared between two groups, then simply because of random sampling (by chance) some of the P values will be low.  This leads me to my next point…
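How easily this happens can be seen from the chance of at least one p<0.05 among m independent comparisons when every null hypothesis is actually true – a back-of-envelope sketch:

```python
alpha = 0.05
for m in (1, 5, 20, 100):
    # probability that at least one of m independent comparisons
    # gives p < alpha when there is no real effect anywhere
    p_at_least_one = 1 - (1 - alpha) ** m
    print(f"{m:3d} comparisons: P(at least one p < 0.05 by chance) = {p_at_least_one:.3f}")
```

Measure 20 independent things and there is roughly a 64% chance of at least one “significant” result with no real effect anywhere.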

The myth of significance

You will often see the word “significant” used with respect to studies, for example:

“Researchers found there was a significant increase in brain activity while talking on a hands-free device compared with the control condition.” [Link]

This is a wrong interpretation:  “The increase in brain activity while talking on a hands-free device is important.” or  “The increase in brain activity while talking on a hands-free device is meaningful.”

“Significant” does not equal “meaningful” in this context.  All it means is that the P value under the null hypothesis is less than 0.05.   If I had my way I’d ban the word significant.  It is simply a lazy habit of researchers to use this shorthand for p<0.05.  It has come about simply because someone somewhere started to do it (and called it “significance testing”) and the sheep have followed.  As I say to my students, “Simply state the P value – that has meaning.”*


_____________________________________________________________

For the teachers

Materials needed:

  • Coins
  • Paper
  • The ability to count and divide

Ask the children what the chances of getting a “Heads” are.  Have a discussion and try and get them to think that there are two possible outcomes each equally probable.

Get each child to toss their coin 4 times and get them to write down whether they got a head or tail each time.

Collate the number of heads in a table like this:

#heads             #children getting this number of heads

0                      ?

1                      ?

2                      ?

3                      ?

4                      ?

If your classroom size is 24 or larger then you may well have someone with 4 heads or 0 (4 tails).

Ask the children whether they think this is amazing or accidental.

Then, get the children to continue tossing their coins until they get either 4 heads or 4 tails in a row.  Perhaps make it a competition to see how fast they can get there.  They need to continue to write down each head and tail.

You may then get them to add up all their heads and all their tails.  Now work out the proportions (get them to divide the number of heads by the total number of tosses).  If you like, go one step further and collate all the data.  The proportion of heads should be approaching 0.5.

Discuss the idea that getting 4 heads or 4 tails in a row was simply due to chance (randomness).

For more advanced classes, you may talk about statistics in medicine and in the media.  You may want to use some specific examples about one off trials that appeared to show a difference, but when repeated later it was found to be accidental.
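For those advanced classes, the classroom arithmetic can be checked in Python – first analytically, then by simulating many classrooms (the class size of 24 matches the lesson above):

```python
import random

# chance that one child's 4 tosses are all heads or all tails
p_run = 2 * 0.5 ** 4                     # = 0.125, i.e. 1 in 8
# chance that at least one child in a class of 24 sees such a run
p_class = 1 - (1 - p_run) ** 24
print(round(p_class, 3))                 # about 0.96

# simulate 10,000 classrooms to check the analytic answer
random.seed(1)
trials = 10_000
hits = sum(
    any(len({random.choice("HT") for _ in range(4)}) == 1  # all 4 tosses the same
        for _child in range(24))
    for _ in range(trials)
)
sim = hits / trials
print(sim)                               # close to the analytic value
```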

_____________________________________________________________

*For the pedantic: In a controlled trial the number of participants is chosen on the basis of pre-specifying a (hopefully) meaningful difference in the outcome between the case and control arms and the probabilities of Type I (alpha) and Type II (beta) errors.  The alpha is often 0.05.  In this specific situation, if P<0.05 then it may be reasonable to talk about a significant difference, because the alpha was pre-specified and used to calculate the number of participants in the study.

Vitamin D: “Silver bullet or fool’s gold?”

Vitamin D has had big raps lately.  We know that low levels of it correlate with higher levels of some diseases, but does taking a supplement help?  An article in the Herald this morning by Martin Johnson nicely outlines a study  being undertaken by Professor Robert Scragg of the University of Auckland.  His is the quote in the title.

Why is there need for an expensive trial when lots of observational studies show that low levels of Vit D mean you are more likely to get cardiovascular (and other) diseases, and high levels mean you are less likely?  Isn’t it obvious that taking supplements will improve health outcomes?  Sadly, no, it isn’t.  Correlation does not mean causation (or “Post hoc ergo propter hoc” for you Latinistas out there – I learnt this from a re-run of West Wing this week).  What this means is that there is more than one possible reason for the correlation, ie:

  1.  The illnesses arise because Vit D is an essential component in the biochemical pathways that provide a defense against them (causation), or
  2.  Low Vit D is a consequence of something else that has gone wrong that also causes the diseases (ie Vit D is a “flag” or “marker” for something else).

If 1 is true, then raising Vit D levels may help.  If 2 is true, then raising levels probably won’t help.  For the moment assume 1 is true; the next question is “does supplementation help?”  Again, most would think “Of course.”  However, it is possible that by bypassing the mechanism by which the body makes its own Vit D (ie beginning with exposure to the sun), the body’s response to the increased Vit D is different.  These, and others, are reasons why a Randomised Controlled Trial (RCT), in which some participants get Vit D and some get a placebo (in this case sunflower lecithin), is conducted.  There is some information about the trial in the Herald article; more can be found on the Aust NZ Clinical Trials Registry here.  Briefly, participants (50 to 84 years of age) will receive 1 capsule a month for 4 years.  The incidence rate of fatal and non-fatal cardiovascular disease is the primary outcome. Secondary outcomes include the incidence of respiratory disease and fractures. They need to recruit 5100 people (so get involved!).

Why so many people?  This is because they want to avoid making two mistakes.  They want to know with high certainty that if they see a difference in the rates of cardiovascular disease between the Vit D and placebo groups, it is not a difference that occurred randomly (ie seeing a difference when there really is no difference).  It is most common to accept a 5% chance of seeing such a difference by chance (tossing 4 heads in a row is about a 6% chance).  The second mistake would be for the trial to show no difference between the groups, but for this to be a false conclusion (ie not seeing a difference when there really is a difference).  It is common to accept about a 10% chance of this happening.  Notice, I have talked about “difference”, not Vit D being “better” than placebo.  This is very important, because it is possible that Vit D is worse, and scientists must take that possibility into account.  That is why scientists also start with what we call the “null hypothesis” – the presumption, in this case, that there is “no difference” in the rates of cardiovascular disease between those taking Vit D and those taking placebo.
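The sample-size logic in that paragraph can be sketched with the standard two-proportion formula. The 5% and 3.5% event rates below are purely hypothetical illustrations, not the trial’s actual assumptions:

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Participants per arm needed to detect a difference between event rates
    p1 and p2, with two-sided type I error alpha and the given power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # about 1.28 for 90% power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar)) +
                 z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# hypothetical: detect a cut in event rate from 5% to 3.5%
print(n_per_group(0.05, 0.035))   # several thousand per arm
```

Small absolute differences in uncommon outcomes are exactly what drives recruitment targets into the thousands.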

I liked the quote of Prof Scragg in the Herald:

“GPs are very supportive of it and I know they are prescribing it extensively to patients. Hospital specialists are sceptical. Me, I’m in the middle. My heart says I want it to work. My head says I have to keep an open mind.”

I too often find myself in the “middle” – hoping with my heart that something works for the good of all, but working with my head so that we don’t end up peddling false hope or worse.

How can donor rates be increased? : guest post

I read the following in Kidney Health New Zealand’s annual report and with KHNZ and Paula Martin’s permission I have reproduced her great report.  I am thrilled to see such important research being undertaken and as you’ll read, Paula has great motivation.  Paula is a PhD Candidate in the Health Services Research Centre, School of Government, Victoria University, Wellington, New Zealand.

Kidney Health New Zealand Research Grant

Paula Martin

In 2006 I donated a kidney to my husband. At the time, I was just focusing on getting through the year-long donor work-up and supporting my husband while we coped with the impacts on our lives of him being on peritoneal dialysis. Only after the transplant did I realise just how few living donor transplants are done each year in New Zealand. In 2006, only 46 other live donors gave a kidney to someone; last year the number had climbed to 57 live donors, but the number of people needing a transplant had also increased dramatically, with around 600 on the official waiting list.

What could be done to increase the current rate of kidney donations? The low number of transplants is a concern because we know that for most people with end stage renal failure, a transplant is the best treatment. In addition, it is cheaper than keeping people on dialysis. In order to develop solutions, we needed research to tell us what the barriers to living donor kidney transplantation are in New Zealand; how similar to, or different from, barriers in other countries these are; and what people involved in the renal community here think could be done about those barriers, so that more people wanting a transplant can get one.

In 2010, I decided to do some research on this topic to fill this gap. Supported by a research grant from Kidney Health New Zealand, I’m currently undertaking a PhD in Public Policy based at the Health Services Research Centre in the School of Government, at Victoria University of Wellington. My focus is solely on living donation, not deceased. Around half of all kidney transplants now come from living donors and with the increasing demand for kidney transplants and the shortage of deceased donors, living transplantation has to be a critical part of solving the problem. The barriers to living donation and deceased donation are different so it’s important to think about them separately.

We know from overseas research that there can be many different barriers: for example, patients needing a transplant often find it difficult to approach their family and friends about whether they might consider living donation; people who want to be donors can face practical barriers such as loss of income while they take time off to recover from the surgery; and many people who would like to be donors discover that they aren’t compatible with the person they want to donate to, or that they have a medical problem of their own which makes them unsuitable. A particular problem in NZ is that Maori and Pacific people often find it harder to get a transplant than European/Pakeha. There are likely to be many different reasons for this. Cultural attitudes to organ donation may be one factor, but a bigger issue may be that it can be harder for these patients to find a donor who meets the strict medical suitability criteria because of things like the high rates of Type II diabetes in these populations.

There is no single solution to this problem – this is an extremely complex issue and we’ll need a variety of different initiatives to make a difference to it. So, I’ve been looking at our legislation and current policies as well as how renal services operate on the ground, and talking to a range of different people – patients, renal specialists, transplant coordinators, patient support groups, managers in District Health Boards and senior government officials and politicians – to find out what they think the issues are.

Finding out what the issues are from a patient perspective has been a big part of the research. With the assistance of the three renal transplant units, I carried out a postal survey last year of all the people on the kidney transplant waiting list and received nearly 200 replies. I’ve followed that up with a small number of in-depth patient interviews. The early results of this part of the research suggest that, as in other countries, patients find it very difficult to “ask” someone to be a kidney donor which often stops them talking to their family and friends about living donation. Furthermore, patients that do get offers from people to be kidney donors often find they are incompatible or the potential donor is medically unsuitable for some reason. Health professionals I’ve interviewed have provided valuable insights into what the issues are from their perspective inside the health system.

I’m aiming to finish this research in 2013. I hope it will be of use to practitioners, policy makers, patient groups and anyone else interested in making a difference to this problem.

Thanks to Kidney Health New Zealand for supporting this work with a grant towards the costs of doing research.

A positive STD

The doc said it was an STD.  I laughed.  Why?

The answer my friends lies in the numbers.

Of course the doctor wanted me to have an awkward conversation and prescribed some anti-bs.  I, on the other hand, was very confident, so took a different line – “Do the test again,” I said.

And the rest is history…

Any blood or urine test has a reference range – that is, a range of concentrations within which the result is called negative, and above (or below) which it is called positive (some tests, like home pregnancy kits, are simply reported as a “+” or a “-“ rather than as a number).  When a doctor receives a positive test result, it is up to them how to interpret it.  They may choose to believe the test has diagnosed a disease, they may choose to do more tests in order to “confirm a diagnosis”, or they may choose to think that the result is erroneous.  I really don’t know how often they choose each course.  What I do know is that every school child should be taught about false positives and false negatives.

A false positive is simply a test which says that you do have the disease when you don’t. 

A false negative is simply a test which says that you do not have the disease when you do.

What we want is a test with as few false positives and false negatives as possible (the “narrow” ellipse in the diagram).  In reality, tests vary a lot.

Ideally, every test result would lie in the dark blue (true negative) or dark red (true positive). In reality, there are always a few false positives and false negatives [a good test would have few (the narrow ellipse), a poor test would have many (the broader ellipse)].

How often do false negatives and false positives occur?  I don’t know the exact number, but the answer is “frequently.”  The more tests done, the more false negatives and false positives there are.  For a test like chlamydia, which is ordered by doctors even when it is not asked for by patients (grrrr!), my guess is that it is very frequent.  Consider this – suppose the boffins who developed the test for chlamydia set the threshold for positivity (eg the concentration above which the test is called “positive”) such that the test correctly identifies as having the disease 99% of those that have it (its sensitivity).  Then for every 100 people with the disease who are tested, 1 will wrongly be told they do not have it (a false negative).  If the test also correctly identifies 99% of those who do not have the disease as not having it (its specificity), then for every 100 disease-free people tested, 1 will wrongly be told they have it (a false positive).

Suppose, on any given day, 1000 people in New Zealand have the test.  For a moment, let us assume that of those 1000 people, 100 actually have chlamydia.  With the numbers above, 9 of the 900 people without chlamydia will be told they have it, and 1 of the 100 people with chlamydia will be told they don’t!
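The arithmetic behind that example can be sketched in a few lines of Python (the 99% figures and the 100-in-1000 prevalence are the illustrative numbers assumed above, not properties of any real chlamydia test):

```python
# Worked example: 1000 people tested, 100 of whom actually have the disease,
# with a test that is right 99% of the time in each group.
tested = 1000
diseased = 100
healthy = tested - diseased

sensitivity = 0.99  # fraction of diseased people correctly called positive
specificity = 0.99  # fraction of healthy people correctly called negative

true_positives = diseased * sensitivity         # correctly told "positive"
false_negatives = diseased * (1 - sensitivity)  # told "negative" despite having it
true_negatives = healthy * specificity          # correctly told "negative"
false_positives = healthy * (1 - specificity)   # told "positive" despite not having it

print(f"False positives: {false_positives:.0f}")  # 9 people
print(f"False negatives: {false_negatives:.0f}")  # 1 person
```

Note that of the roughly 108 people who get a positive result, 9 (about 8%) don’t actually have the disease – even with a test that is “99% accurate.”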

Telling someone they have a disease when they don’t matters most if the treatment for the disease is dangerous and/or expensive, or the psychological or social consequences for the individual are serious (eg divorce!).

Telling someone they don’t have a disease when they do matters most if failing to treat could lead to more serious health problems and/or costs for the person or their community (as with an STD).

All these factors have to be weighed up when deciding on test thresholds and on whether a test should be made available in the first place.  It is why, for example, we don’t routinely screen for prostate cancer – the test has too high a likelihood of false positives.
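One reason screening whole populations is so fraught: the rarer the disease, the more of the “positives” are false. A short sketch of this effect, using the same assumed 99% sensitivity and specificity as before and Bayes’ rule (the prevalence figures are purely illustrative):

```python
# Sketch: how the chance that a "positive" result is real (the positive
# predictive value) falls as a disease becomes rarer in the tested group.
sensitivity = 0.99
specificity = 0.99

def ppv(prevalence):
    """Fraction of positive results that are true positives (Bayes' rule)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

for prev in (0.10, 0.01, 0.001):
    print(f"prevalence {prev:>5.1%}: {ppv(prev):.1%} of positives are real")
```

With a prevalence of 1 in 10, over 90% of positives are genuine; at 1 in 1000, fewer than 1 in 10 are – the same “99% accurate” test, but mostly false alarms.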

Recently in the media there was concern over screening for breast cancer in Southland.  Some women had received negative readings of mammograms, yet later were found to have breast cancer.  Was this because of poor reading of mammograms?  The answer appears to be “No”, the “False negatives” were at the rate expected (See  http://www.health.govt.nz/news-media/media-releases/confidence-southern-screening-programme).

For the record – the second test was negative.