
Cheesecake Files: The ICare-Acute Coronary Syndrome (heart attack) study

Hundreds of nurses, emergency department doctors, cardiologists and other specialists, laboratory staff, administrators and managers from every New Zealand hospital with an emergency department have come together to implement new, effective, and safe pathways for patients who think they may be having a heart attack.  Today, Dr Martin Than (CDHB, Emergency Department) presented to the American Heart Association the results of our research into the national implementation of clinical pathways that incorporate an accelerated diagnostic protocol (ADP) for patients with possible heart attacks.  Simultaneously, a paper detailing that research appears in the academic journal Circulation.

The headline is that in the 7 hospitals we monitored (representing about a third of all ED admissions in NZ each year), there was a more than twofold increase in the number of patients who were safely discharged from the ED within 6 hours of arrival and told “It’s OK, you are not having a heart attack”.

Improving Care processes for patients with a possible heart attack.

Why is this important?

About 65,000 of the 1 million presentations to EDs each year in New Zealand are patients who the attending doctors think may be having a heart attack.  However, only 10-15% of those 65,000 are actually having a heart attack.  The traditional approach to assessment is long and drawn out, involves many resources, and means thousands of people are admitted to a hospital ward even though it turns out they are not having a heart attack.  Of course, this means that they and their families have a very uncomfortable 24 hours or so wondering what is going on.  So, any method that safely helps to reassure and return home early some of those patients is a good thing.

What is a clinical pathway?

A clinical pathway is a written document based on best practice guidelines that is used by physicians to manage the course of care and treatment of patients with a particular condition or possible condition.  It is intended to standardise and set out the time frame for investigation and treatment within a particular health care setting – so it must take into account the resources available at a particular hospital.  For example, each hospital must document how a patient is assessed and, if they are assessed within the ED as having a high risk of a heart attack, where they must go.  In a large metropolitan hospital, this may mean simply passing them into the care of the cardiology department.  In a smaller setting like Taupo, where there is no cardiology department, it may mean documenting when and how they are transported to Rotorua or Waikato hospital.

What is an accelerated diagnostic protocol?

An accelerated diagnostic protocol (ADP) is a component of the clinical pathway that enables the ED doctors to more rapidly and consistently make decisions about where to send the patient.  In all cases in New Zealand the ADPs for evaluating suspected heart attacks have 3 main components: (i) an immediate measurement of the electrical activity of the heart (an ECG), (ii) an immediate blood sample to measure the concentration of a marker of heart muscle damage called troponin, plus a second sample 2 or 3 hours later, and (iii) a risk score based on demographics, prior history of heart conditions, smoking etc, and the nature of the pain (ie where it hurts, and whether it hurts when someone pushes on the chest or when the patient takes deep breaths etc).  Importantly, these components enable a more rapid assessment of patients than the traditional approach and, in particular, enable patients to be rapidly stratified into low-risk, intermediate-risk, and high-risk groups.  Usually the low-risk patients can be sent home.
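To make those three components concrete, here is a minimal sketch of how an ADP of this shape might combine them.  The cut-offs, names and wording are illustrative assumptions of mine, not any hospital’s actual protocol – the real ones (eg EDACS, ADAPT) are defined in the published literature and in local pathway documents.

```python
# Illustrative sketch only: thresholds and names are hypothetical,
# not any hospital's actual ADP.

def adp_risk_group(ecg_ischaemia: bool,
                   troponin_0h: float,
                   troponin_2h: float,
                   risk_score: int,
                   troponin_cutoff: float = 26.0,       # assumed assay cut-off (ng/L)
                   low_score_cutoff: int = 16) -> str:  # assumed score threshold
    """Combine ECG, serial troponin, and a risk score into a risk group."""
    troponin_positive = max(troponin_0h, troponin_2h) >= troponin_cutoff
    if ecg_ischaemia or troponin_positive:
        return "high risk - refer onwards (eg cardiology)"
    if risk_score < low_score_cutoff:
        return "low risk - suitable for early discharge"
    return "intermediate risk - further observation and testing"

print(adp_risk_group(ecg_ischaemia=False, troponin_0h=5.0,
                     troponin_2h=6.0, risk_score=8))
# -> low risk - suitable for early discharge
```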

What was done?

The Ministry of Health asked every ED to put in place a pathway.  Over an ~18 month period, a series of meetings was held at each hospital, led by Dr Than, the clinical lead physician for the project.  Critically, at each meeting there were multiple members of the ED (doctors and nurses), cardiology, general wards, laboratory staff, and hospital administration.  The evidence for different ADPs was presented.  Each hospital had to assess this evidence itself and decide on the particular ADP it would use.  Potential barriers to implementation and possible solutions were discussed.  Critically, champions for different aspects of the pathway implementation process were identified in each hospital.  These people led the process internally.

Oversight of the implementation came from an ad hoc advisory board put together by the Ministry of Health, comprising MoH officials, Dr Than, cardiologists, and myself.

The Improving Care processes for patients with suspected Acute Coronary Syndrome (ICare-ACS) study was a Health Research Council sponsored study with co-sponsorship of staff time by participating hospitals.  Its goal was to measure any changes in each hospital in the proportion of patients who were discharged home early from the ED, and to check whether they were discharged safely (ie to check that people with heart attacks were not being sent home).  Dr Than and I co-led this project, but there were many involved who not only set up the pathways in each of the 7 participating study hospitals, but also helped with obtaining the data for me to crunch.

What were the study results?

In the pre-implementation phase (6 months for each hospital) there were 11,529 patients assessed for possible heart attack. Overall, 8.3% of them were sent home within 6 hours of arrival (we used 6 hours because this is a national target for having patients leave the ED).  The proportion of patients sent home varied considerably between hospitals – from 2.7% to 37.7%.  Of those sent home early, a very small proportion (0.52%) had what we call a major adverse event (eg a heart attack, a cardiac arrest, or death from any cause) within 30 days.  This is actually a very good number (it is practically impossible to achieve 0%).

We monitored each hospital for at least 5 months after pathway implementation, and for a median of 10.6 months.  Of the 19,803 patients, 18.4% were sent home within 6 hours of arrival – ie the pathway more than doubled the proportion of patients who were sent home early.  Importantly, all 7 of the hospitals sent more patients home early.  The actual percentage sent home still varied between hospitals, showing there is more room for further improvement in some hospitals than in others.  Very importantly, the rate of major adverse events in those sent home remained very low (0.44%).  Indeed, when we looked in detail at the few adverse events, in most cases there had been a deviation from the local clinical pathway.  This suggests that some ongoing education and “embedding in” of the pathways may improve safety even more.

The study also showed that amongst all patients without a heart attack the implementation of the pathway reduced the median length of stay in hospital by nearly 3 hours.  Using crude numbers for the cost of an acute event in a hospital, I estimate that this is a saving to the health system of $9.5 million per year.  These types of calculations are difficult and full of assumptions; nevertheless, I can be confident that the true savings are in the millions (pst… Government… I wouldn’t mind a fraction of this saving to carry on research please).
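For the curious, the shape of that crude calculation is just three numbers multiplied together.  The post doesn’t give the actual unit costs used, so the inputs below are illustrative assumptions of mine, chosen only to show how an estimate of this size falls out.

```python
# Back-of-envelope sketch.  All three inputs are illustrative assumptions,
# not the actual figures behind the $9.5 million estimate.
non_mi_patients_per_year = 60_000  # assumed: ~65,000 presentations, most not heart attacks
hours_saved_per_patient  = 3       # from the study: median stay cut by nearly 3 hours
cost_per_bed_hour_nzd    = 53      # assumed crude cost of an acute hospital hour

savings = non_mi_patients_per_year * hours_saved_per_patient * cost_per_bed_hour_nzd
print(f"~${savings / 1e6:.1f} million per year")  # ~$9.5 million
```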

How did this come about?

This study and the pathway implementation are the result of a decade-long series of studies in Christchurch hospital and some international studies, particularly with colleagues in Brisbane.  These studies have involved ED staff, cardiologists, research nurses, University of Otago academics (particularly those in the Christchurch Heart Institute) and many others.  They began with an international observational study which measured troponin concentrations at earlier than normal time points to see whether they gave information that would enable earlier discharge of some patients.  This was followed by the world’s first randomised trial of an ADP versus (then) standard practice.  That showed that the ADP resulted in more patients being safely sent home.  It was immediately adopted as standard practice in Christchurch.  The ADP was refined with a more “fit for purpose” risk assessment tool (called EDACS – developed locally and in collaboration with colleagues in Brisbane).  The EDACS protocol was then compared to the previous protocol (called ADAPT) in a second randomised trial.  It was at least as good, with the potential to safely discharge even more patients.  It is currently standard practice in Christchurch.

As a consequence of the Christchurch work, the Ministry of Health said, effectively, ‘great, we want all of New Zealand to adopt a similar approach’, and the rest, as they say, is history.  Now, all EDs have a clinical pathway in place and all use an evidence-based ADP – two use ADAPT and the rest use EDACS, with one exception which uses a more ‘troponin-centric’ approach (still evidence-based) which I won’t go into here.  Meanwhile, all of Queensland has adopted the ADAPT approach, and we know of many individual hospitals in Australia, Europe and Iran (yes) which have adopted EDACS.

Other help

As mentioned already, the Health Research Council and the Ministry of Health, along with all those medical professionals, were integral to getting to where we are today.  Also integral were all those patients who agreed to participate in the randomised trials.  Medical research is built on the generosity of the patient volunteer.  Behind the scenes is our research manager, Alieke, who ensures doctors run on time.  Finally, I am very fortunate to be the recipient of a research fellowship that enables me to do what I do.  I thank my sponsors, the Emergency Care Foundation, Canterbury Medical Research Foundation, and Canterbury District Health Board.  Some of the earlier work has also been done in part with my University of Otago Christchurch hat on.  Thank you all.


Half a million Kiwis suddenly have high blood pressure

At 10am on 14 November 2017 NZST, millions of people around the world suddenly had high blood pressure. This will come as a shock to many and may precipitate a crisis in hand-wringing and other odd behaviour, like over-medication and jogging.

The American Heart Association and American College of Cardiology have just announced a redefinition of high blood pressure.

High blood pressure is now defined as readings of 130 mm Hg and higher for the systolic blood pressure measurement, or readings of 80 and higher for the diastolic measurement. That is a change from the old definition of 140/90 and higher, reflecting complications that can occur at those lower numbers. (link)

Announced at the annual American Heart Association conference, this is bound to cause some consternation.  It shifts 14% of the US adult population into the “high blood pressure” category, and I estimate that it will do something similar for the NZ population, meaning half a million New Zealanders who didn’t have high blood pressure at 9am now have it (assuming NZ cardiologists follow their US colleagues).
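For concreteness, here is a throwaway sketch of the reclassification using the thresholds quoted above; the 135/85 reading is just an invented example of someone caught by the change.

```python
# Sketch of the definition change using the thresholds quoted above.
def hypertensive(systolic: int, diastolic: int, definition: str = "new") -> bool:
    sys_cut, dia_cut = (130, 80) if definition == "new" else (140, 90)
    return systolic >= sys_cut or diastolic >= dia_cut

reading = (135, 85)  # mm Hg, an invented example
print(hypertensive(*reading, definition="old"))  # False: fine at 9am
print(hypertensive(*reading, definition="new"))  # True: hypertensive at 10am
```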

While this is, of course, absurd, it also highlights the seriousness with which cardiologists take elevated blood pressure – maybe we all should take it a bit more seriously; perhaps park the car further from work and walk a little (likely to be cheaper too).

Have you got high blood pressure? (c) American Heart Association


A vision of kiwi kidneys

Sick of writing boring text reports?  Take a leaf out of Christchurch nephrologist Dr Suetonia Palmer’s (@SuetoniaPalmer) book and make a visual abstract instead.  Here are two she has created recently based on data collected about organ donation and end stage renal failure by ANZDATA (@ANZDATARegistry). Enjoy.

[Two visual abstracts by Dr Palmer, based on ANZDATA organ donation and end stage renal failure data]

ps. The featured image is of the Kidney Brothers.  Check out the great educational resources at The OrganWiseGuys.

Cheesecake files: A little something for World Kidney Day

Today is World Kidney Day, so I shall let you in on a little secret: there is a new tool for predicting whether a transplant is going to be problematic to get working properly.

Nephrologists call a transplant a “graft”, and when the new kidney is not really filtering as well as hoped after a week they call it “delayed graft function”.  Rather than waiting a week, nephrologists would like to know in the first few hours after the transplant whether the new kidney is going to be one of these “problematic” transplants or not.  A lot of money has been spent on developing fancy new (urinary) biomarkers, and they may well have their place, but at this stage none are terribly good at predicting delayed graft function.

A while ago I helped develop a new tool – simply the ratio of a measurement of the rate at which a particular substance is being peed out of the body to an estimate of how much the body is producing in the first place.  If the ratio is 1 then the kidney is in a steady state. If not, then either the kidneys are not performing well (ie not keeping up with production), or they have improved enough after a problem and are getting rid of the “excess” of the substance from the body.  This ratio is simple and easy to calculate and doesn’t require extra expense or specialist equipment.
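As a sketch of the idea (the exact inputs and estimating equation are in the published paper, which I won’t reproduce here; the variable names and numbers below are mine):

```python
# Illustrative sketch of the ratio; names and numbers are placeholders,
# not the published method.
def excretion_production_ratio(urine_conc: float,
                               urine_flow: float,
                               estimated_production: float) -> float:
    """
    urine_conc:           concentration of the substance in urine (eg mmol/L)
    urine_flow:           urine output rate (L/h)
    estimated_production: estimated production rate of the substance (mmol/h)
    """
    excretion_rate = urine_conc * urine_flow  # mmol/h actually leaving the body
    return excretion_rate / estimated_production

print(excretion_production_ratio(urine_conc=6.0, urine_flow=0.1,
                                 estimated_production=1.0))
# 0.6: the kidneys are not keeping up with production
```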

A few months ago, I persuaded a colleague in Australia to check whether this ratio could be used soon after transplant to predict delayed graft function. As it turns out, in the small study we ran, it can, and it adds value to a risk prediction model based on the normal stuff nephrologists measure! I’m quite chuffed about this.  Sometimes, the simple works.  Maybe something will come of it, and ultimately some transplants will work better and others will not fail.  Anyway, it’s nice to bring a measure of hope on World Kidney Day.

This was published a couple of weeks ago in the journal Nephron.


Christchurch has breast cancer research hub

Guest post by: Kim Thomas, Communications Manager at the University of Otago, Christchurch


A team of specialist cancer researchers have joined forces to focus on the impact of obesity on breast cancer.

The researchers all work at the University of Otago, Christchurch’s Mackenzie Cancer Research Group. The Group is headed by Canterbury District Health Board oncologist Professor Bridget Robinson, a breast cancer expert.

Researchers Associate Professor Gabi Dachs, Dr Margaret Currie and Dr Logan Walker have previously investigated various aspects of cancer but decided to team up and focus on the significant health issue of obesity.

Associate Professor Dachs says that international studies have shown breast cancer patients who were obese before or after diagnosis are less likely to survive than patients with a normal BMI. The risk of dying from breast cancer increases by a third for every 5 kg/m² increment in BMI.


From left to right: A/Prof Gabi Dachs, Dr Margaret Currie, Dr Logan Walker
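Taken at face value, the “a third for every 5 kg/m²” figure quoted above compounds multiplicatively – assuming, and this is my reading rather than anything stated in the study, that the effect is log-linear in BMI:

```python
# Assumes the quoted effect compounds multiplicatively per 5 kg/m2 step;
# that log-linearity is my assumption, not stated in the post.
def relative_risk(bmi: float, reference_bmi: float = 25.0,
                  rr_per_5_units: float = 4 / 3) -> float:
    return rr_per_5_units ** ((bmi - reference_bmi) / 5.0)

print(f"{relative_risk(35.0):.2f}")  # ~1.78x the risk at BMI 35 vs 25
```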

The three researchers are investigating different aspects of obesity and breast cancer:

  • Associate Professor Dachs is looking at molecular factors associated with obesity in cancer, particularly how fat cells communicate with cancer cells and negatively affect them.
  • Dr Margaret Currie is putting fat and breast cancer cells together to see how the fat cells make tumours more resistant to treatment. She suspects the fat cells provide ‘an extra energy hit’ to cancer cells by providing lipids, or fats, in addition to glucose.
  • Geneticist Dr Logan Walker will investigate whether the obesity-related gene responsible for the amylase enzyme in saliva (AMY1) contributes to breast cancer development. He will also explore the role of key genes that behave differently in breast tumours from obese women.

The researchers’ work is funded by the NZ Breast Cancer Foundation, the Cancer Society of New Zealand, the Canterbury and West Coast Division of the Cancer Society NZ, the Mackenzie Charitable Foundation and the University of Otago.


My 10 Commandments of a Data Culture

Thou shalt have no data but ethical data.

Thou shalt protect the identity of thy subjects with all thy heart, soul, mind and body.

Thou shalt back-up.

Thou shalt honour thy data and tell its story, not thy own.

Thou shalt always visualise thy data before testing.

Thou shalt share thy results even if negative.

Thou shalt not torture thy data (but thou may interrogate it).

Thou shalt not bow down to P<0.05 nor claim significance unless it is clinically so.

Thou shalt not present skewed data as mean±SD.

Thou shalt not covet thy neighbour’s P value.

Significantly p’d

I may be a pee scientist, but today is brought to you by the letter “P”, not the product.  “P” is something all journalists, lay readers of science articles, teachers, medical practitioners, and scientists should know about.  Alas, in my experience many don’t, and as a consequence “P” is abused. Hence this post.  Even more abused is the word “significant”, often associated with P; more about that later.

P is short for probability.  Stop! – don’t stop reading just because statistics was a bit boring at school; understanding may be the difference between saving lives and losing them.  If nothing so dramatic, it may save you from making a fool of yourself.

P is a probability.  It is normally reported as a fraction (eg 0.03) rather than a percentage (3%).  You will be familiar with it from tossing a coin.  You know there is a 50%, or one half, or 0.5 chance of obtaining a heads with any one toss.  If you work out all the possible combinations of two tosses then you will see that there are four possibilities, one of which is two heads in a row.  So the prior (to tossing) probability of two heads in a row is 1 out of 4, or P=0.25. You will see P in press releases from research institutes, blog posts, abstracts, and research articles, like this one from today:

“..there was significant improvement in sexual desire among those on  testosterone (P=0.05)” [link]
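Before unpicking that, the two-toss arithmetic above is easy to check by brute force:

```python
# Enumerate every outcome of two fair coin tosses and count two-heads.
from itertools import product

outcomes = list(product("HT", repeat=2))  # HH, HT, TH, TT
p_two_heads = sum(o == ("H", "H") for o in outcomes) / len(outcomes)
print(len(outcomes), p_two_heads)  # 4 outcomes, P = 0.25
```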

So, P is easy, but interpreting P depends on the context.  This is hugely important.  What I am going to concentrate on is the typical medical study that is reported.  There is also a lesson for a classroom.

One kind of study reporting a P value is a trial where one group of patients is compared with another.  Usually one group of patients has received an intervention (eg a new drug) and the other receives regular treatment or a placebo (eg a sugar pill).  If the study is done properly, a primary outcome should have been decided beforehand.  The primary outcome must measure something – perhaps the number of deaths in a one-year period, or the mean change in concentration of a particular protein in the blood.  The primary outcome is how what is measured differs between the group getting the new intervention and the group not getting it.  Associated with it is a P value, eg:

“CoQ10 treated patients had significantly lower cardiovascular mortality (p=0.02)” [link]

To interpret the P we must first understand what the study was about and, in particular, understand the “null hypothesis”.  The null hypothesis is simply the idea the study was trying to test (the hypothesis) expressed in a particular way.  In this case, the idea is that CoQ10 may reduce the risk of cardiovascular mortality.  Expressed as a null hypothesis, we don’t assume that it could only decrease the risk; we allow for the possibility that it may increase it as well (this does happen in some trials!).  So, we express the hypothesis in a neutral fashion.  Here that would be something like: the risk of cardiovascular death is the same in the population of patients who take CoQ10 as in the population which does not take CoQ10.  If we think about it for a minute, if the proportion of patients who died of a cardiovascular event was exactly the same in the two groups, then the risk ratio (the CoQ10 group proportion divided by the non-CoQ10 group proportion) would be exactly 1.  The P value then answers the question:

If the null hypothesis were true – that is, if the risk of cardiovascular death really were the same in both groups – what is the probability (ie P) that the measured risk ratio differs from 1 by as much as was observed, simply by chance?

The “by chance” is because when the patients were selected for the trial there is a chance that they don’t fairly represent the true population of every patient in the world (with whatever condition is being studied), either in their basic characteristics or in their reaction to the treatment. Because not every patient in the population can be studied, a sample must be taken.  We hope that it is “random” and representative, but it is not always.  Teachers may like to try the lesson at the bottom of the page to explain this to children.  Back to our example; some numbers may help.

Suppose we have 1000 patients receiving Drug X and 2000 receiving a placebo.  If, say, 100 patients in the Drug X group die in 1 year, then the risk of dying in 1 year is 100/1000, or 0.1 (10%).  If in the placebo group 500 patients die in 1 year, then the risk is 500/2000, or 0.25 (25%).  The risk ratio is 0.1/0.25 = 0.4.  The difference between this and 1 is 0.6.  What is the probability of seeing a difference that large simply by chance?  I did the calculation and got p<0.0001.  This means that, if the null hypothesis were true, there would be less than a 1 in 10,000 chance of observing a difference this large.  Another way of thinking of this is that if we did the study 10,000 times, and the null hypothesis were true, we’d expect to see the result we saw about one time.  What is crucial to realise is that the P value depends on the number of subjects in each group.  If instead of 1000 and 2000 patients we had 10 and 20, and instead of 100 and 500 deaths we had 1 and 5, then the risks and the risk ratio would be the same, but the P value would be 0.63, which is very high (a 63% chance of observing a difference at least as large as we observed, under the null hypothesis).  If studies are reported without P values then at best take them with a grain of salt.  Better, ignore them totally.
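If you want to reproduce those two P values yourself, a chi-squared test with continuity correction on the 2×2 tables of counts gives numbers matching the ones above (I’m assuming that particular test for the sketch; the principle is the same with any standard test of two proportions):

```python
# Sketch reproducing the worked example.  A chi-squared test with Yates
# continuity correction is assumed (scipy applies it by default for 2x2 tables).
from scipy.stats import chi2_contingency

# 1000 on Drug X (100 died) vs 2000 on placebo (500 died)
_, p_big, _, _ = chi2_contingency([[100, 900], [500, 1500]])
print(f"p = {p_big:.2g}")    # p < 0.0001

# Same risks but a tiny trial: 1/10 deaths vs 5/20 deaths
_, p_small, _, _ = chi2_contingency([[1, 9], [5, 15]])
print(f"p = {p_small:.2f}")  # ~0.63
```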

It is also important to realise that, within any one study, if lots of things are measured and compared between two groups then simply because of random sampling (by chance) some of the P values will be low.  This leads me to my next point…

The myth of significance

You will often see the word “significant” used with respect to studies, for example:

“Researchers found there was a significant increase in brain activity while talking on a hands-free device compared with the control condition.” [Link]

Both of these are wrong interpretations: “The increase in brain activity while talking on a hands-free device is important” and “The increase in brain activity while talking on a hands-free device is meaningful.”

“Significant” does not equal “meaningful” in this context.  All it means is that the P value for the null hypothesis is less than 0.05.  If I had my way I’d ban the word significant.  It is simply a lazy habit of researchers to use this shorthand for p<0.05.  It has come about simply because someone somewhere started to do it (and called it “significance testing”) and the sheep have followed.  As I say to my students, “Simply state the P value; that has meaning.”*


_____________________________________________________________

For the teachers

Materials needed:

  • Coins
  • Paper
  • The ability to count and divide

Ask the children what the chances of getting a “heads” are.  Have a discussion and try to get them to see that there are two possible outcomes, each equally probable.

Get each child to toss their coin 4 times and get them to write down whether they got a head or tail each time.

Collate the number of heads in a table like this:

#heads    #children getting this number of heads
0         ?
1         ?
2         ?
3         ?
4         ?

If your classroom size is 24 or larger then you may well have someone with 4 heads or 0 (4 tails).
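If you’d rather check that claim than take it on trust, a quick simulation (a sketch; the class size and trial count are arbitrary) shows that roughly 19 classrooms in 20 will contain at least one child with four of a kind:

```python
# Simulate many classrooms of 24 children, each tossing a coin 4 times,
# and count how often at least one child gets 4 heads or 4 tails.
import random

def class_has_four_of_a_kind(n_children: int = 24, n_tosses: int = 4) -> bool:
    for _ in range(n_children):
        tosses = [random.random() < 0.5 for _ in range(n_tosses)]
        if all(tosses) or not any(tosses):
            return True
    return False

n_trials = 10_000
hits = sum(class_has_four_of_a_kind() for _ in range(n_trials))
print(hits / n_trials)  # ~0.96, ie most classrooms of 24 will see one
```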

Ask the children whether they think this is amazing or just accidental.

Then, get the children to continue tossing their coins until they get either 4 heads or 4 tails in a row.  Perhaps make it a competition to see how fast they can get there.  They need to continue to write down each head and tail.

You may then get them to add up all their heads and all their tails.  Now work out the proportions (get them to divide the number of heads by the total number of tosses).  If you like, go one step further and collate all the data for the class.  The proportion of heads should be approaching 0.5.

Discuss the idea that getting 4 heads or 4 tails in a row was simply due to chance (randomness).

For more advanced classes, you may talk about statistics in medicine and in the media.  You may want to use some specific examples of one-off trials that appeared to show a difference which, when repeated later, was found to be accidental.

_____________________________________________________________

*For the pedantic: in a controlled trial the number of participants is chosen by pre-specifying a (hopefully) meaningful difference in the outcome between the intervention and control arms, together with probabilities of Type I (alpha) and Type II (beta) errors.  The alpha is often 0.05.  In this specific situation, if P<0.05 then it may be reasonable to talk about a significant difference, because the alpha was pre-specified and used to calculate the number of participants in the study.
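As a sketch of that pre-specification step (using statsmodels; the 25% vs 10% “meaningful difference” is just an invented example): pick the difference in proportions you care about, alpha, and power (1 − beta), and the required group size falls out.

```python
# Sketch of a pre-specified sample-size calculation (alpha = 0.05,
# power = 0.8).  The 25% vs 10% difference is an invented example.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.25, 0.10)  # Cohen's h for the two risks
n_per_arm = NormalIndPower().solve_power(effect_size=effect,
                                         alpha=0.05, power=0.8,
                                         alternative="two-sided")
print(round(n_per_arm))  # ~48 patients needed in each arm
```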