Author Archives: John Pickering

Performance Based Research Fund: a net zero sum game

Throughout the land, more than 7000 academics are awake night after night, suffering. They are scrambling to gather evidence of just how well they have performed over the last six years. A conscientious bunch, they perform this task with their usual attention to detail and desire to impress (I didn't say they were modest!). Ostensibly, this exercise is so that their institutions can get a greater piece of the Government research fund pie – the Performance Based Research Fund (PBRF). According to the Tertiary Education Commission, PBRF is "a performance-based funding system to encourage excellent research in New Zealand's degree-granting organisations." It may well do that, but, I contend, only by deception.

In what follows I am only concerned with the Quality Evaluation part of PBRF – that’s the bit that is related to the quality of the Evidence Portfolio (EP) provided by each academic. The data is all taken from the reports published after each funding round (available on the TEC website).

In 2012 the total funding allocated on the basis of EPs was $157 million, with nearly 97% of it allocated to the country's eight universities. This total amount is set by Government fiat and – here is the important point – in no way depends on the quality of the Evidence Portfolios provided by those 7000+ academic staff. In other words, from a funding perspective, the PBRF Quality Evaluation round is a net zero-sum game.

PBRF Quality Evaluation is really a competition between degree-granting institutions. I find this strange given that the Government has been trying to encourage collaboration between institutions through funding of the National Science Challenges; nevertheless, a competition it is.

In the table we see the results of the Quality Evaluation for the previous three funding rounds (2003, 2006 and 2012). Not surprisingly, the larger universities get a larger slice of the pie. The pie is divvied up according to a formula based on a weighting for each academic according to how their research has been evaluated (basically A, B or C), multiplied by a weighting according to their research area (e.g. law and arts are weighted lower than most sciences, while engineering and medicine are weighted the highest), multiplied by the full-time-equivalent status of the academic. In theory, therefore, an institution may influence its proportion of funding by (1) employing more academics – but this costs more money, of course, so may be self-defeating; (2) increasing the proportion of academics in the higher-weighted disciplines (some may argue this is happening); and (3) increasing the number of staff with higher grades. I will leave it to others to comment on (1) or (2) if there is evidence for them. However, (3) is the apparent focus of all the activity I hear about at my institution. There are multiple emails and calls to attend seminars, update publication lists, and begin preparing an Evidence Portfolio. Indeed, in my university we had a "dry run" a couple of years ago, and it is all happening again.
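For those who like to see the mechanics, here is a minimal sketch of that formula in Python. The grade and subject weights below are placeholders I have invented for illustration, not the official PBRF values; the point is only the structure – a fixed pool divided in proportion to weighted scores, so nothing an institution does changes the size of the pool, only the ratios.

```python
# Illustrative sketch of the funding split described above.
# All weights are hypothetical placeholders, NOT the official PBRF values.

QUALITY_WEIGHT = {"A": 5, "B": 3, "C": 1}                       # hypothetical
SUBJECT_WEIGHT = {"law": 1.0, "science": 2.0, "medicine": 2.5}  # hypothetical

def staff_score(grade, subject, fte):
    """Weighted score for one academic: grade weight x subject weight x FTE."""
    return QUALITY_WEIGHT[grade] * SUBJECT_WEIGHT[subject] * fte

def funding_shares(institutions, pool):
    """Divide a fixed pool in proportion to each institution's total score.
    Because the pool is fixed, one institution's gain is another's loss."""
    totals = {name: sum(staff_score(*s) for s in staff)
              for name, staff in institutions.items()}
    grand_total = sum(totals.values())
    return {name: pool * t / grand_total for name, t in totals.items()}

# Two toy institutions; each academic is (grade, subject, FTE)
institutions = {
    "Uni X": [("A", "medicine", 1.0), ("B", "science", 1.0)],
    "Uni Y": [("B", "law", 1.0), ("C", "science", 0.5)],
}
print(funding_shares(institutions, pool=157e6))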

Now I come to the bit where I probably need an economist (it is my hope that this post may prompt one to take up the matter). Because it is a net zero-sum game, what matters is a cost-benefit analysis for individual institutions. That is, what does it cost an institution to gather EPs compared with the financial gain from the PBRF Quality Evaluation fund? If we look at the 2012–2006 column we see the change in percentage for each institution. The University of Auckland, for example, increased its share of the pie by 1.3 percentage points. This equates to a little under $2M a year. As the evaluations happen only every 6 years, we may say that Auckland gained nearly $12M. What was the cost? How many staff, for how long, were involved? As there are nearly 2000 staff submitting EPs from Auckland, another way of looking at this is that the net effect of the 2012 Quality Evaluation round was a gain of less than $6000 per academic staff member over 6 years. How much less is unknown.
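To make the back-of-the-envelope arithmetic explicit, here it is in a few lines of Python, using the rough figures quoted above (including the roughly 97% of the pool that goes to the universities):

```python
# Back-of-the-envelope gain for Auckland from the 2012 round,
# using the approximate figures quoted in the text.
pool_per_year = 157e6      # total Quality Evaluation pool ($ per year)
university_share = 0.97    # fraction of the pool going to the universities
share_gain = 0.013         # Auckland's gain: 1.3 percentage points
years_between_rounds = 6
staff_submitting = 2000    # approximate number of Auckland EPs

gain_per_year = pool_per_year * university_share * share_gain  # ~$1.98M
gain_total = gain_per_year * years_between_rounds              # ~$11.9M
gain_per_staff = gain_total / staff_submitting                 # ~$5900

print(f"${gain_per_year / 1e6:.2f}M per year, ${gain_total / 1e6:.1f}M over "
      f"{years_between_rounds} years, ${gain_per_staff:.0f} per academic")
```

Whatever the true cost of preparing an EP turns out to be, it does not have to be large to swallow roughly $5900 per academic over six years.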

The University of Otago had a loss in 2012 compared with 2006. Was this because it performed worse? Not at all. Indeed, Otago increased both the number and the proportion of its staff in the "A" and "B" categories, which suggests improved, not worsened, performance. I think that Otago's loss was simply due to the net zero-sum game.

Much more could be said, and questions asked, about the Quality Evaluation – such as, what is the cost of the more than 300 assessors evaluating the more than 7000 EPs? Or perhaps I could go on about the terrible metrics we are being encouraged to use as evidence of the importance of the papers we've published. But I will spare you that rant and leave my fellow academics with this thought: you have been deceived. PBRF Evidence Portfolios are an inefficient and costly exercise which will make little to no difference to your institution.


Flourish with change

Newshub decided to do an "AI" piece today. Expect much more of this kind of "filler" piece. They go thus: "X says AI will take all our jobs; Y says AI will save us." These pieces are about as well informed and informing as a lump of 4×2 – good for propping up a slow news day, but not much else. The claim (from Y) that AI will be "more compassionate and moral than NZers" is utter nonsense. AI is just a name we give to the software of machines – AIs don't have compassion or morals. If they appear to, that is simply because they are reflecting the data we feed them… human data, with all its flaws.
 
Yes, there is change coming because of this technology. In the past we have been particularly poor at predicting what the future will look like, and I think this time the possibilities are far too numerous and complex for us to predict what will be. Statements like "30-50% of people will lose their jobs" (said X) are simply guesses, because there is no precedent on which to base the numbers. All the reports talk about truck drivers and accountants losing jobs and not a lot else. They are shallow – and probably necessarily so – because we just can't anticipate what creative people may come up with for this technology. Having said that, I must admit I am just not sure what to advise my children (as if they'd take it). Should they all learn to code? Maybe not, as most interaction with machines may not be via coding languages. Should they become artisans for niche markets where the technology doesn't penetrate? Maybe for some, but not for all. I think that perhaps the best we can do is to encourage what enhances creativity and resilience to, or even better a flourishing with, change. It is my hope that "flourish with change" will become the mantra not just for the next generation, but for all current generations, for how we determine to approach the coming changes is likely as important to the well-being of our society as the changes themselves.

This is what happens when you talk to your mother about artificial intelligence

Artificial Intelligence 

Artificial Intelligence

So we don’t need to think.

Everything is done for us

In just an eyelid blink.

 

Artificial Intelligence

So we don’t need to think.

Just take the Robot, plug it in

And go and have a drink.

 

When you come back your work is done;

You haven’t even thunk.

The Robot’s done the washing too;

Oh dear, I think it’s shrunk.

 

Perhaps I shouldn’t have bought this one,

I didn’t even think,

I got it second hand you see

From prisoners in the clink.

 

And when they programmed it you see

I think that they were drunk

‘Cos now it’s full of nasty words;

I really should have thunk.

 

So artificial Intelligence

Depends upon the thought

That someone programmes into it,

And that may come to nought.

 

And so beware when buying one,

You may be feeling sunk,

It may be right for it to think,

But you also should have thunk!

(c) K.A. Pickering, October 2017


Artificial Intelligence (c) K.A. Pickering, October 2017

Christchurch, meet the future; Zach, meet Christchurch

It would have struggled to be more low-key. There was no champagne. No flashy graphics. No celebrity speakers. But it was probably one of the most radical and important announcements made in Christchurch, and in the technology space, in decades. You see, Zach is coming to town and we have all been invited.

Zach is an A.I. Zach belongs to the Terrible Foundation – indeed, Zach runs the foundation and their business. Zach calls itself the Chief Executive.

Terrible are bringing Zach, and one of the most powerful super-computers on the planet, to Christchurch. True to their ethos of challenging inequalities by helping great ideas to thrive, they are not seeking to make money out of it – though they potentially could make many truckloads. Rather, they want the people of Christchurch to interact with Zach, learn what an AI is, and develop uses for it. The key figure behind all this told me that the decision was for the "future generation".

What astounded me about Zach is that you don't need to code to work with it. You can message, email, or talk to Zach in English (or, from the sounds of it, several other languages so far) and Zach will respond the same way. If you don't like the response, you can train Zach by telling it what you like or what you'd like to change. For a few weeks a Christchurch GP has been working with Zach, and already it is able to listen in on a medical consultation and write up a concise summary as well as the doctor can, in the format the doctor wants, enabling the doctor to spend more time with the patient and less on paperwork.

You may have noted that I’ve not mentioned any people by name… they have their own story to tell and it is not for me to try and tell it for them.  What I am excited about is how Zach may help our group to improve care processes for people who come to the emergency department.  Hopefully, we will have our own Zach story to tell in the not too distant future.


Update: Christchurch Press article here.

The wrong impact

"We just got a paper in an Impact Factor 10 journal … and hope to go higher soon." That's a statement made to me last week. It is wrong on so many levels, but does it matter? Nobel Prize winners think so. This video from nobelprize.org appeared in my Twitter feed on Friday. Before you watch it, consider this: academics in NZ are being encouraged, in promotion applications and in preparing for the next round of the NZ Performance Based Research Fund (PBRF), which will allocate millions of dollars to academic institutions, to include a metric of the journal's ranking. The Impact Factor is the most common such metric.

 

ps. I would not allow a student working with me to present a raw mean of a highly skewed distribution, because it so very poorly represents the distribution. Yet this is exactly what the Impact Factor does. (For those who don't know: the most common impact factor for a journal in a given year is simply the number of citations received by articles published in the preceding two years, divided by the total number of articles published in those years. The citation distribution is usually skewed because the vast majority of articles receive very few citations in such a short time, while a few receive a lot.) There are numerous other problems with it, not least that it can't be used to compare "impact" between different disciplines.
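To see why a raw mean misleads here, consider a toy journal with an invented but typically skewed citation distribution:

```python
import statistics

# Invented citation counts for 20 articles a journal published in the
# preceding two years. Typically skewed: most articles are cited little,
# a couple are cited a lot.
citations = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 5, 6, 8, 40, 120]

impact_factor = sum(citations) / len(citations)  # the raw mean
median_citations = statistics.median(citations)

print(f"'Impact Factor' (mean): {impact_factor:.1f}")  # 10.2
print(f"Median citations:       {median_citations}")   # 2.0
```

An "Impact Factor 10 journal", yet the typical article in it was cited just twice.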

A vision of kiwi kidneys

Sick of writing boring text reports? Take a leaf out of Christchurch nephrologist Dr Suetonia Palmer's (@SuetoniaPalmer) book and make a visual abstract report. Here are two she has created recently, based on data collected about organ donation and end-stage renal failure by ANZDATA (@ANZDATARegistry). Enjoy.

[Two visual abstracts by Dr Suetonia Palmer, based on ANZDATA organ donation and end-stage renal failure data]

ps. The featured image is of the Kidney Brothers.  Check out the great educational resources at The OrganWiseGuys.

An even quicker way to rule out heart attacks

The majority of New Zealand emergency departments look for heart muscle damage by taking a sample of blood and measuring a particular molecule called high-sensitivity troponin T (hsTnT). We have now confirmed that, rather than two measurements over several hours, just one measurement on arrival in the ED could be used to rule out heart attacks in about 30% of patients.

What did we do?

We think this is a big deal. We've timed this post to coincide with our work appearing on the Annals of Internal Medicine website – here. What we did was search the literature for research groups that had measured hsTnT in the right group of people – namely, people presenting to an emergency room whom the attending physician thinks may be having a heart attack. We also required that the diagnosis of a heart attack, or not, was made not by just one physician but by at least two independently. In this way we made sure we were accessing the best quality data.

Next, I approached the authors of the studies and asked them to share some data with us – namely, the number of people who had detectable and undetectable hsTnT (every blood test has a minimum level below which it is said to be "undetectable"; in hsTnT's case that is just 5 billionths of a gram per litre, or 5 ng/L). We also asked them to check whether in these patients the electrical activity of the heart (measured by an electrocardiogram, or "ECG") looked like there may or may not be damage to the heart (a helpful test, but not used on its own to diagnose this kind of heart attack). Finally, we asked the authors to identify which patients truly did and did not have a heart attack.
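The rule-out logic we evaluated amounts to a simple conjunction of those two pieces of information. Here is a minimal sketch (illustrative only, not clinical software):

```python
TROPONIN_LOD = 5.0  # ng/L: below this, hsTnT is reported as undetectable

def low_risk(hstnt_ng_per_l, ecg_ischaemic):
    """True if the patient meets the rule-out criteria on arrival:
    undetectable hsTnT AND an ECG with no sign of heart damage.
    Illustrative sketch only -- not clinical software."""
    return hstnt_ng_per_l < TROPONIN_LOD and not ecg_ischaemic

print(low_risk(3.0, False))   # True:  candidate for early rule-out
print(low_risk(3.0, True))    # False: ECG suggests damage
print(low_risk(12.0, False))  # False: troponin detectable
```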

What did we find?

In the end, research groups in Europe, the UK, Australia, NZ, and the US participated, with a total of 11 studies and more than 9000 patients. I did some fancy statistics to show that overall about 30% of patients had undetectable hsTnT on the first blood test and a negative ECG. Of all those identifiable as potentially "excludable" or "low-risk", only about 1 in 200 had a heart attack diagnosed (we'd like it to be zero, but this just isn't possible, especially given the diagnosis is not exact).
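In round numbers (the precise pooled estimates are in the paper), the headline arithmetic looks like this:

```python
# Round-number reconstruction of the headline result; the precise
# pooled estimates are in the paper.
total_patients = 9000
low_risk = round(0.30 * total_patients)  # undetectable hsTnT + negative ECG
missed = round(low_risk / 200)           # about 1 in 200 of the low-risk group

print(f"{low_risk} of {total_patients} ({low_risk / total_patients:.0%}) "
      f"potentially ruled out on arrival")
print(f"about {missed} of whom ({missed / low_risk:.2%}) were later "
      f"diagnosed with a heart attack")
```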

[Visual abstract of the Annals of Internal Medicine paper]

Pickering, J. W.*, Than, M. P.*, Cullen, L. A., Aldous, S., ter Avest, E., Body, R., et al. (2017). Rapid Rule-out of Myocardial Infarction With a High-Sensitivity Cardiac Troponin T Measurement Below the Limit of Detection: A Collaborative Meta-analysis. Annals of Internal Medicine, 166(10). http://doi.org/10.7326/M16-2562 *joint first authors.

What did we conclude?

There is huge potential for ruling out a heart attack with just one blood test. In New Zealand this could mean many thousands of people a year being reassured even more swiftly that they are not having a heart attack. By excluding the possibility of a heart attack early, physicians can put more effort into looking for other causes of chest pain, or simply send the patient happily home. While not every hospital performed equally well, overall the results were good. By the commonly accepted standards, it is safe. However, we caution that any hospital deciding to implement this "single blood measurement" strategy should conduct local audits to double-check its safety and efficacy.


Acknowledgment: This was a massive undertaking that required the collaboration of dozens of people from all around the world – their patience and willingness to participate is much appreciated. My clinical colleague and co-first author, Dr Martin Than provided a lot of the energy as well as intelligence for this project. As always, I am deeply appreciative of my sponsors: the Emergency Care Foundation, Canterbury Medical Research Foundation, Canterbury District Health Board, and University of Otago Christchurch. There will be readers who have contributed financially to the first two (charities) – I thank you – your generosity made this possible, and there will be readers who have volunteered for clinical studies – you are my heroes.
