
Performance Based Research Fund: a net zero sum game

Throughout the land more than 7000 academics are awake night after night, suffering. They are scrambling to gather evidence of just how well they have performed over the last six years. A conscientious bunch, they perform this task with their usual attention to detail and desire to impress (I didn’t say they were modest!). Ostensibly, this exercise is so that their institutions can get a greater piece of the Government research-fund pie – the Performance Based Research Fund (PBRF). According to the Tertiary Education Commission, the PBRF is “a performance-based funding system to encourage excellent research in New Zealand’s degree-granting organisations.” It may well do that, but, I contend, only by deception.

In what follows I am concerned only with the Quality Evaluation part of the PBRF – that’s the bit related to the quality of the Evidence Portfolio (EP) provided by each academic. The data are all taken from the reports published after each funding round (available on the TEC website).

In 2012 the total funding allocated on the basis of EPs was $157 million, with nearly 97% of it allocated to the country’s eight universities. This total amount is set by Government fiat and, here is the important point, in no way depends on the quality of the Evidence Portfolios provided by those 7000+ academic staff. In other words, from a funding perspective, the PBRF Quality Evaluation round is a net zero-sum game.

The PBRF Quality Evaluation is really a competition between degree-granting institutions. I find this strange given that the Government has been trying to encourage collaboration between institutions through the funding of National Science Challenges; nevertheless, a competition it is.

In the table we see the results of the Quality Evaluation for the previous three funding rounds (2003, 2006 and 2012). Not surprisingly, the larger universities get a larger slice of the pie. The pie is divvied up according to a formula: a weighting for each academic according to how their research has been evaluated (basically A, B or C), multiplied by a weighting for their research area (e.g. law and arts are weighted lower than most sciences, while engineering and medicine are weighted the highest), multiplied by the full-time-equivalent (FTE) status of the academic. In theory, therefore, an institution may influence its proportion of funding by (1) employing more academics – but this costs more money, of course, so may be self-defeating; (2) increasing the proportion of academics in the higher-weighted disciplines (some may argue this is happening); or (3) increasing the number of staff with the higher grades. I will leave it to others to comment on (1) or (2) if there is evidence for them. However, (3) is the apparent focus of all the activity I hear about at my institution. There are multiple emails and calls to attend seminars, update publication lists, and begin preparing an Evidence Portfolio. Indeed, in my university we had a “dry run” a couple of years ago, and it is all happening again.
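To make the structure of that formula concrete, here is a minimal sketch of the allocation. The weights, names and numbers are illustrative placeholders I have invented, not the official TEC values; the point is only that the pot is fixed, so all that matters is each institution’s weighted score relative to everyone else’s.

```python
# A minimal sketch of how the Quality Evaluation pie is divvied up.
# All weights and institutions below are illustrative placeholders,
# not official TEC values. The key point: the fund is fixed, so only
# relative weighted scores matter -- a net zero-sum game.

QUALITY_WEIGHT = {"A": 5, "B": 3, "C": 1}                       # hypothetical grade weights
SUBJECT_WEIGHT = {"law": 1.0, "science": 2.0, "medicine": 2.5}  # hypothetical area weights

def weighted_score(staff):
    """staff: list of (grade, subject, fte) tuples for one institution."""
    return sum(QUALITY_WEIGHT[g] * SUBJECT_WEIGHT[s] * fte
               for g, s, fte in staff)

def allocate(fund, institutions):
    """Split a fixed fund pro rata by weighted score: a zero-sum split."""
    total = sum(weighted_score(staff) for staff in institutions.values())
    return {name: fund * weighted_score(staff) / total
            for name, staff in institutions.items()}

institutions = {
    "Uni X": [("A", "medicine", 1.0), ("B", "science", 1.0)],
    "Uni Y": [("B", "law", 1.0), ("C", "science", 0.5)],
}
shares = allocate(157_000_000, institutions)
print(shares)
print(sum(shares.values()))  # always the full $157M, however well anyone performs
```

Notice that if every institution improved its grades in proportion, every share would stay exactly the same.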

Now I come to the bit where I probably need an economist (it is my hope that this post may prompt one to take up the matter). Because it is a net zero-sum game, what matters is a cost-benefit analysis for individual institutions. That is, what does it cost an institution to gather EPs compared with what it gains from the PBRF Quality Evaluation fund? If we look at the 2012–2006 column we see the change in percentage share for each institution. The University of Auckland, for example, increased its share of the pie by 1.3 percentage points. This equates to a little under $2M a year. As the evaluations happen only every six years, we may say that Auckland gained nearly $12M. What was the cost? How many staff were involved, and for how long? As there are nearly 2000 staff submitting EPs from Auckland, another way of looking at this is that the net effect of the 2012 Quality Evaluation round was a gain of less than $6000 per academic staff member over six years, before the costs of preparing the EPs are subtracted. How much less is unknown.
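For what it’s worth, here is that back-of-envelope arithmetic spelled out, using the rounded figures above (all approximate):

```python
# Back-of-envelope version of the Auckland example, using the post's
# rounded figures (all approximate).
gain_per_year = 2_000_000   # "a little under $2M a year" from a 1.3-point share gain
years         = 6           # Quality Evaluation rounds are six years apart
staff         = 2000        # approx. number of Auckland staff submitting EPs

gain_total    = gain_per_year * years   # ~ $12M across the round
gain_per_head = gain_total / staff      # ~ $6,000 per academic, before costs
print(f"~${gain_total:,} total; ~${gain_per_head:,.0f} per academic over {years} years")
```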

The University of Otago had a loss in 2012 compared with 2006. Was this because it performed worse? Not at all: Otago increased both the number and the proportion of its staff in the “A” and “B” categories, which suggests improved, not worsened, performance. I think Otago’s loss was simply a consequence of the net zero-sum game.

Much more could be said, and more questions asked, about the Quality Evaluation – such as, what is the cost of the more than 300 assessors evaluating the more than 7000 EPs? Or perhaps I could go on about the terrible metrics we are being encouraged to use as evidence of the importance of the papers we’ve published. But I will spare you that rant, and leave my fellow academics with this thought: you have been deceived. PBRF Evidence Portfolios are an inefficient and costly exercise which will make little to no difference to your institution.

Publication police and how to choose where to publish

“I confess, I published behind a paywall. I’m sorry, sir, I didn’t want to, but but but I’m almost out of funds and and …” <suspect’s voice fades>
Publication Police files, Nov. 3, 2024

Will peer pressure eventually lead to discrimination against those who publish behind a paywall?  Is it now a moral imperative that we publish everything open access?  If so, is that not simply morality by majority (a dangerous proposition at the best of times), or worse, morality by the most vocal?

I’m often asked “Where should I publish this?” and I must admit that “In an open access journal” is not my first response. This is simply because there is a higher standard than mere open access (as great as that is). Where to publish is, first and foremost, the answer to the question “Where will it get the attention it deserves?” Of course, this is where ego can rear its ugly head and, worse, I have colleagues who think this means the journal with the highest impact factor, but those distractions aside, it is still the most important question.

Most of our science is simply an incremental step building on what has gone before. Most of the time it is of interest to a relatively small group of fellow researchers, or to those whose profession is affected by the research. Furthermore, it will probably be of interest only for a short period before someone else builds upon it. The “attention a paper deserves” is the attention given to it by the people for whom it has most meaning. For this reason, it should be published in a manner which makes it easy for those people to find and read. This will probably mean one of the professional society journals and/or one of the most-read journals in the field. In the fields of Critical Care and Nephrology, where I’ve published most recently, this will probably mean a European or American journal with high readership in those jurisdictions, because that is where most of the research is being done. Of course, this does not mean my manuscript will necessarily be accepted by those journals, but if I deem it has something important to say, then that is where I should send it first.

Comparatively few of those journals are open access only, but all offer an open access option. This tends to come with a publishing fee in the range of US$1500 to US$3000. My budget does not stretch to paying such a fee for every publication, so I am forced to be pragmatic. If my manuscript is accepted by one of those higher-profile journals, I have to pick and choose. The more important I think the findings, the more likely I am to take the open access option. Likewise, if I think the message has immediate application for clinicians (i.e. not just the narrow group of researchers in my field), I am more likely to choose open access.

There is, of course, the option to publish in the more general online journals (PLOS ONE, PeerJ, F1000 etc.), and I have done so. However, my impression at this stage is that these do not rapidly reach the inboxes of most of the very busy researchers and clinicians in the fields I publish in. A few (like myself) may have set up automatic search strategies or use social media to follow journals in their field, and, of course, people conducting PubMed or similar searches may come across those articles. However, their lack of specialisation, and their reliance on readers making an effort over and above reading the specialised professional journals they have always read, limits their usefulness to me for getting the message out. Of course, I could choose to be an “early adopter” or “pioneer” and publish in a low-cost open access journal (if my fellow authors would let me) in the hope that this would change the publishing culture of paywalls and high publishing fees elsewhere. However, it would come at the cost of less exposure of my research to those who are most interested and active in the field. For some of what I publish, I must balance my obligation to advance the field, by maximising exposure amongst those for whom it is likely to be of immediate interest, against the more philosophical desire for open access to all and sundry, from now to eternity.
