
More on the PBRF's new clothes

A few weeks ago I outed the multi-million-dollar Quality Evaluation component of the Performance Based Research Fund (PBRF) as a futile exercise, because there is no net gain in research dollars for the NZ academic community.  Having revealed the Emperor's new clothes, I awaited the call from the Minister in charge to tell me they'd cancelled the round out of futility.  When that didn't come, I pinned my hope on a revolt by the University Vice-Chancellors. Alas, the VCs aren't revolting.  This week, my goal is mass resignations from the 30 or so committees charged with assessing the evidence portfolios of individual academics, and for individual academics to make last-minute changes to their portfolios so as to maintain academic integrity.

I love academic metrics – these ways and means of assessing the relative worth of an individual's contribution to academia, or the impact of a piece of scholarly work, are fun.  Some are simple, merely counting the citations to a particular journal article or book chapter; others are more complex, such as the various forms of the h-index. It is fun to watch the number of citations of an article gradually creep up and to think "someone thinks what I wrote is worth taking notice of".  However, these metrics are largely nonsense and should never be used to compare academics.  Yet, for PBRF and promotions we are encouraged to talk of citations and other such metrics.  Maybe, and only maybe, that's OK if we are comparing how well we are performing this year against a previous year, but it is not OK if we are comparing one academic against another.  I've recently published in both emergency medicine journals and cardiology journals.  The emergency medicine field is a small fraction of the size of cardiology and, consequently, there are fewer journals and fewer citations.  It would be nonsense to compare the citation rate of an emergency medicine academic with that of a cardiology academic.
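For anyone who hasn't met the h-index, here is a minimal sketch of how it is usually computed: an academic has index h if h of their papers have at least h citations each. The citation counts below are invented purely for illustration.

```python
def h_index(citations):
    """Compute the h-index: the largest h such that at least h papers
    have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Made-up citation counts for one academic's papers, purely for illustration
print(h_index([42, 37, 10, 8, 5, 3, 1, 0]))  # -> 5
```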

If the metrics around individual scholars are nonsense, those purporting to assess the relative importance (“rank”) of an academic journal are total $%^!!!!.  The most common is the Impact Factor, but there are others like the 5-year H-index for a journal.  To promote them, or use them, is to chip away at academic integrity.  Much has been written elsewhere about impact factors.  They are simply an average of a skewed distribution.  I do not allow students to report data in this way.  Several Nobel prize winners have spoken against them.  Yet, we are encouraged to let the assessing committees know how journals rank.

Even if the citation metrics and impact factors were not dodgy, there would still be a huge problem facing the assessing committees: they are called on to compare apples with oranges.  Not all metrics are created equal.  ResearchGate, Google Scholar, Scopus and Web of Science all count citations and report h-indices.  No two are the same.  A cursory glance at some of my own papers shows more than 20% variation in counts between them.  I even have a paper with citation counts of 37, 42, 0 and 0.  Some journals are included and some are not, depending on how each company has set up its algorithms. Book chapters are not included by some, but are by others. There are also multiple sites for ranking journals, each using different metrics.  Expecting assessing committees to work with multiple metrics which all mean something different is like expecting engineers to build a rocket without letting them use a standard metre rule.
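To make the apples-and-oranges point concrete, here is a small sketch of how the same five papers can yield noticeably different totals and h-indices depending on which service you ask; every number below is invented.

```python
# Invented citation counts for the same five papers, as they might be
# reported by four different services; none of these numbers are real.
counts_by_service = {
    "Web of Science": [30, 22, 15, 9, 4],
    "Scopus":         [34, 25, 18, 11, 5],
    "Google Scholar": [41, 30, 22, 14, 7],
    "ResearchGate":   [28, 20, 12, 8, 0],  # eg a book chapter it doesn't index
}

def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(ranked, 1) if c >= rank), default=0)

for service, counts in counts_by_service.items():
    print(f"{service:14s} total citations = {sum(counts):3d}, h-index = {h_index(counts)}")
```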

To sum up, the PBRF Evidence Portfolio assessment is a waste of resources and encourages the use of integrity-busting metrics that should not be used to rank individual academic impact.


Performance Based Research Fund: a net zero sum game

Throughout the land more than 7000 academics are awake night after night and suffering.  They are scrambling to gather evidence of just how great their performance has been over the last six years. A conscientious bunch, they perform this task with their usual attention to detail and desire to impress (I didn't say they were modest!).  Ostensibly, this exercise is so that their institutions can get a greater piece of the Government research fund pie – the Performance Based Research Fund (PBRF).  According to the Tertiary Education Commission, PBRF is "a performance-based funding system to encourage excellent research in New Zealand's degree-granting organisations."  It may well do that, but, I contend, only by deception.

In what follows I am only concerned with the Quality Evaluation part of PBRF – that’s the bit that is related to the quality of the Evidence Portfolio (EP) provided by each academic. The data is all taken from the reports published after each funding round (available on the TEC website).

In 2012 the total funding allocated on the basis of EPs was $157 million, with nearly 97% of it allocated to the country's 8 universities.  This total amount is set by Government fiat and (here is the important point) in no way depends on the quality of the Evidence Portfolios provided by those 7000+ academic staff.   In other words, from a funding perspective, the PBRF Quality Evaluation round is a net zero sum game.

PBRF Quality Evaluation is really a competition between degree-granting institutions.  I find this strange given the Government has been trying to encourage collaboration between institutions through funding of the National Science Challenges; nevertheless, a competition it is.

In the table we see the results of the Quality Evaluation for the previous three funding rounds (2003, 2006 and 2012).  Not surprisingly, the larger universities get a larger slice of the pie.  The pie is divvied up according to a formula based on a weighting for each academic according to how their research has been evaluated (basically A, B or C), multiplied by a weighting for their research area (eg law and arts are weighted lower than most sciences, while engineering and medicine are weighted the highest), multiplied by the full-time-equivalent status of the academic.   In theory, therefore, an institution may influence its proportion of funding by (1) employing more academics – but this costs more money, of course, so may be self-defeating, (2) increasing the proportion of academics in the higher-weighted disciplines (some may argue this is happening), and (3) increasing the number of staff with the higher grades.  I will leave it to others to comment on (1) or (2) if there is evidence for them.  However, (3) is the apparent focus of all the activity I hear about at my institution.   There are multiple emails and calls to attend seminars, update publication lists, and begin preparing an Evidence Portfolio.  Indeed, in my university we had a "dry run" a couple of years ago, and it is all happening again.
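As a rough sketch of how a formula of this shape plays out, here is a toy calculation. The grade weights, subject weights and staff lists are placeholders I've made up, not the actual TEC values; only the structure of the calculation is the point.

```python
# A toy version of the allocation described above. All weightings and
# staff lists are invented placeholders, not TEC values.
GRADE_WEIGHT = {"A": 5, "B": 3, "C": 1}
SUBJECT_WEIGHT = {"arts/law": 1.0, "science": 2.0, "engineering/medicine": 2.5}

def staff_score(grade, subject, fte):
    """Weighted score for one academic: grade weight x subject weight x FTE."""
    return GRADE_WEIGHT[grade] * SUBJECT_WEIGHT[subject] * fte

institutions = {
    "Uni X": [("A", "engineering/medicine", 1.0), ("B", "science", 1.0), ("C", "arts/law", 0.5)],
    "Uni Y": [("A", "science", 0.8), ("B", "science", 1.0), ("B", "arts/law", 1.0)],
}
scores = {name: sum(staff_score(*s) for s in staff) for name, staff in institutions.items()}

pool = 157_000_000  # the pie is fixed by Government fiat, whatever the scores are
total = sum(scores.values())
for name, score in scores.items():
    print(f"{name}: {score:4.1f} points -> ${pool * score / total:,.0f}")
```

Note that because the pool is fixed, an institution's dollars depend only on its score relative to everyone else's: if every institution improves equally, nobody gains a cent.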

Now, I come to the bit where I probably need an economist (it is my hope that this post may prompt one to take up the matter).  Because it is a net zero sum game, what matters is a cost-benefit analysis for individual institutions.  That is, what does it cost an institution to gather EPs compared with the financial gain from the PBRF Quality Evaluation fund?  If we look at the 2012-2006 column we see the change in percentage for each institution.  The University of Auckland, for example, increased its share of the pie by 1.3 percentage points.  This equates to a little under $2M a year.  As the evaluations happen only every 6 years, we may say that Auckland gained nearly $12M.  What was the cost? How many staff were involved, and for how long?   As there are nearly 2000 staff submitting EPs from Auckland, another way of looking at this is that the net effect of the 2012 Quality Evaluation round was a gain of less than $6000 per academic staff member over 6 years.  How much less, once the cost of preparing and assessing those EPs is subtracted, is unknown.
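For what it's worth, here is that back-of-envelope arithmetic laid out explicitly. The inputs are the approximate figures quoted above, and I've assumed the 1.3 points apply to the roughly 97% of the pool that flows to the universities, which reproduces the "a little under $2M" figure.

```python
# Back-of-envelope arithmetic for the Auckland example above, using the
# approximate figures quoted in this post (not official data).
pool = 157_000_000            # total Quality Evaluation pool, 2012
university_share = 0.97       # roughly 97% of the pool goes to the 8 universities
share_gain = 0.013            # Auckland's share rose by about 1.3 percentage points
years_between_rounds = 6
staff_submitting_eps = 2000   # roughly the number of Auckland EPs

gain_per_year = pool * university_share * share_gain
gain_per_round = gain_per_year * years_between_rounds
gain_per_academic = gain_per_round / staff_submitting_eps

print(f"Gain per year:              ${gain_per_year:,.0f}")      # a little under $2M
print(f"Gain over the 6-year cycle: ${gain_per_round:,.0f}")     # nearly $12M
print(f"Gross gain per academic:    ${gain_per_academic:,.0f}")  # under $6,000, before costs
```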

The University of Otago had a loss in 2012 compared with 2006.  Was this because it performed worse? Not at all: Otago increased both the number and the proportion of its staff in the "A" and "B" categories, which suggests improved, not worsened, performance. I think Otago's loss was simply due to the net zero sum game.

Much more could be said, and questions asked, about the Quality Evaluation, such as: what is the cost of the over 300 assessors of the more than 7000 EPs?  Or perhaps I could go on about the terrible metrics we are being encouraged to use as evidence of the importance of the papers we've published.  But I will spare you that rant and leave my fellow academics with this thought: you have been deceived; PBRF Evidence Portfolios are an inefficient and costly exercise which will make little to no difference to your institution.

The wrong impact

“We just got a paper in an Impact Factor 10 journal … and hope to go higher soon.”  That's a statement made to me last week.  It is wrong on so many levels, but does it matter?   Nobel Prize winners think so. This video from nobelprize.org appeared in my twitter feed on Friday.  Before you watch it, consider this: academics in NZ are being encouraged, both in promotion applications and in preparing for the next round of the NZ Performance Based Research Fund (PBRF), which will allocate millions of dollars to academic institutions, to include a metric of the ranking of the journal.  The Impact Factor is the most common such metric.


P.S. I would not allow a student working with me to present a raw mean of a highly skewed distribution, because it so very poorly represents the distribution.  However, this is exactly what the Impact Factor does (for those who don't know, the most common impact factor for a journal in a given year is simply the number of citations that year to articles published in the preceding two years, divided by the number of articles published in those two years.  The citation distribution is usually skewed because the vast majority of articles receive very few citations in such a short time, while a few receive a lot).  There are numerous other problems with it, not least that it can't be used to compare "impact" between different disciplines.
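A quick sketch of why a raw mean misleads here, using an entirely made-up two-year citation distribution for a hypothetical journal:

```python
import statistics

# An entirely made-up two-year citation distribution for a hypothetical
# journal that published 60 articles: most are barely cited, four are hits.
citations = [0] * 25 + [1] * 15 + [2] * 10 + [5] * 6 + [40, 55, 80, 120]

impact_factor_style_mean = sum(citations) / len(citations)  # what the IF reports
median_citations = statistics.median(citations)             # what a typical article gets

print(f"Mean (impact-factor style): {impact_factor_style_mean:.1f}")  # 6.0, dragged up by 4 papers
print(f"Median citations:           {median_citations}")              # 1.0
```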

Publish And Perish

I didn’t want to be in a position to write this post.  I’ve procrastinated and debated whether I should or not – mainly because I don’t want it to come across as sour grapes.  However, procrastination over…

2013 was a great year from the academic metrics point of view – many articles were written, twice I published articles which were written up in Nature Reviews, I got a PhD student across the line, I had more citations than ever before, and my h-index continued to increase.  My PBRF score came out a "B", which, given it was based on only 4 years' work, I was happy with, and to top it off, 3 months ago my university promoted me to Associate Professor.

You'd think that would be enough to keep me happy, but one crucial element was missing.  On 31 Dec 2013 I finally ran out of grant funds and lost my position.  Yes, I Published AND Perished.  Ta daaa…

There are a number of reasons for this situation: (1) I failed to beat other grant applicants to the prize – something I have to do regularly to survive in academia, (2) I failed to persuade the university to shift funds from one priority to another, and (3) I have failed to persuade (successive) governments to change the focus of their funding from projects to people. The reality of the situation in New Zealand is that within universities the position of investigator-driven, grant-funded (only) research scientist is under threat.  It is a "career path" which has all but disappeared.  Should this career path be cleared and made navigable once more?  That's something the policy makers in universities and government departments need to think about.

For me, the consequence is that the work I have been doing on Acute Kidney Injury must slow down dramatically.  I'm still looking to carry on some work part-time – at the very least I still have the data which patients have volunteered to provide, and which needs writing up and publishing. I see this as a moral responsibility.

Fortunately, this post is not all negative.  Two weeks ago I began a part-time position as a Senior Research Scientist with the Emergency Care Foundation.   This is a great opportunity to get involved with some world-class research emanating from the Emergency Department of Christchurch Hospital.  At a later stage I will post on the studies and trials we are running.

One last comment: Sir Peter Gluckman wrote recently of the "Impact Agenda" for publicly funded research.  He talked of what are sometimes seen as competing impacts – that of the universities, with their emphasis on publications, citations, and that awful, pathetic publication metric called the "impact factor", and that of policy makers wanting research to impact public policy, societal health, the environment and the economy.  I think there is a need for some give and take.  Academic institutions and academics need to take a breath and re-evaluate the public good of metric-driven research – some changes to the PBRF system could help this. Indeed, I wish many of my fellow academics would recognise they are in a service industry, where ultimately they research for the good of the public.  Policy makers and politicians, on the other hand, need to step back from treating scientists as if they were engineers who can be told to build something.  Science just does not work like engineering; it is not a tool to be used to produce a desired output, but rather a methodology by which great changes and great good can happen.

Spin Doctors go to work on PBRF

University of Taihape:  Doctor Doctor I’ve got a 1.7 on my PBRF

Doctor Spin: Never mind son, your Gumbootlogy results make you the healthiest tertiary education provider in the country.  Let’s talk about that, shall we?

Scoop.co.nz has all the spin from the universities and polytechnics this morning as they try to give the impression that they are the best.  At times like this I am ashamed to be an academic.  One of the worst sins is to cherry-pick data to make yourself look good.  We are used to this from certain sectors of society, but we should expect better from our educational institutions. Unfortunately, the culture of self-promotion above all else has taken hold in our hallowed halls.

For those who are unaware of what I am talking about: around 18 months ago all academics in the country had to put forward a "portfolio" to demonstrate just how research active they were.  This is the Performance Based Research Fund (PBRF) exercise held every 6 or so years. Groups of academics under government oversight then went about scoring each academic, in a process that has taken 15 months.  The result is that every academic has been given a grade of A, B or C, or classed as not research active.  The grades of academics in each institution are then fed into four different formulae, each incorporating additional information about different aspects of the institution (eg numbers of postgrad students).  Four numbers result.  This gives Doctor Spin plenty to play with. The numbers are also what are used to allocate hundreds of millions of dollars of research funds – herein lies the importance of PBRF to the institutions. A number is also provided for each of the self-selected academic units that the institutions supplied to the Tertiary Education Commission.  If they don't score well on any of the four overall grades (compared with other institutions of their own size), then they can pick a favourable number from one of their academic units and talk about that. More grist for the spin mill.

Academics are notoriously competitive – obviously a good trait when it drives them to produce better and better research. I certainly have some of that streak in me. However, it is not helpful when it results in attempts to pull the wool over the eyes of the public, as happened yesterday.  The PBRF is a complex system designed to find a way to allocate research funds and, hopefully, to improve research quality.  Academics will argue until the cows come home about whether it does this fairly. It certainly is a very expensive exercise. It certainly focusses institutions on the importance of research, which is a good thing.  Remember, the teaching in our universities (not polytechnics) is required by law to derive from research.  However, as a small country where the combined size of all our universities is only that of some of the larger overseas universities, I wonder whether such an inter-institution competitive model is the best for the country.  Perhaps the story should be an evaluation of the costs and benefits of the exercise. Is this the best method of allocating funds? Such a story should also consider whether the competition is enhancing or detracting from the quality of research – after all, in almost any field experts are spread across institutions.  Collegiality is a major driver of good research – does PBRF hinder that?

_________________________

If you want to check out the PBRF results in detail yourself, you can download a PDF from the Tertiary Education Commission here.

Disclaimer:  If you think my skepticism about PBRF is sour grapes because of a “poor” grade, then you’d be wrong.