Performance Based Research Fund: The numbers are up

8269 academics and their bosses have been alerted – the data is in, the numbers are crunched, PBRF scores are out. Who are the winners, who are the losers? Find out more in the TEC publication. But before you go there…

I predict that any minute now tertiary institutions throughout the land will be posting press releases detailing their successes, each one trying to say “we’re the best” – at least in some category, somehow, if you squeeze the numbers and look at them sideways… well, you get the picture. Speaking of which – here’s one I posted after the last lot of PBRF results were released in 2013.

http://www.scoop.co.nz/education 12 April 2013

Last year I stated that the PBRF Quality Evaluation was a net zero sum game, because irrespective of the outcomes of the quality evaluations of the 8269 staff there is no net increase in the overall funding (that being set by the Government of the day in the Budget). The total Quality Evaluation allocation is $173,250,000 p.a. The 2018 PBRF process has marginally shifted how this pie is divided up.

The percentage changes are marginal, but they do translate to a loss or gain for individual institutions. In this round, based on these preliminary results, the “big winners” appear to be AUT, VUW and the non-university sector. The biggest loser appears to be UC, but the other universities also seem to have lost substantial amounts. Of course, as I’ve argued before, these dollar amounts must be weighed against the total cost to the institutions and the government of administering the PBRF – likely millions for each institution. Once those costs are counted, the gains of VUW and AUT are smaller than they look, and the losses to the other institutions greater.
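To put a hedged number on “marginal”, here is a back-of-envelope sketch. The pool total is the figure above; the half-point shift is a made-up illustration, not any institution’s actual result:

```python
# Back-of-envelope: what a small shift in share of the fixed pie is worth.
# The pool is fixed, so every dollar gained here is lost somewhere else.
TOTAL_POOL = 173_250_000  # total Quality Evaluation funding, $ per annum

shift_pp = 0.5  # hypothetical shift of 0.5 percentage points of the pie
annual_change = TOTAL_POOL * shift_pp / 100
print(f"A {shift_pp} percentage point shift is worth ${annual_change:,.0f} a year")
# -> A 0.5 percentage point shift is worth $866,250 a year
```

Real money, certainly – but, as argued below, it has to be set against what it costs to chase it.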

Was this Quality Evaluation PBRF process worthwhile? – I think not. Will the results be celebrated? – I expect so. Will this post do more to expose the emperor’s new clothes and change the system? – one lives in hope.

Performance Based Research Fund: a net zero sum game

Throughout the land more than 7000 academics are awake night after night and suffering. They are scrambling to gather evidence of just how well they have performed over the last six years. A conscientious bunch, they perform this task with their usual attention to detail and desire to impress (I didn’t say they were modest!). Ostensibly, this exercise is so that their institutions can get a greater piece of the Government research fund pie – the Performance Based Research Fund (PBRF). According to the Tertiary Education Commission, PBRF is “a performance-based funding system to encourage excellent research in New Zealand’s degree-granting organisations.” It may well do that, but, I contend, only by deception.

In what follows I am only concerned with the Quality Evaluation part of PBRF – that’s the bit that is related to the quality of the Evidence Portfolio (EP) provided by each academic. The data is all taken from the reports published after each funding round (available on the TEC website).

In 2012 the total funding allocated on the basis of EPs was $157 million with nearly 97% of it allocated to the country’s 8 universities.  This total amount is set by Government fiat and, here is the important point, in no way depends on the quality of the Evidence Portfolios provided by those 7000+ academic staff.   In other words, from a funding perspective, the PBRF Quality Evaluation round is a net zero sum game.

PBRF Quality Evaluation is really a competition between degree-granting institutions. I find this strange given the Government has been trying to encourage collaboration between institutions through funding of the National Science Challenges; nevertheless, a competition it is.

In the table we see the results of the Quality Evaluation for the previous three funding rounds (2003, 2006 and 2012). Not surprisingly, the larger universities get a larger slice of the pie. The pie is divvied up according to a formula based on a weighting for each academic according to how their research has been evaluated (basically A, B or C), multiplied by a weighting for their research area (eg law and arts are weighted lower than most sciences, while engineering and medicine are weighted the highest), multiplied by the full-time-equivalent status of the academic. In theory, therefore, an institution may influence its proportion of funding by (1) employing more academics – but this costs more money, of course, so may be self-defeating; (2) increasing the proportion of academics in the higher-weighted disciplines (some may argue this is happening); and (3) increasing the number of staff with the higher grades. I will leave it to others to comment on (1) and (2) if there is evidence for them. However, (3) is the apparent focus of all the activity I hear about at my institution. There are multiple emails and calls to attend seminars, update publication lists, and begin preparing an Evidence Portfolio. Indeed, in my university we had a “dry run” a couple of years ago, and it is all happening again.
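A minimal sketch of that allocation logic, to make the three levers concrete. The grade and subject weightings below are illustrative placeholders, not the official TEC values:

```python
# Sketch of the Quality Evaluation allocation described above.
# Weightings are illustrative placeholders, not official TEC values.
GRADE_WEIGHT = {"A": 5.0, "B": 3.0, "C": 1.0}    # Evidence Portfolio grade
SUBJECT_WEIGHT = {                                # research-area weighting
    "law": 1.0, "arts": 1.0,                      # weighted lower...
    "science": 2.0,
    "engineering": 2.5, "medicine": 2.5,          # ...weighted highest
}

def staff_score(grade: str, subject: str, fte: float) -> float:
    """One academic's score: grade weight x subject weight x FTE."""
    return GRADE_WEIGHT[grade] * SUBJECT_WEIGHT[subject] * fte

def share_of_pie(own_staff, national_total: float) -> float:
    """An institution's slice of the fixed pot: its summed staff scores
    over the national total. Raising your own sum only helps relative
    to everyone else's - the zero sum game in one line."""
    return sum(staff_score(g, s, f) for g, s, f in own_staff) / national_total
```

Note that nothing in `share_of_pie` grows the pot; improving everyone’s grades simultaneously changes nobody’s share.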

Now I come to the bit where I probably need an economist (it is my hope that this post may persuade one to take the matter up). Because it is a net zero sum game, what matters is a cost-benefit analysis for individual institutions. That is, what does it cost an institution to gather EPs, compared with what it gains from the PBRF Quality Evaluation fund? If we look at the 2012-2006 column we see the change in percentage share for each institution. The University of Auckland, for example, increased its share of the pie by 1.3 percentage points. This equates to a little under $2M a year. As the evaluations happen only every six years, we may say that Auckland gained nearly $12M. What was the cost? How many staff were involved, and for how long? As nearly 2000 Auckland staff submitted EPs, another way of looking at this is that the net effect of the 2012 Quality Evaluation round was a gain of less than $6000 per academic staff member over six years – before costs. How much less is unknown.
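Working the Auckland example through explicitly – this assumes, as the figures in the paragraph imply, that the 1.3 point gain applies to the roughly 97% of the pool that goes to the universities:

```python
# The Auckland back-of-envelope from the paragraph above.
pool_2012 = 157_000_000   # total Quality Evaluation pool, $ per annum
uni_share = 0.97          # nearly 97% of it goes to the eight universities
gain_pp = 1.3             # Auckland's gain, in percentage points
years = 6                 # rounds are roughly six years apart
eps_submitted = 2000      # approximate number of Auckland EPs

annual_gain = pool_2012 * uni_share * gain_pp / 100   # ~ $1.98M a year
cycle_gain = annual_gain * years                      # ~ $11.9M over the cycle
per_academic = cycle_gain / eps_submitted             # ~ $5,900 gross each
# ...and that is before subtracting the cost of preparing 2000 EPs.
```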

The University of Otago had a loss in 2012 compared with 2006. Was this because it performed worse? Not at all: Otago increased both the number and the proportion of its staff in the “A” and “B” categories, which suggests improved, not worsened, performance. I think Otago’s loss was simply down to the net zero sum game.

Much more could be said, and more questions asked, about the Quality Evaluation – such as, what is the cost of the more than 300 assessors grading the more than 7000 EPs? Or I could go on about the terrible metrics we are being encouraged to use as evidence of the importance of the papers we’ve published. But I will spare you that rant, and leave my fellow academics with this thought: you have been deceived. PBRF Evidence Portfolios are an inefficient and costly exercise that will make little to no difference to your institution.

Spin Doctors go to work on PBRF

University of Taihape:  Doctor Doctor I’ve got a 1.7 on my PBRF

Doctor Spin: Never mind son, your Gumbootlogy results make you the healthiest tertiary education provider in the country.  Let’s talk about that, shall we?

Scoop.co.nz has all the spin from the universities and polytechnics this morning as they try to give the impression that they are the best. At times like this I am ashamed to be an academic. One of the worst sins is to cherry-pick data to make yourself look good. We are used to this from certain sectors of society, but we should expect better from our educational institutions. Unfortunately, the culture of self-promotion above all else has taken hold in our hallowed halls.

For those who are unaware of what I am talking about: around 18 months ago all academics in the country had to put forward a “portfolio” to demonstrate just how research-active they were. This is the Performance Based Research Fund (PBRF) exercise, held every six or so years. Groups of academics under government oversight then went about scoring each academic, in a process that has taken 15 months. The result is that every academic has been given a grade: A, B, C or not research active. The grades of academics in each institution are then fed into four different formulas, each incorporating additional information about different aspects of the institution (eg numbers of postgrad students). Four numbers result. This gives Doctor Spin plenty to play with. The numbers are also used to allocate hundreds of millions of dollars of research funds – herein lies the importance of PBRF to the institutions. A number is also provided for each of the self-selected academic units that the institutions put forward to the Tertiary Education Commission. If an institution doesn’t score well on any of the four overall grades (compared to other institutions of its own size), it can pick a favourable number from one of its academic units and talk about that. More grist for the spin mill.
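Doctor Spin’s method, rendered as a tongue-in-cheek sketch – every number here is invented for illustration:

```python
# Doctor Spin's cherry-picking algorithm (all numbers invented).
overall = {"formula 1": 3.1, "formula 2": 2.8, "formula 3": 2.9, "formula 4": 3.0}
units = {"Gumbootlogy": 5.2, "Law": 2.1, "Engineering": 3.4}
peer_best = 4.0  # best overall score among comparably sized institutions

if max(overall.values()) >= peer_best:
    measure, score = max(overall.items(), key=lambda kv: kv[1])
else:
    # None of the four headline numbers impresses, so reach for
    # the most flattering self-selected academic unit instead.
    measure, score = max(units.items(), key=lambda kv: kv[1])

print(f"PRESS RELEASE: we lead the nation in {measure} ({score})!")
```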

Academics are notoriously competitive – obviously a good trait when it drives them to produce better and better research, and I certainly have some of that streak in me. However, it is not helpful when it results in attempts to pull the wool over the eyes of the public, as happened yesterday. The PBRF is a complex system designed to allocate research funds and, hopefully, improve research quality. Academics will argue until the cows come home about whether it does this fairly. It is certainly a very expensive exercise. It certainly focuses institutions on the importance of research, which is a good thing. Remember, the teaching in our universities (not polytechnics) is required by law to derive from research. However, as a small country where the combined size of all our universities is only that of some of the larger overseas universities, I wonder whether such an inter-institution competitive model is the best for the country. Perhaps the story should be an evaluation of the costs and benefits of the exercise. Is this the best method of allocating funds? Such a story should also consider whether the competition is enhancing or detracting from the quality of research – after all, in almost any field experts are spread across institutions. Collegiality is a major driver of good research – does PBRF hinder it?

_________________________

If you want to check out the PBRF results in detail yourself, you can download a PDF from the Tertiary Education Commission here.

Disclaimer:  If you think my skepticism about PBRF is sour grapes because of a “poor” grade, then you’d be wrong.