
PBRF: The end is nigh

I’d like to say the end is nigh for the performance-based research fund (PBRF), full stop. A few months ago, I demonstrated how the expensive and tedious production of evidence portfolios by 7000 academic staff will do nothing to change the redistribution of research funding – the purported reason for PBRF. So, I’d like to say the end is nigh because the minister responsible (Hon. Chris Hipkins) has seen the light and pulled the plug. But, alas, it is simply that all portfolios have now been submitted and so await assessment by the peer review panels. About 250 people serve on these panels, nearly all of them Professors, most from New Zealand with a sprinkling from Australia and elsewhere. They represent a gathering of some of the best minds in the country. From my perspective it is a terrible waste of time for them and of tax-payers’ money for the rest of us.

In completing my portfolio I received a message concerning citation counts that “Panels are not a fan of Google scholar as they think the counts are over-inflated. You can use this but also supply cite counts from either Scopus or WoS.” Frankly, I think the panellists are far too intelligent to worry about this, and I expect they realise that while Google Scholar counts are over-inflated, Scopus (owned by Elsevier!) and WoS under-count (e.g. by not counting book chapters, leaving out some journals, etc.). What matters, if citations have to be used at all, is that apples are compared with apples. I’ve discussed some of these problems recently.

Before I suggest a solution that doesn’t require 250 Professors sitting in days of meetings, or 7000 academics spending days completing evidence portfolios, I’ve produced a graphic to illustrate the problem of comparing apples with oranges. Google Scholar ranks journals according to their 5-year h-index. These can be explored according to the various categories and sub-categories Google Scholar uses (here). Visually, each of the 8 major categories has a different volume of citations and so a different range of h-indices. For example, Social Sciences is a small fraction of Health & Medical Sciences, but is larger than Humanities, Literature & Arts. Within each category there are large differences between sub-categories. For example, in the Health & Medical Sciences category a cardiologist will be publishing in cardiology journals whose top 20 h-indices range from 176 down to 56. However, a Nursing academic will be publishing in journals whose top 20 h-indices range from 59 down to 31. So what is needed is a system that takes into account where the academic is publishing.

Visualisation of Google Scholar’s h5-index categories (large ellipses at the bottom) and sub-categories (smaller ellipses). Each sub-category ellipse represents, in height and area, the sum of the h-indices of the top 20 journals within that sub-category.

Google Scholar, which, unlike WoS and Scopus, is open and public, can be scraped with just three lines of code in R (a free and open programming language) to extract the last 6 years of published articles and their citations for any academic with a profile on Google Scholar. Thousands of NZ academics already have one. Here’s the code which extracts my last 6 years of data:

library(scholar)  # provides get_publications() for Google Scholar profiles
library(dplyr)    # provides filter() and the %>% pipe
pubs <- get_publications("Ig74otYAAAAJ") %>% filter(year >= 2012 & year <= 2017)

The “Ig74otYAAAAJ” is simply my unique identifier, found as the value of the “user” parameter in the URL of my Google Scholar profile (https://scholar.google.co.nz/citations?hl=en&user=Ig74otYAAAAJ).
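If academics simply supplied their profile link, that identifier could be extracted automatically. A one-line illustration in base R (just an illustration, not part of the scholar package):

# Extract the Google Scholar ID (the "user" parameter) from a profile URL
profile_url <- "https://scholar.google.co.nz/citations?hl=en&user=Ig74otYAAAAJ"
sub(".*[?&]user=([^&]+).*", "\\1", profile_url)   # returns "Ig74otYAAAAJ"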

I’ve also been able to scrape the list of top 20 journals and their h-index data for the 260 sub-categories from Google Scholar.  Here is what Cardiology looks like:

Google Scholar’s top 20 journals for Cardiology as at 13 July 2018: https://scholar.google.co.nz/citations?view_op=top_venues&hl=en&vq=med_cardiology
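For anyone wanting to reproduce that scraping step, here is a minimal sketch using the rvest package (one way of doing it, and bearing in mind Google throttles heavy scraping). The vq= code in the URL selects the sub-category and would be looped over all the sub-category codes:

library(rvest)   # read_html() and html_table() do the scraping
library(dplyr)   # for the %>% pipe

url <- "https://scholar.google.co.nz/citations?view_op=top_venues&hl=en&vq=med_cardiology"
cardiology <- read_html(url) %>%
  html_node("table") %>%   # the first (only) table on the page
  html_table()             # columns: rank, publication, h5-index, h5-median
head(cardiology)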

So, how do we use all this data to compare academics without them having to submit screeds of data themselves?  All that is needed is for them to be registered with their Google Scholar identity and for there to be an appropriate formula for comparing academics.  Such a formula is likely to have several components (a rough coding of which is sketched after the list):

  1. Points for ranking within a category. For example, 20 pts for a publication ranked first in a subcategory, down to 1 pt for a publication ranked 20th and, say, 0.5 pts for ones not ranked.
  2. Points that reflect the number of citations a paper has received relative to the h-index for that journal, with a factor that accounts for the age of the paper (because papers published earlier are likely to have accumulated more citations).  For example: (citations ÷ journal’s 5-year h-index) × (2 ÷ age in years) × 20.  I use the factor of 20 just to give this component a similar scale to the ranking points in point 1 above.
  3. Points that reflect the author’s contribution.  Perhaps 20 for first author, 16 for second, 12 for third, 8 for fourth, and 4 for the rest, plus a bonus of 4 for being the senior author at the end.
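Here is a rough sketch of such a formula as an R function. The function and argument names are just illustrative, the weights follow points 1 to 3, and the ranking points follow the worked examples below (20 minus the journal’s rank within its sub-category):

pbrf_score <- function(journal_rank, citations, journal_h5, paper_age,
                       author_position, senior_author = FALSE) {
  # 1. Ranking: 20 minus the journal's rank if it is in the top 20, else 0.5
  rank_pts <- if (is.na(journal_rank)) 0.5 else 20 - journal_rank
  # 2. Citations relative to the journal's 5-year h-index, discounted by paper age
  cite_pts <- citations / journal_h5 * 2 / paper_age * 20
  # 3. Author contribution: 20, 16, 12, 8 for the first four authors, 4 otherwise,
  #    plus a bonus of 4 for the senior (last) author
  author_pts <- if (author_position <= 4) c(20, 16, 12, 8)[author_position] else 4
  if (senior_author) author_pts <- author_pts + 4
  rank_pts + cite_pts + author_pts
}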

Here are a couple of examples of mine from the last 6 years:

Pickering JW, Endre ZH. New Metrics for Assessing Diagnostic Potential of Candidate Biomarkers. Clinical Journal of the American Society of Nephrology (CJASN) 2012;7:1355–64. Citations 101.

The appropriate sub-category is “Urology & Nephrology” (though I wonder why these are grouped together; I’ve published in many Nephrology journals, but never in a Urology journal).

  1. Ranking:  12 points.    [CJASN is ranked 8th, so 20-8 = 12]
  2. Citations:  10.8 points. [ CJASN 5y h-index is 62. Paper is 6 years old. 101/62 * 2/6 * 20 =10.8]
  3. Author: 20 points [ 1st author]
  4. TOTAL: 42.8

Similarly for:

Flaws D, Than MP, Scheuermeyer FX, … Pickering JW, Cullen L. External validation of the emergency department assessment of chest pain score accelerated diagnostic pathway (EDACS-ADP). Emerg Med J (EMJ) 2016;33(9):618–25. Citations 10.

The appropriate sub-category is “Emergency Medicine”.

  1. Ranking:  12 points.    [EMJ is ranked 8th, so 20-8 = 12]
  2. Citations:  5.6 points. [EMJ 5y h-index is 36. Paper is 2 years old. 10/36 * 2/2 * 20 = 5.6]
  3. Author: 4 points [ I’m not in the top 4 authors or senior author]
  4. TOTAL: 21.6 pts
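Plugging these two papers into the pbrf_score() sketch above gives the same totals (to rounding):

# CJASN paper: ranked 8th, 101 citations, h5-index 62, 6 years old, first author
pbrf_score(journal_rank = 8, citations = 101, journal_h5 = 62,
           paper_age = 6, author_position = 1)    # 42.86
# EMJ paper: ranked 8th, 10 citations, h5-index 36, 2 years old, middle author
pbrf_score(journal_rank = 8, citations = 10, journal_h5 = 36,
           paper_age = 2, author_position = 10)   # 21.56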

This exercise, for every academic, could be done by one person with some coding skills.  I’m sure it could be calibrated against previous results and funding allocations by taking citations and papers from an earlier period.  There may need to be tweaks to account for academic outputs other than journal articles, but there are plenty of metrics available.
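To make that concrete, here is a minimal sketch of the sort of pipeline needed. It assumes a hypothetical registration list (staff, with a scholar_id column), a venue_ranks lookup table built from the scraped top-20 lists (with journal names harmonised to Google Scholar’s spelling), and the pbrf_score() function above:

library(scholar)
library(dplyr)

score_academic <- function(scholar_id, venue_ranks) {
  get_publications(scholar_id) %>%
    filter(year >= 2012, year <= 2017) %>%
    left_join(venue_ranks, by = "journal") %>%      # adds rank and h5_index columns
    mutate(pts = mapply(pbrf_score,
                        journal_rank = rank, citations = cites,
                        journal_h5 = h5_index, paper_age = 2018 - year,
                        author_position = 1)) %>%   # placeholder: author position would
                                                    # be parsed from the author string
    summarise(total = sum(pts, na.rm = TRUE)) %>%   # unranked journals drop out here
    pull(total)
}

totals <- sapply(staff$scholar_id, score_academic, venue_ranks = venue_ranks)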

To summarise, I have just saved the country many millions of dollars and allowed academics to devote their time to what really matters.  All it needs now is for the decision makers to open their eyes and see the possibilities.

(P.S. Even easier would be to use the research component of the Times Higher Education World University Rankings and be done with it.)
