What the HRC should have done

The system is broken.  It is no better than a lottery.  The Health Research Council tacitly acknowledged this last year when it introduced a lottery into its grant funding round: three “Explorer Grants” of $150,000 each, available again this year.  The process went thus: the HRC announced the grants and requested proposals;  proposals were required to be transformative, innovative, exploratory or unconventional, and to have potential for major impact;  proposals were examined by committees of senior scientists;  and all that met the criteria were put in a hat, from which three winners were drawn.

116 applications were received and 3 grants were awarded (2.6%!). There were several committees of 4–5 senior scientists, each assessing up to 30 proposals.  I’m told it was a couple of days’ work for each scientist. I’m also told that, not surprisingly given we’ve a damned good science workforce, most proposals met the criteria. WHAT A COLOSSAL WASTE OF TIME AND RESOURCES.

Here is what should have happened:  all proposals should have gone straight into the hat;  three should have been drawn out;  each of the three should have been assessed by a couple of scientists to make sure it met the criteria;  and if one failed, another should have been drawn and assessed in its place.  This would take about a tenth of the time and would enable results to be announced months earlier.
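To make the mechanics concrete, here is a minimal sketch of that draw-then-assess procedure in Python. The `meets_criteria` function is a hypothetical stand-in for the quick two-scientist check; only drawn proposals are ever assessed.

```python
import random

def meets_criteria(proposal):
    # Hypothetical stand-in for the two-scientist check of a drawn
    # proposal; in practice this would be a quick committee assessment.
    return proposal.get("eligible", True)

def draw_then_assess(proposals, n_awards=3, seed=None):
    """Put every proposal in the hat, draw one at a time, and assess
    only the drawn ones, redrawing whenever one fails the criteria."""
    rng = random.Random(seed)
    pool = list(proposals)
    rng.shuffle(pool)                   # the hat
    winners = []
    for proposal in pool:               # each iteration is one draw
        if meets_criteria(proposal):    # assessed only after being drawn
            winners.append(proposal)
            if len(winners) == n_awards:
                break
    return winners

# Example: 116 proposals in the hat, three awards; at most a handful
# of proposals ever need assessing, instead of all 116.
proposals = [{"id": i, "eligible": True} for i in range(116)]
print([p["id"] for p in draw_then_assess(proposals, n_awards=3, seed=1)])
```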

Given that the HRC Project grants have only about a 7% success rate, and that the experience of reviewers is that the vast majority of applications are worthy of funding, I think a similar process of randomly drawing and then reviewing would be much more efficient and no less fair.  Indeed, here is the basis of a randomised controlled trial which I may well put to the HRC as a project proposal.

Null Hypothesis:  Projects assessed after random selection perform no differently to those assessed using the current methodology.

Method:  Randomly divide all incoming project applications into two groups.  Group 1 (current assessment methodology): assess as per normal, aiming to assign half the allocated budget.  Group 2 (random assessment methodology): randomly draw 7% of the Group 2 applications;  assess them;  draw more to cover any which fail to meet the fundability-only criteria;  and fund all which meet those criteria, in the order they were drawn, until half the allocated budget is used.  A sketch of the Group 2 mechanics follows.
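This sketch assumes each application carries a hypothetical `requested` budget field, and `fundable` stands in for the fundability-only assessment; neither name comes from the HRC process.

```python
import random

def fundable(application):
    # Hypothetical stand-in for the fundability-only check in the Method.
    return application.get("fundable", True)

def random_assessment_arm(applications, half_budget, seed=None):
    """Group 2: draw applications at random, assess fundability only,
    and fund them in the order drawn until half the budget is used."""
    rng = random.Random(seed)
    pool = list(applications)
    rng.shuffle(pool)                             # fixes the draw order
    funded, spent = [], 0
    for app in pool:
        if not fundable(app):                     # fails the check:
            continue                              # draw the next instead
        if spent + app["requested"] > half_budget:
            break                                 # budget exhausted
        funded.append(app)
        spent += app["requested"]
    return funded

# Example: 100 applications requesting $100k each, $500k for this arm.
apps = [{"id": i, "requested": 100_000, "fundable": True} for i in range(100)]
print(len(random_assessment_arm(apps, half_budget=500_000, seed=1)))  # 5
```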

Outcome measures:  I need to do a power calculation and think about the most appropriate measure, but it could be either a blinded assessment of final reports or a metric such as the difference in numbers of publications.
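As a starting point for that power calculation, here is a sketch using statsmodels for a two-sample t-test comparing mean publication counts between the arms. The effect size is purely illustrative, not an estimate from any data, and a count outcome would ultimately want a more appropriate model.

```python
# Rough sample-size calculation for a two-sample comparison of
# publication counts between the two funding arms.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(
    effect_size=0.5,   # assumed Cohen's d: a medium-sized difference
    alpha=0.05,        # two-sided significance level
    power=0.8,         # desired power
    ratio=1.0,         # equal numbers of funded projects per arm
)
print(f"Funded projects needed per arm: {n_per_arm:.0f}")  # ~64
```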

Let’s hope that lessons are learnt when it comes to the processes used to allocate National Science Challenges funds.
