I have just got back from Canberra and a week spent sitting on my first NHMRC grant review panel: one of two for “mental health, psychology and psychiatry”. We started on Monday morning with a talk reminding us of our lifetime commitment to confidentiality, but were encouraged to discuss the process to dispel the myth that “this is a club”. So in that spirit, and being mindful of not divulging anything about specific grants, here goes. [A note that there is a fair bit of detail to follow: the NHMRC is nothing if not process-driven.]

I sat in a first-floor room looking out a big window onto an ANU carpark for three-and-a-bit days. On either side of a long table sat the 11 other grant panel members, with the top of the “T” occupied by the panel chair, the assistant chair, and an NHMRC admin person. We discussed 54 grants, with every discussion following the same format. The chair announced the grant, then revealed which panel members had declared a conflict of interest. They left the room before the chair announced who the two external assessors were, which of the panel members were the primary and secondary spokespersons, and their scores.
The primary spokesperson spent about ten minutes outlining the grant and discussing what they thought its strengths and weaknesses were. The secondary spokesperson then presented the reviews of the external assessors, and how the researchers had rebutted those reviews. Once finished, there was a five-minute free-for-all, sometimes spirited, sometimes muted. Then the primary and secondary spokespersons re-scored the grant – from one to seven on each of scientific quality, significance or innovation, and team track record – and the chair asked if any panel members were considering scores two or more points from the primary spokesperson’s on any category. We all then wrote our scores on a secret ballot paper and handed it to the admin person. We had a breather while the admin person entered the scores, and awaited the announcement of the final score. If that was five or more, and there was a possibility that the grant would be funded, we discussed the budget. This could last a surprisingly, and tediously, long time. Most grants scored less than five though, so more often than not it didn’t happen. And then on to the next grant, and the same process again.
The week was the culmination of many months’ work. The review process started in March with an invitation to participate on a grant review panel. In May we were assigned our grants: eight as primary spokesperson and eight as secondary. We had about a month to review and score them, and to write reports for those we were primary spokesperson for. The applicants were sent these reports and those of two external assessors, and had ten days to make a two-page rebuttal.
We received a copy of the reports in July, seeing what the external assessors had to say for the first time, along with the rebuttals. We then had a week or so to re-score the applications in light of this new information. On the basis of the scores from the primary and secondary spokespersons, the bottom half of the applications were eliminated (“not for further consideration” in the NHMRC argot). It was the remaining applications, 54 out of 91 for our panel, that we discussed when we met in Canberra last week.
While I learnt much from the process there was one thing I didn’t learn, and that was how the NHMRC determines which of the grants from our panel will receive funding. At the end of our panel’s meeting we had established a rank order for the grants (from 1 to 54), but left without knowing which had actually been successful. Last year 3,810 applications were submitted for assessment by 35 panels, and 553 (15%) were funded. On that basis, out of the original 91 grants assessed by the panel, 13 or 14 will get funded. But that might be optimistic. The total funding pool remains about $400 million; but the grants are asking for more money for longer periods, so although the total number of grants submitted has remained reasonably constant over recent years (3,846 were assessed by 37 panels this year), the number of grants getting funded has been on a steady decline.
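The back-of-the-envelope estimate above can be sketched in a few lines of Python, using only the figures quoted in the post (the variable names are mine, for illustration):

```python
# Estimate how many of this panel's grants might be funded, assuming the
# panel's success rate matches last year's national figures from the post.
submitted_nationally = 3810  # applications assessed last year
funded_nationally = 553      # applications funded last year
panel_applications = 91      # grants originally assessed by this panel

success_rate = funded_nationally / submitted_nationally
expected_funded = success_rate * panel_applications

print(f"national success rate: {success_rate:.1%}")   # about 15%
print(f"expected funded grants: {expected_funded:.1f}")  # about 13, i.e. 13 or 14
```

As the post notes, this assumes the historical rate holds; with grants asking for more money over longer periods from a flat funding pool, the true number is likely lower.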
There were many things I did learn.
- Grants have to appeal to researchers from broad backgrounds. The Australian medical research community is small, and the mental health panels aren’t specialised. Our panel was broadly divided into those whose research was into basic clinical neurosciences (rat and genetic studies), and those who had a more clinical focus. Grants had to appeal to both: the former seemed more concerned about hypotheses and mechanisms, the latter about clinical relevance, study power, and feasibility. It was a mistake to ignore either perspective.
- There is little to be gained from reading the tea-leaves when you receive the first spokesperson and external assessor reviews. Only one of them has actually scored the grant (you have little way of knowing which report is by the spokesperson and which by the assessors), and the final panel score was only moderately correlated with the primary spokesperson’s (I was geeky enough to calculate the correlation for our panel: Pearson’s r was 0.58).
- Related to this, the rebuttal matters much more than I had thought. Not only do the spokespeople re-score the application after reading the rebuttal, but discussion of it takes up a reasonable portion of the time spent on the grant. So it is worth making an effort. The better rebuttals addressed the major concerns raised by the reviewers, using the opportunity to continue the argument about why their grant should be funded. The less successful ones rebutted each point in micro-detail, forgoing any opportunity to restate their case.
- The review process has a human element that adds uncertainty. If you are fortunate your grant is assigned to sympathetic spokespersons and reviewers, and assessed by a panel that is receptive to it. And unfortunate if the opposite occurs. I don’t know how much can be read into a grant being unsuccessful, especially if it made it over the first hurdle and got assessed at the Canberra meeting. If it wasn’t successful, and you believe on reflection that the research has merit, I would be inclined to submit it again without major revision. And hope the dice come up more favourably.
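The moderate correlation mentioned above (Pearson’s r of 0.58) is the standard Pearson product-moment coefficient; a minimal sketch of how such a figure could be computed, using made-up 1–7 scores rather than any real panel data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical primary spokesperson scores vs. final panel scores (not real data)
spokesperson = [4, 5, 6, 3, 5, 7, 4, 2]
panel_final = [5, 5, 5, 4, 6, 6, 3, 3]
print(round(pearson_r(spokesperson, panel_final), 2))
```

An r of 0.58 sits well short of the near-perfect agreement you might expect if the first review sealed a grant’s fate, which is the point of the lesson above.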
There is suspicion and resentment about the NHMRC grant review process, much of it because of the low success rate. I understand the resentment: as researchers we spend many hours preparing our grants, and most of the time that effort is wasted. Our research careers depend on getting the grants, and success is hard. As a process though, it is difficult to find fault with it. Gaming the system would require an implausible degree of collusion. It would be a better system if the total funding pool were doubled: a success rate of 25% or so seems about right. And with the Medical Research Future Fund something like that might actually happen.