Funding

Overview

External funding is the ultimate measure of success for a scientist - whoever can convince the funding agencies to allocate money for what he's doing will not only be able to support a large research group, but will more often than not also have leverage in his home institution. Think of all the other measures discussed so far (citations, number of publications, publications in prestigious journals,...) as assessing the value of the researcher - usually called the principal investigator (PI) of a project - while the idea outlined in the research plan of an application, evaluated in a peer review process, is meant to assess the value of the future research; funding decisions are then made weighing both factors.

The paradigm of the age is evaluation - money should not be given to researchers randomly, rather it should specifically be allocated to the projects most likely to yield results which advance science. Hence funding decisions are usually short term - grants often last two years, sometimes three, rarely five, almost never more. Afterwards the value of the funded research is assessed, and the researcher has to argue again for continuation.

Now, one problem is that external funding is the ultimate measure of success because it is often actually used to gauge the value of the PI. That's right - supposedly funding is allocated because someone is a good scientist, which is proven by the fact that he gets funding. If you think that's self-referential and hence not a trivial principle to apply, I quite agree.

Timing issues

The whole premise of evaluation-based funding schemes is that one gets better science out by giving an amount X to one brilliant scientist than by giving half of X to two reasonably good scientists - so the schemes aim to find the brilliant scientists among all applicants. Let's leave aside for the moment the question whether this can be done at all and assume we have found our brilliant scientist and two reasonably good ones - do we actually get better science by allocating money that way?

One example of this is the ERC grants in Europe (more on those later) which provide 2.5 million Euro for a researcher over a five year period - with very few applicants getting the grants. The question to be discussed now - is it evident that things would not be better if we gave five applicants 500,000 Euro each over the same five year period?

Let's start with the observation that being given a huge sum to fund a good group does not mean I can hire the people I want or that I can hire immediately - the job market for specialists in a particular branch of physics is limited, often fewer than twenty people would really qualify for a position, and they're not all available at any given time. In the worst case, I learn of the funding decision right after institutional hiring decisions are made (in the US, usually late winter/early spring for positions starting in fall). The promising PhD student I would like to hire might thus just have signed up for a two year position elsewhere, i.e. he won't be available for the next 2 1/2 years. I might not have my group in an attractive location (certainly asking people to come to central Finland has been a stretch...) - so my preferred candidate might favour signing up in Paris instead because that's where he prefers to live. There might be family issues as well. Young researchers might prefer a position with an established senior scientist to one with a young grant holder. And so on. Being able to hire the person you want is rare.

Thus, most likely I cannot immediately hire a good group; if I am lucky I can do it partially and get one person I want. Yet the money needs to be spent (in a five year grant, one cannot decide to spend nothing the first year and twice the amount the last year) - so often one ends up hiring people other than the best for the group. More money to spend doesn't solve the problem, it just worsens it - there are now more slots to be filled, but the number of really good people looking for a position has not changed. Thus, given more money, I can readily see that I could get a larger group, but not that I could get a better group.

Building a good group needs time. If I were able to wait two years for a good candidate, and then offer him a five year position (that's right, longer term positions are really attractive because scientists actually like not having to move around all the time), then I would be more likely to succeed. But such a long term perspective on group building is incompatible with the way the grant money needs to be spent.

Allocating a large sum to a single brilliant researcher also assumes that she is not only a good researcher but a good group leader as well, i.e. can guide students and postdocs such that they produce valuable results. In practice, these skills are not necessarily correlated though.

Face time, i.e. how much time a student gets to discuss with the group leader, is another important factor which determines the productivity of a group. Some students work well on their own, but that's not the rule. I did my PhD in an exceptionally large theory group - and as students we had half an hour every month to discuss with the group leader plus 1 1/2 hours of group seminar talk every half year. Coming later into a group in which I could simply walk into the office of the local professors whenever I needed to discuss something made me realize just how big a difference this makes. The face time and supervision problem is also much larger in the big groups that large sums of grant money can create than in small groups.

As a side note, when I prepared an ERC application and computed the funding for the group I would really like to have, I ended up requesting significantly less than the nominal sum of the grant. It might perhaps not come as a surprise that there was a persistent push from the university to ask for a larger sum nevertheless.

Finally, let us have a look at the time it takes to prepare a grant application. It takes at least one to two weeks doing nothing else to prepare a major grant application like the ERC, possibly more. If such a grant scheme allocates funding to a few selected scientists only and has a success quota of maybe 5-10%, more than 90% of this preparation work is ultimately wasted - in addition to the time the reviewers spend looking through unsuccessful applications. With ten to twenty applications written for every one that is funded, each taking one to two weeks, every successful grant comes with an unseen cost of roughly half a man-year of highly qualified researcher work wasted - a drain on research productivity which then must be offset by the research the successful grant enables.
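To make the arithmetic behind this estimate explicit, here is a minimal back-of-envelope sketch in Python; the preparation time, success rate and length of a working year are rough midpoints assumed purely for illustration, not official statistics:

    # Back-of-envelope estimate of application-writing time per funded grant.
    # The numbers below are rough midpoints of the figures quoted above and
    # are assumptions made purely for illustration.

    weeks_per_application = 1.5    # "one to two weeks doing nothing else"
    success_rate = 0.075           # "a success quota of maybe 5-10%"
    working_weeks_per_year = 46    # assumed length of a working year

    applications_per_award = 1.0 / success_rate
    wasted_weeks = (applications_per_award - 1.0) * weeks_per_application

    print(f"applications written per funded grant: {applications_per_award:.1f}")
    print(f"preparation time sunk into unsuccessful applications: "
          f"{wasted_weeks:.0f} weeks, i.e. {wasted_weeks / working_weeks_per_year:.2f} man-years")

With these mid-range numbers the sketch gives roughly 13 applications and some 18 weeks of sunk preparation work per funded grant; at the lower end of the success rate and the upper end of the preparation time, the figure climbs well past half a man-year - before the reviewers' time is counted at all.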

An inside view on grant reviews

As mentioned above, grant applications usually go through a peer review process, i.e. a certain number of scientists must certify that the research proposed in the application is reasonable and that the PI is the right person to conduct it. The process comes in two flavours. In one of them (used in the US for DOE and NSF grants) applications go first to selected scientists (much like a manuscript) who write a report assessing the application; the application along with the report then goes to a panel which selects the successful applications based on the reports and its own assessment. The second flavour (as done for the ERC grants) skips the report and lets the panel make the assessment directly.

Just as with reviews for journals, a defining characteristic of the process is that the grant reviewer is not accountable. Often application guidelines specifically rule out appeals - if a panel member declined funding because he misunderstood what the application was about, then that's the end of the process all the same. While the review process is supposed to provide quality control for the research that is funded, there is often little quality control of the selection process itself.

It is not unusual to see the grant selection stage become an arena of competition for funding between different fields. For instance, if a call is for the topic 'Structure of Matter', the panel is composed of two particle physicists and a nuclear physicist, and a quick survey among colleagues in nuclear physics who have submitted an application establishes that everyone sees the same pattern of one enthusiastically supportive assessment and two negative ones, you might form a theory as to what happened.

I have never been on a panel, but I have written review reports for US, Japanese and South African funding agencies. How this works in practice is that you're asked to assess the application with regard to three distinct questions: 1) Is the proposed research sufficiently relevant and interesting? 2) Is the PI sufficiently qualified to do this research? 3) Is the requested grant sum adequate?

Out of the three, 3) is answered most readily. There are certain numbers to be allocated for salaries and travel and usually it is roughly known in advance what sum to aim for in the grant. I suppose when experimentalists ask about hardware, they similarly have a feeling for what it costs to run their part of the experiments.

The second question is trickier - here one usually tries to assess the track record using all the indicators of research productivity described above - number of publications and citations, talks at major conferences, previous funding,... - along with all their flaws and pitfalls. It's at this stage that it is potentially fairly easy to keep young researchers out - simply claiming they do not have the necessary experience is enough. Perhaps that is the reason there are so many grant schemes specifically targeted at young researchers.

However, in the end most hinges on the first question, and here is where it all gets murky. What the applicant is supposed to provide is a research plan - what milestones will be reached when, what their impact will be,... As the saying goes among researchers, however: if we knew what we were doing, it wouldn't be called research. That is to say, research can't be predicted very well. Interesting developments may occur (say a new measurement) which completely change an idea. The proposed research itself may quickly turn out to be on the wrong track. And so on.

Thus, a research plan by and large is a piece of fiction - most likely the actual research done will deviate from the plan anyway. It is most valuable as an expression of what someone is interested in. The situation is complicated further by the fact that many agencies want to primarily fund breakthrough research rather than systematic work (which would be more predictable). It is intuitively clear that everyone wants to be funding the next Einstein, but first of all true breakthroughs are rare, and second they can't be predicted at all, otherwise they wouldn't be considered breakthroughs. And often it becomes apparent only in hindsight (say with five years' distance) what is a crackpot idea and what is a revolutionary idea.

So a grant reviewer is really asked to predict whether an idea proposed now will be considered valuable by everyone ten years from now. That, naturally, can't really be done particularly well, and looking back, errors in judgement are the norm rather than the exception. Remember, it wouldn't be called research if we knew the outcome beforehand.

Personally, I was never sure how to assess whether an idea is sufficiently relevant. Certainly I can judge what I find interesting, but I am not under any delusion that this would necessarily correspond to what is interesting research or what others would find interesting. What is possible is to identify grant applications which probably won't work - sometimes the applicant happens to underestimate the magnitude of a problem drastically, sometimes he has a flawed notion of how things tie together, sometimes the application doesn't form a consistent whole. So I believe a process which sorts out the fraction of projects with little chance of success would be possible, but I don't see how a process could reliably identify a small fraction of truly brilliant ideas and projects.

Political pressure

Imagine you are in an experimental collaboration. Detectors are huge, complicated devices; they have to be planned and funded years before the actual research starts, i.e. you have to declare to a funding agency now what kind of equipment you will need perhaps fifteen years from now - which is difficult.

Now imagine, fifteen years later, with the actual experiments starting and data coming in, that you simply got it wrong - the tools you requested back then turn out not to work well for the task after all. It's no one's fault - it's research after all, one couldn't know everything beforehand. Yet - make a guess as to what the reaction of the public and the funding agency would be if you openly stated that.

Imagine in addition that there is a competing collaboration (it is a good principle in experimental science to do the same measurements in two independent experiments for verification) and this other collaboration has the same problem, but refuses to acknowledge it - how would that now change your position in public and with the funding agency?

The sad truth is that while grant application guidelines often profess to aim to fund high risk research with breakthrough potential, that's not what funding agencies actually want - what they actually want is that the risk pays off. Having built a 20 million US $ machine which does not deliver the promised results will not be perceived as a risk taken and a stroke of bad luck, it will be perceived as a waste of money which should have been avoided beforehand.

This creates a need for justification - in order to keep the channels for future research open, the existing projects and hardware must be perceived as successful and useful, regardless of whether they actually turned out to be or not. Whatever the researchers themselves might think or want, they cannot simply change the plan without taking a huge risk for the field.

Now, this need for justification massively gets in the way of science's mission to find out things about nature. It potentially places a scientist in a situation where she has a choice between doing what is best to ensure future funding for the field and doing what is best to find out about nature.

I experienced a key moment at a workshop where I presented results which indicated that the jet clustering techniques which are very successful in particle physics would not make very sensitive observables for URHIC physics. I was not particularly happy about this outcome as it contradicted widely held assumptions - but I had verified my results, could explain them in simple terms and had by then seen hints of the same thing in other theory publications. After my presentation, an experimentalist objected with: "How should I possibly explain this to my colleagues in p-p physics?"

His point was not that I had got the science wrong; his point was that he had (like many others) designed experimental programs and proposals around the idea of the observable being valuable, so I had found out something that should not have happened. He was from a large LHC experiment in which the URHIC group was only a small fraction of a collaboration consisting mainly of particle physicists, and I believe I now understand that he had to negotiate and fight with that part of the collaboration for influence and resources - and part of the 'deal' apparently brokered was that the URHIC group would use the precision jet measurement techniques from p-p physics. Me reporting that they would not do what we needed was certainly a spoiler (well, another experimentalist argued more vehemently to 'get these results out of the world', so the comment directed at me was rather mild). So there was a real concern that my results would harm the future ability of his group to do science.

The whole situation was anything but nice. I was given to understand from many sides quite clearly that my results were not welcome and had the potential to make the whole community look bad. On the other hand, they were what I had calculated, and I could not make them go away; I even understood why they had to come out generically like that. So I kept showing them - though I believe they did not advance my career.

ERC grants

The European Research Council grants deserve a special mention here because in my view they exemplify almost everything which characterizes a grant scheme that is not optimal for advancing science, while it is usually depicted in public as the flagship scheme to support excellent research.

According to the application guidelines of around 2012, the scheme aims to fund high-risk high-gain breakthrough research. To assess the value of research, it uses the following categories: Outstanding, Excellent, Very Good and Non-Competitive.

Suffice it to say, we might argue whether good research is really well described as non-competitive; and since by definition half of the research done in the world is below the median in quality, the assumption behind the scheme is that we can not only reliably identify the best of the best PIs and research ideas, but that only they deserve to be funded. Consequently, the grant comes with a staggering amount of money (depending on the precise age group of applicants, ranging from 1.5 million to 5 million Euros) over a period of five years, with a low success rate for applications of the order of five to ten percent at most. While this makes ERC grant holders the darlings of their host universities and (since funding attracted is considered a measure of good research) gives them a lot of public reputation - what about the science?

First of all, how does the ERC go about identifying the best of the best researchers - is there a long and careful selection process, with reviewers writing educated in-depth reports about the long and detailed applications, weighing their strong and weak points?

No, there really doesn't seem to be. While the applications are certainly long and carefully prepared, the written evaluations of the grants which I have been able to see or have been told about by colleagues are terse two-sentence paragraphs, usually with just a broad reference to the content of the application and no discussion of any details. There is no reviewer; the evaluation is done directly by the panel members. For URHIC physics one finds a very characteristic and fairly consistent pattern of two rather negative and one enthusiastic evaluation, correlating with two particle physicists and one nuclear physicist in the panel (note that this is indicative, but no proof - correlation is not causation). It can happen that a panel member misunderstood the point of the application - this is evident from summaries like 'The applicant aims at doing A' when the applicant in fact does not aim at A and might mention A only in the introduction to provide context.

The evidence visible in the feedback to the applicants suggests that many applications are reviewed by giving the abstract a cursory read, skimming over the rest and then writing a minimal evaluation.

Note again that the reviewers are not accountable. They are free to write 'this research is too ambitious' or 'this project is not ambitious enough' or 'the PI is not sufficiently experienced'; they do not need to justify or argue any of this, and any of it will lead to an unsuccessful application. As far as assessing the value of the proposed research is concerned - if I imagine I had to do it for an idea outside my own field, say in particle physics, it would be very difficult. I would first have to learn the context, try to understand how the research ties into the rest of particle physics, and a day spent with a 20 page application would certainly not be enough to do that. I would not be able to express an educated opinion in two terse sentences. Although I readily admit to not being a distinguished senior scientist.

Now, I would like to contrast this with grant reviews for US agencies in which, as a reviewer, I was actually asked to submit a detailed opinion and to justify it to the program manager - to explain in a longer text how I reached my conclusions. Things can be done in a different way.

Combine the reality of the selection procedure with what I argued above about the effects of large group sizes and the pitfalls of a low application success rate, and you might understand why I didn't reach the conclusion that the scheme is optimal for advancing good science.

The funding dilemma

A general problem alluded to above is that funding agencies really would like to be the ones to support the next Einstein. The focus is on supporting novel ideas which revolutionize or transform fields, breakthroughs, frontier research and a few other catchy concepts like this. What I've hardly seen at all is funding for finding out whether the old ideas are actually right, or for independent verification of spectacular results.

So external funding (which is really considered the measure of success) asks researchers to produce a steady stream of novel ideas - but testing whether these ideas are correct is left to someone else. Yet scientific progress often does not come from having five ideas how a phenomenon might come about and adding five more - it comes from sorting out whether one of the original five ideas is correct. As often as not, breakthroughs are driven by large systematic efforts, painstakingly eliminating every alternative possibility until at the end the correct solution remains.

Yet this is considered boring research - the time horizon may be too long, systematically testing old alternatives is not as exciting as presenting novel ideas - and consequently it is much more difficult to get funding for. Independent verification is even worse - the necessity of doing an experiment someone else has already done is difficult to argue to the public - after all, we know what comes out, right? Except when we don't, of course. Even then, there's a perception that the researchers do it only because they don't have any ideas of their own, or because they are envious of the successful original research and are trying a copycat move - it's easy to fall into one of these traps.

And yet, that's fairly often what science actually needs. Scientists are just expected to provide this on their own.

Another impression one gets from the endless rounds of evaluation, the short grant periods and the aim to select the best project is that there's a general mistrust - a suspicion that researchers would simply waste money and do nothing if they could be sure of being settled for, say, ten years, and that the only way to keep scientists productive is to make them compete all the time.

I would suspect the opposite is true. Most scientists do what they do because they like it, they're motivated and interested, and there's plenty of competition of ideas happening at any workshop already. The competition for grant money isn't really seen so much as a competition between the best research ideas because there is simply a component of luck involved - one can't guess what research a particular reviewer would like to see. As for scientists who would do nothing - they exist. I've met one case during my career in physics and heard of a second one. Hardly numbers to be afraid of, and probably less wasteful than the sum of time spent by motivated scientists preparing ultimately unsuccessful grant applications. Just about the number one wish I have heard from colleagues as to what they'd need to do better research is the ability to actually do longer term planning - which is not what grants provide.

Continue this essay with remarks about Outreach


Created by Thorsten Renk 2015 - see the disclaimer and contact information.