My previous blog focused on the second pillar of the science team strategy, and I had intended this time to move on to the third pillar – using knowledge effectively. But there are still some things I want to discuss that relate to strengthening research capability, so I’ll leave pillar three until next time.
Identifying the best researchers – do we get it right?
I write this blog shortly after our latest round of interviews – the three-day bonanza during which our interview panel, divided into four ‘arms’, questions researchers carefully about their grant applications and then makes recommendations to Wellcome’s Science Division about which should be funded.
The applications that make it this far have already been through an earlier screening by one of nine expert review groups, which look at the quality of the science and the importance of the question.
The interview panels also consider these aspects, of course, but in addition they give researchers the chance to address any questions that the panel or the expert referees wish to raise. This lets the panel judge how well a researcher responds to challenge, and how well they might manage a research team.
Through this process we hope we manage to identify the best researchers. But do we get it right?
- How helpful is the interview?
- Does it in fact only test how good a researcher is at being interviewed?
- And how well does this correlate with how good a scientist they are?
Perhaps it’s a good thing that in the UK different funders assess applications differently; the MRC, for example, doesn’t interview applicants for its project and programme grants.
I think about this a lot with my colleague Chonnettia Jones, Director of Insight and Analysis. We agree that when funding researchers, Wellcome is looking for exciting and challenging science that will improve knowledge and, whenever possible, make a difference to human health.
An applicant’s track record will influence our thinking, but we shouldn’t allow this to disadvantage early career researchers. We don’t pay attention to where an applicant’s paper is published – what counts is the science – and we take great care to make sure that equality, diversity and inclusion are at the front of our minds.
But are we making the right decisions? I’d like to think we are, but what is the evidence for this? Are we doing better than simply allocating grants at random?
Dorothy Bishop raised a related question in a recent blog: should funders award small grants randomly to those applications that exceed a quality threshold? Surprisingly, to my mind, two-thirds of the respondents to her Twitter poll were in favour of this.
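For readers who like to see a proposal made concrete: the threshold-lottery idea can be sketched in a few lines of Python. This is purely illustrative – the applicant names, scores, threshold and budget are all invented, and no real funder uses this exact procedure.

```python
import random

def threshold_lottery(applications, threshold, budget, seed=None):
    """Award grants by lottery among applications above a quality threshold.

    `applications` is a list of (name, score) pairs, `budget` is the number
    of grants available. Scores and names here are purely illustrative.
    """
    rng = random.Random(seed)
    # Only applications meeting the quality bar enter the draw.
    eligible = [name for name, score in applications if score >= threshold]
    # Everyone above the bar has an equal chance, regardless of rank.
    rng.shuffle(eligible)
    return eligible[:budget]

apps = [("A", 8.2), ("B", 6.9), ("C", 7.5), ("D", 9.1), ("E", 5.4)]
awards = threshold_lottery(apps, threshold=7.0, budget=2, seed=1)
print(awards)  # two of A, C and D, chosen at random
```

The point of the sketch is that ranking only matters up to the threshold; beyond it, chance replaces fine-grained (and possibly unreliable) judgements about who is "best".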
As Chonnettia and I develop our ideas, we’d be interested to hear what people think.
- What is the best way to decide which grants to fund?
- What measures of success should we use?
- How should we recognise that the tangible outcomes of an award can be a long time coming?
And do read Chonnettia’s blog about Wellcome’s Success Framework, which helps us think about what we want to achieve and how we know we have achieved it.
We’re backing team science
I sometimes think it’s a shame that we have to give a name to something that should happen naturally and that we have to develop mechanisms that encourage it to happen. But this is where we are with team science, which we can define as research in which two or more independent research groups come together to do their work, whether in one institute or across the world.
As it becomes more and more difficult for one research group to master all the technologies needed to answer a question, it is obvious and inevitable – and intellectually stimulating and exciting – that scientists should work together to answer difficult questions.
I see this in my own research, where we have gained some novel insights (or so I claim) into the early cell cycle of the frog embryo only by collaborating with Torsten Krude and Phil Zegerman in Cambridge, who complement our embryological skills with expertise in biochemistry and the cell cycle.
Clearly, funders and everyone involved in scientific research should encourage team science and lower the activation energy to making it happen. How can we do this?
Wellcome recognises that research takes you to unexpected places and provides unforeseen opportunities. If your original application didn’t mention collaborating with another group to achieve something, this doesn’t matter – just go for it.
What we care about is that the best science gets done and that new things are discovered, and we’re proud that our awards are flexible enough to allow this to happen through whatever route is appropriate.
But we also recognise, of course, that some collaborations need to be more formal and may require specific funding to allow them to go ahead, and for these we have our Collaborative Awards.
As with most of our grants, the organisation hosting a Collaborative Award must be based in the UK, the Republic of Ireland, or a low- or middle-income country. But we know that science is an international activity and the best collaborators may well be elsewhere in the world.
We encourage such interactions. Our communications campaign #togethersciencecan celebrates and shares stories about the importance of collaboration in science, and what this has already achieved and could achieve in future.
We also recognise that a collaboration between (say) a developmental biologist and a mathematician does not necessarily require transformative embryology and transformative maths – bringing the two fields together will frequently be enough to stimulate innovative thinking.
One example is the expert meeting about nutrition science at Wellcome this autumn. We are keen to identify early-career researchers interested in this area and we have a small number of fully funded places for young scientists to join the meeting.
The Academy of Medical Sciences has published an excellent report on team science (declaration of interest: I serve on the Academy’s Council), and this raises the important question of recognition. All too frequently, individuals’ contributions to a paper are still recognised predominantly by authorship position (first or last), perhaps nuanced with a few asterisks or daggers.
I completely agree with the report that people’s contributions to a piece of work should be properly and accurately recognised, with links to ORCIDs, and that those of us who assess scientists should pay proper attention to this.
Many journals are now making authorship contributions clearer, with eLife, for example (on whose board I sit), providing this information online when you click on an author’s name, as well as listing it at the end of the PDF.
The CRediT project (Contributor Roles Taxonomy) is also a great step forward, and may have the valuable effect of reducing the frequency of ‘guest authorship’. But use of CRediT isn’t mandatory, and the taxonomy doesn’t always capture the subtleties of people’s contributions to a paper.
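To make the idea concrete, CRediT records each author’s contributions as a set of named roles rather than encoding them in authorship order. The sketch below shows how such metadata might look; the author names are invented, but the fourteen role names are the standard CRediT terms.

```python
# The 14 contributor roles defined by the CRediT taxonomy.
CREDIT_ROLES = {
    "Conceptualization", "Data curation", "Formal analysis",
    "Funding acquisition", "Investigation", "Methodology",
    "Project administration", "Resources", "Software",
    "Supervision", "Validation", "Visualization",
    "Writing - original draft", "Writing - review & editing",
}

# Hypothetical author list: roles are declared explicitly per person,
# so a "middle" author's contribution is just as visible as the first's.
contributions = {
    "A. Researcher": ["Conceptualization", "Investigation",
                      "Writing - original draft"],
    "B. Collaborator": ["Methodology", "Formal analysis"],
    "C. Supervisor": ["Supervision", "Funding acquisition",
                      "Writing - review & editing"],
}

def roles_valid(contribs):
    """Check that every declared role is a recognised CRediT term."""
    return all(role in CREDIT_ROLES
               for roles in contribs.values()
               for role in roles)

print(roles_valid(contributions))  # True
```

Structured records like this are what allow journals (and assessors) to link contributions to ORCIDs and to query them, rather than guessing from asterisks and daggers.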
This is another reason it can be so helpful to interview grant applicants – to find out what their contributions to a piece of work really were.
Embryo development, synthetic vaccines and big data neuroimaging
I finish this blog, as usual, with some papers and work that have caught my eye.
- The first relates to my continued interest in early frog embryos (see above). Marc Kirschner, Allon Klein and colleagues have carried out an extraordinarily detailed analysis of gene expression at the single cell level during the development of Xenopus tropicalis. The work is complemented by two papers (here and here) on zebrafish development that are similarly detailed and informative. As James Briscoe says, there is lots left to do, but it’s reassuring that some of the classical concepts of developmental biology remain intact.
- I have also been excited by Andy Sewell’s work, in which he and his colleagues have created the first synthetic, non-biological vaccine, and gone on to test it in mice. This vaccine, based on D-amino acid combinatorial chemistry, effectively mirrored the immunogenicity profile of an influenza virus epitope. The great thing about the vaccine is that it is highly stable and effective after oral administration, which means it is simple to transport and easy to administer. Vaccines based on this approach could transform the way we vaccinate against disease around the world.
- I was struck by a paper at the beginning of the year by Tom Nichols and Steve Smith, in which they discuss the opportunities and challenges of big data neuroimaging. They emphasise the importance of training and the need for neurobiologists to acquire skills in computer science and epidemiology – and of course, as I discuss above, to collaborate with people who have those skills.
- And there is a paper just out in Wellcome Open Research, where Angus Lamond and colleagues have carried out a remarkable multi-dimensional quantitative proteomics study to look at how a cell's proteome is remodelled after oncogene-driven transformation. I recommend taking a look at Figure 10, one of the first examples of an interactive figure in Wellcome Open Research.
This article was amended on 21 October 2021 to remove an out-of-date PDF download.