Assessing the current state of systematic reviews: Matthew Page shares his insights

An increasing number of systematic reviews are being published, but many are poorly conducted and reported. This is the key finding of a landmark study published in PLoS Medicine last month that sheds new light on the quantity and quality of systematic reviews undertaken over the last decade. The cross-sectional study was completed by an international team of researchers, led by Cochrane contributors David Moher and Matthew Page. Matthew completed his PhD at Cochrane Australia in 2015 and is currently an NHMRC Early Career Fellow based at the University of Bristol. Here, he joins Cochrane Library Senior Editor Toby Lasserson to share his insights into the scope, methods and findings of this important new research. 


TL: Can you tell us a bit about the study that you undertook? What were you comparing?

We performed a cross-sectional study to find out how many systematic reviews of biomedical research are being published, what questions they are addressing (e.g. therapeutic efficacy, diagnostic test accuracy, or prognosis), and how well the methods and results are reported. This was a 10-year update of a landmark study by Moher and colleagues published in PLoS Medicine in 2007, which summarised the characteristics of a 2004 sample of reviews. We looked for all systematic reviews added to MEDLINE during one month (February 2014) and recorded the characteristics of these reviews. We wanted to know how the frequency and quality of systematic reviews have changed over the decade, and whether reporting quality is associated with being a Cochrane Review or with self-reported use of the PRISMA Statement.

TL: You identified 682 systematic reviews in this study. How did you assess their quality?

We assessed the quality of systematic reviews using a standardised data collection form that included 88 items. The items were informed by PRISMA and MECIR, and addressed all components of the systematic review process, including reporting of eligibility criteria, search methods, risk of bias assessment, statistical analyses employed, and details such as funding and conflicts of interest. We captured not only whether a method was reported, but also which specific method was used (e.g. rather than just recording whether risk of bias was assessed, we recorded how many authors were involved in the assessment and which risk of bias tool was used).

TL: One of the interesting findings is the increase in the number of systematic reviews published. Roughly how many more are being published now, and why do you think that they are more common?  

Compared to the 2004 sample, we found a more than threefold increase in the production of systematic reviews indexed in MEDLINE - from about 2,500 systematic reviews in 2004 to more than 8,000 in 2014. I think there are many possible reasons for the increase. For example, the scientific community and healthcare providers may have increasingly recognised the need to integrate the massive amount of published research in a systematic way; and some funding agencies now require applicants to perform a systematic review to justify their proposal. But also, the proliferation of journals means that authors can more successfully submit a systematic review for publication - regardless of whether another one on the same topic has already been published elsewhere. So there is a big problem of overlapping reviews in the literature.

TL: So, there are more systematic reviews around now - but have they got better overall?

In some ways yes, but in many other ways no. It was really encouraging to see that more review authors are summarising the results of their literature searches in a PRISMA flow diagram and recording the reasons for excluding studies. However, many important aspects of the process - such as defining a primary outcome, presenting a full Boolean search strategy, assessing harms in intervention reviews, and declaring the funding source for the review - were reported either less frequently or only slightly more frequently in the 2014 sample. So there is much more work to be done to increase the transparency of systematic reviews.

TL: There are more guidelines on systematic review production and reporting now than there were in 2004. Do you think researchers are adhering to guidance from initiatives like MECIR and PRISMA? 

Only 29% of authors explicitly stated that they used the PRISMA Statement to guide the conduct or reporting of their review. Encouragingly, authors who stated that they used PRISMA reported their reviews more completely than authors who did not. So I think reporting guidelines have had some impact on the systematic review community, but there is definitely room to improve adherence.

TL: One of the key differences for Cochrane is how our reviews compare with their non-Cochrane counterparts. What sort of differences were there both in terms of their features and their quality?

Cochrane Reviews differed from their non-Cochrane counterparts in numerous ways. On average, Cochrane Reviews included fewer studies, and more often restricted eligibility to randomised trials rather than including non-randomised studies of interventions (NRSI). I suspect that this latter aspect may change with the recent launch of the Risk Of Bias In Non-randomised Studies of Interventions (ROBINS-I) tool, which gives review authors a comprehensive way to assess a wide range of biases in NRSI.

Reporting in Cochrane Reviews was also much more complete than in non-Cochrane Reviews. I think this greater transparency has been influenced by a number of Cochrane-led initiatives, such as MECIR and the standard RevMan template. As an active Cochrane reviewer, I have found the RevMan template invaluable in prompting me to document all components of the systematic review process!

TL: Based on your study, what are the most important areas where systematic reviews need to improve, and how might we achieve that?

Systematic reviews can improve in a number of ways. There could be more detailed reporting of the search strategies used, the methods of data extraction and risk of bias assessment, and funding sources. Risks of bias and study limitations need to be considered more often when reaching conclusions. And there should be much greater use of trials registers such as ClinicalTrials.gov and other sources of unpublished data, to address the problem of reporting bias, which is too often ignored.

I think that in order to address poor reporting of systematic reviews, we need to move towards strategies other than passively disseminating reporting guidelines in journals. Some promising options include developing software to facilitate completeness of reporting, and having journal editors and peer reviewers receive certified training in how to implement reporting guidelines.

TL: As editors, we have a particularly important role in assuring the quality of published systematic reviews. What sort of insights does your study provide about what we do well and where we need to improve?

I think that journals whose editors actively endorse and implement reporting guidelines such as PRISMA or MECIR tend to publish more completely reported reviews. However, as a journal editor myself, I recognise the large amount of (often voluntary) time that goes into such work. Therefore, I think we as editors need to engage with the tech community to develop tools that facilitate these processes. I'm particularly excited by start-up companies such as Penelope, which is developing a web tool that automatically checks manuscripts for missing information!