The conversion rate charts quantify how individuals respond to invitations to peer review.
Invited-to-agreed conversion rate vs. invited-to-completed conversion rate
The invited-to-agreed and invited-to-completed conversion rates are typically used to examine the health of your reviewer database. Invited-to-agreed measures the percentage of review invitations that led to an agreement to review. Invited-to-completed measures the percentage of review invitations that led to completed reviews. The invited-to-agreed conversion rate chart, of course, does not show whether the reviewers who agreed ultimately submitted their reviews.
Higher conversion rates indicate a healthy reviewer database: one containing individuals who, more often than not, are willing to provide peer reviews when asked. Higher conversion rates may also indicate that your editors are effective at selecting reviewers who are likely to agree to review. Conversely, a lower conversion rate may indicate that many individuals currently in your reviewer database are less predisposed to agree to peer review. Note that for an incomplete (current) year, especially early in the year, the conversion rate may appear lower simply because invitations are still open, whereas in completed years the data is historic and the invitation and review process has run its course.
Just like the invited-to-agreed conversion rate chart, the invited-to-completed conversion rate chart gives an indication of the health of your reviewer database. It poses a slightly different question: for the number of invitations you sent to reviewers, how many completed reviews were received? This chart does not account for reviewers who agreed but never submitted a review; it shows the final outcome of your reviewer invitations. Incomplete (current) years will likely show a lower conversion rate because some invitations remain open and some accepted reviews have not yet been submitted. In practice, the invited-to-completed chart is best used historically rather than to measure the current year.
Which chart should I use?
The general trends of the invited-to-agreed and invited-to-completed conversion rate charts should be similar, unless you have many reviewers who agree to review but never submit. Either chart provides insight into how your reviewers are responding to your invitations. You can, of course, use both; otherwise, which one you rely on comes down to personal preference.
Agreed-to-completed conversion rate
The agreed-to-completed conversion rate is a third measurement, slightly different from the previous two, in that it indicates the reliability of your reviewers: how likely they are to submit their reviews after agreeing to review.
A conversion rate of 100% would be ideal, indicating that every reviewer who agreed eventually submitted their review. What does it mean if your journal has a lower agreed-to-completed conversion rate? It might mean that your reviewers are overcommitting themselves. For instance, your editors may be over-relying on a small cohort of trusted reviewers who are always willing to agree to review but then find themselves overcommitted because of how often they are asked. Equally, if your deadlines for completing reviews are very short, some reviewers may simply take too long and miss them. A confounder for this metric is editors who are prone to terminating the peer review process early (i.e. editors who choose to make a decision based on the first reviews received rather than waiting for all reviewers to complete their evaluations). Under this scenario, reviewers who had not yet finished their review when the decision was made will show up as incomplete reviews, which deflates the agreed-to-completed conversion rate.
General observations on conversion rates at journals
It is commonly known that finding reviewers willing to complete the assessment of a manuscript is getting harder. These conversion rate charts are the best indicator of whether your journal is increasingly susceptible to reviewer apathy, or outright antipathy, toward providing peer reviews. The charts won't explain why reviewers are declining at higher rates, but they will certainly measure the trend. It is now not uncommon to see journals routinely drop below a 50% conversion rate; in other words, for every reviewer required, two or more invitations must be sent to potential reviewers, as the sketch below illustrates.
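To make that arithmetic concrete, here is a minimal sketch in Python (the function and the figures are hypothetical illustrations, not drawn from any particular peer review system) of how a conversion rate translates into the number of invitations that must be sent:

```python
import math

def invitations_needed(reviewers_required: int, conversion_rate: float) -> int:
    """Estimate the invitations that must be sent to secure the required
    number of reviewers, assuming the historical conversion rate holds."""
    return math.ceil(reviewers_required / conversion_rate)

# At a 50% rate, securing 2 reviewers takes about 4 invitations;
# if the rate slips to 33%, the same 2 reviewers take about 7.
print(invitations_needed(2, 0.50))  # 4
print(invitations_needed(2, 0.33))  # 7
```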
Possible solutions for improving your conversion rates
There are a number of ways your journal can try to improve its conversion rates. First, try to ensure that your reviewer database is as clean as possible. This includes removing duplicate user accounts, removing reviewers who have not agreed to review in an extended period of time, and removing reviewers who have never agreed to review despite receiving numerous invitations. It might also be time to recruit new reviewers, especially if your submissions have been steadily increasing. Someone from your journal might contact a group of potential reviewers beforehand to determine whether they would be willing to peer review in the future as their schedules allow. Individuals who commit to reviewing at some point in the future can be flagged in your system so that they are easily identifiable to your editors. This is often more effective than sending invitations to people outside the journal's area of specialization or, where a society backs the journal, outside the society's membership (members being more predisposed to help their society's journal), since individuals rarely agree to review when they have no previous relationship with the journal. Remind editors that Editorial Board Members should be part of their standard reviewer pool, as these reviewers have excellent credentials and strong ties to the journal.
Some journals have found that when all incoming submissions are triaged by the Editor-in-Chief or a senior editor, low-quality submissions can be immediately rejected. The editors are thus less encumbered, and the strain on your reviewer pool is reduced. Low-quality manuscripts do seem to require a greater number of invitations to secure the needed reviewer agreements; many reviewers can gauge the likelihood of eventual publication from a manuscript's abstract. Reviewers are typically more willing to commit their time to interesting manuscripts, hot topics, areas directly aligned with their research interests, and papers that seem obviously destined for publication (where the request to peer review amounts to polishing an eminently publishable work). Finally, try to expand your reviewer pool's diversity to match the diversity of your incoming submissions. Manuscripts that examine specialized populations are more likely to be of interest to reviewers from those populations.
It is very easy to fixate on reviewers and their annually diminishing willingness to review as the cause of declining conversion rates. However, editors themselves are arguably just as important to successfully securing reviewers. A good editor is likely adept at picking the right people for the right manuscript. They may have an extensive network of contacts to call upon, something more junior editors may lack. Thought leaders in a given field may have greater success at securing commitments by leveraging that position to convince potential reviewers to agree. Editor engagement with a journal, and the training editors receive, is also critical. All journals at some time or another have editors who seem disengaged: they put in minimal effort to match the best reviewers to a manuscript under review, pick the same people over and over, fail to heed warnings about picking certain people, or simply do not know how to use the peer review management system properly, missing out on keyword or subject term searches and the various reviewer-suggestion functions these systems provide. Some editors swear by the effectiveness of adding a personal note to a reviewer invitation to convince a wavering potential reviewer to commit; such notes typically explain why that reviewer is being asked to review the particular manuscript in hand. Though more formal studies need to be conducted, it seems highly probable that an editor who puts little effort into finding good reviewers, ignores advice on picking certain people based on past performance data, and relies on the standard system email invitation is more likely to come up short in securing the mandatory minimum number of reviewers for a given manuscript.
Conversion rate calculations
Invited-to-agreed conversion rate = (# of reviewers who agreed to review) / (# of reviewer invitations sent)
Invited-to-completed conversion rate = (# of reviews completed) / (# of reviewer invitations sent)
Agreed-to-completed conversion rate = (# of reviews completed) / (# of reviewers who agreed to review)
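As a minimal sketch of these three calculations (the function and the yearly counts below are hypothetical, not taken from any particular peer review system):

```python
def conversion_rates(invited: int, agreed: int, completed: int) -> dict:
    """Compute the three conversion rates from raw yearly counts.

    invited   -- number of reviewer invitations sent
    agreed    -- number of reviewers who agreed to review
    completed -- number of reviews actually submitted
    """
    return {
        "invited-to-agreed": agreed / invited,
        "invited-to-completed": completed / invited,
        "agreed-to-completed": completed / agreed,
    }

# Hypothetical year: 400 invitations, 180 agreements, 150 completed reviews.
for name, rate in conversion_rates(400, 180, 150).items():
    print(f"{name}: {rate:.0%}")
# invited-to-agreed: 45%
# invited-to-completed: 38%
# agreed-to-completed: 83%
```

Note that, as discussed above, the denominators for an incomplete (current) year still include open invitations, so all three rates will tend to understate the eventual historical figures.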