Confusing Nature article on peer-review scams

Posted on December 7, 2014

Nature has recently published a news feature in which the authors, all associated with the Retraction Watch blog, discuss some cases where the peer-review system has been abused. The article includes a range of confusing statements that, instead of exposing the real flaws in the review processes so that others can avoid them, hide these flaws under a smokescreen of claims about alleged vulnerabilities of publishing software.

The article begins by describing a case where a scientist called Hyung-In Moon “provided names, sometimes of real scientists and sometimes pseudonyms, often with bogus e-mail addresses that would go directly to him or his colleagues” when asked by a journal to provide suggestions for reviewers for his papers.  The article then says: “Moon’s was not an isolated case. In the past 2 years, journals have been forced to retract more than 110 papers in at least 6 instances of peer-review rigging. What all these cases had in common was that researchers exploited vulnerabilities in the publishers’ computerized systems to dupe editors into accepting manuscripts, often by doing their own reviews.” This suggests that Moon was also exploiting a vulnerability in the publisher’s computerized system.

The article then presents another case. “[…] Ali Nayfeh, then editor-in-chief of the Journal of Vibration and Control, received some troubling news. An author who had submitted a paper to the journal told Nayfeh that he had received e-mails about it from two people claiming to be reviewers. Reviewers do not normally have direct contact with authors, and — strangely — the e-mails came from generic-looking Gmail accounts rather than from the professional institutional accounts that many academics use […]”. This led to an investigation that found 130 suspicious-looking accounts in the publication management system, which both reviewed and cited each other at an anomalous rate, and 60 articles that showed evidence of peer-review tampering, involvement in the citation ring, or both, with one author at the centre of the ring.

Is the software to blame?

The article explains these cases as follows: “Moon and Chen both exploited a feature of ScholarOne’s automated processes. When a reviewer is invited to read a paper, he or she is sent an e-mail with login information. If that communication goes to a fake e-mail account, the recipient can sign into the system under whatever name was initially submitted, with no additional identity verification.”

In fact, Moon was not exploiting a vulnerability of some computer software, but vulnerabilities in the publisher’s process, which were independent of the software used by the publisher to manage the review process. These two vulnerabilities were that the publisher (through the editors) asked authors to suggest reviewers, and that the publisher did not properly check the credentials of reviewers or verify that the email addresses used by the system actually belonged to the persons selected as reviewers or matched those in their publications.

The quoted explanation of the feature of the ScholarOne software does not explain how the process was flawed, but only confuses the reader. How can the invitation sent by email to a reviewer go to a fake email account? Was there a redirection of the email through some tampering of the network, or was the email address wrong from the start? What makes an email account fake, as opposed to just another email account? Who initially submitted a name?

What the Nature article seems to describe is a situation where the editors want to invite a particular scientist by sending an email to a particular address, but this address is not actually used by the selected scientist. If this is the case, what made the editors use a wrong address, and how is the ScholarOne software to blame? If the editors get a wrong email address for some scientist, it is irrelevant whether the invitation is transmitted through ScholarOne or through any other software capable of sending an email. In the Moon case, it appears that Moon introduced wrong email addresses while suggesting reviewers. In the Chen case, who introduced the wrong email addresses? Does ScholarOne provide, independently of the editors, a database of email addresses of scientists, and does it suggest that editors trust that these email addresses actually belong to the scientists in the database, possibly identified by name and affiliation? If not, then the responsibility for using a particular email address lies with the editor or the publisher. A ScholarOne user guide (see pp. 24-25) suggests that the software has such a database, but it is not clear whether the information in this database is provided by ScholarOne independently of the editors of a particular journal or publisher, or is just what the editors saved there. Since ScholarOne is provided by the same company that manages Web of Science (Thomson Reuters), does it crosscheck the emails of potential reviewers against the emails of corresponding authors of articles in Web of Science? If not, why not? What additional identity verification should be performed by users of ScholarOne? The Nature article does not explain any of these issues.

An extra source of confusion is the story about alleged reviewers contacting an author. This issue seems unrelated to that of falsifying the identity of reviewers. What was the alleged reviewers’ purpose in doing this? Were they trying to recruit a real person into the review ring? Review rings are an issue independent of fake identities, because they might be composed entirely of real persons. Again, the Nature article does not shed any light on these issues.

The Nature article continues by telling how Elsevier is taking steps to prevent reviewer fraud by consolidating accounts across journals and by integrating the use of ORCIDs. But how is the risk of fraud reduced by consolidating accounts across journals? Wouldn’t consolidated accounts expose editors to using wrong emails introduced by other people, by trusting data in these accounts without knowing how reliable it is, as in the putative case of the ScholarOne database discussed above? How does the use of ORCIDs decrease the risk of fraud, compared with the use of emails as IDs of persons? (A short answer: not much; see below.) As in the case of Thomson Reuters, Elsevier also has a database associating the emails of authors with scientific publications (Scopus); is it using this database when selecting reviewers? Again, the Nature article does not explain any of this.

Trusting email addresses other than those included in publications requires careful analysis

Science is a global enterprise, and scientific publishers typically interact remotely with their authors and reviewers. For efficiency, the transactions between publishers and reviewers are typically performed online. This leads to challenges in vetting reviewers. Checking traditional forms of identification, such as government-issued IDs, does not fit the typical workflows of publishers. If so, what mechanisms should be used for identifying reviewers?

Peer review of scientific publications implies the assessment of publications by other experts. What defines somebody as an expert suitable for reviewing publications is the expert’s own publications. Publications typically include the email address of the corresponding author, thereby creating an association between the publication, the name of the corresponding author, her/his affiliation, and her/his email address. If the set of publications associated with an email address is relevant enough to establish the putative author as an expert, then this email address can be used to identify a potential reviewer, because the associations create a link between the email and the sought expertise.

If the email address that is about to be used by an editor for inviting a reviewer cannot be associated with a set of relevant publications, then the editors must carefully analyse the available information in order to assess the probability that the new email address belongs to the putative person and is linked to publications putatively authored by this person. If the editors or the publishers do not perform this analysis, the responsibility is entirely theirs and not that of the publishing software.

How software can help avoid misuse

In fact, software can help by automatically suggesting reviewers, given information about the publication under review (its references and text). This avoids asking authors to suggest reviewers, as in the Moon case, a practice that obviously creates a conflict of interest.
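As a minimal sketch of this idea, the following Python function ranks candidate reviewers by the overlap between a submission’s reference list and each candidate’s own publications. The data structures and names are illustrative assumptions, not any publisher’s actual system; a real implementation would also use text similarity and conflict-of-interest checks.

```python
# Hypothetical sketch: suggest reviewers whose own publications overlap
# with the submission's reference list. Paper IDs and candidate names
# are illustrative placeholders.

def suggest_reviewers(submission_refs, candidate_pubs, top_n=3):
    """submission_refs: set of paper IDs cited by the submission.
    candidate_pubs: dict mapping candidate name -> set of authored paper IDs.
    Returns up to top_n candidate names, best overlap first."""
    scores = {name: len(submission_refs & pubs)
              for name, pubs in candidate_pubs.items()}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    # Keep only candidates with at least one overlapping publication.
    return [name for name, score in ranked[:top_n] if score > 0]
```

For example, a submission citing papers `p1`, `p2`, and `p3` would match a candidate who authored `p1` and `p2`, while a candidate with no cited papers would be excluded.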

Software can also help by searching the newly-introduced email addresses of potential reviewers in publication databases that include the email addresses of authors, thereby validating the associations between an email address and publications authored by somebody who used that email address.
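A sketch of such a check, under the assumption of a publication database whose records carry a `corresponding_email` field (a hypothetical schema, not a real API), could look like this:

```python
# Illustrative sketch: before trusting a newly introduced reviewer email,
# look it up among corresponding-author emails in a publication database.
# The record fields ("corresponding_email", "title") are assumptions.

def publications_matching_email(email, pub_db):
    """pub_db: iterable of dicts, each describing one publication.
    Returns the records whose corresponding-author email matches, so
    editors can judge whether the address belongs to a real expert."""
    wanted = email.strip().lower()
    return [rec for rec in pub_db
            if rec.get("corresponding_email", "").lower() == wanted]
```

An empty result would be the signal for the careful manual analysis described above, rather than for silently trusting the address.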

The focus on fake identities crowds out a proper discussion of review and citation rings, which can also be composed of real persons who unethically agree to support one another. If a review and citation ring including some non-existing persons was able to publish at least 60 papers in the Chen case, then the same can happen with rings composed entirely of real persons. Again, software based on the current state of the art in network science and machine learning can pinpoint potential unethical review and citation rings, given information about citation networks and reviewers.
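To make the network-analysis idea concrete, here is a deliberately simple Python sketch (not a production fraud detector) that flags pairs of authors who repeatedly cite each other and groups such pairs into connected clusters, which would then be candidates for manual investigation. The input format and threshold are assumptions for illustration.

```python
from collections import defaultdict

# Illustrative sketch: authors who cite each other at least min_mutual
# times form "reciprocal" edges; connected clusters of such edges are
# candidate citation rings, to be reviewed by humans, not auto-sanctioned.

def find_reciprocal_clusters(citations, min_mutual=2):
    """citations: dict author -> dict of cited_author -> citation count.
    Returns a list of sets of authors linked by reciprocal citation."""
    edges = set()
    for a, cited in citations.items():
        for b, n in cited.items():
            if (a < b and n >= min_mutual
                    and citations.get(b, {}).get(a, 0) >= min_mutual):
                edges.add((a, b))

    # Union-find over the reciprocal edges to form clusters.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    clusters = defaultdict(set)
    for a, b in edges:
        clusters[find(a)].update({a, b})
    return list(clusters.values())
```

Real ring-detection systems would add reviewer assignments, timing, and statistical baselines for normal mutual citation, but the core signal, anomalously reciprocal behaviour in a network, is the same.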

Thus, although the Nature article blames software for the peer-review scams that it discusses, software can in fact help prevent such scams. The scams described in the article were caused, rather, by the negligence of publishers and editors.

Would ORCID help?

The use of ORCID would not improve the situation much, at least in the short term. For ORCIDs to be used instead of email addresses for identifying potential reviewers, there must be a certified association between ORCIDs and publications, similar to how email addresses are currently published within publications. Many publishers now allow authors to associate their ORCIDs with their publications; however, the percentage of publications with associated ORCIDs is currently very small. Then there is the challenge of associating ORCIDs with actual persons. Anyone can create an ORCID account with a made-up email address and by hijacking the name of somebody else, similarly to how email accounts can be created using the name of somebody else, as in the scams discussed here.

ORCID will allow organizations to associate themselves with the ORCIDs of individuals they actually employ, and this will help identify individuals as long as the organizations creating these associations can be trusted. Again, this mechanism has to gain wider adoption before it can be used on a large scale when selecting reviewers. It remains to be seen how large the adoption of this mechanism will become, and it is unlikely to generalize because organizations have to pay ORCID to participate.