Celebrating Peer Review Week

Posted on September 28, 2015


Peer Review Week (September 28th – October 4th) is an occasion to celebrate an activity that is a keystone of science. We invite scientists to join us for this celebration by publishing on Epistemio a post-publication peer review or rating of a scientific publication they have read lately. Publishing a rating takes no more than one minute:

  • Log in or sign up;
  • Search for the publication you would like to rate, for example by typing its title;
  • Add the rating;
  • Optionally, add a review that supports your rating.

Ratings and reviews may be either anonymous or signed.

By providing such ratings, you contribute to building peer-review-based metrics of the quality and importance of individual publications, avoiding the problems of current indicators described, for example, in the San Francisco Declaration on Research Assessment (DORA).

A new scale for rating scientific publications

Posted on July 23, 2015

We are officially announcing the launch of a new scale for rating scientific publications, which scientists may use to contribute to the assessment of the publications they read. The rating scale was presented at ISSI 2015, the 15th International Society of Scientometrics and Informetrics Conference, recently held in Istanbul, Turkey (Florian, 2015). Scientists can now use this scale to rate publications on the Epistemio website.

The use of metrics in research assessment is widely debated. Metrics are often seen as antagonistic to peer review, which remains the primary basis for evaluating research. Nevertheless, metrics can actually be based on peer review, by aggregating ratings provided by peers. This requires an appropriate rating scale.


Online ratings typically take the form of a five-star or ten-star discrete scale: this standard has been adopted by major players such as Amazon, Yelp, TripAdvisor and IMDb. However, such scales do not measure the quality and importance of scientific publications well, because the actual distribution of this target variable is likely highly skewed. Extrapolating from the distributions of bibliometric indicators, the maximum values of the target variable are likely 3 to 5 orders of magnitude larger than the median value.

A solution to this conundrum is to ask reviewers to assess not the absolute value of quality and importance, but the relative value, on a percentile ranking scale. On such a scale, the best paper is not represented by a number several orders of magnitude larger than the one representing the median paper, but by one only 2 times larger (100% for the best paper vs. 50% for the median paper).
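
To see why a percentile scale compresses the range so dramatically, here is a minimal sketch in Python; the log-normal distribution is assumed purely for illustration and is not the actual distribution of publication quality:

    import numpy as np
    from scipy import stats

    # Hypothetical, heavily skewed "quality" values (log-normal chosen only for illustration)
    rng = np.random.default_rng(0)
    quality = rng.lognormal(mean=0.0, sigma=2.5, size=100_000)

    # On the raw scale, the best value is several orders of magnitude above the median...
    print(quality.max() / np.median(quality))

    # ...but on a percentile-ranking scale it is only about twice the median (100% vs. 50%).
    percentiles = 100 * stats.rankdata(quality, method="average") / len(quality)
    print(percentiles.max() / np.median(percentiles))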

It is typically possible to estimate the percentile ranking of high-quality papers with better precision than that of lower-quality papers (e.g., it is easier to discriminate between top 1% and top 2% papers than between top 21% and top 22% papers). The precision of a percentile-ranking assessment therefore varies across the scale. Reviewers may also be more or less familiar with the field of the assessed publication, so it is useful for them to be able to express their uncertainty. The solution adopted for the new scale is to let reviewers provide the rating as an interval of percentile rankings rather than a single value. Scientists can additionally publish on Epistemio reviews that support their ratings.
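
As an illustration of how interval ratings might be represented and combined, here is a small sketch; the weighting scheme below is only an assumption made for the example, not the aggregation method actually used by Epistemio (see Florian, 2012, 2015):

    from dataclasses import dataclass

    @dataclass
    class IntervalRating:
        low: float   # lower bound of the percentile ranking (0-100)
        high: float  # upper bound of the percentile ranking (0-100)

    def aggregate(ratings):
        """Illustrative aggregation: average the interval midpoints, giving
        narrower (more confident) intervals a larger weight."""
        weights = [1.0 / max(r.high - r.low, 1.0) for r in ratings]
        midpoints = [(r.low + r.high) / 2.0 for r in ratings]
        return sum(w * m for w, m in zip(weights, midpoints)) / sum(weights)

    # A confident rating ("top 1-3%") and a more uncertain one ("top 5-25%"):
    print(aggregate([IntervalRating(97, 99), IntervalRating(75, 95)]))  # ~96.8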

The aggregated ratings could provide evaluative information regarding scientific publications that is much better than what is available through current methods. Importantly, if ratings are provided voluntarily by scientists for publications they are reading for the purpose of their own research, publishing such ratings entails a minor effort from scientists, of about 2 minutes per rating. Each scientist reads thoroughly, on average, about 88 scientific articles per year, and the evaluative information that scientists can provide about these articles is currently lost.  If each scientist would provide one rating weekly, it can be estimated that 52% of publications would get 10 ratings or more (Florian, 2012). This would be a significant enhancement for the evaluative information needed by users of scientific publications and by decision makers that allocate resources to scientists and research organizations.
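
As a rough indication of how such coverage estimates can be computed, here is a toy model with placeholder numbers; it is not the model of Florian (2012), which accounts for the skewed distribution of readership. If ratings fell on papers independently at a given average rate, the share of papers with at least 10 ratings would follow from a Poisson distribution:

    from math import exp, factorial

    def share_with_at_least(k: int, mean_ratings_per_paper: float) -> float:
        """P(X >= k) for X ~ Poisson(mean_ratings_per_paper)."""
        return 1.0 - sum(
            exp(-mean_ratings_per_paper) * mean_ratings_per_paper ** i / factorial(i)
            for i in range(k)
        )

    # For example, at an average of 10 ratings per paper, about 54% of papers
    # would receive 10 or more ratings under this simplified model.
    print(share_with_at_least(10, 10.0))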

Indicators that aggregate peer-provided ratings solve some of the most important problems of bibliometric indicators:

  • citation-based indicators need to be normalized across fields because of differences in common practices (e.g., the median impact factor or the median number of citations is larger in biology than in mathematics), yet widely available bibliometric indicators are not normalized by their providers (a minimal normalization sketch follows this list);
  • in some fields, scientific journals are not the only relevant channel for publishing results, but other types of publications (books, conference papers) are covered poorly by the commercially available databases; this is unfair to those fields, or it forces arbitrary comparisons between different types of indicators.
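
As a minimal sketch of the field-normalization idea mentioned in the first point (with made-up citation samples; real normalization would draw on a bibliometric database):

    from bisect import bisect_left

    # Placeholder citation-count samples per field, sorted ascending.
    field_citations = {
        "biology":     [0, 2, 5, 8, 12, 20, 35, 60, 110, 400],
        "mathematics": [0, 0, 1, 1, 2, 3, 4, 6, 9, 25],
    }

    def field_percentile(field: str, citations: int) -> float:
        """Replace a raw citation count by its percentile within its own field."""
        sample = field_citations[field]
        return 100.0 * bisect_left(sample, citations) / len(sample)

    # The same raw count of 12 citations is only average in biology,
    # but near the top in mathematics:
    print(field_percentile("biology", 12), field_percentile("mathematics", 12))  # 40.0 90.0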

Indicators that aggregate peer-provided ratings make possible the unbiased comparison of publications from any field and of any type (journal papers, whether or not they are indexed in the major databases; conference papers; books; chapters; preprints; software; data), regardless of the publication’s age and of whether it has received any citations.

How scientists can provide ratings

To start rating the publications that you have read:

  • Search for the publication on Epistemio;
  • Click on the publication title to go to the publication’s page on Epistemio, and add your rating. Optionally, publish a review to support your rating;
  • If you are not logged in, please log in or sign up to save your rating and review.

Ratings and reviews may be signed or anonymous.

How research managers can use ratings in institutional research assessments

If the publications you would like to assess have not received enough ratings from scientists who read them and volunteered to publish their ratings on Epistemio, our Research Assessment Exercise service can select qualified reviewers and provide a sufficient number of ratings for the publications you would like assessed.

References

Florian, R. V. (2012). Aggregating post-publication peer reviews and ratings. Frontiers in Computational Neuroscience, 6, 31.

Florian, R. V. (2015). A new scale for rating scientific publications. In Proceedings of ISSI 2015: 15th International Society of Scientometrics and Informetrics Conference (pp. 419–420). Istanbul, Turkey: Boğaziçi University.

Confusing Nature article on peer-review scams

Posted on December 7, 2014

Nature recently published a news feature in which the authors, all associated with the Retraction Watch blog, discuss cases where the peer review system has been abused. The article includes a range of confusing statements that, instead of exposing the real flaws in the review processes so that others can avoid them, hide those flaws under a smokescreen of claims about alleged vulnerabilities of publishing software.

The article begins by describing a case where a scientist called Hyung-In Moon “provided names, sometimes of real scientists and sometimes pseudonyms, often with bogus e-mail addresses that would go directly to him or his colleagues” when asked by a journal to provide suggestions for reviewers for his papers.  The article then says: “Moon’s was not an isolated case. In the past 2 years, journals have been forced to retract more than 110 papers in at least 6 instances of peer-review rigging. What all these cases had in common was that researchers exploited vulnerabilities in the publishers’ computerized systems to dupe editors into accepting manuscripts, often by doing their own reviews.” This suggests that Moon was also exploiting a vulnerability in the publisher’s computerized system.

The article then presents another case. “[…] Ali Nayfeh, then editor-in-chief of the Journal of Vibration and Control, received some troubling news. An author who had submitted a paper to the journal told Nayfeh that he had received e-mails about it from two people claiming to be reviewers. Reviewers do not normally have direct contact with authors, and — strangely — the e-mails came from generic-looking Gmail accounts rather than from the professional institutional accounts that many academics use […]”. This led to an investigation that found 130 suspicious-looking accounts in the publication management system, which were both reviewing and citing each other at an anomalous rate, and 60 articles with evidence of peer-review tampering, of involvement in the citation ring, or of both, with one author at the centre of the ring.

Is the software to blame?

The article explains these cases as follows: “Moon and Chen both exploited a feature of ScholarOne’s automated processes. When a reviewer is invited to read a paper, he or she is sent an e-mail with login information. If that communication goes to a fake e-mail account, the recipient can sign into the system under whatever name was initially submitted, with no additional identity verification.”

In fact, Moon was not exploiting a vulnerability of some computer software, but vulnerabilities in the publisher’s process, which were independent of the software used to manage the review process. These two vulnerabilities were that the publisher (through the editors) asked authors to suggest reviewers, and that the publisher did not properly check the credentials of reviewers or the association of the email addresses used by the system with the actual persons selected as reviewers or with their publications.

The quoted explanation of the feature of the ScholarOne software does not explain how the process was flawed, but only confuses the reader. How can the invitation sent by email to a reviewer go to a fake email account? Was there a redirection of the email through some tampering of the network, or was the email address wrong from the start? What makes an email account fake, as opposed to just another email account? Who initially submitted a name?

What the Nature article seems to describe is a situation where the editors want to invite a particular scientist by sending an email to a particular address, but this address is not actually used by the selected scientist. If this is the case, what made the editors use a wrong address, and how is the ScholarOne software to blame? If the editors get a wrong email address for some scientist, it is irrelevant whether the invitation is transmitted through ScholarOne or through any other software capable of sending an email. In the Moon case, it appears that Moon introduced wrong email addresses while suggesting reviewers. In the Chen case, who introduced the wrong email addresses?

Does ScholarOne provide, independently of the editors, a database of email addresses of scientists, and does it suggest that editors trust that these addresses actually belong to the scientists in the database, possibly identified by name and affiliation? If not, then the responsibility for using a particular email address belongs to the editor or the publisher. A ScholarOne user guide (see pp. 24-25) suggests that the software has such a database, but it is not clear whether the information in it is provided by ScholarOne independently of the editors of a particular journal or publisher, or whether it is just what the editors saved there. Since ScholarOne is provided by the same company that manages Web of Science (Thomson Reuters), does it crosscheck the emails of potential reviewers against the emails of corresponding authors of articles in Web of Science? If not, why not? What additional identity verification should be performed by users of ScholarOne? The Nature article does not explain any of these issues.

An extra source of confusion is the story about the alleged reviewers contacting an author. This issue seems unrelated to that of falsifying the identity of reviewers. What was the alleged reviewers’ purpose in doing this? Were they trying to recruit a real person into the review ring? Review rings are an issue independent of fake identities, because they might be composed entirely of real persons. Again, the Nature article does not shed any light on any of these issues.

The Nature article continues by telling how Elsevier is taking steps to prevent reviewer fraud by consolidating accounts across journals and by integrating the use of ORCIDs. But how does consolidating accounts across journals reduce the risk of fraud? Wouldn’t consolidated accounts expose editors to using wrong emails introduced by other people, by trusting data in these accounts without knowing how reliable it is, as in the putative case of the ScholarOne database discussed above? How does the use of ORCIDs decrease the risk of fraud, compared to the use of emails as IDs of persons? (A short answer: not much; see below.) As in the case of Thomson Reuters, Elsevier also has a database associating the emails of authors with scientific publications (Scopus); is it using this database when selecting reviewers? Again, the Nature article does not explain any of this.

Trusting email addresses other than those included in publications requires careful analysis

Science is a global enterprise, and scientific publishers typically interact remotely with their authors and reviewers. For efficiency, the transactions between publishers and reviewers are typically performed online. This creates challenges in vetting reviewers. Checking traditional forms of identification, such as government-issued IDs, does not fit the typical workflows of publishers. What mechanisms, then, should be used for identifying reviewers?

Peer review of scientific publications means the assessment of publications by other experts, and what defines somebody as an expert suitable for reviewing is that person’s own publications. Publications typically include the email address of the corresponding author, thereby creating an association between the publication, the name of the corresponding author, her/his affiliation, and her/his email address. If the set of publications associated with an email address is relevant enough to establish the author behind that address as an expert, then the address can be used to identify a potential reviewer, because the association creates a link between the email address and the sought expertise.

If the email address that an editor is about to use for inviting a reviewer cannot be associated with a set of relevant publications, then the editors must carefully analyze the available information to assess how likely it is that the new email address belongs to the putative person and is associated with publications putatively authored by that person. If the editors or the publishers do not perform this analysis, the responsibility is entirely theirs, not the publishing software’s.

How software can help avoid misuse

In fact, software can help by automatically suggesting reviewers, given information about the publication to review (its references and text). This avoids asking authors to suggest reviewers, as in the Moon case, which obviously creates a conflict of interest.
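
A minimal sketch of this idea, using made-up data rather than any publisher’s actual system: candidates whose own work is cited most often by the submission are proposed first, after excluding the submission’s authors.

    from collections import Counter

    def suggest_reviewers(cited_author_lists, submission_authors, top_n=5):
        """cited_author_lists: one list of author names per reference of the submission."""
        counts = Counter(
            author
            for authors in cited_author_lists
            for author in authors
            if author not in submission_authors  # avoid an obvious conflict of interest
        )
        return [name for name, _ in counts.most_common(top_n)]

    refs = [["A. Smith", "B. Jones"], ["B. Jones"], ["C. Lee"], ["B. Jones", "C. Lee"]]
    print(suggest_reviewers(refs, submission_authors={"A. Smith"}))  # ['B. Jones', 'C. Lee']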

Software can also help by searching publication databases that include the email addresses of authors for the newly introduced email addresses of potential reviewers, thereby validating the association between an email address and publications authored by somebody who used that address.
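
For example, the check could look roughly like the following sketch; the index here is a hypothetical stand-in for a bibliographic database such as Web of Science or Scopus:

    def is_credible_reviewer(email, author_email_index, min_publications=2):
        """Trust a candidate's email only if it appears as a corresponding-author
        email on a sufficient number of publications."""
        return len(author_email_index.get(email.lower(), [])) >= min_publications

    # Hypothetical index mapping corresponding-author emails to publication IDs.
    index = {"jane.doe@university.edu": ["pub-1", "pub-2", "pub-3"]}

    print(is_credible_reviewer("jane.doe@university.edu", index))  # True
    print(is_credible_reviewer("reviewer123@gmail.com", index))    # False: no publication record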

The article’s focus on fake identities crowds out a proper discussion of review and citation rings, which can also be composed of real persons who unethically agree to support each other. If a review and citation ring that included some non-existing persons was able to publish at least 60 papers, as in the Chen case, the same can happen with rings composed entirely of real persons. Again, software based on the current state of the art in network science and machine learning can pinpoint potentially unethical review and citation rings, given information about citation networks and reviewers.
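
As a toy illustration of the network-analysis idea (with invented data): authors who review and cite one another reciprocally form unusually dense, strongly connected groups in a directed graph, which can be flagged for human scrutiny.

    import networkx as nx

    # An edge u -> v means "u reviewed or cited v"; the data is invented for the example.
    edges = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "C"), ("B", "A"),  # a tight ring
             ("D", "E"), ("E", "F")]                                      # ordinary citations
    g = nx.DiGraph(edges)

    # Strongly connected groups of 3 or more authors are candidates for closer inspection.
    suspicious = [scc for scc in nx.strongly_connected_components(g) if len(scc) >= 3]
    print(suspicious)  # e.g. [{'A', 'B', 'C'}]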

Thus, although the Nature article blames software for the peer-review scams it discusses, software can in fact help prevent such scams. The scams described in the article were caused by the negligence of publishers and editors.

Would ORCID help?

The use of ORCID would not improve the situation much, at least in the short term. For ORCIDs to be used instead of email addresses for identifying potential reviewers, there must be a certified association between ORCIDs and publications, similar to how email addresses are currently published within publications. Many publishers now allow authors to associate their ORCIDs with their publications; however, the percentage of publications with associated ORCIDs is currently very small. Then there is the challenge of associating ORCIDs with actual persons. Anyone can create an ORCID account with a made-up email address and by hijacking somebody else’s name, just as email accounts can be created in somebody else’s name, as in the scams discussed here.

ORCID will allow organizations to associate themselves with the ORCIDs of individuals they actually employ, and this will help identify individuals as long as the organizations creating these associations can be trusted. Again, this mechanism has to gain wider adoption before it can be used on a large scale when selecting reviewers. It remains to be seen how widely it will be adopted, and it is unlikely to generalize because organizations have to pay ORCID to participate.