Next week I will talk about research assessment in Romania at the conference “Technologies Transforming Research Assessment”, organized at the Parliament of Lithuania.
Here is the abstract:
In 2011, the higher education and research systems of Romania undertook major reforms that were praised by the European Commission and led to what Nature editorialists characterized as “exemplary laws and structures for science”. Research assessment was a key focus of these reforms. This included: the introduction of a habilitation process for evaluating an individual’s research achievements, required for eligibility to apply for full professorships at universities; minimal scientometric standards governing eligibility for the various levels of faculty positions and for submitting grant applications to the major research funding programmes; the assessment of grant applications, which started to rely mostly on foreign reviewers; a national assessment exercise for the classification of universities and for the ranking of their study programmes, in which research was a major component; and a national assessment of research institutions. I present the background and the constraints that shaped the design of these research assessment processes, and I discuss the choices that were made. I also discuss some new tools and processes for research assessment that were designed to solve technical problems encountered along the way.
To some extent, Epistemio’s features were informed by the issues encountered during the 2011 reforms of the higher education and research systems of Romania, when I was an adviser to the minister of education and research. For example, the data about scientific publications submitted by universities for the national assessment exercise contained so many errors that the ministry had to request a re-submission. This happened because universities lacked suitable research information systems that would give them accurate information about their own scientific publications. Epistemio Outcomes, which we launched in 2014, solves this problem by helping universities easily aggregate the lists of publications authored by their scientists.
The minimal standards introduced in Romania in 2011 were much discussed, and an important issue was finding suitable standards for scientific domains where citation-based metrics, such as the article influence score, were unavailable or inapplicable. Such domains include computer science and, to some extent, some areas of engineering, where conferences rather than journals are the main vehicle, or an important one, for publishing original research results; and the humanities and some social sciences, where citations are scarce and books are the main vehicle of publication, or an important one. The research councils that designed the minimal standards spent much time trying to find, for these domains, suitable equivalents of the article influence score used for the natural sciences. For example, because there was no article influence score for conferences, an equivalent was established by using a three-category classification of conferences established by the Australian Research Council. Because there was no citation information for books, the National Research Council established an equivalent by counting the number of WorldCat libraries holding each book.
The problem of establishing ad-hoc equivalents between inherently distinct metrics would not have arisen had a common metric been available for all types of publications. An obvious common metric is the rating given by peers. Peer review is the foundation of assessment in science, and metrics based directly on peer review are likely to be much more relevant than other scientometric indicators, which are only weakly connected, through proxy intermediaries, to peer review. This is why Epistemio aims to aggregate ratings and reviews provided by peers, especially by those who read the publications anyway, for their own research.
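To make the point concrete, here is a minimal sketch of why a single peer-rating metric sidesteps the equivalence problem: the same aggregation applies uniformly to journal articles, conference papers, and books. The data model, rating scale, and identifiers below are illustrative assumptions, not Epistemio’s actual implementation.

```python
# Illustrative sketch (assumed data model, not Epistemio's actual one):
# peer ratings as a common metric across publication types.
from statistics import mean

# Hypothetical peer ratings on a 1-5 scale, keyed by (type, identifier).
ratings = {
    ("journal-article", "doi:10.1000/j.001"): [4, 5, 4],
    ("conference-paper", "doi:10.1000/c.002"): [3, 4],
    ("book", "isbn:978-0-00-000000-0"): [5, 4, 5, 3],
}

def aggregate(ratings):
    """Return one comparable score per publication, regardless of type.

    Because the metric is the same for every publication type, no ad-hoc
    equivalent (conference tiers, library holdings) is needed.
    """
    return {
        pub_id: round(mean(scores), 2)
        for (_pub_type, pub_id), scores in ratings.items()
    }

scores = aggregate(ratings)
```

The design point is simply that the aggregation function never branches on the publication type, which is exactly what the ad-hoc equivalents described above had to compensate for.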