The H-index: why it doesn’t really serve to evaluate the quality of a researcher

Everyone is talking about the H-index now, even politicians and journalists. Journalists in particular, under the pressure of events during these three years of pandemic, have been forced to deal with scientific matters often far removed from their own background. In this objectively difficult situation, which public opinion cannot always follow, a set of measures known technically as bibliometric indices has been brought into play, first and foremost the Hirsch index (H-index), until then the exclusive preserve of the scientific and academic world.

Rankings of researchers have been drawn up on the basis of this index, I believe in perfect good faith, and it has sometimes even been treated as an evaluation criterion for appointments to the management or presidency of scientific institutes, for whose good running scientific output should be only one, and not necessarily the most important, of the requirements. It is therefore legitimate to ask how much the authors of those journalistic articles, and public opinion itself, know about the real value of the H-index, and above all about its precise limits.

Read also: Let’s learn to navigate the jungle of scientific publications, by Vittorio Lingiardi and Marianna Liotti


Let me say at the outset that evaluating a researcher’s contribution to scientific progress is a notoriously complex matter, even for insiders, because the impact of one person’s research on another’s can take many forms that cannot be measured. Good ideas circulate in the daily exchanges between people who do this work; they are often inspirations drawn from opinions traded at conferences or from preliminary, unpublished research data. However, when it must be decided who to promote to a position for which the candidate’s scientific background is fundamental, the need for some objective measure presses on whoever has to decide and leads them to resort to bibliometric indices in evaluating what the candidates have published.

An index that measures citations by other authors

The H-index is one of these indices, probably the most significant, because it measures the scientific impact of a researcher’s publications through the citations they receive from other researchers: it tells us that the results have not fallen on deaf ears but have been considered, and often used, by other researchers, in that fertile exchange which is what actually advances knowledge. To calculate the index, the citations received by each publication in a given period of time are counted. For example, having an H-index of 50 means having 50 publications that have each been cited by other researchers at least 50 times. The calculation yields a single small number that is assumed to measure the specific scientific contribution of the researcher. A magic number, a godsend for those who have to make certain decisions. But is it really so?
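As an illustration only, here is a minimal sketch of that calculation in Python. The citation counts are invented for the example; real platforms such as Scopus, Web of Science and Google Scholar each apply their own rules about which citations they count.

```python
def h_index(citations):
    """Return the H-index: the largest h such that at least h
    publications have each received at least h citations."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Hypothetical citation counts for one researcher's publications.
print(h_index([120, 63, 51, 50, 50, 4, 3, 0]))  # prints 5: five papers cited at least 5 times each
```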

Many limitations, which affect its usefulness

Certainly not. The number is not magical at all and has several limits which, if not known and carefully considered, can completely undermine its usefulness and lead to wrong conclusions. I will briefly review the main ones here. The first and most obvious is that small differences, say between 50 and 60, are transient and do not necessarily reflect real differences in high-impact scientific production; they should not be taken into consideration. Large differences, say between 50 and 80, are in all likelihood indicative of a different scientific impact within the same research area. Even large differences, however, must take into account a phenomenon that is still widely practiced, self-citation (citing oneself, even when it is not pertinent, across one’s publications), and above all the bibliographic platforms used to measure citations.

Bibliographic platforms

There are essentially three of them: Scopus, Web of Science and Google Scholar. The first two draw on largely the same databases and generally give H-indexes that are very close to each other and lower than those given by the third platform, which also counts citations from doctoral theses and from journals that Web of Science and Scopus exclude. For this reason, a candidate with an H-index of 65 on Scopus may well be worth as much as another candidate with 86 on Google Scholar in the same discipline. The index also depends a great deal on how populous the researcher’s scientific area is and on how much weight the various areas carry in terms of citations: a field in which several thousand researchers publish and cite colleagues’ work will, on average, produce higher H-indexes than a niche area frequented by a few hundred researchers, for the same merit and quality of the results obtained.

Apply the index differently

There is a partial remedy for this, namely applying the index separately to distinct research areas. This is feasible when the areas are very distant, such as microbiology and psychology, but very problematic for distinct yet closely related disciplinary areas, such as microbiology and immunology.

But what makes the H-index inappropriate as the sole or dominant evaluation parameter is that it does not distinguish the different contributions made by the different researchers who publish a work together, as is now the norm in many disciplines, where good, publishable research requires cooperation between different professionals. The index attributes the citations received by such publications to all of their authors. By now, I believe, even outside expert circles the almost universal convention is known, at least in the biomedical area, according to which some authors of a publication, in particular the first and the last, have made a more substantial contribution than the others.

The actual contributions of the researchers

Today scientific publications also declare the differences in the contributions of the various authors, but these differences cannot be captured, or at least have not yet been captured, by bibliometric indexes. It would therefore be inherently wrong to assume that the publications of two people with comparable H-indexes have had the same scientific impact in a given discipline. It is necessary to give weight to the position occupied in the list of authors. One can have a high H-index by being part of a very productive research group to which one always gives a good but not particularly decisive or substantial contribution. From this point of view, the H-index is perhaps better suited to evaluating the overall scientific impact of research groups, or even of research institutions, than that of individual researchers.

The index? A shortcut that should not replace an accurate assessment

There are many bibliometric indexes, and new ones are constantly being invented. All of them, including the H-index, are shortcuts that can be useful but should never replace the longer road of an accurate and comprehensive assessment of candidates’ records. It is necessary to take responsibility for examining, with honesty and competence, everything the candidate has produced and its potential impact on research in general, on society as a whole and, in the biomedical area, on public health, as colleagues Lingiardi and Liotti recently pointed out in Health. Shortcuts are not what is needed to make a good choice.

* Former Director of the Department of Infectious Diseases at the Istituto Superiore di Sanità and member of the American Academy of Microbiology
