The Scientific Scoreboard

After becoming disillusioned with what he saw as the elitist system of publishing in scientific journals, Jorge Hirsch devised the h-index: a metric that quantifies the scientific impact of a researcher’s publications (regardless of where they appeared) and thus the importance of the researcher.

There’s a clear pecking order [for scientific journals], established and reinforced by several independent rating systems. Chief among them: the Journal Impact Factor.

Hirsch, like his peers, understood that if he wanted to get to the front ranks of his discipline, he had to publish in journals with higher JIFs. But this struck him as unfair. […] It shouldn’t be about where he published; it should be about his work.

[…] In his 2005 article, Hirsch introduced the h-index. The key was focusing not on where you published but on how many times other researchers cited your work. In practice, you take all the papers you’ve published and rank them by how many times each has been cited. […] Or to put it more technically, the h-index is the largest number n such that n of a researcher’s papers have each been cited at least n times. High numbers = important science = important scientist.

According to the article, Edward Witten, theoretical physicist at the Institute for Advanced Study, scores the highest of all physicists with 120, Stephen Hawking gets 67, while Hirsch himself rates a 52.
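For anyone curious about the arithmetic, here’s a rough Python sketch of the calculation as described above (the function name and the example citation counts are mine, purely for illustration):

    def h_index(citation_counts):
        # Sort the per-paper citation counts from most to least cited.
        ranked = sorted(citation_counts, reverse=True)
        # Walk down the ranking: h is the largest rank at which the paper
        # in that position still has at least that many citations.
        h = 0
        for rank, citations in enumerate(ranked, start=1):
            if citations >= rank:
                h = rank
            else:
                break
        return h

    # Five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4:
    # four of them have been cited at least four times each.
    print(h_index([10, 8, 5, 4, 3]))  # prints 4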



3 responses to “The Scientific Scoreboard”

  1. Paul

    So Hirsch is taking the PageRank principle and applying it to suggest the importance of individual researchers? Hardly a groundbreaking application of the algorithm, but probably long overdue anyway.

  2. Exactly, and yes, it’s well overdue.

    However, I can see some glaring limitations with the h-index (it ignores the context of a citation) as well as other ways the algorithm might be improved (taking into account the journal’s reputation and the h-index of the citing researcher, maybe?).

    It’ll definitely be interesting to see how this index develops over time and if it becomes widely adopted.

  3. With such an obvious similarity to PageRank, we might start to see similar kinds of abuse/optimisation that could detract from the research itself: rushing out lots of papers to push up the number of citations (links), quid pro quo citations, etc.

    An organised campaign might be able to generate lots of publicity for bad research. Imagine how well a “scientific” paper denying evolution could do if all the Christian Science journals cited it.

    Your point about reputation is interesting. I wonder who would be tasked with allocating reputation and if this could lead to exactly the same problem.