 
Perspectives

Broadband Science: Dangers and Injustice in Sight!
by Yan Ropert-Coudert1 and Rory P. Wilson2

Over the past few months, Nature has published a substantial number of letters demonstrating a growing concern, especially among young researchers (Nature 425, 661; 2003), about the misuse of impact factors in the scientific community. While the impact factor was originally intended to help librarians, publishers and researchers evaluate and compare the average citation rates of scientific journals, there is a risk that this index is now being used as a tool to rank an individual's research ability (Nature 422, 259-261; 2003).

Even though a publication list and its corresponding impact factors should not be the major criterion deciding the acceptance or rejection of a candidate, who can honestly claim that they have no influence on the selection process? The chances are small that a young researcher with a substantial publication list but low impact factors will outrank a candidate with fewer articles in higher-impact journals. Is one article in Nature (impact factor 27.955 in 2001) really worth ten in Pharmaceutical Research (2.801), for example?

Critically, researchers with highly specialized skills may produce high-level research articles that are nevertheless of interest only to a restricted number of specialists, and are therefore published in journals with lower impact factors. This situation is further amplified by the subjectivity of the peer-review system (see discussion in Nature 422, 259-261; 2003), which helps confine specialized articles to journals read only by researchers from the same area. This process should not be allowed to denigrate the quality or importance of the work conducted.

Raising the problem of the misuse of impact factors is not just a convenient way of getting published in Nature. There is a fundamental reason why journal impact factor is not a reliable index of a researcher's scientific value: by definition, impact factors measure the popularity of journals, not of individual articles. One might argue that journals with a high impact factor carefully select "only" articles likely to appeal to a broad range of readers. However, the number of times an article has been cited – corrected for the number of years since its publication – may be a better estimate than the impact factor itself, bearing in mind the bias inherent in the Science Citation Index, which counts only citations from a large but non-exhaustive list of journals. A number of other parameters could be added: the number of co-authors genuinely involved in the article, the specific experimental difficulties inherent to each research field, and so on.
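The age-corrected citation count mentioned above can be sketched in a few lines. The function name, the one-year floor for articles published in the evaluation year, and the example figures are illustrative assumptions, not part of the authors' proposal:

```python
from datetime import date

def citations_per_year(citations, pub_year, ref_year=None):
    """Age-corrected citation rate: total citations divided by the
    number of years since publication. A floor of one year avoids
    division by zero for articles published in the reference year."""
    if ref_year is None:
        ref_year = date.today().year
    years_elapsed = max(ref_year - pub_year, 1)
    return citations / years_elapsed

# An article from 2001 cited 30 times, evaluated in 2003:
rate = citations_per_year(30, 2001, ref_year=2003)  # 15.0 citations/year
```

Such a per-article measure sidesteps the journal-level averaging that makes the impact factor a poor proxy for individual work, although it inherits whatever coverage bias exists in the citation database used.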

As Piotr Skórka (Nature 425, 661; 2003) stated, young researchers are constantly engaged in a race in which an article accepted by a high-impact-factor journal becomes the ultimate prize. How can we expect the senior researchers of tomorrow to be models of scientific objectivity if they were trained in this frenzy? Appeal to the masses or stay in tune with your field: make your choice!


###

1- National Institute of Polar Research, 1-9-10 Kaga, Itabashi-ku, Tokyo 173-8515, Japan
yan@nipr.ac.jp


2- Institut für Meereskunde, Düsternbrooker Weg 20, D-24105 Kiel, Germany
rwilson@ifm.uni-kiel.de





©1997-2003 BioInformatics, LLC.
The Science Advisory Board is a registered service mark of BioInformatics, LLC.