EGU Blogs

A thought on impact factors

OK, bear with me on this one. It’s a bit of a thought dump, but it would be interesting to see what people think.

You can’t go anywhere in academia these days without hearing about impact factors. An impact factor is a metric assigned to a journal: the average number of citations received this year by the articles that journal published over the preceding two-year interval. It was originally designed to help libraries see which journals were actually being used by academics in their research, and therefore which subscriptions they could let lapse. In modern-day academia, however, it is often used to measure the individual ‘impact’, or quality, of a single paper within a journal – that is, the metric assigned to a journal is used as a proxy for the value of each article inside. It doesn’t make much sense on the face of things, especially when you hear stories about how much impact factors are gamed (read: purchased) by journals and their publishers (see link below), to the extent that they are at best meaningless, and at worst complete lies.
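As a back-of-the-envelope illustration of the arithmetic involved (all numbers below are invented for the example), the standard two-year calculation looks like this:

```python
# Toy illustration of a two-year journal impact factor.
# The figures are made up purely for demonstration.

# Citations received in 2015 to items the journal published in 2013-2014
citations_to_recent_items = 1200

# "Citable items" (articles, reviews) the journal published in 2013-2014
citable_items = 400

impact_factor = citations_to_recent_items / citable_items
print(impact_factor)  # 3.0
```

Note that the average says nothing about any individual article, which is exactly the problem: citation distributions within a journal are heavily skewed, so most papers sit well below the journal’s mean.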

The evidence suggests that the only thing an impact factor, and journal rank, reliably reflects is academic malpractice – that is, fraud. The higher the impact factor, the higher the probability that there has been data fudging of some sort (or a higher probability of detection of such practice). A rather appealing option seems to be to do away with journals altogether, and replace them with an architecture built within universities that removes all the negative aspects of impact-factor-based assessment, while at the same time removing power from profit-driven parasitic publishers. It’s not really too much of a stretch of the imagination to do this – for example, Latin America already uses the SciELO platform to publish its research, and is free from the potential negative consequences of the impact factor. University College London also recently established its own open access press, the first of its kind in the UK. The Higher Education Funding Council for England (HEFCE) recently released a report on the role of metrics in higher education, finding that the impact factor was too often mis-used or ‘gamed’ by academics, and recommended its discontinuation as a measure of personal assessment. So there is a lot of evidence that we are moving away from a system dominated by impact factors and commercial publishers (although see this post by Zen Faulkes).

But I think there might be a hidden aspect behind impact factors that has often been over-looked, and is difficult to measure. Hear me out.

Impact factors, whether we like it or not, are still used as a proxy for quality. Everyone equates a higher impact factor with a better piece of research. We do it automatically as scientists, irrespective of whether we’ve even read an article. How many times do you hear “Oh, you got an article in Nature – nice one!”? I’m not really sure if this means well done for publishing good work, or well done for beating the system and getting published in a glamour magazine. Either way, this is natural now within academia; it’s ingrained into the system (and by system, I include people). The flip side of this is that researchers, following this practice, submit the research they perceive to be of ‘higher quality’ (irrespective of any subjective ties or a priori semblance of what this might mean) to higher impact factor journals. The inverse is also true – research perceived to be less useful in terms of results, or of lower quality, will be sent to lower impact factor journals. Quality in this case can refer to any combination of things – strong conclusions, a good data set, relevance to the field.

Now, I’m not trying to defend the impact factor and its use as a personal measure for researchers. But what if there is some qualitative aspect of quality that it is capturing, based on this? Instead of thinking “It’s been published in this journal, therefore it’s high quality”, it’s rethinking it as “This research is high quality, therefore I’m going to submit it to this journal.” Researchers know journals well, and they submit to venues for numerous reasons – among them the appropriateness of that venue based on its publishing history and subject matter. If a journal publishes hardcore quantitative research, large-scale meta-analyses and the like, then it’s probably going to accrue more citations because it’s of more ‘use’ – more applicable to a wider range of subjects or projects.

For example, in my field, Palaeontology, research typically published in high impact factor journals involves fairly ground-breaking new studies regarding developmental biology, macroevolution, extinctions – large-scale patterns that offer great insight into the history of life on Earth. On the other hand, research published in lower impact factor journals might be more technical and specialist, or perhaps concern descriptive taxonomy or systematics – the naming of a new species, for example. An obvious exception to this is anything with feathers, which makes its way into Nature irrespective of its actual value in progressing the field (I’ll give you a clue: no-one cares about new feathered dinosaurs any more. Get over it, Nature).

So I’ll leave with a question: do you submit to higher impact factor journals if you think your research is ‘better’ in some way? And following on from this, do you think that impact factors capture a qualitative aspect of research quality that you don’t really get if you think about what impact factors mean in a post-publication context? Thoughts below! Feel free to smash this thought to shreds.

Jon began university life as a geologist, followed by a treacherous leap into the life sciences. He spent several years at Imperial College London, investigating the extinction and biodiversity patterns of Mesozoic tetrapods – anything with four legs or flippers – to discover whether or not there is evidence for a ‘hidden’ mass extinction 145 million years ago. Alongside this, Jon researched the origins and evolution of ‘dwarf’ crocodiles called atoposaurids. Prior to this, there was a brief interlude where Jon was immersed in the world of science policy and communication, which greatly shaped his views on the broader role that science can play, and in particular, the current ‘open’ debate. Jon tragically passed away in 2020.


9 Comments

  1. I agree; the problem is equating a high impact factor with high quality. Well structured, coherent, solid papers that make advancements in their field can be seen in low-impact journals; the opposite is also true (for example, the claim that caterpillars evolved from onychophorans by hybridogenesis was published in PNAS, which has the third highest impact factor for multidisciplinary journals). Honestly, this has gotten to the point where I see publishing papers like publishing music; high impact journals are the big record companies that push songs/bands that are sure to sell, which are not always the best in terms of music/composition/songwriting. Just take a look at the Billboard Top 100 (the equivalent of a very high impact journal for music) and try to find someone that doesn’t use autotune (the equivalent of fudging results in academia). It’s all done for the exposure, and not always for the quality of the work.
    It might get worse in the next few years, since most scientists growing up right now are becoming used to equating social media “likes” with quality content, whether they realize it or not, so they may try to bend the rules and smudge results to get more citations, which is usually done by publishing in a high impact factor journal.

  2. There is a famous phrase, “All politics is local,” coined by former Speaker of the House Tip O’Neill in the USA.

    He meant that “a politician’s success is directly tied to the person’s ability to understand and influence the issues of their constituents.”

    Similarly, most junior faculty seeking additional pay and academic advancement would probably submit articles to the journals that give them the most help in their own local situation. That’s determined by their own departmental heads and tenure & promotion committees.

    Senior faculty who might care less about tenure & promotion may have different private motives for publication outlet choice. Getting published in a high-impact prestige journal can be vital for one’s professional career for the long-term. Conjecturing about “what this really means” is an excellent, intriguing intellectual exercise. In the “real world” however, professional academics still have to pay the rent and buy food. They must publish in journals that help them for those necessarily simple reasons.

  3. I don’t personally make any use of IF when deciding where to send a manuscript although I’m certainly aware of what I’ll term ‘status’ of journals that are relevant to my field (drug discovery). Some journals use the number of citations to their articles as a criterion when deciding to send a manuscript for review and I’ve even heard of editors telling authors to cite more of their journal’s articles (although I’ve never experienced this first hand). IF could maybe just possibly be of value to a librarian who needs to prioritize journal subscriptions although bundling of journals makes this scenario unlikely.

    I do not believe that IF is a valid measure of article or author quality, and any author who uses IF in this manner should be politely told to take his/her medication as directed. One way in which IF-thinking could be used would be to scale (or normalize, if you prefer) the number of citations for an article by the journal average for the same period. This could highlight articles that had ‘tunneled’ into a high impact journal due to dozy reviewers although, to be honest, I regard the obsession with metrics as diagnostic of a serious malaise in science. Perhaps the real value of metrics such as IF is that they reveal how badly science has lost its way and how desperately it needs to get back on track.

  4. One has to be a con artist to game the system and get published in the glamour journals. The system is an exclusionary mechanism to stake a claim of superiority over others, not to create a level playing field. The training one gets in “so-called” famous labs is not how to do good science but how to con the system. Everyone, behind the scenes, will admit that they all know “how the system works”.

    The current system of giving away scientific credit is undemocratic, feudal and unscientific. http://ow.ly/Qv7Yk . This must change.

  5. Thanks for asking me to comment Jon. I think this is a complex issue, but here are some thoughts.

    I would never choose a journal based purely on IF – I select a publication venue based on a combination of visibility and accessibility (a powerful argument for OA journals), suitability to the subject and length of the paper, and my own subjective assessment of journal ‘prestige’ within my research field. The subject-specific pyramid of journal ‘prestige’ is embedded in the heads of pretty much anyone who has been involved in academia for any length of time. As you are aware, for palaeontology, Nature and Science are at the top of this pyramid, followed by the likes of PNAS, Nature Communications, and the Royal Society journals, then by high quality specialist journals run by scholarly societies (e.g. Paleobiology, JVP, Palaeontology). IF may correlate in a broad sense with this pyramid, but I can certainly think of comparatively low IF journals (e.g. Biology Letters) that are considered quite ‘prestigious’ by some people, and high IF journals (e.g. Gondwana Research) that don’t necessarily have the same cachet.

    You might quite reasonably argue that choosing a journal on this criterion of ‘prestige’ should not be how science works, and that individuals should not be assessed based on where they have published. I would agree with you. Unfortunately though, publishing in ‘prestigious’ journals is undoubtedly critical to academic success, at least here in the UK. As someone who is still an early career researcher, I have seen ample evidence for this, ranging from assessments of my own grants and job applications, to my own more recent experiences of sitting on job, grant and award committees. I have seen colleagues’ careers transformed by a single Nature paper. The importance can be overstated however – with jobs I suspect it is probably most important at the shortlisting stage, where having publications in ‘prestigious’ journals may make your application stand out from the pack. Ultimately, however, departments assess candidate quality seriously via interviews and give jobs to the candidates that they think are the best fit to their needs and with greatest potential for future excellence in publication, teaching, grant capture etc. The idea of faceless administrators handing out jobs and grants solely on the basis of IF is a myth. Also, many other factors play into success on the job market, such as both quantity and quality of publications, grant track record, community recognition (e.g. awards from professional societies), references, personal contacts etc.

    As for ‘quality’ of papers, surely no-one who has been doing science for any length of time seriously thinks that papers in high-impact journals are always of better quality than papers in low-impact specialist journals. I have seen great work in Nature and great work in JVP, and I have seen shoddy work in both as well. The difference in venue choice is more about perceived broad appeal. In our group, we send our detailed anatomical, systematic, and phylogenetic papers to JVP, Palaeontology, PLoS ONE and the like. These papers often involve our most in-depth research and hopefully the greatest longevity, but, let’s be honest, most have a limited readership within the comparatively small community of Palaeozoic and Mesozoic fossil reptile specialists. When we have work that we think will reach beyond our immediate peers and be read by a much broader range of palaeontologists, and perhaps even by researchers beyond palaeontology, that’s usually when we consider aiming our paper at a ‘more prestigious’ journal such as Proceedings B or Nature Communications. Such contributions might include work on mass extinctions, long-term diversity patterns or global biogeography, for example. Being vertebrate workers, we of course also chance our arm when we think that we have a particularly interesting new fossil reptile taxon.

    In essence then, I don’t think our papers in higher-impact journals are necessarily better or worse than those in lower impact, specialist journals. They are usually just a different kind of paper.

  6. The posters here don’t seem to be a valid cross section. Let’s face it, valid or not, Science and Nature papers get you better jobs with better pay, and greater recognition. While sarcastically commenting about how ‘all you need is a pretty map’ and how once they write the Nature paper, they’ll do the more useful applied research etc, many (most?) researchers desire the impact factor – for once in, you are part of the elite crowd – you made it. Papers and IF are a game that intelligent people play to get what they want while they remain in play, except perhaps some who have the luxury of already having tenure. But, let’s face it, high impact publications demand more WORK, in particular better clarity and quality of writing, exceptional graphical presentation of your work, and the use of up-to-date statistical methods. Assuming you possess the skills to meet these standards, you still wouldn’t send everything there. Not all research has pretty maps, or meets the other criteria: global, newsworthy, and novel. I don’t consider the ‘quality’ of my research when aiming high, but rather, if it meets the criteria, is it worth it? I consider: 1) the workload, and the payoff – how much of my time is the exposure worth? Can I write three lower impact publications in that time? 2) What’s my feel for the likelihood it will get in? Do the findings have a hook? Are they newsworthy and of broad appeal (the public still like dinosaur feathers, so they still get in Nature)? And 3) does the work need the publicity/global platform? As a conservation scientist much of my work is policy related, and high-impact publications are more widely read at a global scale and get more press, making it more likely that my work will be translated to policy. However, in some ways I think you do have a point Jon – you certainly won’t be publishing results obtained with less than current methods etc.

  7. Journal impact factor made its first appearance in the Journal Citation Reports in 1975. In the last 40 years, to my knowledge about 1500 articles have appeared on this topic. One may search through Web of Science or Scopus to find out the enormity of the literature on the topic. As can be expected, there have been opinions in favour and against. The questions that are raised about the usefulness or worthlessness of impact factor have all been answered and are available in documentary sources. Generally the views expressed on impact factor are based on one’s personal experience, and not on the basis of overall knowledge of impact factor. After one has completed an article, the question arises where to place the article, or which journal will be best suited for it. How to decide that, other than by the impact factor? How to decide the quality of an article? Through peer review? Is it not fraught with multifarious flaws? Altmetrics – are they flawless? People consider papers published in Nature, Science or any other high impact journal to be of high quality because there is stringent peer review. These journals have wide circulation. The moment a paper is published in such journals, it attracts the world’s attention. If the paper is of quality, it will generate citations from all over the world. If the majority of the articles generate large citation counts, the impact factor of the journal will go up along with its ranking. Even in these journals, there will be some articles which may not generate any citations. Even the best universities of the world have students who fail. That does not bring down the rank of the university. Similarly, a few bad articles published in a high ranking journal will not bring down its rank or impact factor.


Comments are now closed for this post.