Considerable effort is expended on tenure reviews, letters of recommendation, and extensive reports on citation counts and the impact factor of scholarly journals. Many junior faculty tell me that they are required to publish in only a very limited set of ‘high impact’ journals – often as few as five. In fact, one scholar surprised me with this requirement: not only was the university where he taught not particularly top tier, but neither were the colleagues and dean imposing the standard. Yet without the five promising articles, he was out looking for another job – a totally wasted effort on the part of both the institution and the scholar, who is very promising and has already ‘delivered’.
The number of universities erecting these kinds of barriers seems to be growing, despite increasingly difficult hurdles and ridiculously ‘low’ odds of having a paper accepted for publication in one of these ‘sacred’ journals. It is as though tenure committees no longer have the capacity to think, to read, or to adjudicate. They just want a simple formula, and are just as happy to send a young scholar to another institution as they are to keep them. I just don’t see how that enhances the reputation or quality of the institution. Don’t we want to invest in our human capital? Are faculty simply a number generated by a few editors or by Google Scholar? Is there no purpose whatsoever to the community and teaching activities we engage in, or to publication outlets that might be more inclusive than the very top five?
I’ve attended numerous editorial board meetings over the years, and I would say that half of the time dedicated to these meetings revolves around the issue of journal impact factor. Editors with dropping impact factors seem ashamed and hasten to share new strategies. I myself have observed the removal of case studies and other non-citable material from journals, justified primarily as a way to enhance citation impact. Editors with rising impact factors loudly and broadly share their newfound success like proud grandparents. Given this emphasis, one would think that a set of standard practices would be in order so that one journal could be compared, fairly, with another. And yet the reality is far from achieving even a semblance of objectivity.
For starters, many editors encourage authors to heavily cite their own journals, a practice reflected in the ‘coercive citation’ literature. In fact, a look at the Thomson (now Clarivate) list of journal impact factors shows that many journals have heavily inflated impact factors due primarily to self-citation. Journal Citation Reports (JCR), the primary database for comparison, does provide a measure that discounts self-citations, but it is rarely used or referred to. Smaller fields claim a high self-citation rate is necessary, as there is little information on their subject matter elsewhere. However, this can also be a convenient way to inflate the work of the editor, the editorial board, and influential reviewers and gatekeepers. A very strange example of editorial manipulation occurred a couple of years ago in the form of a citation cartel, whereby the editor of one journal edited a few special issues in two other journals. By directing the scholars in those special issues to cite the other journal, the impact factor grew to embarrassingly undeserved heights, resulting in the resignation of that editor.
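To make the self-citation point concrete, here is a minimal sketch – with made-up, purely illustrative numbers – of how the standard two-year impact factor compares with the self-citation-excluded variant that JCR also reports:

```python
# Illustrative only: hypothetical counts for one journal in year Y.
# The two-year impact factor is citations in year Y to items published
# in years Y-1 and Y-2, divided by the citable items from those years.
total_citations = 500   # all citations in Y to items from Y-1 and Y-2
self_citations = 200    # of those, citations from the journal to itself
citable_items = 100     # articles/reviews published in Y-1 and Y-2

jif = total_citations / citable_items
jif_without_self = (total_citations - self_citations) / citable_items

print(jif)               # 5.0 with self-citations counted
print(jif_without_self)  # 3.0 once self-citations are stripped out
```

With these assumed figures, self-citation alone accounts for a gap between a headline impact factor of 5.0 and a discounted figure of 3.0 – which is precisely why the discounted measure deserves more attention than it gets.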
Now a recent article has uncovered yet another cynical editorial ‘trick’ for biasing the statistics and delivering a higher impact factor.
An article by Ben Martin in Research Policy entitled “Editors’ JIF-boosting stratagems” highlights the many tactics editors now employ to bias their numbers upward (a nice summary of the article is provided by David Matthews in Times Higher Education). The ‘tricks’ are impressive, including keeping accepted articles in a long queue (ever wonder why your accepted paper takes two years to reach print?). This ensures that by the time a paper is formally published, it already has a good number of citations attached to it. As Martin puts it: “By holding a paper in the online queue for two years, when it is finally published, it is then earning citations at the Year 3 rate. Papers in Year 3 typically earn about the same number of citations as in Years 1 and 2 combined, and the Year 4 figure is broadly similar. Hence, the net effect of this is to add a further 50% or so to the doubling effect described above (the JIF accelerator effect)”.
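The arithmetic behind this ‘accelerator’ can be sketched in a few lines. The citation figures below are assumptions for illustration (a typical paper earning more citations in its third and fourth years than in its first two), not data from Martin’s article:

```python
# Hypothetical citations a typical paper earns per year, indexed by its
# age in years since it first appeared online (illustrative numbers).
citations_by_age = {1: 2, 2: 4, 3: 6, 4: 6}

def jif_window_citations(delay_years):
    """Citations counted toward the two-year impact factor for one paper,
    if formal publication is delayed `delay_years` after online posting.
    The JIF counts the paper's first two 'official' publication years,
    so a delay shifts that window to older, higher-citation ages."""
    first_age = delay_years + 1
    return citations_by_age[first_age] + citations_by_age[first_age + 1]

no_delay = jif_window_citations(0)    # ages 1-2 count: 2 + 4 = 6
two_year_delay = jif_window_citations(2)  # ages 3-4 count: 6 + 6 = 12

print(no_delay, two_year_delay)
```

Under these assumed figures, the two-year queue doubles the citations that fall inside the counting window – the paper’s content is unchanged, only the accounting window has moved.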
One top management journal reportedly has 160 articles in its queue; another, on management ethics, has 600! Other reported strategies include cherry-picking which articles to hold back, and encouraging review articles, which tend to be widely cited.
In sum, it appears that the ‘objective’ measures we tend to employ for journal quality and citation impact are far from objective, and are subject to considerable bias and manipulation.
Isn’t it about time that tenure committees read the material and focused on content, rather than on a publication’s location? Perhaps we, as a community, can provide a ‘gold standard’ set of recommendations? What do you think?