How are we evaluated as scholars?

Considerable effort is expended on tenure reviews, letters of recommendation, and extensive reports on citation counts and the impact factors of scholarly journals. Many junior faculty tell me that they are required to publish in only a very limited number of ‘high impact’ journals – often as few as five. In fact, one scholar surprised me with this requirement: not only was the university where he taught not particularly top tier, but neither were the colleagues and the dean imposing the standard. Yet, without the requisite articles in those five journals, he was out looking for another job – a totally wasted effort on the part of the institution and of the scholar, who is very promising and has already ‘delivered’.

The number of universities imposing these kinds of barriers seems to be growing, despite increasingly difficult hurdles and ridiculously ‘low’ odds of having a paper accepted for publication in one of these ‘sacred’ journals. It is as though tenure committees no longer have the capacity to think, to read, or to adjudicate. They just want a simple formula, and are just as happy to send a young scholar to another institution as they are to keep them. I just don’t see how that enhances the reputation or quality of the institution. Don’t we want to invest in our human capital? Are faculty simply a number generated by a few editors or by Google Scholar? Is there no purpose whatsoever to the community and teaching activities that we might be engaged in, or to the publication outlets we seek that might be more inclusive than the very top five?

I’ve attended numerous editorial board meetings over the years, and I would say that half of the time dedicated to these meetings revolves around the issue of journal impact factor. Editors with dropping impact factors seem ashamed and hasten to share new strategies. I myself have observed the removal of case studies and other non-citable material from journals, justified primarily as a way to enhance citation impact. Editors with increasing impact factors loudly and broadly share their newfound success like proud grandparents. Given this emphasis, one would think that a set of standard practices would be in order to compare one journal fairly with another. And yet the reality is far from achieving even a semblance of objectivity.

For starters, many editors encourage authors to heavily cite their own journals, a practice reflected in the ‘coercive citation’ literature. In fact, a look at the Thomson Reuters list of journal impact factors shows that many journals have heavily inflated impact factors due primarily to self-citation. The JCR, the primary database for comparison, does provide a measure discounted by self-citations, but this is rarely used or referred to. Small fields claim that the self-citation rate is necessary, as there is little information on their subject matter elsewhere. However, this can also be a convenient way to inflate the work of the editor, editorial board, and influential reviewers and gatekeepers. A very strange example of editorial manipulation occurred a couple of years ago in the form of a citation cartel, whereby the editor of one journal edited a few special issues in two other journals. By directing the scholars in those special issues to cite his own journal, its impact factor grew to embarrassingly undeserved heights, resulting in the resignation of that editor.
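To make the self-citation point concrete, here is a minimal illustrative calculation. The numbers are entirely hypothetical; the formula is simply the standard two-year impact factor (citations received in a given year to items published in the two preceding years, divided by the number of citable items published in those years), with and without the journal’s citations to itself:

```python
# Illustrative sketch of how self-citation can inflate a journal impact factor.
# All figures are invented for the example; only the arithmetic is standard.

citable_items = 100          # articles published in the two preceding years
external_citations = 150     # citations arriving from other journals
self_citations = 90          # citations from the journal to its own articles

jif_raw = (external_citations + self_citations) / citable_items   # headline figure
jif_without_self = external_citations / citable_items             # self-citation-discounted figure

print(f"JIF including self-citations: {jif_raw:.2f}")       # 2.40
print(f"JIF excluding self-citations: {jif_without_self:.2f}")  # 1.50
```

In this invented case the headline impact factor is over 50% higher than the self-citation-discounted one – which is precisely why the discounted figure, though available, is so rarely quoted.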

Now, a recent article has uncovered yet another cynical editorial ‘trick’ used to bias the statistics and produce a higher impact factor.

An article by Ben Martin in Research Policy entitled “Editors’ JIF-boosting Stratagems” highlights the many stratagems editors now employ to inflate their journals’ impact factors (a nice summary of the article is provided by David Matthews in Times Higher Education). The ‘tricks’ are impressive, including keeping accepted articles in a long online queue (ever wonder why your accepted paper takes two years to reach print?). This ensures that, once a paper is formally published, it already has a good number of citations attached to it. As Martin states: “By holding a paper in the online queue for two years, when it is finally published, it is then earning citations at the Year 3 rate. Papers in Year 3 typically earn about the same number of citations as in Years 1 and 2 combined, and the Year 4 figure is broadly similar. Hence, the net effect of this is to add a further 50% or so to the doubling effect described above (the JIF accelerator effect)”.
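A rough back-of-the-envelope sketch shows why the queueing trick works. The per-year citation rates below are hypothetical, chosen only to follow the pattern Martin quotes (Year 3 roughly equals Years 1 and 2 combined, and Year 4 is broadly similar); the point is that holding a paper online for two years shifts its two-year JIF window onto its later, higher-citation years:

```python
# Hypothetical citations a paper earns in each year after it first becomes available online.
cites_by_year_online = {1: 2, 2: 4, 3: 6, 4: 6}

# Published immediately: the two-year JIF window covers availability Years 1 and 2.
jif_window_immediate = cites_by_year_online[1] + cites_by_year_online[2]

# Held in the online queue for two years before formal publication: the JIF window
# now coincides with availability Years 3 and 4.
jif_window_delayed = cites_by_year_online[3] + cites_by_year_online[4]

print(f"Citations counted toward the JIF, immediate publication: {jif_window_immediate}")   # 6
print(f"Citations counted toward the JIF, two-year online queue: {jif_window_delayed}")     # 12
```

Under these assumed rates the delayed paper contributes twice as many countable citations – without a single extra citation actually being earned over the paper’s lifetime.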

One top management journal reportedly has 160 articles in its queue; another, on management ethics, has 600! Other reported strategies include cherry-picking which articles to hold back and encouraging review articles, which tend to be widely cited.

In sum, it appears that the ‘objective’ measures we tend to employ regarding journal quality and citation impact are far from objective, and subject to considerable bias and manipulation.

Isn’t it about time that tenure committees read material and focus on content, rather than on a publication’s location? Perhaps we, as a community, can provide a ‘gold standard’ set of recommendations? What do you think?


5 thoughts on “How are we evaluated as scholars?”

  1. Fully agree with this. I would love to poll 100 AoM members and ask them to choose between three possible additions to their vita: 1) an AMJ article that gets zero citations, 2) an article in a second-tier journal that gets 100 citations, or 3) an article in a third-tier journal that gets 500 citations. I would bet good money that the majority would choose the AMJ article.

  2. Sad but true. While we’re at it, we should also poll Deans on what they would prefer for their faculty of the three. Actually, this sounds like a worthwhile research experiment, Andrew.

  3. Yes, I imagine that the only reason AoM members would choose the uncited AMJ article is because of how P&T committees and Deans see this issue. It is weird that a journal’s average citation rate has such power while the actual number of citations an article receives has so little influence. It does sound like an interesting research question – you are right.

  4. My business school is now in the midst of discussing a new “journal list policy.” It’s complicated by the fact that we have a public administration department, plus accounting and all the other stuff B-schools have, with no department having a critical mass of faculty. This means many of us aren’t familiar with the top journals in each other’s fields, so our current (to be modified) list is embarrassingly long. I do agree wholeheartedly with the posting, but one reason the journal “metrics” are included is that it’s difficult for faculty in one “silo” to evaluate another’s work; at least the metrics aim toward objectivity. Our school’s policy is vague, there are lots of “top” journals, and believe me, that hasn’t helped much either.

  5. Harzing’s Publish or Perish is a good start for comparative statistics – but now the question is: if journals are ‘cheating’ to maintain their status, how can we really compare? Ideally, we would read what was written, or people in the field would read it – but that seldom happens (except at tenure reviews, and even then, rather quickly). We should also value other contributions – we tend to undervalue community service, for example.
