Journal editors – unregulated and unmonitored

Hi Friends,

I’ve been quiet for a couple of months – summer schedule and all – and wanted to get back to the blogosphere. I’ll try to be more diligent.

Many strange things have been brought to my attention over the summer, but I thought I would start with a more personal experience. That way, if anyone wants to comment, at least one side of the equation is available.

Last spring we submitted a paper to an unnamed FT50 journal. Normally, these top journals reply within three months; at least, that has been my experience until now, for the most part. One consequence of the enhanced competitive environment is that journal editors seem to invite submissions by promising faster turnaround.

In any case, a full six months went by without our hearing from the journal. As a result, I contacted the editor directly. The editor immediately responded, on a Friday, saying that he ‘should have contacted [me] earlier’ and that he would ‘get on it’. By Monday, we had our rejection, along with only one review and a note from the editor saying he had been unable to get a second review. He didn’t even bother adding his own comments to the rejection letter. Needless to say, the one review was not very helpful, but that is beside the point. This little exchange once again leads me to question the unchecked authority, lack of transparency, and unprofessionalism sometimes exhibited by editors of even top journals. One cannot help wondering, given the importance of these gate-keeping roles, how we have ended up with processes that appear cavalier, with no recourse regarding accountability, transparency, appeal, or arbitration. In this particular case, my career does not hinge on the outcome – but I must report that in many cases where individual careers are in jeopardy, I have more often observed arrogance than compassion.

So, this brings me to raise an important question – and I must highlight that this question does NOT apply to Academy of Management journals, where transparency and fairness seem to be much more institutionalized.

Who appoints these people as editors?

Who governs their behavior?

Why do we allow autocratic and incompetent behavior by editors, even of prestigious journals?

In my view, we have a serious professional need for an equivalent of ‘rate my professor’ for academic journals. Such an idea was posed a few years ago in the Chronicle of Higher Education by Robert Deaner, who called for a “consumer reports for journals”. We could monitor and evaluate the review process, the editorial process, the time taken, and other aspects of peer review, along the lines of the sketch below. If anyone is interested in starting such an activity, please let me know – I think we really need some monitoring out there.
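To make the idea concrete, here is a minimal sketch of the kind of record such a service might collect from authors. This is purely illustrative: the field names, the class, and the example values are all my own invention, not an existing system.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class SubmissionReport:
    """One author-reported data point for a hypothetical journal-monitoring service."""
    journal: str                 # journal name (or an anonymized label)
    submitted: date              # date the manuscript was submitted
    decided: Optional[date]      # date of first decision; None if still pending
    num_reviews: int             # number of reviews actually returned to the author
    editor_commented: bool       # whether the editor added substantive comments
    decision: Optional[str]      # e.g. "reject", "revise-and-resubmit", "accept"

    def days_to_decision(self) -> Optional[int]:
        """Turnaround time in days – the statistic most worth publishing."""
        if self.decided is None:
            return None
        return (self.decided - self.submitted).days


# Hypothetical example mirroring the six-month, one-review experience above.
report = SubmissionReport(
    journal="Unnamed FT50 journal",
    submitted=date(2016, 3, 1),
    decided=date(2016, 9, 5),
    num_reviews=1,
    editor_commented=False,
    decision="reject",
)
print(report.days_to_decision())  # -> 188
```

Aggregating days_to_decision and num_reviews across many such reports would make experiences like the one described above visible to the whole community, rather than known only to the unlucky authors.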

Happy Research!

Benson

 

What are we professors to do? Are we better than VW?

As members of the Academy, we each hold responsibility for upholding our professional ethics. Once the ‘egg is broken’, it will be very hard to re-establish public confidence. VW, for instance, will undoubtedly have a long road in convincing the public that their organization acts in an ethically responsible way. While the public seemed to quickly forgive GM for ignoring a faulty ignition switch, they are less willing to forgive systemic, premeditated corruption. We have seen the backlash against the American Psychological Association over members advising how best to conduct torture, and the impact it has had on that community. In short, many of these professional ethical issues have a way of affecting our field for the long run.

In the last posting, Greg Stephens, AOM’s ombudsperson, outlined the range of issues the Ombuds Committee examines, with instructions on how to proceed should you have a professional ethics dilemma. They do a fantastic job, often behind the scenes, and we should be very appreciative of their hard work.

Of course, these issues are primarily relevant only to things that happen in and around the Academy of Management. If you observe something at another conference, or at a non-AOM-sponsored journal, there may be few if any options for you to pursue.

A recent AOM censure ruling, the first I can recall seeing, included the sanctioning of an Academy member. Professor Andreas Hinterhuber had submitted a previously published paper for consideration at the upcoming Annual Meeting. The ruling was as follows:

The final sanctions include disqualification from participation in Academy of Management activities (including but not limited to submission to the conference, participation on the conference program, serving the Academy or any of its Divisions or Interest Groups in an elected or appointed role, or submission to any of the Academy of Management journals) for a period of three (3) years, public notice of the violation through publication in the AcadeMY News; formal notification to the journal where the work was previously published, and ethics counseling by the Ombuds Committee.

Seeing a public and formal sanction is a good professional start, and I applaud our organization for taking the trouble to demonstrate that we have professional limits that should be honored. However, what if Professor Andreas Hinterhuber were found to have done the same thing at, say, EGOS or BAM? Or what about someone who submits a paper simultaneously to two different journals for review? Would the consequences be the same? Likely not.

It would seem to me that we would all benefit from a larger professional ‘tent’ whereby public notices of violations and censure were more systematically discussed. I find it very odd that, out of 20,000 members, it is so rare for us to have a public censure (this is the first I am aware of, although there may have been other non-public consequences). Every year I hear of multiple cases of doctors losing their licenses and lawyers being disbarred. The odds of misconduct are presumably similar in our profession, but the consequences are far less severe, and public censure is quite rare. This can only provide incentives to engage in unprofessional conduct. I am not suggesting we begin a yellow-journalism finger-pointing exercise – only that, given the rise in competition and the important stakes involved in our profession, we should collectively think about professional monitoring, public dialog, and the provision of clear ethical guidelines in our doctoral and professional career development.

Your thoughts on the matter are welcome.

How are we evaluated as scholars?

Considerable effort is expended on tenure reviews, letters of recommendation, and extensive reports on citation counts and the impact factors of scholarly journals. Many junior faculty tell me that they are required to publish in only a very limited number of ‘high impact’ journals – often as few as five. In fact, one scholar surprised me with this requirement: not only was the university where he taught not particularly top tier, but it was neither his colleagues nor his dean who imposed the standard. Yet, without the required five articles, he was out looking for another job – a total waste of effort for both the institution and the scholar, who is very promising and has already ‘delivered’.

The number of universities erecting these types of barriers seems to be growing, despite increasingly difficult hurdles and ridiculously low odds of having a paper accepted for publication in one of these ‘sacred’ journals. It is as though tenure committees no longer have the capacity to think, to read, or to adjudicate. They just want a simple formula, and are just as happy to send a young scholar to another institution as they are to keep them. I just don’t see how that enhances the reputation or quality of the institution. Don’t we want to invest in our human capital? Are faculty simply a number generated by a few editors or by Google Scholar? Is there no purpose whatsoever to the community and teaching activities that we might be engaged in, or to publication outlets that might be more inclusive than the very top five?

I’ve attended numerous editorial board meetings over the years, and I would say that half of the time dedicated to these meetings revolves around the issue of journal impact factor. Editors with dropping impact factors seem ashamed and hasten to share new strategies. I myself have observed the removal of case studies and other non-citable material from journals, justified primarily as a way to enhance citation impact. Editors with rising impact factors loudly and broadly share their newfound success like proud grandparents. Given this emphasis, one would think that a set of standard practices would be in order so that one journal could be compared fairly with another. And yet the reality is far from achieving even a semblance of objectivity.

For starters, many editors encourage authors to heavily cite the editor’s own journal, a practice reflected in the ‘coercive citation’ literature. In fact, a look at the Thomson Reuters list of journal impact factors shows that many journals have heavily inflated impact factors due primarily to self-citation. JCR, the primary database for comparison, does provide a measure discounted by self-citations, but this is rarely used or referred to. Smaller fields claim that high self-citation rates are unavoidable, as there is little work on their subject matter elsewhere. However, self-citation can also be a convenient way to inflate the work of the editor, the editorial board, and influential reviewers and gatekeepers. A very strange example of editorial manipulation occurred a couple of years ago in the form of a citation cartel, whereby the editor of one journal edited a few special issues in two other journals. By directing the authors in those special issues to cite his own journal, its impact factor grew to embarrassingly undeserved heights, resulting in that editor’s resignation.
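For readers who have never looked at the mechanics, here is a sketch of the standard two-year impact factor calculation as I understand it, alongside the self-citation-discounted variant that JCR reports. The notation is mine, not JCR’s:

```latex
\[
\mathrm{JIF}_Y \;=\; \frac{C_Y(Y{-}1) + C_Y(Y{-}2)}{N_{Y-1} + N_{Y-2}},
\qquad
\mathrm{JIF}_Y^{\text{no self}} \;=\;
\frac{\bigl(C_Y(Y{-}1) - S_Y(Y{-}1)\bigr) + \bigl(C_Y(Y{-}2) - S_Y(Y{-}2)\bigr)}{N_{Y-1} + N_{Y-2}}
\]
```

Here \(C_Y(t)\) is the number of citations received in year \(Y\) to items the journal published in year \(t\), \(S_Y(t)\) is the subset of those citations coming from the journal itself, and \(N_t\) is the number of citable items published in year \(t\). The gap between the two figures is a quick test of how much work self-citation is doing for a given journal.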

Now, a recent article has uncovered yet another set of cynical editorial ‘tricks’ used to bias the statistics and produce a higher impact factor.

An article by Ben Martin in Research Policy entitled “Editors’ JIF-boosting Stratagems” highlights the many methods editors now employ to bias their numbers upward (a nice summary of the article is provided by David Matthews in Times Higher Education). The ‘tricks’ are impressive, including keeping accepted articles in a long online queue (ever wonder why your accepted paper takes two years to reach print?). This ensures that once a paper is formally published, it will already have a good number of citations attached to it. As Martin states: “By holding a paper in the online queue for two years, when it is finally published, it is then earning citations at the Year 3 rate. Papers in Year 3 typically earn about the same number of citations as in Years 1 and 2 combined, and the Year 4 figure is broadly similar. Hence, the net effect of this is to add a further 50% or so to the doubling effect described above (the JIF accelerator effect)”.
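To see the arithmetic behind that quote, consider a stylized version; the symbols and the simplifying assumptions are mine, not Martin’s. Let \(c_t\) be the citations a paper earns in its \(t\)-th year after first appearing online, and suppose, following the quote, that \(c_3 \approx c_1 + c_2\) and \(c_4 \approx c_3\). The paper’s contribution to the two-year JIF numerator is then:

```latex
\[
\underbrace{c_1 + c_2}_{\text{published promptly}}
\qquad\text{vs.}\qquad
\underbrace{c_3 + c_4 \;\approx\; 2\,(c_1 + c_2)}_{\text{held online for two years}}
\]
```

On these assumptions, the delayed paper contributes roughly twice as many countable citations as the same paper published promptly – the ‘accelerator’ effect Martin describes – without a single additional reader or citation actually being earned.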

One top management journal reportedly has 160 articles in its queue; another, on management ethics, has 600! Other reported strategies include cherry-picking which articles to hold back and encouraging review articles, which tend to be widely cited.

In sum, it appears that the ‘objective’ measures we tend to employ regarding journal quality and citation impact are far from objective, and subject to considerable bias and manipulation.

Isn’t it about time that tenure committees read the material and focus on content, rather than on where it was published? Perhaps we, as a community, can provide a ‘gold standard’ set of recommendations? What do you think?

 

Ethics and Ethnography

I’ve been having some interesting conversations over at OrgTheory with Victor Tan Chen about the ethical dilemmas that ethnographers face in their research practices. This is closely related to the issues that Benson picked up on in his recent post, noting that our Code of Ethics requires us “to preserve and protect the privacy, dignity, well-being, and freedom of [our] research participants.” In this post, I’d like to bring out two important dimensions, which we might distinguish as a concern with our “scientific” and our “professional” integrity.

As scientists, we are concerned with the truth. So, when we observe something in our fieldwork, we feel a duty to report those events as they actually happened. But sometimes we have to modify our description of those events, leave them out, or even outright fictionalize them, in order to protect our research subjects from the consequences of having their actions made public. (This is sometimes, though not always, because they are themselves involved in unethical or illegal activities, which raises an additional dilemma.) Once we do this, of course, we have made a compromise: we have sacrificed a little bit of truth for the sake of a, presumably, greater bit of justice.

But at the next level of analysis, we now have to ask ourselves whether we are inadvertently circulating falsehoods. Will our readers begin to tell certain anecdotes to their peers and students as though they are “true stories” even though the actual events are very different? What for us might merely be slight embellishment for the sake of concealing an identity or a location, might for our readers become an illuminating “fact” about how the world works.

Consider an analogy to medical science. Obviously, you don’t want to end up claiming that a pill has effects it doesn’t actually have, or that it lacks effects it actually does have. That’s why you don’t leave out information about the population you have tested it on. If you’ve only tested the pill on healthy men in their thirties, you don’t hide this fact in your write-up, because it’s important to know that its effects on seventy-year-old women with high blood pressure are largely unknown. Similarly, if you’ve done your ethnographic research in rural China, you don’t “anonymize” it by saying it was done in India or the US. The context matters, and it is often very difficult to know how to characterize the context while also making it non-specific enough not to reveal who your actual research subjects were.

The broader professional issue has to do with preserving our collective access to the communities that we want to remain knowledgeable about. If Wall Street bankers always find themselves written about by ethnographers as greedy sociopaths (and assuming they don’t self-identify as greedy sociopaths), or citizens of low-income neighborhoods always find themselves described as criminals, they will slowly develop a (not entirely unfounded) distrust of ethnographers and will, therefore, be less likely to open up their practices to our fieldwork. As Victor points out, these are issues that journalists also face, and they have a variety of means for dealing with them. Many of these means can be sorted under “ethics”.

Let me emphasize that these are issues we must face collectively, i.e., as a profession. Losing access to empirical data is not just a risk you face personally in your own work. If your peers don’t enforce disciplinary standards, then we’ll all lose credibility when engaging with practitioners. For this reason, I also agree with the anonymous commenter on my last post: we must lead by example and, unfortunately, every now and then we must make examples of each other.