Journal editors – unregulated and unmonitored

Hi Friends,

I’ve been quiet for a couple of months – summer schedule and all – and wanted to get back to the blogosphere. I’ll try to be more diligent.

Many strange things have been brought to my attention over the summer, but I thought I would start with a more personal experience. That way, if anyone wants to comment, at least one side of the equation is available.

Last spring we submitted a paper to an unnamed FT50 journal. Normally, these top journals reply within three months – at least, that has been my experience until now, for the most part. One consequence of the increasingly competitive environment is that journal editors seem to invite submissions by promising faster turnaround.

In any case, a full six months went by without our hearing from the journal. As a result, I contacted the editor directly. The editor immediately responded, on a Friday, saying that he should have contacted me earlier and that he would ‘get on it’. By Monday, we had our rejection, along with only one review, and a note from the editor saying he had been unable to get a second review. He didn’t even bother adding his own comments to the rejection letter. Needless to say, the one review was not very helpful, but that is beside the point. This little exchange once again leads me to question the authority, transparency, and professionalism sometimes exhibited by editors of even top journals. One cannot help wondering, given the importance of these gate-keeping roles, how it happens that we have processes that appear so cavalier, with no recourse regarding accountability, transparency, appeal, or arbitration. In this particular case, my career does not hinge on the outcome – but I must report that in many cases where individual careers are in jeopardy, I have more often observed arrogance than compassion.

So, this brings me to raise some important questions – and I must highlight that these questions do NOT apply to Academy of Management journals, where transparency and fairness seem to be much more institutionalized.

Who appoints these people as editors?

Who governs their behavior?

Why do we allow autocratic and incompetent behavior by editors, even of prestigious journals?

In my view, we have a serious professional need for an equivalent of ‘Rate My Professor’ for academic journals. Such an idea was posed a few years ago in the Chronicle of Higher Education by Robert Deaner, who called for a “consumer reports for journals”. We could monitor and evaluate the review process, the editorial process, the time taken, and other aspects of peer review. If anyone is interested in starting such an activity, please let me know – I think we really need some monitoring out there.

Happy Research!

Benson


Educating the educators: Truth and justice in academic publishing

It seems I can’t visit anywhere without hearing harrowing stories of unethical and abusive editors, reviewers, and scholars. Before starting this blog, I would hear the odd tale or two – but now I seem to be ground zero for the often shocking admissions of disgruntled and abused colleagues the world over!

While it would be nice to view these unfortunate confessions as a biased sample, I am beginning to believe that each of us in the profession harbors numerous examples of blatantly unethical conduct, all simmering and waiting to escape as some sort of neurotic or equally unjust retribution. In short, we may be the walking wounded. All of this has to do with our culture of scholarship – we need to ask ourselves carefully: what kind of culture are we promoting, and what are our overall objectives? How can we improve the cultural landscape in which we operate?

Just a few representative examples:

A junior colleague tells me that an anonymous reviewer demands a series of coercive citations to the reviewer’s own, only tangentially relevant, work. The reviewer also discloses, in the review, who they are, and insinuates that they know exactly who the junior scholar is. The editor forwards this review without comment.

A senior scholar reports presenting a paper with a novel analysis of public data at a conference. A few months later, she sees a conference paper, written by a member of the audience who had attended her talk, using the exact same methods and data. There is no mention of her paper, not even an acknowledgement. Despite her reminding the author of this sequence of events by sending a copy of the proceedings, the paper is eventually published without a word of recognition, even though the editor is aware of the circumstances.

Dog eat dog…

Finally, we have the ‘curse’ of the special issue editors. Special issues are often an unregulated Wild West. I have heard more horror stories than I can relate in this short blog, but they range from ‘tit for tat’ expectations to outstanding examples of favoritism, nepotism, and cronyism. Editors writing themselves or their friends into special issues is all too common. These issues may represent closed networks of special-subject reviewers who are primed to support insider work and reject outsider material. Social expectations trump scientific merit, and the entire effort becomes mired in politics.

While these are but a few examples, one begins to suspect that what gets published often reflects not the quality of the research but the social processes underlying how the work is presented. Rather than rewarding the highest-quality or most innovative work, we wind up with a kind of replication of the norm. We pat each other on the back regarding our methodological rigor, without really considering the accuracy or consequences of our efforts. No wonder managers in the ‘real world’ seldom pay attention to anything we do.

All of which suggests that we need more transparency in our publication and review processes, as well as more insight into the methodological and philosophical rigor with which we approach our work. The idea of double-blind review is good – as long as it is truly double blind, and the objective is to enhance the quality of the subsequent product. However, all too often, we are simply going through a well-rehearsed process of convincing the editors and reviewers that our work is normative, while they go through the ritual of telling us how to frame an acceptable ‘story’ that meets their standards, irrespective of the accuracy of the work.

In a very insightful article in the 60th-anniversary issue of ASQ, Bill Starbuck points out the inconsistencies in reviewer evaluations, including the problems facing submissions from ‘low status institutions’, convoluted evaluation formulas, and ambiguous editorial feedback. He also highlights the signaling problems inherent in language usage, whereby reviewers can identify the origin of any particular manuscript’s authors.

Next, Bill tackles our efforts to enhance the importance of our work irrespective of its actual merit, which sometimes lead to corrupt methodologies such as HARKing (hypothesizing after the results are known) and p-hacking (subjecting data to multiple manipulations until some sort of pattern emerges), both of which misrepresent the accuracy of the theories discussed. Bill points out that this leads to “a cynical ethos that treats research as primarily a way to advance careers”.
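To see just how corrosive p-hacking is, consider a minimal simulation sketch in Python (my own illustration, not from Bill’s article – the sample sizes and number of outcomes are arbitrary assumptions): an author who quietly tests twenty outcome variables on pure noise, and reports only the ‘best’ one, will find a ‘significant’ result in roughly two-thirds of studies, even though no real effect exists.

```python
# A minimal simulation of p-hacking: test many noise-only outcomes
# and report only the one with the smallest p-value. All parameter
# choices here are illustrative, not drawn from Starbuck's article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N_STUDIES = 1_000    # simulated "studies"
N_OUTCOMES = 20      # outcome variables tried per study
N_PER_GROUP = 30     # subjects per group

false_positives = 0
for _ in range(N_STUDIES):
    p_values = []
    for _ in range(N_OUTCOMES):
        # Both groups come from the SAME distribution: no true effect.
        a = rng.normal(0, 1, N_PER_GROUP)
        b = rng.normal(0, 1, N_PER_GROUP)
        p_values.append(stats.ttest_ind(a, b).pvalue)
    # The p-hacker keeps only the best-looking outcome.
    if min(p_values) < 0.05:
        false_positives += 1

print(f"Studies with at least one 'significant' result: "
      f"{false_positives / N_STUDIES:.0%}")
# Expected: roughly 1 - 0.95**20, i.e. about 64%, despite zero true effects.
```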

Bill concludes that cultural changes are needed, but that they happen only slowly. Senior scholars must take a very visible lead – editors and reviewers alike. In the end, it’s really a matter of education.

I fully agree with Bill – we need to start looking at ourselves carefully in the mirror, stop quoting our individual h-indexes, and begin the difficult task of educating ourselves on how to advance our scientific capabilities.