Educating the educators: Truth and justice in academic publishing

It seems I can’t go anywhere without hearing harrowing stories of unethical and abusive editors, reviewers, and scholars. Before starting this blog, I would hear the odd tale or two – but now I seem to be ground zero for the often shocking admissions of disgruntled and abused colleagues the world over!

While it would be nice to view these unfortunate confessions as a biased sample, I am beginning to believe that each of us in the profession harbors numerous examples of blatantly unethical conduct, all simmering and waiting to escape as some sort of neurotic or equally unjust retribution. In short, we may be the walking wounded. All of this has to do with our culture of scholarship – we need to ask ourselves carefully: what kind of culture are we promoting, and what are our overall objectives? How can we improve the cultural landscape in which we operate?

Just a few representative examples:

A junior colleague tells me an anonymous reviewer demands a series of coercive citations to the reviewer’s own, only tangentially relevant, work. In the review, the reviewer also discloses who they are, along with insinuations that they know exactly who the junior scholar is. The editor forwards this review without comment.

A senior scholar reports presenting a paper with a novel analysis of public data at a conference. A few months later, she sees a conference paper, written by a member of the audience who had attended the talk, using the exact same methods and data. There is no mention of her paper, not even an acknowledgement. Despite her reminding the author of this sequence of events – by sending a copy of the proceedings – the paper is eventually published without a word of recognition, even though the editor is aware of the circumstances.

Dog eat dog…

Finally, we have the ‘curse’ of the special issue editors. Special issues are often an unregulated wild west. I have heard more horror stories than I can relate in this short blog, but they range from ‘tit for tat’ expectations to outstanding examples of favoritism, nepotism, and cronyism. Editors writing themselves or their friends into their own special issues is very common. These issues may rest on closed networks of special subject reviewers who are primed to support insider work – and reject outsider material. Social expectations trump scientific merit, and the entire effort becomes mired in politics.

While these are but a few examples, one begins to suspect that what is published often reflects not the quality of the research but the social processes underlying how the work is presented. Rather than rewarding the highest quality work – or the most innovative work – we wind up with a kind of replication of the norm. We pat each other on the back for our methodological rigor, without really considering the accuracy or consequences of our efforts. No wonder managers in the ‘real world’ seldom pay attention to anything we do.

All of which suggests that we need more transparency in our publication and review process, as well as more insight into the methodological and philosophical rigour we apply to our work. The idea of double-blind review is good – as long as it is truly double blind, and the objective is to enhance the quality of the subsequent product. All too often, however, we are simply going through a well-rehearsed process of convincing the editors and reviewers that our work is normative, while they go through the ritual of telling us how to frame an acceptable ‘story’ that meets their standards, irrespective of the accuracy of the work.

In a very insightful article in the 60th-anniversary issue of ASQ, Bill Starbuck points out the inconsistencies in reviewer evaluations, including the problems of submissions from ‘low status institutions’, convoluted formulas, and ambiguous editorial feedback. He also highlights the problem of signalling inherent in language usage, whereby reviewers can identify the origin of any particular manuscript’s authors.

Next, Bill tackles our efforts to inflate the importance of our work, irrespective of its actual merit – efforts that sometimes lead to corrupt methodologies such as HARKing (hypothesizing after the results are known) and p-hacking (subjecting data to multiple manipulations until some sort of pattern emerges), both of which misrepresent the accuracy of the theories discussed. Bill points out that this leads to “a cynical ethos that treats research as primarily a way to advance careers”.

Bill concludes that cultural changes are needed, but that they happen only slowly. Senior scholars – editors and reviewers alike – must take a very visible lead. In the end, it’s really a matter of education.

I fully agree with Bill – we need to start looking at ourselves carefully in the mirror, stop quoting our individual h-indexes, and begin the difficult task of educating ourselves about how to advance our scientific capabilities.

When journal editors are unprofessional

I recently read a New York Times article highlighting an obvious conflict: stock analysts who own stock or options in the companies they are evaluating, or who retain close ties with those companies. It’s kind of horrifying to think that what is regarded as objective, unsolicited advice may really be individuals trying to ‘game’ the system, pushing up the price of their options for personal gain. Of course, that’s Wall Street; we’ve seen it before, and I’m sure we’ll see it again. But it got me thinking – what about journal editors?

Journal editors make decisions, often with considerable career implications, but their relationships with the persons they evaluate – and the way they make decisions – are entirely opaque. It’s not as though there is some sort of appeals board one can go to if one feels slighted by an editor who bears a grudge against an author, their university, or even the theoretical or methodological paradigm they are writing about. This opens up not only questions of abuse of power and self-interest, but also of due process.

We all want to think that the blind review process is objective – but what about reviewer selection? What about other practices? I don’t have to go far to find a litany of editors’ abusive activities. Just scratching the surface, we find the ‘tit for tat’ exchange – “I will publish your paper in my journal, with the expectation that you will reciprocate with a publication in your journal”. There is the special issue editor who always seems to publish good friends and colleagues from their particular sphere of influence; special issue editors are a particular problem, as they seem to go relatively unregulated. These practices effectively reduce the probability of a general submission being accepted, as few slots are left for the genuine public of scholars. We also have coercive citation abuse, whereby the editor’s R&R letter informs the author that they need to cite the editor’s journal (to improve its impact factor). And, of course, we have the form-letter rejection, sometimes not even reflecting the contents of the paper submitted, or addressing the material in any way that demonstrates the editor actually read it.

What I find particularly surprising is that there is virtually no recourse. Many of us have experienced egregious editorial injustice, yet we simply grin and bear it. Students, on the other hand, seem to have figured out a way to vent their frustrations in a way that might, perhaps, temper the worst of academic injustice. Sites like ‘Rate My Professors’ allow students to voice their anger and frustration at what they view to be unjust or unprofessional activities. While I am the first to acknowledge that the site is relatively unmonitored and subject to potential biases and abuse – at least it provides a forum.

Academy of Management journals maintain a fairly transparent editorial policy, limiting the tenure of editors and opening up nominations to the membership. This is good practice. Why don’t ALL journals publish a code of editorial ethics? Why don’t they ALL consider grievance procedures? Where is our academic forum? Why is it that we academics have not devised a site to discuss perceived biases, unprofessional behavior, and irresponsible editing? I know, from talking with colleagues, that most of us have experienced unprofessional and sometimes outright unethical practices. Yet we sit silently, submitting our papers to yet another journal, hoping for a fair evaluation at another venue. Meanwhile, some editors, even those demonstrating deeply abusive practices, are professionally rewarded.

Is there something we can do? Does anyone have a suggestion? Or are we all ‘happy campers’?

How should we treat each other as scientific subjects?

At the Academy meeting in Vancouver this year, it was brought to my attention that some PDWs were collecting research data on participating members – without clear ethics approval or any apparent ethics protocol. That is, there was no informed consent, yet data appeared to be collected.

This was not the first time I had observed our collective avoidance of Ethics Review Board (ERB) or Institutional Review Board (IRB) protocol when surveying ourselves. As the previous chair of AOM’s ethics education committee, I was tasked with repeating the ethics survey that we had administered to our entire membership some years before. The first thing I did was ask for the ethics review board protocol, in order to be sure I was following accepted procedures.

After a few weeks of embarrassing emails and back-and-forth confirmations, it became clear that we had never submitted our own ethics survey to any kind of ethics review board. I was told that when the AOM board met to discuss the issue, there was some hesitancy to constrain the activities of divisions surveying their membership – and no clear path to indicate who would serve as an accepted IRB for Academy research. My own decision was to obtain ERB approval and protocols from my own university, and to proceed with the survey on that basis.

Many of us feel IRBs are a burden. However, it is worth noting how many of these regulations came about. For one, experiments on concentration camp victims horrified the scientific community, leading to the Nuremberg Code. Much later, Stanley Milgram’s experiments attempted to understand how people willingly agreed to do terrible things to one another. His work, as well as Philip Zimbardo’s famous prison simulation study, has led to tighter constraints on how to approach research, what is acceptable, and when ‘the line is crossed’.

One of my very first sociology professors was Laud Humphreys. He was famous for studying homosexual activity in public toilets, where he acted as the “watchqueen”. Later, he surreptitiously followed participants to their cars, recorded their license plates, and showed up at their homes disguised as a health worker conducting a survey. This was done in the 1960s, before IRBs were mandated by the US federal government.

In fact, we have Academy members who come from countries where there is little if any oversight of research, particularly social science research. However, I would argue that we have a collective responsibility to our entire membership to observe the highest standards of research protocol, despite the burden.

Our own code of ethics addresses this issue, although not as stridently as one might expect, as there is no specific mention of IRB procedures:

Participants. It is the duty of AOM members to preserve and protect the privacy, dignity, well-being, and freedom of research participants.

1.7. Informed Consent: When AOM members conduct research, including on behalf of the AOM or its divisions, they obtain the informed consent of the individual or individuals, using language that is reasonably understandable to that person or persons. Written or oral consent, permission, and assent are documented appropriately.

2.4. Anticipation of Possible Uses of Information:

2.4.1. When maintaining or accessing personal identifiers in databases or systems of records, such as division rosters, annual meeting submissions, or manuscript review systems, AOM members delete such identifiers before the information is made publicly available or employ other techniques that mask or control disclosure of individual identities.

2.4.2. When deletion of personal identifiers is not feasible, AOM members take reasonable steps to determine that the appropriate consent of personally identifiable individuals has been obtained before they transfer such data to others or review such data collected by others.

Most North American universities operate under strict IRB procedures. They are virtually unanimous in stating that all surveys involving human subjects should be reviewed by an ethics board. Here is a statement from the Canadian Tri-Council policy that governs research at Canadian universities:

If the primary purpose, design, content and/or function of such surveys is to conduct “research” involving humans, then it would generally require REB review, under TCPS Article 1.1(a).

Very similar statements appear on the Cornell University website.

At the end of the day, each of us, no matter where we do our scholarly work, has a responsibility to protect respondents as much as possible, in every conceivable way. The distance between our own behavior and that of the 16 German doctors convicted of experimenting on human beings without their consent is an essential red line that we cannot allow to become a ‘slippery slope’. Thus, even when we decide to research ourselves, as professors and colleagues, I believe we should commit to the highest standards of ethical scientific inquiry. Even if IRBs are a ‘burden’.