Voice and academia – when do we speak out?

In his classic work, Hirschman (1970) refers to ‘exit, voice, and loyalty’, noting that the easier it is to leave an environment of discontent, the weaker the voice. Voice, however, is the more helpful response, in that it communicates the sources of decline. Of course, exit from the AOM is a rather simple task; we hold no monopoly on scholarly conferences or journals in management. Yet there was recently active and serious discussion, including members mentioning leaving, boycotting, and resigning their AOM memberships.

On January 27, President Donald Trump issued the now familiar executive order restricting or banning nationals of seven countries from entering the USA. What followed, in addition to the subsequent court order blocking the directive, was a stream of protests from various organizations, including academic associations such as the APA and the ASA. I have listed many of these responses at the end of this blog for your reference. In most cases, the language is explicit: restricting individuals’ travel according to their national origin went against the values of these organizations, and their objection was unambiguous.

The order also challenged our own AOM Code of Ethics:

The AOM ensures that attention is paid to the rights and well-being of all organizational stakeholders.

 AOM members respect and protect civil and human rights and the central importance of freedom of inquiry and expression in research, teaching, and publication.


Worldview. Academy members have a duty to consider their responsibilities to the world community.

In their role as educators, members of the Academy can play a vital role in encouraging a broader horizon for decision making by viewing issues from a multiplicity of perspectives, including the perspectives of those who are the least advantaged.

Our own president, Anita M. McGahan, weighed in, but unfortunately her letter lacked the robust character of many of the statements listed below. Rather, she attempted a ‘work-around’, and I quote a few paragraphs as follows:

“First, the AOM is suspending the requirement of attendance as a condition of inclusion in the program at the Annual Meeting for those affected by the travel restrictions.  All scholars whose work is accepted to the conference but are not able to enter the United States from travel-restricted countries will have access to sessions in which they are presenting through virtual means.  Second, we will also share with you, via our website, the best information that we have about Visa application processes for those who want to attend.  We encourage any member from the affected countries who wishes to attend but cannot because of travel restrictions to contact us so that we can work with you toward participation”

“The vision of the AOM is to inspire and enable a better world through our scholarship and teaching about management and organizations.  I encourage AOM members to double down on the scholarly agenda. Let us be more engaged, creative, and committed to scholarship and teaching on the issues of our day.  Let us stand together in Atlanta in solidarity with our diverse membership as the world’s premiere association of management scholars and business-school professors.  Academic integrity is our strength.  Through our scholarly discussions and debate, we can find a way forward together.  This is the AOM’s purpose and this cannot and will not change”.


Many members, including myself, wrote letters of protest to our president. We felt it important that the AOM take a stand on this important issue. A healthy dialog subsequently ensued on numerous listservs. It turned out that Anita was constrained by AOM policies that do not allow the AOM to take political stands.

The policy was: “The Academy of Management does not take political stands. Officers and leaders are bound by this policy and may not make publicly stated political views in the name of the AOM or through use of AOM resources.”

As a result, I was very pleased to note that AOM policy has changed, albeit subtly.

The newly amended policy on political stands is: “The Academy of Management does not take political stands. Officers and leaders are bound by this policy and may not make publicly stated political views in the name of the AOM or through use of AOM resources. However, under exceptional circumstances, and with the consensual support of the Executive Committee and in consultation with the Board of Governors, the President is authorized to issue a statement on behalf of the AOM when a political action threatens the existence, purpose, or functioning of the AOM as an organization.” This policy is under embargo for 90 days.

I wish to thank Anita, the Board of Governors, our members who voiced concerns, and all the other members involved for their work in rapidly addressing this important issue head-on, acknowledging that under certain circumstances, voice is important.

While many of us are fortunate enough to live in a democracy, we are also members of a global community of scholars. We have seen what happens when communities of scholars fail to adequately rise up against measures that limit or constrain academic freedom. We need not look far to see this freedom being denied to our colleagues in various places, at this very moment. There are times when taking a political stand is necessary to meet challenges attacking the very substance of what we do as scholars. While these will hopefully be few and far between, it is important that we acknowledge our own responsibility for voice, lest we be left only with exit. If nothing else, modifying our rules has engendered more loyalty.

Statements from various associations follow:

The AAG:

Journal editors – unregulated and unmonitored

Hi Friends,

I’ve been quiet for a couple of months – summer schedule and all – and wanted to get back to the blogosphere. I’ll try to be more diligent.

Many strange things have been brought to my attention over the summer, but I thought I would start with a more personal experience. That way, if anyone wants to comment, at least one side of the equation is available.

Last spring we sent a paper in to an unnamed FT50 journal. Normally these top journals reply within three months; at least, that has mostly been my experience until now. One consequence of the enhanced competitive environment is that journal editors seem to invite submissions by promising faster turnaround.

In any case, a full six months went by without our hearing from the journal. As a result, I contacted the editor directly. The editor immediately responded, on a Friday, saying that he should have contacted me earlier and that he would ‘get on it’. By Monday, we had our rejection, along with only one review and a note from the editor saying he was unable to get a second review. He didn’t even bother adding his own comments to the rejection letter. Needless to say, the single review was not very helpful, but that is beside the point. This little exchange once again brings me to question the unchecked authority, limited transparency, and lack of professionalism sometimes exhibited by editors of even top journals. One cannot help wondering, given the importance of these gate-keeping roles, how it happens that we have processes that appear cavalier, with no recourse regarding accountability, transparency, appeal, or arbitration. In this particular case, my career does not hinge on the outcome – but I must report that in many cases where individual careers are in jeopardy, I have more often observed arrogance than compassion.

So, this brings me to raise an important question – and I must highlight that this question does NOT apply to Academy of Management journals, where transparency and fairness seem to be much more institutionalized.

Who appoints these people as editors?

Who governs their behavior?

Why do we allow autocratic and incompetent behavior by editors, even of prestigious journals?

In my view, we have a serious professional need for an equivalent of ‘rate my professor’ for academic journals. Such an idea was posed a few years ago in the Chronicle of Higher Education by Robert Deaner, who called for a “consumer reports for journals”. We could monitor and evaluate the review process, the editorial process, the time taken, and other aspects of peer review. If anyone is interested in starting such an activity, please let me know, as I think we really need some monitoring out there.

Happy Research!



Educating the educators: Truth and justice in academic publishing

It seems I can’t visit anywhere without hearing harrowing stories of unethical and abusive editors, reviewers, and scholars. Before starting this blog, I would hear the odd tale or two – but now I seem to be ground zero for the often shocking admissions of disgruntled and abused colleagues the world over!

While it would be nice to view these unfortunate confessions as a biased sample, I am beginning to believe that each of us in the profession harbors numerous examples of blatantly unethical conduct, all simmering and waiting to escape as some sort of neurotic or equally unjust retribution. In short, we may be the walking wounded. All of this has to do with our culture of scholarship. We need to ask ourselves carefully: what kind of culture are we promoting, and what are our overall objectives? How can we improve the cultural landscape that we operate in?

Just a few representative examples:

A junior colleague tells me an anonymous reviewer demands a series of coercive citations to the reviewer’s own, only tangentially relevant, work. The reviewer also discloses, in the review, who they are, along with insinuations that they know exactly who the junior scholar is. The editor forwards this review without comment.

A senior scholar reports presenting a paper with a novel analysis of public data at a conference. A few months later, she sees a conference paper, written by a member of the audience who had attended the talk, that uses the exact same methods and data. There is no mention of her paper, not even an acknowledgement. Despite her reminding the author of this sequence of events by sending a copy of the proceedings, the paper is eventually published without a word of recognition, even though the editor is aware of the circumstances.

Dog eat dog…

Finally, we have the ‘curse’ of the special issue editors. Special issues are often an unregulated Wild West. I have heard more horror stories than I can relate in this short blog, but they range from ‘tit for tat’ expectations to outstanding examples of favoritism, nepotism, and cronyism. Editors writing themselves or their friends into special issues is very common. Special issues may represent closed networks of subject reviewers who are primed to support insider work and reject outsider material. Social expectations trump scientific merit, and the entire effort becomes mired in politics.

While these are but a few examples, one begins to suspect that what is published often reflects not the quality of the research but the social processes underlying how the work is presented. Rather than rewarding the highest quality or most innovative work, we wind up with a kind of replication of the norm. We pat each other on the back regarding our methodological rigor, without really considering the accuracy or consequences of our efforts. No wonder managers in the ‘real world’ seldom pay attention to anything we do.

All of which suggests that we need more transparency in our publication and review process, as well as more insight into the methodological and philosophical rigour we bring to our work. The idea of double-blind review is good – as long as it is truly double blind, and the objective is to enhance the quality of the subsequent product. However, all too often we are simply going through a well-rehearsed process of convincing the editors and reviewers that our work is normative, while they go through the ritual of telling us how to frame an acceptable ‘story’ that meets their standards, irrespective of the accuracy of the work.

In a very insightful article in the 60th-anniversary issue of ASQ, Bill Starbuck points out the inconsistencies in reviewer evaluations, including the problems faced by submissions from ‘low status institutions’, convoluted evaluation formulas, and ambiguous editorial feedback. He also highlights the signalling inherent in language usage, whereby reviewers can identify the origin of a particular manuscript’s authors.

Next, Bill tackles our efforts to inflate the importance of our work irrespective of its actual merit, sometimes leading to corrupt methodologies such as HARKing (hypothesizing after the results are known) and p-hacking (subjecting data to multiple manipulations until some sort of pattern emerges), both of which misrepresent the accuracy of the theories discussed. Bill points out that this leads to “a cynical ethos that treats research as primarily a way to advance careers”.

Bill concludes that cultural changes are needed, but that they happen only slowly. Senior scholars must take a very visible lead – editors and reviewers alike. In the end, it’s really a matter of education.

I fully agree with Bill – we need to start looking at ourselves carefully in the mirror, stop quoting our individual h-indexes, and begin the difficult task of educating ourselves about how to advance our scientific capabilities.





When journal editors are unprofessional

I recently read a New York Times article highlighting an obvious conflict: stock analysts who own stock or options in the companies they are evaluating, or who retain close ties with those companies. It’s kind of horrifying to think that what is regarded as objective, unsolicited advice may really be individuals trying to ‘game’ the system by pushing up the price of their options for personal gain. Of course, that’s Wall Street; we’ve seen it before, and I’m sure we’ll see it again. But it got me thinking – what about journal editors?

Journal editors make decisions, often with considerable career implications, but their relationships with the persons they evaluate, and the way they make decisions, are entirely opaque. It’s not as though there is an appeals board one can go to if one has been slighted by an editor who bears a grudge against an author, their university, or even the theoretical or methodological paradigm they are writing about. This opens up not only questions of abuse of power and self-interest, but also of due process.

We all want to think that the blind review process is objective – but what about reviewer selection? What about other practices? I don’t have to go far to find a litany of editorial abuses. Just scratching the surface, we find the ‘tit for tat’ exchange: “I will publish your paper in my journal, with the expectation that you will reciprocate with a publication in your journal”. There is the special issue editor who always seems to publish good friends and colleagues from their particular sphere of influence; special issue editors are a particular problem, as they seem to go relatively unregulated. These practices effectively reduce the probability of a general submission being accepted, as few slots are left for the genuine public of scholars. We also have coercive citation abuse, whereby the editor’s R&R letter informs the author that they need to cite the editor’s journal (to improve its impact factor). And, of course, we have the form-letter rejection, sometimes not even reflecting the contents of the paper submitted, or addressing the material in any way that demonstrates the editor actually read it.

What I find particularly surprising is that there is virtually no recourse. Many of us have experienced egregious editorial injustice, yet we simply grin and bear it. Students, on the other hand, seem to have figured out a way to vent their frustrations that might, perhaps, temper the worst of academic injustice. Sites like ‘rate my professor’ allow students to voice their anger and frustration at what they view as unjust or unprofessional activities. While I am the first to acknowledge that the site is relatively unmonitored and subject to potential biases and abuse, at least it provides a forum.

Academy of Management journals maintain a fairly transparent editorial policy, limiting the tenure of editors and opening up nominations to our membership. This is good practice. Why don’t ALL journals publish a code of editorial ethics? Why don’t they ALL consider grievance procedures? Where is our academic forum? Why is it that we academics have not devised a site to discuss perceived biases, unprofessional behavior, and irresponsible editing? I know, from talking with colleagues, that most of us have experienced unprofessional and sometimes outright unethical practices. Yet we sit silently, submitting our papers to yet another journal, hoping for a fair evaluation at another venue. Meanwhile, some editors, even those demonstrating deeply abusive practices, are professionally rewarded.

Is there something we can do? Does anyone have a suggestion? Or are we all ‘happy campers’?

What are we professors to do? Are we better than VW?

As members of the Academy, we each hold responsibility for upholding our professional ethics. Once the ‘egg is broken’, it will be very hard to re-establish public confidence. VW, for instance, will undoubtedly have a long road in convincing the public that their organization acts in an ethically responsible way. While the public seemed quick to forgive GM for ignoring a faulty ignition switch, they are less willing to forgive systemic, premeditated corruption. We have seen the fallout from the American Psychological Association affair, in which members advised on how best to conduct torture, and its impact on that community. In short, many of these professional ethical issues have a way of affecting our field for the long run.

In the last posting, Greg Stephens, AOM’s ombudsperson, outlined the range of issues the Ombuds Committee examines, with instructions on how to proceed should you face a professional ethics dilemma. They do a fantastic job, often behind the scenes, and we should be very appreciative of their hard work.

Of course, these mechanisms are primarily relevant to things that happen in and around the Academy of Management. If you observe something at another conference, or at a non-AOM-sponsored journal, there may be few if any options for you to pursue.

A recent AOM censure ruling, the first I can recall seeing, included the sanctioning of an Academy member. Professor Andreas Hinterhuber had submitted a previously published paper for consideration at the upcoming Annual Meeting. The ruling was as follows:

The final sanctions include disqualification from participation in Academy of Management activities (including but not limited to submission to the conference, participation on the conference program, serving the Academy or any of its Divisions or Interest Groups in an elected or appointed role, or submission to any of the Academy of Management journals) for a period of three (3) years, public notice of the violation through publication in the AcadeMY News; formal notification to the journal where the work was previously published, and ethics counseling by the Ombuds Committee.

Seeing a public and formal sanction is a good professional start, and I applaud our organization for taking the trouble to demonstrate that we have professional limits that should be honored. However, what if Professor Hinterhuber were found to have done the same thing at, say, EGOS or BAM? Or what about someone who submits a paper simultaneously to two different journals for review? Would the consequences be the same? Likely not.

It would seem to me that we would all benefit from a larger professional ‘tent’ in which public notices of violations and censure were more systematically discussed. I find it very odd that, out of 20,000 members, public censure is so rare (this is the first I am aware of, although there may have been other non-public consequences). Every year I hear of multiple cases of doctors and lawyers being disbarred. The odds of misconduct are presumably similar in our profession, but the consequences are far smaller and public censure far rarer. This can only provide incentives to engage in unprofessional conduct. I am not suggesting we begin a yellow-journalistic finger-pointing exercise, only that, given the rise in competition and the important stakes involved in our profession, we should collectively think about professional monitoring, public dialog, and the provision of clear ethical guidelines in our doctoral and professional career development.

Your thoughts on the matter are welcome.

Predatory journals, and the arrogance of peer review

Sorry for the long absence, but I’ve been on the road quite a bit lately, providing me with an excuse for taking a short holiday from blogging in the Ethicist.

I recently returned from Nairobi, Kenya, where I was involved in running a track for a management conference for the Africa Academy of Management. The conference was a wonderful success, and it was great seeing so many management scholars from all over the world converging on Africa.

Of course, with any international scholarly conference, there are significant cultural norms, attitudes, and differences that we carry with us to our meetings. I thought it would be worthwhile to discuss just one of them, the perennial elephant in the room: publication, and in particular, predatory publication.

Surprisingly, while I was attending the conference, our associate dean back home in Canada circulated a tract on the hazards of predatory journals. In particular, the email informed the faculty of Beall’s list of predatory publishers. The list provides perhaps a dozen journal titles specifically tailored toward management scholars. It also flags so-called “hijacked journals”, which emulate the name or acronym of famous journals, presumably to confuse readers, as well as misleading metrics, whereby journals publish misleading impact factors (note: inflating an impact factor through journal self-citation is NOT considered a misleading practice!). So, for example, there are two ‘South African Journal of Business Management’ publishers, presumably a legitimate one and a ‘hijacked’ one. Information Systems Management must contend with the ‘Journal of Information System Management’, and so on.

What surprised me most about our Canadian exchange was the reaction of some of my colleagues. An initial request was made to indicate that journals from this list would not be considered for hiring, tenure, or promotion. This seemed like a reasonable request. Surprisingly, there was considerable debate, ranging from “who created this list, anyway? It seems subjective” to “we’ve always been able to distinguish quality in the past; we should just continue as we have always done”.

While this was going on at the home front, my African colleagues were telling me stories of their own. Publication is now de rigueur for academic hiring and promotion at most African universities, even though many have barely established a research culture of their own. Incentives vary widely, but many institutions pay bonuses for publications, and faculty are often offered opportunities to publish in little-known journals for a considerable ‘publication fee’. During our ‘how to publish’ seminars, faculty repeatedly asked us how to distinguish these predatory journals from the ‘other’ ones. Young scholars proudly shared how they had published six or seven articles in two years (in what journals, one might ask?). Doctoral students asked how to deal with advisers who insist on having their names on publications despite having absolutely nothing to do with any aspect of the research in question. In short, they had little information regarding the full range of scholarship, their own institutions rarely subscribed to the better journals, and they were often working in the dark regarding quality and scholarly norms.

So, coming full circle, it seems we have a problem of global proportions, one that might impact weaker institutions somewhat more (those without the governing systems to adequately ‘sniff out’ the corruption), but one that nevertheless impacts all of our work.

Of course, I can’t help but reflect on the culture I live in – North America (though I spend a lot of time in Europe as well). So many of us would like to thumb our noses at our less fortunate colleagues and explain to them, with our own self-importance, how our standards of publication reign supreme and are only to be emulated. To those of you, I’d like to refer you to Andrew Gelman’s recent posting critiquing ‘power pose’ research, which points out some serious weaknesses of our peer review system. Gelman points out that “if you want to really review a paper, you need peer reviewers who can tell you if you’re missing something within the literature—and you need outside reviewers who can rescue you from groupthink”, and that “peer-review doesn’t get you much. The peers of the power-pose researchers are . . . other power-pose researchers. Or researchers on embodied cognition, or on other debatable claims in experimental psychology. Or maybe other scientists who don’t work in this area but have heard good things about it and want to be supportive of this work.”

So, let’s come full circle for a moment. We are in an international arena. Institutional norms are diffusing such that everyone wants to get into the same ‘game’. However, the rules of the game are subtle, often manipulated, rarely challenged, and heavily biased in favor of insiders over outsiders. No doubt, clarity would help everyone involved. How can we overcome our own blind spots? How can we validate and authenticate the publication process? What kind of measures might we employ to do so?

I disagree with my colleagues who argue ‘it worked in the past, we can continue doing it in the future’. First, I’m not certain how effective we were in the past. There may be numerous failings hidden inside our collective closets, some of which may come to light in the form of future retractions. Second, I’m not so certain we’ve made enormous progress in our own field of study. And finally, and most importantly, new mechanisms for corruption, cheating, and exploiting seem to pop up each and every day.

Which brings me to the central question I’ve been pondering: What can we do, as a community, to improve the quality of our work, while sifting out potential corrupt forces?




How are we evaluated as scholars?

Considerable effort is expended on tenure reviews, letters of recommendation, and extensive reports on citation counts and the impact factors of scholarly journals. Many junior faculty tell me that they are required to publish in only a very limited number of ‘high impact’ journals – often as few as five. In fact, one scholar surprised me with this requirement: not only was the university where he taught not particularly top tier, but neither were the colleagues and dean imposing the standard. Yet, without the five requisite articles, he was out looking for another job – a total waste for both the institution and the scholar, who is very promising and has already ‘delivered’.

The number of universities erecting these types of barriers seems to be growing, despite increasingly difficult hurdles and ridiculously low odds of having a paper accepted for publication in one of these ‘sacred’ journals. It is as though tenure committees no longer have the capacity to think, to read, or to adjudicate. They just want a simple formula, and are just as happy to send a young scholar to another institution as they are to keep them. I just don’t see how that enhances the reputation or quality of the institution. Don’t we want to invest in our human capital? Are faculty simply a number generated by a few editors or by Google Scholar? Is there no purpose whatsoever to the community and teaching activities we engage in, or to publication outlets that might be more inclusive than the very top five?

I’ve attended numerous editorial board meetings over the years, and I would say that half of the time at these meetings revolves around the journal impact factor. Editors with dropping impact factors seem ashamed and hasten to share new strategies. I myself have observed the removal of case studies and other non-citable material from journals, justified primarily as enhancing citation impact. Editors with increasing impact factors loudly and broadly share their newfound success like proud grandparents. Given this emphasis, one would think that a set of standard practices would be in order to compare one journal fairly with another. And yet the reality is far from achieving even a semblance of objectivity.

For starters, many editors encourage authors to heavily cite the editors’ own journals, a practice reflected in the ‘coercive citation’ literature. In fact, a look at the Thomson list of journal citation impact factors shows that many journals have heavily inflated impact factors due primarily to self-citation. JCR, the primary database for comparison, does provide a measure discounted by self-citations, but it is rarely used or referred to. Small fields claim high self-citation rates are necessary, as there is little information on their subject matter elsewhere. However, self-citation can also be a convenient way to inflate the standing of the editor, editorial board, and influential reviewers and gatekeepers. A very strange example of editorial manipulation occurred a couple of years ago in the form of a citation cartel, whereby the editor of one journal edited a few special issues in two other journals. By directing the scholars in those special issues to cite the first journal, its impact factor grew to embarrassingly undeserved heights, eventually resulting in the editor’s resignation.
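To make the self-citation discount concrete, here is a minimal sketch of the two-year impact factor arithmetic, computed with and without journal self-citations. All of the numbers (and the function name) are invented for illustration; they do not describe any real journal.

```python
# Two-year journal impact factor: citations received this year to
# articles published in the previous two years, divided by the number
# of citable items published in those two years.

def impact_factor(citations, citable_items):
    """Return citations per citable item over the two-year window."""
    return citations / citable_items

total_citations = 300   # hypothetical citations to the prior two years
self_citations = 90     # of which, the journal citing itself
citable_items = 100     # hypothetical articles in the prior two years

jif = impact_factor(total_citations, citable_items)
jif_discounted = impact_factor(total_citations - self_citations, citable_items)

print(f"JIF with self-citations:    {jif:.2f}")
print(f"JIF without self-citations: {jif_discounted:.2f}")
```

On these hypothetical figures, excluding self-citations drops the impact factor from 3.00 to 2.10, which is why the discounted JCR measure can paint a noticeably different picture of a journal.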

Now, a recent article has uncovered yet another cynical editorial ‘trick’ to bias statistics and provide a higher impact factor.

An article by Ben Martin in Research Policy entitled “Editors’ JIF-boosting stratagems” highlights the many ways editors now upwardly bias their numbers (a nice summary of the article is provided by David Matthews in Times Higher Education). The ‘tricks’ are impressive, including keeping accepted articles in a long online queue (ever wonder why your accepted paper takes two years to reach print?). This ensures that once a paper is formally published, it already has a good number of citations attached to it. As Ben states: “By holding a paper in the online queue for two years, when it is finally published, it is then earning citations at the Year 3 rate. Papers in Year 3 typically earn about the same number of citations as in Years 1 and 2 combined, and the Year 4 figure is broadly similar. Hence, the net effect of this is to add a further 50% or so to the doubling effect described above (the JIF accelerator effect)”.
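Martin’s ‘accelerator’ arithmetic can be sketched in a few lines. The per-year citation counts below are invented, chosen only to match his observation that Year 3 citations roughly equal Years 1 and 2 combined and that Year 4 is broadly similar; the point is simply how shifting the two-year JIF window changes what gets counted.

```python
# Hypothetical citations earned per article, by years since the work
# first appeared online (Year 3 ~ Years 1 + 2 combined; Year 4 similar).
citations_by_age = {1: 2, 2: 4, 3: 6, 4: 6}

def jif_window_citations(queue_years):
    """Citations per article captured by the two-year JIF window when
    formal publication is delayed `queue_years` after online posting."""
    return sum(citations_by_age[queue_years + y] for y in (1, 2))

prompt_pub = jif_window_citations(0)   # published promptly: Years 1-2 count
delayed_pub = jif_window_citations(2)  # held two years: Years 3-4 count

print(prompt_pub, delayed_pub, delayed_pub / prompt_pub)  # 6 12 2.0
```

On these toy numbers, the delayed paper contributes twice the citations to the JIF calculation, before even counting the citations it accumulated while sitting in the online queue.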

One top management journal reportedly has 160 articles in its queue; another, on management ethics, has 600! Other strategies reported include cherry-picking which articles to hold back, and encouraging review articles, which get widely cited.

In sum, it appears that the ‘objective’ measures we tend to employ regarding journal quality and citation impact are far from objective, and subject to considerable bias and manipulation.

Isn’t it about time that tenure committees read material and focus on content, rather than on a publication’s location? Perhaps we, as a community, can provide a ‘gold standard’ set of recommendations? What do you think?


Student abuse by faculty

I’ve been doing a lot of traveling lately, which in part accounts for my relative silence over the past few weeks. In the course of my travels, however, I keep coming across a set of similar stories from around the world, principally from developing countries, in particular China and several African nations.

The stories tend to be similar, centering on the exploitation of doctoral students or junior faculty members, and go like this:

“At our university, the senior faculty insist that their names go first on every paper I produce, even those that they have not contributed to, in any way”.


“At our university, doctoral students do all the data collection, analysis, and writing, but our names are never put on the paper – only those of the senior faculty”.

When I hear these stories, as both a scholar and an editor, I am outraged. How is it that faculty members can openly exploit students and junior scholars in such a blatant fashion? Why is it that no professional organization exists to come to their defense? Unfortunately, we are collectively, if partially, responsible, as the publish-or-perish norms and intense competition that we have helped develop only exacerbate this problem. It is, after all, a collective problem.

Of course, the exploitation of students is not entirely new. The recent movie “The Stanford Prison Experiment” shows the extent to which Zimbardo went during his study, and the price those participants must have paid. Zimbardo, then an ambitious, recently tenured scholar, was a consultant on the film, which reportedly reflects accurately what took place. His subsequent work was devoted to more pristine, socially progressive causes, such as understanding shyness. No surprise there…

Our code of ethics does address these issues, although not as directly as one might think. For example, on the aspiration side, with regard to integrity:

  1. Integrity

AOM members seek to promote accuracy, honesty, and truthfulness in the science, teaching, and practice of their profession. In these activities AOM members do not steal, cheat, or engage in fraud, subterfuge, or intentional misrepresentation of fact. They strive to keep their promises, to avoid unwise or unclear commitments, and to reach for excellence in teaching, scholarship, and practice. They treat students, colleagues, research subjects, and clients with respect, dignity, fairness, and caring. They accurately and fairly represent their areas and degrees of expertise.

Regarding specifically students:

  1. To our students. Relationships with students require respect, fairness, and caring, along with commitment to our subject matter and to teaching excellence. We accomplish these aims by:

Showing respect for students. It is the duty of AOM members who are educators to show appropriate respect for students’ feelings, interests, needs, contributions, intellectual freedom, and rights to privacy.

Maintaining objectivity and fairness. It is the duty of AOM members who are educators to treat students equitably. Impartiality, objectivity, and fairness are required in all dealings with students.

1.6. Exploitative Relationships: AOM members do not exploit persons over whom they have evaluative or other authority, such as authors, job seekers, or student members.

And finally,

4.2.2. Authorship Credit: AOM members ensure that authorship and other publication credits are based on the scientific or professional contributions of the individuals involved. AOM members take responsibility and credit, including authorship credit, only for work they have actually performed or to which they have contributed. AOM members usually list a student as principal author on multiply authored publications that substantially derive from the student’s dissertation or thesis.

I underlined the word “usually” above. I assume that the exceptions referred to are situations where the student would not be listed as principal author but would still be listed as a co-author (although what these situations would be, and why they would count as exceptions, is a bit of a mystery to me). However, it appears that, from the perspective of some of our international members, this ‘exception’ may leave open the interpretation that a scholar may occasionally NOT list a student as co-author, even when the student is a principal contributor. Further, and unfortunately, there is no mention of adding authorship to work where the scholar did NOT make a contribution (although the code does seem to imply that such a deed would not be welcome).

So, what can we do collectively to eradicate this practice of exploitative advising? How can we get the message across different cultures and institutions that when an author submits a paper, the assumption by the editor – indeed, the social contract – is that an appropriate amount of work has been conducted by each author, reflected in the order of authorship? Further, perhaps it is time our code of ethics became a bit more specific regarding some of these practices, in order to make it explicitly clear that exploitation of any sort is unwelcome in our profession.

How should we treat each other as scientific subjects?

At the Academy meeting in Vancouver this year, it was brought to my attention that some PDWs were collecting research data on participating members – without clear ethics approval or any apparent ethics protocol. That is, there was no informed consent, yet data appeared to be collected.

This was not the first time I had observed our collective avoidance of Ethics Review Board (ERB) or Institutional Review Board (IRB) protocols when surveying ourselves. As a previous chair of AOM’s ethics education committee, I was tasked with repeating the ethics survey that we had administered to our entire membership some years before. The first thing I did was ask for the ethics review board protocol, in order to be sure I was following accepted procedures.

After a few weeks of embarrassing emails and back-and-forth confirmations, it eventually became clear that we had never submitted our own ethics survey to any kind of ethics review board. I was told that when the AOM board met to discuss this issue, there was some hesitancy to constrain the activities of divisions surveying their membership – and no clear path to indicate who would serve as an accepted IRB for Academy research. My own decision was to obtain ERB approval and protocols from my own university, and to proceed with the survey in that manner.

Many of us feel IRBs are a burden. However, it is worth recalling how many of these regulations came about. For one, the experiments on concentration camp victims horrified the scientific community, leading to the Nuremberg Code. Much later, Stanley Milgram’s experiments attempted to understand how people willingly agreed to do terrible things to one another. His work, as well as Zimbardo’s famous prison simulation study, led to tighter constraints on how to approach research, what is acceptable, and when ‘the line is crossed’.

One of my very first sociology professors was Laud Humphreys. He was famous for studying homosexual activity in public toilets, where he acted as the “watchqueen”. Later, he surreptitiously followed participants to their cars, recorded their license plates, and showed up at their homes disguised as a health survey worker. This was done in the 1960s, before IRBs were mandated by the US federal government.

In fact, we have Academy members who come from countries where there is little if any oversight of research, particularly social science research. However, I would argue that we have a collective responsibility to observe the highest standards of research protocol for our entire membership, despite the burden.

Our own code of ethics addresses this issue, although not as stridently as one might expect, as there is no specific mention of IRB procedures:

Participants. It is the duty of AOM members to preserve and protect the privacy, dignity, well-being, and freedom of research participants.

1.7. Informed Consent: When AOM members conduct research, including on behalf of the AOM or its divisions, they obtain the informed consent of the individual or individuals, using language that is reasonably understandable to that person or persons. Written or oral consent, permission, and assent are documented appropriately.

2.4. Anticipation of Possible Uses of Information:

2.4.1. When maintaining or accessing personal identifiers in databases or systems of records, such as division rosters, annual meeting submissions, or manuscript review systems, AOM members delete such identifiers before the information is made publicly available or employ other techniques that mask or control disclosure of individual identities.

2.4.2. When deletion of personal identifiers is not feasible, AOM members take reasonable steps to determine that the appropriate consent of personally identifiable individuals has been obtained before they transfer such data to others or review such data collected by others.

Most North American universities operate under strict IRB procedures. They are virtually unanimous in stating that all surveys involving human subjects should be reviewed by ERB committees. Here are a few statements from the Canadian “Tri-Council” policy that governs Canadian universities:

If the primary purpose, design, content and/or function of such surveys is to conduct “research” involving humans, then it would generally require REB review, under TCPS Article 1.1(a).

Very similar statements appear on the Cornell University website.

At the end of the day, each of us, no matter where we do our scholarly work, has a responsibility to protect respondents as much as possible, in every conceivable way. The distance between our own behavior and that of the 16 German doctors convicted of experimenting on human beings without their consent is an essential red line that we cannot allow to become a ‘slippery slope’. Thus, even when we decide to research ourselves, as professors and colleagues, I believe we should commit to the highest standards of ethical scientific inquiry – even if IRBs are a ‘burden’.




Should an anonymous peer review always remain anonymous?

From all accounts, the Academy meeting in Vancouver was a huge success. We had record-breaking attendance, beautiful weather, and a wealth of interesting and provocative sessions. I also had a few interesting ethics-related discussions, and I thought it would be worthwhile sharing some of them over the next few blogs. The first ‘discussion’ had to do with peer review disclosure.

I was having a conversation with a very well-known scholar when another colleague approached us, recognized this individual, and proceeded to tell him how much he had liked the paper published in journal XYZ, as he had been one of the blind reviewers for that article. Suddenly realizing that he was standing next to the ethicist blogger, he looked at me and stated, “Oh, maybe I shouldn’t have disclosed that – what do you think, Benson?” As the well-known scholar rolled his eyes, I proceeded to explain that, in my opinion, a blind review is designed to be anonymous not only before publication, but afterwards as well. The reviewer wanted to know why, and I shared my own perspective: identifying oneself as a reviewer of a published work can only create some sort of obligation – a social exchange that might later be bartered into some sort of expected favor. After all, if we didn’t expect continued anonymity, why wouldn’t journals simply announce, upon publication, the names of the blind reviewers? Surely they would deserve some of the credit for the publication? My view is that reviewing should remain an anonymous volunteer activity, dissociated from any sense of possible obligation or appreciation beyond what the editor and author (anonymously) provide. Forever…

Later on, I discussed this small incident with another senior scholar and an editor. Surprisingly, the editor saw no reason not to disclose the review after publication. His point was that once the publication was accepted, disclosure would only show support and respect for the work undertaken. The other senior scholar in the conversation pointed out the ‘slippery slope’ problem – that opening this door would suggest other possible avenues of influence. For example, if you were the one reviewer who wrote the most problematic reviews, would you want this disclosed? If a paper had two favorable reviewers and one who was a real ‘pain’, would it be fair to help reveal, through a process of elimination, who that third person might be? Further, if you let it be known that you were the ‘good cop’ in the reviewing process, would you develop a reputation that attracted certain benefits or advantages that more reticent reviewers would forgo?

In searching for an official answer to these questions, I began with our own code of ethics. While it addresses the issue of confidentiality, there is insufficient detail to indicate precisely what our normative behavior should be, although there is a clear emphasis on maintaining confidentiality. Specifically:

Maintaining Confidentiality:

2.1.1. AOM members take reasonable precautions to protect the confidentiality rights of others.

2.1.2. Confidential information is treated as such even if it lacks legal protection or privilege.

2.1.3. AOM members maintain the integrity of confidential deliberations, activities, or roles, including, where applicable, those of committees, review panels, or advisory groups (e.g., the AOM Placement Committee, the AOM Ethics Adjudication Committee, etc.). In reviewing material submitted for publication or other evaluation purposes, AOM members respect the confidentiality of the process and the proprietary rights of those who submitted the material.

Given our own code is not particularly explicit, I took a look at the peer review policy of Nature, one of the preeminent scientific journals of our time:

As a condition of agreeing to assess the manuscript, all reviewers undertake to keep submitted manuscripts, associated data, and their own peer review comments confidential, and not to redistribute them without permission from the journal. If a reviewer seeks advice from colleagues while assessing a manuscript, he or she ensures that confidentiality is maintained and that the names of any such colleagues are provided to the journal with the final report. By this and by other means, Nature journals endeavour to keep the content of all submissions confidential until the publication date other than in the specific case of its embargoed press release available to registered journalists. Peer review comments should remain confidential after publication unless the referee obtains permission from the corresponding author of the reviewed manuscript and the Nature journal the comments were delivered to. Although we go to every effort to ensure reviewers honour their promise to ensure confidentiality, we are not responsible for the conduct of reviewers.

Following that, I also took a look at the policy of Science, another outstanding scientific journal:

Confidentiality: We expect reviewers to protect the confidentiality of the manuscript and ensure that it is not disseminated or exploited. Please destroy your copy of the manuscript when you are done. Only discuss the paper with a colleague with permission from the editor. We do not disclose the identity of our reviewers.

Thus, while some journals seem to indicate continued confidentiality is expected, it appears that there may be differences of opinion, interpretation, and possibly even confusion regarding what is expected of a blind reviewer, and what would be considered professional or unprofessional conduct.

It would be great if some of our members weighed in with their own opinions on this matter: do you think it appropriate for a reviewer to disclose, after publication, that they were part of the double-blind process?