A while back, I got into an interesting discussion with Andrew on the subject of courage, which stemmed from my temporary reticence about speaking my mind in public, or my resentment, if you will, of my “obligation to publish”. (I’m happy to say that I’m much better now.) One thought led to another and I soon found myself warning against a situation in which it might take “heroic amounts” of courage to tell the truth in the social sciences, management studies included. Andrew rightly found that prospect depressing.

But along the way I also noticed the particular virtue that might make all the difference here. It’s an insight worth explicating, if for no other reason than to reveal its flaws. (Let me know if you can spot any, please.) It seems to me that we depend on the decency of others not to make too great demands of our courage. What is this strange comportment we call decency that it could have this power?

In the comments, Erik suggested that the anonymity of peer review removes the need for a great deal of courage. And in an important sense, this is exactly the sort of thing I mean. It’s not that I think anonymous reviewers are congenital cowards, though I’m sure many disgruntled authors would like me to validate them in this belief. Rather, since it takes no courage to review a paper (in ordinary cases), we have to rely on the reviewer’s decency. Since they are protected from our personal judgment of them, we can only hope that they will not exploit their freedom to cruelly abuse us, or lead us on a wild goose chase for pointless references, or waste our time with needless revision. We count on them not to reject (or accept) us for their own personal gain, and to tell us honestly what they think of our work.

But by the same token, where strong institutions ensure decency, e.g., where editorial oversight protects authors from unhelpful reviews, it also takes less courage to submit a paper for review. We know the editor is not going to let our reviewers abuse us, and we can rest assured that if they do form a very negative opinion of our work, they will not be able to form a correspondingly negative opinion of us.

I’ll never forget the lightbulb that went off in my head many years ago when I was reading Edward Johnson’s Handbook of Good English. He said that an editor’s job is to “protect the author from criticism”, meaning unconstructive complaints about language and grammar from the end reader. An associate editor’s job at a journal, by extension, is to protect the author from unconstructive criticism of their ideas: first from the reviewers, by demanding a certain standard of them, and thereafter from readers, by selecting competent reviewers who are actually able to evaluate the strengths and weaknesses of the paper that has been submitted.

I like to think of decency as the virtue of “immediate rightness”, or appropriateness in the moment. It’s a matter of keeping the surfaces of social interaction tolerably pleasant. Our code requires us to “respect the dignity and worth of all people” in our activities as management scholars and professionals. We might also say we are bound to be decent. It’s akin to “civility”, but that will have to be a topic for another day.

How are we evaluated as scholars?

Considerable effort is expended on tenure reviews, letters of recommendation, and extensive reports on citation counts and the impact factors of scholarly journals. Many junior faculty tell me that they are required to publish in only a very limited number of ‘high impact’ journals – often as few as five. In fact, one scholar surprised me with this requirement: not only was the university where he taught not particularly top tier, but neither his colleagues nor the dean were the ones imposing the standard. Yet, without the five required articles, he was out looking for another job. A totally wasted effort on the part of the institution and the scholar, who is very promising and has already ‘delivered’.

The number of universities imposing these types of barriers seems to be growing, despite increasingly difficult hurdles and ridiculously ‘low’ odds of having a paper accepted for publication in one of these ‘sacred’ journals. It is as though tenure committees no longer have the capacity to think, to read, or to adjudicate. They just want a simple formula, and are just as happy to send a young scholar to another institution as they are to keep them. I just don’t see how that enhances the reputation or quality of the institution. Don’t we want to invest in our human capital? Are faculty simply a number generated by a few editors or by Google Scholar? Is there no purpose whatsoever to the community and teaching activities that we might be engaged in, or to the publication outlets we seek that might be more inclusive than the very top five?

I’ve attended numerous editorial board meetings over the years, and I would say that half of the time dedicated to these meetings revolves around the issue of journal impact factor. Editors with dropping impact factors seem ashamed and hasten to share new strategies. I myself have observed the removal of case studies and other non-citable material from journals, justified primarily as a way to enhance citation impact. Editors with increasing impact factors loudly and broadly share their newfound success like proud grandparents. Given this emphasis, one would think that a set of standard practices would be in order to compare one journal, fairly, with another. And yet, the reality is far from achieving even a semblance of objectivity.

For starters, many editors encourage authors to heavily cite their own journals, a practice reflected in the ‘coercive citation’ literature. In fact, a look at the Thomson list of citation impact factors for journals shows that many journals have heavily inflated impact factors due primarily to self-citation. JCR, the primary database for comparison, does provide a measure discounted by self-citations, but this is rarely used or referred to. Smaller fields claim that high self-citation rates are unavoidable, as there is little information on their subject matter elsewhere. However, this can also be a convenient way to inflate the work of the editor, editorial board, and influential reviewers and gatekeepers. A very strange example of editorial manipulation occurred a couple of years ago in the form of a citation cartel, whereby the editor of one journal edited a few special issues in two other journals. By directing the scholars in those special issues to cite his own journal, its impact factor grew to embarrassingly undeserved heights, resulting in the resignation of that editor.
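
To get a feel for how much difference that discounted measure can make, here is a minimal sketch of the arithmetic (Python; every number below is an illustrative assumption, not data from any actual journal):

    # Illustrative self-citation arithmetic. All figures are assumed
    # for the sake of the example, not taken from any real journal.
    citable_items = 100     # articles published in the two-year JIF window
    total_citations = 500   # citations those articles received this year
    self_citations = 200    # of which, citations from the journal itself

    jif = total_citations / citable_items
    jif_without_self = (total_citations - self_citations) / citable_items

    print(f"Impact factor: {jif:.1f}")                               # 5.0
    print(f"Discounted for self-citation: {jif_without_self:.1f}")   # 3.0

On these assumed numbers, a journal that looks comfortably more ‘impactful’ than its peers on the headline figure looks quite ordinary on the discounted one.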

Now, a recent article has uncovered yet another cynical editorial ‘trick’ for biasing the statistics and producing a higher impact factor.

An article by Ben Martin in Research Policy entitled “Editors’ JIF-boosting Stratagems” highlights the many stratagems editors now employ to bias their impact factors upward. (A nice summary of the article is provided by David Matthews in the Times Higher Education.) The ‘tricks’ are impressive, including keeping articles in a long queue (ever wonder why your accepted paper takes two years to reach print?). This ensures that once a paper is published, it will already have a good number of citations attached to it. As Martin puts it: “By holding a paper in the online queue for two years, when it is finally published, it is then earning citations at the Year 3 rate. Papers in Year 3 typically earn about the same number of citations as in Years 1 and 2 combined, and the Year 4 figure is broadly similar. Hence, the net effect of this is to add a further 50% or so to the doubling effect described above (the JIF accelerator effect)”.
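
To make the accelerator arithmetic concrete, here is a minimal sketch (Python) comparing the citations that fall inside the two-year JIF window with and without a two-year online queue. The per-year citation rates are illustrative assumptions patterned on Martin’s description, not measured data:

    # Toy model of the 'JIF accelerator' effect. The per-year citation
    # rates below are illustrative assumptions only: Year 3 roughly
    # equals Years 1 and 2 combined, and Year 4 is broadly similar.
    citations_by_age = {1: 1.0, 2: 2.0, 3: 3.0, 4: 3.0}

    def window_citations(first_counted_age):
        """Citations counted in the two-year JIF window when 'official'
        publication is delayed so counting starts at this paper age."""
        return (citations_by_age[first_counted_age]
                + citations_by_age[first_counted_age + 1])

    no_queue = window_citations(1)    # counted at Year 1-2 rates: 3.0
    long_queue = window_citations(3)  # counted at Year 3-4 rates: 6.0

    print(f"Counted citations, no queue: {no_queue}")
    print(f"Counted citations, two-year queue: {long_queue}")
    print(f"Inflation: {long_queue / no_queue:.1f}x")

Under these toy rates, simply parking accepted papers online for two years doubles the citations that count toward the journal’s impact factor.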

One top management journal reportedly has 160 articles in its queue; another, on management ethics, has 600! Other strategies reported include cherry-picking which articles to hold back, and encouraging review articles, which tend to be widely cited.

In sum, it appears that the ‘objective’ measures we tend to employ regarding journal quality and citation impact are far from objective, and subject to considerable bias and manipulation.

Isn’t it about time that tenure committees read material and focus on content, rather than on a publication’s location? Perhaps we, as a community, can provide a ‘gold standard’ set of recommendations? What do you think?



“Thus conscience does make cowards of us all.”


In the comments to my last post, Andrew quite literally encouraged me to speak my mind. Truth be told, I’ve always been ambivalent about “intellectual courage”. Sometimes the exercise of our ethical obligations seems to require us to be courageous. But is courage itself an ethical obligation?

Courage is, of course, a virtue and it is presumably what is required of us when we “speak truth to power”. In the paradigm case, some form of social power asks us to lie or to remain silent, and when we defy this power we exercise courage. The consequences can be quite serious because, in so far as the power is real, it is also dangerous. If the powerful person or institution we are defying chooses to punish us for speaking the truth, then it has, by definition, the power to do so.

To understand my ambivalence, consider the ethical obligations that follow from being physically strong. “Ought,” they say, “implies can.” If someone is trapped under a car I have an ethical obligation to lift the car off them, but only, of course, if I have the strength to do so. Is courage a kind of “strength” in that sense?

Courage is a virtue and cowardice is a vice. But some part of our everyday moral psychology also sees them as character traits, i.e., as qualities we are either born with or develop through practice but, in any case, simply have a certain amount of at any given time. Suppose I know a “truth” that “power” would have me remain silent about. To speak it is to risk my career. Now, suppose I simply lack the courage to do it. I’m a “coward”, to be sure, but am I violating my ethics? How much courage can be demanded of my ethical behavior?

We are getting to the core of the issue I want to raise. How much courage should it take to speak the truth in an academic environment? Should it take courage to tell someone they are wrong?

On the one hand, we’d think universities would be a premier site of intellectual courage, much like the military should offer regular occasions for valour.* But let’s think this through. Suppose speaking the truth generally takes a great deal of courage. We will then rely on “heroes” to know what is going on. As students, we must assume that learning how the world works will itself require a great deal of courage, not just intelligence and diligence. Worse, the pressures that require truth-tellers to be courageous would also, of course, make cowards of the rest of us, those of us who are disinclined towards heroic acts of speaking truth to power.

In fact, what our academic institutions ought to do is to insulate inquirers from the social pressures that would require them to be courageous. Perhaps we could say an academic should never have to speak truth to power, but always to knowledge, i.e., to something that won’t hurt them, but might correct them. Don’t we want to know truths even if they are discovered by natural born cowards?

From this point of view, it is unfortunate that academics do, throughout the course of their career, amass real, if somewhat parochial, power. They have the power to exploit (and even harass and abuse) their students, for example, or the power to promote ideologies or products, sometimes for something as base as money. Finally, academics have the power to promote or obstruct their colleagues in their careers.

I want here to focus on the cases in which the abuse of power is also the distortion of truth. Sexual harassment, while certainly wrong, and often worse than intellectual dishonesty, does not directly distort our understanding of a given social phenomenon or exaggerate our confidence in a particular theory. (Because of the concomitant lying, to be sure, it does distort the reality experienced by the harassed persons and their colleagues. But this is not a fully or, if you will, a “merely” academic distortion.)

While it seems petty, and certainly unethical, there is really no question about whether academics have an incentive to punish each other for pointing out each other’s mistakes. An academic who is known for making mistakes will be less successful than one who is known for getting things right. So, if I have the power to prevent someone from pointing out my mistakes, I also, whatever else is true, have an incentive to use it. I may simply bribe the would-be truth-teller with promises of advancement, or I may threaten them with unpleasant consequences. This would be unethical.

In an ethical environment, of course, we would trust that I will not be punished for pointing out a mistake. But this will probably require that no one is ever punished for making them (removing the incentive to punish me for pointing it out). That is, I would be able even to be wrong about your mistakes, more or less without consequences. That’s a truly “utopian” situation.

The dystopian situation, however, is one in which it is very dangerous to speak what Al Gore famously called “an inconvenient truth”. Science would only be done by heroes, and, since these are rare, we would have to resign ourselves to the fact that most scientists are intellectual cowards. In my view, ethics is what ensures that only a reasonable, “ordinary” if you will, amount of courage is needed. We would, for the most part, rely on the decency of our colleagues.** And it would also ensure that science, as a social institution, wouldn’t have much need for cowards; wouldn’t encourage them, if you will.

We will, no doubt, always have to speak the truth, if we speak it at all, to some form of power. And so our knowledge will always depend to some extent on our intellectual courage. We can hope, however, that it does not depend on heroic amounts of courage. That same situation is much more likely to make cowards of most of us.

*Movies that construe soldiers as heroes are, of course, very common. But we sometimes forget how rare they make real courage seem, even among soldiers. In most war movies (and novels), most members of the military, often including high-level officers, are “just following orders”, many of them out of lust for reward or fear of punishment. Much of the conflict pits the hero against these mediocrities.

Indeed, it is possible to raise the question of whether the modern army isn’t actually an attempt to wage war without courage or valour. (This is a common critique of drones, but was already an issue in the British navy, I was once told, when missiles were introduced that allowed one ship to sink another it couldn’t see.) Modernity aside, perhaps this has always been the purpose of a standing army; kings and emperors were finding heroes a bit too rare or too capricious (or perhaps even too honourable!) to realize their foreign policies.

**There’s probably an important relationship between courage and decency. I will explore this in a later post.

The Obligation to Publish

Lately, I’ve been feeling a bit melancholy about my obligations to speak publicly about what I know. This has affected both my contributions to this blog and my work on my longstanding blog about academic writing. It’s not, of course, that I don’t know anything, nor that I don’t have anything I want to say; it’s just a sort of reticence about engaging with others. It will, of course, pass in due time, and it’s probably not something to worry about. But it does raise an interesting ethical question: do we have an ethical obligation to say publicly what’s on our mind?

The Code tells us that we have an obligation

2. To the advancement of managerial knowledge. Prudence in research design, human subject use, and confidentiality and reporting of results is essential. Proper attribution of work is a necessity. We accomplish these aims through:

  •   Conducting and reporting. It is the duty of AOM members conducting research to design, implement, analyze, report, and present their findings rigorously.

I imagine most people read this with an emphasis on “rigorously”, i.e., as a responsibility when we do conduct research and report it to do so rigorously. But I think we do well to keep in mind that if we spent our entire scholarly careers conducting no research at all, or not reporting whatever research we did conduct, we would in fact be shirking an important responsibility.

Reporting our research opens it to criticism by our peers. It allows us to be corrected in our views wherever they happen to be erroneous. One of the most important reasons to publish, that is, is to give our peers an opportunity to tell us where we have gone wrong, so we can stop misleading our students about it, for example. But it is also a way of informing others about results that might call their previously held views into question. If I know that something you think is true is actually false (or vice versa) then I have an obligation to share that knowledge with you. That’s part of what it means to be an academic.

There’s an interesting variation on this theme in the current discussion of the publication of “null results”. If 9 out of 10 studies show no significant effect of a particular managerial practice, but only the 1 out of 10 studies that shows an effect is published, then we are being systematically misled about the efficacy of that practice. And yet, in today’s publishing culture, authors and journals are much less incentivized to publish null results than significant ones.
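
A toy simulation makes the distortion vivid. The sketch below (Python; the sample size, noise level, and cutoff are all illustrative assumptions) posits a practice with zero true effect and then ‘publishes’ only the studies that clear a conventional significance threshold:

    # Toy simulation of publication bias. All parameters are
    # illustrative assumptions; the true effect is set to zero.
    import random
    import statistics

    random.seed(1)
    N, SIGMA, TRUE_EFFECT = 30, 1.0, 0.0

    def run_study():
        """One study: the mean observed effect across N noisy observations."""
        return statistics.mean(random.gauss(TRUE_EFFECT, SIGMA) for _ in range(N))

    studies = [run_study() for _ in range(1000)]
    cutoff = 1.96 * SIGMA / N ** 0.5   # rough two-sided 5% threshold for the mean
    published = [s for s in studies if abs(s) > cutoff]

    print(f"Mean effect, all 1000 studies: {statistics.mean(studies):+.3f}")
    print(f"Studies 'published': {len(published)}")
    print(f"Mean |effect| among published: {statistics.mean(abs(s) for s in published):.3f}")

The full record averages out to nothing, but the ‘published’ record shows a healthy-looking effect: exactly the systematic misdirection described above.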

The Code does say that it is the responsibility of “journal editors and reviewers [towards the larger goal of advancing managerial knowledge] to exercise the privilege of their positions in a confidential, unbiased, prompt, constructive, and sensitive manner.” Perhaps I’m once again grasping at straws, but it is possible to construe “unbiased” as requiring us to publish valid but insignificant findings, i.e., studies that show no effect where one was hypothesized.

This becomes a personal ethical concern for individual scholars when they don’t publish results that call their own favoured theory into question, always, of course, citing the unwillingness of journals to publish null results. But whether it’s the authors or the editors that are to blame, the overall effect is that the truth remains hidden. So, in a sense, it is a species of dishonesty.

For that reason alone, I hope this melancholy of mine soon passes and that I once again start doing the responsible thing, namely, putting my ideas out there for all to see.

Student abuse by faculty

I’ve been doing a lot of traveling lately, which in part accounts for my relative silence over the past few weeks. However, in the course of traveling, I keep coming across a set of similar stories from throughout the world, although principally from developing countries – in particular, China and several countries in Africa.

The stories tend to be similar, in terms of exploitation of doctoral or junior faculty members, and go like this:

“At our university, the senior faculty insist that their names go first on every paper I produce, even those that they have not contributed to, in any way”.


“At our university, doctoral students do all the data collection, analysis, and writing, but our names are never put on the paper – only those of the senior faculty”.

When I hear these stories, as both a scholar and an editor, I am outraged. How is it that faculty members can openly exploit students and junior scholars in such a blatant fashion? Why is it that no professional organization exists to come to their defense? Unfortunately, we are collectively partially responsible, as the publish-or-perish norms and intense competition that we have helped develop only exacerbate this problem. It is, after all, a collective problem.

Of course, the exploitation of students is not entirely new. The recent movie “The Stanford Prison Experiment” shows the extent to which Zimbardo went during his study, and the price those participants must have paid. Zimbardo, then an ambitious, recently tenured scholar, was a consultant on the film, and it reportedly accurately reflects what took place. His subsequent work was devoted to more pristine, socially progressive causes, such as understanding shyness. No surprise there…

Our code of ethics does address these issues, although not as directly as one might think. For example, on the aspiration side, with regard to integrity:

  1. Integrity

AOM members seek to promote accuracy, honesty, and truthfulness in the science, teaching, and practice of their profession. In these activities AOM members do not steal, cheat, or engage in fraud, subterfuge, or intentional misrepresentation of fact. They strive to keep their promises, to avoid unwise or unclear commitments, and to reach for excellence in teaching, scholarship, and practice. They treat students, colleagues, research subjects, and clients with respect, dignity, fairness, and caring. They accurately and fairly represent their areas and degrees of expertise.

Regarding students specifically:

  1. To our students. Relationships with students require respect, fairness, and caring, along with commitment to our subject matter and to teaching excellence. We accomplish these aims by:

Showing respect for students. It is the duty of AOM members who are educators to show appropriate respect for students’ feelings, interests, needs, contributions, intellectual freedom, and rights to privacy.

Maintaining objectivity and fairness. It is the duty of AOM members who are educators to treat students equitably. Impartiality, objectivity, and fairness are required in all dealings with students.

1.6. Exploitative Relationships: AOM members do not exploit persons over whom they have evaluative or other authority, such as authors, job seekers, or student members.

And finally,

4.2.2. Authorship Credit: AOM members ensure that authorship and other publication credits are based on the scientific or professional contributions of the individuals involved. AOM members take responsibility and credit, including authorship credit, only for work they have actually performed or to which they have contributed. AOM members usually list a student as principal author on multiply authored publications that substantially derive from the student’s dissertation or thesis.

Note the word “usually”. I assume that the exceptions referred to are situations where the student would not be listed as principal author but would still be listed as a co-author (although what these situations would be, and why they would be exceptions, is a bit of a mystery to me). However, it appears that, from the perspective of some of our international members, this ‘exception’ may leave open the interpretation that a scholar may occasionally NOT list a student as a co-author, even when they are a principal author. Further, and unfortunately, there is no mention of adding authorship to work to which the scholar did NOT make a contribution (although the Code does seem to imply that such a deed would not be welcome).

So, what can we do collectively to eradicate this practice of exploitative advising? How can we get the message across different cultures and institutions that when an author submits a paper, the assumption by the editor – indeed, the social contract – is that an appropriate amount of work has been conducted by each author, reflecting the order of authorship? Further, perhaps it is time our code of ethics became a bit more specific regarding some of these practices, in order to make it explicitly clear that exploitation of any sort is unwelcome in our profession.


This blog is committed to facilitating a conversation about ethics among the members of the Academy of Management. There are two reasons for this. First, the topic demands it. It is not enough for a professional organization to have a code of ethics, nor even for that code to be rigorously enforced. In order to have a positive effect, ethics must be the subject of an ongoing conversation among the practitioners that work in the relevant communities. There’s no straightforwardly “right and wrong” way of doing a particular thing. We become “better people” by talking about what we do and how we do it, and the consequences of our actions on other people.

Second, it is my firm belief that blogs are best engaged with as conversations, even if only as conversations “overheard”. When I write a blog post, I’m not really pretending to be an “author”. It is certainly not my intention to “lecture”. Your role, as a reader, is not simply to try to understand and then believe what I tell you. Rather, implicitly at the end of the post, there is the question, What do you think? Often (since this is a blog about ethical behavior), What would you do?

So I’ve been thrilled to talk to an anonymous reader in the comments to my post from a couple of weeks ago. Focusing mainly on publication ethics, Anon123 began by saying that he* was “deeply skeptical of any attempts to teach ethics other than by our everyday conduct and, perhaps more importantly, the conduct of the leaders of our field.” I share his worry but am, perhaps, a bit more optimistic. I think that, if the conversation about ethics is being had throughout the many forums of the Academy, our leaders will have both better conditions and better opportunities to set a good example. Perhaps they’ll even find their efforts rewarded in journal and business school rankings. But, for the past 20 years or so, it is true that we have taken ethics somewhat for granted, assuming that people are generally well-intentioned and that errors are generally honest. This has perhaps made us less vigilant than we should be–even, I often emphasize, as regards catching those honest mistakes.

The result, as Anon123 points out, can sometimes be a bit dispiriting:

I have been in the field a fairly long time but I find myself unwilling to believe much of what is published in our journals anymore. The work on the Chrysalis Effect, researcher degrees of freedom, p-hacking and HARKing makes it clear that a substantial proportion of our collective scholarship cannot be trusted, but it is impossible to know precisely what to trust and what not to trust.

These are all issues that concern me too. I’d highly recommend Andrew Gelman’s blog for anyone who is interested in a technical discussion of the many ways in which statistics can be misused, out of either malice or ignorance. (See this post, for example, about how what is sometimes called p-hacking often actually results from perfectly sincere statistical naivety.) Of course, it hardly matters whether people are cheating or just careless (and we do, of course, have an ethical obligation to be careful) if the result is that the published literature becomes an unreliable source of knowledge. And that’s exactly what Anon123 suggests, in very strong terms:

If you told me that 5% or 10% of my favorite cereal brand is infested with worms but that I can only tell that after I have purchased the cereal (or have tried to eat it) I can guarantee you that I would no longer purchase that cereal. Similarly, I feel disinclined to continue to “purchase” many of the papers published in journals like AMJ or JOM – or recommend them to others.

That is, he would not simply buy the cereal with greater caution–testing it for worms, for example, before eating it. Rather, he’d simply stop buying it. This reminds me that I once discovered a shelf-full of hot wings in the local supermarket that were a month over their best-before date. The store clerk I pointed it out to didn’t really seem interested. He didn’t hurry over to check out the problem (even to make sure that my absurd claim was indeed mistaken), but sort of sauntered on with his day. I guess he’d “get to them” when he was ready. Needless to say, I’ve had a hard time buying anything there ever since. Certainly, I confined my purchases that day to a few non-perishables.

Notice that it wasn’t just the extremely out-of-date hot wings that turned me off the store. It was the conversation about it (or lack thereof) that ensued that undermined my trust. Likewise, knowing that 60% of the results of psychological studies can’t be replicated does not mean (though I am sometimes tempted to let it) that we shouldn’t ever take psychology seriously. It is how the psychological sciences deal with this new knowledge that is important. If we get the sense that they are sweeping it under the rug, or simply not really bothered by it, then it will indeed affect how seriously we can take them.

The recent correction of an ASQ paper about CEO narcissism has given me some hope that the system is improving. Here’s how Jerry Davis described the exemplary process to Retraction Watch:

A concerned reader notified me of the issues with a published table in this paper a few weeks ago, and also contacted the authors.  The authors came forward with a correction, which we promptly published.  We did not consider this sufficient for a full  retraction.  The concerned reader reports that he/she is satisfied with the corrigendum.  The journal is always looking for ways to enhance the quality of the review process, and if errors end up in print, we aim to correct them promptly.

To me, the key here is that the “concerned reader … is satisfied with the corrigendum”. It is all about feeling that when you share your concerns they are taken seriously. That’s the sort of leadership that is likely to rebuild the trust we need in the management literature. Hopefully, over time, even Anon123 can be brought around.



* I had to think about this pronoun for a while, and I’m sorry if I got it wrong. It is of course possible to get it wrong even when a name (like Jesse or Shawn) is given. In this case, I’ve gone with my intuition based on the style of the comment, its “voice”, if you will. If my “ear” has misled me, I hope it will cause as little offence as the time I assumed an Italian commenter named Gabriele was a woman.

Ethics and Ethnography

I’ve been having some interesting conversations over at OrgTheory with Victor Tan Chen about the ethical dilemmas that ethnographers face in their research practices. This is closely related to the issues that Benson picked up on in his recent post, noting that our Code of Ethics requires us “to preserve and protect the privacy, dignity, well-being, and freedom of [our] research participants.” In this post, I’d like to bring out two important dimensions, which we might distinguish as a concern with our “scientific” and our “professional” integrity.

As scientists, we are concerned with the truth. So, when we observe something in our fieldwork we feel a duty to report those events as they actually happened. But sometimes we have to modify our description of those events, leave them out, or even outright fictionalize them, in order to protect our research subjects from the consequences of making their actions public. (This is not always, but sometimes, because they are themselves involved in unethical or illegal activities, which raises an additional dilemma.) Once we do this, of course, we have made a compromise, we have sacrificed a little bit of truth for the sake of a, presumably, greater bit of justice.

But at the next level of analysis, we now have to ask ourselves whether we are inadvertently circulating falsehoods. Will our readers begin to tell certain anecdotes to their peers and students as though they are “true stories” even though the actual events are very different? What for us might merely be slight embellishment for the sake of concealing an identity or a location, might for our readers become an illuminating “fact” about how the world works.

Consider an analogy to medical science. Obviously, you don’t want to end up claiming that a pill has effects it doesn’t actually have or doesn’t have effects it actually does. That’s why you don’t leave out information about the population that you have tested it on. If you’ve only tested the pill on healthy men in their thirties, you don’t hide this fact in your write up because it’s important to know that its effects on seventy year-old women with high blood pressure are largely unknown. Similarly, if you’ve done your ethnographic research in rural China, you don’t “anonymize it” by saying it was done in India or the US. The context matters, and it is often very difficult to know how to characterize the context while also making it non-specific enough not to reveal who your actual research subjects were.

The broader professional issue has to do with preserving our collective access to the communities that we want to remain knowledgeable about. If Wall Street bankers always find themselves written about by ethnographers as greedy sociopaths (and assuming they don’t self-identify as greedy sociopaths), or citizens of low-income neighborhoods always find themselves described as criminals, they will slowly develop a (not entirely unfounded) distrust of ethnographers and will, therefore, be less likely to open up their practices to our fieldwork. As Victor points out, these are issues that journalists also face, and which they have a variety of means to deal with. Many of these means can be sorted under “ethics”.

Let me emphasize that these are issues we must face collectively, i.e., as a profession. Losing access to empirical data is not just a risk you face personally in your own work. If your peers don’t enforce disciplinary standards then we’ll all lose credibility when engaging with practitioners. For this reason I also agree with the anonymous commenter on my last post: we must lead by example and, unfortunately, every now and then we must make examples of each other.


It’s been a while since I’ve posted here, and I had better get my act together again. I thought a good way to get going would be to say a few words about the practical work of the Ethics Education Committee in the year to come, very much in the hopes that some of our readers here at the Ethicist will see an angle in it that they might find engaging. In addition to attracting Academy members who might like to work directly with the committee, I’m also looking for ways that the committee might make a contribution to the work of the various divisions.

Let me begin with the blog, which we’re hoping will become a major site of activity in the months to come. This is a place where we can discuss the sorts of ethical issues that are faced by Academy members, both as scholars and as professionals. It is also a place where we can develop the form and content of the materials we contribute to ethics education throughout the Academy. Currently, I’m very focused on the contribution we can make to the doctoral and early-career researcher consortia over the coming years. I will have some news about that soon.

My hope is that the blog can be a place where the Academy’s members can have some influence on what we mean by ethics and how we teach it.  This is the sort of question I tried to raise in my post about the two major approaches to ethics education we tried out in Vancouver.

In Vancouver I was also given the “keys” to the Ethicist’s Twitter account, which I will be trying to promote in the weeks to come. Do please help me help its future followers find it by retweeting the stuff you think is interesting. This, of course, will also give us a better sense of what you do, in fact, find interesting to talk about.

As a general framework for thinking about what the Committee can contribute, I want to propose we think about the ideal presentation, centered on the contents of the Academy of Management’s Code of Ethics, that might be delivered in 5, 15, 30, 45 and 60 minutes. What would be the most important topics and principles to cover? What would be the best way to engage an audience of the Academy’s members (usually doctoral students or early-career researchers)? What’s a sure-fire way to lose them?

To my mind, ethics is a practice by which we form our moral characters. It is both individual and social. It’s the means by which we help each other become better people, and remain good in the face of life’s many pressures. It is a very practical business.

How should we treat each other as scientific subjects?

At the Academy meeting in Vancouver this year, it was brought to my attention that there were PDWs collecting research data on participating members – without clear ethics approval or any apparent ethics protocol. That is, there was no informed consent, yet data appeared to be collected.

This was not the first time I had observed our collective avoidance of Ethics Review Board (ERB) or Institutional Review Board (IRB) protocol when surveying ourselves. As the previous chair of AOM’s Ethics Education Committee, I was tasked with repeating the ethics survey that we had administered to our entire membership some years before. The first thing that I did was to ask for the ethics review board protocol, in order to be sure I was following accepted procedures.

After a few weeks of embarrassing emails and back-and-forth confirmations, it eventually became clear that we had never submitted our own ethics survey to any kind of ethics review board. I was told that when the AOM board met to discuss this issue there was some hesitancy to constrain the activities of divisions surveying their membership – and no clear path to indicate who would serve as an accepted IRB for Academy research. My own decision was to obtain ERB approval and protocols from my own university, and to proceed with the survey in that manner.

Many of us feel IRBs are a burden. However, it is worth noting how many of these regulations came about. For one, the experiments on concentration camp victims horrified the scientific community, leading to the Nuremberg Code. Much later, the experiments by Stanley Milgram attempted to understand how people willingly agreed to do terrible acts to each other. His work, as well as Zimbardo’s famous prison simulation study, has led to tighter constraints on how to approach research, what is acceptable, and when ‘the line is crossed’.

One of my very first sociology professors was Laud Humphreys. He was famous for studying homosexual activities in public toilets, where he acted as the “watchqueen”. Later, he surreptitiously followed participants to their cars, identified their license plates, and showed up at their homes disguised as a health-survey worker. This was done in the 1960s, before IRBs were mandated by the US federal government.

In fact, we have Academy members who come from countries where there is little if any oversight regarding research, particularly social science research. However, I would argue that we have a collective responsibility to observe the highest standards of research protocol, despite the burden, for our entire membership.

Our own code of ethics addresses this issue, although not as stridently as one might expect, as there is no specific mention of IRB procedures:

Participants. It is the duty of AOM members to preserve and protect the privacy, dignity, well-being, and freedom of research participants.

1.7. Informed Consent: When AOM members conduct research, including on behalf of the AOM or its divisions, they obtain the informed consent of the individual or individuals, using language that is reasonably understandable to that person or persons. Written or oral consent, permission, and assent are documented appropriately.

2.4. Anticipation of Possible Uses of Information:

2.4.1. When maintaining or accessing personal identifiers in databases or systems of records, such as division rosters, annual meeting submissions, or manuscript review systems, AOM members delete such identifiers before the information is made publicly available or employ other techniques that mask or control disclosure of individual identities.

2.4.2. When deletion of personal identifiers is not feasible, AOM members take reasonable steps to determine that the appropriate consent of personally identifiable individuals has been obtained before they transfer such data to others or review such data collected by others.

Most North American universities operate under strict IRB procedures. They are virtually unanimous in stating that all surveys involving human subjects should be submitted for ERB review. Here are a few statements from the Canadian “Tri-Council” policy that governs Canadian universities:

If the primary purpose, design, content and/or function of such surveys is to conduct “research” involving humans, then it would generally require REB review, under TCPS Article 1.1(a).

Very similar statements appear on the Cornell University website.

At the end of the day, each of us, no matter where we do our scholarly work, has a responsibility to protect the respondent as much as possible, in every conceivable way. The distance between our own behavior and that of the 16 German doctors convicted of experimenting on human beings without their consent is an essential red line that we cannot allow to become a ‘slippery slope’. Thus, even when we decide to research ourselves, as professors and colleagues, I believe we should commit to the highest standards of ethical scientific inquiry. Even if IRBs are a ‘burden’.




Who Needs Ethics Education?

At this year’s Academy meeting we had some interesting conversations in the Ethics Education Committee about our approach to teaching the Code. The traditional approach is to assume that our audiences need resources to help them to reflect on what is right and wrong in their professional practices. This can involve everything from helping them to clarify their underlying values to helping them decide whether to credit a particular author in a particular circumstance. The presumption is that people want to learn how to become, for lack of a better word, better people. They want to learn what is right. We’re certainly willing and able to provide such support, even if we often approach it by telling them what is wrong, what not to do.

But I had the opportunity to talk to a few consortium organizers in the divisions this year, and I got the sense that not all our audiences feel that this is the right approach. An alternative, and one for which I’ve been arguing lately every chance I get, is to educate people about what to do when they run into ethically questionable behavior in others. Sometimes it is just that: merely “questionable”, and when the questions are answered everything turns out to be fine. But sometimes there is a need to take action, either to protect yourself from harm or to mitigate the harm that may have been done to someone else. Even when you’re blameless, you need ethics to help guide you towards a constructive resolution of the conflict.

That’s why we’ve been working to incorporate a sense of the various processes and procedures within the Academy of Management in our educational initiatives. In a sense, we want to shift the focus from the “bad guys”, who need to be told what not to do, to the “good guys”, who need to be told what can be done when bad things happen. And it’s even more hopeful than that, in fact. Sometimes, a robustly ethical perspective can give us the hope we need to discover that an apparent wrong was actually not as harmful as we thought, perhaps not a wrong at all.

Let me offer a simple example. One topic that came up a few times was the increasing problem of “coercive citation”. This is the practice of requiring someone to cite your favorite paper (perhaps even one you’ve written yourself) before you’ll publish them. Such power can be exerted by both editors and reviewers, though most of the focus these days is on the editors who do it to boost their impact factors. Now, on the traditional approach we’d try to encourage editors not to be coercive in this way. But do we really think that the Ethics Education Committee will reach the hearts and minds of senior scholars who have become editors of important journals? I’m not very hopeful about this at all.

Instead, therefore, we can try to instruct authors in how to interpret and respond to what appears to be an attempt to coerce a citation. The first rule would be to assume good faith. At first pass, a suggested citation is just that: a suggestion to read a particular paper because including it may strengthen your own. The problem arises after you read it and deem it to be either deeply flawed or simply irrelevant to your aims. At this point, a cynical author might decide to cite the paper anyway, on the understanding that it is required for publication. But a less cynical one–one that has been ethically educated, let’s say–might simply thank the editor or the reviewer for the suggestion and explain that the paper is not, in the author’s judgment, appropriate to include. If the suggestion was indeed intended to be coercive, it just ran into an obstacle (and then we can talk about what might happen next), but if it wasn’t, it would have been tragic to let it harm the quality of the original argument and corrupt the author’s integrity.

I think this sort of instruction in what our options are when something appears to be amiss but might not actually be is too often left out of ethics education. Ethics education is not really for bad people who need to become better. It’s for good people who need strategies and support for maintaining their goodness in the face of all sorts of mixed signals and strange incentives. Ethics education is about telling people that there is a community in place to support their attempts to be good, not a surveillance state to thwart their attempts to cheat. In this way, ethics education might even be edifying.