The Meaning of Ethics

We’re long overdue here at the Ethicist to get back into the swing of things! Obviously, we’re now looking forward to the annual meeting in Anaheim, which has an especially fitting theme for the Ethics Education Committee. As this year’s program chair, Mary Ann Glynn, points out, meaning and ethics are closely related issues for organizations:

Recently, there have been highly publicized corporate scandals, Wall Street corruption, and failures of government to meet the needs of its citizens, with a resulting rise in public distrust and questioning of organizations’ reasons for being. We often take as given that an organization’s purpose is to produce economic value; and, although economic value can often add to social value, sometimes it does not. This disjuncture raises the question of meaningfulness.

Avoiding scandal and fostering trust is arguably the very meaning of ethics. And if it is true, as I think it is, that communities are essentially constituted by their ethics (show me your ethics and I’ll tell you a great deal about what kind of community you’ve got), then there’s no doubt that frequent scandals and deepening mistrust would raise fundamental questions about an organization’s “reason for being”. What’s going on? What’s it all about? Why are we here? What does it all mean? These are important issues to think about.

Nor is the academy (and even the Academy) able to keep itself entirely aloof from these questions. In the world of research, there have also been scandals and mistrust, and we need to face these issues squarely in our own research practice and communities. That’s where the Ethicist and the Ethics Education Committee would like to be of help.

We’re currently reaching out to the AOM divisions to offer our help in developing and implementing ethics modules in their PDWs and consortia sessions. It is our view that ethics is best approached as a conversation, not merely a code of conduct. While we have the Code to guide us, the really interesting work happens in a discussion of the details. And that conversation always has to be anchored in the actual practices that constitute the wide variety of research communities within the Academy of Management. Accordingly, our first responsibility as ethics educators is to listen to and learn from members about the situations they face.

This post, then, is an invitation to engage in conversation. Feel free to leave comments below about the sorts of issues you think need to be faced, either by your own discipline or by the management field in general. Also, I am coordinating our consortia contributions this year, so if you are leading a consortium session for your division and would like to include the EEC, either in your presentation or in your planning, please contact me by mail so we can work out the details.

 

Decency

A while back, I got into an interesting discussion with Andrew on the subject of courage, which stemmed from my temporary reticence about speaking my mind in public, or my resentment, if you will, of my “obligation to publish”. (I’m happy to say that I’m much better now.) One thought led to another and I soon found myself warning against a situation in which it might take “heroic amounts” of courage to tell the truth in the social sciences, management studies included. Andrew rightly found that prospect depressing.

But along the way I also noticed the particular virtue that might make all the difference here. It’s an insight worth explicating, if for no other reason than to reveal its flaws. (Let me know if you can spot any, please.) It seems to me that we depend on the decency of others not to place too great a demand on our courage. What is this strange comportment we call decency that it could have this power?

In the comments, Erik suggested that the anonymity of peer review removes the need for a great deal of courage. And in an important sense, this is exactly the sort of thing I mean. It’s not that I think anonymous reviewers are congenital cowards, though I’m sure many disgruntled authors would like me to validate them in this belief. Rather, since it takes no courage to review a paper (in ordinary cases), we have to rely on the reviewer’s decency. Since they are protected from our personal judgment of them, we can only hope that they will not exploit their freedom to cruelly abuse us, or lead us on a wild goose chase for pointless references, or waste our time with needless revision. We count on them not to reject (or accept) us for their own personal gain, and to tell us honestly what they think of our work.

But by the same token, where strong institutions ensure decency, e.g., where editorial oversight protects authors from unhelpful reviews, it also takes less courage to submit a paper for review. We know the editor is not going to let our reviewers abuse us, and we can rest assured that if they do form a very negative opinion of our work, they will not be able to form a correspondingly negative opinion of us.

I’ll never forget the lightbulb that went off in my head many years ago when I was reading Edward Johnson’s Handbook of Good English. He said that an editor’s job is to “protect the author from criticism”, meaning unconstructive complaints about language and grammar from the end reader. An associate editor’s job at a journal, by extension, is to protect the author from unconstructive criticism of their ideas, first from the reviewers, by demanding a certain standard of them, and thereafter from readers, by selecting competent reviewers who are actually able to evaluate the strengths and weaknesses of the paper that has been submitted.

I like to think of decency as the virtue of “immediate rightness”, or appropriateness in the moment. It’s a matter of keeping the surfaces of social interaction tolerably pleasant. Our Code requires us to “respect the dignity and worth of all people” in our activities as management scholars and professionals. We might also say we are bound to be decent. It’s akin to “civility”, but that will have to be a topic for another day.

Courage

“Thus conscience does make cowards of us all.”

Hamlet

In the comments to my last post, Andrew quite literally encouraged me to speak my mind. Truth be told, I’ve always been ambivalent about “intellectual courage”. Sometimes the exercise of our ethical obligations seems to require us to be courageous. But is courage itself an ethical obligation?

Courage is, of course, a virtue and it is presumably what is required of us when we “speak truth to power”. In the paradigm case, some form of social power asks us to lie or to remain silent, and when we defy this power we exercise courage. The consequences can be quite serious because, in so far as the power is real, it is also dangerous. If the powerful person or institution we are defying chooses to punish us for speaking the truth, then it has, by definition, the power to do so.

To understand my ambivalence, consider the ethical obligations that follow from being physically strong. “Ought,” they say, “implies can.” If someone is trapped under a car I have an ethical obligation to lift the car off them, but only, of course, if I have the strength to do so. Is courage a kind of “strength” in that sense?

Courage is a virtue and cowardice is a vice. But some part of our everyday moral psychology also sees them as character traits, i.e., as qualities we are either born with or develop through practice but, in any case, of which we simply have a certain amount at any given time. Suppose I know a “truth” that “power” would have me remain silent about. To speak it is to risk my career. Now, suppose I simply lack the courage to do it. I’m a “coward”, to be sure, but am I violating my ethics? How much courage can be demanded of my ethical behavior?

We are getting to the core of the issue I want to raise. How much courage should it take to speak the truth in an academic environment? Should it take courage to tell someone they are wrong?

On the one hand, we’d think universities would be a premier site of intellectual courage, much like the military should offer regular occasions for valour.* But let’s think this through. Suppose speaking the truth generally takes a great deal of courage. We will then rely on “heroes” to know what is going on. As students, we must assume that learning how the world works will itself require a great deal of courage, not just intelligence and diligence. Worse, the pressures that require truth-tellers to be courageous would also, of course, make cowards of the rest of us, those of us who are disinclined towards heroic acts of speaking truth to power.

In fact, what our academic institutions ought to do is to insulate inquirers from the social pressures that would require them to be courageous. Perhaps we could say an academic should never have to speak truth to power, but always to knowledge, i.e., to something that won’t hurt them, but might correct them. Don’t we want to know truths even if they are discovered by natural born cowards?

From this point of view, it is unfortunate that academics do, throughout the course of their career, amass real, if somewhat parochial, power. They have the power to exploit (and even harass and abuse) their students, for example, or the power to promote ideologies or products, sometimes for something as base as money. Finally, academics have the power to promote or obstruct their colleagues in their careers.

I want here to focus on the cases in which the abuse of power is also the distortion of truth. Sexual harassment, while certainly wrong, and often worse than intellectual dishonesty, does not directly distort our understanding of a given social phenomenon or exaggerate our confidence in a particular theory. (Because of the concomitant lying, to be sure, it does distort the reality experienced by the harassed persons and their colleagues. But this is not a fully or, if you will, a “merely” academic distortion.)

While such punishment seems petty, and is certainly unethical, there is really no question that academics have an incentive to punish each other for pointing out each other’s mistakes. An academic who is known for making mistakes will be less successful than one who is known for getting things right. So, if I have the power to prevent someone from pointing out my mistakes, I also, whatever else is true, have an incentive to use it. I may simply bribe the would-be truth-teller with promises of advancement, or I may threaten them with unpleasant consequences. This would be unethical.

In an ethical environment, of course, we would trust that I will not be punished for pointing out a mistake. But this will probably require that no one is ever punished for making one (removing the incentive to punish me for pointing it out). That is, I would be able even to be wrong about your mistakes, more or less without consequences. That’s a truly “utopian” situation.

The dystopian situation, however, is one in which it is very dangerous to speak what Al Gore famously called “an inconvenient truth”. Science would only be done by heroes, and, since these are rare, we would have to resign ourselves to the fact that most scientists are intellectual cowards. In my view, ethics is what ensures that only a reasonable, “ordinary” if you will, amount of courage is needed. We would, for the most part, rely on the decency of our colleagues.** And it would also ensure that science, as a social institution, wouldn’t have much need for cowards; wouldn’t encourage them, if you will.

We will, no doubt, always have to speak the truth, if we speak it at all, to some form of power. And so our knowledge will always depend to some extent on our intellectual courage. We can hope, however, that it does not depend on heroic amounts of courage. A situation that demands heroism is much more likely to make cowards of most of us.
__________
*Movies that construe soldiers as heroes are, of course, very common. But we sometimes forget how rare they make real courage seem, even among soldiers. In most war movies (and novels), most members of the military, often including high-level officers, are “just following orders”, many of them out of lust for reward or fear of punishment. Much of the conflict pits the hero against these mediocrities.

Indeed, it is possible to raise the question of whether the modern army isn’t actually an attempt to wage war without courage or valour. (This is a common critique of drones, but was already an issue in the British navy, I was once told, when missiles were introduced that allowed one ship to sink another it couldn’t see.) Modernity aside, perhaps this has always been the purpose of a standing army; kings and emperors were finding heroes a bit too rare or too capricious (or perhaps even too honourable!) to realize their foreign policies.

**There’s probably an important relationship between courage and decency. I will explore this in a later post.

The Obligation to Publish

Lately, I’ve been feeling a bit melancholy about my obligations to speak publicly about what I know. This has affected both my contributions to this blog and my work on my longstanding blog about academic writing. It’s not, of course, that I don’t know anything, nor that I don’t have anything I want to say; it’s just a sort of reticence about engaging with others. It will, of course, pass in due time, and it’s probably not something to worry about. But it does raise an interesting ethical question: do we have an ethical obligation to say publicly what’s on our mind?

The Code tells us that we have an obligation

2. To the advancement of managerial knowledge. Prudence in research design, human subject use, and confidentiality and reporting of results is essential. Proper attribution of work is a necessity. We accomplish these aims through:

  •   Conducting and reporting. It is the duty of AOM members conducting research to design, implement, analyze, report, and present their findings rigorously.

I imagine most people read this with an emphasis on “rigorously”, i.e., as a responsibility when we do conduct research and report it to do so rigorously. But I think we do well to keep in mind that if we spent our entire scholarly careers conducting no research at all, or not reporting whatever research we did conduct, we would in fact be shirking an important responsibility.

Reporting our research opens it to criticism by our peers. It allows us to be corrected in our views wherever they happen to be erroneous. One of the most important reasons to publish, that is, is to give our peers an opportunity to tell us where we have gone wrong, so we can stop misleading our students about it, for example. But it is also a way of informing others about results that might call their previously held views into question. If I know that something you think is true is actually false (or vice versa) then I have an obligation to share that knowledge with you. That’s part of what it means to be an academic.

There’s an interesting variation on this theme in the current discussion of the publication of “null results”. If 9 out of 10 studies show no significant effect of a particular managerial practice, but only the 1 out of 10 studies that shows an effect is published, then we are being systematically misled about the efficacy of that practice. And yet, in today’s publishing culture, authors and journals are much less incentivized to publish null results than significant ones.
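To make the arithmetic vivid, here is a minimal simulation of that selection effect. It’s only a sketch (in Python, my choice, with illustrative numbers rather than data from any actual literature): a managerial practice with no real effect is studied many times, and only the “significant” results reach print.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumption: a practice with NO real effect, studied
# repeatedly with modest samples. Only studies reaching p < .05 get
# published; we then inspect the published record.
n_studies, n_per_group, true_effect = 10_000, 30, 0.0

published = []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n_per_group +
                 control.var(ddof=1) / n_per_group)
    if abs(diff / se) > 1.96:  # "significant" -> gets published
        published.append(diff)

print(f"Studies published: {len(published) / n_studies:.1%}")
print(f"Mean |effect| in print: {np.mean(np.abs(published)):.2f} "
      f"(true effect: 0.0)")
```

Roughly one study in twenty clears the bar by chance alone, and every one of them reports a substantial effect; a reader of the published record alone would conclude the practice works.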

The Code does say that it is the responsibility of “journal editors and reviewers [towards the larger goal of advancing managerial knowledge] to exercise the privilege of their positions in a confidential, unbiased, prompt, constructive, and sensitive manner.” Perhaps I’m once again grasping at straws, but it is possible to construe “unbiased” as requiring us to publish valid but insignificant findings, i.e., studies that show no effect where one was hypothesized.

This becomes a personal ethical concern for individual scholars when they don’t publish results that call their own favoured theory into question, always, of course, citing the unwillingness of journals to publish null results. But whether it’s the authors or the editors that are to blame, the overall effect is that the truth remains hidden. So, in a sense, it is a species of dishonesty.

For that reason alone, I hope this melancholy of mine soon passes and that I once again start doing the responsible thing, namely, putting my ideas out there for all to see.

Trust

This blog is committed to facilitating a conversation about ethics among the members of the Academy of Management. There are two reasons for this. First, the topic demands it. It is not enough for a professional organization to have a code of ethics, nor even for that code to be rigorously enforced. In order to have a positive effect, ethics must be the subject of an ongoing conversation among the practitioners that work in the relevant communities. There’s no straightforwardly “right and wrong” way of doing a particular thing. We become “better people” by talking about what we do and how we do it, and the consequences of our actions on other people.

Second, it is my firm belief that blogs are best engaged with as conversations, even if only as conversations “overheard”. When I write a blog post, I’m not really pretending to be an “author”. It is certainly not my intention to “lecture”. Your role, as a reader, is not simply to try to understand and then believe what I tell you. Rather, implicitly at the end of the post, there is the question, What do you think? Often (since this is a blog about ethical behavior), What would you do?

So I’ve been thrilled to talk to an anonymous reader in the comments to my post from a couple of weeks ago. Focusing mainly on publication ethics, Anon123 began by saying that he* was “deeply skeptical of any attempts to teach ethics other than by our everyday conduct and, perhaps more importantly, the conduct of the leaders of our field.” I share his worry but am, perhaps, a bit more optimistic. I think that, if the conversation about ethics is being had throughout the many forums of the Academy, our leaders will have both better conditions and better opportunities to set a good example. Perhaps they’ll even find their efforts rewarded in journal and business school rankings. But, for the past 20 years or so, it is true that we have taken ethics somewhat for granted, assuming that people are generally well-intentioned and that errors are generally honest. This has perhaps made us less vigilant than we should be–even, I often emphasize, as regards catching those honest mistakes.

The result, as Anon123 points out, can sometimes be a bit dispiriting:

I have been in the field a fairly long time but I find myself unwilling to believe much of what is published in our journals anymore. The work on the Chrysalis Effect, researcher degrees of freedom, p-hacking and HARKing makes it clear that a substantial proportion of our collective scholarship cannot be trusted, but it is impossible to know precisely what to trust and what not to trust.

These are all issues that concern me too. I’d highly recommend Andrew Gelman’s blog for anyone who is interested in a technical discussion of the many ways in which statistics can be misused, out of either malice or ignorance. (See this post, for example, about how what is sometimes called p-hacking often actually results from perfectly sincere statistical naivety.) Of course, it hardly matters whether people are cheating or just careless (and we do, of course, have an ethical obligation to be careful) if the result is that the published literature becomes an unreliable source of knowledge. And that’s exactly what Anon123 suggests, in very strong terms:

If you told me that 5% or 10% of my favorite cereal brand is infested with worms but that I can only tell that after I have purchased the cereal (or have tried to eat it) I can guarantee you that I would no longer purchase that cereal. Similarly, I feel disinclined to continue to “purchase” many of the papers published in journals like AMJ or JOM – or recommend them to others.

That is, he would not simply buy the cereal with greater caution–testing it for worms, for example, before eating it. Rather, he’d simply stop buying it. This reminds me that I once discovered a shelf-full of hot wings in the local supermarket that were a month over their best-before date. The store clerk I pointed it out to didn’t really seem interested. He didn’t hurry over to check out the problem (even to make sure that my absurd claim was indeed mistaken), but sort of sauntered on with his day. I guess he’d “get to them” when he was ready. Needless to say, I’ve had a hard time buying anything there ever since. Certainly, I confined my purchases on that day to a few non-perishables.

Notice that it wasn’t just the extremely out-of-date hot wings that turned me off the store. It was the conversation about it (or lack thereof) that ensued that undermined my trust. Likewise, knowing that 60% of the results of psychological studies can’t be replicated does not mean (though I am sometimes tempted to let it) that we shouldn’t ever take psychology seriously. It is how the psychological sciences deal with this new knowledge that is important. If we get the sense that they are sweeping it under the rug, or simply not really bothered by it, then it will indeed affect how seriously we can take them.
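The “researcher degrees of freedom” Anon123 mentions can produce exactly this kind of unreliable literature without any deliberate cheating. Here is a minimal sketch (again in Python, with made-up parameters, assuming numpy and scipy are available): a perfectly sincere researcher measures ten unrelated outcomes and writes up whichever comparison happens to come out significant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative assumption: no outcome is actually affected, but the
# researcher checks ten of them and stops at the first p < .05.
n_experiments, n_outcomes, n_per_group = 5_000, 10, 30

false_positives = 0
for _ in range(n_experiments):
    for _ in range(n_outcomes):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1
            break  # one "finding" is enough to write the paper

print(f"Experiments yielding a publishable 'effect': "
      f"{false_positives / n_experiments:.0%} (nominal rate: 5%)")
```

With ten tries at a nominal 5% level, roughly 40% of such experiments yield a “finding”. That is one mechanism behind the replication failures discussed above, and no one in the story has lied.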

The recent correction of an ASQ paper about CEO narcissism has given me some hope that the system is improving. Here’s how Jerry Davis described the exemplary process to Retraction Watch:

A concerned reader notified me of the issues with a published table in this paper a few weeks ago, and also contacted the authors. The authors came forward with a correction, which we promptly published. We did not consider this sufficient for a full retraction. The concerned reader reports that he/she is satisfied with the corrigendum. The journal is always looking for ways to enhance the quality of the review process, and if errors end up in print, we aim to correct them promptly.

To me, the key here is that the “concerned reader … is satisfied with the corrigendum”. It is all about feeling that when you share your concerns they are taken seriously. That’s the sort of leadership that is likely to rebuild the trust we need in the management literature. Hopefully, over time, even Anon123 can be brought around.

 

_________

* I had to think about this pronoun for a while, and I’m sorry if I got it wrong. It is of course possible to get it wrong even when a name (like Jesse or Shawn) is given. In this case, I’ve gone with my intuition based on the style of the comment, its “voice”, if you will. If my “ear” has misled me I hope it will cause as little offence as the time I assumed an Italian commenter named Gabriele was a woman.

Ethics and Ethnography

I’ve been having some interesting conversations over at OrgTheory with Victor Tan Chen about the ethical dilemmas that ethnographers face in their research practices. This is closely related to the issues that Benson picked up on in his recent post, noting that our Code of Ethics requires us “to preserve and protect the privacy, dignity, well-being, and freedom of [our] research participants.” In this post, I’d like to bring out two important dimensions, which we might distinguish as a concern with our “scientific” and our “professional” integrity.

As scientists, we are concerned with the truth. So, when we observe something in our fieldwork we feel a duty to report those events as they actually happened. But sometimes we have to modify our description of those events, leave them out, or even outright fictionalize them, in order to protect our research subjects from the consequences of making their actions public. (This is not always, but sometimes, because they are themselves involved in unethical or illegal activities, which raises an additional dilemma.) Once we do this, of course, we have made a compromise: we have sacrificed a little bit of truth for the sake of a, presumably, greater bit of justice.

But at the next level of analysis, we now have to ask ourselves whether we are inadvertently circulating falsehoods. Will our readers begin to tell certain anecdotes to their peers and students as though they are “true stories” even though the actual events are very different? What for us might merely be slight embellishment for the sake of concealing an identity or a location, might for our readers become an illuminating “fact” about how the world works.

Consider an analogy to medical science. Obviously, you don’t want to end up claiming that a pill has effects it doesn’t actually have or doesn’t have effects it actually does. That’s why you don’t leave out information about the population that you have tested it on. If you’ve only tested the pill on healthy men in their thirties, you don’t hide this fact in your write up because it’s important to know that its effects on seventy year-old women with high blood pressure are largely unknown. Similarly, if you’ve done your ethnographic research in rural China, you don’t “anonymize it” by saying it was done in India or the US. The context matters, and it is often very difficult to know how to characterize the context while also making it non-specific enough not to reveal who your actual research subjects were.

The broader professional issue has to do with preserving our collective access to the communities that we want to remain knowledgeable about. If Wall Street bankers always find themselves written about by ethnographers as greedy sociopaths (and assuming they don’t self-identify as greedy sociopaths), or citizens of low-income neighborhoods always find themselves described as criminals, they will slowly develop a (not entirely unfounded) distrust of ethnographers and will, therefore, be less likely to open up their practices to our fieldwork. As Victor points out, these are issues that journalists also face, and which they have a variety of means to deal with. Many of these means can be sorted under “ethics”.

Let me emphasize that these are issues we must face collectively, i.e., as a profession. Losing access to empirical data is not just a risk you face personally in your own work. If your peers don’t enforce disciplinary standards then we’ll all lose credibility when engaging with practitioners. For this reason I also agree with the anonymous commenter on my last post: we must lead by example and, unfortunately, every now and then we must make examples of each other.

Practicalities

It’s been a while since I’ve posted here, and I had better get my act together again. I thought a good way to get going would be to say a few words about the practical work of the Ethics Education Committee in the year to come, very much in the hopes that some of our readers here at the Ethicist will see an angle in it that they might find engaging. In addition to attracting Academy members who might like to work directly with the committee, I’m also looking for ways that the committee might make a contribution to the work of the various divisions.

Let me begin with the blog, which we’re hoping will become a major site of activity in the months to come. This is a place where we can discuss the sorts of ethical issues that are faced by Academy members, both as scholars and as professionals. It is also a place where we can develop the form and content of the materials we contribute to ethics education throughout the Academy. Currently, I’m very focused on the contribution we can make to the doctoral and early-career researcher consortia over the coming years. I will have some news about that soon.

My hope is that the blog can be a place where the Academy’s members can have some influence on what we mean by ethics and how we teach it.  This is the sort of question I tried to raise in my post about the two major approaches to ethics education we tried out in Vancouver.

In Vancouver I was also given the “keys” to the Ethicist’s Twitter account, which I will be trying to promote in the weeks to come. Do please help me help its future followers find it by retweeting the stuff you think is interesting. This, of course, will also give us a better sense of what you do, in fact, find interesting to talk about.

As a general framework for thinking about what the Committee can contribute, I want to propose we think about the ideal presentation, centered on the contents of the Academy of Management’s Code of Ethics, that might be delivered in 5, 15, 30, 45 and 60 minutes. What would be the most important topics and principles to cover? What would be the best way to engage an audience of the Academy’s members (usually doctoral students or early-career researchers)? What’s a sure-fire way to lose them?

To my mind, ethics is a practice by which we form our moral characters. It is both individual and social. It’s the means by which we help each other become better people, and remain good in the face of life’s many pressures. It is a very practical business.

Who Needs Ethics Education?

At this year’s Academy meeting we had some interesting conversations in the Ethics Education Committee about our approach to teaching the Code. The traditional approach is to assume that our audiences need resources to help them to reflect on what is right and wrong in their professional practices. This can involve everything from helping them to clarify their underlying values to helping them decide whether to credit a particular author in a particular circumstance. The presumption is that people want to learn how to become, for lack of a better word, better people. They want to learn what is right. We’re certainly willing and able to provide such support, even if we often approach it by telling them what is wrong, what not to do.

But I had the opportunity to talk to a few consortium organizers in the divisions this year and I got the sense that not all our audiences feel that this is the right approach. An alternative, and one for which I’ve been arguing lately every chance I get, is to educate people about what to do when they run into ethically questionable behavior in others. Sometimes it is just that: merely “questionable”, and when the questions are answered everything turns out to be fine. But sometimes there is a need to take action, either to protect yourself from harm or to mitigate the harm that may have been done to someone else. Even when you’re blameless, you need ethics to help guide you towards a constructive resolution of the conflict.

That’s why we’ve been working to incorporate a sense of the various processes and procedures within the Academy of Management in our educational initiatives. In a sense, we want to shift the focus from the “bad guys”, who need to be told what not to do, to the “good guys”, who need to be told what can be done when bad things happen. And it’s even more hopeful than that, in fact. Sometimes, a robustly ethical perspective can give us the hope we need to discover that an apparent wrong was actually not as harmful as we thought, perhaps not a wrong at all.

Let me offer a simple example. One topic that came up a few times was the increasing problem of “coercive citation”. This is the practice of requiring someone to cite your favorite paper (perhaps even one you’ve written yourself) before you’ll publish them. Such power can be exerted by both editors and reviewers, though most of the focus these days is on the editors who do it to boost their impact factors. Now, on the traditional approach we’d try to encourage editors not to be coercive in this way. But do we really think that the Ethics Education Committee will reach the hearts and minds of senior scholars who have become editors of important journals? I’m not very hopeful about this at all.

Instead, therefore, we can try to instruct authors in how to interpret and respond to what appears to be an attempt to coerce a citation. The first rule would be to assume good faith. At first pass, a suggested citation is just that: a suggestion to read a particular paper because including it may strengthen your own. The problem arises after you read it and deem it to be either deeply flawed or simply irrelevant to your aims. At this point, a cynical author might decide to cite the paper anyway, on the understanding that it is required for publication. But a less cynical one–one that has been ethically educated, let’s say–might simply thank the editor or the reviewer for the suggestion and explain that the paper is not, in the author’s judgment, appropriate to include. If the suggestion was indeed intended to be coercive, it just ran into an obstacle (and then we can talk about what might happen next), but if it wasn’t, it would have been tragic to let it harm the quality of the original argument and corrupt the author’s integrity.

I think this sort of instruction in what our options are when something appears to be amiss but might not actually be is too often left out of ethics education. Ethics education is not really for bad people who need to become better. It’s for good people who need strategies and support for maintaining their goodness in the face of all sorts of mixed signals and strange incentives. Ethics education is about telling people that there is a community in place to support their attempts to be good, not a surveillance state to thwart their attempts to cheat. In this way, ethics education might even be edifying.

Openness

There’s an interesting conversation in progress on Andrew Gelman’s blog. He has long argued for the value of openly sharing your data with other researchers, and in the post in question he is promoting Jeff Rouder’s call for “born-open data”, i.e., for data that is collected in a manner that allows it to be made public immediately, even before it has been analyzed by the researcher.
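The mechanics can be as simple as having the data-collection script publish each observation the moment it is recorded. Here is a hypothetical sketch of the idea in Python (it assumes a git repository with a configured public remote; the file name and row format are my inventions for illustration, not Rouder’s actual setup):

```python
import subprocess
from datetime import datetime, timezone
from pathlib import Path

DATA_FILE = Path("data/sessions.csv")  # illustrative path, not Rouder's

def record_observation(row: str) -> None:
    """Append one observation and push it to the public repo at once,
    before anyone has had a chance to analyze (or massage) it."""
    DATA_FILE.parent.mkdir(parents=True, exist_ok=True)
    with DATA_FILE.open("a") as f:
        f.write(row + "\n")
    stamp = datetime.now(timezone.utc).isoformat()
    subprocess.run(["git", "add", str(DATA_FILE)], check=True)
    subprocess.run(["git", "commit", "-m", f"data collected {stamp}"],
                   check=True)
    subprocess.run(["git", "push"], check=True)  # public before analyzed

record_observation("participant_042,condition_B,response_1.73")
```

The point of the design is that the commit history, rather than the researcher’s good intentions, guarantees that the public record precedes the analysis.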

Andrew goes on to cite an example of data that was not open, and was indeed deliberately not made available to him when he requested it. The reason he was given was that he had criticized the researchers in question in an article that was published in Slate, i.e., an online magazine, without first contacting them. As Jessica Tracy, one of the researchers, explains in a comment, they felt they could not “trust that [Andrew would] use the data in a good faith way,” since he did not ensure that they had a chance to review and respond to his criticisms before he published them. It was because of Andrew’s “willingness (even eagerness) to publicly critique [Beal and Tracy’s] work without taking care to give [them] an opportunity to respond or correct errors,” that Tracy “made the decision that [Andrew is] not a part of the open-science community of scholars with whom I would (and have) readily shared our data” (my emphasis).

This way of putting it is, in my opinion, essentially “ethical”, i.e., a matter of how one constructs one’s community of peers, the “company one keeps”. Our values, and the ethical behaviors they inform, shape our identity as researchers, i.e., both who we are and who, as in this case, we therefore associate with and are “open” to criticism from. Though she does not put it quite as strongly as I’m about to, and it is certainly a mild form of what I’m talking about, Tracy is actually saying that Andrew has behaved unethically, i.e., he has violated her sense of appropriate conduct, or, to put it in a way that resonates more closely with the letter of her remarks, he has violated what she perceives as her community’s standards. In the comments, Keith O’Rourke correctly points out why this sort of violation, whether real or merely perceived, is a problem in science:

It seems that if one’s initial contact with someone is to call their abilities into question somehow – it is very difficult for them to believe any further interactions are meant to cause anything but further harm. Worse this makes it difficult for them to actually attend to the substance of the original criticisms.

Both Andrew’s and Jessica Tracy’s arguments can be read as “ethical” ones in the sense that they are about the values that maintain communities of practice. Tracy is saying that she wants to work in a community that only requires her to share her data with people she trusts. Andrew is saying we should either trust everyone (in one sense) or not demand that we should trust (in another sense) anyone. Both are articulating constitutive values, values that shape who we are when we call ourselves scholars. They construct a research identity, a scholarly persona, a scientific ethos.

For my part, I’m generally in agreement with Andrew. I think when we spot an error in the published work of others we should make our critique public before we contact the researchers in question. (I usually then send them a friendly email letting them know of my criticism.) The reason is that I value the correction of the error above the maintenance of my relationship to the relevant community. (I’m not without sympathy for people who prioritize differently.) Also, no matter how the initial contact is framed, it will always open the possibility of keeping the criticism quiet, and this can lead to all sorts of uncomfortable situations and misunderstandings. (I’ve previously discussed this issue in the case of plagiarism charges.)

In any case, here’s what the AOM Code of Ethics says about sharing data and results, and about reacting to the discovery of error (presumably this includes the discovery of error by others, i.e., errors revealed by public criticism):

4.1.4. In keeping with the spirit of full disclosure of methods and analyses, once findings are publicly disseminated, AOM members permit their open assessment and verification by other responsible researchers, with appropriate safeguards, where applicable, to protect the anonymity of research participants.

4.1.5. If AOM members discover significant errors in their publication or presentation of data, they take appropriate steps to correct such errors in the form of a correction, retraction, published erratum, or other public statement.

Notice that we’re in principle committed to open data at AOM, but we also acknowledge something like Tracy’s “good faith” requirement, and her discernment about whether the people we show our data to are really members of our “community of scholars”, in that we specify that we “permit the open assessment and verification” of our data “by other responsible researchers.” If we decide, as Tracy did in Gelman’s case, that the researcher in question is not going to be “responsible”, then we are not obligated to share.

One argument, in my opinion, for the “born-open” approach is that it obviates the need for this kind of judgment call. Everyone (in my utopia), no matter how good their faith appears to be, is in principle a member of what Tracy calls “the open-science community of scholars”. I don’t think you should be able to choose your critics in science, though you are free to ignore the ones you don’t think are serious. The question, of course, will then be whether the peers you do want to talk to share your judgment of the critic you are ignoring.

Freedom

Without freedom, there’d be no need for an ethics. If every act of disobedience were punished by death, any deliberation about “the right thing to do” would be foolish, except as a reflection upon the meaning of the orders one had been given. But notice that even exile would be “punishment” enough to ensure that a culture had no ethical dimension. If everyone who disobeyed were exiled, their ethical deliberations would still have a place, but it would be outside the culture that banished them.

This leads to a final variation on this dystopian scenario. Suppose obedience were made a requirement for entrance into a culture. Here you would have a powerful selection pressure that would favor those who have a tendency to conflate the question “What should I do?” with “What have I been told to do?” If getting into the culture demands that one be good at answering the second question, not the first, we can expect ethical deliberation to be a somewhat rare occurrence among those who do get in.

Building a strong ethical culture, therefore, means giving ample room for the free exercise of judgment. An educational system in which all instances of plagiarism are caught and punished with expulsion is no place to learn about the ethics of crediting your sources. If it is not really possible to do wrong, then it is also not possible to do right. That’s what freedom is really all about. (Of course, underlying all these hypothetical cases is the insight that they are utterly unrealistic. It is impossible to punish all and only acts of disobedience. Even deciding whether someone has broken a rule is an act of interpretation.)

The section of the Code that deals most explicitly with this is the third part of our Professional Principles. Here we read that “The AOM must have continuous infusions of members and new points of view to remain viable and relevant as a professional association” and that “It is the duty of AOM members to interact with others in our community in a manner that recognizes individual dignity and merit.” That is, we must have a culture that does not, first, require the loyalty or obedience of its members, but actively seeks their “point of view” and recognizes their “individual dignity.” In short, as a professional association, we see ourselves as a community of free people.

One of the most important freedoms in intellectual contexts, to my mind at least, is the right to be wrong. In a free society we are free to make mistakes. That freedom, of course, comes with the obligation to acknowledge and correct our mistakes when they are pointed out to us. It follows that an intellectual community should select members not on the basis of their loyalty or obedience, i.e., their willingness to give up their freedom, but on the number of errors they have made and corrected, i.e., their insistence on exercising their intellectual freedom.