How should we treat each other as scientific subjects?

At the Academy meeting in Vancouver this year, it was brought to my attention that some PDWs were collecting research data on participating members – without clear ethics approval or any apparent ethics protocol. That is, no informed consent was obtained, yet data appeared to be collected.

This was not the first time I had observed our collective avoidance of Ethics Review Board (ERB) or Institutional Review Board (IRB) protocol when surveying ourselves. As the previous chair of AOM’s ethics education committee, I was tasked with repeating the ethics survey that we had administered to our entire membership some years before. The first thing I did was ask for the ethics review board protocol, in order to be sure I was following accepted procedures.

After a few weeks of embarrassing emails and back-and-forth confirmations, it eventually became clear that we had never submitted our own ethics survey to any kind of ethics review board. I was told that when the AOM board met to discuss this issue, there was some hesitancy to constrain the activities of divisions surveying their membership – and no clear path to indicate who would serve as an accepted IRB for Academy research. My own decision was to obtain ERB approval and protocols from my own university, and to proceed with the survey in that manner.

Many of us feel IRBs are a burden. However, it is worth noting how these regulations came about. The experiments performed on concentration camp victims horrified the scientific community, leading to the Nuremberg Code. Much later, the experiments by Stanley Milgram attempted to understand how people willingly agreed to do terrible things to each other. His work, as well as the famous Zimbardo prison simulation study, led to tighter constraints on how to approach research, what is acceptable, and when “the line is crossed”.

One of my very first sociology professors was Laud Humphreys. He was famous for studying homosexual activities in public toilets, where he acted as the “watchqueen”. Later, he surreptitiously followed participants to their cars, recorded their license plates, and showed up at their homes disguised as a health worker conducting a survey. This was done in the 1960s, before IRBs were mandated by the US federal government.

In fact, we have Academy members who come from countries where there is little if any oversight of research, particularly social science research. However, I would argue that we have a collective responsibility to observe the highest standards of research protocol, despite the burden, for our entire membership.

Our own code of ethics addresses this issue, although not as stridently as one might expect, as there is no specific mention of IRB procedures:

Participants. It is the duty of AOM members to preserve and protect the privacy, dignity, well-being,and freedom of research participants.

1.7. Informed Consent: When AOM members conduct research, including on behalf of the AOM or its divisions, they obtain the informed consent of the individual or individuals, using language that is reasonably understandable to that person or persons. Written or oral consent, permission, and assent are documented appropriately.

2.4. Anticipation of Possible Uses of Information:

2.4.1. When maintaining or accessing personal identifiers in databases or systems of records, such as division rosters, annual meeting submissions, or manuscript review systems, AOM members delete such identifiers before the information is made publicly available or employ other techniques that mask or control disclosure of individual identities.

2.4.2. When deletion of personal identifiers is not feasible, AOM members take reasonable steps to determine that the appropriate consent of personally identifiable individuals has been obtained before they transfer such data to others or review such data collected by others

Most North American universities operate under strict IRB procedures. They are virtually unanimous in stating that all surveys involving human subjects should be reviewed by ERB committees. Here is a statement from the Canadian Tri-Council policy that governs Canadian universities:

If the primary purpose, design, content and/or function of such surveys is to conduct “research” involving humans, then it would generally require REB review, under TCPS Article 1.1(a).

Very similar statements appear on the Cornell University website.

At the end of the day, each of us, no matter where we do our scholarly work, has a responsibility to protect respondents as much as possible, in every conceivable way. The distance between our own behavior and that of the German doctors convicted at Nuremberg of experimenting on human beings without their consent is an essential red line that we cannot allow to become a ‘slippery slope’. Thus, even when we decide to research ourselves, as professors and colleagues, I believe we should commit to the highest standards of ethical scientific inquiry. Even if IRBs are a ‘burden’.


Who Needs Ethics Education?

At this year’s Academy meeting we had some interesting conversations in the Ethics Education Committee about our approach to teaching the Code. The traditional approach is to assume that our audiences need resources to help them reflect on what is right and wrong in their professional practices. This can involve everything from helping them clarify their underlying values to helping them decide whether to credit a particular author in a particular circumstance. The presumption is that people want to learn how to become, for lack of a better word, better people. They want to learn what is right. We’re certainly willing and able to provide such support, even if we often approach it by telling them what is wrong, what not to do.

But I had the opportunity to talk with a few consortium organizers in the divisions this year, and I got the sense that not all our audiences feel this is the right approach. An alternative, and one for which I’ve been arguing lately every chance I get, is to educate people about what to do when they run into ethically questionable behavior in others. Sometimes it is just that: merely “questionable”, and when the questions are answered everything turns out to be fine. But sometimes there is a need to take action, either to protect yourself from harm or to mitigate the harm that may have been done to someone else. Even when you’re blameless, you need ethics to help guide you toward a constructive resolution of the conflict.

That’s why we’ve been working to incorporate a sense of the various processes and procedures within the Academy of Management in our educational initiatives. In a sense, we want to shift the focus from the “bad guys”, who need to be told what not to do, to the “good guys”, who need to be told what can be done when bad things happen. And it’s even more hopeful than that, in fact. Sometimes, a robustly ethical perspective can give us the hope we need to discover that an apparent wrong was actually not as harmful as we thought, perhaps not a wrong at all.

Let me offer a simple example. One topic that came up a few times was the increasing problem of “coercive citation”. This is the practice of requiring someone to cite your favorite paper (perhaps even one you’ve written yourself) before you’ll publish them. Such power can be exerted by both editors and reviewers, though most of the focus these days is on the editors who do it to boost their impact factors. Now, on the traditional approach we’d try to encourage editors not to be coercive in this way. But do we really think that the Ethics Education Committee will reach the hearts and minds of senior scholars who have become editors of important journals? I’m not very hopeful about this at all.

Instead, therefore, we can try to instruct authors in how to interpret and respond to what appears to be an attempt to coerce a citation. The first rule would be to assume good faith. At first pass, a suggested citation is just that: a suggestion to read a particular paper because including it may strengthen your own. The problem arises after you read it and deem it to be either deeply flawed or simply irrelevant to your aims. At this point, a cynical author might decide to cite the paper anyway, on the understanding that it is required for publication. But a less cynical one–one that has been ethically educated, let’s say–might simply thank the editor or the reviewer for the suggestion and explain that the paper is not, in the author’s judgment, appropriate to include. If the suggestion was indeed intended to be coercive, it just ran into an obstacle (and then we can talk about what might happen next), but if it wasn’t, it would have been tragic to let it harm the quality of the original argument and corrupt the author’s integrity.

I think this sort of instruction in what our options are when something appears to be amiss, but might not actually be, is too often left out of ethics education. Ethics education is not really for bad people who need to become better. It’s for good people who need strategies and support for maintaining their goodness in the face of all sorts of mixed signals and strange incentives. Ethics education is about telling people that there is a community in place to support their attempts to be good, not a surveillance state to thwart their attempts to cheat. In this way, ethics education might even be edifying.

Should an anonymous peer review always remain anonymous?

From all accounts, the Academy meeting in Vancouver was a huge success. We had record-breaking attendance, beautiful weather, and a wealth of interesting and provocative sessions. I also experienced a few interesting ethics-related discussions, and I thought it would be worthwhile to share a few of them in these next few blogs. The first ‘discussion’ had to do with peer review disclosure.

I was having a conversation with a very well-known scholar when another colleague approached us, recognized this individual, and proceeded to tell him how much he had liked the paper that was published in journal XYZ, as he had been one of the blind reviewers for that article. Suddenly realizing that he was standing next to the ethicist blogger, he looked at me and asked, “Oh, maybe I shouldn’t have disclosed that – what do you think, Benson?” As the well-known scholar rolled his eyes, I proceeded to explain that, in my opinion, a blind review is designed to be anonymous not only before publication, but afterwards as well. The reviewer wanted to know why that was the case, and I shared my own perspective: identifying oneself as a reviewer of a published work can only create some sort of obligation – a social exchange that might be bartered later into some sort of expected favor. After all, if we didn’t expect continued anonymity, why wouldn’t journals simply state, upon publication, the names of the blind reviewers? Surely they would deserve some of the credit for the publication? My view is that the reviewing process should be maintained as an anonymous volunteer activity, dissociated from any sense of possible obligation or appreciation, beyond what the editor and author (anonymously) provide. Forever.

Later on, I discussed this small incident with another senior scholar and an editor. Surprisingly, the editor couldn’t see a reason why a reviewer should not disclose the review after publication. His point was that once the publication was accepted, disclosure would only show support and respect for the work undertaken. The other senior scholar in the conversation pointed out the ‘slippery slope’ problem – that opening this door would suggest other possible avenues of potential influence. For example, if you were the reviewer who wrote the most problematic reviews, would you want this disclosed? If a paper had two favorable reviewers and one who was a real ‘pain’, would it be fair to help reveal who that third person might be through a process of elimination? Further, if you let it be known that you were the ‘good cop’ in the reviewing process, would you develop a reputation that attracted certain benefits or advantages that quieter reviewers would not enjoy?

In searching for an official answer to these questions, I began with our own code of ethics. While it addresses the issue of confidentiality, and places a clear emphasis on maintaining it, there is insufficient detail to indicate precisely what our normative behavior should be. Specifically:

Maintaining Confidentiality:

2.1.1. AOM members take reasonable precautions to protect the confidentiality rights of others.

2.1.2. Confidential information is treated as such even if it lacks legal protection or privilege.

2.1.3. AOM members maintain the integrity of confidential deliberations, activities, or roles, including, where applicable, those of committees, review panels, or advisory groups (e.g., the AOM Placement Committee, the AOM Ethics Adjudication Committee, etc.).

4.2.5.1. In reviewing material submitted for publication or other evaluation purposes, AOM members respect the confidentiality of the process and the proprietary rights of those who submitted the material.

Given our own code is not particularly explicit, I took a look at the peer review policy of Nature, one of the preeminent scientific journals of our time:

As a condition of agreeing to assess the manuscript, all reviewers undertake to keep submitted manuscripts, associated data, and their own peer review comments confidential, and not to redistribute them without permission from the journal. If a reviewer seeks advice from colleagues while assessing a manuscript, he or she ensures that confidentiality is maintained and that the names of any such colleagues are provided to the journal with the final report. By this and by other means, Nature journals endeavour to keep the content of all submissions confidential until the publication date other than in the specific case of its embargoed press release available to registered journalists. Peer review comments should remain confidential after publication unless the referee obtains permission from the corresponding author of the reviewed manuscript and the Nature journal the comments were delivered to. Although we go to every effort to ensure reviewers honour their promise to ensure confidentiality, we are not responsible for the conduct of reviewers.

Following that, I also took a look at the policy of Science, another outstanding scientific journal:

Confidentiality: We expect reviewers to protect the confidentiality of the manuscript and ensure that it is not disseminated or exploited. Please destroy your copy of the manuscript when you are done. Only discuss the paper with a colleague with permission from the editor. We do not disclose the identity of our reviewers.

Thus, while some journals seem to indicate continued confidentiality is expected, it appears that there may be differences of opinion, interpretation, and possibly even confusion regarding what is expected of a blind reviewer, and what would be considered professional or unprofessional conduct.

It would be great if some of our members weighed in with their own opinions on this matter: do you think it appropriate for a reviewer to disclose that they were part of the double-blind process after publication?


When scholars are sued for their research

I was told an interesting story recently, and while I am not fully certain about the details or validity, I thought it would make an interesting subject to discuss. The story went like this:

Two faculty members were involved, separately, in research on a similar subject – in this case, something related to genetic research. One faculty member believes the other has followed a completely erroneous methodology, essentially invalidating that person’s work. They sit and discuss this over lunch, and the person accused of making the error gets furious and storms out. A week or two later, the ‘accuser’ receives a letter from the other faculty member’s lawyer. It is a cease-and-desist letter, indicating that any further mention of his opinion regarding the asserted methodological errors or limitations, in any public capacity whatsoever, will be met with a defamation lawsuit.

Lawsuits by the private sector against academics are not particularly new. Firms have a vested interest in protecting their businesses and their reputations against defamation. So, when one professor asserted that a private firm was a consistent violator of labor laws, the firm engaged a lawyer in an attempt to set the record straight.

In this case, the lawyer for the firm stated: “It’s not our desire to destroy anyone or harm anyone. If we could arrive at some accommodation to set this situation right and to set the record straight, that’s all we’re looking for. We just want the truth.”

Of course, what is ‘true’ can be quite subjective, as any reviewer or scholar will freely admit. While many of us maintain we are in the business of seeking truth, or something closely akin, one person’s truth is another person’s nonsense. Fortunately, academics, as it turns out, are typically insulated from such lawsuits in most countries due to protections of academic freedom and free speech. Imagine if every paper we wrote was subject to a lawsuit by anyone, colleague or not, who didn’t agree with our methodology, conclusions or theoretical framework.

So, what does AOM’s code of ethics have to say on this subject? Well, broadly, we begin with integrity:

AOM members seek to promote accuracy, honesty, and truthfulness in the science, teaching, and practice of their profession. In these activities AOM members do not steal, cheat, or engage in fraud, subterfuge, or intentional misrepresentation of fact. They strive to keep their promises, to avoid unwise or unclear commitments, and to reach for excellence in teaching, scholarship, and practice. They treat students, colleagues, research subjects, and clients with respect, dignity, fairness, and caring. They accurately and fairly represent their areas and degrees of expertise.

Further, we have clear suggestion in reference to our professional environment:

  1. To the Academy of Management and the larger professional environment. Support of the AOM’s mission and objectives, service to the AOM and its institutions, and recognition of the dignity and personal worth of colleagues are required. We accomplish these aims through:
  •  Sharing and dissemination of information. To encourage meaningful exchange, AOM members should foster a climate of free interchange and constructive criticism within the AOM and should be willing to share research findings and insights fully with other members.
  •  Membership in the professional community. It is the duty of AOM members to interact with others in our community in a manner that recognizes individual dignity and merit.

4.1.3.  AOM members take particular care to present relevant qualifications to their research or to the findings and interpretations of their research. AOM members also disclose underlying assumptions, theories, methods, measures, and research designs that are relevant to the findings and interpretations of their work.

  • 1.4.  In keeping with the spirit of full disclosure of methods and analyses, once findings are publicly disseminated, AOM members permit their open assessment and verification by other responsible researchers, with appropriate safeguards, where applicable, to protect the anonymity of research participants.
  • 1.5.  If AOM members discover significant errors in their publication or presentation of data, they take appropriate steps to correct such errors in the form of a correction, retraction, published erratum, or other public statement.

Thus, in the case of our two faculty members, it appears that resorting to legal means in order to protect one’s reputation – accurately or not – is a deviation from our professional norms. Rather than hiring a lawyer, the offended scholar would be better advised to take the argument to the journals. Further, if the story is true, it demonstrates a scholarly insecurity and fear of critique that almost defies my imagination. If our scholarly opinions can be silenced by a disagreeing scholar’s ability to employ an expensive lawyer, the entire foundation of our scholarship is undermined.

John Doe endorsed me as an expert in watching paint dry!

It seems as though few days pass between individuals ‘recommending’ me on ResearchGate or LinkedIn for all types of skills. It started innocuously enough, when I received a recommendation in my main field of study from a colleague who knew me fairly well. What began as a friendly ‘tip of the hat’ has blossomed into a full-scale encyclopedia of ratings (all, naturally, positive – as nobody seems to dis-recommend me for anything) of just about everything related to what I might or might not do as a scholar, faculty member, or even dog walker. There are now dozens of references, some by people I barely know, might not know at all, or have never met. So, what’s going on?

Presumably, there is a tit-for-tat process going on, whereby I am expected, in turn, to recommend a colleague I don’t even know or have hardly spoken with as an ‘expert’ in the field of xyz. In fact, these two rating organizations ‘helpfully’ remind me every so often to do so, by providing lists of people I may or may not know, along with questions such as “Do you recommend Rumpelstiltskin as an expert public speaker?” (never mind that I don’t know who they are, and have never heard them speak publicly or privately). The assumption is that since they were so nice as to recommend my paint-drying observational skills, I will return the favor by also highly recommending them (I often do). It reminds me a bit of when I collected baseball cards as a kid – more was always better. Unfortunately, Mom gave away the box and I no longer have my Mickey Mantle rookie card – although I’m not sure who will take my expired LinkedIn recommendations – maybe I’ll just have to assume a new identity.

So, what’s going on, anyway, with all these internet evaluation schemes? In an op-ed in the New York Times last year, David Brooks pointed out that while private-sector peer-to-peer rating systems have obvious advantages, there is as yet no role for government in regulating peer-based reference systems. They exist in a ‘grey zone’ between bake sales and personal security. Yet, as these systems increasingly gain currency in our professional lives (e.g., Rate My Professors), it seems our Academy might want to play a more active role. Certainly, our professional associations should be drawing certain red lines regarding appropriate behavior of our membership, including the consulting roles that seem to be reflected in these reference systems. The recent shocking revelation that the American Psychological Association provided supporting recommendations to the US government regarding consulting on torture suggests that professional organizations will increasingly be called upon to provide ethical guidance and boundaries, both public and private, that sustain public trust in the integrity of our professional activities.

I was able to find two related passages in our code of ethics that obliquely address the importance of providing accurate assessments:

Credentials and capabilities. It is the duty of consultants to represent their credentials and capabilities in an accurate and objective manner.

And later: AOM members do not make public statements, of any kind, that are false, deceptive, misleading, or fraudulent, either because of what they state, convey, or suggest or because of what they omit, including, but not limited to, false or deceptive statements concerning other AOM members.

So, the next time you are asked to evaluate a colleague – someone you might barely know – and have the urge to ‘return the favor’, give some consideration to whom you are recommending, and what you are recommending them for. If we are ever to enhance peer-to-peer reference systems so that they actually have an impact, it will be critical to follow carefully the recommendations outlined in our own code of ethics.

Working and trusting your co-authors

In the past few weeks, I’ve received a couple of examples of co-author oddities that I will shortly discuss. Because many of us work simultaneously in many virtual teams, we may have a less than comfortable knowledge of the ethical red lines of our co-authors. I have seen collaboration with unfamiliar co-authors seriously compromise the scholarly integrity of seemingly innocent contributors. In a few cases, reputations and careers have been seriously undermined.

I have always found it strange that we scholars tend to assume high integrity in all members of our profession, as though ethical norms are universally agreed upon and followed. In fact, cultures vary greatly, as do individual interpretations of what is ‘right’ and what is ‘wrong’. We do ourselves a service when we carefully investigate the norms and practices of potential collaborators. Sometimes asking a former collaborator can provide insight. Otherwise, frank discussions regarding what they deem acceptable, or not, can be quite illuminating. What is most important is that the conversation take place before collaborative work is undertaken.

In one case discussed with me, a scholar was informed that a paper had been published with their name on it only upon publication of the manuscript. The journal was not a very prestigious one, but the scholar had no idea the paper was being submitted, and it was published with another co-author with whom he was unfamiliar.

In the second case, a scholar had been working on a manuscript over a number of years, through a few rejections and elaborate revisions. Only following a specific inquiry was she informed that her colleague had already published a ‘stripped down’ version of the paper in a lower-tier journal, as sole author. This scholar was concerned that the prior publication invalidated the subsequent, more developed one (which had never cited the earlier work) and was unsure what the correct ethical protocol was.

Both of these cases illustrate problems surrounding intellectual property, integrity of authorship, and the importance of trusting working relationships. Most journals accept submissions whereby a single author signs the copyright on behalf of the remaining authors. However, consultation with the entire authorship team is not only expected, it is also specified in AOM’s code of ethics, as follows:

4.2.3.1. In cases of multiple authorship, AOM members confer with all other authors prior to submitting work for publication, and they establish mutually acceptable agreements regarding submission.

The second case, where related work is left uncited, is perhaps somewhat more common. Many journals now explicitly require submitting authors to verify that the work they are submitting is not based on previously published data or research, and if it is, to state precisely where, and how the work differs. Because of increasing pressures to publish, scholars are increasingly enticed to ‘salami slice’ their work into multiple articles, even when publishing all the findings in a single article would be more appropriate. Editors have told me that they expect at least 60% of a paper to be new; however, they require the initial work in order to evaluate the originality of the submission. Fortunately, AOM’s code of ethics is explicit on the issue of citing previously related work:

4.2.3.5. When AOM members publish data or findings that overlap with work they have previously published elsewhere, they cite these publications. AOM members must also send the prior publication or in-press work to the AOM journal editor to whom they are submitting their work.

Very few of us would bank with an uninsured bank or a money lender lacking clear and transparent procedures. We would want to know that the staff of the bank are properly bonded, adhere to strict ethical guidelines, and will perform according to normative expectations. The rules governing academic research are less institutionalized and less codified, yet working with a co-author entails even more trust than we place in our day-to-day banking – money, after all, can easily be replaced (and is often insured). While we often focus on the technical competency of our co-authors, it behooves all of us to closely examine the ethical compasses of those we work with.

Reputations, once damaged, are very difficult to reestablish.


Management Without Borders

As many of us get ready for the annual AOM conference, it is worthwhile to consider the theme for a moment: “Opening Governance”. We are invited “to consider opportunities to improve the effectiveness and creativity of organizations by restructuring systems at the highest organizational levels.”

I believe we can begin with ourselves, as professionals, by enhancing our ability to act as organizational catalysts, stakeholders, managers, and global leaders. Certainly, AOM has created some very important mechanisms to ensure fair and transparent governance, and we refer to our global responsibilities clearly in our code of ethics:

  1. To all people with whom we live and work in the world community.

Sensitivity to other people, to diverse cultures, to the needs of the poor and disadvantaged, to ethical issues, and to newly emerging ethical dilemmas is required. We accomplish this aim through:  Worldview. Academy members have a duty to consider their responsibilities to the world community. In their role as educators, members of the Academy can play a vital role in encouraging a broader horizon for decision making by viewing issues from a multiplicity of perspectives, including the perspectives of those who are the least advantaged.

Like most of you, I’ve attended numerous academic conferences where great world issues are actively discussed and debated, including the relevance of management scholarship, public policy research, corporate social responsibility, and the like. Yet, as I think of the activities revolving around our conference and our professional roles, I often come up empty-handed regarding the actual contribution our field makes in today’s environment, particularly on a global basis. Most of us are fortunate enough to have established positions in wealthy ‘western/northern’ countries. We are rarely forced to worry about basic health care, nutrition, housing, and education, never mind political instability, personal freedom, safety, and security – all of concern to most of the world’s population.

Henry Mintzberg, in his most recent book, “Rebalancing Society”, points out the need for a balance between government, business, and civil society (often referred to as the third sector, or by Mintzberg as the plural sector). He argues that the collapse of the communist regimes in 1989 reflected an imbalance (an overly centralized government unchecked by other forces) rather than a triumph of capitalism over communism. Our responsibility – as elite professional intellectuals – arguably includes helping to re-establish a balance that, according to Mintzberg (as well as many other scholars of public policy who examine the empirical evidence), has become skewed, pushing civil society to the margins as a minority position. The resulting inequality, one consequence, should concern us all.

So, besides attending a conference exploring good governance, what else can we academicians do? What if the Academy developed and sponsored an ‘Academic Management Without Borders’ program? Is there any interest out there?

Openness

There’s an interesting conversation in progress on Andrew Gelman’s blog. He has long argued for the value of openly sharing your data with other researchers, and in the post in question he is promoting Jeff Rouder’s call for “born-open data”, i.e., for data that is collected in a manner that allows it to be made public immediately, even before it has been analyzed by the researcher.

Andrew goes on to cite an example of data that was not open, and was indeed deliberately not made available to him when he requested it. The reason he was given was that he had criticized the researchers in question in an article published in Slate, an online magazine, without first contacting them. As Jessica Tracy, one of the researchers, explains in a comment, they felt they could not “trust that [Andrew would] use the data in a good faith way,” since he did not ensure that they had a chance to review and respond to his criticisms before he published them. It was because of Andrew’s “willingness (even eagerness) to publicly critique [Beal and Tracy’s] work without taking care to give [them] an opportunity to respond or correct errors,” that Tracy “made the decision that [Andrew is] not a part of the open-science community of scholars with whom I would (and have) readily shared our data” (my emphasis).

This way of putting it is, in my opinion, essentially “ethical”, i.e., a matter of how one constructs one’s community of peers, the “company one keeps”. Our values, and the ethical behaviors they inform, shape our identity as researchers, i.e., both who we are and who, as in this case, we therefore associate with and are “open” to criticism from. Though she does not put it quite as strongly as I’m about to, and hers is certainly a mild form of what I’m talking about, Tracy is actually saying that Andrew has behaved unethically, i.e., he has violated her sense of appropriate conduct, or, to put it in a way that resonates more closely with the letter of her remarks, he has violated what she perceives as her community’s standards. In the comments, Keith O’Rourke correctly points out why this sort of violation, whether real or merely perceived, is a problem in science:

It seems that if one’s initial contact with someone is to call their abilities into question somehow – it is very difficult for them to believe any further interactions are meant to cause anything but further harm. Worse this makes it difficult for them to actually attend to the substance of the original criticisms.

Both Andrew’s and Jessica Tracy’s arguments can be read as “ethical” ones in the sense that they are about the values that maintain communities of practice. Tracy is saying that she wants to work in a community that only requires her to share her data with people she trusts. Andrew is saying we should either trust everyone (in one sense) or not demand that we should trust (in another sense) anyone. Both are articulating constitutive values, values that shape who we are when we call ourselves scholars. They construct a research identity, a scholarly persona, a scientific ethos.

For my part, I’m generally in agreement with Andrew. I think when we spot a problem in the published work of others we should make our critique public before we contact the researchers in question. (I usually then send them a friendly email letting them know of my criticism.) The reason is that I value the correction of the error above the maintenance of my relationship to the relevant community. (I’m not without sympathy for people who prioritize differently.) Also, no matter how the initial contact is framed, it will always open the possibility of keeping the criticism quiet, and this can lead to all sorts of uncomfortable situations and misunderstandings. (I’ve previously discussed this issue in the case of plagiarism charges.)

In any case, here’s what the AOM Code of Ethics says about sharing data and results, and about reacting to the discovery of error (presumably this includes errors discovered by others, i.e., errors revealed by public criticism):

4.1.4. In keeping with the spirit of full disclosure of methods and analyses, once findings are publicly disseminated, AOM members permit their open assessment and verification by other responsible researchers, with appropriate safeguards, where applicable, to protect the anonymity of research participants.

4.1.5. If AOM members discover significant errors in their publication or presentation of data, they take appropriate steps to correct such errors in the form of a correction, retraction, published erratum, or other public statement.

Notice that we’re in principle committed to open data at AOM, but we also acknowledge something like Tracy’s “good faith” requirement, and her discernment about whether the people to whom we show our data are really members of our “community of scholars”, in that we specify that we “permit the open assessment and verification” of our data “by other responsible researchers.” If we decide, as Tracy did in Gelman’s case, that the researcher in question is not going to be “responsible”, then we are not obligated to share.

One argument, in my opinion, for the “born-open” approach is that it obviates the need for this kind of judgment call. Everyone (in my utopia), no matter how good their faith appears to be, is in principle a member of what Tracy calls “the open-science community of scholars”. I don’t think you should be able to choose your critics in science, though you are free to ignore the ones you don’t think are serious. The question, of course, will then be whether the peers you do want to talk to share your judgment of the critic you are ignoring.

Freedom

Without freedom, there’d be no need for ethics. If every act of disobedience were punished by death, any deliberation about “the right thing to do” would be foolish, except as a reflection upon the meaning of the orders one had been given. But notice that even exile would be “punishment” enough to ensure that a culture had no ethical dimension. If everyone who disobeyed were exiled, their ethical deliberations would still have a place, but it would be outside the culture that banished them.

This leads to a final variation on this dystopian scenario. Suppose obedience were made a requirement for entrance into a culture. Here you would have a powerful selection pressure that would favor those who have a tendency to conflate the question “What should I do?” with “What have I been told to do?” If getting into the culture demands that one be good at answering the second question, not the first, we can expect ethical deliberation to be a somewhat rare occurrence among those who do get in.

Building a strong ethical culture, therefore, means giving ample room for the free exercise of judgment. An educational system in which all instances of plagiarism are caught and punished with expulsion is no place to learn about the ethics of crediting your sources. If it is not really possible to do wrong, then it is also not possible to do right. That’s what freedom is really all about. (Of course, underlying all these hypothetical cases is the insight that they are utterly unrealistic. It is impossible to punish all and only acts of disobedience. Even deciding whether someone has broken a rule is an act of interpretation.)

The section of the Code that deals most explicitly with this is the third part of our Professional Principles. Here we read that “The AOM must have continuous infusions of members and new points of view to remain viable and relevant as a professional association” and that “It is the duty of AOM members to interact with others in our community in a manner that recognizes individual dignity and merit.” That is, we must have a culture that does not, first, require the loyalty or obedience of its members, but actively seeks their “point of view” and recognizes their “individual dignity.” In short, as a professional association, we see ourselves as a community of free people.

One of the most important freedoms in intellectual contexts, to my mind at least, is the right to be wrong. In a free society we are free to make mistakes. That freedom, of course, comes with the obligation to acknowledge and correct our mistakes when they are pointed out to us. It follows that an intellectual community should select members not on the basis of their loyalty or obedience, i.e., their willingness to give up their freedom, but on the number of errors they have made and corrected, i.e., their insistence on exercising their intellectual freedom.

Confessions of a hoarder

I admit it, I am a hoarder. Buried in a few large boxes in my garage are the original surveys from my doctoral dissertation, collected in Jamaica 25 years ago. They have traveled with me to three countries, crossing oceans twice – and yet, I have barely looked at them since placing them in the box a quarter of a century ago. Still, in the back of my mind, I have this footnote: my work is not only archived, but available to anyone who wants to wade through my handwritten notes, or listen to audio tapes (are they still usable? I wonder…). I may have erred in my work, but those errors can be examined under scholarly scrutiny should the need ever arise. It should be possible to determine the accuracy of virtually any data point.

At the last Academy meeting, there was considerable debate and heated argument concerning the retraction of certain articles. One of the issues was that the author had lost the primary data, backed up only on jump drives, and there was no way to authenticate or replicate the analyses from a number of papers. This happened despite a journal requirement that primary data be stored for at least five years after publication.

This is not an isolated incident. More recently, I became aware of a colleague who lost her primary digital files and was unable to replicate her own work. The result was a discussion regarding possible retraction – clearly an unfortunate and, in this era of cheap digital storage, totally unnecessary event. Had she made even a minimal effort to keep her original files backed up somewhere secure, the question would never have arisen.

The Academy of Management’s code of ethics is surprisingly silent regarding the storage of relevant data – perhaps this is an area of development for our code. However, two related points can be read:

4.1.4. In keeping with the spirit of full disclosure of methods and analyses, once findings are publicly disseminated, AOM members permit their open assessment and verification by other responsible researchers, with appropriate safeguards, where applicable, to protect the anonymity of research participants.

4.1.5. If AOM members discover significant errors in their publication or presentation of data, they take appropriate steps to correct such errors in the form of a correction, retraction, published erratum, or other public statement.

Thus, we can see that our code not only expects us to share relevant data, but also to report our errors in a public forum. Obviously, the implication is that our data will be secure, available, and not “lost”. However, as of now, the onus is on each of us to perform the necessary data archival work, hoarding our respective files in our garages and on our disk drives, in perpetuity.

Clearly, a more professional model would be for each journal to arrange to archive the relevant material for each published article. The data could then be made available, with appropriate safeguards, to other interested scholars, students, and the public. Given that much of our data is paid for with public funds, this should be a minimum requirement of prestigious “A” journals.
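Verifying that archived data has not been lost or altered need not be onerous. As a purely illustrative sketch (the function names and the CSV file below are hypothetical, not any journal’s actual procedure), an author or journal could record a checksum for every file at the time of publication, and anyone could later confirm the archived copies are intact:

```python
import hashlib
from pathlib import Path


def archive_manifest(data_dir: str) -> dict:
    """Record a SHA-256 checksum for every file in a dataset directory,
    so archived copies can later be verified byte-for-byte."""
    manifest = {}
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(data_dir))] = digest
    return manifest


def verify_archive(data_dir: str, manifest: dict) -> list:
    """Return the names of any files that are missing or have changed
    since the manifest was created."""
    problems = []
    for name, expected in manifest.items():
        path = Path(data_dir) / name
        if not path.is_file():
            problems.append(name)
        elif hashlib.sha256(path.read_bytes()).hexdigest() != expected:
            problems.append(name)
    return problems
```

The manifest itself is tiny and could be published alongside the article, so that even decades later a reader could establish whether the deposited data matches what the authors analyzed.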

I’m sure my wife looks forward to the day I can scan, and finally dispose of, those ancient boxes taking up space in our garage.