Openness

There’s an interesting conversation in progress on Andrew Gelman’s blog. He has long argued for the value of openly sharing your data with other researchers, and in the post in question he is promoting Jeff Rouder’s call for “born-open data”, i.e., for data that is collected in a manner that allows it to be made public immediately, even before it has been analyzed by the researcher.

Andrew goes on to cite an example of data that was not open, and was indeed deliberately not made available to him when he requested it. The reason he was given was that he had criticized the researchers in question in an article published in Slate, i.e., an online magazine, without first contacting them. As Jessica Tracy, one of the researchers, explains in a comment, they felt they could not “trust that [Andrew would] use the data in a good faith way,” since he did not ensure that they had a chance to review and respond to his criticisms before he published them. It was because of Andrew’s “willingness (even eagerness) to publicly critique [Beall and Tracy’s] work without taking care to give [them] an opportunity to respond or correct errors,” that Tracy “made the decision that [Andrew is] not a part of the open-science community of scholars with whom I would (and have) readily shared our data” (my emphasis).

This way of putting it is, in my opinion, essentially “ethical”, i.e., a matter of how one constructs one’s community of peers, the “company one keeps”. Our values, and the ethical behaviors they inform, shape our identity as researchers, i.e., both who we are and who, as in this case, we therefore associate with and are “open” to criticism from. Though she does not put it quite as strongly as I’m about to, and it is certainly a mild form of what I’m talking about, Tracy is actually saying that Andrew has behaved unethically, i.e., he has violated her sense of appropriate conduct, or, to put it in a way that resonates more closely with the letter of her remarks, he has violated what she perceives as her community’s standards. In the comments, Keith O’Rourke correctly points out why this sort of violation, whether real or merely perceived, is a problem in science:

It seems that if one’s initial contact with someone is to call their abilities into question somehow – it is very difficult for them to believe any further interactions are meant to cause anything but further harm. Worse this makes it difficult for them to actually attend to the substance of the original criticisms.

Both Andrew’s and Jessica Tracy’s arguments can be read as “ethical” ones in the sense that they are about the values that maintain communities of practice. Tracy is saying that she wants to work in a community that only requires her to share her data with people she trusts. Andrew is saying we should either trust everyone (in one sense) or not demand trust (in another sense) of anyone. Both are articulating constitutive values, values that shape who we are when we call ourselves scholars. They construct a research identity, a scholarly persona, a scientific ethos.

For my part, I’m generally in agreement with Andrew. I think when we spot an error in the published work of others we should make our critique public before we contact the researchers in question. (I usually then send them a friendly email letting them know of my criticism.) The reason is that I value the correction of the error above the maintenance of my relationship to the relevant community. (I’m not without sympathy for people who prioritize differently.) Also, no matter how the initial contact is framed, contacting the researchers first always opens the possibility of keeping the criticism quiet, and this can lead to all sorts of uncomfortable situations and misunderstandings. (I’ve previously discussed this issue in the case of plagiarism charges.)

In any case, here’s what the AOM Code of Ethics says about sharing data and results, and about reacting to the discovery of error (presumably this includes errors discovered by others, i.e., errors revealed by public criticism):

4.1.4. In keeping with the spirit of full disclosure of methods and analyses, once findings are publicly disseminated, AOM members permit their open assessment and verification by other responsible researchers, with appropriate safeguards, where applicable, to protect the anonymity of research participants.

4.1.5. If AOM members discover significant errors in their publication or presentation of data, they take appropriate steps to correct such errors in the form of a correction, retraction, published erratum, or other public statement.

Notice that we’re in principle committed to open data at AOM, but we also acknowledge something like Tracy’s “good faith” requirement, and her discernment about whether the people we show our data to are really members of our “community of scholars”, in that we specify that we “permit the open assessment and verification” of our data “by other responsible researchers.” If we decide, as Tracy did in Gelman’s case, that the researcher in question is not going to be “responsible”, then we are not obligated to share.

One argument, in my opinion, for the “born-open” approach is that it obviates the need for this kind of judgment call. Everyone (in my utopia), no matter how good their faith appears to be, is in principle a member of what Tracy calls “the open-science community of scholars”. I don’t think you should be able to choose your critics in science, though you are free to ignore the ones you don’t think are serious. The question, of course, will then be whether the peers you do want to talk to share your judgment of the critic you are ignoring.