When scholars are sued for their research

I was told an interesting story recently, and while I cannot vouch for all the details or their accuracy, I thought it would make a worthwhile subject for discussion. The story went like this:

Two faculty members were involved, separately, in research on a similar subject – in this case, something related to genetics. One faculty member believes the other has followed a completely erroneous methodology, essentially invalidating that person's work. They discuss this over lunch, and the person accused of the error gets furious and storms out. A week or two later, the 'accuser' receives a letter from the other faculty member's lawyer. It is a cease and desist letter, indicating that any further mention of his opinion regarding the asserted methodological errors or limitations, in any public capacity whatsoever, will be met with a defamation lawsuit.

Lawsuits by the private sector against academics are not particularly new. Firms have a vested interest in protecting their businesses and their reputations against defamation. So, when one professor asserted that a private firm was a consistent violator of labor laws, the firm engaged a lawyer in an attempt to set the record straight.

In this case, the lawyer for the firm stated: "It's not our desire to destroy anyone or harm anyone. If we could arrive at some accommodation to set this situation right and to set the record straight, that's all we're looking for. We just want the truth."

Of course, what is 'true' can be quite subjective, as any reviewer or scholar will freely admit. While many of us maintain we are in the business of seeking truth, or something closely akin to it, one person's truth is another person's nonsense. Fortunately, academics, as it turns out, are typically insulated from such lawsuits in most countries by protections of academic freedom and free speech. Imagine if every paper we wrote were subject to a lawsuit by anyone, colleague or not, who disagreed with our methodology, conclusions, or theoretical framework.

So, what does AOM’s code of ethics have to say on this subject? Well, broadly, we begin with integrity:

AOM members seek to promote accuracy, honesty, and truthfulness in the science, teaching, and practice of their profession. In these activities AOM members do not steal, cheat, or engage in fraud, subterfuge, or intentional misrepresentation of fact. They strive to keep their promises, to avoid unwise or unclear commitments, and to reach for excellence in teaching, scholarship, and practice. They treat students, colleagues, research subjects, and clients with respect, dignity, fairness, and caring. They accurately and fairly represent their areas and degrees of expertise.

Further, we have clear guidance with reference to our professional environment:

  1. To the Academy of Management and the larger professional environment. Support of the AOM's mission and objectives, service to the AOM and its institutions, and recognition of the dignity and personal worth of colleagues are required. We accomplish these aims through:
  •  Sharing and dissemination of information. To encourage meaningful exchange, AOM members should foster a climate of free interchange and constructive criticism within the AOM and should be willing to share research findings and insights fully with other members.
  •  Membership in the professional community. It is the duty of AOM members to interact with others in our community in a manner that recognizes individual dignity and merit.

4.1.3.  AOM members take particular care to present relevant qualifications to their research or to the findings and interpretations of their research. AOM members also disclose underlying assumptions, theories, methods, measures, and research designs that are relevant to the findings and interpretations of their work.

4.1.4.  In keeping with the spirit of full disclosure of methods and analyses, once findings are publicly disseminated, AOM members permit their open assessment and verification by other responsible researchers, with appropriate safeguards, where applicable, to protect the anonymity of research participants.

4.1.5.  If AOM members discover significant errors in their publication or presentation of data, they take appropriate steps to correct such errors in the form of a correction, retraction, published erratum, or other public statement.

Thus, in the case of our two faculty members, it appears that resorting to legal means in order to protect one's reputation – accurately or not – is a deviation from our professional norms. Rather than hiring a lawyer, the offended scholar would be better advised to take the argument to the journals. Further, if the story is true, it demonstrates a scholarly insecurity and fear of critique that almost defies imagination. If our scholarly opinions can be silenced whenever a disagreeing scholar is able to hire an expensive lawyer, the entire foundation of our scholarship is undermined.

John Doe endorsed me as an expert in watching paint dry!

It seems that few days pass without someone 'recommending me' on ResearchGate or LinkedIn for all types of skills. It started innocuously enough, when I received a recommendation in my main field of study from a colleague who knew me fairly well. What began as a friendly 'tip of the hat' has blossomed into a full-scale encyclopedia of ratings (all, naturally, positive – nobody seems to dis-recommend me for anything) of just about everything related to what I might or might not do as a scholar, faculty member, or even dog walker. There are now dozens of endorsements, some by people I barely know, have never met, or do not know at all. So, what's going on?

Presumably, there is a tit-for-tat process going on, whereby I am expected, in turn, to recommend the colleague I barely know or have hardly spoken with as an 'expert' in the field of xyz. In fact, these two platforms 'helpfully' remind me every so often to do so, by providing lists of people I may or may not know, along with questions such as "Do you recommend Rumpelstiltskin as an expert public speaker?" (never mind that I don't know who they are, and have never heard them speak publicly or privately). The assumption is that since they were so nice as to recommend my paint-drying observational skills, I will return the favor by highly recommending them (and I often do). It reminds me a bit of collecting baseball cards as a kid – more was always better. Unfortunately, Mom gave away the box and I no longer have my Mickey Mantle rookie card – although I'm not sure who will take my expired LinkedIn recommendations – maybe I'll just have to assume a new identity.

So, what's going on, anyway, with all these internet evaluation schemes? In an op-ed in the New York Times last year, David Brooks pointed out that while peer-to-peer rating systems in the private sector have obvious advantages, there is as yet no role for government in regulating peer-based reference systems. They exist in a 'grey zone' between bake sales and personal security. Yet, as these systems increasingly take on currency in our professional lives (e.g., Rate My Professors), it seems our Academy might want to play a more active role. Certainly, our professional associations should be drawing red lines regarding appropriate behavior of our membership, including the consulting roles that seem to be reflected in these reference systems. The recent shocking revelation that the American Psychological Association provided the US government with supportive guidance on consulting related to torture suggests that professional organizations will increasingly play a public role in providing both public and private ethical guidance, and in setting boundaries that protect public trust in the integrity of our professional activities.

I was able to find two related passages in our code of ethics that obliquely address the importance of providing accurate assessments:

Credentials and capabilities. It is the duty of consultants to represent their credentials and capabilities in an accurate and objective manner.

And later: AOM members do not make public statements, of any kind, that are false, deceptive, misleading, or fraudulent, either because of what they state, convey, or suggest or because of what they omit, including, but not limited to, false or deceptive statements concerning other AOM members.

So, the next time you are asked to evaluate a colleague – someone you might barely know – and feel the urge to 'return the favor', give some consideration to whom you are recommending and what you are recommending them for. If peer-to-peer reference systems are ever to carry real weight in our profession, it will be critical that we follow the standards outlined in our own code of ethics.

Working and trusting your co-authors

In the past few weeks, I've been told about a couple of co-author oddities, which I discuss below. Because many of us work simultaneously in many virtual teams, we may have only an uncomfortable, partial knowledge of the ethical red lines of our co-authors. I have seen working with unfamiliar co-authors seriously compromise the scholarly integrity of seemingly innocent contributors. In a few cases, reputations and careers have been seriously undermined.

I have always found it strange that we scholars tend to assume high integrity of all members of our profession, as though ethical norms were universally agreed upon and followed. In fact, cultures vary greatly, as do individual interpretations of what is 'right' and what is 'wrong'. We do ourselves a service when we carefully investigate the norms and practices of potential collaborators. Sometimes, asking a former collaborator can provide insight. Otherwise, frank discussions about what they deem acceptable, or not, can be quite illuminating. What is most important is that the conversation take place before collaborative work is undertaken.

In one case discussed with me, a scholar learned that a paper had been published with his name on it only after the manuscript appeared. The journal was not a very prestigious one, but the scholar had no idea the paper had been submitted, and it was published with another co-author he was unfamiliar with.

In the second case, a scholar had been working for several years on a manuscript that had already gone through a few rejections and elaborate revisions. Only after a specific inquiry was she informed that her colleague had already published a 'stripped down' version of the paper, as sole author, in a lower-tier journal. She was concerned that this prior publication invalidated the subsequent, more developed one (which had never cited the earlier work) and was unsure what the correct ethical protocol was.

Both of these cases illustrate problems surrounding intellectual property, integrity of authorship, and the importance of trusting working relationships. Most journals accept submissions whereby a single author signs over the copyright on behalf of the remaining authors. However, consultation with the entire authorship team is not only expected, it is also specified in AOM's code of ethics, as follows:

4.2.3.1. In cases of multiple authorship, AOM members confer with all other authors prior to submitting work for publication, and they establish mutually acceptable agreements regarding submission.

The second case, where related work is left uncited, is perhaps somewhat more common. Many journals now explicitly require submitting authors to verify that the work they are submitting is not based on previously published data or research, and, if it is, to state precisely where it was published and how the new work differs. Because of increasing pressure to publish, scholars are increasingly enticed to 'salami slice' their work into multiple articles, even when publishing all the findings in a single article would be more appropriate. Editors have told me that they expect at least 60% of a paper to be new; however, they need the earlier work in order to evaluate the originality of the submission. Fortunately, AOM's code of ethics is explicit on the issue of citing previously related work:

4.2.3.5. When AOM members publish data or findings that overlap with work they have previously published elsewhere, they cite these publications. AOM members must also send the prior publication or in-press work to the AOM journal editor to whom they are submitting their work.

Very few of us would bank with an uninsured bank or a money lender lacking clear and transparent procedures. We would want to know that the staff of the bank are properly bonded, adhere to strict ethical guidelines, and will perform according to normative expectations. While the rules governing academic research are less institutionalized, and so less codified, working with a co-author entails even more trust than we place in our day-to-day banking; money, after all, can easily be replaced (and is often insured). So while we often focus on the technical competence of our co-authors, it behooves all of us to closely examine the ethical compasses of those we work with.

Reputations, once damaged, are very difficult to reestablish.

Management Without Borders

As many of us get ready for the annual AOM conference, it is worthwhile considering the theme for a moment, "Opening Governance". We are invited "to consider opportunities to improve the effectiveness and creativity of organizations by restructuring systems at the highest organizational levels."

I believe we can begin with ourselves, as professionals, by enhancing our ability to act as organizational catalysts, stakeholders, managers, and global leaders. Certainly, AOM has created some very important mechanisms to ensure fair and transparent governance, and we refer to our global responsibilities clearly in our code of ethics:

  1. To all people with whom we live and work in the world community.

Sensitivity to other people, to diverse cultures, to the needs of the poor and disadvantaged, to ethical issues, and to newly emerging ethical dilemmas is required. We accomplish this aim through:
  •  Worldview. Academy members have a duty to consider their responsibilities to the world community. In their role as educators, members of the Academy can play a vital role in encouraging a broader horizon for decision making by viewing issues from a multiplicity of perspectives, including the perspectives of those who are the least advantaged.

Like most of you, I've attended numerous academic conferences where great world issues are actively discussed and debated, including the relevance of management scholarship, public policy research, Corporate Social Responsibility, and the like. Yet, as I think about the activities revolving around our conference and our professional roles, I often come up empty-handed regarding the actual contribution our field makes in today's environment, particularly on a global basis. Most of us are fortunate enough to have established positions in wealthy 'western/northern' countries. We are rarely forced to worry about basic health care, nutrition, housing, and education, never mind political instability, personal freedom, safety, and security, all of concern to most of the world's population.

Henry Mintzberg, in his most recent book, "Rebalancing Society", points out the need for a balance between government, business, and civil society (often referred to as the third sector, or by Mintzberg as the plural sector). He argues that the collapse of communism in 1989 was due to an imbalance (an overly centralized government unchecked by other forces) rather than a triumph of capitalism over communism. Our responsibility – as elite professional intellectuals – arguably includes helping to re-establish a balance that, according to Mintzberg (as well as many other scholars of public policy who examine the empirical evidence), has become skewed, pushing civil society to the margins as a minority position. The resulting inequality is one consequence that should concern us all.

So, besides attending a conference exploring good governance, what else can we academicians do? What if the Academy developed and sponsored an ‘Academic Management Without Borders’ program? Is there any interest out there?

Openness

There’s an interesting conversation in progress on Andrew Gelman’s blog. He has long argued for the value of openly sharing your data with other researchers, and in the post in question he is promoting Jeff Rouder’s call for “born-open data”, i.e., for data that is collected in a manner that allows it to be made public immediately, even before it has been analyzed by the researcher.
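
To make the idea concrete, here is a minimal, purely illustrative sketch of a born-open workflow in Python: as soon as a session's raw data files land in the study's public repository checkout, they are committed and pushed, before anyone has had a chance to analyze them. The directory layout, file naming, and use of a git repository are my own assumptions for illustration, not a description of Rouder's actual setup.

```python
"""A hypothetical sketch of a 'born-open' data workflow.

Raw data files are pushed to a public repository as soon as they are
collected, before any analysis. All paths and naming conventions here
are assumptions for illustration.
"""
import subprocess
from pathlib import Path

DATA_REPO = Path("study-data")  # local checkout of a public repository (assumed)

def publish_new_raw_data() -> None:
    """Stage, commit, and push any newly collected raw files."""
    subprocess.run(["git", "add", "raw/"], cwd=DATA_REPO, check=True)
    # Only commit if something was actually staged (git diff --cached --quiet
    # exits non-zero when there are staged changes).
    staged = subprocess.run(["git", "diff", "--cached", "--quiet"], cwd=DATA_REPO)
    if staged.returncode != 0:
        subprocess.run(
            ["git", "commit", "-m", "Born-open: add raw data before analysis"],
            cwd=DATA_REPO, check=True,
        )
        subprocess.run(["git", "push"], cwd=DATA_REPO, check=True)

if __name__ == "__main__":
    # Run after each collection session, or on a schedule (e.g., nightly).
    publish_new_raw_data()
```

The point is not the tooling but the timing: the data become public before any analysis, so there is never a later decision about whether, or with whom, to share them.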

Andrew goes on to cite an example of data that was not open, and was indeed deliberately not made available to him when he requested it. The reason he was given was that he had criticized the researchers in question in an article published in Slate, i.e., an online magazine, without first contacting them. As Jessica Tracy, one of the researchers, explains in a comment, they felt they could not "trust that [Andrew would] use the data in a good faith way," since he did not ensure that they had a chance to review and respond to his criticisms before he published them. It was because of Andrew's "willingness (even eagerness) to publicly critique [Beal and Tracy's] work without taking care to give [them] an opportunity to respond or correct errors," that Tracy "made the decision that [Andrew is] not a part of the open-science community of scholars with whom I would (and have) readily shared our data" (my emphasis).

This way of putting it is, in my opinion, essentially "ethical", i.e., a matter of how one constructs one's community of peers, the "company one keeps". Our values, and the ethical behaviors they inform, shape our identity as researchers, i.e., both who we are and who, as in this case, we therefore associate with and are "open" to criticism from. Though she does not put it quite as strongly as I'm about to, and it is certainly a mild form of what I'm talking about, Tracy is actually saying that Andrew has behaved unethically, i.e., he has violated her sense of appropriate conduct, or, to put it in a way that resonates more closely with the letter of her remarks, he has violated what she perceives as her community's standards. In the comments, Keith O'Rourke correctly points out why this sort of violation, whether real or merely perceived, is a problem in science:

It seems that if one’s initial contact with someone is to call their abilities into question somehow – it is very difficult for them to believe any further interactions are meant to cause anything but further harm. Worse this makes it difficult for them to actually attend to the substance of the original criticisms.

Both Andrew’s and Jessica Tracy’s arguments can be read as “ethical” ones in the sense that they are about the values that maintain communities of practice. Tracy is saying that she wants to work in a community that only requires her to share her data with people she trusts. Andrew is saying we should either trust everyone (in one sense) or not demand that we should trust (in another sense) anyone. Both are articulating constitutive values, values that shape who we are when we call ourselves scholars. They construct a research identity, a scholarly persona, a scientific ethos.

For my part, I'm generally in agreement with Andrew. I think that when we spot an error or problem in the published work of others, we should make our critique public before we contact the researchers in question. (I usually then send them a friendly email letting them know of my criticism.) The reason is that I value the correction of the error above the maintenance of my relationship to the relevant community. (I'm not without sympathy for people who prioritize differently.) Also, contacting the researchers first, no matter how the contact is framed, always opens the possibility of keeping the criticism quiet, and this can lead to all sorts of uncomfortable situations and misunderstandings. (I've previously discussed this issue in the case of plagiarism charges.)

In any case, here's what the AOM Code of Ethics says about sharing data and results, and about reacting to the discovery of error (presumably this includes the discovery of error by others, i.e., errors revealed by public criticism):

4.1.4. In keeping with the spirit of full disclosure of methods and analyses, once findings are publicly disseminated, AOM members permit their open assessment and verification by other responsible researchers, with appropriate safeguards, where applicable, to protect the anonymity of research participants.

4.1.5. If AOM members discover significant errors in their publication or presentation of data, they take appropriate steps to correct such errors in the form of a correction, retraction, published erratum, or other public statement.

Notice that we're in principle committed to open data at AOM, but we also acknowledge something like Tracy's "good faith" requirement, and her discernment about whether the people we show our data to are really members of our "community of scholars", in that we specify that we "permit the open assessment and verification" of our data "by other responsible researchers." If we decide, as Tracy did in Gelman's case, that the researcher in question is not going to be "responsible", then we are not obligated to share.

One argument, in my opinion, for the "born-open" approach is that it obviates the need for this kind of judgment call. Everyone (in my utopia), no matter how good their faith appears to be, is in principle a member of what Tracy calls "the open-science community of scholars". I don't think you should be able to choose your critics in science, though you are free to ignore the ones you don't think are serious. The question, of course, will then be whether the peers you do want to talk to share your judgment of the critic you are ignoring.

Freedom

Without freedom, there'd be no need for an ethics. If every act of disobedience were punished by death, any deliberation about "the right thing to do" would be foolish, except as a reflection upon the meaning of the orders one had been given. But notice that even exile would be "punishment" enough to ensure that a culture had no ethical dimension. If everyone who disobeyed were exiled, their ethical deliberations would still have a place, but it would be outside the culture that banished them.

This leads to a final variation on this dystopian scenario. Suppose obedience were made a requirement for entrance into a culture. Here you would have a powerful selection pressure that would favor those who have a tendency to conflate the question “What should I do?” with “What have I been told to do?” If getting into the culture demands that one be good at answering the second question, not the first, we can expect ethical deliberation to be a somewhat rare occurrence among those who do get in.

Building a strong ethical culture, therefore, means giving ample room for the free exercise of judgment. An educational system in which all instances of plagiarism are caught and punished with expulsion is no place to learn about the ethics of crediting your sources. If it is not really possible to do wrong, then it is also not possible to do right. That's what freedom is really all about. (Of course, underlying all these hypothetical cases is the recognition that they are utterly unrealistic. It is impossible to punish all and only acts of disobedience. Even deciding whether someone has broken a rule is an act of interpretation.)

The section of the Code that deals most explicitly with this is the third part of our Professional Principles. Here we read that “The AOM must have continuous infusions of members and new points of view to remain viable and relevant as a professional association” and that “It is the duty of AOM members to interact with others in our community in a manner that recognizes individual dignity and merit.” That is, we must have a culture that does not, first, require the loyalty or obedience of its members, but actively seeks their “point of view” and recognizes their “individual dignity.” In short, as a professional association, we see ourselves as a community of free people.

One of the most important freedoms in intellectual contexts, to my mind at least, is the right to be wrong. In a free society we are free to make mistakes. That freedom, of course, comes with the obligation to acknowledge and correct our mistakes when they are pointed out to us. It follows that an intellectual community should select members not on the basis of their loyalty or obedience, i.e., their willingness to give up their freedom, but on the number of errors they have made and corrected, i.e., their insistence on exercising their intellectual freedom.

Confessions of a hoarder

I admit it, I am a hoarder. Buried in a few large boxes in my garage are the original surveys from my doctoral dissertation, collected in Jamaica 25 years ago. They have traveled with me to three countries, crossing oceans twice – and yet I have barely looked at them since placing them in the box a quarter of a century ago. Still, in the back of my mind, I carry this footnote: my work is not only archived, but available to anyone who wants to wade through my handwritten notes or listen to the audio tapes (are they still usable? I wonder…). I may have erred in my work, but those errors can be examined under scholarly scrutiny should the need ever arise. It should be possible to determine the accuracy of virtually any data point.

At the last Academy meeting, there was considerable debate and heated argument concerning the retraction of certain articles. One of the issues was that the author had lost the primary data, which had been backed up on jump drives, and there was no way to authenticate or replicate the analyses from a number of papers. This happened despite a journal requirement that primary data be stored for at least five years after publication.

This is not an isolated incident. More recently, I became aware of a colleague who lost primary digital files and was unable to replicate her own work. The result was a discussion regarding possible retraction – clearly an unfortunate and, in this era of cheap digital storage, totally unnecessary event. Had she simply made a minimal effort to store copies of her original files somewhere secure, the entire episode could have been avoided.

The Academy of Management’s code of ethics is surprisingly silent regarding the storage of relevant data – perhaps this is an area of development for our code. However, two related points can be read:

4.1.4. In keeping with the spirit of full disclosure of methods and analyses, once findings are publicly disseminated, AOM members permit their open assessment and verification by other responsible researchers, with appropriate safeguards, where applicable, to protect the anonymity of research participants.

4.1.5. If AOM members discover significant errors in their publication or presentation of data, they take appropriate steps to correct such errors in the form of a correction, retraction, published erratum, or other public statement.

Thus, we can see that our code not only expects us to share relevant data, but also to report our errors in a public forum. Obviously, the implication is that our data will be secure, available, and not "lost". However, as of now, the onus is on each of us to perform the necessary data archival work, hoarding our respective files in our garages and on our disk drives, in perpetuity.
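
In practice, that personal onus need not demand much. Below is a minimal sketch, for illustration only, of what such do-it-yourself archiving might look like: copy a project's raw files to a second location and record checksums so the files can later be verified as unaltered. The paths and file layout are hypothetical assumptions, not a prescription.

```python
"""A hypothetical sketch of minimal personal data archiving.

Copies a project's raw files to a second storage location and records
SHA-256 checksums so that, years later, the files can be verified as
unaltered. All paths are placeholders.
"""
import hashlib
import json
import shutil
from pathlib import Path

SOURCE = Path("dissertation_data")               # working copy (assumed)
ARCHIVE = Path("/mnt/backup/dissertation_data")  # second location (assumed)

def archive_with_checksums() -> None:
    """Mirror SOURCE into ARCHIVE and write a checksum manifest."""
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for f in sorted(SOURCE.rglob("*")):
        if f.is_file():
            rel = f.relative_to(SOURCE)
            dest = ARCHIVE / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest)  # copy, preserving timestamps
            manifest[str(rel)] = hashlib.sha256(f.read_bytes()).hexdigest()
    (ARCHIVE / "MANIFEST.json").write_text(json.dumps(manifest, indent=2))

if __name__ == "__main__":
    archive_with_checksums()
```

Running something like this once, when a paper is accepted, at least ensures that "the data were lost" never has to be the reason for a retraction.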

Clearly, a more professional model would be for each journal to arrange to archive relevant material for each published article. The data could then be dispensed, with appropriate safeguards, to other interested scholars, students, and the public. Given that much of our data is paid for with public funds, this should be a minimum requirement of prestigious “A” journals.

I'm sure my wife looks forward to the day I can scan, and finally dispose of, those ancient boxes taking up space in our garage.


It Gets Better!

New York Magazine has a very informative article about David Broockman's efforts to expose what most observers are now calling a case of outright data fabrication in political science. I'm inclined to see Broockman as a hero in the struggle to reduce the amount of "moral hazard" in the social sciences. That is, it is the existence of thorough and persistent critics of other people's research, people like Broockman, that establishes a real "risk" of getting caught, not just in committing fraud but also in making honest mistakes. That risk, in turn, keeps researchers not just more honest than they might otherwise be, but more careful in the way they gather and analyze their data. Similarly, people like Kresimir Petkovic deserve to have their deeds celebrated in epic poems for offering a real, practical inducement to cite our sources properly when using the work of others.

Both cases give us some real-life insight into the difficulty of exposing academic misconduct, and are worth reflecting on in some detail. (As usual, this post will just be an opener. There's a lot to discuss here.) I'm especially interested in them because they are driven by the work of graduate students, who are, for a variety of reasons, often best positioned to detect these sorts of problems and, unfortunately, least empowered to expose them when they do. It's therefore important to give a tip of the hat to people like Alan Sokal and Andrew Gelman, who are very good about supporting people with less authority than themselves. This takes a great deal of their time and, I'm certain, a good portion of sangfroid in the face of (sometimes institutional) push-back from the people whose work comes under scrutiny. (Donald Green also deserves credit for doing the right thing as the co-author of the disputed article.)

At the end of the NY Mag article, Broockman offers a very interesting and insightful analogy to his experience of coming out as a gay man.

“I think there’s an interesting metaphor between what I went through now and what I went through as a gay teenager,” Broockman says. “I felt trapped by this suspicion, it had weighed on my mind for a long time. You know, you try to act on it in small ways in hopes that it goes away, or you find confirmation that your suspicions are wrong. You’re worried about how people will react, so you proceed really cautiously. But finally, the truth is that when you come out about it, it’s really liberating.” He thinks part of the reason he was able to eventually debunk the study, in fact, was “because I’d gone through that same experience before in my life.”

“Part of the message that I wanted to send to potential disclosers of the future is that you have a duty to come out about this, you’ll be rewarded if you do so in a responsible way, and you don’t want to live with the regret that I do that I didn’t act on it [sooner],” he says. “This is sort of my It Gets Better project for people with suspicions about research.”

I think this is right. But just as is (and certainly was) the case with young gay people, we have to be careful about insisting that graduate students have a "duty" to be honest about their "suspicions". Young researchers need time to make up their minds about what they have found, just as young men need to figure out what their feelings really mean in their own way. The worry about how people will react is not at all unfounded, whether in questions of sexual orientation or of critical scholarship. But what Dan Savage famously told young people struggling with their sexual identity also goes for graduate students who make disturbing discoveries about their colleagues: It gets better!

And there are actually two senses in which that is true. First of all, as you get older and more experienced, and you begin to live among people who are more like you and also more mature, it simply becomes easier to be the person you truly want to be. That's just growing up. But it is also true that our culture is always changing, always learning how to include attitudes and perspectives that were previously excluded. And for that, again, we owe our thanks to the pioneering heroism of people like David Broockman and Kresimir Petkovic. The rest of us, who already have our careers established, do well to look to the examples of Sokal and Gelman. We can make it suck a little less, as the kids say, to have a different point of view.

Should We Prepare for a Reckoning?

(Hat tip to Adam Kotsko’s tweet, which also gave me the theological spin, and my title.)

Over at Crooked Timber, Daniel Davies has a thought-provoking post about looming scandals in academia, analogous to the scandals in government, journalism, and finance we’ve already witnessed. Interestingly, he closes the post with some straight talk about how we can protect ourselves from being implicated in such scandals when they (if he’s right) inevitably begin to roll. “[P]ractice decent email hygiene … and avoid the creation of a paper trail,” he says; “anyone sensible will do anything they’re ashamed of in person or over the telephone.” That’s some stark advice where most of us would be inclined to think (or at least pretend to think) that it was possible to just not do something you’re likely to be ashamed of. To my ears, it’s a refreshing way of putting it.

He sets it up with a much more, shall we say, “ethical” position, namely, that the reason you want to keep your name out of a searchable email correspondence about some doubtful practice is that the excuse you’re probably using to convince yourself to do what you know you shouldn’t probably isn’t going to cut it in the light of day.

The characteristic defence is that “in the system as it is designed, we are forced to take these measures”. Often because “everyone works the system, so if we don’t do so ourselves, then …” … well, then what? Then we will slip down the rankings. Then other people will get our grants and funding and students. After having spent a couple of years of my life covering the financial sector scandals, I can report back and tell you that “If I did what I know to be the right thing, then I would have got less money and prestige than if I did what I know to be the wrong thing, so I did something I knew to be wrong” is not generally regarded as a brilliant excuse.

This is an important point. Ethical behaviour costs something, often in terms of passing up opportunities for accelerated advancement. We might say that these are "short term" losses that pay off in the long run, but we might have to invoke an almost religious sense of "the long run" to make that argument truly plausible. There are no doubt many academics who take their ethical transgressions with them through a long career, first to the bank, and then to the grave. The "cost" you're really avoiding is the risk of getting caught, and the worry that goes with it.

What Davies is trying to do is to get you to feel that risk a little more acutely. He wants you to worry a little more about the long-term risk of exposure, so that you might do something that exposes you to the short-term discomfort of a colleague’s, or even university president’s, disapproval. As he says in relation to that question of keeping your email clean, it might not be a bad thing one day if “it’s your name in the records attached to the lone voice of complaint saying ‘we shouldn’t do this, vice-Chancellor’.” A day of reckoning is coming!

Are generous salaries impacting our professional ethics? How should we ‘give back’?

It strikes me that a significant portion of the ethical concerns emanating from the academy are the result of reputation-related activities that directly impact our own professorial pocketbooks. In an op-ed piece in the New York Times, Frank Bruni points out that stellar salaries are not unique to Wall Street, but have permeated the university environment. University presidents commonly receive seven-figure salaries, as do university sports directors and certain academic 'superstars'.

When I consider some of the lengths colleagues go to in order to achieve yet another star publication (sometimes, unfortunately, crossing ethical boundary lines), I can't help but think remuneration is somewhere in the back of their minds as a motivating force. I have met colleagues who evaluate each other directly according to the lines on their respective CVs. And yet, there is no Nobel prize for management scholarship, and short of airport-book notoriety (and perhaps the associated financial residuals), few of us are likely to reach the top 0.05% of wealth. Yet we routinely make decisions, good and bad, as though each article provides yet another stream of income, another opportunity to leverage speaking engagements and research grants, and to bolster our small army of hard-working graduate students. Are we simply the sum total of our A* publications? Is that all that 'matters' to us?

At some point, I suspect, our collective failure to acknowledge responsibilities beyond our own financial concerns is driving us ever further from science, from making important social contributions, and from advancing management as a field. When we make professional decisions based on how they affect our own financial situation, we are assuming that we are engaged in a zero-sum game, perhaps a winner-take-all one, in the sense that only the most recognized bring home those super salaries Bruni complains about.

In some preliminary research I conducted with a colleague, we longitudinally analyzed professorial salaries in management. Changing institutions was one of the largest factors affecting salary raises. Regrettably, citation impact added little explanatory insight in the models (though the number of publications did). So the message is clear: focus on 'number one' at the expense of one's institution, play the numbers game of quantity over quality, and one's salary will appreciate faster than the stock market.

Of course, none of this addresses some very important roles that we, as professional scholars and academics, are meant to fulfill. Mentorship, community contributions, contributions to the field, institution building – these do not typically generate the salary enhancements that drive some of us so mercilessly.

So, here is a professional challenge. What if each of us pledged to dedicate a portion of our time – let's say 10% – toward community betterment that would NOT appear on our CV? How would this change our professional field? Would it help move us toward a better balance between individual performance and social welfare? Might it lead us toward a heightened ethical view of our roles, one that could reduce some of the tensions that have resulted in various ethical compromises? Am I just dreaming, or could some sort of 'gold standard' of professorial behavior become established in our profession?