Ethics? Let’s talk! Plan now for Atlanta sessions.

What does it mean to act ethically?

Is it basically to “do the right thing”? We need only peer out of our office windows to see that what one person thinks is the right thing, the appropriate attitude, or justifiable behavior is utterly, perhaps terrifyingly, wrong to someone else. What is the right thing, and who is the arbiter?

As academics, scholar-practitioners, or students, much of our work is done privately. No one can see what we’re doing when we’re crafting a paper, analyzing data, or conducting a peer review of someone else’s work. If we cut corners or cheat, the risk may not be obvious, or it may take time before those closed-door deeds become public. Other activities are public and may have an immediate impact on others’ well-being or careers. Even so, the right action, the ethical behavior, may not be entirely clear.

Members of the Academy need to be on the same page about what is right, and we can readily find that page: it’s called our Code of Ethics. The Code lays out expectations for all of us in its General and Professional Principles, and its Ethical Standards spell out “enforceable rules” for activities within the context of the AOM.

All members are expected to uphold the Code, but it is clear that many have never reviewed it to see what they endorsed by joining the AOM, or wait until a problem arises before consulting it.

Like any document of this kind, it is useless unless we bring it to life in the ways that we think and act. The Ethics Education Committee (EEC) is responsible for bringing the Code of Ethics to the attention of our members, and the Ombuds Committee is responsible for providing guidance when dilemmas arise. EEC members are available to assist with your Division Consortia or other sessions you offer at the annual conference. We offer a flexible menu of options, and we encourage you to contact us to discuss the best way we can work together in Atlanta.

We can offer the following types of sessions for your meeting, Division Consortium or Committee:

  1. Presentation and Discussion: A 60-minute interactive session to provide a broad overview of business and professional ethics, values and the AOM Code of Ethics.
  2. Focused Session and Discussion: A 30 to 60-minute session on a specific topic such as academic honesty, ethical dilemmas in collaborative research and writing, or an area you identify.
  3. Q & A Forum: Collect the questions your doctoral students or early career faculty have about ethics and the AOM in advance, and will come prepared to answer, and discuss them.
  4. Code and Procedures FAQ: A 30-minute introduction to the AOM Code of Ethics, covering who does what at the Academy in the ethics area, including the role of the Ombuds, and ways to get help.
  5. Discussant: An EEC member can attend an ethics session you are offering in a consortium, PDW, or symposium, and answer questions as needed about the AOM Code, the Ombuds roles, etc.

Please contact EEC Chair Janet Salmons (jsalmons[at]vision2lead.com or with the contact form below) to discuss ways the EEC can help ensure that new and returning members in your area of the Academy are familiar with the principles and standards they agreed to uphold.

Journal editors – unregulated and unmonitored

Hi friends,

I’ve been quiet for a couple of months – summer schedule and all – and wanted to get back to the blogosphere. I’ll try to be more diligent.

Many strange things have been brought to my attention over the summer, but I thought I would start with a more personal experience. That way, if anyone wants to comment, at least one side of the equation is available.

Last spring we sent a paper in to an unnamed FT50 journal. Normally these top journals reply within three months; at least, that has been my experience until now, for the most part. One consequence of the enhanced competitive environment is that journal editors seem to invite submissions by promising faster turnaround.

In any case, a full six months went by without a word from the journal, so I contacted the editor directly. The editor immediately responded, on a Friday, saying that he should have contacted me earlier and that he would ‘get on it’. By Monday we had our rejection, along with only one review and a note from the editor saying he had been unable to get a second review. He didn’t even bother adding his own comments to the rejection letter. Needless to say, the single review we did receive was not very helpful, but that is beside the point. This little exchange once again leads me to question the authority, the transparency, and at times the professionalism of editors at even top journals. One cannot help wondering, given the importance of these gate-keeping roles, how we have ended up with processes that appear cavalier, with no recourse for accountability, transparency, appeal, or arbitration. In this particular case my career does not hinge on the outcome, but I must report that in many cases where individual careers are in jeopardy, I have more often observed arrogance than compassion.

So, this brings me to raise an important question – and I must highlight that this question does NOT apply to Academy of Management journals, where transparency and fairness seem to be much more institutionalized.

Who appoints these people as editors?

Who governs their behavior?

Why do we allow autocratic and incompetent behavior by editors, even of prestigious journals?

In my view, we have a serious professional need for an equivalent of ‘rate my professor’ for academic journals. Such an idea was posed a few years ago in the Chronicle of Higher Education by Robert Deaner, who called for a “consumer reports for journals”. We could monitor and evaluate the review process, the editorial process, the time taken, and other aspects of peer review. If anyone is interested in starting such an activity, please let me know – I think we really need some monitoring out there.
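To make the idea concrete, here is a minimal sketch, in Python, of the kind of record such a service might aggregate. Every name, field, and figure below is a hypothetical illustration, not a real system or real data.

```python
# Hypothetical sketch of a crowd-sourced journal turnaround tracker.
from dataclasses import dataclass
from datetime import date
from statistics import median

@dataclass
class SubmissionRecord:
    journal: str
    submitted: date        # date of submission
    decided: date          # date the decision letter arrived
    reviews_received: int  # reviews returned with the decision

    @property
    def days_to_decision(self) -> int:
        return (self.decided - self.submitted).days

def summarize(records: list[SubmissionRecord], journal: str) -> dict:
    """Aggregate reported turnaround data for a single journal."""
    rs = [r for r in records if r.journal == journal]
    return {
        "reports": len(rs),
        "median_days_to_decision": median(r.days_to_decision for r in rs),
        "share_with_2plus_reviews": sum(r.reviews_received >= 2 for r in rs) / len(rs),
    }

records = [
    SubmissionRecord("Journal A", date(2016, 1, 4), date(2016, 7, 5), 1),
    SubmissionRecord("Journal A", date(2016, 2, 1), date(2016, 4, 28), 2),
]
print(summarize(records, "Journal A"))
```

With enough reports, numbers like these would make six-month silences visible to prospective authors before they submit.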

Happy Research!

Benson


Educating the educators: Truth and justice in academic publishing

It seems I can’t visit anywhere without hearing harrowing stories of unethical and abusive editors, reviewers, and scholars. Before starting this blog, I would hear the odd tale or two – but now I seem to be ground zero for the often shocking admissions of disgruntled and abused colleagues the world over!

While it would be nice to view these unfortunate confessions as a biased sample, I am beginning to believe that the entire profession harbors, within each of us, numerous examples of blatantly unethical conduct, all simmering and waiting to escape as some sort of neurotic or equally unjust retribution. In short, we may be the walking wounded. All of this has to do with our culture of scholarship. We need to ask ourselves carefully: what kind of culture are we promoting, and what are our overall objectives? How can we improve the cultural landscape in which we operate?

Just a few representative examples:

A junior colleague tells me an anonymous reviewer demands a series of coercive citations of the reviewer’s own, only tangentially relevant, work. The reviewer also discloses, in the review, who they are, along with insinuations that they know exactly who the junior scholar is. The editor forwards this review without comment.

A senior scholar reports presenting a paper with a novel analysis of public data at a conference. A few months later, she sees a conference paper written by a member of the audience who had attended the talk – utilizing the exact same methods and data. There is no mention of her paper, not even an acknowledgement. Despite her reminding the author of this sequence of events by sending a copy of the proceedings, the paper is eventually published without a word of recognition, even though the editor is aware of the circumstances.

Dog eat dog…

Finally, we have the ‘curse’ of the special-issue editors. Special issues are often an unregulated wild west. I have heard more horror stories than I can relate in this short blog, but they range from ‘tit for tat’ expectations to outstanding examples of favoritism, nepotism, and cronyism. Editors writing themselves or their friends into special issues is all too common. These issues can rest on closed networks of special-subject reviewers who are primed to support insider work and reject outsider material. Social expectations trump scientific merit, and the entire effort becomes mired in politics.

While these are but a few examples, one begins to suspect that what gets published often reflects not the quality of the research but the social processes underlying how the work is presented. Rather than rewarding the highest-quality work – or the most innovative work – we wind up with a kind of replication of the norm. We pat each other on the back regarding our methodological rigor, without really considering the accuracy or consequences of our efforts. No wonder managers in the ‘real world’ seldom pay attention to anything we do.

All of which suggests that we need more transparency in our publication and review process, as well as more insight into the methodological and philosophical rigor we bring to our work. The idea of double-blind review is good – as long as it is truly double blind, and the objective is to enhance the quality of the subsequent product. However, all too often we are simply going through a well-rehearsed process of convincing the editors and reviewers that our work is normative, while they go through the ritual of telling us how to frame an acceptable ‘story’ that meets their standards, irrespective of the accuracy of the work.

In a very insightful article in the 60th-anniversary issue of ASQ, Bill Starbuck points out the inconsistencies in reviewer evaluations, including the problems faced by submissions from ‘low status institutions’, convoluted formulas, and ambiguous editorial feedback. He also highlights the signalling problems inherent in language usage, whereby reviewers can identify the origin of a particular manuscript’s authors.

Next, Bill tackles our efforts to enhance the apparent importance of our work, irrespective of its actual merit, which sometimes lead to corrupt methodologies: HARKing (hypothesizing after the results are known) and p-hacking (subjecting data to multiple manipulations until some sort of pattern emerges), both of which misrepresent the accuracy of the theories discussed. Bill points out that this leads to “a cynical ethos that treats research as primarily a way to advance careers”.
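To make the mechanics concrete, here is a minimal simulation, in Python, of the multiplicity problem behind p-hacking: test twenty unrelated outcome measures drawn from pure noise, keep whichever one comes out ‘significant’, and the nominal 5% false-positive rate balloons. All sample sizes and counts are arbitrary illustrations.

```python
# With 20 tries at pure noise, "significant" findings become the rule
# rather than the 1-in-20 exception. Parameters are arbitrary.
import random

random.seed(42)

def looks_significant(n=50):
    """Crude z-test: does a pure-noise sample differ 'significantly' from 0?"""
    xs = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    se = (var / n) ** 0.5
    return abs(mean / se) > 1.96  # roughly p < .05

trials = 2000
one_outcome = sum(looks_significant() for _ in range(trials))
twenty_outcomes = sum(
    any(looks_significant() for _ in range(20)) for _ in range(trials)
)
print(f"False positives, honest single test: {one_outcome / trials:.1%}")
print(f"False positives, best of 20 tries:   {twenty_outcomes / trials:.1%}")
```

Under these assumptions the second rate comes out around 64%, which is the arithmetic behind Bill’s worry: flexibility, not fraud, is enough to fill journals with noise.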

Bill concludes that cultural changes are needed, but that they happen only slowly. Senior scholars must take a very visible lead – editors and reviewers alike. In the end, it’s really a matter of education.

I fully agree with Bill – we need to start looking at ourselves carefully in the mirror, stop quoting our individual h-indices, and begin the difficult task of educating ourselves about how to advance our scientific capabilities.


Predatory journals, and the arrogance of peer review

Sorry for the long absence, but I’ve been on the road quite a bit lately, which has given me an excuse for taking a short holiday from blogging in The Ethicist.

I recently returned from Nairobi, Kenya, where I was involved in running a track at a management conference of the Africa Academy of Management. The conference was a wonderful success, and it was great seeing so many management scholars from all over the world converging on Africa.

Of course, with any international scholarly conference, there are significant cultural norms, attitudes, and differences that we carry with us to our meetings. I thought it would be worthwhile to discuss just one of them, the perennial elephant in the room: publication, and in particular predatory publication.

Surprisingly, while I was attending the conference, our associate dean back home in Canada circulated a tract on the hazards of predatory journals. In particular, the email informed the faculty of Beall’s list of predatory publishers. The list provides perhaps a dozen journal titles specifically tailored toward management scholars. It also flags so-called “hijacked journals”, which emulate the name or acronym of famous journals, presumably to confuse readers, as well as misleading metrics, whereby journals publish misleading impact factors (note: inflating an impact factor by journal self-plagiarism is NOT considered a misleading practice!). So, for example, there are two ‘South African Journal of Business Management’ publishers, presumably a legitimate one and a ‘hijacked’ one. Information Systems Management must contend with ‘Journal of Information System Management’, etc., etc.
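As an aside, the ‘hijacked’ pattern is mechanical enough that even a crude string-similarity check can flag it. Here is a rough sketch in Python; the trusted-title list and the 0.8 cutoff are illustrative assumptions, not a vetted screening tool.

```python
# Rough sketch: flag journal titles that closely imitate, but do not
# exactly match, titles on a trusted list.
from difflib import SequenceMatcher

TRUSTED_TITLES = [
    "South African Journal of Business Management",
    "Information Systems Management",
]

def lookalike_warnings(candidate: str, threshold: float = 0.8) -> list[str]:
    """Return trusted titles that the candidate resembles without matching."""
    warnings = []
    for title in TRUSTED_TITLES:
        ratio = SequenceMatcher(None, candidate.lower(), title.lower()).ratio()
        if threshold <= ratio < 1.0:  # near-duplicate, but not identical
            warnings.append(f"'{candidate}' resembles '{title}' ({ratio:.2f})")
    return warnings

print(lookalike_warnings("Journal of Information System Management"))
```

A check like this is no substitute for judgment, but it illustrates how little effort it would take for institutions to give their faculty a first line of defense.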

What surprised me most about the Canadian episode was the reaction of some of my colleagues. An initial request was made to indicate that journals from this list would not be considered for hiring, tenure, or promotion. This seemed like a reasonable request. Surprisingly, there was considerable debate, which ranged from “who created this list, anyway? It seems subjective” to “we’ve always been able to distinguish quality in the past; we should just continue as we have always done”.

While this was unfolding on the home front, my African colleagues were telling me stories of their own. Publication is now de rigueur for academic hiring and promotion at most African universities, even though many have barely established a research culture of their own. Incentives vary widely, but many institutions pay bonuses for publications, and faculty are often offered opportunities to publish in little-known journals for a ‘publication fee’ that can be considerable. During our ‘how to publish’ seminars, faculty repeatedly asked us how to distinguish these predatory journals from the ‘other’ ones. Young scholars proudly shared how they had published six or seven articles in two years (in what journals, one might ask?). Doctoral students asked how to deal with advisers who insist on having their names on publications despite having absolutely nothing to do with any aspect of the research in question. In short, these colleagues had little information regarding the full range of scholarship, their own institutions rarely subscribed to the better journals, and they were often working in the dark regarding quality and scholarly norms.

So, coming full circle, it seems we have a problem of global proportions, one that might impact weaker institutions somewhat more (those without the governing systems to adequately ‘sniff out’ the corruption), but one that nevertheless impacts all of our work.

Of course, I can’t help but reflect on the culture I live in – North America (though I spend a lot of time in Europe as well). So many of us would like to thumb our noses at our less fortunate colleagues and explain to them, with our own self-importance, how our standards of publication reign supreme and are only to be emulated. To those of you, I offer Andrew Gelman’s recent posting, which points out some serious weaknesses of our peer-review system through a critique of ‘power pose’ research. Gelman points out that “if you want to really review a paper, you need peer reviewers who can tell you if you’re missing something within the literature—and you need outside reviewers who can rescue you from groupthink. … Peer review doesn’t get you much. The peers of the power-pose researchers are . . . other power-pose researchers. Or researchers on embodied cognition, or on other debatable claims in experimental psychology. Or maybe other scientists who don’t work in this area but have heard good things about it and want to be supportive of this work.”

So, let’s take stock for a moment. We are in an international arena. Institutional norms are diffusing such that everyone wants to get into the same ‘game’. However, the rules of the game are subtle, often manipulated, rarely challenged, and heavily biased in favor of insiders over outsiders. No doubt, clarity would help everyone involved. How can we overcome our own blind spots? How can we validate and authenticate the publication process? What kind of measures might we employ to do so?

I disagree with my colleagues who argue ‘it worked in the past, we can continue doing it in the future’. First, I’m not certain how effective we were in the past. There may be numerous failings hidden inside our collective closets, some of which may come to light in the form of future retractions. Second, I’m not so certain we’ve made enormous progress in our own field of study. And finally, and most importantly, new mechanisms for corruption, cheating, and exploiting seem to pop up each and every day.

Which brings me to the central question I’ve been pondering: What can we do, as a community, to improve the quality of our work, while sifting out potential corrupt forces?


The Obligation to Publish

Lately, I’ve been feeling a bit melancholy about my obligation to speak publicly about what I know. This has affected both my contributions to this blog and my work on my longstanding blog about academic writing. It’s not, of course, that I don’t know anything, nor that I don’t have anything I want to say; it’s just a sort of reticence about engaging with others. It will, of course, pass in due time, and it’s probably not something to worry about. But it does raise an interesting ethical question: do we have an ethical obligation to say publicly what’s on our minds?

The Code tells us that we have an obligation

2. To the advancement of managerial knowledge. Prudence in research design, human subject use, and confidentiality and reporting of results is essential. Proper attribution of work is a necessity. We accomplish these aims through:

  •   Conducting and reporting. It is the duty of AOM members conducting research to design, implement, analyze, report, and present their findings rigorously.

I imagine most people read this with an emphasis on “rigorously”, i.e., as a responsibility when we do conduct research and report it to do so rigorously. But I think we do well to keep in mind that if we spent our entire scholarly careers conducting no research at all, or not reporting whatever research we did conduct, we would in fact be shirking an important responsibility.

Reporting our research opens it to criticism by our peers. It allows us to be corrected in our views wherever they happen to be erroneous. One of the most important reasons to publish, that is, is to give our peers an opportunity to tell us where we have gone wrong, so we can stop misleading our students about it, for example. But it is also a way of informing others about results that might call their previously held views into question. If I know that something you think is true is actually false (or vice versa) then I have an obligation to share that knowledge with you. That’s part of what it means to be an academic.

There’s an interesting variation on this theme in the current discussion of the publication of “null results”. If 9 out of 10 studies show no significant effect of a particular managerial practice, but only the 1 out of 10 studies that shows an effect is published, then we are being systematically misled about the efficacy of that practice. And yet, in today’s publishing culture, authors and journals are much less incentivized to publish null results than significant ones.
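A back-of-the-envelope simulation makes the distortion vivid. The sketch below, in Python with entirely arbitrary parameters, runs many studies of a practice whose true effect is exactly zero, then ‘publishes’ only those that find a significant positive effect.

```python
# File-drawer problem in miniature: the practice has zero true effect,
# yet the "published" record alone suggests a healthy one.
import random
import statistics

random.seed(7)

def run_study(n=30):
    """One two-group study of a practice whose true effect is zero."""
    treated = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / n + statistics.variance(control) / n) ** 0.5
    return diff, abs(diff / se) > 1.96  # crude z-test at p < .05

studies = [run_study() for _ in range(1000)]
published = [d for d, sig in studies if sig and d > 0]  # only "it works!"
print(f"Mean effect, all {len(studies)} studies: "
      f"{statistics.mean(d for d, _ in studies):+.3f}")
print(f"Mean effect, {len(published)} published studies: "
      f"{statistics.mean(published):+.3f}")
```

Under these assumptions the full record averages out near zero, while the published subset alone reports a sizeable positive effect, even though nothing dishonest happened in any single study.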

The Code does say that it is the responsibility of “journal editors and reviewers [towards the larger goal of advancing managerial knowledge] to exercise the privilege of their positions in a confidential, unbiased, prompt, constructive, and sensitive manner.” Perhaps I’m once again grasping at straws, but it is possible to construe “unbiased” as requiring us to publish valid but insignificant findings, i.e., studies that show no effect where one was hypothesized.

This becomes a personal ethical concern for individual scholars when they don’t publish results that call their own favoured theory into question, always, of course, citing the unwillingness of journals to publish null results. But whether it’s the authors or the editors that are to blame, the overall effect is that the truth remains hidden. So, in a sense, it is a species of dishonesty.

For that reason alone, I hope this melancholy of mine soon passes and that I once again start doing the responsible thing, namely, putting my ideas out there for all to see.

Trust

This blog is committed to facilitating a conversation about ethics among the members of the Academy of Management. There are two reasons for this. First, the topic demands it. It is not enough for a professional organization to have a code of ethics, nor even for that code to be rigorously enforced. In order to have a positive effect, ethics must be the subject of an ongoing conversation among the practitioners that work in the relevant communities. There’s no straightforwardly “right and wrong” way of doing a particular thing. We become “better people” by talking about what we do and how we do it, and the consequences of our actions on other people.

Second, it is my firm belief that blogs are best engaged with as conversations, even if only as conversations “overheard”. When I write a blog post, I’m not really pretending to be an “author”. It is certainly not my intention to “lecture”. Your role, as a reader, is not simply to try to understand and then believe what I tell you. Rather, implicitly at the end of the post, there is the question, What do you think? Often (since this is a blog about ethical behavior), What would you do?

So I’ve been thrilled to talk to an anonymous reader in the comments to my post from a couple of weeks ago. Focusing mainly on publication ethics, Anon123 began by saying that he* was “deeply skeptical of any attempts to teach ethics other than by our everyday conduct and, perhaps more importantly, the conduct of the leaders of our field.” I share his worry but am, perhaps, a bit more optimistic. I think that, if the conversation about ethics is being had throughout the many forums of the Academy, our leaders will have both better conditions and better opportunities to set a good example. Perhaps they’ll even find their efforts rewarded in journal and business school rankings. But, for the past 20 years or so, it is true that we have taken ethics somewhat for granted, assuming that people are generally well-intentioned and that errors are generally honest. This has perhaps made us less vigilant than we should be, even, I often emphasize, as regards catching those honest mistakes.

The result, as Anon123 points out, can sometimes be a bit dispiriting:

I have been in the field a fairly long time but I find myself unwilling to believe much of what is published in our journals anymore. The work on the Chrysalis Effect, researcher degrees of freedom, p-hacking and HARKing makes it clear that a substantial proportion of our collective scholarship cannot be trusted, but it is impossible to know precisely what to trust and what not to trust.

These are all issues that concern me too. I’d highly recommend Andrew Gelman’s blog for anyone who is interested in a technical discussion of the many ways in which statistics can be misused, out of either malice or ignorance. (See this post, for example, about how what is sometimes called p-hacking often actually results from perfectly sincere statistical naivety.) Of course, it hardly matters whether people are cheating or just careless (and we do, of course, have an ethical obligation to be careful) if the result is that the published literature becomes an unreliable source of knowledge. And that’s exactly what Anon123 suggests, in very strong terms:

If you told me that 5% or 10% of my favorite cereal brand is infested with worms but that I can only tell that after I have purchased the cereal (or have tried to eat it) I can guarantee you that I would no longer purchase that cereal. Similarly, I feel disinclined to continue to “purchase” many of the papers published in journals like AMJ or JOM – or recommend them to others.

That is, he would not simply buy the cereal with greater caution – testing it for worms, for example, before eating it. Rather, he’d simply stop buying it. This reminds me that I once discovered a shelf full of hot wings in the local supermarket that were a month past their best-before date. The store clerk I pointed it out to didn’t really seem interested. He didn’t hurry over to check out the problem (even to make sure that my absurd claim was indeed mistaken), but sort of sauntered on with his day. I guess he’d “get to them” when he was ready. Needless to say, I’ve had a hard time buying anything there ever since. Certainly, I confined my purchases on that day to a few non-perishables.

Notice that it wasn’t just the extremely out-of-date hot wings that turned me off the store. It was the conversation about it (or lack thereof) that ensued that undermined my trust. Likewise, knowing that 60% of the results of psychological studies can’t be replicated does not mean (though I am sometimes tempted to let it) that we shouldn’t ever take psychology seriously. It is how the psychological sciences deal with this new knowledge that is important. If we get the sense that they are sweeping it under the rug, or simply not really bothered by it, then it will indeed affect how seriously we can take them.

The recent correction of an ASQ paper about CEO narcissism has given me some hope that the system is improving. Here’s how Jerry Davis described the exemplary process to Retraction Watch:

A concerned reader notified me of the issues with a published table in this paper a few weeks ago, and also contacted the authors. The authors came forward with a correction, which we promptly published. We did not consider this sufficient for a full retraction. The concerned reader reports that he/she is satisfied with the corrigendum. The journal is always looking for ways to enhance the quality of the review process, and if errors end up in print, we aim to correct them promptly.

To me, the key here is that the “concerned reader … is satisfied with the corrigendum”. It is all about feeling that when you share your concerns they are taken seriously. That’s the sort of leadership that is likely to rebuild the trust we need in the management literature. Hopefully, over time, even Anon123 can be brought around.


_________

* I had to think about this pronoun for a while, and I’m sorry if I got it wrong. It is of course possible to get it wrong even when a name (like Jesse or Shawn) is given. In this case, I’ve gone with my intuition based on the style of the comment, its “voice”, if you will. If my “ear” has misled me, I hope it will cause as little offence as the time I assumed an Italian commenter named Gabriele was a woman.