Ethics? Let’s talk! Plan now for Atlanta sessions.

What does it mean to act ethically?

Is it basically to “do the right thing”? We only have to peer out of our office windows to see that what one person thinks is the right thing, the appropriate attitude, or justifiable behavior is utterly, perhaps terrifyingly, wrong to someone else. What is the right thing, and who is the arbiter?

As academics, scholar-practitioners, or students, much of our work is done privately. No one can see what we’re doing when we’re crafting a paper, analyzing data, or conducting a peer review of someone else’s work. If we cut corners or cheat, the risk may not be obvious, or it may take time before those closed-door deeds become public. Other activities are public, and may have an immediate impact on others’ well-being or careers. Even so, the right action, the ethical behavior, may not be entirely clear.

Members of the Academy need to be on the same page about what is right, and we can readily find that page – it’s called our Code of Ethics. The Code sets out expectations for all of us in its General and Professional Principles, and its Ethical Standards spell out “enforceable rules” for activities within the context of the AOM.

All members are expected to uphold the Code, but it is clear that many have not reviewed it to see what they endorsed by joining the AOM, or wait until a problem arises before consulting it.

Like any document of this kind, it is useless unless we bring it to life in the ways we think and act. The Ethics Education Committee (EEC) is responsible for bringing the Code of Ethics to the attention of our members, and the Ombuds Committee is responsible for providing guidance when dilemmas arise. EEC members are available to assist with your Division Consortia or other sessions you offer at the annual conference. We offer a flexible menu of options, and we encourage you to contact us to discuss the best way we can work together in Atlanta.

We can offer the following types of sessions for your meeting, Division Consortium, or Committee:

  1. Presentation and Discussion: A 60-minute interactive session providing a broad overview of business and professional ethics, values, and the AOM Code of Ethics.
  2. Focused Session and Discussion: A 30- to 60-minute session on a specific topic such as academic honesty, ethical dilemmas in collaborative research and writing, or an area you identify.
  3. Q & A Forum: Send us, in advance, the questions your doctoral students or early-career faculty have about ethics and the AOM, and we will come prepared to answer and discuss them.
  4. Code and Procedures FAQ: A 30-minute introduction to the AOM Code of Ethics, covering who does what at the Academy in the ethics area, including the role of the Ombuds, and ways to get help.
  5. Discussant: An EEC member can attend an ethics session you are offering in a consortium, PDW, or symposium, and answer questions as needed about the AOM Code, Ombuds roles, etc.

Please contact EEC Chair Janet Salmons (jsalmons[at]vision2lead.com or with the contact form below) to discuss ways the EEC can help ensure that new and returning members in your area of the Academy are familiar with the principles and standards they agreed to uphold.

Journal editors – unregulated and unmonitored

Hi Friends,

I’ve been quiet for a couple of months – summer schedule and all – and wanted to get back to the blogosphere. I’ll try to be more diligent.

Many strange things have been brought to my attention over the summer, but I thought I would start with a more personal experience. That way, if anyone wants to comment, at least one side of the equation is available.

Last spring we sent a paper in to an unnamed FT50 journal. Normally, these top journals reply within three months – at least, that has been my experience until now, for the most part. One consequence of the enhanced competitive environment is that journal editors seem to invite submissions by promising faster turnaround.

In any case, a full six months went by without our hearing from the journal. As a result, I contacted the editor directly. The editor immediately responded, on a Friday, saying that I “should have contacted him earlier” and that he would “get on it”. By Monday, we had our rejection, along with only one review, and a note from the editor saying he was unable to get a second review. He didn’t even bother adding his own comments to the rejection letter. Needless to say, the one review was not very helpful, but that is beside the point. This little exchange once again brings me to question the authority and transparency of, and the lack of professionalism sometimes exhibited by, editors of even top journals. One cannot help wondering, given the importance of these gate-keeping roles, how it happens that we have processes that appear cavalier, with no recourse regarding accountability, transparency, appeal, or arbitration. In this particular case, my career does not hinge on the outcome – but I must report that in many cases where individual careers are in jeopardy, I have more often observed arrogance than compassion.

So, this brings me to raise an important question – and I must highlight that this question does NOT apply to Academy of Management journals, where transparency and fairness seem to be much more institutionalized.

Who appoints these people as editors?

Who governs their behavior?

Why do we allow autocratic and incompetent behavior by editors, even of prestigious journals?

In my view, we have a serious professional need for an equivalent of ‘rate my professor’ for academic journals. Such an idea was posed a few years ago in the Chronicle of Higher Education by Robert Deaner, who called for a “consumer reports for journals”. We could monitor and evaluate the review process, the editorial process, the time taken, and other aspects of peer review. If anyone is interested in starting such an activity, please let me know – I think we really need some monitoring out there.
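
Just to make the idea tangible – and this is purely a hypothetical sketch, with made-up record fields and a fictitious journal name – even a very simple crowd-sourced data model would let us compare journals on the things that matter, like time to first decision and whether the editor actually engaged:

```python
from dataclasses import dataclass
from datetime import date
from statistics import median

@dataclass
class SubmissionRecord:
    journal: str
    submitted: date
    decided: date           # date of first editorial decision
    n_reviews: int          # reviews actually returned to the author
    editor_commented: bool  # did the editor add substantive comments?

def median_turnaround_days(records: list[SubmissionRecord], journal: str) -> float:
    """Median days from submission to first decision for one journal."""
    days = [(r.decided - r.submitted).days
            for r in records if r.journal == journal]
    return median(days)

# Two illustrative reports for a fictitious journal: one six-month
# silence, one three-month decision.
reports = [
    SubmissionRecord("Journal X", date(2016, 1, 10), date(2016, 7, 11), 1, False),
    SubmissionRecord("Journal X", date(2016, 2, 1), date(2016, 5, 2), 2, True),
]
print(median_turnaround_days(reports, "Journal X"))  # 137.0 days
```

Aggregate enough of these records and the cavalier editors would, at the very least, become visible.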

Happy Research!

Benson


Educating the educators: Truth and justice in academic publishing

It seems I can’t visit anywhere without hearing harrowing stories of unethical and abusive editors, reviewers, and scholars. Before starting this blog, I would hear the odd tale or two – but now I seem to be ground zero for the often shocking admissions of disgruntled and abused colleagues the world over!

While it would be nice to view these unfortunate confessions as a biased sample, I am beginning to believe that the entire profession harbors, within each of us, numerous examples of blatantly unethical conduct, all simmering and waiting to escape as some sort of neurotic or equally unjust retribution. In short, we may be the walking wounded. All of this has to do with our culture of scholarship – we need to ask ourselves carefully: what kind of culture are we promoting, and what are our overall objectives? How can we improve the cultural landscape that we operate in?

Just a few representative examples:

A junior colleague tells me an anonymous reviewer demands a series of coercive citations of the reviewer’s own, only tangentially relevant, work. The reviewer also discloses, in the review, who they are, along with insinuations that they know exactly who the junior scholar is. The editor forwards this review with no comment.

A senior scholar reports presenting a paper with a novel analysis of public data at a conference. A few months later, she sees a conference paper written by a member of the audience who had attended the talk – utilizing the exact same methods and data. There is no mention of her paper, not even an acknowledgement. Despite her reminding the author of this sequence of events – by sending a copy of the proceedings – the paper is eventually published without a word of recognition, even though the editor is aware of the circumstances.

Dog eat dog…

Finally, we have the ‘curse’ of the special issue editors. Special issues are often an unregulated Wild West. I have heard more horror stories than I can relate in this short blog, but they range from ‘tit for tat’ expectations to outstanding examples of favoritism, nepotism, and cronyism. Editors writing themselves or their friends into special issues is all too common. These issues may represent closed networks of special-subject reviewers who are primed to support primarily insider work – and reject outsider material. Social expectations trump scientific merit, and the entire effort becomes mired in politics.

While these are but a few examples, one begins to suspect that what gets published often reflects not the high quality of the research but rather the social processes underlying how the work is presented. Rather than rewarding the highest-quality work – or the most innovative work – we wind up with a kind of replication of the norm. We pat each other on the back regarding our methodological rigor, without really considering the accuracy or consequences of our efforts. No wonder managers in the ‘real world’ seldom pay attention to anything we do.

All of which suggests that we need more transparency in our publication and review process, as well as more insight into the methodological and philosophical rigour we use to approach our work. The idea of double-blind review is good – as long as it is truly double-blind and the objective is to enhance the quality of the subsequent product. However, all too often, we’re simply going through a well-rehearsed process of convincing the editors and reviewers that our work is normative, while they go through the ritual of telling us how to frame an acceptable ‘story’ that meets their standards, irrespective of the accuracy of the work.

In a very insightful article in the 60th-anniversary issue of ASQ, Bill Starbuck points out the inconsistencies in reviewer evaluations, including the problems of submissions from ‘low status institutions’, convoluted formulas, and ambiguous editorial feedback. He also highlights the problems of signalling inherent in language usage, whereby reviewers can identify the origin of any particular manuscript’s authors.

Next, Bill tackles the issue of our efforts to enhance the importance of our work, irrespective of its actual merit, sometimes leading to corrupt methodologies such as HARKing (hypothesizing after results are known) and p-hacking (subjecting data to multiple manipulations until some sort of pattern emerges), both of which misrepresent the accuracy of the theories discussed. Bill points out that this leads to “a cynical ethos that treats research as primarily a way to advance careers”.
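
For readers who have not seen p-hacking demonstrated, here is a minimal simulation of my own (a sketch, not from Bill’s article) showing why it misrepresents accuracy: if a researcher tests twenty unrelated outcome measures on pure noise and reports whichever one ‘works’, a ‘significant’ finding turns up most of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def one_hacked_study(n_outcomes: int = 20, n: int = 30) -> bool:
    """Simulate a p-hacked study: test n_outcomes unrelated outcome
    variables on two groups drawn from the SAME distribution, and
    declare success if ANY comparison clears p < .05."""
    for _ in range(n_outcomes):
        a = rng.normal(size=n)  # pure noise: no true effect exists
        b = rng.normal(size=n)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            return True
    return False

hits = sum(one_hacked_study() for _ in range(1000))
print(f"'Significant' findings in {hits / 10:.0f}% of null studies")
# Roughly 1 - 0.95**20, i.e. about 64%, versus the nominal 5% error rate.
```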

Bill concludes that cultural changes are needed, but that they happen only slowly. Senior scholars must take a very visible lead – editors and reviewers alike. In the end, it’s really a matter of education.

I fully agree with Bill – we need to start looking at ourselves carefully in the mirror, stop quoting our individual h-indexes, and begin the difficult task of educating ourselves about how to advance our scientific capabilities.


Predatory journals, and the arrogance of peer review

Sorry for the long absence, but I’ve been on the road quite a bit lately, providing me with an excuse for taking a short holiday from blogging in The Ethicist.

I recently returned from Nairobi, Kenya, where I was involved in running a track for a management conference for the Africa Academy of Management. The conference was a wonderful success, and it was great seeing so many management scholars from all over the world converging on Africa.

Of course, with any international scholarly conference, there are significant cultural norms, attitudes, and differences that we carry with us to our meetings. I thought it would be worthwhile to discuss just one of them, the perennial elephant in the room: publication, and in particular, predatory publication.

Surprisingly, while I was attending the conference, our associate dean back home in Canada circulated a tract on the hazards of predatory journals. In particular, the email informed the faculty of Beall’s list of predatory publishers. The list provides perhaps a dozen journal titles specifically tailored toward management scholars. It also includes so-called “hijacked journals”, which emulate the name or acronym of other famous journals, presumably to confuse readers, as well as journals that publish misleading metrics such as bogus impact factors (note: inflating an impact factor through journal self-plagiarism is NOT considered a misleading practice!). So, for example, there are two “South African Journal of Business Management” publishers, presumably a legitimate one and a hijacked one. Information Systems Management must contend with the ‘Journal of Information System Management’, and so on.
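
Incidentally, the name-mimicry trick is crude enough that even a few lines of string matching can flag it. Here is a toy sketch – the whitelist and threshold are illustrative assumptions, not a real vetting service:

```python
import difflib

# Hypothetical whitelist of legitimate titles (illustrative only).
KNOWN_TITLES = [
    "South African Journal of Business Management",
    "Information Systems Management",
]

def hijack_suspects(title: str, cutoff: float = 0.8) -> list[str]:
    """Flag known titles that a given journal title closely imitates
    without matching exactly -- a crude 'hijacked journal' check."""
    close = difflib.get_close_matches(title, KNOWN_TITLES, n=3, cutoff=cutoff)
    return [m for m in close if m.lower() != title.lower()]

print(hijack_suspects("Journal of Information System Management"))
# -> ['Information Systems Management']: close, but not the same journal.
```

Real vetting would of course also check ISSNs, web domains, and editorial boards, but the point stands: the imitation is mechanical, and so is its detection.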

What surprised me most about our Canadian encounter was the reaction of some of my colleagues. An initial request was made to indicate that journals from this list would not be considered for hiring, tenure, or promotion. This seemed like a reasonable request. Surprisingly, there was considerable debate, which ranged from “who created this list, anyway? It seems subjective” to “we’ve always been able to distinguish quality in the past; we should just continue as we have always done”.

While this was going on on the home front, my African colleagues were telling me stories of their own. Publication is now de rigueur for academic hiring and promotion at most African universities, even though many have barely established a research culture of their own. Incentives can vary widely, but many institutions pay bonuses for publications, and faculty are often offered opportunities to publish in little-known journals for a ‘publication fee’ that can be considerable. During our ‘how to publish’ seminars, faculty repeatedly asked us how to distinguish between these predatory journals and the ‘other’ ones. Young scholars proudly shared how they had published six or seven articles in two years (in what journals, one might ask?). Doctoral students asked how to deal with advisers who insist on having their names on publications despite having had absolutely nothing to do with any aspect of the research in question. In short, they had little information regarding the full range of scholarship, their own institutions rarely subscribed to the better journals, and they were often working in the dark regarding quality and scholarly norms.

So, coming full circle, it seems we have a problem of global proportions, one that might impact weaker institutions somewhat more (those without the governing systems to adequately ‘sniff out’ the corruption), but one that nevertheless impacts all of our work.

Of course, I can’t help but reflect on the culture I live in – North America (I spend a lot of time in Europe as well…). So many of us would like to thumb our noses at our less fortunate colleagues and explain to them, with our own self-importance, how our standards of publication reign supreme and are only to be emulated. To those of you, I’d like to refer to Andrew Gelman’s recent posting that points out some serious weaknesses of our peer review system, where he critiques ‘power pose’ research. Gelman points out that “if you want to really review a paper, you need peer reviewers who can tell you if you’re missing something within the literature—and you need outside reviewers who can rescue you from groupthink. … peer-review doesn’t get you much. The peers of the power-pose researchers are . . . other power-pose researchers. Or researchers on embodied cognition, or on other debatable claims in experimental psychology. Or maybe other scientists who don’t work in this area but have heard good things about it and want to be supportive of this work.”

So, let’s come full circle for a moment. We are in an international arena. Institutional norms are diffusing such that everyone wants to get into the same ‘game’. However, the rules of the game are subtle, often manipulated, rarely challenged, and heavily biased in favor of insiders over outsiders. No doubt, clarity would help everyone involved. How can we overcome our own blind spots? How can we validate and authenticate the publication process? What kind of measures might we employ to do so?

I disagree with my colleagues who argue ‘it worked in the past, so we can continue doing it in the future’. First, I’m not certain how effective we were in the past. There may be numerous failings hidden inside our collective closets, some of which may come to light in the form of future retractions. Second, I’m not so certain we’ve made enormous progress in our own field of study. And finally, and most importantly, new mechanisms for corruption, cheating, and exploitation seem to pop up each and every day.

Which brings me to the central question I’ve been pondering: What can we do, as a community, to improve the quality of our work, while sifting out potential corrupt forces?