Predatory journals, and the arrogance of peer review

Sorry for the long absence, but I’ve been on the road quite a bit lately, which gave me an excuse to take a short holiday from blogging on The Ethicist.

I recently returned from Nairobi, Kenya, where I was involved in running a track at the Africa Academy of Management conference. The conference was a wonderful success, and it was great to see so many management scholars from all over the world converging on Africa.

Of course, as with any international scholarly conference, there are significant cultural norms, attitudes, and differences that we carry with us to our meetings. I thought it would be worthwhile to discuss just one of them, the perennial elephant in the room: publication, and in particular, predatory publication.

As it happened, while I was attending the conference, our associate dean back home in Canada circulated a tract on the hazards of predatory journals. In particular, the email informed the faculty of Beall’s list of predatory publishers. The list provides perhaps a dozen journal titles specifically tailored toward management scholars. It also includes so-called “hijacked journals,” which emulate the name or acronym of other, well-known journals, presumably to confuse readers, as well as “misleading metrics,” whereby journals publish misleading impact factors (note: inflating an impact factor through journal self-plagiarism is NOT considered a misleading practice!). So, for example, there are two “South African Journal of Business Management” publishers, presumably a legitimate one and a hijacked one; Information Systems Management must contend with the “Journal of Information System Management”; and so on.

What surprised me most about our Canadian encounter was the reaction of some of my colleagues. An initial request was made to indicate that journals from this list would not be considered for hiring, tenure, or promotion. This seemed like a reasonable request. Surprisingly, there was considerable debate, which ranged from “Who created this list, anyway? It seems subjective” to “We’ve always been able to distinguish quality in the past; we should just continue as we have always done.”

While this was going on on the home front, my African colleagues were telling me stories of their own. Publication is now de rigueur for academic hiring and promotion at most African universities, even though many have barely established a research culture of their own. Incentives vary widely, but many institutions pay bonuses for publications, and faculty are often offered opportunities to publish in little-known journals for a ‘publication fee’ that can be considerable. During our ‘how to publish’ seminars, faculty repeatedly asked us how to distinguish these predatory journals from the ‘other’ ones. Young scholars proudly shared how they had published six or seven articles in two years (in what journals, one might ask?). Doctoral students asked how to deal with advisers who insist on having their names on publications despite having had absolutely nothing to do with any aspect of the research in question. In short, they had little information regarding the full range of scholarship, their own institutions rarely subscribed to the better journals, and they were often working in the dark regarding quality and scholarly norms.

So, coming full circle, it seems we have a problem of global proportions, one that might impact weaker institutions somewhat more (those without the governing systems to adequately ‘sniff out’ the corruption), but one that nevertheless impacts all of our work.

Of course, I can’t help but reflect on the culture I live in – North America (though I spend a lot of time in Europe as well). So many of us would like to thumb our noses at our less fortunate colleagues and explain to them, with our own self-importance, how our standards of publication reign supreme and are only to be emulated. To those of you, I’d like to refer to Andrew Gelman’s recent posting, in which he critiques ‘power pose’ research and points out some serious weaknesses of our peer review system. Gelman writes: “If you want to really review a paper, you need peer reviewers who can tell you if you’re missing something within the literature—and you need outside reviewers who can rescue you from groupthink” … “peer-review doesn’t get you much. The peers of the power-pose researchers are . . . other power-pose researchers. Or researchers on embodied cognition, or on other debatable claims in experimental psychology. Or maybe other scientists who don’t work in this area but have heard good things about it and want to be supportive of this work.”

So, let’s pause here for a moment. We are in an international arena. Institutional norms are diffusing such that everyone wants to get into the same ‘game’. However, the rules of the game are subtle, often manipulated, rarely challenged, and heavily biased in favor of insiders over outsiders. No doubt, clarity would help everyone involved. How can we overcome our own blind spots? How can we validate and authenticate the publication process? What kind of measures might we employ to do so?

I disagree with my colleagues who argue ‘it worked in the past, we can continue doing it in the future’. First, I’m not certain how effective we were in the past. There may be numerous failings hidden inside our collective closets, some of which may come to light in the form of future retractions. Second, I’m not so certain we’ve made enormous progress in our own field of study. And finally, and most importantly, new mechanisms for corruption, cheating, and exploiting seem to pop up each and every day.

Which brings me to the central question I’ve been pondering: What can we do, as a community, to improve the quality of our work, while sifting out potential corrupt forces?

4 thoughts on “Predatory journals, and the arrogance of peer review”

  1. Glad that you are back. I think that we already know many of the answers to these questions, but they are hard to put into practice: 1) pre-registration of hypotheses and research questions, 2) publication of de-identified datasets with all papers, 3) a greater acceptance among journals of disconfirming evidence and null findings (reducing the incentive to fudge our results), and 4) a move away from the silly NHST framework toward more strong inference and Bayesian thinking. I also think that we need to revise our way of judging research quality. Journal impact factor is a weak signal of the quality (or even citation rate) of any one article, and we get way too hung up on citations anyway. There are management papers out there with well over 1,000 citations that are complete bunk. I am much more impressed by a first-author paper in Psychometrika (impact factor 1) that never gets cited than by a fourth-author pub in a management journal that gets cited 100 times.

  2. Given Andrew Gelman’s very legitimate concerns about the garden of forking paths, we should probably also preregister our data analysis approach.

  3. I fully agree with Andrew’s suggestions, but they present one major problem. Research we’ve been doing shows that universities reward faculty according to the most ‘base’ of criteria: the number of publications. They like to count. Impact factor helps them with their counting. Measuring quality would imply actually reading and evaluating someone’s work – and we seem to have lost our ‘taste’ for being willing to do subjective reviews. The result is that things have spun out of control, but committees can argue it is all done fairly.
