It seems as though few days pass between individuals ‘recommending me’ on ResearchGate or LinkedIn, for all types of skills. It started innocuously enough – when I received a recommendation in my main field of study from a colleague who knew me fairly well. What began as a friendly ‘tip of the hat’ has blossomed into a full-scale encyclopedia of ratings (all, naturally, positive – nobody seems to dis-recommend me for anything) of just about everything related to what I might or might not do as a scholar, faculty member, or even dog walker. There are now dozens of endorsements, some by people I barely know, may not know at all, or have never met. So, what’s going on?
Presumably, there is a tit-for-tat process going on, whereby I am expected, in turn, to recommend the colleague I don’t even know, or have hardly spoken with, as an ‘expert’ in the field of xyz. In fact, these two rating organizations ‘helpfully’ remind me every so often to do so, by providing lists of people I may or may not know, along with questions like “Do you recommend Rumpelstiltskin as an expert public speaker?” (never mind that I don’t know who they are, and have never heard them speak publicly or privately). The assumption is that since they were so nice as to recommend my paint-drying observational skills, I will return the favor by highly recommending them as well (I often do). It reminds me a bit of when I collected baseball cards as a kid – more was always better. Unfortunately, mom gave away the box and I no longer have my Mickey Mantle rookie card – although I’m not sure who will take my expired LinkedIn recommendations – maybe I’ll just have to assume a new identity.
So, what’s going on, anyway, with all these internet evaluation schemes? In an op-ed in the New York Times last year, David Brooks pointed out that while private-sector peer-to-peer rating systems have obvious advantages, there is as yet no role for government in regulating peer-based reference systems. They exist in a ‘grey zone’ between bake sales and personal security. Yet, as these systems increasingly gain currency in our professional lives (e.g., Rate My Professor), it seems our Academy might want to play a more active role. Certainly, our professional associations should be drawing red lines regarding appropriate behavior of our membership, including the consulting roles that seem to be reflected in these reference systems. The recent shocking revelation that the American Psychological Association provided supporting recommendations to the US government regarding consulting on torture suggests that professional organizations will increasingly play a public role in providing both public and private ethical guidance and boundaries related to public trust in the integrity of our professional activities.
I was able to find two related passages in our code of ethics that obliquely address the importance of providing accurate assessments:
Credentials and capabilities. It is the duty of consultants to represent their credentials and capabilities in an accurate and objective manner.
And later: AOM members do not make public statements, of any kind, that are false, deceptive, misleading, or fraudulent, either because of what they state, convey, or suggest or because of what they omit, including, but not limited to, false or deceptive statements concerning other AOM members.
So, the next time you are asked to evaluate a colleague – someone you might barely know – and feel the urge to ‘return the favor’, give some consideration to whom you are recommending, and what you are recommending them for. If we are ever to enhance peer-to-peer reference systems such that they actually have an impact, it will be critical to carefully follow the recommendations outlined in our own code of ethics.