In search of evidence-based bullshit

Monday morning is not the best time to be told to ‘bridge the quality chasm’ and ‘identify your value stream’. I had the misfortune of starting my week with a talk that introduced new health-service management ideas based on psychological-sounding concepts such as ‘lean thinking’ and ‘connected leadership’.

Now, I’ve got no problem with things sounding like bullshit, as long as they work. After all, medicine is one of the few places where you can get away with calling the practice of squirting cold water in the ear ‘vestibular caloric stimulation’.

No-one minds that much, because it’s been very well researched and is known to have a profound, albeit temporary, effect on a number of neurological conditions.

So if I wanted to find out whether any of these new management techniques made an organisation more efficient, the first thing I’d do is find out what the research says.

In health and medicine, the ‘gold standard’ for finding out whether an intervention has an effect is the randomised controlled trial, or RCT.

It’s a simple but powerful idea. You get a group of people you want to study. You measure them at the beginning. You randomly assign them to two groups. One gets the intervention, the other doesn’t. You measure them at the end. If your intervention has worked, one group should be different when compared to the other.

Of course, it gets a bit more complex in places. Making the comparison fair and deciding what should be measured can be tricky, but it’s still a useful tool.
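To make the logic concrete, here is a minimal sketch (not from the original post) of that procedure simulated in Python with made-up numbers: a baseline measurement, random assignment to two arms, a hypothetical intervention effect, and a simple between-group comparison at the end. The sample size, the +3-unit effect, and the use of a t-test from scipy are all illustrative assumptions, not anything prescribed by RCT methodology itself.

```python
# Minimal, illustrative RCT simulation (assumed numbers throughout).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Baseline measurement for everyone in the study (arbitrary units).
n_participants = 200
baseline = rng.normal(loc=50, scale=10, size=n_participants)

# Randomly assign each participant to intervention (True) or control (False).
assignment = rng.permutation(n_participants) < n_participants // 2

# Hypothetical outcome: everyone drifts a little; the intervention arm
# gets an extra (assumed) effect of +3 units.
outcome = baseline + rng.normal(loc=0, scale=5, size=n_participants)
outcome[assignment] += 3

# Compare the two groups at the end with an independent-samples t-test.
t_stat, p_value = stats.ttest_ind(outcome[assignment], outcome[~assignment])
print(f"Intervention mean: {outcome[assignment].mean():.1f}")
print(f"Control mean:      {outcome[~assignment].mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```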

After my traumatic Monday morning experience I went to see what randomised controlled trials had been done on management techniques.

To my surprise, I found none. Not a single RCT in any of the business psychology literature.

Now, this may be because I know little about organisational psychology, and literature searches are as much about knowing the key words as knowing what you want. So maybe RCTs are called something completely different, or I’m just looking in the wrong places.

So, if you know of any RCTs done on leadership and management techniques, please let me know; I’d be fascinated to find out.

I could be completely wrong, but if I’m not, I want to know: why are there no randomised controlled trials in organisational psychology?

And as a corollary, are we spending millions on organisational interventions, supposedly to help patients, that have been tested no more rigorously than the pseudoscience we reject in every other area of medicine?

UPDATE: Some interesting comments from organisational psychologist Stefan Shipman:

It may be that the complexity lies in the fact that organizational research is always secondary to doing business. I can remember in some of my early research that I attempted to implement a new human resources program in one department. The program was successful in its early stages and was (despite my suggestions) implemented company-wide.

I think your post absolutely speaks to the frustration of all organizational psychologists, because of the zeal of organizations to find “new” ways of doing business that are hopefully more effective. This zeal often reduces the “completeness” of research. As organizational psychologists we accept the conditions under which real-world research can be done. We encourage the assignment of conditions but accept that some ideas or programs might “leak” into other parts of the organization.

5 Comments

  1. Tom
    Posted December 2, 2007 at 5:42 am | Permalink

    I think that many of these scientifically *inspired* management techniques are not made to be scientifically sound but to sound somewhat familiar and to be believed by the audience. A scientifically sound and proven management theory that nobody likes and nobody believes in isn’t worth much, because people won’t respond to it. Unless this was specifically announced as a scientific talk, one has to be aware that one is entering a field that may have more to do with collective hypnosis than science. It makes a difference whether you do research on people or whether you do your work among people and with them and need their will to cooperate.
    But of course, since you were a member of the audience, it’s quite possible that the talk was slightly ill-targeted.

  2. Sheila
    Posted December 2, 2007 at 2:44 pm | Permalink

    I’ve often wondered the same thing in my life as a software engineer. I did a double degree in CS and psychology, focusing on cognitive psychology. I took a course in operational psychology, but I can’t remember much from that class to share with you. I vaguely recall discussion of psychometrics and tests of them. So, you’d probably be able to find literature on measuring performance and aptitudes, but I can’t remember anything about management techniques.
    I do pay attention to CACM articles and sometimes they’ll have ones that touch on the topic. For example, “Who should work with whom?: building effective software project teams” (http://portal.acm.org/citation.cfm?doid=990680.990684), but the methodology used isn’t as sophisticated as you can get in other areas of psychology.

  3. Sheila
    Posted December 2, 2007 at 2:45 pm | Permalink

    Oh, and a thought that occurred to me. Perhaps corporations that might do these types of studies don’t share the results as would happen in an academic setting. They might consider them proprietary.

  4. Posted December 4, 2007 at 5:36 pm | Permalink

    RCT is a great tool, but not the best one for every environment. In business you can’t wait a year to conduct and publish a beautiful paper, and even if you did, the artificial environment created wouldn’t be the most relevant for real-time learning and adjusting.
    The best-run companies I have seen or been part of are true learning environments, in which every quarter and year is a real-world experiment, with clear hypotheses, empirical data, and feedback loops. Those results are not usually published in an academic paper, but become part of the culture of the company and are sometimes shared in the form of consulting or speaking engagements. Before you introduce a new practice you set up an imperfect pilot study, ideally comparing similar situations, and then decide what to scale. Imperfect, yes. Faster and more realistic than the alternatives, most likely.
    Having said that, there is a clear difference between a Jack Welch and many “BS experts”: real-world results.

  5. Posted December 4, 2007 at 8:28 pm | Permalink

    I guess my gripe is not with companies who don’t conduct their own RCTs, but with consultants or consultancies who promote particular programmes with no RCT evidence that they work. Perhaps business researchers are also falling short here for not running them.
    Of course, RCTs are difficult to run at work, but they are possible. There are hundreds of occupational health RCTs done in exactly the same environment.
    http://tinyurl.com/2dafb2
    If you can do an RCT to test (for example) the effect of a stress management programme on productivity, why can’t you do one to test the effect of a management programme on productivity?

