Blog 1: Evidence-itis

8 June, 2012

Evidence is a valuable part of management but, like any language, it has limitations. This post argues that understanding organisations requires more than one language.

In May I talked at the excellent Connecting HR conference. Here’s the gist of what I said.

I had stumbled across a depressing comment by a human resources luminary called John McGurk. He had written that “deep business understanding can only be demonstrated by evidence-based HR.” Depressing because it suggested a belief that evidence = the only way to show that you understand business.

I became interested in metrics through my research on the paradoxes that are a feature of organisational life – like the short-term vs the long-term. The story (so far) is that the short term often gets preferred treatment because it’s easier to measure than the long term, and one of the universal truths about business is that what gets measured gets done. Or rather, it’s a universal truth because it’s barely challenged, and I’ve become more and more curious about why this is so.

HR people revere metrics because they want to be taken seriously in companies where the lingua franca is metrics, but in embracing evidence-based HR as the gold standard they abandon the very thing which could make them invaluable: a firm grasp of the psychosocial – what goes on inside and between people.

Evidence begins with a claim about what is true. You set about testing variables to support your claim, and if the evidence isn’t there you should drop your claim. This is a very specific take on what is true and how you can know it. Evidence is part of a system modelled on the Enlightenment and the natural sciences – physics, chemistry and biology. But the social world – organisations included – is a contaminated laboratory where certainty is arguably a lot more elusive. (For some interesting thoughts on social science and work, look at a recent post on Flip Chart Fairy Tales here).

This blog isn’t trying to diss evidence: just our faith in it. So here are 3 reasons to question this faith…


1. The evidence approach often falls way short of high science

There is some McKinsey & Whitehead Mann research (see graph from Human Resources International, Sept 2011) which says that certain kinds of leaders are correlated with more financially successful companies. This is interesting and it could be the start of a great conversation. What is surprising is that the article shifts from saying there’s a correlation (which is fine)…to claiming that one thing causes the other. In other words a certain kind of leader predicts company success.

“The study shows that leaders make all the difference in powering growth.”

To know something causes something else, you need to be sure you can isolate it as a cause. You also need to be sure it was there first! This McKinsey article is like me claiming I can predict tonight’s lottery, but only producing the proof after 8pm. We all want good evidence to explain things, but we should be candid when it doesn’t – shouldn’t we? And if we can’t rely on kids at the top of the class (McKinsey) to be clear about this…

2. Not everything that counts can be counted

Some of the biggest ideas come about when evidence and logic are ignored. What Einstein called ‘the most beautiful thought of his life’ began when he decided to play with an impossibility: objects which are moving and stationary at the same time. He imagined a man jumping off a tall building. On the way down he takes out his wallet and lets go of it. For a few seconds both objects are falling and yet they’re both stationary relative to each other. This produced the insight that eventually grew into his theory of relativity. In organisations the innovation that leaders crave often starts in the same way – with something that goes against common sense. But it’s hard to deviate from what’s established when all you want to do is make your world coherent and logical.

3. Evidence ≠ neutral

Evidence is regularly portrayed as something pure, unadulterated, outside politics. But how likely is it that anything is unpolitical when organisations are social entities? One of the great successes of rationalism in organisations has been to link metrics to brainpower. When HR people argue a point without evidence they’re torn, because their heart says they’re on to something, while their training tells them that evidence is the high water mark of intellectual rigour. ‘Put that into numbers and we may just take you seriously.’ Most people don’t want to look unrigorous, so you need to be a very confident HR person to insist that some of the best judgments are made without what’s called objective evidence. Yet for all the senior people (and scientists) I know, that skill is vital.


What to do?

Use evidence where possible. Like duh. Be as commercial as possible. Duh. But also…

1. Understand how evidence-based management works so that you can expose its assumptions and what it can’t explain.

2. Show how other approaches can shed light on organisations and the people who work in them – eg psychoanalytic, critical, postmodern. This is not evidence OR a shinier alternative. It’s both/and.

3. There’s something else. All this zeal for evidence says something profound about our relationship with ambiguity. Brands are social properties which only really exert influence when people adopt them as their own. For 10,000 employees that means 10,000 subtly different interpretations of the brand. For the old school of brand management this idea is terrifying. The old school sees brand as something to control. The new school sees brand as a broad platform to inspire action.

The school of evidence-based HR has a similar choice. One option is to follow John McGurk until evidence has presumably banished ambiguity for ever. The other is to start seeing organisations as ambiguous, messy places where evidence is just one kind of knowledge, not the gold standard.

3 comments

Kerstin Sailer June 11, 2012 at 10:33 am

Nice post, Jamie! I like the idea that organisations are ambiguous and messy and that knowledge can come in many different forms. However, I would disagree that evidence-based practices (management, HR, or in fact design, which I am working on) only speak one single language of ‘truth’, ‘causation’ or only display a specific kind of knowledge, i.e. clearly measurable metrics.
An evidence-based practice is only as good as its scientific basis and rigour. That also means choosing appropriate methods and a valid ‘research design’, i.e. how to combine various methods to answer your research questions.
This doesn’t necessarily mean reverting to quantitative data and easy statistics, which would only communicate half of the story. As we agree, organisations are messy, political and ambiguous.
If an evidence-based practice uses multiple data sources, multiple and diverse methods and combines insights from quantitative and qualitative data, and weaves this into a complex story of what is going on in an organisation, I believe it can be an important contribution to innovation, change and new, radical thinking.
We have to keep in mind that there’s (as always) good science and bad science. And evidence-based practices can only be as good as the underlying science. Unfortunately, too often this is overlooked.


Jamiepr June 12, 2012 at 1:12 am

Thanks very much for your interesting thoughts, Kerstin. What you say makes sense, and if we disagree it’s probably about the definition of evidence. I had in mind reductionist methods which support truth claims with objective evidence. All research relies on interpretation (e.g. about whether and how much engagement surveys measure actual engagement) and lots is possible when people are lucid and honest about its limitations. Even engagement surveys can start useful conversations. The point I wanted to make is that evidence can also be used to stop conversations when it’s promoted as unchallengeable science, or a regime of truth, as Foucault put it.


Martin Amor June 11, 2012 at 6:36 pm

Hi Jamie.
Nice post. Awesome site :-)
Couple of thoughts…
1. I wonder if evidence-based metrics are best suited to the sort of thing you’ve got time to think about, but less useful when you need speed, momentum even. By the time you’ve worked out how to measure something nuanced or psychosocial, the objective may be obsolete. Where there’s a culture of measurement there’s a risk of foot-dragging and the devaluing of instinct, intuition, and the softer skills of coaching, personal development, being congruent.
2. My favourite measuring story – I took a bank to meet Google to explore customer centricity, but we ended up exploring performance metrics as a contributor to great teamwork, customer service and innovation.
Google’s performance metrics are smart. Broadly, everyone scores themselves and everyone they work with on the 8-10 criteria which Google believes (knows?) deliver success on projects. This allows them to see what people on teams value, how people view each other, how good they are at knowing their own skills and at spotting skills in others. Plus you can use it for creating good teams, identifying development areas for individuals, plugging gaps on teams.
Meanwhile at the bank the team leader allocates A through E to each team member. The scores are normally distributed, so everyone knows there is only one A, one E, two Ds and so on. Basically this structure is divisive and fundamentally competitive, as everyone is aiming to avoid the E and push their colleagues out of the way to get the A.
I suppose this means – understand the metric’s psychosocial impact, understand what you are trying to learn and make the metric a tool for improving things rather than for covering arse or being bulletproof in front of the boss.


