Microsoft, Big Data and statistical idiocy
Stack rankings: An example of the misunderstanding and misuse of statistics
Published 16:43, 09 July 12
The performance management system of "stack ranking" has been in the news this week. Stack ranking is controversial for a number of reasons but our particular bugbear is that it is an example of the broader misunderstanding and misuse of statistics that is endemic in many organisations.
As we move into the age of Big Data, understanding what statistical probabilities indicate will become dramatically more important. Organisations need to start getting this right as soon as possible.
Last week, Vanity Fair published an article about Microsoft, arguing that the company's staff performance management system, stack ranking, was responsible for "crippled innovation". We see this as one of a rising tide of examples of the misunderstanding of statistics that is endemic in many organisations, and that, with the rise of big data, could prove fatal. According to author Dick Grote, quoted in CBS News, stack ranking, also known as forced ranking, is used by a third of Fortune 500 organisations. This particular misuse of statistics is therefore hugely widespread.
Stack ranking is a performance measurement system that, in Vanity Fair's words, "forces every unit to declare a certain percentage of employees as top performers, good performers, average, and poor." The problems with stack ranking from an employee perspective, particularly in organisations that produce intellectual capital, are widely documented, and include:
- Business units often have more good performers than the quota allows, so strong team members have to be marked down to fill it.
- Managers retain poor performers without encouraging them to improve, so that good performers don't have to be marked down.
- Team members compete with each other rather than work together, because they know that someone has to fill the bottom slot in the annual appraisal table.
However, the stack ranking methodology also constitutes a misuse of statistics. From a statistical perspective, the stack ranking methodology is wrong because:
- It is often propagated through organisations by being applied at the level of each team, and a small or even medium-sized team (say, below 25 people) is a statistically invalid sample. Stack ranking rests on the assumption that employee performance follows a normal distribution (an assumption that is broadly accepted but not statistically proven). Even if that distribution holds across a statistically valid sample, there is no reason for it to hold in a team of 6 people - the number of instances is simply not large enough.
- Even for large enough teams, the chosen performance levels and quota percentages have no reason to match the normal distribution: for that to be the case, the levels and quotas within each business unit would have to be statistically derived, which of course they are not.
- It is an example of confusing correlation and causality. For example, in a team of 10, if 2 people are already marked "strong" and 6 people are "satisfactory", the stack ranking system in effect assumes that the remaining two are "poor". In reality this is not the case, because each employee's performance is relatively independent of that of their colleagues. Thus stack ranking assumes the statistics dictate reality, rather than reflect reality.
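The small-sample point above can be illustrated with a quick simulation. The figures here are illustrative assumptions, not data from the article: suppose performance across the whole organisation really is normally distributed (say, scores around a mean of 100 with a standard deviation of 15), and a team of 6 is a random draw from that population. A forced-ranking quota still fills a "poor" slot in every team, but in a large fraction of teams of 6 nobody actually belongs to the population's bottom 10%:

```python
import random

random.seed(0)

# Hypothetical setup: performance follows N(100, 15) across the whole
# organisation, and a team of 6 is a random draw from that population.
POP_MEAN, POP_SD = 100.0, 15.0
BOTTOM_10_CUTOFF = POP_MEAN - 1.2816 * POP_SD  # 10th percentile of N(100, 15)
TEAM_SIZE = 6
TRIALS = 100_000

# Count how often a random team of 6 contains *nobody* from the
# population's bottom 10% - yet forced ranking would still label
# someone in that team "poor".
no_true_poor = sum(
    all(random.gauss(POP_MEAN, POP_SD) > BOTTOM_10_CUTOFF
        for _ in range(TEAM_SIZE))
    for _ in range(TRIALS)
)

rate = no_true_poor / TRIALS
print(f"Teams of {TEAM_SIZE} with no genuine bottom-10% member: {rate:.1%}")
# Analytically this is 0.9 ** 6, roughly 53% of teams.
```

In other words, under these assumptions the forced "poor" label is handed to someone who is not a bottom-10% performer in more than half of all teams of 6 - the distribution that may be real at organisation scale simply does not survive being imposed on a sample that small.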
As we move into the age of Big Data, we will move away from deterministic systems that give us answers and towards a more stochastic world, where systems crunch the data to give us probabilities and likelihoods instead of simple answers. Understanding what statistical probabilities indicate will become dramatically more important. The data we use to support decisions, and how we integrate, derive and enrich it, will become more complex across multiple dimensions.
We need to question methodologies that misuse statistics, and build cultures that evolve them towards ones that use statistics to answer what is truly important: what is causing our success or failure, how can we sustain success and correct failure, and how is the external environment changing in ways that require us to evolve?
Does your organisation misunderstand statistics, apply statistics on small numbers or arbitrary segments, confuse causality and correlation, or assume statistics dictate reality rather than reflect it? Please share your thoughts.
(Thanks to Mike Glennon of IDC UK for his input into this post.)
Posted by Alys Woodward, Research Director, European Business Analytics, Enterprise Collaboration and Social Platforms