September 15, 2015, by Academic contributor

Understanding risk and responsibility

When things go wrong, more often than not, we look to blame individuals. Someone has to carry the can. The buck has to stop somewhere. Rarely is culpability ascribed on a collective, organisational or institutional level.

But what if the problem is one of culture? What if individuals “fit in” with what they believe will be accepted – whether by their group, by their bosses or by their company?

The academic literature in this field suggests that behaviour within organisations is influenced in some way by the interaction of individual morality, work climate and collective moral emotion. These combine to create both intentions to behave and actual behaviour.

Case in point: Mid-Staffordshire. The Francis Report made absolutely plain that the fundamental cause of what happened – or, more pertinently, what was allowed to happen – within Mid-Staffordshire NHS Trust was a wholesale, across-the-board failure of culture.

Francis found that the practices that developed were entirely inconsistent with the basic obligations of any NHS employee and the professional duties of any clinician. Most damningly, the erosion of acceptable conduct was essentially all-encompassing. As the report noted in one of its most revealing observations, staff “did the system’s business”.

This is a classic case of what has come to be known as “people risk”. It is the elephant in the room – one capable of both terrifying and enthralling any industry that might tend towards risk-averseness in public and rashness in private.

The central tenet of people risk is that incidents such as the catastrophe that unfolded at Mid-Staffordshire can almost invariably be traced back to internal hierarchies that permit major errors of judgment to go undetected, ignored or suppressed until calamity strikes. We can draw salutary lessons from examples in many fields, perhaps most notably the world of finance and the global economic crisis that hit in 2008.

The US sub-prime housing market collapse that began in the mid-2000s had its roots in an astonishing absence of responsible risk controls. With staff pursuing their own destabilising agendas both individually and collectively, mismanagement and myopia drove institutional contempt for risk to its zenith. Only years later did Clayton Holdings, the West’s largest residential loan due diligence and securitisation surveillance company, reveal that a review of more than 900,000 mortgages issued from January 2006 to June 2007 had shown just 54% of loans met the lenders’ underwriting standards.

There are numerous less renowned instances. In 1997 NatWest Markets, the bank’s corporate and investment arm, announced it had discovered a £50m loss – a figure that escalated to £90.5m after additional inquiries into the actions of just two employees. In early 2008 it emerged that a relatively junior futures trader at Société Générale, Jérôme Kerviel, had lost an incredible €4.9bn through fraudulent, unauthorised trades.

Of course, the repercussions of people risk are not always measured in profit and loss alone. NASA’s ill-fated decision to launch the Challenger space shuttle on the freezing-cold morning of January 28, 1986 provides an illustration of the devastating consequences in human terms.

NASA was acutely aware of an issue with the O-rings in the shuttle’s solid rocket boosters, but it repeatedly reinterpreted evidence of the problem as within the bounds of tolerable risk. Flying with the flaw thus came to be deemed entirely acceptable. On what proved to be the day of Challenger’s final mission, when low temperatures exacerbated the problem to a critical degree, the agency’s selective blinkeredness culminated in the very public deaths of seven astronauts.

Richard Feynman, the maverick physicist and Nobel Prize winner, was part of a presidential commission tasked with investigating the disaster. He famously exposed NASA’s institutional shortsightedness by dropping a chunk of O-ring into a glass of iced water during a televised hearing. Remarking on the disconnect he found between NASA’s managers and engineers, he later wrote: “It’s a question of whether, when you do tell somebody about some problem, they’re delighted to hear about it and say ‘Tell me more’ or they say ‘Well, see what you can do about it’ – which is a completely different atmosphere. If you try once or twice to communicate and get pushed back, pretty soon you decide ‘To hell with it’.”

Another academic who investigated the Challenger tragedy, the American sociologist Diane Vaughan, coined the phrase “the normalisation of deviance” to describe an organisational situation in which bad habits become standard practice. Again, it is depressingly easy to apply this term here, there and everywhere.

One reason why deviance is able to flourish in so many settings is that three key strands of business bureaucracy exist in a state of constant antagonism. Upper management either blithely assumes safe policies are being implemented or is content to collude with risky practices that deliver short-term results. Risk assessors are left to cope with the contradictory demands of executives who maintain that greater performance can go hand-in-hand with greater caution. Frontline staff do not understand – or do not care to understand – the implications of policies that stop them maximising their earning potential through taking risks.

This can lead to a perfect storm. People act subjectively out of malice, avarice or ignorance. The organisational fracture between top and bottom is exploited as a matter of course. The most cavalier risk-takers are lauded right up until the point where their actions finally backfire in spectacular fashion, while whistle-blowers are more likely to be sacked than supported.

The bottom line is that a demotivated and opportunistic workforce will at best look away and at worst abuse the flaws in the system when confronted by a grave risk incident. By contrast, an appropriate work climate and a well-motivated and ethical workforce may help to detect such an incident.

Although the threat of people risk remains enormous, solutions are less than abundant. But one possible answer is to nurture a culture in which responsible behaviour, not its diametric opposite, is rewarded and incentives are linked to compliance rather than to subversion.

Francis hinted at something similar, suggesting that a key objective in seeking to prevent a repeat of Mid-Staffordshire should be the fostering of circumstances in which it is easier to express genuine fears about care quality than it is to stifle them. In other words, the raising of concerns should be celebrated rather than censured.

Interestingly – not to mention encouragingly – we already know that people risk is a problem that nobody is especially keen to confess to but which everybody is keen to tackle. The buy-in from the financial services sector in particular has been considerable. Various institutions have collaborated in our research, and one firm has even allowed its staff to take part in a full assessment of people risk. As a result, we are gradually uncovering the first empirical evidence of how and why people risk arises.

It would be foolish to pretend conclusive solutions will be achieved either effortlessly or quickly, whether in healthcare or elsewhere. Equally, though, we should not suppose the problem will simply vanish. Maybe above all, we should not imagine for a moment that doing nothing will result in anything other than further harsh lessons. In the end, as Francis laid bare, it is principally a question of cultural change.

Dr Robert Webb is an Associate Professor of Banking at Nottingham University Business School.

Posted in Risk