Rewriting history: insurance ‘what ifs’
On the evening of July 7 this year, the worst aviation disaster on record very nearly happened.
Air Canada flight 759 from Toronto was preparing to land at San Francisco airport. The weather was clear, so the pilot was flying a visual approach, but he failed to notice that he was lining the aircraft up with a taxiway where four fully loaded planes were waiting for take-off.
Air traffic control ordered AC759 to abort its landing when the aircraft was just 30 metres off the ground.
Event reconstruction showed that had the pilot pulled up just five seconds later, it would have been too late and hundreds of people would have died.
But it didn’t happen, so why worry?
A new report from Lloyd’s and catastrophe modeller RMS says we absolutely should analyse past near misses, to help us understand potential losses and learn from them in the same way we learn from actual events.
It is called downward counterfactual analysis – essentially asking how things might have been worse. It is rarely carried out, but the report argues there is huge value in doing so.
“In statistical analysis, historical data is usually treated as fixed rather than one possible version of many that could have occurred if various influencing factors had been different,” the report says.
“This can be a weakness in risk modelling. For example, in the case of modelling rare extreme events, the lack of loss data may give a false picture of the actual threat level, which could have been distorted by near misses and good fortune.
“Downward counterfactual analysis could help insurers to identify such anomalies, and to adjust risk models and pricing accordingly.”
Such an approach can help insurers identify unlikely but possible “black swan” events. The result is a deeper understanding of future risks, and higher levels of preparation.
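The modelling idea can be sketched in a few lines. The numbers below are invented for illustration only: suppose an insurer's record contains two actual losses plus three near misses, each with an assumed chance of having "turned for the worse" and an assumed severity. Resampling alternative histories then shows how far the observed record understates the expected loss.

```python
import random

# Hypothetical figures (invented for illustration): two actual insured
# losses, plus three near misses that could plausibly have become losses.
actual_losses = [120.0, 45.0]           # insured losses, $m
near_misses = [(0.3, 500.0),            # (assumed chance the near miss
               (0.1, 900.0),            #  becomes a loss, assumed
               (0.2, 60.0)]             #  severity if it does, $m)

def simulate_history(rng):
    """One counterfactual realisation of the historical record."""
    total = sum(actual_losses)
    for p_loss, severity in near_misses:
        if rng.random() < p_loss:       # the near miss turns for the worse
            total += severity
    return total

rng = random.Random(42)
histories = [simulate_history(rng) for _ in range(100_000)]

observed = sum(actual_losses)           # what actually happened
expected = sum(histories) / len(histories)
print(f"observed loss: {observed:.0f}m, counterfactual mean: {expected:.0f}m")
```

On these invented figures the counterfactual mean comes out well above the observed record, precisely because incidents that happened to end well are put back into the loss picture rather than counted as zeros.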
The report uses the September 11 2001 terrorist attacks in the US as an example.
Our natural response is to consider “upward counterfactual” thoughts: if only the FBI had the authority to open suspected terrorists’ computers; if only the FBI and CIA shared intelligence more widely; if only security at Boston’s Logan airport had been tighter.
However, the report argues “the more searching question” is: “Why didn’t this happen before?”
Two years earlier, an Egyptian pilot deliberately crashed EgyptAir 990 into the Atlantic en route from JFK to Cairo.
In 1994 terrorists hijacked AF8969 in Algiers with the aim of crashing into the Eiffel Tower. The French authorities sent commandos to storm the plane while it was refuelling in Marseilles and a disaster was averted.
If downward counterfactual analysis had been applied to these incidents, the world could have been better prepared on September 11 2001.
Assessments could have been made of the damage to a skyscraper from passenger jet impact, and expected insured losses.
“Most events have either happened previously, almost happened previously or might have happened previously,” the report says. “Conceptually, the historical past has a dense labyrinthine event-tree structure, and the domain of future possibility is mostly spanned by history, its perturbations and variants.
“Yet, the past is typically perceived in a fatalistic way somehow as having been inevitable.”
There is a local example, not included in the report, that illustrates the point perfectly.
If downward counterfactual analysis had been applied to Melbourne’s Lacrosse apartment blaze in 2014, caused by flammable building cladding, could London’s deadly Grenfell Tower fire have been prevented?
RMS catastrophist Gordon Woo says insurers should look at the past as just one realisation of what might have happened.
“Whatever the past, risk insight is gained from exploring how things might have turned for the worse – the downward counterfactuals,” he says.
“By adopting a counterfactual perspective and exploring how historical events could have unfolded differently, additional insight can be gained into rare extreme losses that might otherwise come as a surprise.”