AI threatens public utility of cover, senator warns
Artificial intelligence’s ability to “drill down” to an individual customer’s risk profile and set one-on-one insurance premiums “undermines the whole market model”, Greens senator David Shoebridge says.
AI is enabling insurers to conduct “far more granular risk assessment – basically deconstruct the pool – and in doing that potentially undermining one of the chief benefits of an insurance model, to spread the risk, to pool the risk”, he told a Senate committee last Tuesday.
“It’s not unlawful. It’s probably the profitable way of pricing your insurance products, but it undermines the whole market in many ways – the social utility of insurance. There’s a public interest in sharing the burden across society.”
Australian Securities and Investments Commission chairman Joe Longo, who was asked how the regulator is responding, told the committee there are “some very fundamental public policy issues lurking there – for example, natural disasters, floods”.
“We’re going into a period now where a number of areas in Australia are becoming uninsurable – or insurable at an unaffordable cost. I think that’s related to your question,” he said, adding there was public interest in risks being properly priced.
ASIC executive director Calissa Aldridge said it is a “very challenging” issue.
“Licensees need to comply with the obligation to operate efficiently, honestly and fairly, and that does come into it, but ultimately they’ve also got to manage their risk. There’s also an argument there that some consumers would prefer to pay the premiums based on the risk that they provide,” she said.
ASIC is reviewing the use of AI and advanced data analytics by more than 20 entities, and expects to release findings later this year. Mr Longo says Australia “probably needs an enhanced regulatory framework”, but “we shouldn’t assume that additional regulation is going to have a chilling effect” on innovation.
“My argument is we need more regulation – we just don’t know what it looks like yet, so that it works for us culturally, for our way of life and for what we’re capable of dealing with.”
The European AI regulatory model is “very prescriptive” while the British model is “more facilitative”, he says.
“There are a number of models we can choose from. My fundamental proposition for today is the existing legal framework – we need to make a deliberate decision as a country whether we’re happy to leave it alone or whether we think we need more regulation and, if so, what that looks like.”
Mr Longo noted ASIC has ongoing litigation against IAG over algorithms it used to generate premium notices to customers and is “testing pricing promises”.
“What we’re doing there is quite innovative. I think it’s the first of its kind in Australia, if not globally. We’re not satisfied that the algorithm actually works to deliver what consumers think they’re getting. That’s a really good current example of the use of algorithms that directly affect consumers,” he said.
“This is at the cutting edge of why we should all be concerned about AI, in terms of trying to understand what’s going on with whatever the application is, and how we satisfy ourselves – we haven’t even got to data poisoning or [AI] hallucinations yet.”