AI inaccuracy a ticking claim bomb, lawyer warns
Mistakes made by artificial intelligence may take years to be discovered and could cost insurers dearly through unintended cover, Clyde & Co partner Darryl Smith says.
Many organisations are integrating AI into their businesses, and insurers should be wary that existing policies, particularly professional indemnity, errors and omissions, and directors’ and officers’ covers, could respond to claims stemming from inaccuracies produced by the technology.
“Any errors within it could lead to claims,” Mr Smith tells insuranceNEWS.com.au. “It was the same with silent cyber – you’re writing the insurance but you’re not collecting the premium. That’s where AI sits.
“The historical example is asbestos ... there’s not $1 of premium written for asbestos, but it costs insurers a large amount of money.”
The main concern is that an AI model produces an inaccurate result that is relied on for advice, and that the error is then propagated widely.
Mr Smith, who says he builds machine learning language models in his spare time as a hobby, says there is a widespread lack of understanding about what AI does.
“It’s just mechanical, mathematical – it’s absolutely rigid in what it does. I think there’s perhaps a perception that it produces something random, that’s creative. It doesn’t do any of that – it produces a result based on the input that’s put in it and the way the model is designed and has been prompt engineered,” he said.
“My concern would be that if you’re using AI and you don’t understand precisely how it works and what it does and what your input is – making sure the input is aligned with the way it’s been engineered – then there is the prospect of an incorrect result coming out the other end.
“If you rely on that result and you springboard your advice based on the result of what the model produces, then the advice can be incorrect. The cause of the loss is the AI, but it’s just a claim for an error or an omission or an inaccuracy.”
AI mistakes could also give rise to claims for intellectual property infringement, defamation or incorrect use of personal information, as well as casualty losses and property damage.
This means AI could be “even more pervasive than cyber in terms of where it can impact policies,” Mr Smith says, and policy proposal and renewal forms should ask if, where and why AI is being used.
“It has the capacity to impact a lot of different types of policies, but if insurers start looking at it now, it’s something that’s more than capable of being addressed,” he said.
“At least we’ve got a road map from silent cyber in terms of how to move forward. You can look at exclusions – which is probably not very palatable to your policyholders – or see an opportunity for a different cover. In cyber, that started with sublimited covers.”