AI can make insurance delivery more efficient, effective: ICA 

Artificial Intelligence (AI) offers the potential for insurers to deliver financial risk protection in more efficient and effective ways, the Insurance Council of Australia (ICA) says. 

In a submission to a Federal Government discussion paper on safe and responsible AI, ICA says there are diverse potential AI use cases in general insurance, from more efficient automation of claims handling processes to more “interactive and tailored” engagement with consumers.  

“AI provides the potential for insurers to deliver this critical function in more efficient and effective ways,” the submission said. 

It is important that AI technology is harnessed in a way that is safe and consistent with consumer protection principles, ICA says, though it notes public discourse around AI “is focused on the potential threats to humanity” and public trust and confidence in AI is low.

“There is the potential for the discourse to manifest in unnecessarily negative consumer sentiment towards all AI applications,” ICA said.  

“Negative consumer sentiment makes it difficult for firms and governments to invest in and deploy AI applications that will improve the lives of Australians and/or drive productivity gains and economic growth.”  

ICA says the Government has a role in acknowledging beneficial AI applications, and an opportunity to highlight the governance AI is already subject to under existing regulatory frameworks, such as privacy and consumer protection laws.

A public pro-innovation principle from the Government would help attract international capital and talent and support Australia’s ambition to be a global leader in AI, ICA says.

Many potential risks or consumer harms are likely to already be addressed by existing consumer protections, and AI-specific regulation would likely overlap with them. ICA recommends regulation be “technology neutral” and not specific to AI products or to particular use cases.

“The Government should make clear what it considers to be unacceptable practices, not unacceptable applications of a particular technology,” ICA said. 

“The focus should be on whether their application could result in new risks or harms and exploit or create gaps in the existing regulatory framework,” it said. “Ensuring coverage under existing regulations, such as consumer protection laws, will mitigate the risk of AI regulation which may become outdated.” 

ICA says ensuring anti-discrimination legislation covers AI systems is better than creating further regulation, for example around social scoring and AI, and “this will combat the conflation of the entire AI ecosystem with specific harms”.

“Elements of both regulatory and voluntary approaches may be appropriate in the Australian context,” ICA said, adding there is “significant work to do to develop an approach that industry is comfortable with”.  

ICA wants the Government to create a central AI expertise body that would advise other parts of the Government on AI, consulting with AI experts in the private sector and academia.

Its first task should be to develop a national risk statement, and its second to examine existing regulation affecting AI with a view to identifying gaps and how regulations could be made clearer and drive better outcomes.

See the submission here.