Regulator calls for ‘quick work’ on AI governance
Financial services companies must comply with current obligations when deploying artificial intelligence and avoid “simply waiting” for AI-specific regulations to be introduced, the industry watchdog says.
An Australian Securities and Investments Commission review of AI use by 23 financial services licensees found there is potential for governance to lag behind its adoption. Only 12 had policies in place for AI that referenced fairness or related concepts such as inclusivity and accessibility, and only 10 had policies referencing disclosure of AI use to consumers.
Commission chair Joe Longo says this policy shortfall raises the risk of misinformation, unintended discrimination or bias, manipulation of consumer sentiment, and data security and privacy failures.
He says adequate governance arrangements should be in place before AI is deployed.
“Work needs to be done – and quickly – to ensure governance is adequate for the potential surge in consumer-facing AI,” he said. “There is the potential for a governance gap, one that risks widening if AI adoption outpaces governance in response to competitive pressures.”
AI use is accelerating rapidly, ASIC found, with 61% of licensees intending to ramp up implementation.
So far, licensees use it mostly to support human decisions and improve efficiency.
Current consumer protections, directors' duties and licensee obligations put the onus on institutions to ensure they have appropriate governance frameworks and compliance measures to manage their use of new technologies, including ongoing due diligence to mitigate third-party AI supplier risk.
ASIC is monitoring how licensees use AI to protect “the safety and integrity of the financial system”.
The review, which included general and life insurance businesses where AI interacted with or affected consumers, found that risks to consumers include unfair or unintended discrimination due to biased training data or algorithm design, and biased AI outputs that could lead to denial of insurance or higher prices, among other harms.
Uses of AI included deep learning models for natural language processing and optical character recognition when scanning analogue form data to speed up insurance processes; actuarial models for risk, cost and demand modelling; and supporting the claims process with triaging, decision engines, document indexation and identifying claims for cost recovery.
Machine learning was used in underwriting to extract and summarise key information from customers’ applications, while generative AI and natural language processing techniques extracted and summarised key information from claims, emails and other key documents.
The report includes a case study in which AI use was not disclosed to a consumer making a claim. The licensee used a third-party AI model to assist with indexing documents submitted for claims, which included sensitive personal information.
“The licensee identified that consumers may be concerned about their documents being sent to a third party to be read by AI, but decided not to specifically disclose this to consumers,” ASIC said.
“The licensee’s documents explained that its privacy policy stated that consumers’ data would be shared with third parties, and the data was at all times kept in Australia. But consumers were not specifically informed that some sharing of information involved AI, or about whether they could opt out.
“It illustrates the complexity of the issue and the potential for loss of consumer trust.”
In another case study, a business with a new ethical principle on transparency had not applied it to a model affecting consumers making insurance claims.
ASIC describes this as a “failure to apply evolving policies”.