Profiling for profit: consumer group’s artificial intelligence warning

Artificial intelligence (AI) could lead to insurance customers being “profiled for profit” and subjected to increased price discrimination, the Financial Rights Legal Centre says.

In a submission to a Department of Industry, Innovation and Science discussion paper on Australia’s AI ethical framework, the centre says AI is already playing a crucial role in the development of insurtech.

It says AI is used to link behaviour to premium pricing through connected devices and telematics, in the sales process through chatbots, and in claims handling and fraud detection.
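As a purely illustrative sketch of the first of those uses, the fragment below shows how a telematics feed might translate driving behaviour into a premium loading. The factor names and weights are invented for this example and do not reflect any insurer’s actual model.

```python
# Illustrative sketch only: how telematics data might feed premium pricing.
# Factor names and weights are invented; no real insurer model is implied.
from dataclasses import dataclass


@dataclass
class TelematicsProfile:
    """Driving behaviour summarised from a connected device."""
    hard_brakes_per_100km: float
    night_driving_share: float   # fraction of driving done at night, 0.0-1.0
    avg_kmh_over_limit: float


def behavioural_premium(base_premium: float, p: TelematicsProfile) -> float:
    """Scale a base premium by simple, hypothetical behavioural loadings."""
    loading = (1.0
               + 0.02 * p.hard_brakes_per_100km
               + 0.30 * p.night_driving_share
               + 0.01 * p.avg_kmh_over_limit)
    return round(base_premium * loading, 2)


print(behavioural_premium(1000.0, TelematicsProfile(0.5, 0.05, 0.0)))  # 1025.0
print(behavioural_premium(1000.0, TelematicsProfile(6.0, 0.40, 8.0)))  # 1320.0
```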

But the consumer group says further consideration should be given to potential adverse outcomes that could lead to increased economic inequality and financial exclusion.

“Financial Rights is concerned that with the rise of AI in fintech, we will see increased occurrences of consumers being profiled for profit, which will see more people experiencing financial difficulties or hardship being offered unsuitable (but highly profitable) products,” the submission says.

“In the insurance sector, the increased use of Big Data analysis and automated processing allowed by increased computing power will enable insurers to increasingly distinguish between risks on an increasingly granular level.

“This will lead to the higher risks only being able to be insured for higher prices or on worse terms.”
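The pricing effect the centre describes can be made concrete with a toy comparison, using invented figures: under a single pooled price every policyholder pays the portfolio’s average expected claim cost, but once the insurer can segment the pool, each segment pays its own expected cost and the highest-risk customers face a far higher price.

```python
# Toy comparison of pooled vs. granular risk pricing (all figures invented).
expected_cost = {"low": 300.0, "medium": 600.0, "high": 1500.0}  # per policy
pool_size = {"low": 700, "medium": 250, "high": 50}              # policyholders

# One pooled price: everyone pays the portfolio's average expected cost.
pooled = (sum(expected_cost[s] * n for s, n in pool_size.items())
          / sum(pool_size.values()))
print(f"pooled premium: {pooled:.2f}")  # 435.00 for everyone

# Granular pricing: each segment pays its own expected cost.
for segment, cost in expected_cost.items():
    print(f"{segment}-risk premium: {cost:.2f}")
# high-risk customers go from 435.00 to 1500.00 once they can be singled out
```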

Algorithms used by fintechs and insurtechs could deny consumers access to products and services, the centre warns.

Customers could also be left without the ability to ask questions or correct inaccurate data or “underlying assumptions”.

“Algorithmic bias or discrimination is already well documented and arises when an algorithm used in a piece of technology – say a fintech product or service – reflects the implicit or explicit values of those who are involved in coding, collecting, selecting, or using data to establish and develop an algorithm,” the submission says.
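A minimal sketch of how such bias can creep in through data selection, with invented postcodes and loadings: a feature that looks neutral simply reproduces whatever discrimination is baked into the historical data behind it.

```python
# Hypothetical sketch: a "neutral" feature reproducing historical bias.
# Postcodes and loadings are invented. If past claims data links certain
# postcodes to higher costs for reasons correlated with a protected
# attribute, a model built on that data prices the correlation back in.
loading_by_postcode = {"2000": 1.0, "2770": 1.4}


def quote(base_premium: float, postcode: str) -> float:
    """Price using only postcode, i.e. the data choice, not behaviour."""
    return base_premium * loading_by_postcode.get(postcode, 1.0)


print(quote(1000.0, "2000"))  # 1000.0
print(quote(1000.0, "2770"))  # 1400.0, same driver, different postcode
```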

The use of AI could also further expose the sector and its customers to cybercrime, as “financial identities” of consumers are built up and stored.

“With increasingly sensitive and accurate data being held by fintechs, breaches of these datasets make it easier for criminals to use this identifying information to undertake subsequent crimes, financial or otherwise.”