Consumer group flags Big Data concerns
Increased use of Big Data could create an insurance affordability problem as the industry assesses risk at a “granular level”, the Financial Rights Legal Centre says.
In a submission to a Department of Industry, Innovation and Science discussion paper on Australia’s artificial intelligence (AI) ethical framework, the centre says AI is already playing a crucial role in the development of insurtech.
It says AI is used to link behaviour to premium pricing through connected devices and telematics, in the sales process through chatbots, and in claims handling and fraud detection.
But it warns this could lead to increased economic inequality and financial exclusion.
“In the insurance sector, the increased use of Big Data analysis and automated processing allowed by increased computing power will enable insurers to… distinguish between risks on an increasingly granular level,” the submission says.
“This will lead to the higher risks only being… insured for higher prices or on worse terms.”
Algorithms used by fintechs and insurtechs could deny consumers access to products and services, the centre warns.
Customers could also be left unable to ask questions or to correct inaccurate data or “underlying assumptions”.
“Algorithmic bias or discrimination is already well documented and arises when an algorithm used in a piece of technology – say, a fintech product or service – reflects the implicit or explicit values of those who are involved in coding, collecting, selecting, or using data to establish and develop an algorithm.”
The use of AI could also further expose the sector and its customers to cyber crime, the centre says, as consumers’ “financial identities” are built up and stored.