‘Do your homework’: Munich Re urges caution on AI
Insurers should implement artificial intelligence (AI) in their operations slowly and carefully, with full research and oversight, Munich Re says.
The latest Munich Re Tech Trend Radar report, now in its 11th year, says insurers must establish AI governance to safeguard business and protect customers.
“Take implementation step by step. Insurers would be wasting money on AI and [generative AI] projects if overall data quality is insufficient or the underlying systems do not have performant application programming interfaces,” the report says. “It is vital that we do our homework first.”
Brazil, Canada, China, Colombia, the EU, Mexico and the US have adopted a mandatory regulatory approach towards AI. In contrast, Australia, Japan, Singapore and Britain have an “ethical principles” approach with a set of high-level intentions to be followed when developing AI solutions.
“A good example is Australia’s AI Ethics Principles, a voluntary framework,” Munich Re says.
Risks posed by AI include: fake information, for example, claims being created at scale; lack of transparency, as it can be difficult to understand how generative AI models arrive at their decisions; lack of security awareness among inexperienced users; and challenges meeting strict regulation.
“While the chances are massive, insurers are also facing several hurdles when it comes to tackling use cases with generative AI,” the report says. “Insurers must address the potential for GenAI to hallucinate or generate false information, and [ensure] appropriate measures are in place to prevent fraudulent or incorrect information from being processed.”
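The report’s warning that hallucinated or fraudulent information must be stopped before it is processed can be pictured as a validation gate: fields a generative model extracts from a claim are cross-checked against the policy system of record before the claim enters the workflow. The sketch below is a hypothetical illustration, not Munich Re’s method; the field names, records and checks are all assumptions.

```python
# Hypothetical sketch of a validation gate for GenAI-extracted claim data.
# Extracted fields are checked against the policy system of record so that
# hallucinated or fraudulent values are flagged before processing.
# All record contents and field names are illustrative assumptions.

POLICY_RECORDS = {
    "P-1001": {"holder": "A. Smith", "coverage_limit": 50_000},
}

def validate_claim(extracted: dict) -> list:
    """Return a list of problems; an empty list means the claim may proceed."""
    problems = []
    policy = POLICY_RECORDS.get(extracted.get("policy_id"))
    if policy is None:
        # An ID not present in the system of record may be a hallucination.
        problems.append("unknown policy id (possible hallucination)")
        return problems
    if extracted.get("holder") != policy["holder"]:
        problems.append("holder name does not match policy record")
    if extracted.get("amount", 0) > policy["coverage_limit"]:
        problems.append("claimed amount exceeds coverage limit")
    return problems

claim = {"policy_id": "P-1001", "holder": "A. Smith", "amount": 75_000}
print(validate_claim(claim))  # → ['claimed amount exceeds coverage limit']
```

In practice such checks would sit alongside human review rather than replace it; the point is only that model output is treated as untrusted input until verified.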
Generative AI models must be designed and trained to meet data privacy and security standards, and have transparent, explainable decision-making processes, the report says.
“The investment required for GenAI implementation is significant, requiring investment in technology, data and talent. Sensitive customer data must be protected from cyber threats and maintained in accordance with data privacy regulations.”
Large language models must not perpetuate biases or discrimination and must treat all customers fairly, Munich Re says.
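One way fairness of the kind Munich Re describes can be made measurable is a demographic-parity check: comparing a model’s approval rates across customer groups. The sketch below is a minimal illustration under assumed data; the group labels, decisions and any acceptable tolerance are hypothetical, not drawn from the report.

```python
# Hypothetical sketch: a simple demographic-parity check on a model's
# approval decisions, given (group, approved) pairs. Groups and data
# are illustrative assumptions.

def approval_rates(decisions):
    """Return the approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(f"parity gap: {parity_gap(decisions):.2f}")  # → parity gap: 0.33
```

A large gap does not by itself prove discrimination, but it gives governance teams a concrete number to monitor and investigate, which is the practical substance of the report’s fairness requirement.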