Designed to help: AI and the rise of the ‘cobots’
Artificial intelligence (AI) and robotics are changing the world, and the insurance industry had better change with them.
Two new reports from Lloyd’s spell out the risks – and opportunities – as technological change continues to accelerate.
One report focuses on the rapid emergence of collaborative robots, or “cobots” – devices that help humans by extending their physical capabilities.
While cobots account for only 3% of the total robotics market, the figure is expected to reach 34% by 2025.
Their increasing popularity stems from the fact that they are cheaper, smaller and smarter than regular robots. They are also moving beyond factories into sectors such as agriculture, healthcare and retail, where they help people with jobs that are “dirty, dangerous, repetitive and difficult”.
Fear of robots putting humans out of work may be misplaced, the report says.
“Robots, particularly cobots, rarely replace workers; they replace tasks. They often help workers through decision-making, or physical handling, rather than replacing them.”
However, there are significant implications for the insurance industry.
“Widespread cobot use will create new risks, change existing risks and reduce others,” the report says.
“By helping insureds identify the risks and by offering ways to mitigate them, insurance could help increase and speed up cobot adoption.”
Adoption of robots in retail and agriculture is likely to reduce accidents caused by fatigue and could improve safety in areas such as nuclear decommissioning, mining and construction.
“For insurers, increasing adoption of robots in dangerous environments would reduce the number of employee injury claims.”
The risk profile of employers’ liability and public liability could change because “liability could be pushed back onto the robot product manufacturer or designer”.
The report warns cobots require “vast data storage facilities” that could be vulnerable to cyber attacks. There is also potential for “large-scale insurance losses” from business interruption in supply chains that use faulty or failing cobots.
Despite these risks, there are also numerous opportunities for insurers.
Cobots are “a substantial emerging market”, with an estimated compound annual growth rate of about 60%.
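To give a rough sense of what a compound annual growth rate of about 60% implies, the short sketch below projects an assumed starting market size over five years. The $1bn starting figure is purely illustrative, not a number from the report.

```python
# Minimal compound-growth illustration; the starting market size is a
# hypothetical assumption, only the ~60% CAGR comes from the report.
starting_market = 1.0   # assumed market size in $bn (illustrative only)
cagr = 0.60             # ~60% compound annual growth rate
years = 5

projected = starting_market * (1 + cagr) ** years
print(f"After {years} years at {cagr:.0%} CAGR: ${projected:.1f}bn")
# At 60% a year, the market compounds to roughly ten times its size in five years.
```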
“Increasing adoption of cobots in environments that work closely with humans will expand the need for insurance products including: product liability, product recall, cyber, property, (contingent) business interruption and medical malpractice, all of which could be marketed as comprehensive insurance solutions for the robotics sector.”
There is also an opportunity for the insurance industry to work directly with manufacturers to identify risks associated with cobot deployment, and data from cobots could offer opportunities for improved risk and pricing models.
“For example, in ‘precision farming’, sensor data from fields could be combined with external climate and weather data to allow developers to develop algorithms that help the farmer make best use of their land,” the report says.
“This in turn might allow insurers to create bespoke and more accurately priced crop insurance.”
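A minimal sketch of what that combination might look like is shown below: field sensor readings and an external weather outlook are blended into a simple risk score, which then scales a base premium rate. The field names, weights and pricing rule are all hypothetical assumptions for illustration, not the report's model.

```python
# Hypothetical sketch: combine in-field sensor data with an external weather
# outlook to produce a crop-risk score and a bespoke premium.
from dataclasses import dataclass

@dataclass
class FieldReading:
    soil_moisture: float       # 0..1, from in-field sensors
    crop_health_index: float   # 0..1, e.g. derived from crop imaging

@dataclass
class WeatherOutlook:
    drought_probability: float  # 0..1, from an external climate/weather feed
    flood_probability: float    # 0..1

def risk_score(field: FieldReading, weather: WeatherOutlook) -> float:
    """Blend sensor and weather signals into a 0..1 risk score (toy weights)."""
    drought_stress = (1 - field.soil_moisture) * weather.drought_probability
    damage_risk = (1 - field.crop_health_index) * 0.5 + weather.flood_probability * 0.5
    return min(1.0, 0.6 * drought_stress + 0.4 * damage_risk)

def bespoke_premium(sum_insured: float, score: float, base_rate: float = 0.02) -> float:
    """Price cover as a base rate scaled up by the field-specific risk score."""
    return sum_insured * base_rate * (1 + 2 * score)

# Example: a fairly dry field facing an elevated drought outlook.
field = FieldReading(soil_moisture=0.35, crop_health_index=0.8)
weather = WeatherOutlook(drought_probability=0.6, flood_probability=0.1)
score = risk_score(field, weather)
print(f"risk score {score:.2f}, premium {bespoke_premium(100_000, score):,.0f}")
```

The point of the sketch is the data flow, not the numbers: per-field sensor data plus external weather data yield a field-specific view of risk, which is what would let an insurer price crop cover more precisely than a one-size-fits-all rate.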
Lloyd’s second report highlights increasing use of AI.
While AI has been around for 60 years, Lloyd’s says its “recent, rapid escalation” has awakened awareness of its complex ethical, legal and societal challenges.
Areas of insurance that could be affected include:
- Product liability and product recall. Recalls could become larger and more complex. AI machines cannot be liable for negligence or omissions, so who is?
- Third party motor. Assignment and coverage of liability will be difficult due to a shift of responsibility from human drivers to automated vehicles.
- Medical malpractice. AI is being used to help diagnose conditions, and an error could amount to negligence.
- Cyber. As chatbot technology develops, it is increasingly difficult to tell humans and AI apart, which could make it easier to carry out phishing scams. This raises questions about what types of insurance would be available to cover against such losses.
- Fidelity. Fraudulent activity by employees could be exacerbated. Fraud may increasingly come from staff with access to IT systems rather than those with financial authority. The emergence of “deep fakes”, AI systems capable of generating realistic audio and video, is also a concern for identity fraud.
- Political risks. The weaponisation of AI “could take many forms” and AI might contribute to events such as expropriation, wars, terrorism and civil disturbance.
Again, the rapid development of AI presents business opportunities for insurers.
Any company offering algorithm-based systems may seek to insure against the risk of those systems returning incorrect decisions, and against the resulting impact on its clients, the report says, and risk management requirements are increasing.
AI can also be used to improve insurance processes such as customer service (with chatbots), underwriting (which could be enhanced and sped up) and fraud detection.
“Our world is becoming increasingly automated,” Lloyd’s Head of Innovation Trevor Maynard says. “Insurers have an opportunity to play a role in shaping the development of AI and robotics, and will no doubt be instrumental in providing solutions to some of the most complex risks associated with these technologies.
“The publication of these two reports aims to provide underwriters with guidance on best practice, as well as insights into the short, medium and long-term potential of AI and robotics.”