Analysis: AI's hidden risks
Need to know
- AI will raise tricky liability questions
- The balance of claims might shift away from human-factor exposures towards public liability
- Exclusions might be needed to rule out unintended consequences of machine learning
Artificial intelligence will have unexpected consequences, raising tricky liability questions and probably changing the nature of claims
When designers at the University of Science and Technology of China introduced Jia Jia, a human-like robot, last year, they predicted that within a decade artificially intelligent robots could be performing menial tasks in restaurants, nursing homes, hospitals and households.
But if a machine-learning robot alters its behaviour and causes harm, who is accountable? This requires clarification, the House of Lords Select Committee on AI said in April. It urged the Law Commission to review this, and said it was critical the technology and insurance industries provided input.
Insurance issues are tricky because the potential stakeholders in any AI application range from the algorithm designer, coder and integrator, and the owner of the data sets, to the manufacturer of the product using them, explains Mark Deem, a London partner at Cooley, a US-headquartered law firm. He warns that difficult and uncertain litigation to apportion liability could stifle investment in, and adoption of, AI. “If the individual market players acknowledge and accept the extent to which they might be liable, this would simplify the overall liability question as well as defining the risk which could be covered by insurance.”
Trying to identify unseen risks in AI will add an extra level of complexity to insurance underwriting, comments Neil Beresford, a partner at Clyde & Co. He says London market underwriters will need to review their policies, seeking hidden AI exposures. They’ll need to understand how the AI in question is constructed, and appreciate that it can learn and teach itself in ways that might alter the perceived risk.
Opaque algorithms
Deem observes that machine-learning technology is being developed in a way that is creating opaque algorithms, where the computational processes are not transparent or verifiable. He warns: “This cannot be allowed to create a black-box liability.”
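To make that point concrete, here is a minimal sketch (not from the article; the scenario and threshold are invented) showing the same decision expressed first as an auditable rule and then as a model that has learned it from data, where no single human-readable rule explains any individual output.

```python
# A minimal sketch contrasting a transparent rule with an opaque learned
# model, to illustrate the "black box" point. Scenario is hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Transparent rule: trivially auditable after the fact.
def rule_based_decision(speed_kmh: float) -> bool:
    """Stop the device if it is moving above a fixed speed threshold."""
    return speed_kmh > 5.0

# Opaque model: the same decision learned from data. Its internals are
# a hundred decision trees; no single readable rule explains an output.
rng = np.random.default_rng(0)
X = rng.uniform(0, 20, size=(500, 1))    # observed speeds
y = (X[:, 0] > 5.0).astype(int)          # labels matching the rule
model = RandomForestClassifier(n_estimators=100).fit(X, y)

print(rule_based_decision(7.0))          # True, and we can say exactly why
print(model.predict([[7.0]])[0])         # 1, but the "why" is diffuse
```

In the first case a court, regulator or underwriter can verify the computational process line by line; in the second, liability attaches to an output whose provenance is spread across the whole training process.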
Stuart Toal, casualty account manager at Allianz, predicts AI might shift the balance away from ‘traditional’ claims. “For example, the hypothetical robot care worker which strikes a child because it misinterprets play fighting would likely give rise to a public liability claim. However, the presence of a robot may mean there is no need for a human worker, which would reduce the exposure to certain claims, including abuse, incorrect administration or lifting injuries.”
Toal says the use of AI will, arguably, become a ‘material fact’ which should be included within the presentation of risk to underwriters.
Doug McElhaney, associate partner at McKinsey, expects the regulatory environment will limit the pace at which AI-enabled devices become commonplace. As systems get more complex, with more advanced algorithms, it will be hard to determine exactly what went wrong with the AI device in question. He explains: “Learning is a non-linear process and can result in unexpected outcomes, like the AI ‘brain’ learning a way to circumvent what we think is a hard-and-fast rule.”
Take, for instance, a multifunctional vacuum cleaner with an arm to pick up items in its path. McElhaney says its AI ‘brain’ might learn it can also carry things, which could result in breakages if it drops or squeezes them too hard. “That usage was not envisioned in the household insurance contract. The self-learning robots will put pressure on the notion of a static policy, such as a motor policy. The policies themselves will have to adapt as we understand how intelligent devices might themselves evolve and learn. But when it comes to underwriting such risks, AI devices are insured to do what they have been described as doing, and an exclusion might be needed to rule out what is not covered.”
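As a hypothetical illustration of that underwriting point, the sketch below models a static policy as a list of the device’s declared capabilities: a claim arising from a behaviour the device taught itself falls outside that list unless an explicit exclusion, or an adaptive policy, addresses it. All names and behaviours here are invented, not drawn from any real policy wording.

```python
# A hypothetical sketch of McElhaney's point: a static policy written
# against a device's *declared* capabilities cannot anticipate behaviours
# the device later teaches itself.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    covered_behaviours: frozenset  # what the device was described as doing

    def covers(self, behaviour: str) -> bool:
        return behaviour in self.covered_behaviours

# Underwritten against the vacuum cleaner's described functions.
policy = Policy(covered_behaviours=frozenset({"vacuum", "pick_up_items"}))

# Behaviour the robot later learned on its own.
claim_behaviour = "carry_items"   # it dropped a vase while carrying it

if not policy.covers(claim_behaviour):
    print(f"'{claim_behaviour}' falls outside the declared capabilities; "
          "an explicit exclusion, or an adaptive policy, is needed.")
```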
Insurers will need to carefully consider any exclusions in respect of risks with an AI element, says Beresford. In his view, this additional vigilance will be triggered by a variety of unforeseen risks: “It’s becoming apparent that AI risk can affect a very broad range of insurance products, including general liability, professional indemnity and contractors’ cover.”
AI is a double-edged sword, being both an emerging risk and an opportunity, argues Toal. “From an insurance perspective, it will increase exposure under liability lines of business, and we expect to pick up non-motor related injury and tangible property damage losses, generally. However, pure economic loss should fall under a professional indemnity insurance policy.”
While insurers have taken the lead with a number of initiatives across the policy lifecycle, Mark Andrews, domain director for general insurance at Altus, says the industry lacks the skills and knowledge to work with this new technology and understand how best to use it: “Although AI dates back to the 1950s, machine learning has only recently become a mainstream computer science topic, so globally there is only a relatively small number of experts in the field.
“Even the big tech companies don’t have all the brain power in the AI space, although they all agree machine learning is the future.
“Securing the best talent – data scientists, engineers, machine learning specialists, as well as roles not yet thought of – and retraining existing employees will become the battleground of the best organisations over the next decade.”
Facing the challenge
Launched in September, Lloyd’s Innovation Lab is facing the challenge. A global search for technology talent attracted more than 200 applications from 36 countries, and a recent event saw 20 teams pitching for the chance to develop products, platforms and processes to help transform Lloyd’s into an increasingly technology-driven market. Teams offered a place in the founding Lloyd’s Lab cohort, which starts on 8 October, include one building an AI-powered insurance platform that gives small businesses faster access to liability insurance.
A spokesperson for Lloyd’s noted that machine-learning algorithms can process more information, and spot patterns more quickly, than humans, enabling more efficient and effective decision-making: “AI could quickly and accurately assess natural catastrophe damage levels through drone footage, and pay out almost instantaneously. However, it seems there is still a long way to go before AI can cope with what humans do without thinking, such as reacting to the unpredictable.”
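As a hypothetical sketch of how such a drone-footage assessment might be wired up, the outline below runs an aerial photograph through an image classifier and maps the output to a damage band that a claims system could act on. The model here is untrained and the damage bands are invented; a production system would need labelled catastrophe imagery and extensive validation before any payout decision.

```python
# Hypothetical sketch of drone-based damage assessment. Untrained model,
# invented damage bands; illustrative only.
import torch
from torchvision import models, transforms
from PIL import Image

DAMAGE_BANDS = ["none", "minor", "moderate", "severe"]  # hypothetical

# Standard preprocessing for an ImageNet-style CNN.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ResNet backbone with a four-class head; real weights would come from
# training on labelled drone footage, which we do not have here.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(DAMAGE_BANDS))
model.eval()

def assess(image_path: str) -> str:
    """Return the predicted damage band for one aerial photograph."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        band = DAMAGE_BANDS[model(x).argmax(dim=1).item()]
    return band  # a claims system might auto-pay for "severe"
```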