Kirsten Kirkwood | Possibilities and pitfalls for professional indemnity insurers
Artificial Intelligence (AI) is arguably one of the most prevalent buzzwords of the past decade, particularly since the advent of ChatGPT in late 2022, when AI became a hotly debated topic beyond the boardroom, at casual dinners and braais. Indeed, AI potentially represents both humanity’s greatest achievement and a prodigious, unknowable risk.
A little-known fact is that the original “AI boom” occurred as far back as the 1980s, so the concept is not new. However, the rapid adoption of AI currently being experienced is largely due to the proliferation of data and to advances in cloud computing – both critical for effective applications of AI.
What is AI? AI is not just one thing; it is an amalgamation of various technologies, including machine learning, deep learning, and natural language processing, among others, combined with data, analytics, and automation. It is used to help businesses drive innovation and achieve their goals faster and, ideally, more accurately.
The Two Faces of AI
As beneficial as AI is to many businesses, it presents something of a dichotomy, a double-edged sword if you will, to liability insurers providing professional indemnity (PI) cover. On the one hand, it poses new risks and creates uncertainty around the risk exposures of various professions, driving insurers to reevaluate and update their insurance policies to reflect the new perils. On the other hand, it streamlines the underwriting process for insurers. Similarly, harnessing the capabilities of AI responsibly can be highly advantageous for professionals; applying it recklessly, however, can open the door to liability claims and litigation.
For many professionals, the benefits of AI include increased accuracy and efficiency: AI tools can assist with tasks such as legal research or financial analysis, potentially reducing human error and leading to a lower frequency or severity of claims.
However, given the perceived capabilities of AI, professionals who use AI tools face heightened client expectations around the quality of work delivered, performance, and accountability, whether contractual or not.
While AI can mitigate certain risks, its implementation introduces new risks. The main concerns currently relate to data privacy, security breaches, copyright infringement, accuracy and liability for errors, algorithmic bias, and system failures.
AI Risks
There is currently no legal framework governing AI usage in South Africa beyond Section 71 of the Protection of Personal Information Act (POPIA), which focuses on automated decision-making. That section limits how businesses can use personal information to make automated decisions that affect individuals. Insurers therefore face the challenge of a rapidly evolving legal and regulatory framework around AI, which could increase their risk exposure during the time it takes to understand and adapt to regulatory changes.
It is fair to say that currently most PI policies do not explicitly address AI-related risks. Determining liability in cases involving AI will be complex, particularly where multiple parties are involved in the development, deployment, and use of AI tools and systems. This complexity could lead to disputes over coverage and claims, as well as disputes over where the liability lies.
For instance, if AI-generated advice or a design produced with AI proved defective, questions would arise as to the proximate cause of the claim: the AI’s error or the insured’s? Did the insured act negligently by relying on an AI tool that held itself out as an expert, or by failing to verify the information or advice it produced? There is added complexity because AI, at least in its current iterations, can blatantly fabricate information.
Recently, when asked “how many rocks should a child eat”, a popular AI bot responded that “UC Berkeley geologists recommend eating at least one small rock per day”. This is obviously false. The point is that, while AI’s advancement to date is groundbreaking, it evidently remains tremendously unreliable, and insured parties must take care.
AI as a Tool for Insurers
Although AI is creating a heightened and more complex risk landscape, it also offers insurers many benefits from a risk management perspective. AI algorithms can predict potential risks and suggest mitigation strategies based on historical data and real-time information fed into the tool. They can also detect suspicious patterns or anomalies in insurance claims data, helping insurers identify potential fraud more effectively. In time, AI tools will enable insurers to revolutionize the assessment and management of professional indemnity risks through advanced data analytics and automation.
Ultimately, AI is an emerging risk that presents both challenges and opportunities for PI insurers. It is a continuously evolving risk and our focus now as insurers must be on developing adaptable coverage solutions that can evolve alongside AI as we learn more about this technology and its capabilities.
The future is here. AI will become more advanced each year, and it is up to insurers (and insureds) to adapt their risk management strategies, risk assessment criteria and tools, and to ensure that this risk is properly considered.
*Kirsten Kirkwood is Senior Professional Indemnity and Clinical Trial Underwriter, SHA Risk Specialists.