Artificial intelligence (AI) is transforming the financial services industry. For tax-neutral IFCs such as Cayman, Bermuda, and Jersey, it has the potential to increase competitiveness and facilitate economic diversification. The benefits could be enormous, but to realise this potential, jurisdictions will have to be open to the new technology. Two factors underpin such openness: first, enabling businesses to access the skills to ensure that AI can be implemented successfully and appropriately; second, avoiding excessively prescriptive and precautionary restrictions on the development and use of AI.
Discussions of AI tend to focus on the astounding abilities of OpenAI’s ChatGPT and other so-called ‘generative AIs’ (GAIs). These use neural networks trained on enormous data sets, resulting in models with billions or even trillions of parameters — hence the “large” in “large language models”. I will come back to GAIs, but first it is worth noting that simpler predictive AIs have been in widespread use in the financial services industry for decades. These work on the same basic principles, but typically use smaller, more discrete sets of training data.[1] And, like GAIs, they are playing an increasingly important role in the industry.
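To give a sense of what a ‘parameter’ is, the toy calculation below counts the weights in a small fully connected network; an LLM applies the same bookkeeping across billions or trillions of such weights. The layer sizes are arbitrary illustrations, not drawn from any real model.

```python
# Toy parameter count for a small fully connected neural network.
# Each layer mapping n_in -> n_out units has n_in * n_out weights
# plus n_out biases; LLMs scale this to billions of weights.
layer_sizes = [512, 1024, 1024, 512]  # arbitrary illustrative sizes

params = sum(
    n_in * n_out + n_out
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
)
print(f"{params:,} parameters")  # ~2.1 million for this tiny network
```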
Visa first implemented a neural network-based fraud detection system in 1993 and as of 2019 estimated that its system helped prevent $25 billion in fraud annually.[2] Such systems work by modelling individual card users’ payment patterns and related risk parameters; those models are then used to evaluate subsequent card use in order to identify and flag attempted payments that do not fit the pattern — a task that would be impossible for humans. Many other financial institutions have subsequently implemented similar systems to detect and prevent fraud.
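Visa’s production system is proprietary, but the underlying idea, learning a model of a cardholder’s normal payment behaviour and flagging transactions that deviate from it, can be illustrated with a toy sketch. The example below uses scikit-learn’s IsolationForest as a stand-in anomaly detector; the features (amount, hour, distance from home) and the simulated data are invented for illustration and describe no real system.

```python
# Toy illustration of pattern-based fraud flagging.
# Assumes numpy and scikit-learn are installed; all features and
# distributions are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated history of one cardholder's payments:
# [amount_usd, hour_of_day, km_from_home]
history = np.column_stack([
    rng.normal(40, 15, 500),   # typical small purchases
    rng.normal(14, 3, 500),    # mostly daytime
    rng.normal(5, 2, 500),     # close to home
])

# Learn the cardholder's "normal" pattern.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(history)

# Score new transactions: -1 = anomalous (flag for review), 1 = normal.
new_payments = np.array([
    [35.0, 13.0, 4.0],      # fits the established pattern
    [2500.0, 3.0, 6000.0],  # large, 3am, far from home: likely flagged
])
print(model.predict(new_payments))
```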
Predictive AI is also increasingly being used to improve the efficiency and effectiveness of anti-money laundering (AML) systems. Traditional AML systems rely on simplistic rules that generate ‘suspicious activity reports’ (SARs) in response to some combination of factors (eg if an account holder received more than $10,000 from a foreign account that had not previously sent them money). But such systems can be very costly to operate, as they are labour-intensive and typically generate large numbers of false positive SARs, which leads to time and money wasted on unnecessary investigations, as well as adversely affecting customer relationships.
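To make the contrast concrete, here is a minimal sketch of the kind of hard-coded rule described above. The field names and the $10,000 threshold are hypothetical; the point is that a single blunt condition fires on every matching transaction, legitimate or not, which is where the flood of false positives comes from.

```python
# Minimal sketch of a traditional rules-based SAR trigger.
# Field names and the $10,000 threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount_usd: float
    sender_country: str
    sender_known: bool  # has this sender paid the account holder before?

def triggers_sar(tx: Transaction, home_country: str = "KY") -> bool:
    """Fire a suspicious activity report on one blunt rule:
    a large payment from a foreign, previously unseen sender."""
    return (
        tx.amount_usd > 10_000
        and tx.sender_country != home_country
        and not tx.sender_known
    )

# Both of these fire, although the first might simply be a new client
# paying an invoice -- a false positive a human must then investigate.
print(triggers_sar(Transaction(15_000, "GB", False)))  # True
print(triggers_sar(Transaction(50_000, "RU", False)))  # True
```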
In recent trials in the UK and Hong Kong, HSBC found that Google Cloud’s AML system, trained on HSBC’s data, was able to detect between two and four times as many instances of money laundering as traditional rules-based systems, while generating 60 per cent fewer alerts.[3] If financial institutions and their service providers in IFCs were to deploy such AI-based AML systems, they could potentially save significant resources on unnecessary investigations and improve the detection, prosecution, and prevention of money laundering. This would help ensure that both the financial institutions and the IFCs remain compliant with international norms while improving their competitiveness.
But while the effects of predictive AIs are significant, GAIs are likely to be transformative. Consider law: With the right prompts, ChatGPT and similar large language models (LLMs) can produce in minutes decent first drafts of company formation documents or contracts that would take a trained lawyer several hours. And they can draft such documents in any language for any legal system. While such documents would certainly require the eye of a lawyer, even that task can be minimised by using different bots to check the work. The same goes for other tasks: A study from 2022 found that Relativity’s Text IQ system can increase the efficiency of document review 10-fold, while reducing both the time taken and the error rate by 90 per cent, and cutting the cost for clients by 75 per cent.[4]
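As a hedged illustration of the draft-then-check workflow described above, the sketch below uses OpenAI’s Python client to have one model draft a document and a second model review it. The model names and prompts are illustrative assumptions, not a recommended practice, and any real deployment would still route the output to a qualified lawyer.

```python
# Sketch of a draft-and-review workflow for a company formation document.
# Model names and prompts are illustrative assumptions. Requires the
# OPENAI_API_KEY environment variable to be set.
from openai import OpenAI

client = OpenAI()

def ask(model: str, system: str, user: str) -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content

# Step 1: a first draft, of the kind a junior lawyer might prepare.
draft = ask(
    "gpt-4o",
    "You are a corporate lawyer drafting documents under Cayman Islands law.",
    "Draft articles of association for an exempted company.",
)

# Step 2: a different model checks the first model's work.
review = ask(
    "gpt-4o-mini",
    "You are a senior lawyer reviewing a colleague's draft for errors.",
    f"List any defects or omissions in this draft:\n\n{draft}",
)

print(review)  # A human lawyer still signs off on the final document.
```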
The story for accounting is similar. While automation has arguably been ongoing in that profession for longer, AIs such as Microsoft’s Copilot will enable accountants to work more quickly and efficiently, with fewer mistakes.
It's important to note that this does not mean mass unemployment. As with other technological revolutions, the main effect of GAI is an increase in productivity. By enabling each person to do more, GAI will increase output and create new opportunities. It has been compared to the shift from the horse and cart to the steam train; it may be more like going from the Ancient Greek trireme to a jet airplane. JP Morgan Asset Management estimates that GAI will result in productivity gains of between 1.4 and 2.7 per cent per year for a decade.[5]
But jobs will change. For example, many of the tasks currently done by entry-level professionals will be done by GAIs. Since there will still be a need to train professionals to oversee the GAIs, liaise with clients and so on, hiring won’t stop (though it might slow down). But the role of those entry level professionals will have to change. Maybe they will be more directly involved in more interesting work at an earlier stage of their career than has previously been the case. And there will perhaps be fewer lawyers pulling 48-hour stints finalising documents. Recent surveys found that the use of GAIs improved job satisfaction.[6]
And some jobs will disappear. For example, compliance professionals whose main job is to ensure that forms have been correctly completed will likely need to retrain because AIs are simply better at checking forms and they can do it all day and all night at practically no cost. At the same time, it will become increasingly necessary for at least some staff to be familiar with AI. As such, there will be a greater need for retraining and skills development. Perhaps those compliance professionals can retrain as prompt engineers.
One issue many companies domiciled in IFCs such as Cayman are likely to face is a lack of expertise in implementing privacy-preserving AI-based solutions. Given recent concerns regarding the disclosure of training data by LLMs, companies may well be reluctant to use LLMs that run queries through a centralised system that could inadvertently expose client data.[7] As such, there will be a need to implement AIs locally.
This may lead to the development of niche consulting businesses that can implement open-source AI tools and models such as Ollama[8] or Alpaca-LoRA.[9] It may also lead to the development of more easily implemented, off-the-shelf, scalable AIs that can be downloaded and run by relative novices (full disclosure: I am an advisor to Euler Digital, which is building just such a scalable AI, called Bezoku[10]).
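As a hedged sketch of what local implementation can look like, the example below sends a query to a model served by Ollama on the same machine, so no client data is transmitted to a third party. It assumes Ollama is installed and serving on its default port with a model already pulled; the model name and prompt are illustrative.

```python
# Sketch of a locally hosted query via Ollama's HTTP API, so client
# data never leaves the machine. Assumes Ollama is running on its
# default port (11434) with a model already pulled; the model name
# and prompt are illustrative assumptions.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # any locally pulled open-source model
        "prompt": "Summarise the KYC risk factors in these client notes.",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```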
In the longer term, GAI presents IFCs with an opportunity to diversify the economy. But for that to happen, the next generation of entrepreneurs and employees will have to be ‘AI natives’ (just as the current generation are ‘Internet natives’). That means education, workforce training and skills development that is oriented towards AI and related fields. To that end, a group of us recently established a non-profit in Cayman, 345 Robotics, exclusively devoted to enabling kids aged eight and up to participate in AI and STEM-related activities through designing, building and competing with robots.[11] We hope to be part of the solution — and encourage others to support the initiative — but many other programmes will be needed.
While GAI presents significant opportunities, it also presents challenges. A well-trained GAI may be less error-prone than a human, especially when performing specific tasks for which it is trained. However, AIs can nonetheless make errors and, as has been widely reported, even ‘hallucinate’. For these reasons, it is important that GAI continue to be governed by humans. But what does such governance look like?
Some fearmongers claim that AI represents an existential threat and that this justifies precautionary, pre-emptive, top-down regulatory intervention. But such regulations would benefit larger firms that are better able to comply — which at least partly explains why executives at some big AI firms seem supportive of such regulation.[12] It would thereby crowd out many smaller firms and open-source initiatives, limiting competition, and centralising power in a few companies. While it would also slow down innovation, it would do so primarily by reducing beneficial innovations, including those that could combat any harmful effects.
Worst of all, top-down regulation has the potential to lead to a small number of highly centralised AIs. If one of those AIs also had control of physical systems, such as weapons, energy, and/or other infrastructure, then it could become an existential threat. This is the Skynet envisioned in the Terminator movies, or the machine intelligence of The Matrix. In other words, precautionary regulation could ultimately be counterproductive, bringing about the very threat it is intended to prevent.[13] But even that is highly unlikely.
By contrast, in the absence of precautionary top-down regulation, it is far more likely that there will be a multiplicity of AIs, providing us with choice and competition, and helping us to innovate and increase our productivity rather than threatening us with extermination. Competition between AIs is also likely to be an important part of the remedy for other problems, ranging from errors to bias.[14]
But competition alone may not be enough. To prevent and remedy harm, it is important to hold the party responsible for harm to account. That is the job of the legal system. The combination of common law and constitutional statutes that provide the legal framework in Cayman and many other IFCs will in most cases offer effective remedies. For example, a company that deploys an AI can be held liable if it breaches a contract or if the AI violates a person’s privacy.
However, to address situations in which a third party might be harmed (for example, if an AI is operating a vehicle that crashes), it may be necessary to introduce legislation. This is because the modern law of torts is largely fault-based and premised upon humans having a duty of care that meets a certain standard. Since AIs are not humans, there is some uncertainty as to how the courts would address situations in which an AI is the proximate cause of harm. One solution would be to apply the law of vicarious liability as it applies to animals, holding the owner strictly liable for any harm. However, that might result in the perverse situation that AIs are effectively held to a higher standard than humans, in which case an alternative would be to apply vicarious liability while adopting a reasonableness standard for the AI’s conduct.[15]
I will leave the closing summary to Claude, Anthropic’s LLM (fed the above, I think it does a decent job):
Artificial intelligence has the potential to greatly benefit financial services in international financial centres like the Cayman Islands. Predictive AI is already being used to detect fraud and money laundering more effectively and efficiently. More transformative generative AI can automate routine legal, accounting, and compliance tasks, boosting productivity enormously. However, this will disrupt jobs, so education and retraining will be critical. Cayman has an opportunity to develop expertise in implementing privacy-preserving AIs and to diversify its economy, but only if it invests heavily in STEM education for the next generation. Excessive regulation of AI risks hampering innovation and competition. Instead, relying on existing legal frameworks like tort law and competition should help minimise harm from AI while enabling us to reap the benefits. Targeted new laws may be needed to clarify liability relating to autonomous systems. Overall, an open, light-touch approach will likely maximise benefits from AI advances.
1 https://www.bmc.com/blogs/neural-network-introduction/
2 https://usa.visa.com/about-visa/newsroom/press-releases.releaseId.16421.html
3 https://cloud.google.com/blog/topics/financial-services/how-hsbc-fights-money-launderers-with-artificial-intelligence
4 https://www.relativity.com/blog/could-it-be-unethical-not-to-use-ai/
5 https://am.jpmorgan.com/content/dam/jpm-am-aem/global/en/insights/The%20transformative%20power%20of%20generative%20AI.pdf
6 https://hrreview.co.uk/hr-news/reward-news/generation-ai-new-wave-of-workers-adopt-generative-ai-in-pursuit-of-greater-job-satisfaction/373017; https://www.oecd.org/coronavirus/en/data-insights/what-do-workers-and-employers-think-about-ai-in-the-workplace
7 https://www.zdnet.com/article/chatgpt-can-leak-source-data-violate-privacy-says-googles-deepmind/
9 https://www.datacamp.com/blog/12-gpt4-open-source-alternatives
12 https://www.businessinsider.com/ai-leaders-are-fighting-over-claims-ai-poses-extinction-threat-2023-11
13 This is an example of a paradoxical peril arising from the application of the precautionary principle: https://scholarlycommons.law.wlu.edu/wlulr/vol53/iss3/2/
14 This is supported by evidence from the history of technologies from bread and beer to ride sharing: https://laweconcenter.org/resources/consumer-protection-in-the-21st-century/
15 See eg https://digitalcommons.osgoode.yorku.ca/cgi/viewcontent.cgi?article=3678&context=ohlj
Julian Morris FRSA
Julian Morris has 30 years’ experience as an economist, policy expert, and entrepreneur. In addition to his role at ICLE, he is a Senior Fellow at Reason Foundation and a member of the editorial board of Energy and Environment. Julian is the author of over 100 scholarly publications and many more articles for newspapers, magazines, and blogs. A graduate of Edinburgh University, he has master’s degrees from UCL and Cambridge, and a Graduate Diploma in Law from Westminster. In addition to his more academic work, Julian is an advisor to various businesses and a member of several non-profit boards.