Regulators are sometimes unkindly criticised for having predicted all six of the last three recessions. Although it sometimes pays to be cautious, it is important that excessive caution does not stifle innovation and progress. Increasingly, leading regulators are seeking to engineer a smart financial centre where innovation is ever-present and technology is used extensively to enhance value, increase efficiency, manage risks better, and create new opportunities – invariably with the consumer in mind.
Historically, prophets have not always had a good press, even though their messages endure. In the long term, however, it pays to be a visionary, and in today’s hyper-competitive financial services industry practical forward thinking is a sign of real progress. Data-driven regulation and compliance are key to a successful future financial services industry. With regulators acting as the ‘public champion’ for these new data technologies, the automation of regulation and compliance offers significant potential to transform services and support the work of regulators, and the British Virgin Islands (BVI) is leading by example.
Exploring The Sandbox
Remember playing in a sandbox as a child, using your imagination to create shapes and bring ideas to life, but in a safe and controlled environment? Metaphorically, a regulatory sandbox is no different, only this sandbox is where FinTech innovation meets RegTech practicality. The sandbox aims to promote more effective competition in the interests of consumers by allowing both existing and prospective licensees to test innovative products, services and business models in a live market environment, while ensuring that appropriate safeguards are in place.
To this end, a sandbox can help to encourage more FinTech experimentation within a well-defined space and duration, where the regulator provides the requisite regulatory support, with the fourfold aim of increasing efficiency, managing risks better, creating new opportunities and improving people’s lives. The sandbox is an experiment for regulator and regulated alike. It is the first time that many regulators have allowed licensees to test in this way, and interest is growing rapidly.
From FinTech To RegTech
The data science technologies of artificial intelligence (AI), the Internet of Things (IoT), big data and behavioural/predictive analytics, and the blockchain are all poised to revolutionise regulation and compliance and to create a new generation of Regulatory Technology, or RegTech, start-ups. Examples of current RegTech systems include chatbots and intelligent assistants for public engagement; robo-advisors to support regulators; real-time management of the compliance ecosystem using IoT and blockchain; automated compliance/regulation tools; compliance records stored securely in blockchain distributed ledgers; online regulatory and dispute resolution systems; and, in future, regulations encoded as understandable and executable computer programs.
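To give a flavour of the last item, the sketch below shows how a simple reporting rule might be expressed as executable code. It is a minimal illustration in Python: the threshold, the field names and the rule itself are invented for the example and are not drawn from any actual rulebook.

```python
from dataclasses import dataclass

# A minimal sketch of a regulation expressed as executable code.
# The rule and threshold below are hypothetical illustrations only.

@dataclass
class Transaction:
    customer_id: str
    amount: float          # transaction value
    kyc_verified: bool     # whether customer due diligence is complete

REPORTING_THRESHOLD = 10_000.0  # hypothetical large-transaction threshold

def requires_report(tx: Transaction) -> bool:
    """Hypothetical rule: report any transaction at or above the threshold,
    or any transaction by a customer whose due diligence is incomplete."""
    return tx.amount >= REPORTING_THRESHOLD or not tx.kyc_verified

if __name__ == "__main__":
    tx = Transaction(customer_id="C-001", amount=12_500.0, kyc_verified=True)
    print(requires_report(tx))  # True: the amount exceeds the threshold
```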
Automation is all the rage, but why is this happening and what are the benefits? In short, money: cost savings and greater efficiency are both imperatives and key drivers. The intended benefits of this automation are likely to be reduced costs for financial services firms, as well as the removal of a key barrier to entry for FinTechs seeking to join financial services markets.
Regulators collect huge volumes of data (increasingly open sourced), and this presents major opportunities for so-called Big data analytics. In general, Big data provides the opportunity to examine large and varied data sets to uncover hidden patterns, unknown correlations, customer preferences and so on. Big data encompasses a mix of structured, semi-structured and unstructured data gathered formally through interactions with citizens, social media content, text from citizens’ emails and survey responses, phone call data and records, data captured by sensors connected to the Internet of Things, and more. Big data is characterised by the increasing volume of data, the variety of data being generated by organisations and the velocity at which that data is created and updated, often referred to as the ‘3Vs’ of Big data.
Artificial Intelligence And Behavioural Analytics
AI technologies power intelligent personal assistants, such as Apple Siri and Amazon Alexa, ‘Robo’ advisors, and autonomous vehicles. AI provides computers with the ability to make decisions and learn without explicit programming. Three main branches stand out: machine learning, a type of AI program that can learn without explicit programming and adapt when exposed to new data; natural language understanding, the application of computational techniques to the analysis and synthesis of natural language and speech; and sentiment analysis, the process of computationally identifying and categorising the opinions expressed in a piece of text.
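As a minimal illustration of the last of these, the Python sketch below scores a piece of text against two small word lists. It is purely illustrative: real sentiment analysis systems are built on trained machine-learning models, and the word lists here are invented for the example.

```python
# A crude, illustrative lexicon-based sentiment scorer.
# The word lists are invented purely for this example.

POSITIVE = {"good", "excellent", "helpful", "clear", "satisfied"}
NEGATIVE = {"bad", "poor", "unclear", "delayed", "complaint"}

def sentiment_score(text: str) -> int:
    """Positive words add 1 to the score, negative words subtract 1."""
    words = text.lower().split()
    return (sum(1 for w in words if w in POSITIVE)
            - sum(1 for w in words if w in NEGATIVE))

print(sentiment_score("The guidance was clear and the staff were helpful"))  # 2
print(sentiment_score("The response was delayed and the form was unclear"))  # -2
```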
Closely related to Big data are behavioural and predictive analytics, which focus on providing insight into the actions of people. Behavioural analytics centres on understanding how consumers act and why, enabling predictions about how they are likely to act in the future. Predictive analytics is the practice of extracting information from historical and real-time data sets to determine patterns and predict future outcomes and trends. Predictive analytics ‘forecasts’ what might happen in the future with an acceptable level of reliability, and includes what-if scenarios and risk assessment.
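The sketch below gives a deliberately simple taste of predictive analytics: a straight-line trend fitted to a short series of historical figures and projected one period ahead. The complaint counts are invented for illustration; real predictive models are considerably more sophisticated.

```python
# A minimal sketch of predictive analytics: fit a linear trend to
# historical monthly complaint counts and project the next month.
# The figures are invented for illustration only.

history = [120, 135, 150, 158, 172, 185]  # complaints per month, oldest first

def linear_forecast(values: list[float]) -> float:
    """Ordinary least-squares fit of y = a + b*x, then predict x = len(values)."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    b = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
         / sum((x - x_mean) ** 2 for x in xs))
    a = y_mean - b * x_mean
    return a + b * n

print(round(linear_forecast(history)))  # projected complaints next month
```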
Blockchain Technologies
Perhaps the most talked-about technologies in FinTech are the blockchain technologies. These include Distributed Ledger Technology (DLT), a decentralised database where transactions are kept in a shared, replicated, synchronised, distributed book-keeping record secured by cryptographic sealing; and Smart Contracts, computer programs that codify transactions and contracts and in turn ‘legally’ manage the records in a distributed ledger.
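The toy Python sketch below illustrates the core idea of cryptographic sealing in a ledger: each record carries the hash of the record before it, so any later tampering breaks the chain. Real DLT platforms add peer-to-peer networking, consensus mechanisms and digital signatures on top of this; the filings shown are invented for the example.

```python
import hashlib
import json

def seal(record: dict, previous_hash: str) -> dict:
    """Seal a record by hashing its contents together with the previous hash."""
    payload = json.dumps(record, sort_keys=True) + previous_hash
    return {"record": record,
            "previous_hash": previous_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

ledger = []
prev = "0" * 64  # placeholder 'genesis' hash
for entry in [{"filing": "AML return", "firm": "Firm A"},
              {"filing": "Capital adequacy report", "firm": "Firm B"}]:
    block = seal(entry, prev)
    ledger.append(block)
    prev = block["hash"]

# Verification: recompute each hash and confirm the chain is intact.
intact = all(
    b["hash"] == hashlib.sha256(
        (json.dumps(b["record"], sort_keys=True) + b["previous_hash"]).encode()
    ).hexdigest()
    for b in ledger
)
print("ledger intact:", intact)
```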
Automating Regulation & Compliance
A core focus of regulators of late has been the challenge of Digital Regulatory Reporting (DRR) and weighing up the pros and cons of each of the following concepts: disambiguation of reporting requirements; a common data approach; mapping requirements to firms’ internal systems; a mechanism for firms to submit data to regulators; the use of standards to assist the implementation of DRR; a common data model; application programming interfaces; DLT networks; and disambiguation of regulatory text.
The potential benefits of DRR can be summarised as less time and cost to comply with regulatory reporting requirements, greater consistency in the information provided, and enhanced information sharing between firms, particularly for internal risk mitigation.
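As a rough sketch of what machine-readable reporting could look like, the Python example below expresses a reporting requirement as data (fields and constraints) and applies it automatically to a firm’s submission. The schema, field names and sample return are hypothetical and do not reflect any actual DRR standard.

```python
# A minimal sketch of Digital Regulatory Reporting: a requirement
# expressed as a machine-readable schema and checked automatically.
# The schema and the sample return are hypothetical.

REPORT_SCHEMA = {
    "firm_id":        {"type": str, "required": True},
    "reporting_date": {"type": str, "required": True},
    "total_assets":   {"type": float, "required": True, "min": 0.0},
    "liquid_assets":  {"type": float, "required": True, "min": 0.0},
}

def validate_return(submission: dict) -> list[str]:
    """Return a list of rule breaches; an empty list means the return passes."""
    errors = []
    for field, rule in REPORT_SCHEMA.items():
        if field not in submission:
            errors.append(f"missing field: {field}")
            continue
        value = submission[field]
        if not isinstance(value, rule["type"]):
            errors.append(f"{field}: expected {rule['type'].__name__}")
        elif "min" in rule and value < rule["min"]:
            errors.append(f"{field}: must be >= {rule['min']}")
    return errors

sample = {"firm_id": "BVI-0001", "reporting_date": "2024-03-31",
          "total_assets": 125_000_000.0, "liquid_assets": 18_500_000.0}
print(validate_return(sample) or "return accepted")
```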
Regulation And Legal Status Of Algorithms
Legal redress for algorithm failure seems straightforward: if something goes wrong with an algorithm, just sue the humans who deployed it. But it may not be that simple. For example, if an autonomous vehicle causes a death, does the lawsuit pursue the dealership, the manufacturer, the third party who developed the algorithm, or the driver, or does the blame lie with the other party’s illegal behaviour? This stimulates the debate as to whether or not algorithms should be given a legal personality in the same way as a company.[i]
As we know, a ‘legal person’ is a non-human entity that has legal standing in the eyes of the law. A graphic example of a company having legal personality is the offence of corporate manslaughter, a criminal offence in which an act of homicide is committed by a company or organisation. Another important principle of law is that of agency, whereby a principal gives legal authority to an agent to act on the principal’s behalf when dealing with a third party. An agency relationship is a fiduciary relationship. It is a complex area of law, with concepts such as apparent authority, where a reasonable third party would understand that the agent had authority to act.
As the combination of software and hardware produces intelligent algorithms that learn from their environment and may become unpredictable, it is conceivable that, with the growth of multi-algorithm systems, decisions with far-reaching consequences for humans will be made by algorithms. It is this potential for unpredictability that supports the argument that algorithms should have a separate legal identity, so that due process can follow where unfairness arises. The alternative would be to adopt a regime of strict liability for those who design or place dangerous algorithms on the market, to deter behaviours that appear, or turn out, to have been reckless. Is this a case of bolting the stable door after the horse has bolted?
Global Collaboration And Coordination
The situation remains fluid, with increasing signs of international coordination, which is most welcome. One example is the collaboration of financial regulators and related organisations to create the Global Financial Innovation Network (GFIN), building on earlier proposals for a ‘global sandbox’, a network for collaboration and shared experience of innovation. Formally launched in 2019, GFIN is designed to be an inclusive community of financial services regulators and related organisations that now numbers more than 60, with expansion inevitable. It is in this collaborative and pioneering environment that the BVI has the opportunity to become a leading player in shaping the FinTech regulatory landscape of the future.
Footnotes:
[i] The case of Salomon v A. Salomon & Co. Ltd established the principle of “separate legal personality” as provided for in the Companies Act 1862 and as still provided for in the Companies Act 2006 under UK company law.
Simon Gray
Simon is a senior financial services professional with a strong GCC and international background and major experience in both the public and private sectors, with an established track record of success as both regulator and regulated. He has significant experience in corporate governance and in turning around compliance and risk functions amid increasing regulatory scrutiny. He combines an investigative background with more than 20 years of first-hand experience of corporate governance, compliance, AML/CTF and risk management within the diverse international financial services sector, including designing and running comprehensive training programmes. He has significant policy and educational experience, and has served as accountable executive for new regulatory implementation with close board-level liaison. His remit has included the supervision of Islamic financial institutions as well as conventional firms offering Islamic windows.
Prof. Philip Treleaven
Philip Treleaven is Director of the UK Centre for Financial Computing & Analytics (www.financialcomputing.org) and Professor of Computing at UCL. Twenty-five years ago his research group developed much of the early fraud detection technology and built the first insider dealing detection system for the London Stock Exchange. For the past 16 years Prof. Treleaven’s research group has developed algorithmic trading systems with many of the leading investment banks and funds, and for the past five years they have worked on HFT trading risk and systemic risk with the Bank of England and the FCA. Current research includes the application of machine learning and blockchain technology to financial asset management and “Algorithmic Regulation”.
UCL Computer Science is the leading UK centre for AI (http://ai.cs.ucl.ac.uk/) and blockchain research (http://blockchain.cs.ucl.ac.uk/).