INFORMATION TECHNOLOGY
OECD publishes paper on generative AI in the financial sector.
The Organisation for Economic Co-operation and Development (OECD) has published a paper on generative artificial intelligence (AI) in finance. The paper notes that the deployment of AI in the financial sector has increased in recent years, presenting opportunities for efficiency gains but also carrying important risks.
How is AI being used in finance?
In particular, the report outlined the relatively slow-paced deployment of AI in finance, explained in part by the heavy regulation of the financial sector. Other factors include risks of data privacy breaches affecting financial market participants and a lack of sufficient explainability in AI models.
The most common cases of AI adoption in finance, according to the OECD, are in process automation.
What are the main concerns regarding generative AI use in the financial sector?
In particular, the OECD highlighted, among other concerns:
- risks of bias, discrimination, and unfair outcomes;
- lack of explainability, which is the 'difficulty in understanding why and how AI-based models generate results;'
- data-related risks, such as data quality, privacy, and concentration of data;
- risks to intellectual property (IP) and copyright;
- poor reliability and accuracy of AI-driven model outputs;
- governance-related risks, such as lack of accountability and transparency;
- financial stability-related risks, including herding, volatility, flash crashes, interconnectedness, and concentration; and
- competition-related risks, considering the concentration of activity in a small number of providers.
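To make the bias and unfair-outcomes concern concrete, a minimal sketch of one widely used safeguard is shown below: checking a model's decisions against the "four-fifths rule", under which the approval rate for one group should be at least 80% of the rate for the most-favoured group. This example is illustrative only and is not drawn from the OECD paper; the function names and data are hypothetical.

```python
# Illustrative bias check (not from the OECD paper): the "four-fifths
# rule" flags a model when one group's approval rate falls below 80%
# of another group's. All names and data here are hypothetical.

def approval_rate(decisions):
    """Fraction of decisions that are approvals (True)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group approval rate to the higher one."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    lo, hi = sorted([rate_a, rate_b])
    return lo / hi

# Hypothetical loan decisions for two demographic groups.
group_a = [True, True, True, False]   # 75% approved
group_b = [True, True, False, False]  # 50% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.67, below the 0.8 threshold
```

A check like this is only a coarse screen; supervisors and firms typically combine such metrics with explainability tooling and human review, in line with the governance themes the OECD raises.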
What does the OECD recommend for using generative AI in the financial sector?
The OECD notes that AI systems must function in a robust, secure, and safe way throughout their life cycles, and that potential risks should be continually assessed and managed. In particular, risks of generative AI use in finance need to be identified and mitigated to support and promote responsible and safe AI without stifling innovation.
The OECD called for policy consideration and potential action, specifically in the following areas:
- promoting safeguards against the risk of bias;
- encouraging efforts to improve explainability;
- strengthening data governance;
- promoting safety and resilience, including protection against deception and market manipulation;
- promoting international multi-disciplinary collaboration;
- educating, raising awareness, and investing in skills;
- promoting a human-centric approach; and
- strengthening model governance and disclosure.
The OECD further noted that any guidance or policy will also need to be future-proof to withstand the rapid pace of innovation in the AI field.