Global regulators step up oversight of AI risks in finance

Global financial watchdogs are intensifying their scrutiny of artificial intelligence (AI) in the banking sector, warning that heavy reliance on shared AI systems could threaten financial stability. As the use of AI accelerates across global markets, regulators are moving to monitor systemic risks and strengthen their own technological capabilities.

In a report published Friday, the Financial Stability Board (FSB) — which advises G20 governments — said widespread adoption of the same AI models and infrastructure could create “herd-like behaviour” across financial institutions. “This heavy reliance can create vulnerabilities if there are few alternatives available,” the FSB cautioned, warning that such concentration could amplify shocks during market stress.

A separate study by the Bank for International Settlements (BIS) urged regulators and central banks to “raise their game” in monitoring and using AI. The BIS said authorities must not only understand AI’s potential to reshape markets but also adopt the technology themselves to improve supervision and data analysis.

The report comes amid an international race — led by the United States and China — to dominate next-generation AI tools and applications, including those that underpin financial services.

While the FSB said there is currently “little empirical evidence” that AI-driven correlations have directly affected market outcomes, it warned that the technology could increase exposure to cyberattacks and algorithmic fraud.

Some jurisdictions have already acted. The European Union’s Digital Operational Resilience Act (DORA), which took effect in January 2025, establishes new rules for digital and AI-based systems used by financial institutions.

The emerging consensus among regulators is clear: AI promises efficiency and insight, but without vigilant oversight, it could become a new source of systemic risk in global finance.