The Financial Conduct Authority, Bank of England, and HM Treasury have issued a joint warning to regulated financial firms, calling on them to urgently strengthen their cyber defences against threats posed by frontier artificial intelligence models.

The statement, published on 15 May 2026, marks one of the most direct interventions from UK financial authorities on the subject of AI-related cyber risk, and sets out clear expectations across governance, vulnerability management, third-party oversight, and incident response.

Regulators warned that frontier AI models can now exceed the cyber attack capability of a skilled human practitioner, operating at greater speed and scale and at significantly lower cost. The authorities stated that if these capabilities are used maliciously, the consequences could threaten firms’ safety and soundness, harm customers, undermine market integrity, and destabilise the wider financial system.

The statement made clear that firms which have underinvested in core cyber security fundamentals face growing exposure as more powerful AI models enter the market.

Boards and senior management at regulated firms are expected to demonstrate sufficient understanding of frontier AI risks, with the FCA and its co-signatories linking governance failures directly to strategic and operational vulnerability. Firms are also expected to review whether existing insurance coverage is appropriate given the evolving threat environment.

On vulnerability management, the regulators highlighted the speed at which frontier AI models can identify and exploit weaknesses across a firm’s entire technology estate. Firms are being asked to triage, prioritise, and remediate vulnerabilities more rapidly and at greater scale than previously required, including through automation where appropriate.

Third-party risk also featured prominently, with the joint statement calling on firms to actively identify, monitor, and manage all external applications, libraries, and services integrated into their networks, including open-source software. Firms are expected to be ready to remediate vulnerabilities flagged by supply chain partners at scale.

On protection, the regulators said firms should consider deploying automated and AI-enabled defences capable of operating at the same speed as AI-driven attacks, rather than relying solely on traditional manual responses.

The statement directed firms to review effective practices on cyber resilience published by the Bank of England, the Prudential Regulation Authority, and the FCA in October 2025, which set out detailed guidance on response and recovery capabilities.

The FCA, Bank of England, and HM Treasury confirmed they will continue monitoring frontier AI developments and engaging with the financial sector through the Cross Market Operational Resilience Group, known as CMORG, which hosted a dedicated Frontier AI Risk Mitigation Webinar on 14 May 2026.

The National Cyber Security Centre has also published supporting guidance, covering preparation for large-scale vulnerability patch events, the implications of frontier AI for cyber defenders, and a framework of questions firms should apply when using AI models to identify vulnerabilities.

The regulators stated the joint statement does not introduce new regulatory requirements but brings together and reinforces existing expectations as the operating environment becomes increasingly complex.