Australia Regulator Threatens Enforcement for Poor AI Controls

By Richard Henderson | April 30, 2026

Australia’s top prudential regulator said it will take action against companies that fail to adequately control cybersecurity threats, as concerns within the industry mount over Anthropic PBC’s latest AI model Mythos.

The Australian Prudential Regulation Authority is finalizing a plan to supervise artificial intelligence risks, following a review of banks, insurers and retirement funds conducted late last year that identified several shortcomings. These include information security practices struggling to keep pace with AI threats and over-reliance on third-party AI vendors, according to the regulator.

“Where entities fail to adequately identify, manage or control AI risks in a manner proportionate to their size, scale and complexity, we will take stronger supervisory action and, where appropriate, pursue enforcement,” APRA said in a letter.

The comments from Australia reflect the urgency with which regulators around the world are acting to spur companies to strengthen their AI defenses, as the technology rapidly advances.

APRA is engaging across the sector on the potential for heightened “cyber threats from high capability AI frontier models such as Anthropic Mythos,” according to the letter from Therese McCarthy Hockey, an executive board member. It has heard clear recognition from regulated entities of the need for a step change in cyber practices in an “evolving threat environment,” the letter stated.

APRA called on companies to ensure there are credible fall-back processes where the AI technology supports critical operations and called for “robust security testing across AI-generated code.”

Among other areas of weakness, the watchdog warned about supplier concentration, where firms are heavily reliant on a single provider for multiple AI use cases. There’s an over-reliance on “vendor presentations and summaries without sufficient examination of key AI risks such as unpredictable model behaviour and the impact on critical operations,” according to the letter.

Among its expectations, APRA said companies should prepare for “timely action” when AI tools are “not operating as expected.”

Photograph: The Anthropic logo on a laptop; photo credit: Gabby Jones/Bloomberg
