Finance Leaders Face AI Governance Gap as Adoption Surges

A survey of more than 400 finance sector organisations has found that governance structures are not keeping pace with the deployment of agentic AI.
TrendAI, a business unit of Trend Micro, surveyed 407 finance, insurance and accounting organisations globally. The findings suggest finance chiefs are approving AI systems that can act autonomously in areas such as fraud detection and compliance, but lack visibility into how those systems operate.
Nearly one in three organisations cannot track or verify the actions taken by autonomous systems: 31% of financial services firms report having no observability or auditability over AI agents.
The research also found that competitive dynamics are influencing approval decisions. According to the survey, 68% of organisations have been pressured to approve AI deployments in the past year despite security concerns, with 15% describing those concerns as extreme but overridden.
Pressure overrides security concerns
Bharat Mistry, Field CTO at TrendAI, says governance maturity has not matched deployment speed.
"Financial services firms are not short of awareness when it comes to AI risk, but awareness alone is not control," Mistry says. "What we are seeing is a widening gap between how quickly AI is being deployed and how well it is being governed. That gap is where risk lives, and it's a problem that's getting worse in light of increased interest in, and uptake of, agentic AI tools."
The report found that only 21% of firms have comprehensive AI policies in place. Just 32% report moderate confidence in their understanding of legal frameworks governing AI.
Meanwhile, 44% cite unclear regulation or compliance standards as a barrier to progress. The absence of standardised regulatory guidance has left many organisations uncertain about which frameworks to prioritise.
Data exposure tops risk concerns
Sensitive data exposure is the top concern for 40% of respondents. An expanded cyber attack surface was cited by 34%, while 32% identified risks tied to autonomous execution and misuse of trusted AI status.
The survey also found limited awareness of emerging threats. Just 30% of organisations recognise prompt injection as a risk, despite its growing use in manipulating AI systems to bypass security controls or extract confidential information.
"Agentic AI changes the equation," Mistry says. "These systems are not just supporting decisions, they are taking action. Without visibility, auditability and clear control mechanisms, organisations are effectively handing over authority without accountability."
Kill switch debate remains unresolved
TrendAI has developed its Vision One platform to provide visibility and control across cloud, endpoints, networks and data. The system is designed to manage cyber risk across the AI lifecycle, from infrastructure to models to users.
The platform responds to a challenge finance leaders face in retaining control over systems that can act without human intervention. According to the survey, 40% of organisations support the introduction of AI kill switches to shut down systems in the event of misuse, but 46% remain uncertain.
"This lack of alignment points to a deeper issue," Mistry says. "Organisations are deploying increasingly powerful AI systems without a shared understanding of when, or how, human intervention should take place when it matters most."