Responsible AI Governance: A Competitive Edge for CFOs

David Fearne is Vice President of AI at NTT DATA
As finance teams deploy AI at scale, CFOs must see governance frameworks as strategic enablers of innovation rather than compliance costs

AI has moved beyond the pilot phase in financial services. Chief Financial Officers now face critical decisions about how AI systems influence credit risk, fraud detection, regulatory reporting and customer-facing operations.

Financial services remain one of the most heavily regulated sectors globally, where accountability and transparency carry material consequences. As frameworks such as the EU AI Act reshape the compliance landscape, CFOs must ensure their organisations balance innovation with governance.

David Fearne, Vice President of AI at NTT DATA, works with banks, insurers and fintech providers to operationalise AI in ways that are commercially viable and socially responsible. He explains how financial institutions can scale AI systems while turning governance into a source of competitive advantage.

Governance as an innovation enabler

The most successful AI programmes recognise that ethical governance enables innovation to scale rather than constraining it. "The key is to stop treating innovation and governance as competing forces," David says.

This balance begins with design intent. Financial institutions must define which decisions AI can influence, where human oversight is mandatory and how risk tolerance varies by use case. Not all AI systems require identical levels of explainability or control, for example.

Embedding governance into the delivery lifecycle means defining model selection, data provenance, evaluation criteria and escalation thresholds upfront. When governance is operationalised this way, teams can move faster with confidence because they understand the boundaries within which they operate.
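One way to make such upfront definitions concrete is a per-use-case governance record that captures model selection, data provenance, evaluation criteria and escalation thresholds in a single reviewable artefact. The sketch below is purely illustrative; every field name, metric and threshold is a hypothetical example, not a standard from NTT DATA or any regulator.

```python
# Hypothetical per-use-case governance record, agreed before delivery begins.
credit_scoring_policy = {
    "use_case": "retail_credit_scoring",
    "approved_models": ["gradient_boosting_v3"],          # model selection
    "data_provenance": ["core_banking", "bureau_feed"],   # permitted data sources
    "evaluation": {"min_auc": 0.75, "max_subgroup_gap": 0.05},  # acceptance criteria
    "escalation": {"risk_score_above": 0.7, "route_to": "credit_officer"},
    "human_oversight": "mandatory_for_declines",
}

def within_policy(metrics: dict, policy: dict) -> bool:
    """Check a candidate model's evaluation metrics against the policy."""
    ev = policy["evaluation"]
    return (metrics["auc"] >= ev["min_auc"]
            and metrics["subgroup_gap"] <= ev["max_subgroup_gap"])

print(within_policy({"auc": 0.81, "subgroup_gap": 0.03}, credit_scoring_policy))  # True
```

Because the boundaries are written down and machine-checkable, a delivery team can test a candidate model against them before deployment rather than debating them case by case.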

CFOs play a central role in establishing this framework. By treating governance as an investment in sustainable growth rather than a regulatory checkbox, finance leaders can ensure AI initiatives deliver measurable business value while maintaining institutional integrity and stakeholder trust.

Material risks and mitigation strategies

One of the biggest risks financial institutions face is organisational overconfidence, David explains. Many assume that once an AI system performs well in a pilot, it will behave predictably at scale. In reality, scale introduces complexity, edge cases and behavioural drift that are often underestimated.

Another major risk is opacity. When decision-making becomes too difficult to explain, accountability becomes blurred, particularly in customer-facing or credit-related decisions.

Responsible mitigation starts with clear system boundaries. Banks need to define what AI can and cannot do and ensure those constraints are enforced technically. Robust evaluation frameworks, audit logs and escalation mechanisms are essential.
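As a hedged illustration of what technically enforced boundaries can look like, the sketch below (all names and thresholds hypothetical) wraps an AI scoring function in a guardrail that applies hard limits, writes every decision to an audit log, and escalates to a human above a risk threshold.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GuardedDecision:
    """Wraps an AI scoring function with hard limits, an audit log and escalation."""
    score_fn: Callable[[dict], float]     # the AI model (hypothetical stand-in)
    max_auto_amount: float = 10_000.0     # hard boundary: no auto-decision above this
    escalation_threshold: float = 0.7     # risk score above which a human must review
    audit_log: list = field(default_factory=list)

    def decide(self, application: dict) -> str:
        risk = self.score_fn(application)
        if application["amount"] > self.max_auto_amount or risk > self.escalation_threshold:
            outcome = "escalate_to_human"
        else:
            outcome = "auto_approve" if risk < 0.3 else "auto_decline"
        # every decision is logged with its inputs and rationale for later audit
        self.audit_log.append({"application": application, "risk": risk, "outcome": outcome})
        return outcome

guard = GuardedDecision(score_fn=lambda app: min(app["amount"] / 50_000, 1.0))
print(guard.decide({"amount": 5_000}))    # low amount, low risk -> auto_approve
print(guard.decide({"amount": 40_000}))   # exceeds hard boundary -> escalate_to_human
```

The point of the pattern is that the constraint lives in code, not in a policy document: the model cannot act outside the boundary even if its score says otherwise, and the audit log preserves a traceable record of every outcome.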

Human oversight must be meaningful. Humans should not simply rubber-stamp AI outputs but be equipped to challenge and override them, and to feed those interventions back so the system improves as part of an ongoing feedback loop.

Explainability and accountability should be treated as core architectural requirements, not optional features. In financial services, institutions must be able to explain how decisions were reached, who is responsible for them and under what conditions they can be challenged.

Accountability requires clear ownership. AI systems do not make decisions in isolation; people and organisations do. That accountability must be traceable through system design, from data inputs to model behaviour and final outcomes.

Adaptive governance models ahead

The next generation of AI governance in banking will be adaptive and continuous rather than static and point-in-time, according to David. Fixed rulebooks will not keep pace with rapidly evolving models and use cases.

These models will combine technical controls such as real-time monitoring and automated evaluation with organisational accountability, ensuring humans remain clearly responsible for outcomes. Governance will be embedded into systems, not layered on top.
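Real-time monitoring of this kind can be as simple as comparing live behaviour against what was observed at validation time and triggering review when the two diverge. The sketch below is one minimal, hypothetical version: a rolling window over approval decisions that flags the system for governance review when the live approval rate drifts beyond a tolerance around the validated baseline.

```python
from collections import deque

class DriftMonitor:
    """Continuous check (hypothetical thresholds): flag when the live approval
    rate drifts too far from the rate observed at validation time."""
    def __init__(self, baseline_rate: float, window: int = 100, tolerance: float = 0.10):
        self.baseline_rate = baseline_rate
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # rolling window of recent decisions

    def record(self, approved: bool) -> bool:
        """Record one decision; return True if governance review should trigger."""
        self.recent.append(1 if approved else 0)
        if len(self.recent) < self.recent.maxlen:
            return False                    # not enough data to judge drift yet
        live_rate = sum(self.recent) / len(self.recent)
        return abs(live_rate - self.baseline_rate) > self.tolerance

monitor = DriftMonitor(baseline_rate=0.60, window=50)
# a run of unusually frequent approvals should eventually trip the monitor
alerts = [monitor.record(approved=True) for _ in range(50)]
print(alerts[-1])  # True: live rate of 1.0 has drifted well beyond 0.60 +/- 0.10
```

The organisational half of the control is what happens when the flag fires: the alert routes to a named owner, keeping a human clearly responsible for the outcome rather than the monitor itself.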

Greater differentiation by use case is also expected. High-impact decisions will carry stricter controls, while lower-risk applications will be governed more lightly, enabling faster innovation without compromising safety.

NTT DATA focuses on integrating responsible AI principles into existing architectures, workflows and control frameworks. This often involves creating intermediary layers such as evaluation services, audit pipelines and decision orchestration components that sit alongside legacy systems.

Banks that can clearly demonstrate how their AI systems behave, learn and are corrected over time will earn greater trust from regulators and customers alike, turning responsible AI into a competitive advantage rather than a compliance burden.
