Artificial Intelligence for All: Rethinking Responsible and Frugal AI in Financial Services

8 April 2026

Category: AI Podcast

The question of how to use Artificial Intelligence (AI) responsibly and efficiently has never been more pressing. In a recent episode of the Cambridge Executive Business Insights podcast series, “Rethinking AI”, Professor Jaideep Prabhu sat down with Firas Ben Hassan, Head of Agentic AI Solutions at Allianz Technology, to discuss the evolving concept of “frugal AI” and the way it is shaping decision-making across the financial services sector.

Much like the debates among urban planners over whom a city should serve and how, the adoption of AI raises fundamental questions: should we always seek to build the most advanced and complex solutions just because we can? Or is there merit, and perhaps necessity, in deliberately choosing simplicity, transparency, and resource stewardship?


Demystifying frugal AI: why less can be more

The term “frugal AI” might seem counterintuitive in an age of ever-larger language models and swelling datasets that dominate both technical headlines and boardroom conversations. And yet, as Firas Ben Hassan notes, frugality in AI is not about austerity for its own sake, but rather about being “smart about how we are using AI”, selecting the right level of technology for a problem and focusing constrained resources, whether financial, technical or environmental, on what truly matters.

The tendency to throw the biggest, most expensive, and most sophisticated models at every challenge – what Firas calls the “bigger is better” fallacy – risks both technical waste and loss of purpose. “It’s like buying a sports car just to go to the grocery shop,” he quips. Instead, he suggests that a small, efficient solution, carefully chosen, can not only be adequate but ideal. This insight is crucial, especially in sectors like insurance and banking, where explainability, cost control and regulatory compliance are paramount.


A three-part framework for frugal AI

Drawing on his experience as both a data scientist and a leader at Allianz, Firas Ben Hassan offers a pragmatic framework for what frugal AI means within the context of financial services:

  1. Efficient use of computing power
    Reducing computational demands is not only about saving money but directly aligned with the financial sector’s growing commitments to sustainability and net-zero goals. As Firas points out, running massive language models consumes vast amounts of energy, and the environmental costs, from carbon emissions to the use of clean water for cooling data centres, are not trivial.
  2. Purposeful deployment
    Not every problem requires an AI solution, and not every AI solution needs to be cutting-edge. There is value in using lighter-weight solutions, sometimes even simple spreadsheet automation, when they address business needs effectively. “Not all the problems need AI. We can innovate even with Excel files,” Firas observes, challenging the current hype cycle.
  3. Responsibility and explainability
    In highly regulated industries, responsible AI is non-negotiable. AI systems must be explainable, interpretable and auditable not just to satisfy regulators, but to earn the trust of customers. “If you cannot explain, you cannot use this model,” Firas cautions. Understanding the rationale behind decisions is indispensable, whether in underwriting, fraud detection, or investment management.


The human-centred vision at Allianz

Throughout the conversation, a recurring theme is Allianz’s commitment to human-centred AI. Rather than trying to replace people outright, AI is seen as a tool to augment human judgement and capability. This “collaborative intelligence” is embedded in Allianz’s philosophy: AI assists people, surfaces insights, flags anomalies, but crucial decisions remain with humans, especially in high-stakes environments like investment and risk management.

Firas articulates a hierarchy of agent autonomy, with three distinct levels:

  • Level 1: agentic AI offers suggestions; humans decide.
  • Level 2: AI can make preliminary decisions, but humans review them before action is taken.
  • Level 3: agents can act independently within defined rules, but humans still set boundaries, monitor outcomes, and override as needed.
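For readers who think in code, the three levels might be sketched as a simple enum with a single guard function. This is a purely illustrative sketch, not Allianz's actual implementation; all names are assumptions.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative encoding of the three levels Firas describes."""
    SUGGEST = 1  # AI offers suggestions; humans decide
    REVIEW = 2   # AI makes preliminary decisions; humans review before action
    BOUNDED = 3  # AI acts within defined rules; humans monitor and override

def requires_human_approval(level: AutonomyLevel) -> bool:
    """At levels 1 and 2, a human must approve before any action is taken;
    at level 3 the agent may act alone inside its guardrails."""
    return level < AutonomyLevel.BOUNDED
```

The point of such an encoding is that the approval rule lives in one place, so "humans in the loop" is a property of the system, not a convention each team remembers to follow.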

While the industry is experimenting with increasing levels of autonomy, Allianz remains cautious, especially at the frontier of full automation, and keeps “humans in the loop and in charge.”


Guardrails and regulatory collaboration


Regulation in financial services is among the world’s most stringent, and for good reason. Financial crises, data breaches or opaque automated decisions can have profound and systemic consequences. Yet Firas does not view regulation simply as a constraint; he also sees it as a driver of innovation in responsible AI.

He recounts how Allianz’s interactions with European, US, and Asian regulators always centre on a few non-negotiable rules:

  • Transparency by design: every AI system must document its data sources, training methods and intended use.
  • Human checkability: for important decisions, a human must be able to interrogate and approve AI’s reasoning.
  • Safety to fail: no AI system is infallible. There must be clear boundaries, ongoing monitoring for anomalies, and well-defined procedures for intervention.

An analogy is the airline industry, where autopilots are used extensively yet always under the watch of human pilots, ready to take over instantly should the unexpected arise.

Allianz’s approach, therefore, is not to resist regulation but to align with it, seeing rigour, explainability and “safe to fail” designs not as burdens but as pillars for the lasting and trustworthy application of AI.


The “good enough” principle: market realities and cultural change


Firas speaks of the idea of “good enough”: focusing on solutions tailored to context and need, not simply those that push technological boundaries for their own sake. He echoes research from emerging markets, where frugal innovation is a necessity, not a luxury: “You try to achieve a good enough solution, because that can be very expensive and can be very hard for people to even use and maintain.”

Allianz employs a rigorous, multi-part test to decide if an AI or analytics model is fit for purpose:

  1. Performance: is the model accurate enough for the task at hand, not perfect, but sufficient?
  2. Explainability: can the model’s actions be understood by regulators, end-users, and internal auditors?
  3. Efficiency: could a simpler baseline model do nearly as well and at far lower cost in terms of computation, maintenance and risk?
  4. Sustainability: is the model resource-efficient and environmentally responsible?
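The four criteria above could be captured as a simple pass/fail check. The sketch below is a hypothetical illustration of how such a test might be codified; the field names, thresholds and margin are assumptions, not Allianz's actual process.

```python
from dataclasses import dataclass

@dataclass
class ModelAssessment:
    """One candidate model under the hypothetical fit-for-purpose test."""
    accuracy: float           # measured task accuracy, 0..1
    required_accuracy: float  # threshold judged sufficient for the task
    explainable: bool         # understandable to regulators, users, auditors
    baseline_accuracy: float  # accuracy of a simpler, cheaper baseline
    resource_efficient: bool  # within the energy/sustainability budget

def fit_for_purpose(m: ModelAssessment, baseline_margin: float = 0.02) -> bool:
    """Pass only if the model is accurate enough, explainable, meaningfully
    better than the frugal baseline, and resource-efficient."""
    return (
        m.accuracy >= m.required_accuracy
        and m.explainable
        and m.accuracy - m.baseline_accuracy > baseline_margin
        and m.resource_efficient
    )
```

Note that the baseline comparison is deliberately a hard gate: if a simpler model comes within a small margin of the complex one, the frugal choice wins by default.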

Firas recounts how Allianz moved from using state-of-the-art, expensive language models to in-house, smaller alternatives, which achieved comparable accuracy at a fraction of the cost, and with vastly improved explainability.


Cultural challenges

At the heart of the transition to frugal AI are not merely technical or regulatory hurdles, but deeply embedded cultural ones. The “shiny object” problem, in which teams or executives are seduced by novelty and hype, is widespread. It can be tempting to seek the most complex and celebrated technologies, fearing that reliance on simpler tools is somehow backward or unambitious.

Moreover, incentives are often misaligned: promotions and accolades may be awarded to those developing the most complex models, not the most efficient or transparent ones. Changing this requires leaders who reward frugality, see value in explainability, and ask hard questions about cost, sustainability and actual improvement in outcomes.

Firas is unequivocal: leadership must shift from being impressed with what is technically possible to focusing on what actually makes sense for the business and its customers. This involves not only education, but the courage to define success differently, by celebrating “elegant, efficient solutions” rather than simply “the biggest models”.


From theory to practice: the AI efficiency scorecard


One practical takeaway from the episode is the suggestion for AI leaders and teams to implement an “AI efficiency scorecard” for every project or proof of concept. Before any new initiative moves forward, this scorecard encourages reflective questioning:

  • What specific problem are we trying to solve?
  • What is the simplest solution that could work, and have we tested this baseline?
  • What will this cost per run, per month, including cloud computing and maintenance?
  • Can we explain how it works to all stakeholders, including regulators and employees?
  • How will we measure if it is actually working (i.e., what KPIs indicate success)?

Such an approach ensures that strategic, frugal thinking is embedded from the outset, rather than retrofitted as an afterthought. It is an antidote to the hype-driven model of innovation, balancing ambition with humility and stewardship.
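A scorecard like this could even be made machine-checkable, so that a proposal cannot advance while any question is unanswered. The sketch below is a minimal, hypothetical rendering of that idea; the question keys and wording are assumptions drawn from the list above.

```python
# Hypothetical AI efficiency scorecard: a project proceeds only when every
# question has a concrete, non-empty answer.
SCORECARD_QUESTIONS = {
    "problem": "What specific problem are we trying to solve?",
    "baseline": "What is the simplest solution that could work, and have we tested it?",
    "cost": "What will this cost per run, per month, including cloud and maintenance?",
    "explainability": "Can we explain how it works to all stakeholders?",
    "kpis": "How will we measure if it is actually working?",
}

def open_questions(answers: dict[str, str]) -> list[str]:
    """Return the scorecard questions that still lack a non-empty answer."""
    return [
        question
        for key, question in SCORECARD_QUESTIONS.items()
        if not answers.get(key, "").strip()
    ]
```

Used as a gate in a project template or review checklist, `open_questions` would surface exactly which reflective questions a team has skipped before a proof of concept moves forward.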


A new model for responsible AI in finance and beyond

The lessons from Allianz’s experience, as explained by Firas, are widely applicable. The best AI systems are those created with a clear sense of purpose, inclusion, and accountability.

Financial services provide a unique lens: systems are complex, interconnected and risk-sensitive, and the costs – environmental, social, reputational – of AI gone wrong can be high. By foregrounding efficiency, explainability and human-centricity, Allianz charts a path that other sectors would do well to study.


What success looks like for frugal AI

Looking ahead, Firas envisions a world where financial firms routinely measure the efficiency and sustainability of their AI, celebrating solutions that are “80% as good as they could theoretically be” but use a fraction of the resources and are “10 times more explainable.” He suggests that chief efficiency officers for AI may soon be as common as chief risk officers, and that the industry will need to reward not just technical prowess, but the wisdom to choose appropriate tools for each challenge.

The risks of not embracing this approach are also clear: an escalating technological arms race, enormous wasted resources and energy, and a proliferation of unmanageable, opaque systems that undermine trust in digital finance.


A call to thoughtful action

Ultimately, the conversation at Allianz and in the wider sector is a case study of how real, responsible technological progress is as much about culture, leadership and continual reflection as it is about breakthroughs in code or algorithms.

The future of AI is not simply a contest to build bigger machines, but a collaborative effort to create systems that are effective, sustainable and, above all, trustworthy. In an era where AI can touch all our lives, from investments and insurance premiums to the protection of sensitive data, this frugal, human-centred approach is not just wise, but essential.

Listen to the full episode – available wherever you get your podcasts.
