The path of artificial intelligence (AI) into meaningful, widespread use is fraught with complexity, risk and cost. In the latest episode of the Cambridge Executive Business Insights: Rethinking AI podcast, Professor Jaideep Prabhu engages Anusha Dandapani, Chief of Data and AI at the UN International Computing Centre (UNICC), in an exploration of frugal AI: a paradigm shift away from sheer power and expansion towards smarter, purpose-driven, and socially impactful innovation.

The stakes are high for global institutions like the United Nations, responsible for serving diverse agencies with distinct mandates, risk appetites and operational realities. AI, in this context, cannot simply be a playground for experimentation; its promise must be tempered by discipline, sustainability, and a commitment to social good. The conversation between Jaideep Prabhu and Anusha Dandapani takes us into the heart of this challenge, offering lessons and frameworks that resonate far beyond the UN.
AI as infrastructure: building common rails for diverse missions
To understand the real-world complexity of AI deployment, it’s worth considering the UN ecosystem. As Dandapani recounts in the podcast, when she stepped into her role, AI was already nascent across numerous UN agencies, cropping up as small pilots and scattered experiments. The excitement was palpable, but so too was the fragmentation and risk. The challenge was clear: AI needed to evolve from a tool to an infrastructure, an institutional architecture that centralises repeated efforts, reduces duplication, minimises wasted investment, and manages uneven risk exposure.
Dandapani describes the creation of the UNICC AI Hub as an initiative to provide “common rails” – secure infrastructure, governance templates and sandbox environments – for agencies to retain sovereignty while benefitting from pooled resources. This approach not only empowers agencies to deploy AI use cases tailored to their mandates but also tackles three recurring barriers:
- Fragmented infrastructure
- Limited financing for scaling AI
- Workforce capacity gaps
The practical effect is a shared digital backbone, supporting deployments like AI-native HR tools that both address functional needs and enable collective learning. Such initiatives move the UN beyond mere pilots, fostering co-investment and reuse across agencies.
From total cost of ownership to social impact: the essence of frugal AI
The conversation soon pivots to frugal AI, a concept developed jointly between UNICC and Cambridge Judge colleagues. Frugal AI is not simply about reducing the computational bill; it’s about shifting the axis of AI evaluation from total cost of ownership to social impact. Dandapani highlights a fundamental risk: “falling in love with the capability but forgetting the sustainability.” In public institutions, AI must justify not only its existence but its value in furthering the organisation’s mission and advancing the UN’s Sustainable Development Goals (SDGs).
This shift is operationalised through measurement frameworks that refuse to treat performance as the sole yardstick. Instead, the framework insists that for each AI deployment, there are three pivotal questions:
- What does it cost?
- What does it change?
- Does it advance our mandate?
Measurement becomes discipline, and the cost of AI is tied to its social and operational outcomes.
The surprising anatomy of AI costs
It’s tempting to assume that the largest costs in AI arise from compute and technical scale. Yet, as Dandapani reveals, a forensic analysis of real AI costs brings surprises. Integration, change management, governance reviews and human factors like upskilling emerge as dominant. “Ghost costs” such as security and cybersecurity audits, data preparation, transparency and risk awareness build up behind the scenes, dwarfing the familiar outlays on hardware and software. Frugal AI, thus, is less about “cheap compute” and more about institutional readiness.
This insight resonates globally. It overturns assumptions of where resources need to be allocated and underscores the need for frameworks that account for the full cost spectrum, especially those human and organisational costs often buried under technical headlines.
Linking AI to sustainable development goals: making impact measurable
In the UN context, performance is not enough. Dandapani makes clear that impact must be visible and measurable, especially when aligning AI initiatives with SDGs. The podcast describes how the framework developed with Cambridge doesn’t attempt to measure SDGs directly but focuses on operational shifts that underpin them:
- Reducing processing backlogs (SDG 16)
- Increasing multilingual access (SDG 10)
- Accelerating climate data analysis (SDG 13)
Sustainability only becomes truly measurable when translated into operational indicators. Linking AI performance to such outcomes brings discipline and relevance to the evaluation of projects and steers investments towards both efficiency and equity.
Governance and risk management: balancing speed with trust
Innovation, especially in heavily regulated contexts, often collides with the realities of governance and compliance. The podcast delves into this tension; banks, for instance, sometimes outsource AI innovation to fintech startups precisely due to compliance headwinds. The UN, operating at the heart of public trust, must walk a tightrope between speed and durability.
Dandapani’s response to this challenge is crisp: “Speed is seductive, but trust is durable.” The capital of public institutions is trust, and this shapes the logic of AI deployment. How is this achieved? Through a mix of AI red teaming, bias audits, accessibility reviews, explainability commitments, and most importantly, a human-in-the-loop approach. These are not afterthoughts but components baked into the design and stewardship of AI systems, ensuring that risk reduction mechanisms compound value over time.
A shared investment model: leveraging reuse across agencies
Measuring success, Dandapani argues, lies not in counting pilots but in the degree of reuse across agencies. By adopting frameworks that enable cost-efficient sharing, agencies reduce duplication and build governance maturity. Investment is shared, risk is collectively managed, and the AI ecosystem evolves not as a set of isolated experiments but as a portfolio of strategic assets.
Rather than treating governance processes such as managing bias, explainability and audit as mere overhead, UNICC regards them as investments in infrastructure – a capital base – building an inventory of models, governance mechanisms and a sustainable deployment pipeline.
AI sandbox: from experimentation to institutionalisation
A critical innovation in the UNICC AI Hub is the “sandbox infrastructure.” While sandboxes are traditionally associated with pilots and risk reduction, Dandapani’s vision goes deeper: sandboxes offer vetted and validated toolkits for quick adoption, built-in audit and oversight, and one-time approval followed by continuous supervision.
This model converts the sandbox from an isolated experiment into a structural mechanism for standards and process. Procurement, voluntary declarations and evolving governance are all managed as parts of a dynamic system. The consequence is not just risk reduction but increased governance maturity, laying the foundation for rolling out AI at scale across the UN’s global footprint.
New paradigms for workforce development: the many faces of AI adoption
The move from pilots to institutional architecture is not just a technical challenge but a human one. AI is no longer the exclusive domain of practitioners. Instead, the UN ecosystem is seeing the rise of multiple AI personas:
- AI citizens: users leveraging AI in daily workflows, not creators but beneficiaries
- AI-driven decision makers: business stakeholders employing AI across functions
- Practitioners: technical experts and developers
Each persona has distinct development needs, and “upskilling” and “reskilling” are not just optional extras but essential pillars. The challenge is persistent, but the vision is clear: a workforce empowered to make AI not just functional but transformative.
Agentic AI: new governance and cost paradigms
The episode ends with a discussion of the latest shift: agentic AI systems, in which AI “agents” operate semi-autonomously, potentially communicating and orchestrating tasks across platforms. Here, the cost equation shifts again: monitoring, oversight and orchestration become more resource-intensive, demanding continuous supervision.
Human-in-the-loop supervision is as crucial as ever. AI does not eliminate jobs, but instead necessitates new roles for humans in orchestration, continuous validation, and evolving governance. AI agents are not just technological artefacts; they are systems embedded within social, ethical and operational contexts that require ongoing human stewardship.
The power of challenger models: a call for frugal innovation
Dandapani’s closing recommendation is profound: create challenger models. These models, designed for meaningful implementation, enable organisations to assess reliability and cost efficiency across ecosystems with different needs and constraints. Leaders must bake frugality into their business models, not as an afterthought but as a foundational principle.
In doing so, organisations learn from one another, foster innovation that is purposeful rather than profligate, and ensure that AI scales to meet the world’s challenges with ingenuity and responsibility.
AI for good: lessons for the wider world
The UN’s approach to frugal AI is more than a lesson for global institutions; it is a template for all organisations grappling with the demands of modern technology. As AI permeates every sector, its future lies not in bigger and faster deployments, but in smarter, more purposeful innovation. Building common infrastructure, measuring full-spectrum costs, aligning outcomes with social missions, embedding governance and fostering human development are the building blocks for AI as a strategic asset rather than an expensive experiment.
The complexity of AI adoption mirrors the complexity of other ‘wicked problems’ such as climate change, healthcare and inequality, where technology must be embedded within diverse human contexts. Shared knowledge, participatory governance and multidisciplinary frameworks offer the best chance of meaningful progress.
As we rethink AI and how it is used, the conversation started by Jaideep Prabhu and Anusha Dandapani is essential listening. Listen to the full episode wherever you get your podcasts for deeper insight into the frameworks, philosophy and practice of frugal AI at the UN and beyond.