
As AI continues its rapid ascent, redefining how we work and interact, questions about who benefits, and who is left behind, grow ever more pressing. The interplay between technological innovation and human inclusion is no longer a side note but a central theme of business and society. In a recent episode of the Cambridge Executive Business Insights podcast “Rethinking AI”, host Jaideep Prabhu spoke with Neil Milliken, former Vice President, Global Head of Accessibility and Digital Inclusion at Atos, to interrogate the promises and pitfalls of making work and workplaces universally accessible in the era of AI.
A personal journey: from technology novice to accessibility advocate
Neil Milliken’s journey into the world of accessibility was, as he tells it, “accidental” but transformative. A Cambridge resident seeking employment, he began his career at a small tech company developing early tools to support people with dyslexia. Informed by his own experiences with dyslexia and, later, an ADHD diagnosis, Neil quickly perceived the emancipatory power of technology when thoughtfully applied – tools that could unlock capabilities, erode barriers, and instil a sense of possibility otherwise denied by traditional systems.
This was long before the AI explosion dominating headlines today. In the late 1990s, even fairly basic machine learning models underpinned breakthroughs in speech recognition and text-to-speech (foundational for accessibility) though their performance was, as Neil wryly notes, far from perfect: “I’ve been shouting at computers for 25 years. It’s only recently they’ve been answering me back in a sort of cogent manner.” Yet, fundamentally, what excited him was technology’s capacity to enable, making previously impossible tasks not just doable, but routine.
From assistive technologies to Generative AI: a continuum, not a revolution
It is tempting to imagine every wave of AI as a seismic rupture with the past. But as Neil points out, today’s large language models, which are now capable of translation, summarisation, and text generation at human-like levels, stand firmly atop decades of accessibility-driven innovation. Early machine learning systems that powered speech recognition and translation for people with disabilities directly prefigure the most influential generative AI technologies of today: “Those kind of machine learning models are the foundation of today’s large language models and the tech transforming business,” Neil reflects.
This continuity is crucial for several reasons. Firstly, it reminds us that accessibility is not a niche but the very crucible in which some of our most powerful AI tools were forged. Secondly, making technology inclusive does not merely mitigate risk or discharge compliance; it is historically one of the richest sources of scalable, mainstream innovation. In other words: what begins at the margins often becomes core.
The shift to mainstream: disability as an engine of innovation
Examples abound of innovations originally designed for people with disabilities that went on to revolutionise daily life for all. The typewriter, the telephone, even modern speech recognition, all began as so-called “assistive” technologies. Neil frames disability “as a wellspring of innovation” – not merely a matter of necessity but a powerful creative force:
“People with disabilities face difficulties in everyday life and everyday work, and that means that they have to find solutions to those problems. So if you think about technologies that are ubiquitous, such as the typewriter, the telephone, speech recognition systems, all of these started as solutions to problems that people with disabilities were facing.”
This “curb cut effect” draws its name from urban design: ramps in pavements (curb cuts) were initially created for wheelchair users, but ended up benefitting parents with prams, delivery drivers, and countless others. Designing for the edges, not just the centre, can produce innovations that reverberate across society.
AI as the great leveller, or divider?
If AI is so full of potential, where do we stand now? According to Neil, we are living through a moment of exceptional promise and peril.
Positively, the rapid advance of AI is turbocharging the capacity for personalisation at scale. For example, speech-to-text has expanded to many more languages, and recognition accuracy for non-standard accents has improved enormously. Where once users had to “train” software laboriously, now generalist systems can adapt on the fly. This opens possibilities: people for whom note-taking, written expression, or even navigation were obstacles can now rely on AI agents to support or even automate these tasks. Such tools, when thoughtfully designed, are inherently assistive: they not only streamline routine work but enable tasks that were once out of reach.
Yet, the adoption of AI is not without caveats. As Neil warns: “Rubbish in, rubbish out. There’s a lot that, on the surface, seems impressive, but when you dig in, you realise the details are wrong or it’s hallucinating.” AI’s confident outputs can mask serious missteps, especially for marginal users whose experiences diverge from “standard” datasets. And when these tools, which are often developed without deep engagement from disabled or minority communities, are used for high-stakes tasks like recruitment or parole decisions, the risks are not just discomfort, but outright harm.
Designing for inclusion: from compliance to core strategy
Too often, accessibility is approached as a compliance hurdle – a box to tick in response to legislation, rather than a source of value. Neil, drawing on his experience at Atos and in global accessibility leadership, is clear that this is a category error. Truly transformative accessibility work happens not by tacking it on at the end, but by embedding universal design principles at the outset.
He recounts practical examples: Atos, operating in 70 countries, created a legal knowledge bot drawing on more than 400 pieces of legislation, which allowed employees to query what was required in specific contexts to ensure no one was excluded. By integrating this directly into service offerings, inclusion becomes ambient and automatic, not reactive or exceptional.
Similarly, AI-driven tools for note-taking or summarisation can, for neurodivergent employees, be the difference between participation and isolation. The key is that these systems are not afterthoughts: they are designed in, not bolted on. And, crucially, organisations gain not only in compliance or morale, but in risk awareness, data-driven decision-making, and, often unexpectedly, profitability.
The practical business case: AI for doing more with less
One of the most resonant themes is that of “frugal innovation”: the ambition to deliver outsized impact with minimal resources. AI, by automating the painstaking and scaling the complex, is the ultimate force multiplier here. Neil offers the example of compliance risk assessment: Atos faced thousands of product offerings and tens of thousands of interfaces that needed reviewing for accessibility risk – an impossible task for human auditors. But with AI to scan, classify and flag, the once unmanageable workload becomes tractable, letting humans focus on high-value decisions rather than low-value drudgery.
Another example comes from assistive technology for people with vision loss. Where once apps had to connect users with remote human volunteers to describe their surroundings, modern AI-driven image and object recognition now automates this instantly on devices, conferring greater independence and far lower cost. Similarly, autonomous vehicles and enhanced navigation apps are returning autonomy to people previously dependent on others for mobility – a change that is as profoundly emotional as it is practical.
The limits and risks: biases, privacy, and the need for human oversight
With every advance, new challenges emerge. In the AI domain, these are often summed up in a familiar litany: bias, hallucination and privacy.
The problem of bias is particularly acute in systems meant to evaluate people, whether in recruitment, promotion, or justice. AI recruitment tools, for instance, frequently select for homogeneity, marginalising those with different communication styles, disabilities, or cultural backgrounds. Visual assessment algorithms might penalise blind or neurodivergent candidates whose eye contact or movements deviate from trained norms. Similarly, so-called predictive policing or automated parole decisions can perpetuate injustice if historic data reflects structural racism or other embedded biases.
The only remedy, Neil suggests, is intentional inclusion at every stage: “You need to work with diverse communities right from the design stage and continue to consult through the testing phases.” This is not just a matter of fairness but of accuracy and reliability. Biases present in training data will be magnified by the scale and speed of AI deployment unless they are exposed and redressed through robust, ongoing human review.
Privacy, too, is a double-edged sword. The capabilities that make AI so powerful – sensor fusion, data aggregation, behaviour prediction – are inescapably intrusive. Assistive devices that “see” on behalf of a user may also transmit sensitive data to cloud providers, raising legitimate concerns over surveillance and data sovereignty. Sometimes, users must make hard trade-offs, balancing autonomy and convenience against potential loss of privacy – a decision that must be respected and supported.
The art of the possible: personalisation, scale and the role of AI
A persistent question is whether true personalisation is compatible with the demands of scale. In the accessibility context, this is especially acute: disabilities manifest along infinite spectrums, and each individual’s needs may change over time or context.
Yet, Neil is cautiously optimistic that AI can square this circle. The progression from rigid, one-size-fits-all systems to highly configurable environments is well underway. Where once IT departments feared the maintenance overheads of letting every user customise their desktop, today’s cloud AI is capable of supporting vast arrays of preference and usage patterns, delivering “extreme personalisation” as routine.
But this isn’t simply a matter of technical capacity. Making this work in practice, as Neil’s team did at Atos, requires integrating accessibility goals with broader business and sustainability strategies, leveraging governance frameworks (such as ESG reporting) and embedding objectives in company culture. In Neil’s words, “A lot of accessibility is just extreme personalisation.”
Partnership, collaboration, and the evolution of standards
No single actor – state, corporation, or non-profit – can address the challenges of inclusive design in isolation. One of the most heartening trends, as Neil observes, is the surprising spirit of collaboration even among fierce commercial rivals, particularly in the accessibility space. Industry heavyweights such as Apple, Google and Microsoft share work on speech-accessibility initiatives; Google’s Project Euphonia, for example, aims to improve AI-powered recognition for people with non-standard speech.
At a policy level, dialogues with regulators and standards bodies are increasingly multidirectional, with business, academia, and disability advocates shaping the language and scope of emerging laws including, notably, the European AI Act. Frameworks now emerging treat accessibility not as isolated, sector-specific regulation, but as integral to overall outcomes, sustainability, and organisational life cycles.
“Treat exclusion like pollution,” Neil advises. By drawing the analogy between carbon emissions, a now-familiar negative externality, and the costs of inaccessible products, the case for universal design becomes not only a moral or legal one, but also one of operational competence and reputational risk.
Common misconceptions: expensive, insecure, and unfair?
It is worth dispelling a trio of persistent myths about accessibility and AI. First, Neil insists, making systems accessible need not be prohibitively expensive; in fact, thoughtful design can dramatically reduce long-term costs. Second, while there is a tension between security and the kinds of access that assistive technologies sometimes require, robust governance and oversight can generally resolve it. Notably, even the UK’s GCHQ, an organisation for which security is paramount, ranks accessibility as a high-order priority in its systems.
The third misconception is perhaps the most insidious: that providing certain users with assistive tools or accommodations is somehow an “unfair” advantage. This, Neil argues, is a misunderstanding of both equity and innovation. Not only does levelling the playing field confer dignity and opportunity, but, as the previous examples show, the mainstream swiftly benefits from the adoption of assistive tools once considered “special”.
Responsible AI: the human spark and environmental reckoning
As we look to the future, it is tempting to become either utopian or dystopian about AI’s impact. Neil remains grounded. While he sees immense benefit in agentic AI – layers of interacting systems capable of synthesising massive datasets and freeing people for more creative work – he cautions against surrendering critical oversight. AI, he notes, acts on prior data and is dazzling at pattern recognition, but lacks true novelty or human judgement. The need for “human-in-the-loop” decision-making endures.
An emerging concern is environmental sustainability. The AI boom is resource intensive, both in energy required to train and run models, and in infrastructure costs. “Most of the energy footprint of processing isn’t on the device. It’s in a data centre somewhere,” Neil notes. For AI to be truly “frugal”, organisations must be aware of these hidden costs, seeking efficiencies not just in code or workflow but in energy and hardware footprints as well.
One step forward: practical advice for leaders
What can organisational leaders do in the next year to make their use of AI both frugal and inclusive? Neil’s advice is deceptively simple: invest in teaching teams to write effective “prompts” for AI, shaping not just what they ask, but how they ask it. This not only improves productivity by reducing wasted iterations and redundant effort, but can embed accessibility requirements (such as inclusive output or clear structure) at the heart of every AI interaction. As new startups (like Cambridge’s own Squish) emerge to solve prompt engineering at scale, this is both a practical and strategic investment.
Rethinking AI means rethinking inclusion
The conversation between Jaideep Prabhu and Neil Milliken is a call to move accessibility from the periphery to the core of our innovation and AI agendas. The lesson is clear: when we design for the margins, we often unlock value for all. AI’s future will be defined not by technology alone, but by the courage, creativity and empathy with which we choose to wield it.
Listen to the full episode with Neil, available now wherever you get your podcasts.