TECHNOLOGY AND POWER | BIG INNOVATION CENTRE
How emerging technologies reshape economic power, governance and global competition.
Can Government Reinvent Itself for the Age of AI? Public Service, Institutional Power and the Future State
The core issue
Artificial intelligence is often framed as a tool for improving efficiency in government: automating form filling, accelerating decisions and reducing costs. But the deeper question is not how the state uses AI. It is how AI reshapes what the state is capable of being.
As artificial intelligence moves into public administration, it does more than modernise processes. It challenges how governments deliver services, exercise authority and sustain public trust. It exposes institutional strengths and weaknesses, from legacy infrastructure to skills capacity and governance frameworks.
The question is no longer whether governments will adopt AI.
It is whether governments can reinvent themselves for an AI-enabled era.
Power implications
- AI is reshaping how states deliver services and exercise institutional authority
- Governments that modernise systems and skills will gain strategic capability advantages
- Trust in public-sector AI will determine adoption and legitimacy
- Control over data, infrastructure and digital capability is becoming a core element of state capacity
- Institutional power increasingly depends on the ability to govern and deploy AI responsibly
Beyond efficiency: AI as institutional transformation
Public debate often frames artificial intelligence in government as a productivity tool. Yet its implications are far broader. AI has the potential to transform how citizens experience the state and how institutions themselves function.
For decades, public services have been organised around processing paperwork, managing queues and rationing limited expert attention. These constraints reflect human administrative limits. AI has the capacity to reduce them, enabling faster decision-making, improved service delivery and more responsive interactions between citizens and the state.
Early applications already demonstrate potential: automated appointment systems, faster benefits processing and tools that allow frontline staff to focus on complex cases rather than administrative burden. At scale, such changes could transform millions of interactions between citizens and public institutions.
But the introduction of AI also exposes systemic weaknesses. Where processes are fragmented, data poorly integrated or governance unclear, artificial intelligence does not conceal institutional problems — it magnifies them.
AI therefore functions not only as a tool of transformation but as a mirror, reflecting the strengths and limitations of government itself.
The legacy constraint: infrastructure and expectations
Across many advanced economies, government digital infrastructure has evolved through decades of incremental upgrades and siloed systems. Legacy technology often struggles to support modern data integration, real-time processing or AI-enabled decision support.
At the same time, citizen expectations have shifted. Individuals now interact daily with highly responsive digital services in the private sector. These experiences shape expectations of public services. When government systems appear slow, opaque or difficult to navigate, confidence erodes.
Artificial intelligence has the potential to close this gap. But doing so requires more than overlaying AI tools onto outdated systems. It demands investment in digital infrastructure, data governance and interoperability across departments and agencies.
Without such foundations, AI risks amplifying fragmentation rather than resolving it.
Trust as the currency of public-sector AI
Public attitudes toward AI in government are ambivalent. Many citizens welcome improved efficiency and responsiveness. Yet concerns about transparency, fairness and accountability remain significant.
Trust becomes fragile when:
- decisions appear opaque
- data is repurposed without clarity
- errors have significant personal consequences
- accountability for automated decisions is unclear
In public administration, trust is not a peripheral issue. It is the foundation of legitimacy. AI systems deployed without clear safeguards and transparency risk undermining confidence not only in technology but in public institutions themselves.
Trust in public-sector AI depends on governance structures that ensure explainability, recourse and oversight. It requires clarity about how systems are used, what data they rely on and how decisions can be challenged. Where such safeguards are embedded from the outset, AI can strengthen public confidence. Where they are absent, resistance grows.
The institutional capability challenge
Transforming government for the AI era is not simply a technological project. It is a capability and culture challenge.
Public servants do not need to become AI engineers. But governments require a workforce that understands data, algorithmic systems and the governance implications of automation. This includes the ability to evaluate technologies, manage risks and integrate AI into service delivery responsibly.
States that succeed in this transition are likely to focus on two priorities. First, building internal capability by embedding digital and data expertise across departments rather than concentrating it in isolated teams. Second, fostering partnerships with academia, civil society and the private sector to co-design systems aligned with public values.
Without such capability, AI initiatives risk becoming fragmented pilots rather than systemic transformation. Institutional capacity, not technology alone, will determine success.
Toward a new model of digital government
Artificial intelligence raises a fundamental question: should governments use AI merely to optimise existing administrative models, or to rethink how public services are designed and delivered?
The answer has implications for institutional legitimacy and effectiveness. AI offers an opportunity to redesign services around citizens’ needs rather than organisational structures. It can enable more proactive support, faster responses and better allocation of public resources.
Realising this potential requires:
- modernising digital infrastructure
- investing in skills and organisational capability
- establishing governance frameworks that enable innovation while protecting citizens
- designing services around human needs rather than administrative convenience
These are not simply technical adjustments. They represent a shift in how public institutions conceive their role in society.
Conclusion: the future state in an AI era
Artificial intelligence will not replace government. But it will reshape how government functions, how authority is exercised and how public trust is maintained.
The central question is not whether states adopt AI. It is whether they use it to replicate existing systems or to reinvent how they serve citizens.
Institutional power in the 21st century will increasingly depend on the capacity to deploy AI responsibly, effectively and in ways that strengthen public trust.
Governments that succeed will be those that treat AI not merely as a tool of efficiency, but as a catalyst for institutional renewal. Those that fail to adapt risk widening the gap between public expectations and public capability.
The future of public service will therefore be shaped not only by technological capability, but by the willingness of governments to rethink what it means to serve in an age of intelligent systems.
Professor Birgitte Andersen is Professor of the Economics and Management of Innovation and leads research on the political economy of emerging technologies.