Sunday, March 1, 2026

Accountable AI for Smarter Public Services in Smart Cities

Introduction: Smart Cities Start with Smarter Public Services

Smart city initiatives are often associated with connected infrastructure, sensors, mobility platforms, and data dashboards. Yet citizens experience a city in a much more concrete way: how easy it is to get information, lodge a request, and receive a swift and accurate response. Whether renewing a residence permit with a regional or national authority, requesting an appointment, understanding building regulations, or reporting an issue in public space, the quality of public services directly shapes trust in public institutions.

When citizens receive no response – or a delayed, generic one – they often interpret it as indifference. Over time, trust erodes. Conversely, when they feel understood – even before their issue is resolved – confidence increases. Trust is built not only on outcomes, but on the quality of interaction.

Across Europe, cities and regions face rising demand for services delivered through unstructured channels such as email, web forms, social networks, and phone calls while resources remain constrained. Artificial Intelligence offers powerful tools to address this gap, but in public-sector contexts, accuracy, transparency, fairness, and accountability are prerequisites.

This article draws on projects in France and Finland to illustrate how accountable, human-centered AI can improve public services without undermining trust.

The Core Problem: Unstructured Demand Meets Structured Administration

Public administrations are designed around structured processes – predefined forms, workflows, eligibility rules, and legal constraints – even if real-world practice is sometimes more fragmented. Citizens, however, communicate in natural language, mixing context, urgency, emotion, and partial information. This mismatch creates friction.

Common operational pain points include:

  • High volumes of uncategorized requests via email or generic forms
  • Urgent cases mixed with routine inquiries
  • Significant agent time spent reading and reformatting messages before processing

Backlogs grow, response times increase, and frustration rises on both sides. The absence of a response often causes anxiety, leading citizens to send follow-up emails – a snowball effect that further congests the system.

Static portals improve consistency but shift complexity to users. Purely conversational AI systems based only on large language models (LLMs) feel natural but introduce unacceptable risks: hallucinations, inconsistent decisions, and limited explainability.

Smart cities need a third path.

A Pragmatic Approach: Hybrid and Accountable AI

The most effective solutions combine two complementary paradigms:

  • Generative AI to understand natural language and extract relevant information
  • Symbolic AI (business rules) to ensure routing, prioritization, and proposed actions follow policies and regulations

In this hybrid model, generative AI does not decide. It interprets and structures human input. Prioritization and routing are governed by explicit, auditable logic aligned with public policies. These rules remain under institutional control and can be versioned and updated as regulations evolve.
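The separation of roles described above can be illustrated with a minimal sketch. All names here (the `StructuredRequest` fields, the queue names, the rule thresholds) are hypothetical illustrations, not the actual system's schema: the generative model only produces the structured input, while priority and routing come from an explicit rule table that administrators can inspect, version, and amend.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical structured output of the generative-AI extraction step.
@dataclass
class StructuredRequest:
    intent: str                         # e.g. "permit_renewal", "status_inquiry"
    permit_expiry_days: Optional[int]   # days until permit expires, if mentioned
    mentions_job_loss: bool

# Explicit, auditable routing rules: (condition, priority, queue) triples that
# can be reviewed and versioned like any other policy document. First match wins.
RULES = [
    (lambda r: r.permit_expiry_days is not None and r.permit_expiry_days <= 14,
     "urgent", "residence-permits-fasttrack"),
    (lambda r: r.mentions_job_loss, "high", "residence-permits"),
    (lambda r: r.intent == "status_inquiry", "routine", "front-office"),
]

def route(req: StructuredRequest) -> tuple:
    """Return (priority, queue) from the first matching rule; default otherwise."""
    for condition, priority, queue in RULES:
        if condition(req):
            return priority, queue
    return "routine", "general-intake"
```

Because the rules live outside the model, changing a deadline or adding a new queue is a policy edit, not a retraining exercise.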

Final decisions remain human. Case officers validate priorities, review proposed responses, and retain full authority over outcomes. AI assists; it does not replace responsibility.

This architecture aligns with smart city values: transparency, adaptability, and resilience.

Case Study 1: Improving Access to Public Services in France

In several French préfectures, local authorities responsible for residence permits and asylum claims face overwhelming volumes of emails. Delays can have severe consequences: loss of employment, housing instability, or legal uncertainty.

Working with public-sector teams, we designed a system that allows users to describe their situation in their own words. AI analyzes the message, identifies intent, extracts key information (dates, risk factors, etc.), and proposes a structured summary.
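One practical safeguard in this kind of pipeline is to validate the model's structured output before anything reaches the user or the workflow. The sketch below assumes a hypothetical JSON schema (`intent`, `summary`, `dates`); the field names are illustrative, not the production format.

```python
import json

# Hypothetical required fields and their expected JSON types.
REQUIRED_FIELDS = {"intent": str, "summary": str, "dates": list}

def parse_extraction(raw: str) -> dict:
    """Parse and validate the model's JSON output.

    Raises ValueError on malformed output, so a hallucinated or truncated
    extraction never silently enters the case-management workflow.
    """
    data = json.loads(raw)
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"missing or invalid field: {field}")
    return data
```

Only a payload that passes this check would be shown to the citizen for confirmation.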

Crucially, users validate this interpretation before it enters the workflow. Citizens no longer send a “message in a bottle.” They see how their request was understood, can correct inaccuracies, and receive immediate confirmation. This simple mechanism significantly reduces frustration and reinforces the feeling of being heard.

A transparent rule-based system then determines priority and routing. Case officers receive enriched, structured cases rather than raw emails, enabling them to focus on resolution rather than triage.

The result is not automation for its own sake, but measurable gains in responsiveness and fairness – using existing resources more effectively.

Case Study 2: AISA – Guiding Citizens in Helsinki

In Helsinki, the challenge involved navigating complex regulations related to land use, construction permits, and urban planning. Information was available but fragmented across official sources.

The AISA experiment helped users quickly identify relevant information. Rather than generating open-ended answers, the system detects intent and directs users to authoritative resources, services, or next steps.
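The deliberately narrow "detect intent, then point to an authoritative source" behavior can be sketched as a simple keyword-scoring lookup. The intents, keywords, and resource labels below are invented for illustration; the actual AISA system's categories are not public in this article.

```python
from typing import Optional

# Hypothetical intent-to-resource map: the system answers with a pointer,
# never with generated regulatory advice.
RESOURCES = {
    "building_permit": "Official building-permit guidance page",
    "land_use": "Land-use and zoning information service",
}

KEYWORDS = {
    "building_permit": {"permit", "construction", "renovation", "build"},
    "land_use": {"zoning", "land", "plot", "planning"},
}

def detect_intent(message: str) -> Optional[str]:
    """Return the intent with the most keyword hits, or None if nothing matches."""
    tokens = set(message.lower().split())
    best, best_hits = None, 0
    for intent, kws in KEYWORDS.items():
        hits = len(tokens & kws)
        if hits > best_hits:
            best, best_hits = intent, hits
    return best
```

Returning `None` when no intent matches is part of the constrained design: an out-of-scope question yields a handover to a human channel rather than a speculative answer.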

The design was deliberately constrained: no personal data storage, no autonomous decision-making, and clear limits on system behavior. This modest and transparent scope earned stakeholder recognition and the award for best AI experiment in the program.

The key insight: trust grows when AI systems are explicit in scope and aligned with institutional responsibility.

Case Study 3: Managing Resident Service Requests

In a pilot with a European city, the focus is managing citizen service requests – housing issues, school billing, public space maintenance, and general inquiries.

The system operates upstream of existing legacy ticketing and case-management tools, integrating with them rather than replacing them. It:

  • Identifies the dominant request type
  • Extracts required information
  • Detects potential duplicates
  • Asks targeted follow-up questions when necessary

By the time a request enters the internal system, it is validated, enriched, and easier to process. Early work suggests significant reductions in handling time and fewer back-and-forth exchanges.
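Of the steps listed above, duplicate detection is the most mechanical, and a rough sketch shows the idea. The token-overlap (Jaccard) similarity and the 0.6 threshold below are illustrative assumptions, not the pilot's actual method, which could equally use embeddings or metadata matching.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two request texts, in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

def find_duplicates(new_request: str, open_requests: list, threshold: float = 0.6) -> list:
    """Return open requests similar enough to flag as potential duplicates."""
    return [r for r in open_requests if jaccard(new_request, r) >= threshold]
```

A flagged duplicate is only a suggestion surfaced to the case officer; merging or dismissing it remains a human decision, consistent with the design principles below.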

Design Principles for Smart City AI

Across these projects, several principles consistently emerge:

  • Human control: AI proposes, humans review, validate, and decide.
  • Transparency: Users and agents can review and correct interpretations.
  • Policy grounding: Rules are explicit, versioned, and institutionally owned.
  • Data minimization: Personal data is used only where strictly necessary.
  • Adaptability: Systems evolve with regulations and priorities.

These principles are not just ethical guidelines; they are practical enablers of adoption. They also align with emerging European regulatory frameworks, including the EU AI Act, which emphasizes transparency, human oversight, and accountability in public-sector AI systems.

Conclusion: Trust as the Foundation of Smart Cities

Smart cities are not defined solely by technology, but by how technology serves people.

AI can reduce friction, accelerate response times, and allow public servants to focus on complex cases. Yet in the public sector, the cost of getting AI wrong is high.

Hybrid, accountable AI offers a realistic path forward. By combining the linguistic strengths of generative models with the rigor of explicit decision logic and human oversight, cities can innovate responsibly. The experiences in France and Finland show that when trust is designed into the system, AI becomes not a risk to manage, but a capability to embrace.

Ultimately, smarter cities are not those that automate more, but those that rebuild trust. When AI helps citizens feel heard, understood, and treated fairly, it becomes not just a productivity tool, but a bridge between institutions and the people they serve.
