Trust in AI: Designing a Practical Toolkit for Public Sector Service Development
Translated title
Trust in AI
Author
Sampat, Janhvi Jayesh
Term
4th term
Education
Publication year
2025
Submitted on
2025-05-25
Pages
137
Abstract
Artificial intelligence (AI) is increasingly used in public services. It can improve efficiency and support decisions, but its opaque logic and risk of bias raise concerns about transparency, accountability and public acceptance. This thesis explores how service developers in Copenhagen Municipality’s Department for Citizen Service Development (CSD) can design AI-enabled services that foster trust and fit users’ needs. Although research stresses that trust matters in systems that involve people, organizations and technology, there is a lack of practical, design-focused tools for real-world service development. To address this, the study treats trust as a dynamic concept shaped by cognitive, social and institutional factors. Using a systemic design approach that looks at the whole service and its stakeholders, the Trust Toolkit was developed to help public sector teams reflect on, evaluate and integrate trust considerations throughout the development of AI-enabled services. The toolkit was co-created and tested with CSD. The findings indicate that hands-on design support for trust is both necessary and feasible, but a toolkit is not a complete solution and should be used critically and evaluated further. The project contributes a practical artifact and a design perspective to discussions about trust, AI and public service innovation, and calls for ongoing work on how trust is defined, put into practice and sustained in public sector innovation.
[This abstract has been rewritten with the help of AI based on the project's original abstract]
Keywords
