A master's thesis from Aalborg University


Framing the Machine: The Effect of Uncertainty Expressions and Presentation of Self on Trust in AI

Authors


Term

4th term

Publication year

2025

Pages

14

Abstract

Large Language Models (LLMs) are increasingly becoming part of everyday life and work. This study examines how linguistic style affects users' trust in a chatbot. We focus on two phrasing choices: (1) uncertainty expressions, i.e., whether the chatbot sounds certain or uncertain (e.g., "I am certain" vs. "I am not entirely sure"), and (2) its presentation of self, i.e., whether it speaks in the first person ("I") or as an impersonal system ("the system"). In a within-subjects 2×2 design with 24 participants, each person used the chatbot in four versions while answering trivia questions. We measured trust both as perception (questionnaires) and as behavior (e.g., which source participants chose), supplemented by interviews for deeper insight. The results show that when the chatbot expresses certainty, it is perceived as more competent, and participants are more likely to choose it as their primary source of information. Presentation of self had more nuanced effects: for some participants, first-person phrasing ("I") increased perceived integrity when the answer was uncertain, while others preferred a neutral, system-like tone. Many participants also used Google's top search result to verify answers, suggesting that in practice trust is calibrated by cross-checking information. Overall, the results underline the need to design LLM communication with care for both linguistic cues and the user's context, so that trust can be built and appropriately calibrated.

Large Language Models (LLMs) are increasingly woven into daily life and work. This study examines how a chatbot’s wording shapes user trust, focusing on two choices: (1) uncertainty expressions—whether answers sound certain or uncertain (e.g., “I’m certain” vs “I’m not sure”)—and (2) presentation of self—whether the chatbot speaks in the first person (“I”) or as an impersonal system (“the system”). Using a within-subjects 2×2 design with 24 participants, each person interacted with four versions of a chatbot while answering trivia questions. We assessed trust both as perception (questionnaires) and behavior (e.g., which source participants chose), and we conducted interviews for additional insights. Findings show that when the chatbot expressed certainty, participants rated it as more competent and were more likely to rely on it as their primary source of information. Self-presentation had subtler effects: for some, first-person phrasing increased perceived integrity when the answer was uncertain, while others preferred a neutral, system-like tone. Many participants checked the top Google result to verify answers, indicating that people calibrate trust by cross-checking information. Overall, the results highlight the need to design LLM communication with attention to both linguistic cues and user context to build and appropriately calibrate trust.

[This summary has been rewritten with the help of AI based on the project's original abstract]