Towards Ethical AI in Persuasive Technology - An Exploratory Study
Authors
Cevro, Frida Malene ; Jakab, Edina
Term
4th term
Publication year
2024
Pages
135
Abstract
This exploratory study examines the ethical implications of using artificial intelligence in persuasive technology. Through a systematic literature review and nine semi-structured interviews with representatives from companies, developers, governments, and researchers, the study compares stakeholder viewpoints on ethical principles such as beneficence and non-maleficence, fairness and justice, human autonomy and agency, transparency and explainability, and accountability and oversight. Using thematic analysis and drawing on Human-Computer Interaction, Persuasive System Design, ethical theory, and Self-Determination Theory, it maps challenges, motivations, and practices around persuasive AI. The study highlights end-users as an additional responsible stakeholder group, identifies generative AI as an emerging form of persuasive technology requiring ethical attention, and proposes a categorization of techniques and practices according to their ethical implications. It outlines eight priority areas for future work: clear definitions and boundaries, stakeholder engagement and collaboration, continuous monitoring and transparency, addressing the black box challenge, regulation and ethical standards, education and AI literacy, justified use of AI, and balancing innovation and regulation. These insights aim to inform more ethical design, deployment, and governance of persuasive AI.
[This abstract has been generated with the help of AI directly from the project full text]
