• Sharona Boonman
4. Semester, Digital Communication Leadership (Master Programme)
It can take the Danish child helpline, BørneTelefonen, over a month to answer letters from children seeking help. Every child deserves an answer, but some problems are more severe than others, and the current response time of BørneTelefonen is especially concerning for neglected children. One way to address this problem is to automatically classify incoming messages into categories with answer priorities. However, the implications and limitations of automated classification need to be considered. The main research question of this study is therefore: ‘To what extent can machine learning algorithms classify incoming messages to organizational helplines in comparison to human coders?’ Two sub-questions were formulated to help answer it. The first concerns the technical possibilities of automatically classifying text messages (‘How accurate are machine learning algorithms when classifying incoming messages in comparison to human coders?’). The second focuses on the possible limitations (‘What are possible technical, social, cultural, and ethical limitations when using machine learning algorithms to classify incoming messages?’).
To answer the first sub-question, seven machine learning algorithms were applied to a dataset of 5,664 messages from BørneTelefonen that human coders had labeled in terms of neglect. When the algorithms' results were compared against the human coders' performance, the support vector machine (SVM) performed best, with an F1 score of 94 percent for messages not labeled as neglect and 23 percent for messages labeled as neglect. These scores are expected to improve when human-guided machine learning is applied to future incoming messages. Machine learning classifiers thus offer great potential for classifying incoming messages.
Concerning the second sub-question, technical limitations of the data used in this research are discussed, including the language of the messages, errors, and chat language. Potential societal implications of using a machine learning classifier are also considered. Regarding the main research question: while it is algorithmically possible to classify incoming messages automatically, it may be necessary to inform people that their message is being classified by an algorithm and to give them the chance to opt out. Further research is needed to understand the ethical issues involved and to ensure fair and responsible use of machine learning for classifying incoming messages.
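The classification approach summarized above can be sketched in code. The following is an illustrative example only, assuming a standard scikit-learn pipeline of TF-IDF features feeding a linear SVM; the toy messages and labels below are synthetic placeholders invented for this sketch, not the BørneTelefonen data, and the study's actual features and preprocessing may differ:

```python
# Illustrative sketch: binary text classification (neglect vs. not neglect)
# with TF-IDF features and a linear SVM, evaluated with per-class F1 scores.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Synthetic placeholder messages; 1 = labeled as neglect, 0 = not neglect.
messages = [
    "my parents never cook for me and i am alone all night",
    "nobody at home takes care of me or my little brother",
    "i am stressed about my exams next week",
    "i had a fight with my best friend at school",
    "my mom forgot to pick me up again and the house is empty",
    "i am nervous about moving to a new city",
]
labels = [1, 1, 0, 0, 1, 0]

# Pipeline: word-level TF-IDF (unigrams and bigrams) -> linear SVM classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(messages, labels)

# Per-class F1, mirroring how the study reports separate scores for the
# "neglect" and "not neglect" classes (computed here on the toy training set).
preds = model.predict(messages)
print("F1 (neglect):", f1_score(labels, preds, pos_label=1))
print("F1 (not neglect):", f1_score(labels, preds, pos_label=0))
```

Reporting F1 per class, as the study does, matters here because the classes are imbalanced: a classifier can reach high overall accuracy while still performing poorly on the rare but critical "neglect" class.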
Publication date: 31 Jul 2020
Number of pages: 71
External collaborator: UCLA
Leah Lievrouw llievrou@ucla.edu
Plus-Plus A/S
Josef Trappel Josef.Trappel@sbg.ac.at
ID: 337716031