Recommendations over Knowledge Graph Entities in Cold-Start Interviews
Authors
Brams, Anders Højlund ; Jakobsen, Anders Langballe ; Jendal, Theis Erik
Term
4. term
Publication year
2020
Submitted on
2020-06-11
Pages
38
Abstract
A core challenge in recommender systems is the cold-start problem: how to recommend items to new users about whom the system knows nothing. A common remedy is a short interview that collects the user's preferences. Most prior approaches ask about specific items (recommendable entities), but users often find it easier to express preferences about broader, descriptive properties of items (descriptive entities), such as genre, themes, or style. Historically, the lack of suitable datasets kept the focus on items, but the recently released MindReader dataset alleviates this. In this work, we conduct a comprehensive study of interviewing strategies and recommendation models, including state-of-the-art methods, and evaluate the effect of allowing interview questions about descriptive entities in the MindReader dataset, which we extend to 1,736 users and 174,872 ratings. To construct optimal interviews, we propose a novel adaptive interview-learning approach as well as methods based on deep reinforcement learning, which learn by trial and error which questions are most informative. To turn answers into recommendations, we further propose a linear combination of Personalised PageRank scores that learns to weight the knowledge and collaborative graphs through a pairwise ranking loss. Our findings show that nearly all models perform better with broader, descriptive questions, allowing the interview to be shortened by roughly four questions on average compared with asking about specific items. Knowledge-aware approaches benefit especially from descriptive-entity preferences in cold-start interviews and outperform state-of-the-art methods in both recommendation quality and diversity.
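The idea of linearly combining Personalised PageRank scores from two graphs, with the mixing weight learned through a pairwise ranking loss, can be illustrated with a minimal sketch. This is not the thesis' implementation: the toy graphs, entity indices, learning rate, and the single mixing weight `w` are all invented for illustration. It shows only the general shape of the technique: PPR scores from a knowledge graph and a collaborative graph are blended, and the blend weight is fitted with a BPR-style pairwise loss that prefers a liked item over a disliked one.

```python
import numpy as np

def personalized_ppr(adj, restart, damping=0.85, iters=50):
    """Power-iteration Personalised PageRank with a restart (preference) vector."""
    # Column-normalise the adjacency matrix into a transition matrix.
    col_sums = adj.sum(axis=0, keepdims=True)
    col_sums[col_sums == 0] = 1.0
    P = adj / col_sums
    r = restart / restart.sum()
    pi = r.copy()
    for _ in range(iters):
        pi = damping * (P @ pi) + (1 - damping) * r
    return pi

# Toy example (invented): 4 entities, a knowledge graph and a collaborative graph.
kg = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
cg = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)

restart = np.array([1.0, 0.0, 0.0, 0.0])  # interview answer: user liked entity 0
s_kg = personalized_ppr(kg, restart)
s_cg = personalized_ppr(cg, restart)

# Learn a single mixing weight w with a BPR-style pairwise ranking loss:
# the user is assumed to prefer item i=2 over item j=3.
i, j = 2, 3
w, lr = 0.5, 0.1
for _ in range(100):
    diff = (w * s_kg[i] + (1 - w) * s_cg[i]) - (w * s_kg[j] + (1 - w) * s_cg[j])
    sigma = 1.0 / (1.0 + np.exp(-diff))
    # Gradient of -log(sigma(diff)) with respect to w.
    grad = -(1.0 - sigma) * ((s_kg[i] - s_kg[j]) - (s_cg[i] - s_cg[j]))
    w = min(max(w - lr * grad, 0.0), 1.0)  # keep the weight in [0, 1]
```

A full model would learn one weight per graph (or per relation type) over many training pairs; the single scalar here just makes the pairwise update explicit.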
[This abstract was generated with the help of AI]