• Sebastian Martin Young
4th term, Medialogy, Master (Master Programme)
The purpose of this paper was to explore whether artificial intelligence agents equipped with indirect advantages and communication methods could develop new and interesting behavioural patterns compared to traditional state-of-the-art ad-hoc design methodologies. The hypothesis of the paper was that agents provided with indirect cooperative advantages would behave less predictably and more diversely. A game prototype was developed in the Unity game engine, and two agent types with an adversarial relationship were designed to play through the game: Monster Agents and Friendly NPC Agents. The Friendly NPC Agents were further divided into three groups: Finite State Machine, Reinforcement Learning with a Perception Bonus Advantage, and Reinforcement Learning with Pack Tactics. The two Reinforcement Learning NPC agents were trained against the Monster Agents, and each Friendly NPC Agent then ran for a 12-hour session to collect data on its positions and game score. The results of the score test show that the ad-hoc Finite State Machine model significantly outperforms the Reinforcement Learning models at completing the game's objectives. The results of the heatmap test show chaotic and unpredictable behaviour from the Reinforcement Learning models. However, due to the agents' limited training period, it is not possible to measure the exact impact of the indirect advantages on the models, and the alternative hypothesis can therefore not be accepted.
Publication date: 25 May 2022
Number of pages: 61
ID: 471384074