Incorporating Trust and Trickery Management in First Person Shooters
Student thesis: Master's thesis and HD graduation project
- Niels Christian Nielsen
- Henrik Oddershede
- Jacob Larsen
4th semester, Computer Science, Master's (Master's programme)
This report describes how human character traits can be incorporated into computer-controlled characters (bots) in team-based First Person Shooter (FPS) computer games. The well-known FPS Counter-Strike was chosen as the test environment.
Initially, a model of a bot whose behaviour is based exclusively on the success of its team is devised.
This model is then extended to allow two kinds of personality in a bot: a trickster personality, which lies about information in order to receive personal gain, and a team-based personality, which does what it believes is best for the team as a whole, regardless of its own gain.
Detecting tricksters on a team is accomplished by incorporating trust management. In short, each bot maintains trust values for all other bots. A bot's personal trust values are combined with the trust values received from other bots (reputation) to determine whether a specific bot can be trusted to provide correct information. Applying trust management to a group of bots that must make group-based decisions rather than only individual ones is not a trivial matter, so these issues are also discussed.
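The trust decision described above can be sketched as follows. This is a minimal illustration, not the report's actual design: the function names, the weighted average, and the threshold value are all assumptions made for the example.

```python
# Hypothetical sketch: each bot keeps a personal trust value for a team mate
# and combines it with reputation (trust values reported by the other bots)
# to decide whether that team mate's information should be believed.
# The 50/50 weighting and the 0.6 threshold are illustrative assumptions.

def combined_trust(personal, reputations, weight=0.5):
    """Blend a bot's own trust value with the average reported reputation."""
    if not reputations:
        return personal  # no reputation data: fall back to personal trust
    avg_reputation = sum(reputations) / len(reputations)
    return weight * personal + (1 - weight) * avg_reputation

def is_trusted(personal, reputations, threshold=0.6):
    """Trust the information only if combined trust clears the threshold."""
    return combined_trust(personal, reputations) >= threshold
```

A bot that personally trusts a team mate at 0.8, while the rest of the team reports 0.4 and 0.6, would combine these to 0.65 and accept the information under a 0.6 threshold.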
Finally, the report discusses how trickster bots may overcome the trust management implementation of their team mates. We call this Trickery Management. In this extension, a trickster tries to guess the threshold values in its team mates' trust management implementation, calculating when to attempt a trick and when not to.
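One way to read the Trickery Management idea is as a trickster maintaining an estimate of its team mates' trust threshold and of its own current standing, and lying only when a detected lie would still leave it trusted. The class below is a speculative sketch of that calculation; the update rules, penalty size, and initial values are all assumptions, not taken from the report.

```python
# Illustrative sketch of a trickster estimating its team mates' trust
# threshold. It lies only when, even if caught, its trust would stay at or
# above the estimated threshold, and it refines that estimate from whether
# its lies are believed. All constants here are hypothetical.

class Trickster:
    def __init__(self, estimated_threshold=0.5, own_trust=1.0, lie_penalty=0.2):
        self.estimated_threshold = estimated_threshold  # guess of team mates' cutoff
        self.own_trust = own_trust                      # guess of how trusted we are
        self.lie_penalty = lie_penalty                  # assumed trust loss when caught

    def should_lie(self):
        """Lie only if being caught would still leave us above the threshold."""
        return self.own_trust - self.lie_penalty >= self.estimated_threshold

    def observe(self, was_believed):
        """Refine the estimates based on whether the last lie was believed."""
        if was_believed:
            # Assume a small erosion of trust even for undetected lies.
            self.own_trust = max(0.0, self.own_trust - self.lie_penalty / 2)
        else:
            # Being rejected suggests the threshold is at least our current trust.
            self.estimated_threshold = max(self.estimated_threshold, self.own_trust)
            self.own_trust = max(0.0, self.own_trust - self.lie_penalty)
```

Under these assumptions a fully trusted trickster starts out willing to lie, and a single rejected lie raises its threshold estimate enough to make it stop.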
Language | English |
---|---|
Publication date | Jun 2004 |