• Nana Sandberg
4th term, Psychology, Master (Master Programme)
The scientific literature within psychology and cognitive science has shown increasing interest in the fundamental scientific ideals of reliability and validity of methods, data, and theoretical constructs. In the wake of the “Replication Crisis”, focus has shifted to look not just at the production of data, but much more closely at the methods behind this production. Though these ideals are not new, it has been questioned whether researchers and institutions alike have failed to reflect critically upon the methods used within psychological research.
One such area is spatial cognition research, which deals specifically with the study of how cognition processes space. A study by Kallai, Makany, Karadi and Jacobsen (2005) sought to categorize different spatial strategies from behavioral data collected within a virtual Morris Water Maze. The use of virtual methods as a test environment, the premise of deriving cognitive processes purely from behavioral data, and the method for categorizing behavioral data into strategies are all relevant to question, as together they constitute key elements of this paradigm of research.
This paper has sought to explore these areas of experiment design and analysis by replicating the original study by Kallai et al. (2005), guided by the problem formulation: “Is it possible to reliably and validly record, detect and classify cognitive strategies from virtual maze methods?”
With this question in mind, this paper has sought to highlight central problems within the literature, the methodology and the method itself. A replication experiment with 20 participants was performed using a virtual maze, and the resulting data were analyzed using an automated categorization method. Through this, it was possible to categorize all four strategies from the original study; however, it remains unclear whether quantifiable methods such as these can truly capture the scope of what strategies are in practice. Relatedly, it is also difficult to ascertain whether behavioral data reflect a specific cognitive state. New technologies are finding their way into research as well, and it is not clearly understood how virtual environments relate to real environments. This begs the question of whether strategies observed within virtual environments are valid constructs outside of the test situation.
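The abstract does not describe the automated categorization method itself. As a purely illustrative sketch of how trajectory data might be classified into strategies by rule, the following minimal example computes two common trajectory features (wall proximity and path efficiency) and applies threshold rules. The `classify_trajectory` function, the strategy labels, and all thresholds are hypothetical assumptions for illustration, not the classifier or categories used in the study.

```python
import math

def classify_trajectory(points, pool_radius, wall_band=0.15):
    """Assign a coarse, illustrative strategy label to a 2D maze trajectory.

    points: list of (x, y) positions sampled over time, pool centered at origin.
    wall_band: fraction of the pool radius counted as the near-wall zone.
    Labels and thresholds are hypothetical, not those of the replicated study.
    """
    # Fraction of samples spent near the wall (thigmotaxis-like behavior).
    near_wall = sum(
        1 for x, y in points
        if math.hypot(x, y) > pool_radius * (1 - wall_band)
    ) / len(points)

    # Path efficiency: straight-line start-to-end distance divided by
    # total distance traveled (1.0 means a perfectly direct path).
    total = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(points, points[1:])
    )
    direct = math.hypot(points[-1][0] - points[0][0],
                        points[-1][1] - points[0][1])
    efficiency = direct / total if total > 0 else 0.0

    # Threshold rules (illustrative values only).
    if near_wall > 0.7:
        return "wall-following"
    if efficiency > 0.8:
        return "direct"
    return "searching"
```

A classifier of this kind makes the paper's concern concrete: the strategy categories exist only as far as the chosen features and cutoffs define them, which is precisely why the validity of such constructs outside the test situation is open to question.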
In conclusion, many factors still play an important role in our ability to record, detect and classify strategies in a valid and reliable way. Progress will depend on the field gaining a better understanding of how technologies implemented in research interact with participants and influence the data, and on the field in general adopting a praxis that supports the scrutiny and development of better and more robust methods.
Publication date: 1 Aug 2018
Number of pages: 54
ID: 284869396