A master's thesis from Aalborg University


NCLTP: Non-Contrastive Learning for Trajectory Prediction

Term

4th term

Education

Publication year

2023

Submitted on

Pages

10

Abstract

Predicting the trajectories of pedestrians and cars is an important problem for applications such as autonomous driving and robot navigation. Many current state-of-the-art methods are trained with contrastive objectives that require human-labelled data about the pedestrians, for instance their current action, e.g., walking or standing. Human-labelled data is both expensive and time-consuming to produce. In this study, I present a non-contrastive method that produces competitive results without the need for human-labelled data. Instead of comparing the action labels of pedestrians, the model learns similar representations from different augmentations of the same data. Experiments for the proposed method are conducted on both first-person-view (FPV) and bird's-eye-view (BEV) datasets. The method achieves results competitive with existing state-of-the-art methods, including methods that rely on human-labelled annotations. These results should give further research a base to build on and expand upon this topic.
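The abstract does not specify the exact loss or augmentations used in the thesis, but the core idea — learning similar representations from two augmented views of the same trajectory, without labels or negative pairs — can be illustrated with a minimal sketch. The SimSiam-style negative-cosine loss, the toy linear encoder, and the jitter/rotation augmentations below are all illustrative assumptions, not the thesis's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(traj, rng):
    """Assumed augmentations: small random rotation plus Gaussian jitter
    applied to a trajectory of shape (T, 2)."""
    theta = rng.uniform(-0.2, 0.2)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    noise = rng.normal(scale=0.01, size=traj.shape)
    return traj @ R.T + noise

def encode(traj, W):
    """Toy linear encoder: flatten the trajectory and project to an embedding."""
    return np.tanh(traj.reshape(-1) @ W)

def neg_cosine(p, z):
    """Non-contrastive (SimSiam-style) loss: negative cosine similarity.
    In a real training loop, z would be a stop-gradient target."""
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return -float(p @ z)

T, d = 12, 8
traj = np.cumsum(rng.normal(size=(T, 2)), axis=0)   # a random-walk "trajectory"
W = rng.normal(scale=0.1, size=(T * 2, d))          # shared encoder weights

# Two augmented views of the same trajectory, encoded with the same weights.
z1 = encode(augment(traj, rng), W)
z2 = encode(augment(traj, rng), W)

loss = neg_cosine(z1, z2)                            # lies in [-1, 1]
print(f"non-contrastive loss: {loss:.3f}")
```

Training would minimize this loss (driving it toward -1), pulling the two views' representations together without ever comparing against other pedestrians or their action labels.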