• Lasse Østergaard
  • Mads Nørgaard Stenkær
  • Casper Krogh Frydkjær
4th term, Software, Master (Master Programme)
In this project, we investigate the use of Bayesian Neural Networks (BNNs) to bridge the gap between expressing uncertainty in AI models and explaining the models' predictions.
The BNN used in this project is built with TensorFlow and TensorFlow Probability and is trained through variational inference.
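As a rough illustration of such a setup, a minimal sketch follows; the layer sizes, class count, and KL scaling are our own illustrative choices, not taken from the report.

import tensorflow as tf
import tensorflow_probability as tfp

def build_bnn(n_features, n_classes, n_train):
    # Mean-field variational BNN: each DenseFlipout layer keeps a Gaussian
    # posterior over its weights and adds a KL term (scaled by the size of
    # the training set) to the layer losses, so minimising cross-entropy
    # plus these KL terms performs variational inference.
    kl = lambda q, p, _: tfp.distributions.kl_divergence(q, p) / n_train
    return tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=(n_features,)),
        tfp.layers.DenseFlipout(32, activation="relu",
                                kernel_divergence_fn=kl),
        tfp.layers.DenseFlipout(n_classes, kernel_divergence_fn=kl),
    ])

model = build_bnn(n_features=20, n_classes=2, n_train=1000)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(
                  from_logits=True))
# model.fit(x_train, y_train, epochs=50)  # Keras adds the KL losses itself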
Here, we propose sampling models from the BNN and explaining each sampled model with Layer-wise Relevance Propagation (LRP).
Using the relevance scores calculated with LRP for multiple sampled models, we can consider the variance in the relevance score of each individual feature.
We argue that this variance reflects the uncertainty in the predictions and provides insight into which features affect the uncertainty the most.
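To make the idea concrete, here is a small self-contained NumPy sketch of drawing weight samples from a mean-field Gaussian posterior and propagating relevance through each sampled network; we use the LRP-epsilon rule here, and the toy posterior, network sizes, and helper names are our own assumptions, not the report's.

import numpy as np

def sample_weights(posterior):
    # Draw one concrete network from a mean-field Gaussian posterior.
    # `posterior` is a list of (W_mu, W_sigma, b_mu, b_sigma) arrays,
    # a layout we assume here for illustration.
    return [(W_mu + W_sigma * np.random.randn(*W_mu.shape),
             b_mu + b_sigma * np.random.randn(*b_mu.shape))
            for W_mu, W_sigma, b_mu, b_sigma in posterior]

def lrp_epsilon(weights, x, eps=1e-6):
    # LRP-epsilon for a ReLU network, given one concrete weight sample.
    acts, a = [x], x
    for i, (W, b) in enumerate(weights):
        z = a @ W + b
        a = np.maximum(z, 0.0) if i < len(weights) - 1 else z  # logits last
        acts.append(a)
    R = np.zeros_like(a)
    R[np.argmax(a)] = a[np.argmax(a)]  # relevance starts at the winning logit
    for i in range(len(weights) - 1, -1, -1):
        W, b = weights[i]
        z = acts[i] @ W + b
        z = z + eps * np.where(z >= 0.0, 1.0, -1.0)  # epsilon stabiliser
        R = acts[i] * ((R / z) @ W.T)
    return R

# Toy posterior for a 4-3-2 network, purely illustrative.
posterior = [(np.random.randn(4, 3), 0.1 * np.ones((4, 3)),
              np.zeros(3), 0.1 * np.ones(3)),
             (np.random.randn(3, 2), 0.1 * np.ones((3, 2)),
              np.zeros(2), 0.1 * np.ones(2))]
x = np.random.randn(4)

# Relevance scores from many sampled models; their per-feature variance
# indicates which features drive the uncertainty.
R = np.stack([lrp_epsilon(sample_weights(posterior), x) for _ in range(100)])
mean_rel, var_rel = R.mean(axis=0), R.var(axis=0)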

The LRP approach is evaluated by setting feature values to $0$. We find that zeroing features with low relevance scores and low variance leaves both the predictions and their uncertainty largely unchanged.
Zeroing features with high relevance scores and high variance, on the other hand, yields predictions that differ from the original ones, often with lower uncertainty.
We conclude that there is a clear correlation between the variance in relevance scores and the uncertainty in predictions, and that our method provides insight into which features contribute most to that uncertainty.
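Continuing the sketch above (reusing `posterior`, `sample_weights`, `x`, and `var_rel`), the ablation test might look as follows; selecting features by relevance variance alone is our simplification of the report's criterion, which considers both the scores and their variance.

def predict_samples(posterior, x, n_samples=100):
    # Monte-Carlo predictive distribution: softmax output of each sampled model.
    out = []
    for _ in range(n_samples):
        ws, a = sample_weights(posterior), x
        for i, (W, b) in enumerate(ws):
            z = a @ W + b
            a = np.maximum(z, 0.0) if i < len(ws) - 1 else z
        e = np.exp(a - a.max())
        out.append(e / e.sum())
    return np.stack(out)

def zero_features(x, idx):
    x = x.copy()
    x[idx] = 0.0
    return x

low = np.argsort(var_rel)[:2]    # features with the lowest relevance variance
high = np.argsort(var_rel)[-2:]  # features with the highest relevance variance
for name, idx in [("baseline", []), ("low-variance zeroed", low),
                  ("high-variance zeroed", high)]:
    p = predict_samples(posterior, zero_features(x, idx))
    print(name, "mean:", p.mean(axis=0), "std:", p.std(axis=0))
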
Language: English
Publication date: 18 Jun 2021
Number of pages: 73
External collaborator: Enversion A/S
Contact: Bo Thiesson, thiesson@enversion.dk
ID: 415030891