• Kenneth Kjærgaard Malowanczyk
  • Andreas Dahl Nielsen
  • Sebastian Reidar Petersen
4th term, Computer Science, Master (Master Programme)
In recent years, model interpretability has become an increasingly researched aspect of machine learning. Its ability to explain a model can, on the one hand, increase the trustworthiness of predictions and, on the other hand, help identify hidden trends, thus going beyond the use of machine learning as a black box. In this paper, we propose a hierarchical training method for interpreting convolutional neural networks trained on tabular data, and apply it to bandgap prediction of organometal halide perovskites by assigning importance values to features. The feature space comprises properties of the elements, precursors, and perovskite crystal structures, for a total of 39 features, which can be combined. Using a Weight Parameter Saving Method, we reuse a previously trained network's weights to initialise the next network, achieving faster convergence and better predictive performance. Using Shapley Additive Explanations to approximate feature importance, together with hierarchical training, we find a minimal feature set needed for bandgap prediction (within a squared error of 0.1). This reduces the feature space while preserving the predictive performance of the model.
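The core idea of the Weight Parameter Saving Method can be illustrated on a toy problem. The sketch below is an assumption about the general scheme, not the paper's implementation: a plain-Python linear model (standing in for the CNN) is trained on a small feature subset, and its learned weights are then reused to initialise training on a grown feature set, so the second stage needs fewer epochs. The data, feature indices, and `train` helper are all hypothetical.

```python
import random

random.seed(0)

# Hypothetical tabular data: the target is a linear function of 4 features.
N, D = 200, 4
true_w = [1.5, -2.0, 0.5, 0.0]
X = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N)]
y = [sum(w * x for w, x in zip(true_w, row)) for row in X]

def train(feats, w_init=None, epochs=300, lr=0.05):
    """Gradient-descent linear fit restricted to the feature columns in `feats`."""
    w = list(w_init) if w_init else [0.0] * len(feats)
    for _ in range(epochs):
        grad = [0.0] * len(feats)
        for row, t in zip(X, y):
            err = sum(w[i] * row[f] for i, f in enumerate(feats)) - t
            for i, f in enumerate(feats):
                grad[i] += 2 * err * row[f] / N
        w = [wi - lr * g for wi, g in zip(w, grad)]
    return w

def mse(feats, w):
    """Mean squared error of the fitted weights on the same data."""
    return sum((sum(w[i] * row[f] for i, f in enumerate(feats)) - t) ** 2
               for row, t in zip(X, y)) / N

# Stage 1: train on a small feature subset and save its weights.
w_stage1 = train([0, 1])

# Stage 2: grow the feature set; reuse the saved weights as the
# initialisation for the shared features (the weight-saving idea),
# so fewer epochs suffice than training from scratch.
w_stage2 = train([0, 1, 2, 3], w_init=w_stage1 + [0.0, 0.0], epochs=100)
final_error = mse([0, 1, 2, 3], w_stage2)
```

In the paper's setting, the same hand-off happens between successive CNNs as feature subsets are grown, and SHAP values decide which features are worth adding.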
Language: English
Publication date: 8 Jun 2021
Number of pages: 5
External collaborator: University of Pittsburgh
ID: 414201291