A master's thesis from Aalborg University


Learning Inference-friendly Bayesian Networks - using incremental compilation

Authors

;

Term

4th term

Publication year

2008

Abstract

This project examines how to learn the structure of Bayesian networks, focusing on how complex the resulting networks become under different learning choices. We review the junction tree method, which is used to run inference in Bayesian networks, and outline how a junction tree is compiled from a network's structure. Our analysis shows that a major driver of computational complexity is the size of the cliques in the junction tree: larger cliques make inference more costly. To account for this, we propose a scoring function that directly combines a network's log-likelihood (how well the model fits the data) with the total size of its cliques. A weight parameter lets users balance model fit against computational complexity. To speed up the search, we use incremental compilation so the entire junction tree does not need to be re-triangulated for every candidate structure. We compare this score to standard BIC scoring (Bayesian Information Criterion), also augmented with a weight parameter, and evaluate both precision and inference time. The results indicate that including junction tree size in the score is beneficial: networks learned with our function are usually faster to use for inference and, in some cases, remain usable where BIC-based networks become too complex.
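The two scores compared in the abstract can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: the function names, the definition of clique cost as the sum over cliques of the product of variable cardinalities, and the exact placement of the weight parameter are assumptions made for clarity.

```python
# Hypothetical sketch of the weighted scores described in the abstract.
# All names and the exact form of the penalty terms are illustrative.
import math


def clique_cost(cliques, cardinality):
    """Total junction-tree table size: for each clique, the product of the
    state counts of its variables, summed over all cliques."""
    total = 0
    for clique in cliques:
        size = 1
        for var in clique:
            size *= cardinality[var]
        total += size
    return total


def weighted_clique_score(log_likelihood, cliques, cardinality, w):
    """Score in the spirit of the proposal: data fit (log-likelihood)
    minus a weighted penalty for total clique size, so larger w favors
    networks that are cheaper to run inference on."""
    return log_likelihood - w * clique_cost(cliques, cardinality)


def weighted_bic(log_likelihood, num_params, n_samples, w):
    """Standard BIC penalty (parameters times log sample size over two),
    here augmented with a weight parameter as in the comparison."""
    return log_likelihood - w * (num_params / 2) * math.log(n_samples)


# Example: junction tree with cliques {A,B} and {B,C}, where A and C are
# binary and B has three states, gives a clique cost of 2*3 + 3*2 = 12.
cliques = [("A", "B"), ("B", "C")]
cardinality = {"A": 2, "B": 3, "C": 2}
print(clique_cost(cliques, cardinality))                       # 12
print(weighted_clique_score(-100.0, cliques, cardinality, 0.5))  # -106.0
```

Both scores trade fit against complexity, but they penalize different quantities: BIC counts free parameters, while the clique-based score penalizes the actual inference workload, which is what lets it reject structures that fit well but compile into intractably large cliques.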

[This abstract was generated with the help of AI]