Outline of the course
- Introduction
  - What is a graphical model?
  - Directed vs. undirected models
  - Uses: parameter reduction (worked example below), causal exploration
  - Challenges: learning parameters and structure
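To make the parameter-reduction point concrete, here is the standard counting argument (a worked example added for illustration, not part of the outline itself):

```latex
% A full joint table over n binary variables has 2^n - 1 free parameters.
% A DAG factorization with at most k parents per node needs far fewer:
p(x_1,\dots,x_n) \;=\; \prod_{i=1}^{n} p\bigl(x_i \mid \mathrm{pa}(x_i)\bigr)
\quad\Rightarrow\quad \text{at most } n \cdot 2^{k} \text{ parameters}
% e.g. n = 20, k = 3: 2^{20} - 1 = 1{,}048{,}575 vs. 20 \cdot 2^3 = 160.
```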
- Directed Graphical Models (DGMs)
  - Chain rule, conditional independence (factorization example below)
  - Examples: naive Bayes, Markov chains, hidden Markov models
  - Gaussian Bayesian networks
  - Properties: d-separation, Markov blanket
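The chain rule holds for any joint distribution; a DGM's conditional independences are what shrink the conditioning sets. For a first-order Markov chain, for instance:

```latex
% Chain rule (exact for any distribution):
p(x_1, x_2, x_3) = p(x_1)\, p(x_2 \mid x_1)\, p(x_3 \mid x_1, x_2)
% With the Markov assumption x_3 \perp x_1 \mid x_2 this simplifies to:
p(x_1, x_2, x_3) = p(x_1)\, p(x_2 \mid x_1)\, p(x_3 \mid x_2)
```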
- Learning in DGMs
  - Parameter learning from complete data
  - Structure learning
    - Chow–Liu algorithm (see the sketch after this list)
    - Tree-augmented naive Bayes and mixture models
  - Causal DAGs, interventions, do-calculus
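A minimal Chow–Liu sketch, assuming numpy and scipy are available (an illustration, not the course's reference code): estimate pairwise mutual information from the data, then take a maximum-weight spanning tree over it.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_information(x, y):
    """Empirical mutual information between two discrete sample vectors."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def chow_liu_edges(X):
    """X: (n_samples, n_vars) discrete data -> edge list of the Chow-Liu tree."""
    d = X.shape[1]
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            # tiny offset keeps MI == 0 pairs from being dropped as missing edges
            mi[i, j] = mutual_information(X[:, i], X[:, j]) + 1e-9
    # maximum-weight spanning tree == minimum spanning tree on negated weights
    mst = minimum_spanning_tree(-mi)
    return [(int(i), int(j)) for i, j in zip(*mst.nonzero())]

# Toy check on a chain x0 -> x1 -> x2: should recover edges (0, 1) and (1, 2).
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 5000)
x1 = x0 ^ (rng.random(5000) < 0.1)   # noisy copy of x0
x2 = x1 ^ (rng.random(5000) < 0.1)   # noisy copy of x1
print(chow_liu_edges(np.column_stack([x0, x1, x2])))
```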
- Undirected Graphical Models (UGMs / Markov Random Fields)
  - Conditional independence via graph separation
  - Hammersley–Clifford theorem
  - Examples: Ising model, Hopfield networks, Boltzmann machines, RBMs
  - Inference methods: Gibbs sampling (Ising sketch below), variational approximations
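A minimal Gibbs-sampling sketch for the Ising model, assuming numpy (lattice size, coupling, and sweep count chosen only for illustration):

```python
import numpy as np

def gibbs_ising(L=32, beta=0.6, n_sweeps=200, seed=0):
    """Gibbs sampler for a ferromagnetic Ising model on an L x L torus.

    Each spin is resampled from its exact conditional given its four
    neighbours: p(s_ij = +1 | nbrs) = sigmoid(2 * beta * sum(nbrs)).
    """
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_sweeps):
        for i in range(L):
            for j in range(L):
                nb = (s[(i - 1) % L, j] + s[(i + 1) % L, j]
                      + s[i, (j - 1) % L] + s[i, (j + 1) % L])
                p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                s[i, j] = 1 if rng.random() < p_up else -1
    return s

spins = gibbs_ising()
# Above the critical coupling (~0.44) large ordered domains form, so the
# average magnetisation drifts away from 0.
print("magnetisation:", spins.mean())
```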
- Gaussian Graphical Models (GGMs)
  - Covariance and precision matrices
  - Conditional independence structure
  - Estimation: covariance selection, Graphical Lasso (sketch below)
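A Graphical Lasso sketch, assuming scikit-learn is available (values chosen only for illustration). The point to read off: zeros in the estimated precision matrix correspond to conditional independences.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Ground truth: a sparse precision matrix encoding the chain x0 - x1 - x2 - x3.
K = np.array([[2.0, 0.6, 0.0, 0.0],
              [0.6, 2.0, 0.6, 0.0],
              [0.0, 0.6, 2.0, 0.6],
              [0.0, 0.0, 0.6, 2.0]])
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(K), size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)
# Off-diagonal zeros in model.precision_ <=> conditional independences;
# the l1 penalty (alpha) controls how aggressively entries are zeroed.
print(np.round(model.precision_, 2))
```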
- Advanced Topics
  - Inference and Markov properties
  - Relation to exponential family distributions (key identity below)
  - Applications: density estimation, knowledge discovery
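For the exponential-family connection, the key identity is that a UGM with log-linear clique potentials is an exponential family (stated here for reference):

```latex
p_\theta(x) \;=\; \exp\Bigl(\sum_{c \in \mathcal{C}} \theta_c^{\top} \phi_c(x_c) \;-\; A(\theta)\Bigr),
\qquad
A(\theta) \;=\; \log \sum_{x} \exp\Bigl(\sum_{c \in \mathcal{C}} \theta_c^{\top} \phi_c(x_c)\Bigr)
```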
- Exercises & Projects
  - Exercises on directed, undirected, and Gaussian graphical models
  - Example projects:
    - MRF simulation
    - Graphical Lasso
    - RBMs on MNIST (CD-1 sketch after this list)
    - Structure learning with NOTEARS or DAG-GNN
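For the RBM project, a minimal Bernoulli RBM trained with one step of contrastive divergence (CD-1); numpy only, run on toy binary data instead of MNIST to stay self-contained, with all sizes illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM with CD-1 updates."""

    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible biases
        self.c = np.zeros(n_hid)   # hidden biases

    def cd1(self, v0, lr=0.1):
        ph0 = sigmoid(v0 @ self.W + self.c)            # p(h | v0), positive phase
        h0 = (rng.random(ph0.shape) < ph0) * 1.0       # sampled hidden states
        pv1 = sigmoid(h0 @ self.W.T + self.b)          # one-step reconstruction
        ph1 = sigmoid(pv1 @ self.W + self.c)           # negative phase
        m = v0.shape[0]
        self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / m  # CD-1 gradient estimate
        self.b += lr * (v0 - pv1).mean(axis=0)
        self.c += lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)                # reconstruction error

# Toy data: noisy copies of two binary prototypes (stand-ins for digit classes).
protos = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], dtype=float)
X = protos[rng.integers(0, 2, 500)]
X = np.abs(X - (rng.random(X.shape) < 0.05))           # flip 5% of the bits

rbm = RBM(n_vis=6, n_hid=2)
for _ in range(200):
    err = rbm.cd1(X)
print("final reconstruction error:", round(err, 4))
```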
Links
Reference document
The course closely follows and borrows material from Kevin P. Murphy, Machine Learning: A Probabilistic Perspective (MLAPP), chapters 10, 19, 20, and 26.