Graphical Models

Outline of the course

  1. Introduction

    • What is a graphical model?
    • Directed vs. undirected models
    • Uses: parameter reduction, causal exploration
    • Challenges: learning parameters and structure
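To make the parameter-reduction point concrete: a full joint table over n binary variables needs 2^n − 1 free parameters, while a factorized model with few parents per node needs far fewer. A minimal counting sketch (the Markov-chain structure here is just an illustrative choice, not from the course):

```python
# Parameter counting for a distribution over n binary variables.
n = 20

# Full joint table: one probability per configuration, minus 1
# because the entries must sum to 1.
full_joint = 2 ** n - 1

# Markov-chain factorization p(x1) * prod_i p(x_i | x_{i-1}):
# 1 parameter for p(x1), plus 2 per later node (one per parent state).
chain = 1 + 2 * (n - 1)

print(full_joint)  # 1048575
print(chain)       # 39
```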
  2. Directed Graphical Models (DGMs)

    • Chain rule, conditional independence
    • Examples: Naive Bayes, Markov Chains, Hidden Markov Models
    • Gaussian Bayesian Networks
    • Properties: d-separation, Markov blanket
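The chain rule plus a conditional-independence assumption is exactly what Naive Bayes exploits: p(y, x1, x2) = p(y) p(x1 | y) p(x2 | y). A minimal sketch with made-up probability tables (all numbers below are illustrative, not from the lecture):

```python
import numpy as np

# Naive Bayes with a binary class y and two binary features x1, x2.
p_y = np.array([0.6, 0.4])              # p(y)
p_x1_given_y = np.array([[0.9, 0.1],    # p(x1 | y=0)
                         [0.2, 0.8]])   # p(x1 | y=1)
p_x2_given_y = np.array([[0.7, 0.3],
                         [0.5, 0.5]])

# Chain rule + the Naive Bayes independence assumption:
# p(y, x1, x2) = p(y) * p(x1 | y) * p(x2 | y)
x1, x2 = 1, 0
joint = p_y * p_x1_given_y[:, x1] * p_x2_given_y[:, x2]

# Posterior p(y | x1, x2) by normalizing the joint over y.
posterior = joint / joint.sum()
print(posterior)
```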
  3. Learning in DGMs

    • Parameter learning from complete data
    • Structure learning
      • Chow–Liu algorithm
      • Tree-augmented Naive Bayes (TAN) and mixtures of trees
    • Causal DAGs, interventions, do-calculus
  4. Undirected Graphical Models (UGMs / Markov Random Fields)

    • Conditional independence via graph separation
    • Hammersley–Clifford theorem
    • Examples: Ising model, Hopfield networks, Boltzmann Machines, RBMs
    • Inference methods: Gibbs sampling, variational approximations
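Gibbs sampling is straightforward for the Ising model because each spin's conditional given its neighbors is available in closed form. A minimal sketch on a small periodic lattice (lattice size, coupling value, and sweep count are arbitrary choices of mine):

```python
import numpy as np

def gibbs_ising(n=8, beta=0.8, sweeps=200, rng=None):
    """Gibbs sampling for an n x n Ising model with coupling beta.

    Each site is resampled from its exact conditional given its 4 neighbors:
    p(s_ij = +1 | rest) = sigmoid(2 * beta * sum of neighbor spins).
    """
    rng = rng or np.random.default_rng(0)
    s = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                nb = (s[(i - 1) % n, j] + s[(i + 1) % n, j]
                      + s[i, (j - 1) % n] + s[i, (j + 1) % n])
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                s[i, j] = 1 if rng.random() < p_plus else -1
    return s

sample = gibbs_ising()
# Magnetization |mean spin|; tends to be large at beta=0.8 (ordered phase).
print(abs(sample.mean()))
```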
  5. Gaussian Graphical Models (GGMs)

    • Covariance and precision matrices
    • Conditional independence structure
    • Estimation: covariance selection, Graphical Lasso
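The key GGM fact above is that zeros in the precision matrix, not the covariance matrix, encode conditional independence; the Graphical Lasso's L1 penalty exists to recover exactly such zeros. A minimal numpy illustration on a simulated Gaussian chain (the chain setup and noise scales are my own toy choices):

```python
import numpy as np

# Gaussian chain x0 -> x1 -> x2: x1 = x0 + noise, x2 = x1 + noise.
# The (0, 2) precision entry should be near zero, encoding
# x0 independent of x2 given x1, even though cov(x0, x2) != 0.
rng = np.random.default_rng(0)
n = 100_000
x0 = rng.normal(size=n)
x1 = x0 + 0.5 * rng.normal(size=n)
x2 = x1 + 0.5 * rng.normal(size=n)
X = np.column_stack([x0, x1, x2])

cov = np.cov(X, rowvar=False)
prec = np.linalg.inv(cov)

print(cov[0, 2])   # clearly nonzero: x0 and x2 are marginally correlated
print(prec[0, 2])  # near zero: conditionally independent given x1
```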
  6. Advanced Topics

    • Inference and Markov properties
    • Relation to exponential family distributions
    • Applications: density estimation, knowledge discovery
  7. Exercises & Projects

    • Directed/Undirected/Gaussian GM exercises
    • Example projects:
      • MRF simulation
      • Graphical Lasso
      • RBMs on MNIST
      • Structure learning with NOTEARS or DAG-GNN


Reference document

The lecture closely follows and borrows material from
Kevin P. Murphy, Machine Learning: A Probabilistic Perspective (MLAPP), chapters 10, 19, 20, and 26.