Deep Learning Foundations: Andrew Wilson on "How Do We Build Models That Learn and Generalize?"

Course webpage:

Abstract: To answer scientific questions and reason about data, we must build models and perform inference within those models. But how should we approach model construction and inference to make the most successful predictions? How do we represent uncertainty and prior knowledge? How flexible should our models be? Should we use a single model or multiple different models? Should we follow a different procedure depending on how much data are available? How do we learn desirable constraints, such as rotation, translation, or reflection symmetries, when they don't improve the standard training loss? How do we select between models that are entirely consistent with any observed data? What if our test data are drawn from a different but semantically related distribution?

In this lecture, I will present a philosophy for model construction grounded in probability theory. I will exemplify this approach with methods that exploit loss surface geometry for scalable and practical Bayesian deep learning, and with resolutions to seemingly mysterious generalization behaviour such as double descent. I will also consider prior specification, model selection, generalized Bayesian inference, domain shift, and automatic constraint (symmetry) learning.
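To make the "loss surface geometry for scalable Bayesian deep learning" theme concrete, here is a minimal sketch in the spirit of SWAG (Maddox et al., 2019), one of the methods associated with Wilson's group. It is not the talk's or the authors' released code: it tracks the first and second moments of SGD iterates to fit a diagonal Gaussian over the flat region the trajectory explores, then forms a Bayesian model average. The names `model`, `loader`, and `loss_fn` are assumed inputs.

```python
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def fit_swag_diag(model, loader, loss_fn, lr=0.01, epochs=10):
    """Return (mean, var) of SGD weight iterates, one snapshot per epoch."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    mean = torch.zeros_like(parameters_to_vector(model.parameters()))
    sq_mean = torch.zeros_like(mean)
    for n in range(1, epochs + 1):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        w = parameters_to_vector(model.parameters()).detach()
        mean += (w - mean) / n            # running first moment of the iterates
        sq_mean += (w**2 - sq_mean) / n   # running second moment
    var = (sq_mean - mean**2).clamp(min=1e-8)
    return mean, var

@torch.no_grad()
def bma_predict(model, mean, var, x, samples=20):
    """Average softmax predictions over weights drawn from N(mean, diag(var))."""
    preds = []
    for _ in range(samples):
        vector_to_parameters(mean + var.sqrt() * torch.randn_like(mean),
                             model.parameters())
        preds.append(torch.softmax(model(x), dim=-1))
    vector_to_parameters(mean, model.parameters())  # restore the mean weights
    return torch.stack(preds).mean(dim=0)
```

Averaging predictions over posterior samples, rather than predicting from a single weight setting, is what the abstract means by exploiting the broad, flat regions of the loss surface for practical Bayesian deep learning.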
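The question of learning symmetries "when they don't improve standard training loss" also admits a short sketch, here in the spirit of "Learning Invariances in Neural Networks" (Benton et al., 2020): a learnable rotation range is trained jointly with the network, with a regularizer rewarding wider invariance, so the symmetry is discovered even though training loss alone gives no gradient toward it. The 2D-point toy task and the weight `lam` are illustrative assumptions, not details from the talk.

```python
import torch

class LearnedRotation(torch.nn.Module):
    """Rotate 2D inputs by a random angle in [-width, width]; width is learned."""
    def __init__(self):
        super().__init__()
        self.width = torch.nn.Parameter(torch.tensor(0.1))

    def forward(self, x):                      # x: (batch, 2) points in the plane
        u = 2 * torch.rand(x.shape[0]) - 1     # u ~ Uniform(-1, 1)
        theta = self.width * u                 # reparameterized: grads reach width
        c, s = torch.cos(theta), torch.sin(theta)
        rot = torch.stack([torch.stack([c, -s], -1),
                           torch.stack([s, c], -1)], -2)
        return torch.einsum('bij,bj->bi', rot, x)

aug = LearnedRotation()
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 2))
opt = torch.optim.Adam(list(net.parameters()) + list(aug.parameters()), lr=1e-2)
lam = 0.01  # strength of the push toward broader invariance

# Toy task whose labels depend only on radius, hence truly rotation invariant.
x = torch.randn(256, 2)
y = (x.norm(dim=-1) > 1.0).long()

for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(net(aug(x)), y) - lam * aug.width
    loss.backward()
    opt.step()

print(f"learned rotation width: {aug.width.item():.2f} rad")
```

Because the labels are rotation invariant, widening the augmentation costs nothing on the data term, and the regularizer grows the learned width; on a task that is not invariant, the data term would push the width back toward zero.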