Normalizing flows
The talk can be watched here.
Normalizing flows model complex distributions by applying a series of bijective functions to a simple base distribution. They come in very handy because they provide an invertible transformation that enables both density evaluation and sample generation.
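Concretely, for a bijection $f$ mapping base samples $z \sim p_Z$ to data $x = f(z)$, the model density follows from the change-of-variables formula (the notation below is ours, not the talk's):

$$
p_X(x) = p_Z\big(f^{-1}(x)\big)\,\left|\det \frac{\partial f^{-1}(x)}{\partial x}\right|, \qquad x = f(z),\; z \sim p_Z.
$$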
However, those functions, together with their inverses and Jacobian determinants, need to be cheap to compute for the model to stay tractable, which leads to additional assumptions and raises general questions: Which assumptions should we make to keep those computations efficient? Can we model any complex distribution? How useful are normalizing flows in practice?
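As a minimal sketch of these two operations, here is a single affine bijection in Python; the names (`forward`, `inverse`, `log_prob`) and the parameters `a`, `b` are ours, purely for illustration, not anything presented in the talks:

```python
import numpy as np
from scipy.stats import norm

# A single affine bijection x = f(z) = a * z + b, chosen only because its
# inverse and Jacobian are trivial; real flows stack many richer bijections
# (coupling layers, autoregressive maps, ...) with cheap Jacobian determinants.
a, b = 2.0, 1.0

def forward(z):
    """Sampling direction: base sample z -> data sample x."""
    return a * z + b

def inverse(x):
    """Density direction: data point x -> base point z."""
    return (x - b) / a

def log_prob(x):
    """Change of variables: log p_X(x) = log p_Z(f^{-1}(x)) + log|d f^{-1}/dx|."""
    z = inverse(x)
    log_det = -np.log(np.abs(a))  # derivative of the inverse map is 1/a
    return norm.logpdf(z) + log_det  # standard normal base distribution

# Sample generation: push base samples through the bijection.
samples = forward(np.random.randn(5))

# Density evaluation: exact log-likelihood of observed points.
print(log_prob(np.array([0.0, 1.0, 3.0])))
```

Stacking many such bijections makes the model expressive while the log-determinants simply add up, which is exactly the tractability constraint discussed above.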
- Dimitris Kalatzis is a PhD student at the Technical University of Denmark. He is working on generative models and geometric machine learning. He will give a short introduction to normalizing flows before sharing a more geometric perspective.
- Polina Kirichenko is a PhD student at the New York University Center for Data Science. Her main interests lie in probabilistic deep learning, uncertainty estimation and generative models. She will present two of her latest papers: *Why Normalizing Flows Fail to Detect Out-of-Distribution Data* and *Semi-Supervised Learning with Normalizing Flows*.
- Antoine Wehenkel is a PhD student at the University of Liège in Belgium. His main research interests revolve around statistics, machine learning and information theory. He will revisit normalizing flows as probabilistic graphical models, and will present his previous work: *You say Normalizing Flows I see Bayesian Networks*.