{"success":1,"msg":"","color":"rgb(28, 35, 49)","title":"Generalized Bayes for probabilistic uncertainty quantification in unsupervised learning","description":"webinar","title2":"","start":"2020-07-23 13:00","end":"2020-07-23 14:00","responsable":"Garritt Page","speaker":"David B. Dunson","id":"4","type":"webinar","timezone":"America\/New_York","activity":"The ZOOM link is: https:\/\/byu.zoom.us\/j\/8014227269","abstract":"Loss-based optimization algorithms provide the most commonly used methods for unsupervised learning, focusing on inferring latent structure in data. Canonical examples include k-means and PCA. Bayesian alternatives to k-means focus on model-based clustering via mixture models, while Bayesian alternatives to PCA focus on latent factor modeling. Both types of approaches are much less widely used than their loss-based competitors due to the complexity of their implementations and issues with brittleness and sensitivity to model misspecification. To implement a fully Bayesian analysis, we need a likelihood for everything, and such likelihoods are very hard to specify accurately. The generalized Bayes framework using Gibbs posteriors provides a general alternative to likelihood-based Bayesian inference that can be viewed as a coherent update of Bayesian beliefs. Here we develop general theory and methods showing how G-Bayes can be used to provide probabilistic Bayesian implementations of loss-based unsupervised learning algorithms such as k-means and PCA, providing uncertainty quantification and borrowing the best of both worlds from the loss and Bayes frameworks."}