Predictive belief representations in graphical models
Wednesday, March 25, 2015
3:30 pm - 4:30 pm
Gross Hall 330
Geoff Gordon, Carnegie Mellon University
12:00 noon: Lunch and discussion for students, graduate students, and postdocs
3:30 pm: Seminar and reception

Latent-variable graphical models are a general language for describing learning and inference problems. Unfortunately, working with these models can be computationally expensive: for example, even a simple learning problem such as maximum likelihood for HMMs can have intractably many local optima. Recently, spectral and method-of-moments estimators have made it practical to solve learning and inference problems in some classes of latent-variable models, providing computationally efficient algorithms that still retain good statistical properties. We unify and extend a number of such results: in particular, we reduce latent-variable learning to supervised prediction via the idea of predictive belief representations. Predictive belief representations eliminate the graphical-model concepts of latent variables and messages, and replace them with interpretable predictions of directly observable statistics. In the process, they make it easy to derive learning and inference algorithms for new classes of latent-variable models. These new algorithms can take advantage of the wide variety of well-studied supervised prediction methods, providing capabilities that are absent from other spectral and method-of-moments estimators.
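To make the abstract's idea concrete, the following is a minimal, hypothetical sketch (not the speaker's code) of a spectral method-of-moments estimator for an HMM, in the spirit of predictive belief representations: the "state" is a vector of predictions about observable statistics, and inference uses observable operators rather than latent variables or messages. All names (`pi`, `T`, `O`, `B`) and the toy parameters are illustrative assumptions; exact population moments are used in place of data estimates so the sketch stays exact.

```python
# Illustrative sketch of a spectral / method-of-moments HMM estimator.
# Assumed toy parameters; in practice P1, P21, P3x1 are estimated from data.
import numpy as np

# A small ground-truth HMM: k hidden states, n observation symbols.
pi = np.array([0.7, 0.3])                     # initial state distribution
T = np.array([[0.8, 0.3],                     # T[i, j] = Pr(h'=i | h=j)
              [0.2, 0.7]])
O = np.array([[0.5, 0.1],                     # O[x, j] = Pr(obs=x | h=j)
              [0.3, 0.3],
              [0.2, 0.6]])
k, n = 2, 3

# Low-order moments of directly observable statistics.
P1 = O @ pi                                   # Pr(x1)
P21 = O @ T @ np.diag(pi) @ O.T               # Pr(x2, x1)
P3x1 = [O @ T @ np.diag(O[x]) @ T @ np.diag(pi) @ O.T  # Pr(x3, x2=x, x1)
        for x in range(n)]

# Spectral step: project onto the top-k left singular vectors of P21.
U = np.linalg.svd(P21)[0][:, :k]
b1 = U.T @ P1                                 # initial predictive state
binf = np.linalg.pinv(P21.T @ U) @ P1         # normalization vector
B = [U.T @ P3x1[x] @ np.linalg.pinv(U.T @ P21)  # observable operator per symbol
     for x in range(n)]

def prob_spectral(seq):
    """Pr(x1..xt) via observable operators -- no latent variables appear."""
    b = b1
    for x in seq:
        b = B[x] @ b
    return float(binf @ b)

def prob_forward(seq):
    """Reference answer: classical forward algorithm on the true HMM."""
    alpha = O[seq[0]] * pi
    for x in seq[1:]:
        alpha = O[x] * (T @ alpha)
    return float(alpha.sum())

seq = [0, 2, 1]
print(prob_spectral(seq), prob_forward(seq))  # the two agree closely
```

With exact moments the spectral computation matches the forward algorithm; with moments estimated from samples it converges as data grows, which is the statistical-efficiency point the abstract makes, without ever running EM over latent states.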