Asymptotic and finite-sample properties of estimators based on stochastic gradients

Tuesday, February 7, 2017

3:30 pm - 4:30 pm
Gross Hall, 330 -- Ahmadieh Family Grand Hall

Presenter

Panagiotis (Panos) Toulis, University of Chicago

Note: student lunch at 11:45 am; seminar at 3:30 pm.

Abstract

Stochastic gradient descent procedures have gained popularity for iterative parameter estimation from large datasets because they are simpler and faster than classical optimization procedures. However, their statistical properties are not well understood in theory, and in practice avoiding numerical instability requires careful tuning of key parameters. In this talk, we will discuss implicit stochastic gradient descent procedures, in which the parameter updates are implicitly defined. Intuitively, implicit updates shrink standard stochastic gradient descent updates, with the amount of shrinkage depending on the observed Fisher information matrix, which never needs to be explicitly computed. Implicit procedures therefore increase stability without increasing the computational burden. Our theoretical analysis provides the first full characterization of the asymptotic behavior of both standard and implicit stochastic gradient descent estimators, including finite-sample error bounds. Importantly, analytical expressions for the variances of these stochastic gradient estimators reveal their exact loss of efficiency, which enables principled statistical inference on large datasets. Part of ongoing work focuses on a crucial aspect of inference with stochastic approximation procedures: knowing when the procedure has reached its asymptotic regime.
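For intuition, here is a minimal sketch (not the speaker's implementation) contrasting the standard and implicit updates for least-squares regression, a case where the implicit update has a closed form. The data, learning-rate schedule, and function names are illustrative assumptions.

    # Minimal sketch: explicit vs. implicit SGD for least-squares regression.
    # The implicit update theta_n = theta_{n-1} + gamma_n*(y_n - x_n'theta_n)*x_n
    # solves to a shrunken explicit step with factor 1 / (1 + gamma_n*||x_n||^2),
    # the shrinkage described in the abstract.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 5, 10_000
    theta_true = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = X @ theta_true + rng.normal(scale=0.1, size=n)

    def sgd(X, y, implicit, gamma0=0.5):
        theta = np.zeros(X.shape[1])
        for i in range(len(y)):
            gamma = gamma0 / (1 + i)      # decaying learning rate (assumed schedule)
            x = X[i]
            resid = y[i] - x @ theta      # residual at the current iterate
            if implicit:
                # Implicit update: solve the fixed point exactly; the
                # denominator shrinks the step, which stabilizes the iteration.
                theta = theta + (gamma / (1 + gamma * (x @ x))) * resid * x
            else:
                # Standard (explicit) update: sensitive to the choice of gamma.
                theta = theta + gamma * resid * x
        return theta

    for implicit in (False, True):
        est = sgd(X, y, implicit)
        label = "implicit" if implicit else "explicit"
        print(label, "error:", np.linalg.norm(est - theta_true))

Note that the implicit step never forms the Fisher information matrix: the shrinkage enters only through the scalar denominator, so the per-iteration cost matches standard SGD.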

Contact

Ariel Dawn
919-684-9312
ariel.dawn@duke.edu