Events Calendar

Non-Convex Optimization Through Stochastic Gradient Descent, Phase Retrieval As A Case Study
Prof Yan Shuo Tan
University of California, Berkeley
Wednesday 24 July 2019, 03:00pm - 04:00pm
S16-05-96, DSAP Computer Lab 4

Despite the development of many new optimization techniques, constant step-size stochastic gradient descent remains the tool of choice in modern machine learning. It performs unreasonably well in practice, both in convex settings, where its theoretical guarantees are weak, and in non-convex settings, where there are often no guarantees at all. In this talk, we will sketch a proof of why it works for phase retrieval: the problem of solving systems of rank-1 quadratic equations, which can be formulated as the minimization of a non-convex, non-smooth objective.
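As a concrete (if simplified) illustration of the setup, the following sketch applies constant step-size SGD to a common amplitude-based phase retrieval loss on a random Gaussian instance. The problem sizes, step size, and loss are my own illustrative choices, not the speaker's exact model or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance (illustrative, not the speaker's exact model):
# recover x_star from magnitude-only measurements y_i = |<a_i, x_star>|.
n, m = 20, 400                       # signal dimension, number of measurements
x_star = rng.standard_normal(n)
A = rng.standard_normal((m, n))      # rows a_i are Gaussian sensing vectors
y = np.abs(A @ x_star)

def objective(x):
    # Non-convex, non-smooth amplitude-based loss:
    # f(x) = (1/2m) * sum_i (|<a_i, x>| - y_i)^2
    return 0.5 * np.mean((np.abs(A @ x) - y) ** 2)

x0 = rng.standard_normal(n)          # random initialization
x = x0.copy()
step = 1.0 / n                       # constant step size, fixed throughout
for _ in range(50 * m):
    i = rng.integers(m)              # sample one measurement uniformly
    r = A[i] @ x
    # A subgradient of the single-sample loss (|r| - y_i)^2 / 2 in x:
    g = (np.abs(r) - y[i]) * np.sign(r) * A[i]
    x -= step * g

# The signal is only identifiable up to a global sign flip.
rel_err = min(np.linalg.norm(x - x_star),
              np.linalg.norm(x + x_star)) / np.linalg.norm(x_star)
print(f"objective: {objective(x0):.3f} -> {objective(x):.3g}, "
      f"relative error {rel_err:.3g}")
```

Note the objective is non-smooth at points where some inner product `A[i] @ x` vanishes, so the update uses a subgradient; the step size stays constant rather than decaying, matching the regime discussed in the talk.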

Our analysis relies on several new probabilistic arguments, including an "anti-concentration on wedges" property and a "summary state space" analysis.