Many problems in machine learning can be cast as minimizing a convex loss function subject to inequality constraints. As a result, the success of modern machine learning depends on convex optimization methods that scale to problems the size of the World Wide Web. Problems in this class include basis pursuit, compressed sensing, graph reconstruction via precision matrix estimation, and matrix completion under rank constraints. One of the most popular optimization methods is the Alternating Direction Method of Multipliers (ADMM). It scales extremely well, but its convergence rate can be erratic. In this talk I will introduce the problem and the algorithm, present some applications, and show how linear algebra can explain the erratic behavior.
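The abstract names ADMM but does not spell out its iterations. For orientation only, here is a minimal sketch of scaled-form ADMM applied to the lasso (a close relative of the basis pursuit problems mentioned above), assuming a NumPy setting; the function name admm_lasso, the penalty parameter rho, and all numerical values are illustrative assumptions, not part of the talk.

```python
import numpy as np

def soft_threshold(v, kappa):
    # elementwise soft-thresholding: proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Scaled-form ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||x||_1,
    split into an x-update (a ridge-like linear solve), a z-update (soft
    thresholding), and a dual update on the consensus constraint x = z."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    Atb = A.T @ b
    # factor (A^T A + rho I) once; every x-update reuses this Cholesky factor
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(n_iter):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: proximal step on the l1 term
        z = soft_threshold(x + u, lam / rho)
        # dual ascent step
        u = u + x - z
    return z

# tiny usage example on synthetic data (illustrative values only)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b, lam=0.1)
```

The appeal in large-scale settings is that each subproblem is cheap and separable, while the erratic convergence behavior discussed in the talk shows up in how fast the residual x - z shrinks across iterations.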
Full Disclaimer
This video is archived and disseminated for educational purposes only. It is presented here with the permission of the speakers, who have mandated the means of dissemination.
Statements of fact and opinions expressed are those of the individual participants. HKBU and its Library assume no responsibility for the accuracy, validity, or completeness of the information presented.
Any downloading, storage, reproduction, and redistribution, in part or in whole, are strictly prohibited without the prior permission of the respective speakers. Please strictly observe the copyright law.