While thousands of papers on federated learning have appeared in the literature in recent years, very few are concerned with its use in production. As perhaps the only practical paradigm that preserves data privacy while training a shared machine learning model, federated learning, I expect, will be widely used in production. But are the research results in the literature valid and practical in production federated learning systems? In this talk, I will share our recent experiences with claims in the existing literature on privacy leakage attacks, and show that their assumptions do not necessarily hold in production systems. I will also introduce more efficient ways to solve the "unlearning" problem, which production systems must address to meet regulatory constraints such as the GDPR. Our experiments were conducted on Plato, a new open-source federated learning framework that I designed from scratch over the past two years to be as close to production systems as possible while using a minimal amount of computing resources.
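To illustrate the paradigm the talk builds on: in federated learning, each client trains on its own private data and only model updates, never the raw data, are sent to a server for aggregation. The sketch below shows federated averaging (FedAvg), the canonical aggregation rule, on a toy one-parameter linear model. All names and the model setup are illustrative assumptions for this abstract, not Plato's actual API.

```python
def local_update(weights, data, lr=0.1, epochs=5):
    """One client's local training: gradient steps fitting y = w * x."""
    w = weights
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
            w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: dataset-size-weighted average of weights."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two clients whose private data both follow y = 3x; the raw (x, y) pairs
# stay on-device, and only the locally trained weights reach the server.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0)],
]

global_w = 0.0
for _ in range(10):  # ten communication rounds
    updates = [local_update(global_w, d) for d in clients]
    global_w = fed_avg(updates, [len(d) for d in clients])

print(round(global_w, 2))
```

The privacy-leakage attacks mentioned in the abstract target exactly the weight updates exchanged in this loop, which is why their threat-model assumptions matter in production.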
Full Disclaimer
This video is archived and disseminated for educational purposes only. It is presented here with the permission of the speakers, who have mandated the means of dissemination.
Statements of fact and opinions expressed are those of the individual participants. The HKBU and its Library assume no responsibility for the accuracy, validity, or completeness of the information presented.
Any downloading, storage, reproduction, and redistribution, in part or in whole, are strictly prohibited without the prior permission of the respective speakers. Please strictly observe the copyright law.