It's been about a decade since the advent of deep learning. During this time, it has transformed not only the field of computer science, but also numerous scientific and industrial domains. One of the main advantages of deep learning is its ability to learn feature extractors. In this talk, we will review the statistical feature extraction methods developed by our research team before and after the advent of deep learning, and explore their relevance to state-of-the-art neural network architectures. We will also discuss the future direction of feature extraction, with an emphasis on the space - particularly the non-Euclidean space - that serves as the basis for the construction of deep neural networks.
This video is archived and disseminated for educational purposes only. It is presented here with the permission of the speakers, who have mandated the means of dissemination.
Statements of fact and opinions expressed are those of the individual participants. HKBU and its Library assume no responsibility for the accuracy, validity, or completeness of the information presented.
Any downloading, storage, reproduction, and redistribution, in part or in whole, are strictly prohibited without the prior permission of the respective speakers. Please strictly observe the copyright law.