Self-supervised Learning and its applications to speech processing

Time / Place:

⏱️ 10/7 (Thur.) 21:10-21:35 at Online Track 2

Abstract:

There is a trend in the machine learning community to adopt self-supervised approaches to pre-train deep networks. Self-supervised learning uses proxy supervised tasks, such as distinguishing parts of the input signal from distractors or generating masked input segments conditioned on the unmasked ones, to obtain training signals from unlabeled corpora. These approaches make it possible to use the tremendous amount of unlabeled data on the web to train large networks and solve complicated tasks.
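As a rough illustration of the masked-prediction idea described above, the sketch below hides a fraction of the frames in an unlabeled feature sequence and trains a small network to reconstruct them from the surrounding context. It is not code from the talk; the random "acoustic" features, the GRU encoder, the 15% masking ratio, and all names are illustrative assumptions.

```python
# Minimal masked-prediction pretext task (illustrative sketch, not from the talk).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for acoustic features from unlabeled speech: (batch, time, feature_dim).
features = torch.randn(8, 50, 40)

# Randomly mask 15% of the time steps; the "labels" of the pretext task
# are simply the original, unmasked frames.
mask = torch.rand(features.shape[:2]) < 0.15           # (batch, time), bool
masked_input = features.clone()
masked_input[mask] = 0.0                                # zero out masked frames

# A small bidirectional encoder stands in for the large pre-trained network.
encoder = nn.GRU(input_size=40, hidden_size=64,
                 batch_first=True, bidirectional=True)
predictor = nn.Linear(128, 40)                          # reconstruct each frame
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

for step in range(100):
    hidden, _ = encoder(masked_input)                   # (batch, time, 128)
    reconstruction = predictor(hidden)                  # (batch, time, 40)
    # Compute the loss only at masked positions: predict hidden frames from context.
    loss = nn.functional.l1_loss(reconstruction[mask], features[mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final reconstruction loss on masked frames: {loss.item():.3f}")
```

After pre-training on such a proxy task, the encoder (not the predictor) would be kept and fine-tuned on a downstream speech task with labeled data.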

ELMo, BERT, and GPT in NLP are famous examples of this direction. However, much of the territory in self-supervised learning remains unexplored.

This talk will introduce self-supervised learning, and show how to apply this technology to speech processing tasks.

Biography:

Hung-yi Lee (李宏毅)
  • Website: https://speech.ee.ntu.edu.tw/~hylee/index.html
  • Associate Professor, Department of Electrical Engineering, National Taiwan University
  • Hung-yi Lee is currently an associate professor in the Department of Electrical Engineering of National Taiwan University (NTU), with a joint appointment in the Department of Computer Science & Information Engineering.

    He received his Ph.D. degree from NTU in 2012. From 2012 to 2013, he was a postdoctoral fellow at the Research Center for Information Technology Innovation, Academia Sinica.

    From 2013 to 2014, he was a visiting scientist at the Spoken Language Systems Group of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He has given 3-hour tutorials at ICASSP 2018, APSIPA 2018, ISCSLP 2018, Interspeech 2019, SIPS 2019, and Interspeech 2020.

    He is the co-organizer of the special session "New Trends in Self-supervised Speech Processing" at Interspeech 2020 and of the workshop "Self-Supervised Learning for Speech and Audio Processing" at NeurIPS 2020. He is a member of the IEEE Speech and Language Processing Technical Committee (SLTC). He also runs a YouTube channel teaching deep learning, which has drawn more than 6.9 million total views and 80,000 subscribers.
