XLS-R: Large-Scale Cross-lingual Speech Representation Learning on 128 Languages
Speaker: Changhan Wang, Meta AI Research
In this talk, Changhan will present XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0. XLS-R has up to 2B parameters and was trained on nearly half a million hours of publicly available speech audio in 128 languages, an order of magnitude more public data than the largest known prior work. On the CoVoST-2 speech translation benchmark, XLS-R improves the previous state of the art by an average of 7.4 BLEU over 21 translation directions into English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice, and VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107 language identification. The XLS-R team hopes to work together …
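
For listeners who want to experiment with the released checkpoints, below is a minimal sketch (not part of the talk) of extracting frame-level speech representations, assuming the Hugging Face Transformers release of XLS-R and the publicly available checkpoint name "facebook/wav2vec2-xls-r-300m"; the larger 1B and 2B variants follow the same interface.

# Minimal sketch: extract XLS-R representations with Hugging Face Transformers.
# Assumes `torch` and `transformers` are installed and the checkpoint name below exists on the Hub.
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

model_name = "facebook/wav2vec2-xls-r-300m"  # smaller released variant; 1B/2B use the same API
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)
model.eval()

# One second of dummy 16 kHz audio stands in for a real utterance.
waveform = torch.zeros(16000)

inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Frame-level representations, shape (batch, frames, hidden_size),
# which can feed downstream ASR, translation, or language-ID heads.
print(outputs.last_hidden_state.shape)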