Self-supervised learning (SSL) has made significant advances in speech representation learning. Models like wav2vec 2.0 and HuBERT have achieved state-of-the-art results in tasks such as speech recognition, particularly in monolingual settings. However, multilingual SSL models tend to underperform their monolingual counterparts on each individual language, especially in multilingual scenarios with few languages, such as the bilingual setting. In this work, we investigate a novel approach to reduce this performance gap by introducing limited visual grounding into bilingual speech SSL models. Our results show that visual grounding benefits both monolingual and bilingual models, with especially pronounced gains for the latter, reducing the multilingual performance gap on zero-shot phonetic discrimination from 31.5% for audio-only models to 8.04% with grounding.
- † Tampere University
- ** Work done while at Apple