The rapid progress of foundation models and large language models (LLMs) has fueled significant improvement in the capabilities of machine learning systems that benefit from multimodal input data. However, current multimodal models are predominantly built on top of pre-trained LLMs, which can restrict accurate modeling of temporal dependencies across other modalities and thus limit the model's ability to jointly process and leverage multimodal inputs. To specifically investigate the alignment of text, video, and speech modalities in LLM-style (decoder-only) models, we consider a simplified multimodal generation task, Video-Text to Speech (VTTS): speech generation conditioned on both its corresponding text and video of talking people. The ultimate goal is to generate speech that not only follows the text but also aligns temporally with the video and is consistent with the facial expressions. In this paper, we first introduce Visatronic, a unified multimodal decoder-only transformer model that adopts an LLM-style architecture to embed visual, textual, and speech inputs into a shared subspace, treating all modalities as temporally aligned token streams. Next, we carefully explore different token mixing strategies to understand the best way to propagate information from the steps where the video and text conditioning is input to the steps where the audio is generated. We extensively evaluate Visatronic on the challenging VoxCeleb2 dataset and demonstrate zero-shot generalization to LRS3, where Visatronic, trained on VoxCeleb2, achieves a 4.5% WER, outperforming prior SOTA methods trained only on LRS3, which report a 21.4% WER. This highlights significant gains across objective metrics, such as word error rate and phoneme-level synchronization, as well as subjective assessments of naturalness and expressiveness. Furthermore, we propose a new objective metric, TimeSync, specifically designed to measure phoneme-level temporal alignment between generated and reference speech, further ensuring synchronization quality.
- † Technische Universität Darmstadt
Figure 1: Visatronic overview. In addition to the existing text-to-speech (left) and lips-to-speech (middle) tasks, we address a multimodal generative task (right), video-text to speech (VTTS), where the model is conditioned on the video of talking people and the corresponding text transcriptions in order to generate speech.