UK Tech Insider
    Machine Learning & Research

Visatronic: A Multimodal Decoder-Only Model for Speech Synthesis

By Oliver Chambers, July 17, 2025


The rapid progress of foundation models and large language models (LLMs) has fueled significant improvement in the capabilities of machine learning systems that benefit from multimodal input data. However, current multimodal models are predominantly built on top of pre-trained LLMs, which can restrict accurate modeling of temporal dependencies across other modalities and thus limit the model's ability to jointly process and leverage multimodal inputs. To specifically investigate the alignment of text, video, and speech modalities in LLM-style (decoder-only) models, we consider a simplified multimodal generation task, Video-Text to Speech (VTTS): speech generation conditioned on both its corresponding text and video of talking people. The ultimate goal is to generate speech that not only follows the text but also aligns temporally with the video and is consistent with the facial expressions.

In this paper, we first introduce Visatronic, a unified multimodal decoder-only transformer model that adopts an LLM-style architecture to embed visual, textual, and speech inputs into a shared subspace, treating all modalities as temporally aligned token streams. Next, we carefully explore different token mixing strategies to understand the best way to propagate information from the steps where video and text conditioning is input to the steps where the audio is generated. We extensively evaluate Visatronic on the challenging VoxCeleb2 dataset and demonstrate zero-shot generalization to LRS3, where Visatronic, trained on VoxCeleb2, achieves a 4.5% WER, outperforming prior SOTA methods trained only on LRS3, which report a 21.4% WER. This highlights significant gains across objective metrics, such as word error rate and phoneme-level synchronization, as well as subjective assessments of naturalness and expressiveness. Additionally, we propose a new objective metric, TimeSync, specifically designed to measure phoneme-level temporal alignment between generated and reference speech, further ensuring synchronization quality.

† Technische Universität Darmstadt

Figure 1: Visatronic overview. In addition to existing text-to-speech (left) and lips-to-speech tasks (middle), we address a multimodal generative task (right), video-text to speech (VTTS), where the model is conditioned on the video of talking people and corresponding text transcriptions in order to generate speech.
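The TimeSync metric mentioned in the abstract measures phoneme-level temporal alignment between generated and reference speech. The article does not reproduce its exact definition, so the following is only a plausible stand-in: given matching phoneme sequences with start/end times, it reports the mean absolute offset between corresponding phoneme midpoints.

```python
def timesync_sketch(ref, gen):
    """Mean absolute midpoint offset between aligned phoneme intervals.

    ref, gen: lists of (phoneme, start, end) tuples with identical
    phoneme sequences. This is an illustrative approximation of a
    phoneme-level synchronization score, not the paper's definition.
    """
    assert [p for p, _, _ in ref] == [p for p, _, _ in gen], \
        "phoneme sequences must match"
    offsets = [abs((rs + re) / 2 - (gs + ge) / 2)
               for (_, rs, re), (_, gs, ge) in zip(ref, gen)]
    return sum(offsets) / len(offsets)

# Generated speech whose first phoneme starts 50 ms late:
score = timesync_sketch(
    ref=[("AH", 0.00, 0.10), ("B", 0.10, 0.30)],
    gen=[("AH", 0.05, 0.15), ("B", 0.10, 0.30)],
)
```

A lower score indicates tighter synchronization; a real implementation would first obtain phoneme boundaries from a forced aligner.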
