    Machine Learning & Research

    FastVLM: Efficient Vision Encoding for Vision Language Models

    By Charlotte Li | April 19, 2025 (Updated: April 29, 2025) | 2 Mins Read


    Scaling the input image resolution is essential for enhancing the performance of Vision Language Models (VLMs), particularly in text-rich image understanding tasks. However, common vision encoders such as ViTs become inefficient at high resolutions due to the large number of tokens and high encoding latency. At different operational resolutions, the vision encoder of a VLM can be optimized along two axes: reducing encoding latency and minimizing the number of visual tokens passed to the LLM, thereby lowering overall latency. Based on a comprehensive efficiency analysis of the interplay between image resolution, vision latency, token count, and LLM size, we introduce FastVLM, a model that achieves an optimized trade-off between resolution, latency, and accuracy. FastVLM incorporates FastViTHD, a novel hybrid vision encoder designed to output fewer tokens and significantly reduce encoding time for high-resolution images. Unlike previous methods, FastVLM achieves the optimal balance between visual token count and image resolution solely by scaling the input image, eliminating the need for additional token pruning and simplifying the model design. In the LLaVA-1.5 setup, FastVLM achieves a 3.2x improvement in time-to-first-token (TTFT) while maintaining comparable performance on VLM benchmarks compared to prior works. Compared to LLaVA-OneVision at the highest resolution (1152×1152), FastVLM achieves comparable performance on key benchmarks like SeedBench and MMMU, using the same 0.5B LLM, but with an 85x faster TTFT and a vision encoder that is 3.4x smaller.
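    The trade-off the abstract describes can be made concrete with a small back-of-the-envelope sketch: time-to-first-token is roughly the vision encoder's latency plus the LLM's prefill time over the visual tokens the encoder emits, so reducing either term helps and reducing both compounds. The sketch below is illustrative only; the function and all latency figures are hypothetical placeholders, not numbers from the paper.

```python
# A minimal, hypothetical sketch of the TTFT decomposition described above:
# time-to-first-token ~= vision encoder latency + LLM prefill over the visual
# tokens the encoder emits. All latency figures are made up for illustration
# and are not measurements from the FastVLM paper.

def estimate_ttft_ms(vision_encode_ms: float,
                     num_visual_tokens: int,
                     prefill_ms_per_token: float) -> float:
    """Rough TTFT: encode the image, then prefill its tokens through the LLM."""
    return vision_encode_ms + num_visual_tokens * prefill_ms_per_token


# Hypothetical comparison: a ViT-style encoder emitting many tokens at high
# resolution vs. a hybrid encoder (in the spirit of FastViTHD) emitting fewer
# tokens with lower encoding latency.
vit_style = estimate_ttft_ms(vision_encode_ms=180.0,
                             num_visual_tokens=576,
                             prefill_ms_per_token=0.4)
hybrid_style = estimate_ttft_ms(vision_encode_ms=60.0,
                                num_visual_tokens=144,
                                prefill_ms_per_token=0.4)

print(f"ViT-style encoder TTFT estimate: {vit_style:.0f} ms")
print(f"Hybrid encoder TTFT estimate:    {hybrid_style:.0f} ms")
print(f"Illustrative speedup:            {vit_style / hybrid_style:.1f}x")
```

    The point is simply that fewer visual tokens cut the LLM prefill cost on top of any savings inside the encoder itself, which is why the paper treats encoder latency and visual token count as joint design axes rather than optimizing either one alone.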
