UK Tech Insider
    Machine Learning & Research

Language Models Improve When Pretraining Data Matches Target Tasks

By Oliver Chambers, July 19, 2025


Every data selection method inherently has a target. In practice, these targets often emerge implicitly through benchmark-driven iteration: researchers develop selection strategies, train models, measure benchmark performance, then refine accordingly. This raises a natural question: what happens when we make this optimization explicit? To explore this, we propose benchmark-targeted ranking (BETR), a simple method that selects pretraining documents based on similarity to benchmark training examples. BETR embeds benchmark examples and a sample of pretraining documents in a shared space, scores this sample by similarity to benchmarks, then trains a lightweight classifier to predict these scores for the full corpus.
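The BETR pipeline can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: random vectors stand in for real document embeddings, max cosine similarity is one plausible way to aggregate similarity to benchmark examples, and a ridge regression stands in for the lightweight score predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings. In practice these would come from a text embedder
# applied to benchmark training examples and sampled pretraining documents.
bench_emb = rng.normal(size=(32, 64))    # benchmark training examples
sample_emb = rng.normal(size=(500, 64))  # sampled pretraining documents

def normalize(x):
    """L2-normalize rows so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

bench_emb, sample_emb = normalize(bench_emb), normalize(sample_emb)

# Score each sampled document by its highest cosine similarity to any
# benchmark example (one plausible aggregation choice).
scores = (sample_emb @ bench_emb.T).max(axis=1)

# Train a lightweight predictor of these scores (here, closed-form ridge
# regression on the embeddings) so the full corpus can be scored cheaply.
lam = 1e-2
A = sample_emb.T @ sample_emb + lam * np.eye(64)
w = np.linalg.solve(A, sample_emb.T @ scores)

# Apply the predictor to the "full corpus" and keep the top fraction.
corpus_emb = normalize(rng.normal(size=(2000, 64)))
pred = corpus_emb @ w
keep_frac = 0.1
threshold = np.quantile(pred, 1 - keep_frac)
selected = np.flatnonzero(pred >= threshold)
print(len(selected))  # about 10% of the corpus survives filtering
```

In a real system the expensive step is embedding and scoring, which is why the sampled scores are distilled into a cheap classifier that can sweep billions of documents.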
We compare data selection methods by training over 500 models spanning 10¹⁹ to 10²² FLOPs and fitting scaling laws to them. From this, we find that simply aligning pretraining data to evaluation benchmarks using BETR achieves a 2.1x compute multiplier over DCLM-Baseline (4.7x over unfiltered data) and improves performance on 9 out of 10 tasks across all scales. BETR also generalizes well: when targeting a diverse set of benchmarks disjoint from our evaluation suite, it still matches or outperforms baselines. Our scaling analysis further reveals a clear trend: larger models require less aggressive filtering. Overall, our findings show that directly matching pretraining data to target tasks precisely shapes model capabilities and highlight that optimal selection strategies must adapt to model scale.
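A compute multiplier of this kind is read off fitted scaling laws: fit loss-versus-FLOPs curves for each data recipe, then ask how much extra compute the baseline needs to match the better recipe's loss. The sketch below uses synthetic points constructed so the answer is 2.1x by design; the functional form L = a·C^(-b) and the shared exponent are simplifying assumptions, not the paper's exact fitting procedure.

```python
import numpy as np

# Synthetic illustration: both recipes follow L = a * C^(-b) with the same
# exponent, and the better recipe behaves like the baseline given 2.1x
# more compute (the multiplier reported for BETR over DCLM-Baseline).
flops = np.array([1e19, 1e20, 1e21, 1e22])
baseline_loss = 3.0 * flops ** -0.05
betr_loss = 3.0 * (2.1 * flops) ** -0.05

def fit_power_law(c, loss):
    """Fit log L = log a - b log C by least squares; return (a, b)."""
    slope, intercept = np.polyfit(np.log(c), np.log(loss), 1)
    return np.exp(intercept), -slope

a1, b1 = fit_power_law(flops, baseline_loss)
a2, b2 = fit_power_law(flops, betr_loss)

# Compute multiplier: solve a1 * (m*C)^(-b1) = a2 * C^(-b2) for m,
# which is exact when the fitted exponents agree (b1 == b2).
multiplier = (a1 / a2) ** (1.0 / b1)
print(round(multiplier, 2))  # recovers the built-in 2.1x multiplier
```

With real training runs the two fitted exponents generally differ, so the multiplier becomes a function of the compute budget rather than a single constant.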

    • † University of Washington
    • ‡ Stanford
    • § Anthropic
    • ** Work done while at Apple