Large foundation models are typically trained on data from multiple domains, with the data mixture (the proportion of each domain used) playing a critical role in model performance. The standard approach to selecting this mixture relies on trial and error, which becomes impractical for large-scale pretraining. We propose a systematic method to determine the optimal data mixture for any target domain using scaling laws. Our approach accurately predicts the loss of a model of size N trained with D tokens and a specific domain weight vector h. We validate the universality of these scaling laws by demonstrating their predictive power in three distinct and large-scale settings: large language model (LLM), native multimodal model (NMM), and large vision model (LVM) pretraining. We further show that these scaling laws can extrapolate to new data mixtures and across scales: their parameters can be accurately estimated using a few small-scale training runs, and then used to predict performance at larger scales and unseen domain weights. The scaling laws also allow us to derive the optimal domain weights for any target domain under a given training budget (N, D), providing a principled alternative to costly trial-and-error methods.
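
As a rough illustration of the workflow described above (fit a loss law on a few small-scale runs, then optimize the domain weights for a target budget), the sketch below assumes a generic additive form L(N, D, h) = E + A(h)/N^alpha + B(h)/D^beta with log-linear domain terms. This is not the paper's actual parameterization; all function names and the fitting/optimization choices are illustrative assumptions.

```python
# Illustrative sketch only: assumes a generic scaling law
# L(N, D, h) = E + A(h) / N**alpha + B(h) / D**beta,
# where A(h), B(h) are log-linear in the domain-weight vector h.
import numpy as np
from scipy.optimize import minimize


def predicted_loss(params, N, D, h):
    """Predicted loss under the assumed form E + A(h)/N^alpha + B(h)/D^beta."""
    k = h.shape[-1]
    E, alpha, beta = params[0], params[1], params[2]
    a, b = params[3:3 + k], params[3 + k:3 + 2 * k]
    A = np.exp(h @ a)  # domain-dependent coefficient for the model-size term
    B = np.exp(h @ b)  # domain-dependent coefficient for the data term
    return E + A / N**alpha + B / D**beta


def fit_scaling_law(runs, k):
    """Fit parameters from small-scale runs given as (N, D, h, observed_loss)."""
    def objective(params):
        return sum((predicted_loss(params, N, D, h) - loss) ** 2
                   for N, D, h, loss in runs)
    x0 = np.concatenate([[1.0, 0.3, 0.3], np.zeros(2 * k)])
    return minimize(objective, x0, method="Nelder-Mead",
                    options={"maxiter": 20000}).x


def optimal_mixture(params, N, D, k, n_candidates=2000):
    """Pick the domain-weight vector on the simplex minimizing predicted loss."""
    rng = np.random.default_rng(0)
    candidates = rng.dirichlet(np.ones(k), size=n_candidates)  # random simplex points
    losses = [predicted_loss(params, N, D, h) for h in candidates]
    return candidates[int(np.argmin(losses))]
```

In this sketch, the law's parameters are estimated from a handful of small (N, D) runs with varying mixtures h, and the fitted law is then queried at a larger target budget to select the mixture with the lowest predicted loss, mirroring the extrapolation-then-optimization procedure the abstract describes.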