Pretraining robust vision or multimodal foundation models (e.g., CLIP) relies on large-scale datasets that may be noisy, potentially misaligned, and have long-tail distributions. Prior works have shown promising results in augmenting datasets by generating synthetic samples. However, they only support domain-specific ad hoc use cases (e.g., either image or text only, but not both), and are limited in data diversity due to a lack of fine-grained control over the synthesis process. In this paper, we design a controllable image-text synthesis pipeline, CtrlSynth, for data-efficient and robust multimodal learning. The key idea is to decompose the visual semantics of an image into basic elements, apply user-specified control policies (e.g., remove, add, or replace operations), and recompose them to synthesize images or texts. The decompose-and-recompose design in CtrlSynth allows users to control data synthesis in a fine-grained manner by defining customized control policies that manipulate the basic elements. CtrlSynth leverages the capabilities of pretrained foundation models such as large language models or diffusion models to reason over and recompose the basic elements, so that synthetic samples are natural and composed in diverse ways. CtrlSynth is a closed-loop, training-free, and modular framework, making it easy to support different pretrained models. With extensive experiments on 31 datasets spanning different vision and vision-language tasks, we show that CtrlSynth significantly improves the zero-shot classification, image-text retrieval, and compositional reasoning performance of CLIP models.
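To make the decompose-apply-recompose loop described above concrete, here is a minimal sketch of how such a pipeline could be wired together. It is not the authors' actual implementation or API: `VisualTagger`, `compose_caption`, `generate`, and the `ControlPolicy` fields are hypothetical placeholders standing in for pretrained components (a tagging model, an LLM, and a diffusion model).

```python
# Minimal sketch (assumed interfaces, not the CtrlSynth codebase) of a
# decompose -> control -> recompose loop over basic visual elements.
from dataclasses import dataclass

@dataclass
class ControlPolicy:
    op: str                          # "add", "remove", or "replace"
    element: str                     # basic element, e.g. an object or attribute tag
    new_element: str | None = None   # only used by "replace"

def apply_policy(elements: list[str], policy: ControlPolicy) -> list[str]:
    """Apply one user-specified control policy to the decomposed elements."""
    if policy.op == "add":
        return elements + [policy.element]
    if policy.op == "remove":
        return [e for e in elements if e != policy.element]
    if policy.op == "replace":
        return [policy.new_element if e == policy.element else e for e in elements]
    raise ValueError(f"unknown op: {policy.op}")

def synthesize(image, policies, tagger, llm, diffusion):
    """Decompose an image into basic elements, edit them via policies, then
    recompose a caption (LLM) and an image (diffusion model) from the result."""
    elements = tagger.extract_elements(image)        # e.g. objects, attributes, relations
    for p in policies:
        elements = apply_policy(elements, p)
    new_text = llm.compose_caption(elements)          # recompose text from edited elements
    new_image = diffusion.generate(prompt=new_text)   # recompose image from the new caption
    return new_image, new_text
```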
- † Work done while at Apple
- ‡ Meta