Contrastive Language-Image Pre-training (CLIP) has been a celebrated method for training vision encoders to generate image/text representations facilitating various applications. Recently, CLIP has been widely adopted as the vision backbone of multimodal large language models (MLLMs) to connect image inputs for language interactions. The success of CLIP as a vision-language foundation model relies on aligning web-crawled noisy text annotations at image levels. Nevertheless, such criteria may become insufficient for downstream tasks in need of fine-grained vision representations, especially when region-level understanding is demanding for MLLMs. In this paper, we improve the localization capability of CLIP with several advances. We propose a pre-training method called Contrastive Localized Language-Image Pre-training (CLOC) by complementing CLIP with region-text contrastive loss and modules. We formulate a new concept, promptable embeddings, of which the encoder produces image embeddings easy to transform into region representations given spatial hints. To support large-scale pre-training, we design a visually-enriched and spatially-localized captioning framework to effectively generate region-text pseudo-labels at scale. By scaling up to billions of annotated images, CLOC enables high-quality regional embeddings for image region recognition and retrieval tasks, and can be a drop-in replacement of CLIP to enhance MLLMs, especially on referring and grounding tasks.
- ** Work done while at Apple
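The abstract mentions two mechanisms: a region-text contrastive loss that complements the image-level CLIP objective, and promptable embeddings that are transformed into region representations given spatial hints. The sketch below is a minimal illustration of one way such a setup could look, not the paper's released implementation; the module `RegionPrompter`, the function `region_contrastive_loss`, and the box-pooling interface are all assumptions introduced here for clarity.

```python
# Minimal sketch (illustrative assumptions, not the paper's code): a lightweight
# "prompter" pools patch embeddings from a CLIP-style vision encoder inside a
# box prompt to form a region embedding, which is then contrasted with its
# region-caption embedding via a symmetric InfoNCE loss.
import torch
import torch.nn.functional as F


class RegionPrompter(torch.nn.Module):
    """Turns promptable patch embeddings into a region embedding given a box hint."""

    def __init__(self, dim: int):
        super().__init__()
        # Light adapter on top of the frozen/shared image encoder output.
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, patch_embeds: torch.Tensor, boxes: torch.Tensor, grid: int) -> torch.Tensor:
        # patch_embeds: (B, grid*grid, D) patch tokens from the vision encoder
        # boxes: (B, 4) normalized (x1, y1, x2, y2) spatial hints in [0, 1]
        ys = (torch.arange(grid, device=boxes.device) + 0.5) / grid
        xs = (torch.arange(grid, device=boxes.device) + 0.5) / grid
        yy, xx = torch.meshgrid(ys, xs, indexing="ij")           # patch-center coordinates
        centers = torch.stack([xx.flatten(), yy.flatten()], -1)  # (grid*grid, 2)
        x1, y1, x2, y2 = boxes.unbind(-1)
        inside = (
            (centers[None, :, 0] >= x1[:, None]) & (centers[None, :, 0] <= x2[:, None])
            & (centers[None, :, 1] >= y1[:, None]) & (centers[None, :, 1] <= y2[:, None])
        ).float()                                                 # (B, grid*grid) mask of patches in the box
        weights = inside / inside.sum(-1, keepdim=True).clamp(min=1.0)
        region = torch.einsum("bn,bnd->bd", weights, patch_embeds)  # average-pool patches inside the box
        return F.normalize(self.proj(region), dim=-1)


def region_contrastive_loss(region_embeds: torch.Tensor,
                            region_text_embeds: torch.Tensor,
                            temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between region embeddings and their region-caption embeddings."""
    region_text_embeds = F.normalize(region_text_embeds, dim=-1)
    logits = region_embeds @ region_text_embeds.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```

In such a scheme, the region loss would be added to the standard image-level CLIP loss during pre-training, with region-caption pairs coming from the pseudo-labeling framework described in the abstract.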