Video Joint Embedding Predictive Architectures (V-JEPA) learn generalizable off-the-shelf video representations by predicting masked regions in latent space with an exponential moving average (EMA)-updated teacher. While EMA prevents representation collapse, it complicates scalable model selection and couples teacher and student architectures. We revisit masked-latent prediction and show that a frozen teacher suffices. Concretely, we (i) train a target encoder with a simple pixel-reconstruction objective under V-JEPA masking, then (ii) freeze it and train a student to predict the teacher's latents on masked regions. This leads to a two-stage, unregularized scheme that we refer to as SALT (Static-teacher Asymmetric Latent Training). SALT decouples optimization into pixel reconstruction (teacher) and masked latent prediction (student), increasing transparency, efficiency, and scalability while preserving the representation's ability to generalize under frozen evaluation. Empirically, our student models outperform recently proposed V-JEPA 2 encoders under frozen-backbone evaluation across diverse benchmarks. They are also more compute-optimal: at matched pretraining FLOPs, our method achieves higher probing accuracy, and its scaling curves dominate V-JEPA's accuracy-FLOPs Pareto frontier. Finally, we find that student quality is remarkably robust to teacher quality: high-performing students emerge even with small, sub-optimal teachers. This points to a compute budget allocation that should overwhelmingly favor the student. These results position SALT as a simple, scalable, and compute-efficient alternative to EMA-based self-distillation for video representation learning.
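As an illustrative aid, the following is a minimal PyTorch-style sketch of the two-stage recipe summarized above. The module names (`teacher`, `decoder`, `student`, `predictor`), the token/mask conventions, and the MSE losses are assumptions for exposition, not the paper's exact implementation.

```python
# Hypothetical sketch of SALT's two stages; shapes and losses are assumptions.
import torch
import torch.nn.functional as F

def stage1_teacher_step(teacher, decoder, tokens, mask, opt):
    # Stage 1: train the teacher with pixel reconstruction under V-JEPA-style masking.
    # `tokens`: (B, N, D) patchified video; `mask`: (N,) bool, True = masked position.
    latents = teacher(tokens[:, ~mask])          # encode visible tokens only
    recon = decoder(latents)                     # reconstruct pixels at masked positions
    loss = F.mse_loss(recon, tokens[:, mask])    # pixel-reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def latent_targets(teacher, tokens, mask):
    # The teacher is frozen after Stage 1: no EMA update, no gradients,
    # and its architecture is decoupled from the student's.
    return teacher(tokens)[:, mask]

def stage2_student_step(teacher, student, predictor, tokens, mask, opt):
    # Stage 2: student predicts the frozen teacher's latents at masked positions.
    targets = latent_targets(teacher, tokens, mask)
    context = student(tokens[:, ~mask])          # student sees unmasked context only
    preds = predictor(context)                   # predict latents of masked regions
    loss = F.mse_loss(preds, targets)            # masked latent prediction objective
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Because the teacher never updates in Stage 2, the two objectives can be optimized independently, which is what makes the scheme transparent and easy to scale.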
† Work performed while at Apple