The field of video generation has made remarkable advancements, yet there remains a pressing need for a clear, systematic recipe that can guide the development of robust and scalable models. In this work, we present a comprehensive study that systematically explores the interplay of model architectures, training recipes, and data curation strategies, culminating in a simple and scalable text-image-conditioned video generation method, named STIV. Our framework integrates image conditioning into a Diffusion Transformer (DiT) through frame replacement, while incorporating text conditioning via joint image-text conditional classifier-free guidance. This design enables STIV to perform both text-to-video (T2V) and text-image-to-video (TI2V) tasks simultaneously. Moreover, STIV can be easily extended to various applications, such as video prediction, frame interpolation, multi-view generation, and long video generation. Through comprehensive ablation studies on T2I, T2V, and TI2V, STIV demonstrates strong performance despite its simple design. An 8.7B model at 512 resolution achieves 83.1 on VBench T2V, surpassing both leading open- and closed-source models such as CogVideoX-5B, Pika, Kling, and Gen-3. The same-sized model also achieves a state-of-the-art result of 90.1 on the VBench I2V task at 512 resolution. By providing a transparent and extensible recipe for building cutting-edge video generation models, we aim to empower future research and accelerate progress toward more versatile and reliable video generation solutions.
- † University of California, Los Angeles
- ** Work done while at Apple