Use LTX Video standalone, or through LTX Studio, Lightricks' all-in-one platform for video production.
LTX Video is a family of open-source video generation models built on a transformer-based latent diffusion architecture. It supports text-to-video, image-to-video, keyframe animation, sequence conditioning, and video extension. The 2B distilled version offers faster-than-real-time generation, while the full 13B model delivers higher visual quality at longer, but still comparatively fast, runtimes.
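As a starting point, here is a minimal text-to-video sketch using the community Diffusers integration. The model id, resolution, and step count are illustrative and may differ between releases; check the current model card before copying values.

```python
# Minimal text-to-video sketch via the Diffusers integration of LTX Video.
# Model id and parameters are illustrative; they may change between releases.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

video = pipe(
    prompt="A drone shot of a coastline at sunset, waves rolling in",
    width=704,
    height=480,
    num_frames=161,   # roughly 6-7 seconds at 24 fps
    num_inference_steps=50,
).frames[0]

export_to_video(video, "coastline.mp4", fps=24)
```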
LTX Video is extremely fast, with some models capable of generating video faster than real-time playback. Exact speeds vary by model and hardware configuration.
LTX Video runs on a wide range of hardware, from consumer GPUs to cutting-edge data center accelerators like NVIDIA H100s and Google TPUs. While it's optimized for modern infrastructure to maximize performance, it's built to be accessible for independent creators and researchers as well.
Yes. The code is released under the Apache 2.0 license, and model weights are available under Lightricks' custom license (see full list here). Both are publicly available for use, research, and development. We invite the community to explore, extend, and contribute.
LTX Video natively supports video extension and keyframe-based generation, allowing you to create longer and more coherent scenes by extending videos forward or backward.
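A rough sketch of what keyframe conditioning looks like through the Diffusers integration follows. The LTXConditionPipeline and LTXVideoCondition names, and the model id, are version-dependent assumptions; consult the Diffusers documentation for your installed release.

```python
# Sketch of keyframe conditioning via Diffusers. Class names and the model
# id are assumptions that depend on your Diffusers version.
import torch
from diffusers import LTXConditionPipeline, LTXVideoCondition
from diffusers.utils import export_to_video, load_image

pipe = LTXConditionPipeline.from_pretrained(
    "Lightricks/LTX-Video-0.9.5", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Pin the first frame to an existing image; the model generates the rest.
keyframe = LTXVideoCondition(image=load_image("first_frame.png"), frame_index=0)

video = pipe(
    conditions=[keyframe],
    prompt="The camera slowly pushes in as the scene comes to life",
    width=768,
    height=512,
    num_frames=161,
    num_inference_steps=40,
).frames[0]

export_to_video(video, "extended.mp4", fps=24)
```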
Yes. LTX Video is designed for customization. Whether you're building for a specific style, domain, or application, you can easily fine-tune it using LoRA-based training and multi-GPU support. Ready to make it your own? Start with the official framework here.
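To show the gist of LoRA, here is a conceptual sketch using the generic peft library rather than the official LTX trainer; the toy model and target module names are placeholders.

```python
# Conceptual LoRA sketch with the generic `peft` library, not the official
# LTX-Video trainer. The model and target module names are placeholders.
import torch.nn as nn
from peft import LoraConfig, get_peft_model

# Stand-in for a transformer block; in practice you would wrap the
# LTX Video transformer and target its attention projections.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

config = LoraConfig(r=16, lora_alpha=16, target_modules=["0", "2"])
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the small LoRA adapters train
```

Because only the low-rank adapters receive gradients, fine-tuning fits on far less memory than full training, which is what makes per-style and per-domain customization practical.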
You can run the distilled model for fewer iterations (or steps) and still achieve results similar to the full model. This is because the distilled model is separately trained to replicate the behavior of the larger model using a technique called knowledge distillation, where a smaller "student" model learns to mimic the outputs of a larger "teacher" model. The result is a faster, lighter model that retains much of the performance of the original.
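To make the idea concrete, here is a toy distillation loop, a sketch of the technique only and not LTX Video's actual training code.

```python
# Toy knowledge-distillation loop: a small student learns to match the
# outputs of a frozen teacher. Illustrative only; not the LTX training code.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64)).eval()
student = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
opt = torch.optim.AdamW(student.parameters(), lr=1e-3)

for step in range(1000):
    x = torch.randn(32, 64)              # stand-in for noisy latents
    with torch.no_grad():
        target = teacher(x)              # teacher's prediction
    loss = nn.functional.mse_loss(student(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```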
The quantized version uses the same original model but compresses its weights into lower-precision formats, reducing memory usage and speeding up inference without retraining.
Both approaches make LTX Video easier to run on limited hardware, with only slight trade-offs in quality, depending on your use case.
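For intuition on the quantized variant, here is a minimal sketch of symmetric int8 weight quantization in plain PyTorch; it is illustrative only, and the released quantized checkpoints may use different formats or per-channel schemes.

```python
# Minimal symmetric int8 weight quantization: store weights as int8 plus a
# per-tensor scale, and dequantize on the fly. Illustrative only.
import torch

w = torch.randn(4096, 4096)              # a full-precision weight matrix
scale = w.abs().max() / 127.0            # map the widest value to int8 range
w_int8 = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)

w_dequant = w_int8.float() * scale       # reconstructed at inference time
print("max abs error:", (w - w_dequant).abs().max().item())
```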
Multiscale rendering is a structured pipeline in which a video is generated in multiple resolution stages: generation starts from a low-resolution latent representation and progressively refines it at higher resolutions, preserving the original structure, motion, and temporal coherence throughout. Rather than generating a high-resolution video all at once, each stage focuses on a different scale of information.
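A toy sketch of the control flow is below; the denoiser is a stub standing in for the real diffusion model, so only the multiscale structure is meaningful.

```python
# Toy multiscale loop: denoise at low resolution first, then upsample the
# latent and refine at higher resolution. The `denoise` stub stands in for
# the real model; only the control flow is illustrative.
import torch
import torch.nn.functional as F

def denoise(latent: torch.Tensor, steps: int) -> torch.Tensor:
    # Stand-in for iterative diffusion sampling at this resolution.
    for _ in range(steps):
        latent = latent - 0.1 * latent   # placeholder update
    return latent

latent = torch.randn(1, 16, 8, 32, 32)   # low-res video latent (N, C, T, H, W)
latent = denoise(latent, steps=30)       # establish structure and motion

# Upsample spatially and refine: detail is added without redoing structure.
latent = F.interpolate(latent, scale_factor=(1, 2, 2), mode="trilinear")
latent = denoise(latent, steps=10)       # fewer steps: refine detail only
```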
LTX Studio is the future of storytelling, transforming imagination into reality with our AI-driven platform. We streamline the production process from scripting to final edits, making advanced storytelling tools accessible for creators of all levels. Designed for professionals yet intuitive enough for anyone, LTX Studio is where visions come to life, redefining the art of narrative.