- How does the AI Video Generator work?
- You enter a text prompt and the tool sends it to the currently selected model on HuggingFace. The model interprets your prompt and returns a generated video clip that appears directly in chat.
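Under the hood, this flow amounts to posting your prompt to an inference endpoint. As a rough sketch only: the endpoint URL, model placeholder, and JSON payload shape below are assumptions modeled on a typical HuggingFace-style inference request, not the tool's documented API.

```python
import json
import urllib.request

# Hypothetical endpoint; "<model-id>" stands in for whichever model
# is currently selected in the tool. The real request format may differ.
API_URL = "https://api-inference.huggingface.co/models/<model-id>"

def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Package a text prompt as a JSON POST to the selected model's endpoint."""
    body = json.dumps({"inputs": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("A cat surfing at sunset", "hf_xxx")
```

The response body would then be the generated video bytes, which the chat client decodes and renders in the preview player.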
- How do I write a better video prompt?
- Use clear details: subject, setting, action, camera movement, lighting, and style. Example: "A slow cinematic dolly shot through a neon city street at night, rain reflections, high contrast."
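If you generate prompts programmatically, the ingredients above can be composed with a small helper. This is an illustrative sketch; the function name and field order are our own, not part of the tool.

```python
def compose_prompt(subject, setting, action="", camera="", lighting="", style=""):
    """Join the prompt ingredients into one comma-separated prompt,
    skipping any that were left empty."""
    parts = [subject, setting, action, camera, lighting, style]
    return ", ".join(p for p in parts if p)

prompt = compose_prompt(
    subject="a neon city street",
    setting="at night",
    camera="slow cinematic dolly shot",
    lighting="rain reflections",
    style="high contrast",
)
```

Keeping each ingredient in its own slot makes it easy to vary one aspect (say, the camera movement) while holding the rest of the prompt fixed.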
- What video format do I get?
- Generated results are delivered as a downloadable video file (typically MP4) and shown in the chat preview player.
- How long does video generation take?
- Most generations complete within seconds to about a minute, depending on queue load and prompt complexity.
- Which model is selected by default?
- By default, this tool starts with Wan 2.2 14B Lightning. You can switch to LTX-2 Turbo Video anytime from the model dropdown in settings.
- Can I use generated videos commercially?
- In general, yes, but you are responsible for ensuring your use complies with licensing terms, local laws, and platform rules.
- Are my prompts private?
- Prompts are sent to HuggingFace-hosted inference endpoints for generation. We do not intend to store your generated media permanently.