- How does AI Image to Video work?
- Upload one image and enter a prompt describing movement, camera direction, and style. The tool sends both to the selected image-to-video model hosted on Hugging Face (Wan 2.2 by default) and returns the generated video in the chat.
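  Under the hood this amounts to a single request carrying the image and the prompt. Below is a minimal sketch assuming a hypothetical endpoint URL, field names, and response shape; the tool's actual internals are not documented here.

  ```python
  # Minimal sketch of the request flow. The endpoint URL, form field names,
  # and response shape are assumptions for illustration, not the tool's real API.
  import requests

  ENDPOINT = "https://example.endpoints.huggingface.cloud"  # hypothetical URL

  def generate_video(image_path: str, prompt: str) -> bytes:
      """Send one image plus a text prompt; return the raw video bytes."""
      with open(image_path, "rb") as f:
          response = requests.post(
              ENDPOINT,
              files={"image": f},       # the uploaded source image
              data={"prompt": prompt},  # motion / camera / style description
              timeout=120,
          )
      response.raise_for_status()
      return response.content  # assumed to be the encoded video
  ```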
- What makes a good image-to-video prompt?
- Describe subject motion, camera movement, scene atmosphere, and style. Example: "Slow cinematic push-in, soft wind in the hair, warm sunset light, realistic motion."
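  As a mental model, a strong prompt simply combines those four ingredients. The toy helper below (our own naming, not part of the tool) reproduces the example prompt above.

  ```python
  # Toy illustration (not part of the tool): assembling a prompt from the
  # four ingredients above. Parameter names are our own.
  def build_prompt(camera: str, subject_motion: str, atmosphere: str, style: str) -> str:
      return ", ".join([camera, subject_motion, atmosphere, style])

  prompt = build_prompt(
      camera="Slow cinematic push-in",
      subject_motion="soft wind in the hair",
      atmosphere="warm sunset light",
      style="realistic motion",
  )
  # -> "Slow cinematic push-in, soft wind in the hair, warm sunset light, realistic motion"
  ```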
- Which model is selected by default?
- Wan 2.2 14B Lightning is preselected the first time you open the page. You can switch to LTX-2 Turbo Video at any time from the model dropdown.
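  Conceptually, the dropdown maps to a default-plus-fallback lookup. The sketch below uses illustrative identifiers, not the tool's actual internal IDs.

  ```python
  # Sketch of the model choice as data; identifiers are illustrative.
  MODELS = {
      "wan-2.2-14b-lightning": "Wan 2.2 14B Lightning",
      "ltx-2-turbo-video": "LTX-2 Turbo Video",
  }
  DEFAULT_MODEL = "wan-2.2-14b-lightning"  # preselected on first page load

  def resolve_model(user_choice: str | None) -> str:
      # Fall back to the default until the user picks from the dropdown.
      return user_choice if user_choice in MODELS else DEFAULT_MODEL
  ```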
- Do I need both image and text prompt?
- Yes. This tool requires both an input image and a text prompt: the image provides the visual context, and the prompt controls the animation behavior.
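  Expressed as code, the rule is a simple pre-flight check; the function and message wording below are our own, not the tool's.

  ```python
  # The "both inputs required" rule as a simple validation step (illustrative).
  def validate_request(image_bytes: bytes | None, prompt: str | None) -> None:
      if not image_bytes:
          raise ValueError("An input image is required; it provides the visual context.")
      if not prompt or not prompt.strip():
          raise ValueError("A text prompt is required; it controls the animation.")
  ```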
- What output format do I get?
- The result is returned as a downloadable video file (typically MP4) and shown in the chat preview player.
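  Assuming the service hands back the encoded video as raw bytes, persisting it locally is a single binary write; the helper below is illustrative.

  ```python
  # Save returned video bytes (typically MP4) to disk; names are illustrative.
  from pathlib import Path

  def save_video(video_bytes: bytes, path: str = "result.mp4") -> Path:
      out = Path(path)
      out.write_bytes(video_bytes)  # write the MP4 payload to disk
      return out
  ```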
- How long does generation take?
- Most generations finish in seconds to about a minute, depending on queue load, prompt complexity, and settings.
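  Because duration varies with queue load, a client calling an endpoint directly might cap how long it waits and retry once on a transient timeout. The thresholds below are arbitrary assumptions, not the tool's real limits.

  ```python
  # Bounded-wait request with one retry; timeout and backoff are assumptions.
  import time
  import requests

  def post_with_retry(url: str, **kwargs) -> requests.Response:
      for attempt in range(2):  # allow one retry on a transient timeout
          try:
              return requests.post(url, timeout=90, **kwargs)  # cap the wait
          except requests.Timeout:
              if attempt == 1:
                  raise
              time.sleep(2)  # brief backoff before the second attempt
      raise RuntimeError("unreachable")
  ```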
- Are my uploads private?
- Your image and prompt are sent to Hugging Face-hosted inference endpoints for generation. We do not intend to store your generated media permanently.