
Qtum.ai Text to Video Launched
Qtum.ai Video is here
From image generation to conversational AI to a full-featured MCP, qtum.ai has been steadily delivering on its roadmap of AI-powered tools. Today, we're excited to announce the next milestone: video generation with sound, now live at qtum.ai/studio/video.
No subscription or credit card is required; simply log in with your Google account. There's no queue for video generation: enter your prompt, add your first frame, and click generate.
It comes complete with native audio. Whether you're building a product demo, a cinematic short, or a social media clip, Qtum.ai Video gives you professional-quality output from a simple text prompt.
What's Inside Qtum.ai Video
Cutting-Edge Models, Multiple Options
The studio supports a range of state-of-the-art video generation models, including:
• Seedance 1.0
• Seedance 1.5
• Seedance 2.0 (the flagship — more on this below)
• WAN models
You can generate videos across a variety of aspect ratios and resolutions, from vertical social media formats to widescreen cinematic outputs. The price for each model is quoted before you click generate.
And there are still more models to come… We are currently testing LTX 2.3. Stay tuned.
Video With Sound, No Post-Production Needed
Unlike earlier AI video tools that required manual audio layering, Qtum.ai Video generates audio alongside video in a single pass. Ambient soundscapes, synchronized effects, and music that follows the narrative rhythm, all included out of the box.
Easy Access: Google Login & Qtum MetaMask Snap
Getting started is simple. Users can authenticate via two methods:
• Standard Google login — sign in with your existing Google account for instant access.
• Qtum MetaMask Snap — Web3-native login coming this month, letting you authenticate directly with your Qtum wallet through the MetaMask Snap integration.
Payment is supported in $QTUM, with $USDC functionality currently in development.
Spotlight: Seedance 2.0
Seedance 2.0 is the flagship model available in Qtum.ai Video, and for good reason. It's a fully multimodal AI video model that goes far beyond "type a sentence, get a clip."
Standout capabilities include:
• Character consistency: stable faces, clothing, and styles across frames, with less character drift
• Camera replication: upload a reference video and the model replicates the camera choreography
• Multi-shot storytelling: build longer narratives with seamless shot transitions
• Native audio sync: dialogue, ambient sound, and beat-synced music generated in one pass
• Watermark-free output: clean, professional videos ready for immediate use
Example Prompts to Get You Started
Seedance 2.0 follows a clear formula: Subject + Action + Environment + Camera + Lighting/Mood + Style. Here are a few ready-to-use examples across different styles:
Cinematic / Narrative
A lone astronaut walks across an amber desert under twin moons. Camera slow lateral tracking. Cinematic sci-fi tone. 16:9, avoid temporal flicker.
Travel / Aerial
Aerial sunrise over coastal cliffs. Smooth drone movement forward. Warm golden-hour grading, atmospheric haze. Energetic travel intro feeling.
Product / Commercial
A close-up perfume bottle on reflective black glass. Slow camera dolly-in. Soft rim light, realistic liquid highlights. Premium ad style, 4K cinematic look.
Urban / Moody
A young woman walks through neon rainy streets at night. Handheld follow camera. Shallow depth of field, emotional film tone, subtle motion blur.
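The formula above can be sketched as a small prompt builder. This is an illustrative Python snippet, not part of any Qtum.ai API; the function and parameter names are made up for the example, and it simply assembles the six formula components into a single prompt string:

```python
# Illustrative sketch of the Seedance 2.0 prompt formula described above:
# Subject + Action + Environment + Camera + Lighting/Mood + Style.
# build_prompt and its parameters are hypothetical, not a Qtum.ai API.

def build_prompt(subject, action, environment, camera, mood, style):
    """Join the six formula components into one prompt string."""
    parts = [f"{subject} {action} {environment}", camera, mood, style]
    # Strip stray whitespace/periods, then join as short sentences.
    return ". ".join(p.strip().rstrip(".") for p in parts) + "."

prompt = build_prompt(
    subject="A lone astronaut",
    action="walks across an amber desert",
    environment="under twin moons",
    camera="Slow lateral tracking shot",
    mood="Cinematic sci-fi tone",
    style="16:9, avoid temporal flicker",
)
print(prompt)
```

Keeping each component as a separate field makes it easy to swap, say, the camera move or the style line while holding the rest of the prompt constant across generations.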
Try It Now:
Qtum.ai Video is live today. Head over to qtum.ai/studio/video, sign in with Google or your Qtum MetaMask Snap, and start generating.