Vidu Q3 Reference-to-Video

vidu/q3/reference-to-video

Vidu Q3 Reference-to-Video generates videos with consistent subjects from 1-4 reference images. It features intelligent camera switching with improved subject consistency across multiple camera positions, audio support, and resolutions up to 1080p.


Your request will cost $0.042 per run. For $10 you can run this model approximately 238 times.


Code Example

import os
import time

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
data = {
    "model": "vidu/q3/reference-to-video",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    # Reference image URLs (see the Upload Files section) are supplied
    # according to the model's input schema.
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}

generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for the result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": f"Bearer {API_KEY}"})
        result = response.json()
        status = result["data"]["status"]

        if status in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif status == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing; wait 2 seconds before polling again
            time.sleep(2)

video_url = check_status()

Install

Install the required package for your language.

bash
pip install requests

Authentication

All API requests require authentication via an API key. You can get your API key from the Atlas Cloud dashboard.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

HTTP Headers

python
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}

Keep your API key secure

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy instead.
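
For browser or mobile clients, one way to follow this advice is a small server-side proxy that holds the key and forwards generation requests. A minimal sketch assuming Flask (the proxy is our illustration, not an official Atlas Cloud component):

python
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
API_KEY = os.environ["ATLASCLOUD_API_KEY"]  # the key lives only on the server

@app.post("/generate")
def generate():
    # Forward the client's JSON payload; the API key never reaches the browser.
    resp = requests.post(
        "https://api.atlascloud.ai/api/v1/model/generateVideo",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        json=request.get_json(),
    )
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(port=8000)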


Submit a Request

Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.

POST/api/v1/model/generateVideo

Request Body

import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}

data = {
    "model": "vidu/q3/reference-to-video",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}

response = requests.post(url, headers=headers, json=data)
result = response.json()

print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")

Response

{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}

Check Status

Poll the prediction endpoint to check the current status of your request.

GET/api/v1/model/prediction/{prediction_id}

Polling Example

import os
import time

import requests

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")

    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break

    time.sleep(3)
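
Fixed-interval polling is fine for short jobs, but for longer generations you may want a timeout and gentle backoff. A minimal sketch (the retry policy below is our choice, not an API requirement):

python
import os
import time

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

def wait_for_prediction(prediction_id: str, timeout: float = 600.0) -> str:
    """Poll until the prediction finishes and return the first output URL."""
    url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
    headers = {"Authorization": f"Bearer {API_KEY}"}
    deadline = time.monotonic() + timeout
    delay = 2.0
    while time.monotonic() < deadline:
        data = requests.get(url, headers=headers, timeout=30).json()["data"]
        if data["status"] in ("completed", "succeeded"):
            return data["outputs"][0]
        if data["status"] == "failed":
            raise RuntimeError(data.get("error") or "Generation failed")
        time.sleep(delay)
        delay = min(delay * 1.5, 15.0)  # back off up to 15 s between polls
    raise TimeoutError(f"Prediction {prediction_id} did not finish in {timeout}s")

# Usage: video_url = wait_for_prediction("pred_abc123")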

Status Values

processing: The request is still being processed.
completed: Generation is complete. Outputs are available.
succeeded: Generation succeeded. Outputs are available.
failed: Generation failed. Check the error field.

Completed Response

{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
      "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}

Upload Files

Upload files to Atlas Cloud storage and get a URL you can use in your API requests. Use multipart/form-data to upload.

POST/api/v1/model/uploadMedia

Upload Example

import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")

Response

{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}
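
Because this model generates from 1-4 reference images, a typical workflow is to upload each local image first and pass the returned URLs in the generation request. A sketch of that flow; the reference_images field name is a hypothetical placeholder, since the input schema is not shown on this page:

python
import os

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]
BASE = "https://api.atlascloud.ai/api/v1/model"

def upload(path: str) -> str:
    """Upload a local file and return its hosted URL."""
    with open(path, "rb") as f:
        resp = requests.post(
            f"{BASE}/uploadMedia",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": (os.path.basename(path), f, "image/png")},
        )
    return resp.json()["data"]["download_url"]

# Upload 1-4 reference images of the subject.
urls = [upload(p) for p in ["subject_front.png", "subject_side.png"]]

data = {
    "model": "vidu/q3/reference-to-video",
    "prompt": "The subject walks along a beach at sunset",
    "reference_images": urls,  # hypothetical field name; check the input schema
}
resp = requests.post(
    f"{BASE}/generateVideo",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    json=data,
)
print(resp.json())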

Input Schema

The following parameters are accepted in the request body.


Example Request Body

json
{
  "model": "vidu/q3/reference-to-video"
}

Output Schema

The API returns a prediction response with the generated output URLs.

id (string, required)
Unique identifier for the prediction.

status (string, required)
Current status of the prediction. One of: processing, completed, succeeded, failed.

model (string, required)
The model used for generation.

outputs (array[string])
Array of output URLs. Available when status is "completed".

error (string)
Error message if status is "failed".

metrics (object)
Performance metrics.

metrics.predict_time (number)
Time taken for video generation, in seconds.

created_at (string, required)
ISO 8601 timestamp when the prediction was created. Format: date-time.

completed_at (string)
ISO 8601 timestamp when the prediction was completed. Format: date-time.

Example Response

json
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}
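
If you want typed access to this response in Python, here is a minimal TypedDict sketch of the schema above (our convenience, not an official SDK type):

python
from typing import TypedDict

class Metrics(TypedDict, total=False):
    predict_time: float  # seconds spent generating

class Prediction(TypedDict, total=False):
    id: str
    status: str          # processing | completed | succeeded | failed
    model: str
    outputs: list[str]   # present once status is completed/succeeded
    error: str           # present when status is failed
    metrics: Metrics
    created_at: str      # ISO 8601, e.g. "2025-01-01T00:00:00Z"
    completed_at: str    # ISO 8601

prediction: Prediction = {
    "id": "pred_abc123",
    "status": "completed",
    "model": "model-name",
    "outputs": ["https://storage.atlascloud.ai/outputs/result.mp4"],
}
print(prediction["status"])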

Atlas Cloud Skills

Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. One command to install, then use natural language to generate images, videos, and chat with LLMs.

Supported Clients

Claude Code
OpenAI Codex
Gemini CLI
Cursor
Windsurf
VS Code
Trae
GitHub Copilot
Cline
Roo Code
Amp
Goose
Replit
40+ supported clients

Install

bash
npx skills add AtlasCloudAI/atlas-cloud-skills

Setup API Key

Get your API key from the Atlas Cloud dashboard and set it as an environment variable.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

Capabilities

Once installed, you can use natural language in your AI assistant to access all Atlas Cloud models.

Image Generation: Generate images with models like Nano Banana 2, Z-Image, and more.
Video Creation: Create videos from text or images with Kling, Vidu, Veo, etc.
LLM Chat: Chat with Qwen, DeepSeek, and other large language models.
Media Upload: Upload local files for image editing and image-to-video workflows.

MCP Server

Atlas Cloud MCP Server connects your IDE with 300+ AI models via the Model Context Protocol. Works with any MCP-compatible client.

Supported Clients

Cursor
VS Code
Windsurf
Claude Code
OpenAI Codex
Gemini CLI
Cline
Roo Code
100+ supported clients

Install

bash
npx -y atlascloud-mcp

Configuration

Add the following configuration to your IDE's MCP settings file.

json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}

Available Tools

atlas_generate_image: Generate images from text prompts.
atlas_generate_video: Create videos from text or images.
atlas_chat: Chat with large language models.
atlas_list_models: Browse 300+ available AI models.
atlas_quick_generate: One-step content creation with auto model selection.
atlas_upload_media: Upload local files for API workflows.
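
Your MCP client issues these calls for you, but for reference, a raw Model Context Protocol tools/call request for one of the tools above might look like this (the argument names are illustrative, not taken from the server's published schema):

json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "atlas_generate_video",
    "arguments": {
      "model": "vidu/q3/reference-to-video",
      "prompt": "A beautiful sunset over the ocean"
    }
  }
}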


1. Introduction

Vidu Q3 is an advanced AI video generation model developed by Shengshu Technology (生数科技) in collaboration with Tsinghua University. Released on January 30, 2026, Vidu Q3 is designed to produce high-fidelity, synchronized audio-visual content with industry-leading continuous video length and native support for integrated audio generation.

The model represents a significant advancement in automated video synthesis by unifying multiple complex video generation tasks—such as lip-synced dialogue, dynamic camera movements, and multi-shot storytelling—into a single-pass framework. Leveraging a novel Transformer-based diffusion architecture, Vidu Q3 sets a new standard for cinematic and marketing video content creation with its combination of spatial-temporal coherence, multimodal input flexibility, and real-time directorial control.


2. Key Features & Innovations

  • Native Audio-Video Synchronization: Vidu Q3 generates lip-synced dialogue, sound effects, and background music simultaneously within a single pass, ensuring precise temporal alignment between audio tracks and visual lip movements without requiring post-processing.

  • Extended High-Definition Video Generation: Supports up to 16 seconds of continuous video at 1080p resolution and 24 frames per second—the longest continuous generation duration among leading competitors—enabling more complex storytelling sequences.

  • Smart Cuts for Scene Detection: Integrates automatic scene boundary detection and multi-shot narrative transitions, which facilitate the smooth generation of dynamic video scenes without manual intervention.

  • Native Camera Control: Allows frame-level directorial commands such as pans, push-ins, and tracking shots within the generation pipeline, granting users granular cinematic control over the resulting video composition.

  • Multimodal Input Flexibility: Accepts both text-to-video and image-to-video inputs with configurable start and end frame controls, enabling versatile use cases that range from scripted storyboarding to visual style transfer.

  • Transformer-based Diffusion Architecture with Spatiotemporal Attention: The underlying Universal Vision Transformer (U-ViT) utilizes spatiotemporal attention mechanisms instead of conventional convolutional U-Nets, improving motion consistency and temporal coherence across generated frames.

  • Model Variants Tailored for Fidelity and Speed: Offers differentiated configurations including Q3 Pro for maximum visual fidelity, Q3 Turbo optimized for higher generation speed, and the legacy Q2 Series focused on character consistency.


3. Model Architecture & Technical Details

Vidu Q3 is architected on the U-ViT (Universal Vision Transformer) framework, replacing traditional convolutional U-Net diffusion models with a Transformer-based diffusion approach. This design enables enhanced modeling of spatiotemporal dependencies essential for consistent video generation with coherent motion and scene dynamics.
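
Vidu Q3's internals are not public, but the factorized spatiotemporal attention idea can be sketched generically: attend over space within each frame, then over time at each spatial location. A toy PyTorch illustration (all shapes and module choices are our assumptions, not the actual architecture):

python
import torch
import torch.nn as nn

class FactorizedSpatioTemporalAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, space, dim) token grid for a video clip
        b, t, s, d = x.shape
        xs = x.reshape(b * t, s, d)                  # spatial attention per frame
        xs, _ = self.spatial(xs, xs, xs)
        x = xs.reshape(b, t, s, d)
        xt = x.transpose(1, 2).reshape(b * s, t, d)  # temporal attention per location
        xt, _ = self.temporal(xt, xt, xt)
        return xt.reshape(b, s, t, d).transpose(1, 2)

# Toy example: 2 clips, 16 frames, an 8x8 latent grid, 64-dim tokens
tokens = torch.randn(2, 16, 64, 64)
out = FactorizedSpatioTemporalAttention(64)(tokens)
print(out.shape)  # torch.Size([2, 16, 64, 64])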

The training utilized large-scale, multimodal datasets encompassing paired video, audio, and textual data to foster robust cross-modal understanding and synthesis. Multiple training stages refined resolution and temporal granularity, progressing toward 1080p, 24fps output over sequences up to 16 seconds.

Specialized modules incorporated include spatiotemporal attention layers for motion consistency and native audio-visual synchronization, alongside smart cut detection layers for automatic scene segmentation. The pipeline supports multimodal conditioning inputs (text and images) with frame-level temporal control allowing start and end frame specification.

Post-training refinement employed techniques such as supervised fine-tuning on domain-specific cinematic data and continuous evaluation on video generation benchmarks to optimize lip-sync accuracy and camera control responsiveness.


4. Performance Highlights

Vidu Q3 demonstrably leads in multiple benchmark categories, particularly continuous video length and audiovisual integration quality. It achieves an ELO rating of approximately 1220–1244 on the Artificial Analysis Video Arena, outperforming contemporaries such as Runway Gen-4.5 and Kling 2.5 Turbo.

Rank  Model            Developer                  ELO Score   Release Date
1     Sora 2           [Undisclosed]              ~1250+      Pre-2026
2     Vidu Q3          Shengshu Tech & Tsinghua   1220–1244   Jan 30, 2026
3     Runway Gen-4.5   Runway                     ~1200       2025
4     Kling 2.5 Turbo  Kling AI                   ~1190       Late 2025

Qualitatively, Vidu Q3 delivers superior cinematics including advanced native camera motion and scene transitions compared to Veo 3.1 and Grok Imagine, while maintaining better audio integration than Sora 2 and Kling 3.0. Its 16-second generation duration notably surpasses the typical 8-15 second range of competitors, allowing more complex narratives per generation.


5. Intended Use & Applications

  • Commercial Advertising: Produces 12-16 second product demonstration videos with synchronized audio and high realism, suitable for digital marketing campaigns.

  • Marketing Videos: Generates videos combining dialogue, sound effects, and background music tailored for brand storytelling and promotional content.

  • Cinematic Short-Form Storytelling: Enables filmmakers and content creators to automatically craft multi-shot video sequences with directorial camera control and scene transitions.

  • Social Media Content Creation: Facilitates rapid production of engaging social videos with lip-synced speech and dynamic visuals optimized for platform consumption.

  • Architectural Visualization: Visualizes architectural designs with realistic camera movements and synchronized ambient sounds enhancing presentation fidelity.

  • Educational Video Production: Supports creation of instructional content blending narrated audio with synchronized visual demonstrations and scene changes.
