
Veo3.1 Image-to-Video API by Google
Quickly animate static images into motion-rich, high-quality clips. Veo 3.1 Fast Image-to-Video accelerates rendering for fast previews and iterative visual storytelling.
Each run costs $0.20. $10 covers about 50 runs.
Code Example
import os
import time

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}
data = {
    "model": "google/veo3.1/image-to-video",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}
generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": f"Bearer {API_KEY}"})
        result = response.json()
        if result["data"]["status"] in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing; wait 2 seconds before polling again
            time.sleep(2)

video_url = check_status()
Installation
Install the required dependencies.
pip install requests
Authentication
All API requests must be authenticated with an API key. You can obtain one from the Atlas Cloud console.
export ATLASCLOUD_API_KEY="your-api-key-here"
HTTP Request Headers
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy.
Submit a Request
import os
import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]
url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
Submit a Request
Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and fetch the result.
/api/v1/model/generateVideo
Request Body
import os
import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]
url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
data = {
    "model": "google/veo3.1/image-to-video",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}
response = requests.post(url, headers=headers, json=data)
result = response.json()
print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")
Response
{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}
Check Status
Poll the prediction endpoint to check the current status of your request.
/api/v1/model/prediction/{prediction_id}
Polling Example
import os
import time

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]
prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": f"Bearer {API_KEY}"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")
    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break
    time.sleep(3)
Status Values
processing: the request is still being processed.
completed: generation finished; outputs are available.
succeeded: generation succeeded; outputs are available.
failed: generation failed; check the error field.
Completed Response
{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
      "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}
Upload Files
Upload a file to Atlas Cloud storage to get a URL you can use in API requests. Uploads use multipart/form-data.
/api/v1/model/uploadMedia
Upload Example
import os
import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]
url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": f"Bearer {API_KEY}"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")
Response
{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}
Input Schema
The following parameters are accepted in the request body.
No parameters are currently documented.
Example Request Body
{
  "model": "google/veo3.1/image-to-video"
}
Output Schema
The API returns a prediction response containing URLs for the generated outputs.
Example Response
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}
Atlas Cloud Skills
Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. Install with a single command, then generate images and videos and chat with LLMs using natural language.
Supported Clients
Installation
npx skills add AtlasCloudAI/atlas-cloud-skills
Set the API Key
Get an API key from the Atlas Cloud console and set it as an environment variable.
export ATLASCLOUD_API_KEY="your-api-key-here"
Features
Once installed, you can access all Atlas Cloud models from your AI assistant using natural language.
MCP Server
The Atlas Cloud MCP Server connects your IDE to 300+ AI models via the Model Context Protocol. Any MCP-compatible client is supported.
Supported Clients
Installation
npx -y atlascloud-mcp
Configuration
Add the following configuration to your IDE's MCP settings file.
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}
Available Tools
API Schema
Schema not available.
Google Veo 3.1 — Image-to-Video (I2V) Model
Veo 3.1 I2V is Google DeepMind’s latest image-to-video generation model — an evolution of Veo’s cinematic foundation. It transforms a single still image or a pair of start & end frames into a high-fidelity 1080p motion sequence with natural movement, realistic lighting, and synchronized contextual audio.
Perfect for storyboarding, concept animation, and creative scene development, Veo 3.1 I2V captures the feeling of camera motion and environmental change while preserving your image’s style and composition.
Why it stands out
- **Cinematic Motion Generation**: Animates still images with realistic subject and camera movement, from subtle pans to sweeping transitions.
- **Frame Interpolation**: Supports single-frame animation and two-frame transitions, letting you morph from one image to another with fluid continuity.
- **Native Audio Support**: Adds synchronized ambient sound, dialogue, or music automatically aligned with visual motion.
- **Contextual Understanding**: Interprets both image content and prompt text to guide scene flow and atmosphere.
- **High-Resolution Output**: Generates at 720p or 1080p, 24 FPS, and supports landscape (16:9) or portrait (9:16) aspect ratios.
Key Parameters
- prompt: describe the motion or story context (e.g., "Slow dolly zoom on a city skyline as sunset light fades").
- image: the starting frame (JPEG / PNG / WEBP).
- lastFrame (optional): an ending frame to create an interpolation-style transition.
- durationSeconds: video length (4s, 6s, or 8s).
- resolution: 720p or 1080p.
- aspectRatio: landscape (16:9) or portrait (9:16).
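Putting these parameters together, a request body might look like the sketch below. The exact nesting (the "input" wrapper and field names) follows the earlier request-body example and is an assumption; verify it against the live Input Schema before relying on it.

```python
import json

# Hypothetical request body combining the Key Parameters above.
# Image URLs would come from the uploadMedia endpoint.
payload = {
    "model": "google/veo3.1/image-to-video",
    "input": {
        "prompt": "Slow dolly zoom on a city skyline as sunset light fades",
        "image": "https://storage.atlascloud.ai/uploads/abc123/image.png",
        "lastFrame": "https://storage.atlascloud.ai/uploads/abc123/last.png",
        "durationSeconds": 8,
        "resolution": "1080p",
        "aspectRatio": "16:9",
    },
}
print(json.dumps(payload, indent=2))
```

Omit lastFrame for plain single-image animation; include it only when you want a two-frame interpolation-style transition.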
Pricing (Preview Stage)
| Model | Description | Input Type | Output | Price |
|---|---|---|---|---|
| Veo 3.1 (Video + Audio) | Generate videos with synchronized sound | Image / Image Pair | Video + Audio | $0.40 / sec |
| Veo 3.1 (Video only) | Generate silent motion sequences | Image / Image Pair | Video | $0.20 / sec |
Typical cost: ~$3.20 for an 8-second 1080p video with audio ($0.40/sec × 8s).
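The per-second rates above make cost easy to estimate up front. A small sketch of the arithmetic (mode names here are illustrative labels, not API values):

```python
# Rough cost estimator based on the preview pricing table above.
RATE_PER_SEC = {"video_audio": 0.40, "video_only": 0.20}

def estimate_cost(duration_seconds: int, mode: str = "video_audio") -> float:
    """Return the estimated price in USD for one generation."""
    if duration_seconds not in (4, 6, 8):
        raise ValueError("Supported durations are 4, 6, or 8 seconds")
    return duration_seconds * RATE_PER_SEC[mode]

print(estimate_cost(8))                # 8s with audio -> 3.2
print(estimate_cost(8, "video_only"))  # 8s silent -> 1.6
```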
How to Use
1. Upload your starting image. Use a clear, well-lit frame.
2. (Optional) Add a last frame. Provide an ending image if you want a smooth transition.
3. Write your prompt. Describe the motion or transformation (e.g., "camera slowly zooms out as night falls").
4. Set parameters. Choose duration (4s / 6s / 8s), resolution (720p / 1080p), and aspect ratio (16:9 or 9:16).
5. Generate the video. Submit your request; Veo 3.1 I2V produces motion, lighting, and audio automatically.
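The steps above can be sketched end to end in one script: upload the frame, build the request, submit, and poll until the video is ready. The field names under "input" mirror the Key Parameters list and are assumptions; confirm them against the live Input Schema.

```python
import os
import time

import requests

BASE = "https://api.atlascloud.ai/api/v1/model"

def auth_headers() -> dict:
    # API key is read from the environment, as in the authentication section.
    return {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

def upload_image(path: str) -> str:
    """Upload a local image and return its storage URL."""
    with open(path, "rb") as f:
        files = {"file": (os.path.basename(path), f, "image/png")}
        resp = requests.post(f"{BASE}/uploadMedia", headers=auth_headers(), files=files)
    resp.raise_for_status()
    return resp.json()["data"]["download_url"]

def build_payload(image_url: str, prompt: str, duration: int = 8) -> dict:
    # Assumed schema; check the live Input Schema before relying on it.
    return {
        "model": "google/veo3.1/image-to-video",
        "input": {
            "image": image_url,
            "prompt": prompt,
            "durationSeconds": duration,
            "resolution": "1080p",
            "aspectRatio": "16:9",
        },
    }

def generate_and_wait(payload: dict, poll_every: float = 3.0) -> str:
    """Submit the job, poll until it finishes, and return the output URL."""
    resp = requests.post(f"{BASE}/generateVideo", headers=auth_headers(), json=payload)
    resp.raise_for_status()
    pred_id = resp.json()["data"]["id"]
    while True:
        data = requests.get(f"{BASE}/prediction/{pred_id}", headers=auth_headers()).json()["data"]
        if data["status"] in ("completed", "succeeded"):
            return data["outputs"][0]
        if data["status"] == "failed":
            raise RuntimeError(data.get("error") or "Generation failed")
        time.sleep(poll_every)  # generation can take ~2-3 minutes

if __name__ == "__main__":
    image_url = upload_image("start_frame.png")
    payload = build_payload(image_url, "Camera slowly zooms out as night falls")
    print("Video URL:", generate_and_wait(payload))
```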
Pro Tips
- Use consistent framing between start and end images for smoother interpolation.
- Add camera verbs like "pan," "tilt," or "dolly" for cinematic control.
- Keep prompts concise and clear; focus on movement and lighting.
- For realistic transitions, limit drastic composition or color shifts between frames.
- To ensure repeatability, use the same random seed value.
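On the last tip: a seed field is not listed among the documented parameters, so the name used below is hypothetical; confirm it against the live Input Schema before use. The sketch just shows reusing one payload with a fixed seed across submissions.

```python
# "seed" is a hypothetical field name, not confirmed by the documented schema.
def with_seed(payload: dict, seed: int) -> dict:
    """Return a copy of a request payload with a fixed (hypothetical) seed."""
    return {**payload, "input": {**payload["input"], "seed": seed}}

base = {
    "model": "google/veo3.1/image-to-video",
    "input": {"prompt": "camera slowly pans right", "durationSeconds": 4},
}
# Two submissions built with the same seed are identical requests.
first = with_seed(base, 42)
second = with_seed(base, 42)
assert first == second
```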
Notes & Limitations
- Supported durations: 4, 6, or 8 seconds.
- Frame rate: 24 FPS (fixed).
- Generation time: ~2-3 minutes for 8s @ 1080p.






