kwaivgi/kling-v2.6-pro/motion-control

Kling 2.6 Pro Motion Control turns reference motion clips (dance, action, gesture) into smooth, realistic animations. Upload a character image (or source video) and a motion video; the model transfers the movement while preserving identity and temporal consistency.

Kling v2.6 Pro Motion Control
Image-to-Video · PRO



Code example

import requests
import time

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "kwaivgi/kling-v2.6-pro/motion-control",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}

generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": "Bearer $ATLASCLOUD_API_KEY"})
        result = response.json()

        if result["data"]["status"] in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing, wait 2 seconds
            time.sleep(2)

video_url = check_status()

Installation

Install the required package for your programming language.

bash
pip install requests

Authentication

All API requests must be authenticated with an API key. You can get one from the Atlas Cloud dashboard.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

HTTP headers

python
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
Keep your API key secure

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy instead.


Submit a request

Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.

POST /api/v1/model/generateVideo

Request body

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}

data = {
    "model": "kwaivgi/kling-v2.6-pro/motion-control",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}

response = requests.post(url, headers=headers, json=data)
result = response.json()

print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")

Response

{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}

Check status

Poll the prediction endpoint to check the current status of your request.

GET /api/v1/model/prediction/{prediction_id}

Polling example

import requests
import time

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = { "Authorization": "Bearer $ATLASCLOUD_API_KEY" }

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")

    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break

    time.sleep(3)

Status values

processing – The request is still being processed.
completed – Generation finished; outputs are available.
succeeded – Generation succeeded; outputs are available.
failed – Generation failed; check the error field.
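The status values above split into in-flight and terminal states, which is the distinction a polling loop actually needs. A minimal sketch (helper names are illustrative, not part of the API):

```python
# Sketch: classify the documented status values. The sets below mirror
# the table above; function names are my own, not API identifiers.
TERMINAL_STATES = {"completed", "succeeded", "failed"}
SUCCESS_STATES = {"completed", "succeeded"}

def is_terminal(status: str) -> bool:
    """True once polling can stop, whether the prediction succeeded or failed."""
    return status in TERMINAL_STATES

def is_success(status: str) -> bool:
    """True when outputs should be available."""
    return status in SUCCESS_STATES
```

A loop can then branch on `is_terminal(status)` first and `is_success(status)` second, instead of string-matching in several places.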

Completed response

{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
      "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}

Upload files

Upload files to Atlas Cloud storage to get a URL you can use in your API requests. Uploads use multipart/form-data.

POST /api/v1/model/uploadMedia

Upload example

import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = { "Authorization": "Bearer $ATLASCLOUD_API_KEY" }

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")

Response

{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}

Input schema

The following parameters can be used in the request body.

Total: 0 · Required: 0 · Optional: 0

No parameters available.

Example request body

json
{
  "model": "kwaivgi/kling-v2.6-pro/motion-control"
}

Output schema

The API returns a prediction response containing the generated output URLs.

id (string, required)
Unique identifier for the prediction.

status (string, required)
Current status of the prediction. One of: processing, completed, succeeded, failed.

model (string, required)
The model used for generation.

outputs (array[string])
Array of output URLs. Available when status is "completed".

error (string)
Error message if status is "failed".

metrics (object)
Performance metrics.

metrics.predict_time (number)
Time taken for video generation in seconds.

created_at (string, required, format: date-time)
ISO 8601 timestamp when the prediction was created.

completed_at (string, format: date-time)
ISO 8601 timestamp when the prediction was completed.

Example response

json
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}

Atlas Cloud Skills

Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. Install once and use natural language to generate images and videos or chat with LLMs.

Supported clients

Claude Code
OpenAI Codex
Gemini CLI
Cursor
Windsurf
VS Code
Trae
GitHub Copilot
Cline
Roo Code
Amp
Goose
Replit
40+ supported clients

Installation

bash
npx skills add AtlasCloudAI/atlas-cloud-skills

Set the API key

Get an API key from the Atlas Cloud dashboard and set it as an environment variable.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

Features

Once installed, you can access all Atlas Cloud models from your AI assistant using natural language.

Image generation – Generate images with models such as Nano Banana 2 and Z-Image.
Video creation – Create videos from text or images with Kling, Vidu, Veo, and more.
LLM chat – Chat with Qwen, DeepSeek, and other large language models.
Media upload – Upload local files for image-editing and image-to-video workflows.

MCP Server

The Atlas Cloud MCP Server connects your IDE to 300+ AI models via the Model Context Protocol. Any MCP-compatible client is supported.

Supported clients

Cursor
VS Code
Windsurf
Claude Code
OpenAI Codex
Gemini CLI
Cline
Roo Code
100+ supported clients

Installation

bash
npx -y atlascloud-mcp

Configuration

Add the following configuration to your IDE's MCP settings file.

json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}

Available tools

atlas_generate_image – Generate images from text prompts.
atlas_generate_video – Create videos from text or images.
atlas_chat – Chat with large language models.
atlas_list_models – Browse the 300+ available AI models.
atlas_quick_generate – One-step content creation with automatic model selection.
atlas_upload_media – Upload local files for API workflows.

API Schema

Schema not available


Kling v2.6 Pro Motion Control

Kling v2.6 Pro Motion Control is Kuaishou's advanced motion transfer model that animates a reference image by applying the movement from a reference video. Upload a character image and a motion clip (like a dance or action sequence), and the model extracts the motion path to generate smooth, realistic video where your subject performs those exact movements.

Key capabilities

  • Motion extraction and transfer – Upload a 3 to 30-second reference video showing any movement (dance, walk cycle, martial arts, gestures), and the model captures the full motion sequence frame-by-frame to apply it to your image.
  • Full-body motion accuracy – The system captures detailed movements including posture, limb positions, and complex actions, ensuring smooth and natural-looking animation even for fast or intricate sequences.
  • Flexible character orientation control – Choose whether the final video follows the reference image's aspect ratio and composition ("image" mode) or the reference video's framing ("video" mode), with duration limits adjusted accordingly.
  • Audio preservation option – Retain the original audio from your reference video or generate silent output, giving you control over the final soundscape.
  • Prompt-guided refinement – Use text prompts to adjust scene details, styling, lighting, and atmosphere while maintaining the core motion transfer from the reference video.

Parameters and how to use

  • image: (required) The reference image showing the subject you want to animate
  • video: (required) The reference video containing the motion sequence to transfer
  • character_orientation: (required) Controls output framing and duration limits
  • prompt: Text description to refine scene details, style, and atmosphere
  • keep_original_sound: Whether to preserve audio from the reference video
  • negative_prompt: Elements to avoid in the generated video
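Putting these parameters together, a request body might look like the sketch below. Nesting the fields under "input" follows the generic request-body example earlier on this page and is an assumption; the image and video URLs are hypothetical placeholders of the kind returned by the upload endpoint.

```python
import json

# Hedged sketch of a motion-control request body. The "input" nesting is
# assumed from the generic example above; the URLs are hypothetical
# placeholders (e.g. returned by the uploadMedia endpoint).
payload = {
    "model": "kwaivgi/kling-v2.6-pro/motion-control",
    "input": {
        "image": "https://storage.atlascloud.ai/uploads/abc123/character.png",
        "video": "https://storage.atlascloud.ai/uploads/abc123/dance.mp4",
        "character_orientation": "image",
        "prompt": "cinematic lighting, urban street background, golden hour",
        "keep_original_sound": True,
        "negative_prompt": "blurry, distorted, watermark, low quality",
    },
}

# Print the body that would be POSTed to /api/v1/model/generateVideo
print(json.dumps(payload, indent=2))
```

Note that the prompt describes scene styling only; the motion itself comes from the reference video, as explained below.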

How to use

  • Prompt

Describe the scene setting, visual style, lighting, and atmosphere you want while the motion is being transferred. The model will apply your reference video's movement to your reference image, so focus your prompt on environmental details rather than the action itself.

Example: "cinematic lighting, shallow depth of field, urban street background, golden hour, film grain"

Media requirements

Images

  • Max file size: 10 MB
  • Tip: Use clear, well-lit images showing the full subject for best motion transfer results

Videos

  • Duration limits depend on character_orientation setting (see below)

Other parameters

  • character_orientation – (required) Choose one:

    image – Output matches the reference image's framing and composition.
    video – Output matches the reference video's framing and composition. The reference video can be up to 30 seconds.

  • keep_original_sound – Boolean, defaults to true:

    true – Preserve audio from the reference video.
    false – Generate silent video output.

  • negative_prompt – Optional text to specify unwanted elements like "blurry, distorted, watermark, low quality, flickering". Max 2,500 characters.
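These constraints can be checked client-side before submitting a request. A minimal sketch, using only the limits stated above (function and constant names are illustrative, not part of the API):

```python
# Client-side validation sketch for the documented parameter constraints.
# Names below are my own; only the limits come from this page.
VALID_ORIENTATIONS = {"image", "video"}
MAX_NEGATIVE_PROMPT_CHARS = 2500

def validate_motion_params(character_orientation: str, negative_prompt: str = "") -> None:
    """Raise ValueError if a parameter violates the documented limits."""
    if character_orientation not in VALID_ORIENTATIONS:
        raise ValueError(
            f"character_orientation must be one of {sorted(VALID_ORIENTATIONS)}"
        )
    if len(negative_prompt) > MAX_NEGATIVE_PROMPT_CHARS:
        raise ValueError(
            f"negative_prompt exceeds {MAX_NEGATIVE_PROMPT_CHARS} characters"
        )
```

Failing fast locally avoids spending a billed run on a request the API would reject.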

After you finish configuring the parameters, click Run, preview the result, and iterate if needed.

Pricing

Duration (s) | Billed Duration (s) | Total Price (USD)
-------------|---------------------|------------------
5            | 5                   | $0.560
10           | 10                  | $1.120
15           | 15                  | $1.680
30           | 30                  | $3.360
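The rows above work out to a flat rate of $0.112 per billed second ($0.560 / 5 s). A small cost helper, assuming that rate holds for any billed duration:

```python
# The pricing table implies a flat $0.112 per billed second
# ($0.560 / 5 s). This helper assumes that rate applies generally.
RATE_USD_PER_SECOND = 0.112

def estimate_cost(billed_seconds: int) -> float:
    """Estimated total price in USD for a given billed duration."""
    return round(RATE_USD_PER_SECOND * billed_seconds, 3)

for seconds in (5, 10, 15, 30):
    print(f"{seconds:>2} s -> ${estimate_cost(seconds):.3f}")
```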

Notes

Best practices:

  • For complex movements like dance or martial arts, use reference videos between 3 and 10 seconds showing clear, unobstructed motion
  • Ensure your reference image shows the subject in good lighting with minimal occlusion
  • Start with the default settings and use prompts primarily for scene styling rather than motion instructions
  • The model works best when the reference image subject and reference video subject are similar in type (e.g., both human characters)

Use cases:

  • Animate character illustrations with real dance choreography or action sequences
  • Create product demonstration videos by transferring human gestures to animated mascots
  • Generate character performance clips for storyboarding and concept work
  • Produce social media content by applying trending motion clips to custom characters

Related models:

  • Kling v2.6 Pro Image-to-Video – Generate videos from a single image with prompt-driven motion and optional native audio.
  • Kling v2.6 Pro Text-to-Video – Create videos entirely from text prompts with cinematic visuals and audio–video co-generation.
  • Kling Omni Video O1 Reference-to-Video – Maintain subject identity across frames using multi-reference inputs for character-consistent video generation.
