A speed-optimized text-to-video option that prioritizes lower latency while retaining strong visual fidelity. Ideal for iteration, batch generation, and prompt testing.

Each run costs $0.035. With $10 you can run it approximately 285 times.
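As a quick sanity check on that estimate (assuming a flat $0.035 per run with no minimum charge):

```python
cost_per_run = 0.035  # USD per generation, from the pricing note above
budget = 10.0         # USD

# Integer number of whole runs the budget covers
runs = int(budget // cost_per_run)
print(runs)  # → 285
```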
You can continue with:
```python
import os
import time

import requests

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}
data = {
    "model": "alibaba/wan-2.5/text-to-video",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}
generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for the result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": f"Bearer {API_KEY}"})
        result = response.json()
        status = result["data"]["status"]
        if status in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif status == "failed":
            raise Exception(result["data"].get("error") or "Generation failed")
        else:
            # Still processing; wait 2 seconds before polling again
            time.sleep(2)

video_url = check_status()
```

Install the required package for your programming language.
```shell
pip install requests
```

All API requests require authentication via an API key. You can obtain your API key from the Atlas Cloud dashboard.
```shell
export ATLASCLOUD_API_KEY="your-api-key-here"
```

```python
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}
```

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy.
```python
import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}",
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
```

Send an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.
`/api/v1/model/generateVideo`

```python
import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}",
}
data = {
    "model": "alibaba/wan-2.5/text-to-video",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}
response = requests.post(url, headers=headers, json=data)
result = response.json()
print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")
```

```json
{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}
```

Query the prediction endpoint to check the current status of your request.
`/api/v1/model/prediction/{prediction_id}`

```python
import os
import time

import requests

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")
    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break
    time.sleep(3)
```

| Status | Description |
|---|---|
| processing | The request is still being processed. |
| completed | Generation is complete. Outputs are available. |
| succeeded | Generation succeeded. Outputs are available. |
| failed | Generation failed. Check the error field. |

```json
{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
      "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}
```

Upload files to Atlas Cloud storage and get a URL that can be used in your API requests. Use multipart/form-data to upload.
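A note on the polling pattern above: it loops until a terminal status. In production you may want a bounded wait. A minimal sketch, where `poll_with_timeout` is a hypothetical helper (not part of the Atlas Cloud API) and the status values follow the ones documented for the prediction endpoint:

```python
import time

def poll_with_timeout(get_status, timeout=300, interval=3):
    """Poll get_status() until a terminal status or until timeout seconds elapse.

    get_status is any callable returning one of the documented statuses:
    'processing', 'completed', 'succeeded', or 'failed'.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("completed", "succeeded", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"Prediction did not finish within {timeout}s")

# Usage with a stubbed status source standing in for the HTTP call:
statuses = iter(["processing", "processing", "succeeded"])
print(poll_with_timeout(lambda: next(statuses), timeout=10, interval=0))  # → succeeded
```

In real code, `get_status` would wrap the `requests.get` call shown above and return `result["data"]["status"]`.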
`/api/v1/model/uploadMedia`

```python
import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")
```

```json
{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}
```

The following parameters are accepted in the request body.
No parameters available.

```json
{
  "model": "alibaba/wan-2.5/text-to-video"
}
```

The API returns a prediction response with the generated output URLs.
```json
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}
```

Atlas Cloud Skills integrates more than 300 AI models directly into your AI coding assistant. One command to install, then use natural language to generate images and videos and to chat with LLMs.

```shell
npx skills add AtlasCloudAI/atlas-cloud-skills
```

Get your API key from the Atlas Cloud dashboard and set it as an environment variable.

```shell
export ATLASCLOUD_API_KEY="your-api-key-here"
```

After installation, you can use natural language in your AI assistant to access all Atlas Cloud models.
The Atlas Cloud MCP Server connects your IDE to more than 300 AI models through the Model Context Protocol. It works with any MCP-compatible client.

```shell
npx -y atlascloud-mcp
```

Add the following configuration to your IDE's MCP configuration file.
```json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

Sound and Image, All in One Take
The revolutionary ByteDance AI model that generates perfectly synchronized audio and video simultaneously from a single unified process. Experience true native audio-visual generation with pinpoint-accurate lip sync in more than 8 languages.
Despite Google's recent price cuts, Veo 3 remains expensive overall. Wan 2.5 is lightweight and cost-effective, providing creators with more options while significantly reducing production costs.
With Wan 2.5, no separate voice recording or manual lip alignment is needed. Just provide a clear, structured prompt to generate complete videos with audio/voiceover and lip sync in one go - faster and simpler.
When prompts are in Chinese, Wan 2.5 reliably generates A/V synchronized videos. In contrast, Veo 3 often displays "unknown language" for Chinese prompts.
Wan 2.5 excels at character trait restoration, accurately presenting character appearance, expressions, and movement styles, making generated video characters more recognizable and personalized for enhanced storytelling and immersion.
Supports Studio Ghibli-style rendering, creating hand-painted watercolor textures and animation effects. Brings warm, dreamy visual experiences that enhance artistic appeal and storytelling depth.
Whether it's product launches, promotional campaigns, or brand marketing, Wan 2.5 helps you quickly generate high-quality videos, making creation easy and efficient.
Provides ideal content localization solutions for multinational companies, making creation easier and more efficient.
Creators can leverage Wan 2.5 to improve video production efficiency while ensuring high-quality output.
Wan 2.5 makes corporate training more efficient and engaging.
Wan 2.5 lets creativity flow without expensive equipment or actors - AI generates everything efficiently.
Transform creativity into reality without high costs - Wan 2.5 makes quality content production easy and economical.
Generate complete videos with synchronized audio, voiceover, and lip-sync in a single process
Supports simultaneous generation of two characters with synchronized actions, expressions, and lip-sync for natural interactions
High-quality video output with realistic character expressions and precise lip synchronization
Excellent support for Chinese prompts and reliable generation of multilingual content
Significantly lower costs compared to competitors while maintaining professional quality
Precisely recreates character appearance, expressions, and movement styles with high fidelity and personality
Supports various artistic styles including Studio Ghibli-inspired hand-painted watercolor textures
Perfect for dialogue scenes, interviews, or dual-person short films with natural audio-visual consistency
Discover the power of Wan 2.5 through these curated examples. From digital human lip-sync to dual character scenes, artistic rendering to character restoration - experience the possibilities.
A middle-aged man sitting at a wooden desk in a cozy study room, surrounded by bookshelves and a warm lamp glow. He opens an old book and reads aloud with a calm, deep voice: 'History teaches us more than just facts… it shows us who we are.' The room has subtle background sounds: pages turning, the faint ticking of a clock, and distant rain against the window.
A young couple sitting on a park bench during sunset. The woman leans her head on the man's shoulder. He whispers softly: 'No matter where we go, I'll always be here with you.' The sound includes the rustling of leaves, distant laughter of children playing, and the gentle hum of cicadas in the evening air.
A graceful ballerina with her hair in a messy bun, performing a powerful and emotional contemporary ballet routine. She is in a minimalist, dark art studio. Abstract patterns of light and shadow, projected from a hidden source, dance across her body and the surrounding walls, constantly shifting with her movements. The camera focuses on the tension in her muscles and the expressive gestures of her hands. A single, dramatic slow-motion shot captures her mid-air leap, with the light patterns swirling around her like a galaxy. Moody, artistic, high contrast.
Studio Ghibli-inspired anime style. A young girl with a straw hat lies peacefully in a sun-dappled magical forest, surrounded by friendly, glowing forest spirits (Kodama). A gentle breeze rustles the leaves of the giant, ancient trees. The air is filled with sparkling dust motes, illuminated by shafts of sunlight. The art style is soft, with a hand-painted watercolor texture. The scene feels serene, magical, and heartwarming.
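The curated prompts above share a structure: scene, action, optional dialogue, optional ambient audio. As a sketch, a small helper can assemble such a prompt; `build_prompt` is a hypothetical convenience, since the API itself takes a single plain-text prompt string:

```python
def build_prompt(scene, action, dialogue=None, audio=None):
    """Compose one text-to-video prompt from structured parts.

    Hypothetical helper for illustration: the model consumes the
    resulting plain string, as in the curated examples above.
    """
    parts = [scene, action]
    if dialogue:
        parts.append(f'Dialogue: "{dialogue}"')
    if audio:
        parts.append(f"Background sounds: {audio}")
    # End every part with a period and join into one paragraph.
    return " ".join(p.strip().rstrip(".") + "." for p in parts)

prompt = build_prompt(
    scene="A middle-aged man sitting at a wooden desk in a cozy study room",
    action="He opens an old book and reads aloud with a calm, deep voice",
    dialogue="History teaches us more than just facts",
    audio="pages turning, the faint ticking of a clock",
)
print(prompt)
```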
Join filmmakers, advertisers, and creators around the world who are revolutionizing video content creation with the innovative technology of Seedance 1.5 Pro.
| Field | Description |
|---|---|
| Model Name | Wan 2.5 |
| Developed By | Alibaba Group |
| Release Date | September 24, 2025 |
| Model Type | Generative AI, Video Foundation Model |
| Related Links | Official Website: https://wan.video/, Hugging Face: https://huggingface.co/Wan-AI, Technical Paper (Wan Series): https://arxiv.org/abs/2503.20314 |
Wan 2.5 is a state-of-the-art, open-source video foundation model developed by Alibaba's Wan AI team. It is designed to generate high-quality, cinematic videos complete with synchronized audio directly from text or image prompts. The model represents a significant advancement in the field of generative AI, aiming to lower the barrier for creative video production. Its core contribution lies in its ability to produce coherent, dynamic, and narratively consistent video clips with a high degree of realism and integrated audio-visual elements, such as lip-sync and sound effects, in a single, streamlined process.
Wan 2.5 introduces several key features that distinguish it from previous models and competitors, including native synchronized audio with lip-sync, dual-character scene generation, strong multilingual (notably Chinese) prompt support, and lower cost than comparable commercial models.
Wan 2.5 is built upon the Diffusion Transformer (DiT) paradigm, which has become a mainstream approach for high-quality generative tasks. The technical report for the Wan model series outlines a suite of innovations that contribute to its performance.
The architecture includes a novel Variational Autoencoder (VAE) designed for high-efficiency video compression, enabling the model to handle high-resolution video data effectively. The Wan series is available in multiple sizes to balance performance and computational requirements, such as the 1.3B and 14B parameter models detailed for Wan 2.2. The model was trained on a massive, curated dataset comprising billions of images and videos, which enhances its ability to generalize across a wide range of motions, semantics, and aesthetic styles.
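To make the compression idea concrete, here is a toy latent-grid calculation. The 4× temporal and 8×8 spatial downsampling factors are illustrative assumptions (typical for causal video VAEs), not confirmed numbers for Wan 2.5:

```python
# Illustrative downsampling factors for a causal video VAE
# (assumed for this sketch; see the Wan technical report for actual values).
t_down, h_down, w_down = 4, 8, 8
frames, height, width = 81, 480, 832  # an example 480p clip

# Causal video VAEs commonly encode the first frame separately,
# so the temporal latent length is 1 + (frames - 1) / t_down.
latent_t = (frames - 1) // t_down + 1
latent_h = height // h_down
latent_w = width // w_down
print(latent_t, latent_h, latent_w)  # → 21 60 104
```

The diffusion transformer then operates on this much smaller latent grid rather than on raw pixels, which is what makes high-resolution video generation tractable.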
Wan 2.5 is designed for a wide array of applications in creative and commercial fields. Its intended uses include marketing and brand videos, multilingual content localization, creator and short-film workflows, and corporate training content.
Wan 2.5 has demonstrated significant performance improvements over previous versions and holds a competitive position against other leading video generation models. Independent reviews and benchmarks provide insight into its capabilities.
A review conducted by Curious Refuge Labs™ evaluated the model's visual generation capabilities across several metrics.
| Metric | Score (out of 10) |
|---|---|
| Prompt Adherence | 7.0 |
| Temporal Consistency | 6.6 |
| Visual Fidelity | 6.5 |
| Motion Quality | 5.9 |
| Style & Cinematic Realism | 5.7 |
| Overall Score | 6.3 |
These scores indicate strong prompt understanding and a notable improvement in visual quality from Wan 2.2, although it still shows limitations in complex motion and realism compared to top-tier commercial models.
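The overall score is consistent with a simple mean of the five metrics; this is an observation about the numbers in the table, not a claim about the reviewers' actual methodology:

```python
scores = {
    "Prompt Adherence": 7.0,
    "Temporal Consistency": 6.6,
    "Visual Fidelity": 6.5,
    "Motion Quality": 5.9,
    "Style & Cinematic Realism": 5.7,
}
overall = round(sum(scores.values()) / len(scores), 1)
print(overall)  # → 6.3
```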