A multimodal AI model API aggregation platform built for developers. One API gives you access to top models from around the world.

Stop juggling keys, SDKs, and provider-specific JSON. Atlas Cloud aggregates 300+ models — LLM, image, video, and audio — behind one OpenAI-compatible endpoint. We pull directly from official sources and verified cloud hubs, so the result is the real model, not a filtered clone. Swap the model string; the rest of your code stays identical.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.ATLASCLOUD_API_KEY,
  baseURL: "https://api.atlascloud.ai/v1",
});

const model = "moonshotai/kimi-k2.6";
const prompt = "Summarise this PDF in 3 bullets.";

const resp = await client.chat.completions.create({
  model,
  messages: [{ role: "user", content: prompt }],
});

console.log(resp.choices[0].message.content);
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": ["-y", "atlascloud-mcp"],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}

Drop this into any MCP-compatible client — Cursor, Windsurf, VS Code, Claude Desktop, Zed, JetBrains, Trae, Claude Code, Gemini CLI, Codex CLI, Goose and more.
Our platform already hosts 300+ models ready to run in production. You can call any of them with one line of code.
Drop one JSON block into Cursor, Claude Code, Claude Desktop, VS Code, Windsurf, Zed, JetBrains, Codex CLI, Gemini CLI, Goose or any other MCP-compatible client. No provider glue code.
Once the atlascloud MCP server is wired up, your agent can call any of Atlas Cloud's 300+ models from plain English. Mention Atlas Cloud by name so the agent routes through the MCP tool.
Use the Atlas Cloud MCP server to ask DeepSeek V3.2 to summarise this PDF in three bullet points.
Use Atlas Cloud to generate an image with Seedream v5.0 — a cyberpunk street market at rainy dusk, 1024x1024.
Call the Atlas Cloud MCP tool and create a 10s cinematic shot of a rocket launch at dawn with Seedance 2.0 at 1080p.
Via the Atlas Cloud MCP server, edit ~/photos/cat.jpg with Nano Banana 2 — add a wizard hat, keep composition identical.
Get running in minutes — follow the six steps below to go from a fresh account to a production integration.
Sign up at atlascloud.ai and verify your email to start exploring every model on the platform.
Everything you need to know before writing your first line of code.
No. The chat endpoint is OpenAI-compatible — point the OpenAI SDK (or any HTTP client) at api.atlascloud.ai/v1 and swap the model string. Streaming, tool use and function calling all work unchanged.
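Because the endpoint is OpenAI-compatible, any HTTP client works too. Here is a minimal sketch using plain fetch, assuming the standard /chat/completions route under the base URL above; the model id is the one from the quickstart and is illustrative only.

```typescript
// Pure helper: pull the reply text out of an OpenAI-style response body.
function extractContent(body: { choices: { message: { content: string } }[] }): string {
  return body.choices[0].message.content;
}

// Call Atlas Cloud's chat endpoint with no SDK, just fetch (Node 18+).
async function chat(prompt: string): Promise<string> {
  const res = await fetch("https://api.atlascloud.ai/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.ATLASCLOUD_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "moonshotai/kimi-k2.6", // swap for any model id from /models
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return extractContent(await res.json());
}
```

Switching providers is then just a different model string in the request body; the request and response shapes stay the same.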
Chat is synchronous. Image and video models run as async predictions: you POST to the submit endpoint and get a prediction id back, then GET the prediction endpoint with that id until status is succeeded. Poll roughly every 2 seconds — no webhooks required.
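The submit-then-poll flow might look like the sketch below. The /predictions routes and the non-terminal status names here are assumptions for illustration, not documented paths; check docs.atlascloud.ai for the real ones.

```typescript
// Status names other than "succeeded" are assumed for this sketch.
type PredictionStatus = "queued" | "processing" | "succeeded" | "failed";

// Pure helper: polling stops once the prediction reaches a terminal state.
function isTerminal(status: PredictionStatus): boolean {
  return status === "succeeded" || status === "failed";
}

async function runPrediction(model: string, input: object): Promise<unknown> {
  const headers = {
    Authorization: `Bearer ${process.env.ATLASCLOUD_API_KEY}`,
    "Content-Type": "application/json",
  };
  // 1. POST to the submit endpoint (assumed route); the response carries a prediction id.
  const submit = await fetch("https://api.atlascloud.ai/v1/predictions", {
    method: "POST",
    headers,
    body: JSON.stringify({ model, input }),
  });
  const { id } = await submit.json();
  // 2. GET the prediction roughly every 2 seconds until it is terminal.
  while (true) {
    const res = await fetch(`https://api.atlascloud.ai/v1/predictions/${id}`, { headers });
    const prediction = await res.json();
    if (isTerminal(prediction.status)) return prediction;
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}
```

A production version would also cap the number of polls and surface the "failed" state as an error rather than returning it.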
300+ models across LLM, image, video and audio — DeepSeek, Qwen, Kimi, GLM, Seedance, Seedream, Nano Banana and more. Browse the full catalogue at /models; the model id you copy there is the exact string to pass in the API call.
You pay per token or per prediction depending on modality — pricing shows on each model card. Default rate limits are generous and enough for most production workloads. If you need more, email [email protected] and we'll raise the cap for you.
Yes — one MCP config plugs Atlas Cloud into every major MCP-compatible client (Cursor, Windsurf, VS Code, Claude Desktop, Claude Code, Zed, JetBrains, Codex CLI, Gemini CLI, Goose and more). The agent can then call any Atlas Cloud model from plain English. A one-line Skills install works too.
Check docs.atlascloud.ai for reference and guides, or open a ticket from the console. For MCP and Skills issues, the AtlasCloudAI/mcp-server and AtlasCloudAI/atlas-cloud-skills repos on GitHub accept issues and PRs.
Join the Discord community for the latest model updates, prompts, and support.