Qwen LLM Models

Qwen is a family of large language models (LLMs) developed by Alibaba Cloud and designed for strong natural language understanding and text generation. Qwen models range from lightweight to very large and support multilingual capabilities, making them well suited for chatbots, content creation, coding assistance, and reasoning tasks. With an emphasis on open-source availability, strong performance, and flexible deployment, Qwen enables developers and enterprises to build efficient, cost-effective AI applications across diverse use cases.

Explore Key Models

Atlas Cloud offers the latest industry-leading models.

NEW
Qwen3 Max 20260123 (LLM)

Qwen3-Max is a flagship large language model designed for ultra-long context understanding, powerful reasoning, and high-performance text and code generation, making it well suited for complex, large-scale, and production-grade AI applications.

Context: 252.00K | Input: $1.2/M tokens | Output: $6/M tokens | Max output: 32.00K

HOT
Qwen3-VL-235B-A22B-Instruct (LLM)

A vision-language instruct model in the Qwen3-VL series with 235B total parameters (22B active).

Context: 131.07K | Input: $0.3/M tokens | Output: $1.5/M tokens | Max output: 32.77K

NEW
Qwen3 30B A3B Instruct 2507 (LLM)

Qwen's latest and most powerful open-source model.

Context: 131.07K | Input: $0.1/M tokens | Output: $0.3/M tokens | Max output: 131.07K

NEW | HOT
Qwen3 Next 80B A3B Thinking (LLM)

Qwen's latest and most powerful open-source model.

Context: 262.14K | Input: $0.15/M tokens | Output: $1.5/M tokens | Max output: 262.14K

NEW | HOT
Qwen3 Next 80B A3B Instruct (LLM)

Qwen's latest and most powerful open-source model.

Context: 262.14K | Input: $0.15/M tokens | Output: $1.5/M tokens | Max output: 32.77K

Qwen3 8B (LLM)

The latest Qwen reasoning model.

Context: 32.00K | Input: $0.05/M tokens | Output: $0.4/M tokens | Max output: 8.19K

Qwen3 235B A22B Thinking 2507 (LLM)

The latest Qwen reasoning model.

Context: 128.00K | Input: $0.28/M tokens | Output: $2.3/M tokens | Max output: 32.77K

Qwen3 VL 235B A22B Thinking (LLM)

The latest Qwen reasoning model.

Context: 128.00K | Input: $0.5/M tokens | Output: $2.5/M tokens | Max output: 32.77K

Qwen3 30B A3B (LLM)

The latest Qwen reasoning model.

Context: 40.96K | Input: $0.08/M tokens | Output: $1.25/M tokens | Max output: 32.77K

Qwen3 30B A3B Thinking 2507 (LLM)

The latest Qwen reasoning model.

Context: 128.00K | Input: $0.08/M tokens | Output: $0.4/M tokens | Max output: 32.77K

Qwen2.5 7B Instruct (LLM)

The latest Qwen model.

Context: 128.00K | Input: $0.04/M tokens | Output: $0.1/M tokens | Max output: 8.19K

NEW
Qwen3 32B (LLM)

The latest Qwen reasoning model.

Context: 40.96K | Input: $0.1/M tokens | Output: $1.2/M tokens | Max output: 40.96K

NEW | HOT
Qwen3 Coder (LLM)

Qwen3-Coder is the code-focused version of Qwen3.

Context: 262.14K | Input: $0.78/M tokens | Output: $3.8/M tokens | Max output: 65.54K

NEW | HOT
Qwen3-235B-A22B-Instruct-2507 (LLM)

235B-parameter MoE instruct model in the Qwen3 series.

Context: 131.07K | Input: $0.2/M tokens | Output: $0.88/M tokens | Max output: 32.77K
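
Pricing above is quoted per million tokens, so the cost of a request can be estimated directly from its input and output token counts. A quick sketch of the arithmetic in Python, using the Qwen3 30B A3B Instruct 2507 rates from the catalog ($0.1/M input, $0.3/M output):

def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the USD cost of one request, given prices per million tokens."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion on
# Qwen3 30B A3B Instruct 2507 ($0.1/M input, $0.3/M output).
print(f"${estimate_cost(2_000, 500, 0.1, 0.3):.6f}")  # $0.000350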

Features of Qwen LLM Models

Atlas Cloud offers the latest industry-leading models.

Real Commerce DNA

Trained on real global e-commerce and enterprise workflows from live production environments.

Built for Production

Developed using the real cloud-service APIs and SaaS tools that global enterprises rely on every day.

Writes Production-Grade Code

Fine-tuned on billions of lines of working code running in real cloud and automation systems.

Runs in Any Environment

Models at every scale for every device, from laptops to servers.

Truly Open License

Apache 2.0 with full freedom for commercial use, optimized for any use case.

Proven at Scale

Supports more than 100 million real users across live systems.

What You Can Do with Qwen LLM Models

Atlas Cloud offers the latest industry-leading models.

import os

import requests

url = "https://api.atlascloud.ai/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    # Read the API key from the environment rather than hard-coding it.
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}",
}
data = {
    "model": "",  # fill in the ID of the model you want to call
    "messages": [
        {
            "role": "user",
            "content": "what is difference between http and https",
        }
    ],
    "max_tokens": 32768,
    "temperature": 1,
    "stream": False,  # response.json() below expects one JSON body, not an SSE stream
}

response = requests.post(url, headers=headers, json=data)
print(response.json())
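
If you prefer token-by-token output, set "stream": True and read the response as server-sent events instead of a single JSON body. A minimal sketch of consuming the stream, assuming the endpoint follows the common OpenAI-style SSE format (data: {...} lines terminated by data: [DONE]):

import json
import os

import requests

response = requests.post(
    "https://api.atlascloud.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"},
    json={
        "model": "",  # fill in a model ID from the catalog above
        "messages": [{"role": "user", "content": "what is difference between http and https"}],
        "stream": True,
    },
    stream=True,  # let requests yield the body incrementally
)

for line in response.iter_lines():
    # SSE frames look like: b'data: {"choices": [{"delta": {"content": "..."}}]}'
    if not line or not line.startswith(b"data: "):
        continue
    payload = line[len(b"data: "):]
    if payload == b"[DONE]":
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0]["delta"].get("content") or ""
    print(delta, end="", flush=True)
print()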

Power enterprise chatbots and virtual agents with accurate, scalable answers

Generate and debug production-ready code across the full language stack

Support scientific and technical research with deep domain-specific understanding

Solve complex, multi-step problems in finance, logistics, operations, legal, and compliance

Automate high-volume content creation such as product descriptions and marketing copy

Deliver instantly auditable output for legal, compliance, and regulatory workflows

Why Use Qwen LLM Models on Atlas Cloud

Combining advanced Qwen models with Atlas Cloud's GPU-accelerated platform delivers unmatched performance, scalability, and developer experience.

Performance and Flexibility

Low latency:
GPU-optimized inference for real-time responses.

Unified API:
Run Qwen LLM models, GPT, Gemini, and DeepSeek through a single integration (see the sketch below).

Transparent pricing:
Predictable per-token billing with serverless options.
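
Because every hosted model sits behind the same chat completions endpoint, switching between Qwen, GPT, Gemini, or DeepSeek is just a change to the model field. A minimal sketch, assuming the standard OpenAI-style response shape (the model IDs below are illustrative placeholders, not confirmed Atlas Cloud IDs):

import os

import requests

def chat(model: str, prompt: str) -> str:
    """Send one user message to the shared chat completions endpoint."""
    response = requests.post(
        "https://api.atlascloud.ai/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
    )
    return response.json()["choices"][0]["message"]["content"]

# The same call works for any hosted model; only the ID changes.
# (Placeholder IDs -- use the model IDs from your Atlas Cloud dashboard.)
for model in ["qwen3-32b", "qwen3-coder"]:
    print(chat(model, "Summarize HTTP vs HTTPS in one sentence."))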

Enterprise and Scale

Developer experience:
SDKs, analytics, fine-tuning tools, and templates.

Reliability:
99.99% uptime, RBAC, and compliance-ready logging.

Security and compliance:
SOC 2 Type II, HIPAA compliance, and US data sovereignty.

Explore More Families

Seedance 2.0 Video Models

Seedance 2.0 (by ByteDance) is a multimodal video generation model that redefines "controllable creation," moving beyond the limitations of text or start/end frames. It supports quad-modal inputs—text, image, video, and audio—and introduces an industry-leading "Universal Reference" system. By precisely replicating the composition, camera movement, and character actions from reference assets, Seedance 2.0 solves critical issues with character consistency and physical coherence, empowering creators to act as true "directors" with deep control over their output.

View Family

Vidu Video Models

Vidu (by ShengShu Technology) is a foundational video model built on the proprietary U-ViT architecture, combining the strengths of Diffusion and Transformer models. It features superior semantic understanding and generation capabilities, producing coherent, fluid visuals that adhere to physical laws without the need for interpolation. With exceptional spatiotemporal consistency and a deep understanding of diverse cultural elements, Vidu empowers professional filmmakers and creators with a stable, efficient, and imaginative tool for video production.

View Family

MiniMax LLM Models

MiniMax is a large language model developed by MiniMax AI, focused on efficient reasoning, long-context understanding, and scalable text generation. It is designed for complex tasks such as dialogue systems, document analysis, content creation, and AI agents. With an emphasis on high performance at lower computational cost, MiniMax is well suited for enterprise applications and developer use cases where stability, efficiency, and cost control are important.

View Family

GLM LLM Models

GLM (General Language Model) is a large language model developed by ZAI (Zhipu AI) for text understanding, generation, and reasoning. It supports both Chinese and English and performs well in dialogue, content creation, translation, and code assistance. GLM is widely used in chatbots, enterprise AI systems, and developer applications due to its stable performance and versatility.

View Family

Moonshot LLM Models

Kimi is a large language model developed by Moonshot AI, designed for reasoning, coding, and long-context understanding. It performs well in complex tasks such as code generation, analysis, and intelligent assistants. With strong performance and efficient architecture, Kimi is suitable for enterprise AI applications and developer use cases. Its balance of capability and cost makes it an increasingly popular choice in the LLM ecosystem.

View Family

OpenAI Model Families

Explore OpenAI’s language and video models on Atlas Cloud: ChatGPT for advanced reasoning and interaction, and Sora-2 for physics-aware video generation.

View Family

Van Video Models

Van is a flagship video model family that retains the cinematic visuals and complex dynamics of its 3D VAE and Flow Matching architecture. By leveraging proprietary compute distillation, it breaks the "quality equals cost" barrier to deliver extreme inference speeds at ultra-low cost. This makes Van the premier engine for enterprises and developers seeking high-frequency, scalable video production on a budget.

View Family

Kling 3.0 Video Models

Kling AI Video 3.0 (by Kuaishou) is a groundbreaking model designed to bridge the worlds of sound and visuals through its unique Single-pass architecture. By simultaneously generating visuals, natural voiceovers, sound effects, and ambient atmosphere, it eliminates the disjointed workflows of traditional tools. This true audio-visual integration simplifies complex post-production, providing creators with an immersive storytelling solution that significantly boosts both creative depth and output efficiency.

View Family

Veo3.1 Video Models

Veo 3.1 (by Google) is a flagship generative video model that sets a new standard for cinematic AI by deeply integrating semantic capabilities to deliver cinematic visuals, synchronized audio, and complex storytelling in a single workflow. Distinguishing itself through superior adherence to cinematic terminology and physics-based consistency, it offers professional filmmakers an unparalleled tool for transforming scripts into coherent, high-fidelity productions with precise directorial control.

View Family

Sora-2 Video Models

The Sora-2 family from OpenAI is the next-generation video + audio generation model, enabling both text-to-video and image-to-video outputs with synchronized dialogue, sound effects, improved physical realism, and fine-grained control.

View Family

Nano Banana Image Models

Nano Banana is a fast, lightweight image generation model for playful, vibrant visuals. Optimized for speed and accessibility, it creates high-quality images with smooth shapes, bold colors, and clear compositions—perfect for mascots, stickers, icons, social posts, and fun branding.

View Family

Wan2.6 Video Models

Wan 2.6 is a next-generation AI video generation model from Alibaba’s Tongyi Lab, designed for professional-quality, multimodal video creation. It combines advanced narrative understanding, multi-shot storytelling, and native audio–visual synchronization to produce smooth 1080p videos up to 15 seconds long from text and reference inputs. Wan 2.6 also supports character consistency and role-guided generation, enabling creators to turn scripts into cohesive scenes with seamless motion and lip syncing. Its efficiency and rich creative control make it ideal for short films, advertising, social media content, and automated video workflows.

View Family
