
Qwen: Qwen3 235B A22B Thinking 2507

Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports up to 262,144 tokens of context.

Input cost: $0.13 per 1M tokens
Output cost: $0.60 per 1M tokens
Context window: 262,144 tokens
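The per-1M-token prices above translate directly into a per-request cost estimate. A minimal sketch (the helper name and the example token counts are illustrative, not part of the listing):

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price_per_m: float = 0.13,
                      output_price_per_m: float = 0.60) -> float:
    """Estimate a single request's cost from the listed per-1M-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# e.g. a 10,000-token prompt that produces a 2,000-token reply:
print(round(estimate_cost_usd(10_000, 2_000), 6))  # → 0.0025
```

At these rates, even a prompt that fills most of the 262,144-token window costs only a few cents of input.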
Developer ID: qwen/qwen3-235b-a22b-thinking-2507

Related Models

Qwen: Qwen3.5-9B ($0.10/1M, 262,144-token context)

Qwen3.5-9B is a multimodal foundation model from the Qwen3.5 family, designed to deliver s...
Qwen: Qwen3 Coder 30B A3B Instruct ($0.07/1M, 160,000-token context)

Qwen3-Coder-30B-A3B-Instruct is a 30.5B parameter Mixture-of-Experts (MoE) model with 128 ...
Qwen: Qwen3.5-35B-A3B ($0.16/1M, 262,144-token context)

The Qwen3.5 Series 35B-A3B is a native vision-language model designed with a hybrid archit...