
Qwen: Qwen3.5-Flash

The Qwen3.5-Flash models are native vision-language models built on a hybrid architecture that combines a linear attention mechanism with a sparse mixture-of-experts design, improving inference efficiency. Compared with the Qwen3 series, they deliver a marked step up on both pure-text and multimodal tasks, balancing fast response times against overall performance.

Input Cost: $0.10 per 1M tokens
Output Cost: $0.40 per 1M tokens
Context Window: 1,000,000 tokens
Developer ID: qwen/qwen3.5-flash-02-23
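At these rates, the cost of a single request is simple arithmetic: input tokens times the input rate plus output tokens times the output rate, divided by 1,000,000. A minimal sketch (the function name and the example token counts are illustrative, not part of any official SDK):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = 0.10, output_rate: float = 0.40) -> float:
    """Estimate the USD cost of one request at per-1M-token rates.

    Defaults match the Qwen3.5-Flash pricing above:
    $0.10/1M input tokens, $0.40/1M output tokens.
    """
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# e.g. a 12,000-token prompt with an 800-token completion:
cost = request_cost(12_000, 800)  # ≈ $0.00152
```

Note that even a maximal 1,000,000-token prompt costs only $0.10 at this input rate, which is what makes the large context window practical for bulk workloads.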

Related Models

Qwen: Qwen3 14B ($0.06/1M, 40,960-token context)

Qwen3-14B is a dense 14.8B parameter causal language model from the Qwen3 series, designed...
Qwen: Qwen VL Plus ($0.14/1M, 131,072-token context)

Qwen's Enhanced Large Visual Language Model. Significantly upgraded for detailed recogniti...
Qwen: Qwen2.5-VL 7B Instruct ($0.20/1M, 32,768-token context)

Qwen2.5 VL 7B is a multimodal LLM from the Qwen Team with the following key enhancements: ...