Qwen: Qwen3 8B
Qwen3-8B is a dense 8.2B parameter causal language model from the Qwen3 series, designed for both reasoning-heavy tasks and efficient dialogue. It supports seamless switching between "thinking" mode for math, coding, and logical inference, and "non-thinking" mode for general conversation. The model is fine-tuned for instruction-following, agent integration, creative writing, and multilingual use across 100+ languages and dialects. It natively supports a 32K token context window and can extend to 131K tokens with YaRN scaling.
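As a sketch of how the thinking/non-thinking switch is typically driven: the Qwen3 chat convention supports `/think` and `/no_think` soft-switch tags appended to a user turn. The helper below is illustrative, not an official API.

```python
# Illustrative helper: build a chat message list that opts a turn
# out of "thinking" mode using Qwen3's /no_think soft-switch tag.
def build_messages(user_prompt: str, thinking: bool = True) -> list[dict]:
    # Qwen3 honors /think and /no_think in the user message;
    # the most recent tag in the conversation takes effect.
    suffix = "" if thinking else " /no_think"
    return [{"role": "user", "content": user_prompt + suffix}]

# Reasoning-heavy request: leave thinking mode on (the default).
math_msgs = build_messages("Prove that sqrt(2) is irrational.", thinking=True)

# Quick conversational request: disable thinking for lower latency.
chat_msgs = build_messages("Say hello in French.", thinking=False)
```

The same toggle is often also exposed server-side (e.g. an `enable_thinking` flag in chat-template rendering), so check your serving stack before relying on prompt tags alone.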
qwen/qwen3-8b
Context Size: 40.96K tokens
Input Price: 307.5 Ks/M
Output Price: 2,460 Ks/M
Architecture
Text
Supported Parameters
frequency_penalty, include_reasoning, logit_bias, max_tokens, min_p, presence_penalty, reasoning, repetition_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_k, top_p
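To illustrate how these parameters fit together, here is a minimal sketch of a request body for an OpenAI-compatible chat completions endpoint serving this model. The parameter names mirror the supported list above; the endpoint URL, auth, and specific values are assumptions for illustration only.

```python
import json

# Hypothetical request body for qwen/qwen3-8b on an OpenAI-compatible
# endpoint; only a subset of the supported parameters is shown.
payload = {
    "model": "qwen/qwen3-8b",
    "messages": [
        {"role": "user", "content": "Summarize YaRN context scaling in one sentence."}
    ],
    "max_tokens": 512,        # must stay within the 8,192-token completion cap
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 20,
    "presence_penalty": 0.0,
    "seed": 42,               # for reproducible sampling where supported
    "stop": ["\n\n"],
}

body = json.dumps(payload)   # serialized JSON ready to POST
```

Sampling values here are placeholders; tune them per task, and keep `max_tokens` at or below the 8,192-token completion limit listed under Details.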
Details
Tokenizer: Qwen3
Instruct Type: qwen3
Max Completion: 8,192 tokens
Provider Context: 40.96K tokens
Moderated: No