Sargalay


Qwen: Qwen3 32B

Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for tasks like math, coding, and logical inference, and a "non-thinking" mode for faster, general-purpose conversation. The model demonstrates strong performance in instruction-following, agent tool use, creative writing, and multilingual tasks across 100+ languages and dialects. It natively handles 32K token contexts and can extend to 131K tokens using YaRN-based scaling.
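The thinking/non-thinking switch described above is typically controlled per request. A minimal sketch of how a client might toggle it, assuming an OpenAI-compatible chat endpoint and using the reasoning and include_reasoning fields from the Supported Parameters list below (the exact field semantics are an assumption, not confirmed by this page):

```python
# Hedged sketch: build chat payloads that toggle Qwen3's "thinking" mode via
# the reasoning / include_reasoning parameters. Endpoint behavior and field
# shapes are assumptions based on common OpenAI-compatible conventions.

def build_chat_payload(prompt, thinking=True, max_tokens=1024):
    """Assemble an OpenAI-compatible chat request for qwen/qwen3-32b."""
    return {
        "model": "qwen/qwen3-32b",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        # Enable the chain-of-thought ("thinking") trace for math, coding,
        # and logical inference; disable it for faster general dialogue.
        "reasoning": {"enabled": thinking},
        "include_reasoning": thinking,
    }

payload = build_chat_payload("Prove that sqrt(2) is irrational.", thinking=True)
```

The payload would then be POSTed to the provider's chat-completions route with the usual authorization header.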

qwen/qwen3-32b

Context Size

40.96K tokens

Input Price

492 Ks per 1M tokens

Output Price

1,476 Ks per 1M tokens
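Since input and output are billed at different per-million-token rates, the cost of a request is a simple weighted sum. A small sketch, assuming strictly linear pricing in Kyats (Ks) at the rates listed above:

```python
# Hedged sketch: estimate request cost from the listed rates
# (input 492 Ks per 1M tokens, output 1,476 Ks per 1M tokens),
# assuming linear pricing with no minimums or discounts.

INPUT_KS_PER_M = 492
OUTPUT_KS_PER_M = 1_476

def estimate_cost_ks(input_tokens, output_tokens):
    """Return the estimated cost in Ks for one request."""
    return (input_tokens * INPUT_KS_PER_M
            + output_tokens * OUTPUT_KS_PER_M) / 1_000_000

# e.g. a 10,000-token prompt with a 2,000-token completion:
cost = estimate_cost_ks(10_000, 2_000)  # 4.92 + 2.952 = 7.872 Ks
```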


Architecture

Text

Supported Parameters

frequency_penalty, include_reasoning, max_tokens, min_p, presence_penalty, reasoning, repetition_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_k, top_p
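The presence of response_format and structured_outputs suggests OpenAI-style JSON-schema constrained output alongside the usual sampling controls. A minimal sketch of a request combining them, assuming the common json_schema envelope (the exact wrapper shape is an assumption, not confirmed by this page):

```python
# Hedged sketch: combine supported sampling parameters (temperature, top_p)
# with an OpenAI-style response_format object for structured output.
# The json_schema envelope is assumed from common API conventions.

def json_schema_format(name, schema):
    """Wrap a JSON schema in an OpenAI-style response_format object."""
    return {
        "type": "json_schema",
        "json_schema": {"name": name, "strict": True, "schema": schema},
    }

request = {
    "model": "qwen/qwen3-32b",
    "messages": [{"role": "user",
                  "content": "Extract the city from: 'I live in Yangon.'"}],
    "temperature": 0.2,   # listed supported parameter
    "top_p": 0.9,         # listed supported parameter
    "response_format": json_schema_format(
        "city",
        {"type": "object", "properties": {"city": {"type": "string"}}},
    ),
}
```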

Details

Tokenizer: Qwen3
Instruct Type: qwen3
Max Completion: 40,960 tokens
Provider Context: 40.96K tokens
Moderated: No