Sargalay


Qwen: Qwen2.5 Coder 7B Instruct

Qwen2.5-Coder-7B-Instruct is a 7B parameter instruction-tuned language model optimized for code-related tasks such as code generation, reasoning, and bug fixing. Based on the Qwen2.5 architecture, it incorporates enhancements like RoPE, SwiGLU, RMSNorm, and GQA attention with support for up to 128K tokens using YaRN-based extrapolation. It is trained on a large corpus of source code, synthetic data, and text-code grounding, providing robust performance across programming languages and agentic coding workflows. This model is part of the Qwen2.5-Coder family and offers strong compatibility with tools like vLLM for efficient deployment. Released under the Apache 2.0 license.
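Since the description notes vLLM compatibility, a minimal deployment sketch follows. The Hugging Face model ID (`Qwen/Qwen2.5-Coder-7B-Instruct`) and the flags shown are assumptions; consult the vLLM documentation for your installed version and hardware.

```shell
# Install vLLM, then serve the model behind an OpenAI-compatible HTTP API.
# --max-model-len matches the 32,768-token provider context listed below.
pip install vllm

vllm serve Qwen/Qwen2.5-Coder-7B-Instruct \
  --max-model-len 32768 \
  --port 8000
```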

qwen/qwen2.5-coder-7b-instruct

Context Size

32,768 tokens

Input Price

184.5 Ks / M tokens

Output Price

553.5 Ks / M tokens


Architecture

Text

Supported Parameters

frequency_penalty, max_tokens, presence_penalty, repetition_penalty, response_format, structured_outputs, temperature, top_k, top_p
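The parameters above map directly onto the fields of a chat-completions request. A minimal sketch of such a request body, assuming the provider exposes an OpenAI-compatible API (the endpoint schema and the specific values are assumptions, not taken from this page):

```python
import json

# Build a chat-completions payload using the model slug from this page
# and the sampling parameters the listing says are supported.
payload = {
    "model": "qwen/qwen2.5-coder-7b-instruct",
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "max_tokens": 512,           # cap the completion length
    "temperature": 0.2,          # low temperature suits code generation
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "response_format": {"type": "text"},
}

# Serialize for an HTTP POST to the provider's /chat/completions endpoint.
body = json.dumps(payload)
```

Only the keys listed under Supported Parameters should be sent; unsupported fields may be rejected or silently ignored depending on the provider.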

Details

Tokenizer: Qwen
Provider Context: 32,768 tokens
Moderated: No