
Meta: Llama 4 Scout

Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17 billion parameters out of a total of 109B. It supports native multimodal input (text and image) and multilingual output (text and code) across 12 supported languages. Designed for assistant-style interaction and visual reasoning, Scout uses 16 experts per forward pass and features a context length of 10 million tokens, with a training corpus of ~40 trillion tokens. Built for high efficiency and local or commercial deployment, Llama 4 Scout incorporates early fusion for seamless modality integration. It is instruction-tuned for use in multilingual chat, captioning, and image understanding tasks. Released under the Llama 4 Community License, it was last trained on data up to August 2024 and launched publicly on April 5, 2025.

meta-llama/llama-4-scout
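
A minimal usage sketch, assuming the service exposes an OpenAI-compatible chat completions endpoint; the base URL, API key variable, and image URL below are placeholders, not part of this listing.

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",      # placeholder: replace with the provider's endpoint
    api_key=os.environ["PROVIDER_API_KEY"],     # placeholder environment variable
)

# Text + image request, matching the model's native multimodal input
response = client.chat.completions.create(
    model="meta-llama/llama-4-scout",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```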

Context Size: 327.68K tokens

Input Price: 492 Ks per 1M tokens

Output Price: 1,845 Ks per 1M tokens
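
Reading the prices above as Ks per one million tokens, a quick back-of-the-envelope cost estimate looks like this (the token counts in the example are arbitrary):

```python
# Prices in Ks per 1M tokens, as listed above
INPUT_PRICE_PER_M = 492
OUTPUT_PRICE_PER_M = 1845

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the approximate request cost in Ks."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 20,000-token prompt with a 1,000-token completion
print(f"{estimate_cost(20_000, 1_000):.2f} Ks")  # ~11.69 Ks
```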


Architecture

Input Modalities: Text, Image

Supported Parameters

frequency_penalty, max_tokens, min_p, presence_penalty, repetition_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_k, top_p
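
As a sketch of how these parameters might be combined in a single request, assuming an OpenAI-compatible request schema; all payload values below are illustrative only.

```python
import json

# Example request body exercising several of the listed parameters
payload = {
    "model": "meta-llama/llama-4-scout",
    "messages": [{"role": "user", "content": "List three facts about MoE models."}],
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "max_tokens": 1024,
    "frequency_penalty": 0.1,
    "stop": ["\n\n"],
    "seed": 42,
}

print(json.dumps(payload, indent=2))
```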

Details

Tokenizer: Llama4
Max Completion: 16,384 tokens
Provider Context: 327.68K tokens
Moderated: No