Sargalay


Meta: Llama 3.2 11B Vision Instruct

Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks that combine visual and textual data. It excels at tasks such as image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a massive dataset of image-text pairs, it performs well on complex, high-accuracy image analysis. Its ability to integrate visual understanding with language processing makes it well suited to industries requiring visual-linguistic AI applications, such as content creation, AI-driven customer service, and research. See the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md) for details. Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).

meta-llama/llama-3.2-11b-vision-instruct
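Since the model accepts both text and images, a request pairs the two in one user message. Below is a minimal sketch of how such a request body could be built, assuming an OpenAI-compatible chat-completions schema (the image URL is a placeholder, and the exact schema accepted by any given provider may differ):

```python
import json

MODEL_ID = "meta-llama/llama-3.2-11b-vision-instruct"

def build_vision_request(prompt: str, image_url: str) -> dict:
    """Build an OpenAI-style chat payload pairing a text prompt with an image URL."""
    return {
        "model": MODEL_ID,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vision_request(
    "Describe this image in one sentence.",
    "https://example.com/photo.jpg",  # placeholder image URL
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the provider's chat-completions endpoint with the usual authorization header.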

Context Size: 131,072 tokens

Input Price: 301.35 Ks/M tokens

Output Price: 301.35 Ks/M tokens


Architecture

Modalities: Text, Image

Supported Parameters

frequency_penalty, max_tokens, min_p, presence_penalty, repetition_penalty, response_format, seed, stop, temperature, top_k, top_p
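These parameters are passed at the top level of the request body alongside the model and messages. A minimal sketch, assuming an OpenAI-compatible request schema (the values and the stop sequence are illustrative, not recommendations):

```python
MODEL_ID = "meta-llama/llama-3.2-11b-vision-instruct"

# Sampling settings drawn from the supported-parameter list above;
# values are illustrative defaults only.
sampling = {
    "temperature": 0.7,        # softmax temperature
    "top_p": 0.9,              # nucleus sampling cutoff
    "top_k": 40,               # restrict sampling to the 40 most likely tokens
    "repetition_penalty": 1.1, # discourage repeated tokens
    "seed": 42,                # reproducible sampling, where supported
    "max_tokens": 1024,        # must stay within the 16,384-token completion cap
    "stop": ["END"],           # hypothetical stop sequence
}

request = {
    "model": MODEL_ID,
    "messages": [{"role": "user", "content": "Summarize Llama 3.2 Vision."}],
    **sampling,
}
print(request)
```

Unsupported parameters are typically ignored or rejected by the provider, so it is worth constraining requests to the list above.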

Details

Tokenizer: Llama3
Instruct Type: llama3
Max Completion: 16,384 tokens
Provider Context: 131,072 tokens
Moderated: No