Sargalay



LiquidAI: LFM2-8B-A1B

LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI's LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low, making it well suited to phones, tablets, and laptops.
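The efficiency claim above comes down to simple arithmetic: only a fraction of the total parameters participate in each token's forward pass. A quick sketch using the 8.3B total and ~1.5B active figures from the description (the ratio itself is derived here, not stated on the page):

```python
# Parameter counts from the model description above.
total_params = 8.3e9   # total parameters in the MoE model
active_params = 1.5e9  # parameters active per token (approximate)

# Fraction of the model actually exercised per token.
active_ratio = active_params / total_params
print(f"~{active_ratio:.0%} of parameters active per token")
```

Roughly 18% of the weights are used per token, which is why per-token compute is closer to a ~1.5B dense model than an 8B one, while the full 8.3B must still fit in memory.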

liquid/lfm2-8b-a1b

Context Size

32,768 tokens

Input Price

61.5 Ks / M tokens

Output Price

123 Ks / M tokens


Architecture

Text

Supported Parameters

frequency_penalty, max_tokens, min_p, presence_penalty, repetition_penalty, seed, stop, temperature, top_k, top_p
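The supported parameters above map onto an OpenAI-style chat-completion request body. A minimal sketch of such a payload, assuming an OpenAI-compatible endpoint; every field value below is illustrative, not a documented default for this model:

```python
import json

# Illustrative request body for liquid/lfm2-8b-a1b.
# All sampling values are example choices, not provider defaults.
payload = {
    "model": "liquid/lfm2-8b-a1b",
    "messages": [{"role": "user", "content": "Summarize MoE models in one sentence."}],
    "max_tokens": 256,           # cap on generated tokens
    "temperature": 0.7,          # sampling temperature
    "top_p": 0.9,                # nucleus sampling cutoff
    "top_k": 40,                 # restrict to the 40 most likely tokens
    "min_p": 0.05,               # drop tokens below 5% of the top token's probability
    "frequency_penalty": 0.0,    # penalize frequent tokens
    "presence_penalty": 0.0,     # penalize tokens already present
    "repetition_penalty": 1.1,   # multiplicative repetition penalty
    "seed": 42,                  # best-effort deterministic sampling
    "stop": ["\n\n"],            # stop sequences
}

print(json.dumps(payload, indent=2))
```

Only the parameters listed in this card should be expected to take effect; unsupported fields are typically ignored or rejected depending on the provider.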

Details

Tokenizer: Other
Provider Context: 32,768 tokens
Moderated: No