
MiniMax: MiniMax M1

MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It pairs a hybrid Mixture-of-Experts (MoE) architecture with a custom "lightning attention" mechanism, allowing it to process sequences of up to 1 million tokens while maintaining competitive FLOP efficiency. With 456 billion total parameters and 45.9 billion active per token, this variant is optimized for complex, multi-step reasoning tasks. Trained via a custom reinforcement learning pipeline (CISPO), M1 excels at long-context understanding, software engineering, agentic tool use, and mathematical reasoning. Benchmarks show strong performance across FullStackBench, SWE-bench, MATH, GPQA, and TAU-Bench, often outperforming other open models such as DeepSeek R1 and Qwen3-235B.

minimax/minimax-m1

Context Size

1,000K tokens

Input Price

2,460 Ks per 1M tokens

Output Price

13,530 Ks per 1M tokens
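
For a rough sense of what these rates mean in practice, the sketch below estimates the cost of a single request from its token counts. The rates are the per-million-token prices listed above; the function and the example counts are illustrative, not part of any SDK.

```python
# Illustrative cost estimate using the per-million-token rates above.
# Rates are in Ks (as listed on this page) per 1M tokens.
INPUT_PRICE_PER_M = 2_460
OUTPUT_PRICE_PER_M = 13_530

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost of one request, in Ks."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 10,000-token prompt with a 2,000-token completion.
print(f"{estimate_cost(10_000, 2_000):.2f} Ks")  # 51.66 Ks
```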


Architecture

Modality: Text

Supported Parameters

frequency_penalty, include_reasoning, max_tokens, presence_penalty, reasoning, repetition_penalty, seed, stop, temperature, tool_choice, tools, top_k, top_p
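
As a concrete illustration, here is a minimal request sketch exercising several of these parameters. It assumes the service exposes an OpenAI-compatible chat-completions endpoint; the base URL and API key are placeholders, and parameters outside the OpenAI SDK's signature (such as top_k and repetition_penalty) are passed through extra_body.

```python
# Minimal sketch of a chat completion against minimax/minimax-m1,
# assuming an OpenAI-compatible endpoint (base URL is a placeholder).
from openai import OpenAI

client = OpenAI(
    base_url="https://example.com/api/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                 # placeholder key
)

response = client.chat.completions.create(
    model="minimax/minimax-m1",
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    # Standard sampling parameters from the supported list:
    temperature=0.7,
    top_p=0.95,
    max_tokens=4_000,
    seed=42,
    # Parameters the OpenAI SDK does not define directly go via extra_body:
    extra_body={"top_k": 40, "repetition_penalty": 1.05},
)

print(response.choices[0].message.content)
```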

Details

Tokenizer: Other
Max Completion: 40,000 tokens
Provider Context: 1,000K tokens
Moderated: No
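
Because the provider caps completions at 40,000 tokens inside the 1,000K-token context window, a client may want to validate a request against both limits before sending it. The helper below is a hypothetical pre-flight check, not part of any SDK.

```python
# Hypothetical pre-flight check against the limits listed above.
PROVIDER_CONTEXT = 1_000_000   # 1,000K tokens
MAX_COMPLETION = 40_000        # max completion tokens

def validate_request(prompt_tokens: int, max_tokens: int) -> None:
    """Raise if the request would exceed the provider's listed limits."""
    if max_tokens > MAX_COMPLETION:
        raise ValueError(
            f"max_tokens {max_tokens} exceeds the {MAX_COMPLETION}-token completion cap"
        )
    if prompt_tokens + max_tokens > PROVIDER_CONTEXT:
        raise ValueError(
            f"prompt + completion ({prompt_tokens + max_tokens}) exceeds "
            f"the {PROVIDER_CONTEXT}-token context window"
        )

validate_request(prompt_tokens=950_000, max_tokens=40_000)  # OK: 990,000 <= 1,000,000
```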