MiniMax M2.1

An agent-focused LLM that brings top-tier autonomous capabilities into production. With 230B total parameters (10B active), an output speed of roughly 60 tokens/sec, and a 204,800-token context window, MiniMax-M2.1 is a reliable choice for agentic coding and automation tasks.
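The headline figures above lend themselves to some useful back-of-the-envelope arithmetic: how long a response takes to stream at the quoted speed, and how much of the context window remains after a large prompt. A minimal sketch, treating the quoted numbers as nominal (real throughput varies with hardware and load):

```python
# Nominal figures quoted for MiniMax-M2.1; actual values vary in practice.
CONTEXT_WINDOW = 204_800  # tokens
OUTPUT_SPEED = 60         # tokens per second

def generation_time_seconds(output_tokens: int) -> float:
    """Estimated wall-clock time to stream `output_tokens` at the nominal speed."""
    return output_tokens / OUTPUT_SPEED

def remaining_budget(prompt_tokens: int, reserved_output: int) -> int:
    """Tokens left in the context window after the prompt and reserved output."""
    return CONTEXT_WINDOW - prompt_tokens - reserved_output

# A 4,000-token agent response streams in about a minute,
# and a 150,000-token codebase prompt still leaves ample room for output.
print(round(generation_time_seconds(4_000)))  # ≈ 67 seconds
print(remaining_budget(150_000, 8_000))       # 46800 tokens to spare
```

Numbers like these matter for agentic workloads, where a single task may involve many long generation turns against a mostly-full context window.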

Beyond the model itself, MiniMax also introduced VIBE (Visual & Interactive Benchmark for Execution in Application Development), a benchmark that evaluates a model’s ability to build complete, functional applications “from zero to one.” VIBE covers five core areas (Web, Simulation, Android, iOS, and Backend) and MiniMax-M2.1 delivers strong results overall, with particularly high scores on VIBE-Web (91.5) and VIBE-Android (89.7).

Why you should use MiniMax-M2.1:

Note that MiniMax-M2.1 is released under a modified MIT license. The only restriction is that if you use the model (or a derivative work) in a commercial product, you must explicitly display the name “MiniMax M2.1” in the user interface.

Tags: ai   model   llm  

Last modified 22 March 2026