LFM2.5 is a new family of hybrid models designed for on-device deployment. It builds on the LFM2 architecture with extended pre-training and reinforcement learning.
Best-in-class performance: a 1.2B-parameter model that rivals much larger models, bringing high-quality AI to your pocket.
LFM2.5-1.2B-Thinking is a general-purpose text-only model with the following features:
Number of parameters: 1.17B
Number of layers: 16 (10 double-gated LIV convolution blocks + 6 GQA blocks)
Training budget: 28T tokens
Context length: 32,768 tokens
Vocabulary size: 65,536
Languages: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish
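The figures above can be sanity-checked with a few lines of arithmetic. The sketch below (using only the numbers quoted in this card) verifies that the block counts sum to the stated layer total and computes the training tokens seen per parameter; the comparison to the roughly 20 tokens-per-parameter compute-optimal ("Chinchilla") rule of thumb is an interpretive note, not a claim from this card.

```python
# All constants below are taken directly from the spec list above.
PARAMS = 1.17e9          # 1.17B parameters
TRAIN_TOKENS = 28e12     # 28T training tokens
CONV_BLOCKS = 10         # double-gated LIV convolution blocks
GQA_BLOCKS = 6           # GQA blocks

# The two block counts should sum to the stated 16 layers.
assert CONV_BLOCKS + GQA_BLOCKS == 16

# Training tokens per parameter: roughly 24k, far beyond the ~20
# tokens/param of compute-optimal scaling -- heavy over-training is
# typical for small models intended for on-device inference.
tokens_per_param = TRAIN_TOKENS / PARAMS
print(round(tokens_per_param))
```

The unusually high token-to-parameter ratio reflects a deliberate trade-off: extra pre-training compute is spent once so that a small, cheap-to-run model performs closer to larger ones at inference time.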
Last modified 22 March 2026