GTE Multilingual Base

HuggingFace

A compact yet high-performance embedding model from the GTE family, designed for multilingual retrieval and long-context text representation. It focuses on delivering strong retrieval accuracy while keeping hardware and inference requirements low, making it well suited for production RAG systems that need speed, scalability, and multilingual coverage without relying on large decoder-only models.
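A short sketch of what using the model for multilingual retrieval might look like. This is not from the page itself: it assumes the model is the Hugging Face checkpoint `Alibaba-NLP/gte-multilingual-base` loaded through the `sentence-transformers` library (which for this model requires `trust_remote_code=True`).

```python
import numpy as np


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def embed_and_compare(texts):
    """Encode texts with gte-multilingual-base and compare the first two.

    Downloads the model on first run, so it needs network access.
    """
    from sentence_transformers import SentenceTransformer

    # trust_remote_code=True is required because the checkpoint ships
    # custom modeling code on the Hugging Face Hub.
    model = SentenceTransformer(
        "Alibaba-NLP/gte-multilingual-base", trust_remote_code=True
    )
    # Normalized embeddings make cosine similarity a plain dot product.
    emb = model.encode(texts, normalize_embeddings=True)
    return cosine_similarity(emb[0], emb[1])


# Example call (not executed here, as it fetches the model):
# embed_and_compare(["The weather is lovely.", "Il fait beau aujourd'hui."])
```

With normalized embeddings, ranking documents against a query reduces to a dot product, which keeps inference-side retrieval cheap for RAG pipelines.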

Key features:

- Encoder-only architecture with roughly 305M parameters
- Long-context support up to 8,192 tokens
- Multilingual coverage of 70+ languages
- Dense 768-dimensional embeddings, with elastic (truncatable) dimensions and optional sparse vectors for hybrid retrieval

Resources

Articles, Blogs, Essays


Tags: ai, model, embedding

Last modified 07 May 2026