Adopt · Automate

nomic-embed-text

Local embedding model. Fast, free, good enough for production RAG.

nomic-embed-text is an open embedding model that runs locally via Ollama. The quality is competitive with commercial embedding APIs, and running it locally means zero per-token costs and no rate limits.

For RAG pipelines processing thousands of documents, the cost difference is substantial: commercial embedding APIs charge per token, while local inference with nomic-embed-text costs only electricity. We use it for all document embedding in our RAG stack, and its 768-dimensional vectors work well with Qdrant for semantic search.
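A minimal sketch of how this fits together, assuming a locally running Ollama server on its default port (11434) with nomic-embed-text pulled; the document-text strings are illustrative placeholders. Ollama's `/api/embeddings` endpoint returns the vector, and cosine similarity is the usual ranking metric before handing vectors to a store like Qdrant:

```python
import json
import math
import urllib.request

# Default Ollama endpoint; adjust if your server runs elsewhere.
OLLAMA_URL = "http://localhost:11434/api/embeddings"


def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    """Request a 768-dimensional embedding from a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Score two embeddings; higher means more semantically similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm


if __name__ == "__main__":
    # Requires `ollama pull nomic-embed-text` and a running Ollama server.
    docs = ["Qdrant stores dense vectors", "Cats sleep most of the day"]
    query = embed("vector database")
    for doc in docs:
        print(doc, cosine_similarity(query, embed(doc)))
```

In production you would upsert the vectors into a Qdrant collection instead of scoring in Python, but the embedding call and similarity math are the same.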

ai · embeddings · local · rag