langchain-localai is a third-party integration package for LocalAI. It provides a simple way to use LocalAI services in LangChain. The source code is available on GitHub.
Let’s load the LocalAI Embedding class. To use it, you need a LocalAI service hosted somewhere with the embedding models configured. See the documentation at localai.io/basics/getting_started/index.html and localai.io/features/embeddings/index.html.
pip install -U langchain-localai
from langchain_localai import LocalAIEmbeddings

# Point the client at your LocalAI instance and the embedding model configured there.
embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080", model="embedding-model-name"
)

text = "This is a test document."

# Embed a single query string, or a list of documents.
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
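
embed_query returns a single embedding vector, while embed_documents returns one vector per input text. As a quick sanity check (a minimal sketch; the exact dimensionality depends on the embedding model you configured in LocalAI):

# Each embedding is a plain list of floats.
print(len(query_result))  # dimensionality of the query embedding
print(len(doc_result))  # 1, one vector per input document
print(query_result[:5])  # first few components of the vector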

Legacy langchain-community LocalAIEmbeddings documentation

For proper compatibility, please ensure you are using the openai SDK at version 0.x.
Let’s load the legacy LocalAI Embedding class with an embedding model.
pip install -U langchain-community
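Because the legacy class targets the pre-1.0 openai SDK, you may also need to pin the client explicitly if your environment already has a newer release installed, for example:
pip install "openai<1"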
import os

from langchain_community.embeddings import LocalAIEmbeddings

# If you are behind an explicit proxy, set the OPENAI_PROXY environment
# variable to route requests through it.
os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"

embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080", model="embedding-model-name"
)

text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
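
Once constructed, the embeddings object can be used anywhere LangChain expects an Embeddings instance, such as a vector store. Here is a minimal sketch using the in-memory vector store from langchain-core (an illustration, assuming langchain-core is installed and your LocalAI service is reachable):

from langchain_core.vectorstores import InMemoryVectorStore

# Index a couple of texts with the LocalAI embeddings and run a similarity search.
vectorstore = InMemoryVectorStore(embedding=embeddings)
vectorstore.add_texts(["LocalAI can serve embedding models locally.", text])
results = vectorstore.similarity_search("test document", k=1)
print(results[0].page_content)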
