
Overview

Vector stores store embedded data and perform similarity search over it.

Interface

LangChain provides a unified interface for vector stores, exposing operations such as:
  • add_documents - add documents to the store.
  • delete - remove stored documents by ID.
  • similarity_search - query for semantically similar documents.
This abstraction lets you switch between implementations without changing your application logic.
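The three operations can be illustrated with a toy store (a simplified sketch for intuition only, not LangChain's actual base class; real implementations embed text and rank by vector similarity rather than word overlap):

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)


class ToyVectorStore:
    """Minimal stand-in mirroring the add/delete/search interface."""

    def __init__(self):
        self._docs = {}

    def add_documents(self, documents, ids):
        for doc, doc_id in zip(documents, ids):
            self._docs[doc_id] = doc

    def delete(self, ids):
        for doc_id in ids:
            self._docs.pop(doc_id, None)

    def similarity_search(self, query, k=4):
        # Toy "similarity": number of words shared with the query.
        def score(doc):
            return len(set(query.split()) & set(doc.page_content.split()))

        return sorted(self._docs.values(), key=score, reverse=True)[:k]


store = ToyVectorStore()
store.add_documents([Document("cats purr"), Document("dogs bark")], ids=["a", "b"])
results = store.similarity_search("do cats purr", k=1)
```

Application code written against these three methods works the same whether the store behind them is in-memory or a managed database.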

Initialization

To initialize a vector store, you must provide an embedding model:
from langchain_core.vectorstores import InMemoryVectorStore
vector_store = InMemoryVectorStore(embedding=SomeEmbeddingModel())

Adding documents

Add Document objects, each containing page_content and optional metadata:
vector_store.add_documents(documents=[doc1, doc2], ids=["id1", "id2"])

Deleting documents

Delete by specifying IDs:
vector_store.delete(ids=["id1"])

Similarity search

Use similarity_search to run a semantic query; the documents whose embeddings are closest are returned:
similar_docs = vector_store.similarity_search("your query here")
Many vector stores support parameters such as:
  • k - the number of results to return
  • filter - conditional filtering based on metadata

Similarity metrics and indexing

Embedding similarity can be computed using:
  • Cosine similarity
  • Euclidean distance
  • Dot product
Efficient search often relies on indexing methods such as HNSW (Hierarchical Navigable Small World), but the details vary by vector store.

Metadata filtering

Filtering by metadata (e.g., source, date) can refine search results:
vector_store.similarity_search(
  "query",
  k=3,
  filter={"source": "tweets"}
)
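A dict filter like the one above is typically interpreted as exact-match conditions on metadata keys. A plain-Python sketch of that semantics (operator support and exact behavior vary by store):

```python
def matches(metadata: dict, conditions: dict) -> bool:
    # Exact-match semantics: every condition key must equal the metadata value.
    return all(metadata.get(key) == value for key, value in conditions.items())


docs = [
    {"text": "launch day", "metadata": {"source": "tweets"}},
    {"text": "quarterly report", "metadata": {"source": "news"}},
]
hits = [d for d in docs if matches(d["metadata"], {"source": "tweets"})]
```

Many stores also support richer operators (ranges, `$in`, full boolean expressions); check the integration's documentation for its filter syntax.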

Top integrations

Select an embeddings model:
pip install -qU langchain-openai
import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
  os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")

from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
pip install -qU "langchain[azure]"
import getpass
import os

if not os.environ.get("AZURE_OPENAI_API_KEY"):
  os.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass("Enter API key for Azure: ")

from langchain_openai import AzureOpenAIEmbeddings

embeddings = AzureOpenAIEmbeddings(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
    openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
)
pip install -qU langchain-google-genai
import getpass
import os

if not os.environ.get("GOOGLE_API_KEY"):
  os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter API key for Google Gemini: ")

from langchain_google_genai import GoogleGenerativeAIEmbeddings

embeddings = GoogleGenerativeAIEmbeddings(model="models/gemini-embedding-001")
pip install -qU langchain-google-vertexai
from langchain_google_vertexai import VertexAIEmbeddings

embeddings = VertexAIEmbeddings(model="text-embedding-005")
pip install -qU langchain-aws
from langchain_aws import BedrockEmbeddings

embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v2:0")
pip install -qU langchain-huggingface
from langchain_huggingface import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
pip install -qU langchain-ollama
from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="llama3")
pip install -qU langchain-cohere
import getpass
import os

if not os.environ.get("COHERE_API_KEY"):
  os.environ["COHERE_API_KEY"] = getpass.getpass("Enter API key for Cohere: ")

from langchain_cohere import CohereEmbeddings

embeddings = CohereEmbeddings(model="embed-english-v3.0")
pip install -qU langchain-mistralai
import getpass
import os

if not os.environ.get("MISTRALAI_API_KEY"):
  os.environ["MISTRALAI_API_KEY"] = getpass.getpass("Enter API key for MistralAI: ")

from langchain_mistralai import MistralAIEmbeddings

embeddings = MistralAIEmbeddings(model="mistral-embed")
pip install -qU langchain-nomic
import getpass
import os

if not os.environ.get("NOMIC_API_KEY"):
  os.environ["NOMIC_API_KEY"] = getpass.getpass("Enter API key for Nomic: ")

from langchain_nomic import NomicEmbeddings

embeddings = NomicEmbeddings(model="nomic-embed-text-v1.5")
pip install -qU langchain-nvidia-ai-endpoints
import getpass
import os

if not os.environ.get("NVIDIA_API_KEY"):
  os.environ["NVIDIA_API_KEY"] = getpass.getpass("Enter API key for NVIDIA: ")

from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

embeddings = NVIDIAEmbeddings(model="NV-Embed-QA")
pip install -qU langchain-voyageai
import getpass
import os

if not os.environ.get("VOYAGE_API_KEY"):
  os.environ["VOYAGE_API_KEY"] = getpass.getpass("Enter API key for Voyage AI: ")

from langchain_voyageai import VoyageAIEmbeddings

embeddings = VoyageAIEmbeddings(model="voyage-3")
pip install -qU langchain-ibm
import getpass
import os

if not os.environ.get("WATSONX_APIKEY"):
  os.environ["WATSONX_APIKEY"] = getpass.getpass("Enter API key for IBM watsonx: ")

from langchain_ibm import WatsonxEmbeddings

embeddings = WatsonxEmbeddings(
    model_id="ibm/slate-125m-english-rtrvr",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="<WATSONX PROJECT_ID>",
)
pip install -qU langchain-core
from langchain_core.embeddings import DeterministicFakeEmbedding

embeddings = DeterministicFakeEmbedding(size=4096)
Select a vector store:
pip install -qU langchain-core
from langchain_core.vectorstores import InMemoryVectorStore

vector_store = InMemoryVectorStore(embeddings)
pip install -qU langchain-astradb
from langchain_astradb import AstraDBVectorStore

vector_store = AstraDBVectorStore(
    embedding=embeddings,
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    collection_name="astra_vector_langchain",
    token=ASTRA_DB_APPLICATION_TOKEN,
    namespace=ASTRA_DB_NAMESPACE,
)
pip install -qU langchain-azure-ai azure-cosmos
from langchain_azure_ai.vectorstores.azure_cosmos_db_no_sql import (
    AzureCosmosDBNoSqlVectorSearch,
)
vector_search = AzureCosmosDBNoSqlVectorSearch.from_documents(
    documents=docs,
    embedding=openai_embeddings,
    cosmos_client=cosmos_client,
    database_name=database_name,
    container_name=container_name,
    vector_embedding_policy=vector_embedding_policy,
    full_text_policy=full_text_policy,
    indexing_policy=indexing_policy,
    cosmos_container_properties=cosmos_container_properties,
    cosmos_database_properties={},
    full_text_search_enabled=True,
)
pip install -qU langchain-azure-ai pymongo
from langchain_azure_ai.vectorstores.azure_cosmos_db_mongo_vcore import (
    AzureCosmosDBMongoVCoreVectorSearch,
)

vectorstore = AzureCosmosDBMongoVCoreVectorSearch.from_documents(
    docs,
    openai_embeddings,
    collection=collection,
    index_name=INDEX_NAME,
)
pip install -qU langchain-chroma
from langchain_chroma import Chroma

vector_store = Chroma(
    collection_name="example_collection",
    embedding_function=embeddings,
    persist_directory="./chroma_langchain_db",  # Where to save data locally, remove if not necessary
)
pip install -qU langchain-community
import faiss
from langchain_community.docstore.in_memory import InMemoryDocstore
from langchain_community.vectorstores import FAISS

embedding_dim = len(embeddings.embed_query("hello world"))
index = faiss.IndexFlatL2(embedding_dim)

vector_store = FAISS(
    embedding_function=embeddings,
    index=index,
    docstore=InMemoryDocstore(),
    index_to_docstore_id={},
)
pip install -qU langchain-milvus
from langchain_milvus import Milvus

URI = "./milvus_example.db"

vector_store = Milvus(
    embedding_function=embeddings,
    connection_args={"uri": URI},
    index_params={"index_type": "FLAT", "metric_type": "L2"},
)
pip install -qU langchain-mongodb
from langchain_mongodb import MongoDBAtlasVectorSearch

vector_store = MongoDBAtlasVectorSearch(
    embedding=embeddings,
    collection=MONGODB_COLLECTION,
    index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,
    relevance_score_fn="cosine",
)
pip install -qU langchain-postgres
from langchain_postgres import PGVector

vector_store = PGVector(
    embeddings=embeddings,
    collection_name="my_docs",
    connection="postgresql+psycopg://..."
)
pip install -qU langchain-postgres
from langchain_postgres import PGEngine, PGVectorStore

pg_engine = PGEngine.from_connection_string(
    url="postgresql+psycopg://..."
)

vector_store = PGVectorStore.create_sync(
    engine=pg_engine,
    table_name="test_table",
    embedding_service=embeddings,
)
pip install -qU langchain-pinecone
from langchain_pinecone import PineconeVectorStore
from pinecone import Pinecone

pc = Pinecone(api_key=...)
index = pc.Index(index_name)

vector_store = PineconeVectorStore(embedding=embeddings, index=index)
pip install -qU langchain-qdrant
from qdrant_client.models import Distance, VectorParams
from langchain_qdrant import QdrantVectorStore
from qdrant_client import QdrantClient

client = QdrantClient(":memory:")

vector_size = len(embeddings.embed_query("sample text"))

if not client.collection_exists("test"):
    client.create_collection(
        collection_name="test",
        vectors_config=VectorParams(size=vector_size, distance=Distance.COSINE)
    )
vector_store = QdrantVectorStore(
    client=client,
    collection_name="test",
    embedding=embeddings,
)
Feature support matrix (per-store support for: Delete by ID, Filtering, Search by Vector, Search with score, Async, Passes Standard Tests, Multi Tenancy, IDs in add Documents) covering: AstraDBVectorStore, AzureCosmosDBNoSqlVectorStore, AzureCosmosDBMongoVCoreVectorStore, Chroma, Clickhouse, CouchbaseSearchVectorStore, DatabricksVectorSearch, ElasticsearchStore, FAISS, InMemoryVectorStore, Milvus, Moorcheh, MongoDBAtlasVectorSearch, openGauss, PGVector, PGVectorStore, PineconeVectorStore, QdrantVectorStore, Weaviate, SQLServer, ZeusDB.

All vector stores

  • Activeloop Deep Lake
  • Alibaba Cloud OpenSearch
  • AnalyticDB
  • Annoy
  • Apache Doris
  • ApertureDB
  • Astra DB Vector Store
  • Atlas
  • AwaDB
  • Azure Cosmos DB Mongo vCore
  • Azure Cosmos DB No SQL
  • Azure AI Search
  • Bagel
  • BagelDB
  • Baidu Cloud ElasticSearch VectorSearch
  • Baidu VectorDB
  • Apache Cassandra
  • Chroma
  • Clarifai
  • ClickHouse
  • Couchbase
  • DashVector
  • Databricks
  • IBM Db2
  • DingoDB
  • DocArray HnswSearch
  • DocArray InMemorySearch
  • Amazon Document DB
  • DuckDB
  • China Mobile ECloud ElasticSearch
  • Elasticsearch
  • Epsilla
  • Faiss
  • Faiss (Async)
  • FalkorDB
  • Gel
  • Google AlloyDB
  • Google BigQuery Vector Search
  • Google Cloud SQL for MySQL
  • Google Cloud SQL for PostgreSQL
  • Firestore
  • Google Memorystore for Redis
  • Google Spanner
  • Google Vertex AI Feature Store
  • Google Vertex AI Vector Search
  • Hippo
  • Hologres
  • Jaguar Vector Database
  • Kinetica
  • LanceDB
  • Lantern
  • Lindorm
  • LLMRails
  • ManticoreSearch
  • MariaDB
  • Marqo
  • Meilisearch
  • Amazon MemoryDB
  • Milvus
  • Momento Vector Index
  • Moorcheh
  • MongoDB Atlas
  • MyScale
  • Neo4j Vector Index
  • NucliaDB
  • Oceanbase
  • openGauss
  • OpenSearch
  • Oracle AI Vector Search
  • Pathway
  • Postgres Embedding
  • PGVecto.rs
  • PGVector
  • PGVectorStore
  • Pinecone
  • Pinecone (sparse)
  • Qdrant
  • Relyt
  • Rockset
  • SAP HANA Cloud Vector Engine
  • ScaNN
  • SemaDB
  • SingleStore
  • scikit-learn
  • SQLiteVec
  • SQLite-VSS
  • SQLServer
  • StarRocks
  • Supabase
  • SurrealDB
  • Tablestore
  • Tair
  • Tencent Cloud VectorDB
  • ThirdAI NeuralDB
  • TiDB Vector
  • Tigris
  • TileDB
  • Timescale Vector
  • Typesense
  • Upstash Vector
  • USearch
  • Vald
  • VDMS
  • Vearch
  • Vectara
  • Vespa
  • viking DB
  • vlite
  • Weaviate
  • Xata
  • YDB
  • Yellowbrick
  • Zep
  • Zep Cloud
  • ZeusDB
  • Zilliz

