LangChain Vector Stores: Efficient Retrieval

Vector stores enable efficient similarity search over embeddings, letting you store and retrieve documents by semantic meaning.

Popular Vector Stores

✅ Pinecone
✅ Chroma
✅ Weaviate
✅ FAISS
✅ Milvus

Example with Chroma

```python
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_texts(texts, embeddings)
results = vectorstore.similarity_search("query", k=3)
```

Benefits

✅ Fast similarity search
✅ Scalable
✅ Persistent
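To see what a vector store is doing under the hood, here is a minimal sketch of similarity search in plain Python (no LangChain; the tiny 2-dimensional "embeddings" and the `similarity_search` helper are purely illustrative):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similarity_search(query_vec, store, k=3):
    # store: list of (text, embedding) pairs; return the k closest texts.
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy corpus with hand-made 2-D embeddings for illustration.
store = [
    ("cats purr", [1.0, 0.0]),
    ("dogs bark", [0.0, 1.0]),
    ("kittens meow", [0.9, 0.1]),
]
print(similarity_search([1.0, 0.0], store, k=2))  # ['cats purr', 'kittens meow']
```

Real vector stores add indexing (e.g. HNSW) so this ranking does not require scanning every document.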

LangChain Text Splitters: Chunking Documents

Text splitters break documents into manageable chunks, sized for optimal LLM processing.

Splitter Types

CharacterTextSplitter: splits by characters
RecursiveCharacterTextSplitter: smart splitting on separators
TokenTextSplitter: splits by tokens
SentenceTextSplitter: splits by sentences

Example

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200
)
chunks = splitter.split_text(long_text)
```

Best Practices

✅ Use overlap to preserve context across chunk boundaries
✅ Match chunk size to model limits
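The effect of `chunk_size` and `chunk_overlap` can be illustrated with a naive character splitter in plain Python (a deliberately simplified stand-in; the real splitters also break on separators like newlines):

```python
def split_text(text, chunk_size, chunk_overlap):
    # Slide a fixed window across the text, stepping by
    # chunk_size - chunk_overlap so adjacent chunks share characters.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("abcdefghijklmnopq", chunk_size=10, chunk_overlap=3)
print(chunks)  # ['abcdefghij', 'hijklmnopq', 'opq']
```

Note how the last 3 characters of each chunk reappear at the start of the next one; that shared context is what overlap buys you.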

LangChain Document Loaders: Data Ingestion

Document loaders ingest data from various sources: files, APIs, and databases.

Supported Formats

✅ PDF
✅ Word documents
✅ CSV
✅ JSON
✅ HTML
✅ Databases

Example

```python
from langchain.document_loaders import PyPDFLoader, TextLoader

pdf_loader = PyPDFLoader("document.pdf")
pdf_docs = pdf_loader.load()

text_loader = TextLoader("file.txt")
text_docs = text_loader.load()
```

Custom Loaders

You can also create loaders for custom data sources.
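A custom loader boils down to producing documents with `page_content` plus `metadata`. Here is a minimal sketch of a CSV "loader" in plain Python that mirrors that shape with dicts (the function name and dict layout are illustrative, not a LangChain API):

```python
import csv
import io

def load_csv(source):
    # One "document" per row: page_content holds the row's fields,
    # metadata records provenance (here, the row index).
    docs = []
    reader = csv.DictReader(io.StringIO(source))
    for i, row in enumerate(reader):
        content = "\n".join(f"{k}: {v}" for k, v in row.items())
        docs.append({"page_content": content, "metadata": {"row": i}})
    return docs

data = "name,role\nAda,engineer\nGrace,admiral\n"
docs = load_csv(data)
print(docs[0]["page_content"])  # name: Ada\nrole: engineer
```

A real custom loader would subclass LangChain's loader base class and yield `Document` objects, but the core job is the same mapping shown here.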

LangChain Output Parsers: Structured Responses

Output parsers convert raw LLM text into structured data, giving you reliable, parseable outputs.

Parser Types

JsonParser: JSON output
PydanticParser: Pydantic models
ListParser: list output
DatetimeParser: date/time parsing

Example

```python
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

parser = PydanticOutputParser(pydantic_object=Person)
prompt = PromptTemplate(
    template="{query}\n{format_instructions}",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
```

Benefits

✅ Type-safe outputs
✅ Validation
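The core idea, parse then validate, can be shown without LangChain using only the standard library (the `Person` dataclass and `parse_person` helper are illustrative stand-ins for the Pydantic-based parser above):

```python
import json
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int

def parse_person(llm_text):
    # Parse a JSON blob from model output, then validate types
    # before handing the data to the rest of the program.
    raw = json.loads(llm_text)
    if not isinstance(raw.get("age"), int):
        raise ValueError("age must be an integer")
    return Person(name=raw["name"], age=raw["age"])

print(parse_person('{"name": "Ada", "age": 36}'))  # Person(name='Ada', age=36)
```

The validation step is what makes the output type-safe: malformed or mistyped model output fails loudly at the parser instead of corrupting downstream logic.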

LangChain Prompt Templates: Dynamic Prompts

Prompt templates create reusable, dynamic prompts and are the foundation of prompt engineering in LangChain.

Template Types

PromptTemplate: string templates
ChatPromptTemplate: chat-specific templates
FewShotPromptTemplate: templates with examples

Example

```python
from langchain.prompts import PromptTemplate

template = """
You are a {role}.
Task: {task}
Context: {context}
"""

prompt = PromptTemplate(
    template=template,
    input_variables=["role", "task", "context"]
)
```

Partial Variables

Pre-fill some variables so a template can be reused with only the remaining ones supplied later.
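The partial-variables idea is just closure over pre-filled values. A minimal plain-Python sketch (the `partial` helper here is illustrative, not LangChain's API):

```python
template = "You are a {role}.\nTask: {task}\nContext: {context}"

def partial(tmpl, **fixed):
    # Pre-fill some template variables; the rest are supplied per call.
    def fill(**rest):
        return tmpl.format(**fixed, **rest)
    return fill

# Fix the role once, reuse the template for many tasks.
reviewer_prompt = partial(template, role="code reviewer")
print(reviewer_prompt(task="review a diff", context="Python repo"))
```

In LangChain the equivalent is passing `partial_variables` to `PromptTemplate` (or calling `prompt.partial(...)`), which returns a template that only needs the remaining inputs.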

LangChain Chains: Combining Multiple Operations

Chains let you combine multiple LLM operations into complex workflows.

Chain Types

SimpleChain: single input/output
SequentialChain: multiple steps in sequence
RouterChain: conditional routing

Sequential Chain Example

```python
from langchain.chains import SequentialChain, LLMChain

chain1 = LLMChain(llm=llm, prompt=prompt1)
chain2 = LLMChain(llm=llm, prompt=prompt2)
overall_chain = SequentialChain(chains=[chain1, chain2])
```

Best Practices

✅ Keep chains focused
✅ Handle errors at each step
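A sequential chain is function composition: each step's output becomes the next step's input. A plain-Python sketch (the lambdas stand in for LLM calls; everything here is illustrative):

```python
def sequential_chain(*steps):
    # Pipe the output of each step into the next, like SequentialChain.
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

summarize = lambda text: text.split(".")[0]  # stand-in for an LLM "summarize" chain
shout = lambda text: text.upper()            # stand-in for a second chain
overall = sequential_chain(summarize, shout)
print(overall("Chains compose steps. Extra detail."))  # CHAINS COMPOSE STEPS
```

Keeping each step focused (one transformation per chain) makes the pipeline easy to test step by step.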

LangChain Tools: Extending AI Capabilities

Tools give LLMs access to external functions and APIs. You can create and use custom tools in LangChain.

Built-in Tools

✅ Web search
✅ Calculator
✅ Python REPL
✅ File operations

Custom Tool Example

```python
from langchain.tools import BaseTool

class WeatherTool(BaseTool):
    name = "weather"
    description = "Get weather for a location"

    def _run(self, location: str):
        # Call a weather API here and return the result
        ...
```
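Conceptually, a tool is a named function plus a description the model can read when deciding what to call. A minimal plain-Python registry sketch (the `register` decorator and dict layout are illustrative, not LangChain's API):

```python
tools = {}

def register(name, description):
    # Decorator that registers a function as a "tool". The description
    # is what an agent's LLM would read to pick the right tool.
    def wrap(fn):
        tools[name] = {"fn": fn, "description": description}
        return fn
    return wrap

@register("calculator", "Evaluate a simple arithmetic expression like '6 * 7'")
def calculator(expr):
    a, op, b = expr.split()
    return {"+": float(a) + float(b), "*": float(a) * float(b)}[op]

print(tools["calculator"]["fn"]("6 * 7"))  # 42.0
```

LangChain's `BaseTool` plays the same role: `name` and `description` guide tool selection, and `_run` is the function body that gets invoked.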

LangChain RAG: Retrieval-Augmented Generation

RAG combines document retrieval with LLM generation, letting you build powerful document-based Q&A systems.

RAG Components

1. Document Loader
2. Text Splitter
3. Embeddings
4. Vector Store
5. Retriever

Implementation

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

loader = TextLoader("document.txt")
documents = loader.load()

text_splitter = CharacterTextSplitter(chunk_size=1000)
texts = text_splitter.split_documents(documents)

vectorstore = Chroma.from_documents(texts, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
```
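The retrieve-then-prompt pattern at the heart of RAG can be sketched in plain Python. Word overlap stands in for embedding similarity here purely to keep the example self-contained; all names are illustrative:

```python
def retrieve(query, chunks, k=2):
    # Rank chunks by word overlap with the query (a crude stand-in
    # for embedding similarity) and keep the top k.
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, chunks):
    # Stuff the retrieved chunks into the prompt as grounding context.
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

chunks = [
    "LangChain chains compose LLM calls.",
    "Vector stores index embeddings.",
    "Paris is in France.",
]
print(build_prompt("what do vector stores index", chunks))
```

The resulting prompt, context plus question, is what gets sent to the LLM; the model answers from the retrieved text rather than from its parametric memory alone.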

LangChain Agents: Autonomous AI Assistants

Agents use an LLM to decide which actions to take, enabling autonomous AI assistants.

Agent Types

Zero-shot Agent: decides without examples
Conversational Agent: for chat applications
ReAct Agent: interleaves reasoning and acting

Creating an Agent

```python
from langchain.agents import initialize_agent, Tool
from langchain.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()
tools = [
    Tool(name="Search", func=search.run, description="Search the web")
]
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")
```
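One iteration of the agent loop is: the LLM picks an action, the runtime executes the matching tool, and the observation is fed back (or the loop ends with a final answer). A minimal plain-Python sketch, with a tuple standing in for the model's decision (all names here are illustrative):

```python
def agent_step(llm_decision, tools):
    # One ReAct-style iteration. llm_decision is a (action, input) pair
    # standing in for a real model call; "final_answer" ends the loop.
    action, tool_input = llm_decision
    if action == "final_answer":
        return tool_input
    observation = tools[action](tool_input)
    return observation

tools = {"search": lambda q: f"results for {q!r}"}
print(agent_step(("search", "langchain agents"), tools))
```

A real agent repeats this step, appending each observation to the prompt so the LLM can reason about what to do next.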

LangChain Memory: Building Context-Aware Applications

Memory allows LLMs to remember previous interactions. LangChain provides several memory types.

Memory Types

ConversationBufferMemory: stores all messages
ConversationSummaryMemory: summarizes the conversation as it grows
VectorStoreMemory: retrieves past messages by vector similarity

Implementation

```python
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

memory = ConversationBufferMemory()
conversation = ConversationChain(llm=llm, memory=memory)

conversation.predict(input="Hi, I'm learning LangChain")
conversation.predict(input="What am I learning?")
```
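Buffer memory is conceptually just an append-only transcript that gets rendered into every prompt. A plain-Python sketch of that idea (the `BufferMemory` class here is illustrative, not LangChain's implementation):

```python
class BufferMemory:
    # Store the full message history and render it into the prompt,
    # like ConversationBufferMemory does.
    def __init__(self):
        self.messages = []

    def save(self, human, ai):
        self.messages.append(("Human", human))
        self.messages.append(("AI", ai))

    def as_prompt(self):
        return "\n".join(f"{role}: {text}" for role, text in self.messages)

memory = BufferMemory()
memory.save("Hi, I'm learning LangChain", "Great, how can I help?")
print(memory.as_prompt())
```

Because every turn is replayed verbatim, buffer memory is simple but grows without bound; summary and vector-store memory exist to keep the prompt within the model's context limit.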