LangChain Integration: OpenAI, DeepSeek, and More

LangChain integrates with multiple LLM providers, so you can switch between models with minimal code changes.

Supported Providers
✅ OpenAI
✅ DeepSeek
✅ Anthropic
✅ HuggingFace
✅ Local models

Using DeepSeek

DeepSeek exposes an OpenAI-compatible API, so the OpenAI integration works against its endpoint with a DeepSeek key:

```python
from langchain_openai import OpenAI

llm = OpenAI(
    openai_api_key="your-deepseek-key",
    openai_api_base="https://api.deepseek.com",
)
```

Model Switching
Change the model without rewriting application code.

Conclusion
LangChain provides a unified API across LLM providers!
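One way to make the switch truly one-line is to keep provider settings in a registry and build client kwargs from it. This is a hypothetical sketch, not LangChain API: `PROVIDERS` and `make_llm_config` are illustrative names, and the model names are assumptions.

```python
# Hypothetical provider registry so application code never hardcodes endpoints.
# PROVIDERS and make_llm_config are illustrative, not part of LangChain.
PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    "deepseek": {"base_url": "https://api.deepseek.com", "model": "deepseek-chat"},
}

def make_llm_config(provider: str, api_key: str) -> dict:
    """Return keyword arguments for an OpenAI-compatible client."""
    cfg = PROVIDERS[provider]
    return {
        "openai_api_key": api_key,
        "openai_api_base": cfg["base_url"],
        "model": cfg["model"],
    }

print(make_llm_config("deepseek", "sk-test")["openai_api_base"])
```

Swapping providers then means changing one string, while the rest of the pipeline is untouched.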

LangChain Streaming: Real-time Output

Streaming delivers LLM responses token by token in real time; implement it for a better user experience.

Streaming with Callbacks

```python
from langchain_openai import OpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
llm.invoke("Tell me a story")
```

Async Streaming
Use async callback handlers for web applications.

Benefits
✅ Faster perceived response
✅ Better UX
✅ Progressive display

Conclusion
Streaming improves user experience!
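The mechanism behind the handler above can be sketched in plain Python: a producer emits tokens one at a time and each registered handler reacts immediately, so output appears progressively. `StreamingHandler` and `fake_stream` are illustrative stand-ins, not LangChain classes.

```python
# Illustrative sketch of the callback-streaming pattern (not LangChain internals).
class StreamingHandler:
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str) -> None:
        # A real handler might print, or push the token over SSE/websocket.
        self.tokens.append(token)

def fake_stream(text: str, handlers: list) -> str:
    for token in text.split():          # stand-in for model token chunks
        for h in handlers:
            h.on_llm_new_token(token + " ")
    return text

handler = StreamingHandler()
fake_stream("Tell me a story", [handler])
print("".join(handler.tokens))
```

The key design point is that handlers are called per token, not once at the end, which is what makes the response feel fast.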

LangChain Callbacks: Monitoring and Logging

Callbacks monitor and log LangChain operations: track tokens, costs, and performance.

Callback Types
✅ StdOutCallbackHandler: print to console
✅ FileCallbackHandler: log to file
✅ Custom handlers

Example

```python
from langchain.callbacks import StdOutCallbackHandler

handler = StdOutCallbackHandler()
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
```

Custom Callback

```python
from langchain.callbacks.base import BaseCallbackHandler

class MyCallback(BaseCallbackHandler):
    def on_llm_start(self, serialized, prompts, **kwargs):
        print(f"LLM started with {len(prompts)} prompts")
```

Conclusion
Callbacks provide visibility into what your chains are doing.
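To show the "track tokens and costs" idea concretely, here is a dependency-free sketch of a counting handler. In real code this class would subclass LangChain's `BaseCallbackHandler`; `CountingCallback` and its whitespace token count are illustrative assumptions.

```python
# Sketch of a custom metrics handler: counts LLM invocations and roughly
# approximates token usage. Not a LangChain class; real token counting
# would use the model's tokenizer.
class CountingCallback:
    def __init__(self):
        self.llm_starts = 0
        self.approx_tokens = 0

    def on_llm_start(self, serialized, prompts, **kwargs):
        self.llm_starts += 1
        # Crude approximation: whitespace-delimited words as "tokens".
        self.approx_tokens += sum(len(p.split()) for p in prompts)

cb = CountingCallback()
cb.on_llm_start({}, ["Summarize this text", "Translate to French"])
print(cb.llm_starts, cb.approx_tokens)
```

Accumulating counters like these is the basis for per-request cost dashboards.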

LangChain Embeddings: Text to Vectors

Embeddings convert text into numerical vectors; understanding them is the foundation of semantic search.

Embedding Models
✅ OpenAI Embeddings
✅ HuggingFace Embeddings
✅ Cohere Embeddings
✅ Local models

Example

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vector = embeddings.embed_query("Hello world")
vectors = embeddings.embed_documents(["doc1", "doc2"])
```

Similarity Calculation
Use cosine similarity to compare embeddings.

Conclusion
Embeddings enable semantic search.
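The cosine-similarity step mentioned above is simple enough to write out: the dot product of two vectors divided by the product of their norms, giving 1.0 for identical directions and 0.0 for orthogonal ones.

```python
import math

# Cosine similarity between two equal-length vectors.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # same direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```

In practice you would feed it the vectors returned by `embed_query` / `embed_documents`.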

LangChain Vector Stores: Efficient Retrieval

Vector stores enable efficient similarity search: store document embeddings and retrieve the closest matches.

Popular Vector Stores
✅ Pinecone
✅ Chroma
✅ Weaviate
✅ FAISS
✅ Milvus

Example with Chroma

```python
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_texts(texts, embeddings)
results = vectorstore.similarity_search("query", k=3)
```

Benefits
✅ Fast similarity search
✅ Scalable
✅ Persistent storage
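What `similarity_search` does can be demystified with a brute-force sketch: score every stored vector against the query and return the top k. Real stores like FAISS use approximate indexes to make this fast at scale; the function and store below are purely illustrative.

```python
import math

# Brute-force nearest-neighbour search over a {text: vector} dict.
# Mimics the k-nearest API shape; not how production stores index data.
def similarity_search(query_vec, store, k=3):
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(x * x for x in b)))
    scored = sorted(store.items(), key=lambda kv: cos(query_vec, kv[1]), reverse=True)
    return [text for text, _ in scored[:k]]

store = {
    "cats purr": [0.9, 0.1],
    "dogs bark": [0.1, 0.9],
    "kittens meow": [0.8, 0.2],
}
print(similarity_search([1.0, 0.0], store, k=2))
```

Swapping the linear scan for an index (IVF, HNSW) is what the "scalable" benefit refers to.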

LangChain Text Splitters: Chunking Documents

Text splitters break documents into manageable chunks for optimal LLM processing.

Splitter Types
CharacterTextSplitter: by characters
RecursiveCharacterTextSplitter: smart splitting on natural boundaries
TokenTextSplitter: by tokens
Sentence-based splitters (e.g. NLTKTextSplitter): by sentences

Example

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
)
chunks = splitter.split_text(long_text)
```

Best Practices
✅ Use overlap to preserve context across chunk boundaries
✅ Match chunk size to model limits
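The chunk_size/chunk_overlap mechanics above can be sketched with plain slicing: each chunk starts `chunk_size - chunk_overlap` characters after the previous one, so adjacent chunks share an overlapping tail. (RecursiveCharacterTextSplitter additionally prefers paragraph and sentence boundaries; this sketch ignores that.)

```python
# Minimal fixed-size chunking with overlap; illustrative only.
def split_with_overlap(text: str, chunk_size: int, chunk_overlap: int):
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_with_overlap("abcdefghij", chunk_size=4, chunk_overlap=2)
print(chunks)  # ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

Note how "cd" appears at the end of the first chunk and the start of the second — that shared context is what overlap buys you.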

LangChain Document Loaders: Data Ingestion

Document loaders ingest data from various sources: files, APIs, and databases.

Supported Formats
✅ PDF
✅ Word documents
✅ CSV
✅ JSON
✅ HTML
✅ Databases

Example

```python
from langchain.document_loaders import PyPDFLoader, TextLoader

pdf_loader = PyPDFLoader("document.pdf")
pdf_docs = pdf_loader.load()

text_loader = TextLoader("file.txt")
text_docs = text_loader.load()
```

Custom Loaders
Create loaders for custom data sources.
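The loader contract is simple: `load()` returns documents carrying text plus source metadata. Here is a stdlib-only sketch of that shape — `Document` and `SimpleTextLoader` are illustrative stand-ins, not the real LangChain classes.

```python
import os
import tempfile
from dataclasses import dataclass, field

# Illustrative document + loader pair following the load() -> [Document] contract.
@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

class SimpleTextLoader:
    def __init__(self, path: str):
        self.path = path

    def load(self):
        with open(self.path, encoding="utf-8") as f:
            return [Document(page_content=f.read(),
                             metadata={"source": self.path})]

# Demo against a temporary file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello loader")
    path = f.name
docs = SimpleTextLoader(path).load()
print(docs[0].page_content, docs[0].metadata["source"] == path)
os.remove(path)
```

A custom loader for an API or database only needs to implement the same `load()` shape.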

LangChain Output Parsers: Structured Responses

Output parsers convert raw LLM text into structured data, giving you reliable, parseable outputs.

Parser Types
JsonOutputParser: JSON output
PydanticOutputParser: Pydantic models
CommaSeparatedListOutputParser: list output
DatetimeOutputParser: date/time parsing

Example

```python
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

parser = PydanticOutputParser(pydantic_object=Person)
prompt = PromptTemplate(
    template="{query}\n{format_instructions}",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
```

Benefits
✅ Type-safe outputs
✅ Validation
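What the parser does after the model responds can be shown with the stdlib alone: take raw text, decode JSON, and validate it into a typed object. `parse_person` is an illustrative sketch; `PydanticOutputParser` does the same job with pydantic models and richer error messages.

```python
import json
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int

# Illustrative parser: raw LLM text -> validated typed object.
def parse_person(llm_output: str) -> Person:
    data = json.loads(llm_output)
    if not isinstance(data.get("name"), str) or not isinstance(data.get("age"), int):
        raise ValueError(f"unexpected schema: {data}")
    return Person(name=data["name"], age=data["age"])

person = parse_person('{"name": "Ada", "age": 36}')
print(person)
```

The validation step is the point: malformed model output fails loudly at the parser instead of corrupting downstream code.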

LangChain Prompt Templates: Dynamic Prompts

Prompt templates create reusable, dynamic prompts; mastering them is core prompt engineering in LangChain.

Template Types
PromptTemplate: string templates
ChatPromptTemplate: chat-specific
FewShotPromptTemplate: with examples

Example

```python
from langchain.prompts import PromptTemplate

template = """
You are a {role}.
Task: {task}
Context: {context}
"""

prompt = PromptTemplate(
    template=template,
    input_variables=["role", "task", "context"],
)
```

Partial Variables
Pre-fill some variables for reuse.

Conclusion
Prompt templates keep prompts consistent and reusable.
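The partial-variables idea can be sketched with `str.format`: fix the stable variables once, supply the rest per call. `make_prompt` is an illustrative helper, not LangChain's API (there, you would pass `partial_variables` to `PromptTemplate`).

```python
# Sketch of partial variables: pre-fill some template slots, fill the rest later.
TEMPLATE = "You are a {role}.\nTask: {task}\nContext: {context}"

def make_prompt(template: str, **partials):
    def fill(**variables):
        # Later-supplied variables are merged over the pre-filled ones.
        return template.format(**{**partials, **variables})
    return fill

reviewer_prompt = make_prompt(TEMPLATE, role="code reviewer")  # role fixed once
text = reviewer_prompt(task="review this diff", context="a Python service")
print(text)
```

This lets one base template serve many roles without repeating the boilerplate.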

LangChain Chains: Combining Multiple Operations

Chains let you combine multiple LLM operations into complex workflows.

Chain Types
LLMChain: a single prompt/LLM call
SequentialChain: multiple steps in sequence
RouterChain: conditional routing

Sequential Chain Example

```python
from langchain.chains import LLMChain, SimpleSequentialChain

chain1 = LLMChain(llm=llm, prompt=prompt1)
chain2 = LLMChain(llm=llm, prompt=prompt2)

# SimpleSequentialChain pipes each chain's single output into the next.
overall_chain = SimpleSequentialChain(chains=[chain1, chain2])
```

Best Practices
✅ Keep chains focused
✅ Handle errors at each step
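Under the hood, the sequential pattern is function composition: each step's output feeds the next. A plain-Python sketch (step functions are stand-ins for `LLMChain` calls):

```python
# Illustrative sequential execution: pipe one step's output into the next.
def run_sequential(steps, initial_input):
    value = initial_input
    for step in steps:
        value = step(value)
    return value

# Stand-ins for two chain steps (a real chain would call an LLM here).
summarize = lambda text: f"summary({text})"
translate = lambda text: f"translated({text})"

result = run_sequential([summarize, translate], "long article")
print(result)  # translated(summary(long article))
```

Keeping each step a small, single-purpose function is the same "keep chains focused" advice as above, and it makes per-step error handling straightforward.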