Building a Chatbot API with Memory

Build a chatbot API with conversation memory. Create contextual chatbot services.

Architecture: API Endpoint → Memory Store → LLM

Implementation:

```python
from openai import OpenAI

client = OpenAI()

class ChatbotAPI:
    def __init__(self):
        self.memories = {}  # session_id -> message history

    def chat(self, session_id, message):
        if session_id not in self.memories:
            self.memories[session_id] = []
        self.memories[session_id].append({"role": "user", "content": message})
        response = client.chat.completions.create(
            model="gpt-4",
            messages=self.memories[session_id],
        )
        reply = response.choices[0].message.content
        # Store the assistant's reply so later turns see the full conversation.
        self.memories[session_id].append({"role": "assistant", "content": reply})
        return reply
```

Conclusion: Memory …
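One practical concern the class above glosses over is unbounded history growth: every turn is appended forever, which eventually overflows the model's context window. A minimal sketch of capping memory per session (`trim_history` is a hypothetical helper, and the cutoff of 20 messages is an arbitrary illustration):

```python
def trim_history(messages, max_messages=20):
    """Keep only the most recent messages so requests stay within the context window."""
    if len(messages) <= max_messages:
        return messages
    return messages[-max_messages:]

# Simulate a long-running session of 50 turns.
history = [{"role": "user", "content": f"msg {i}"} for i in range(50)]
trimmed = trim_history(history)
print(len(trimmed))  # 20
```

A real service might instead trim by token count, or summarize older turns rather than dropping them.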

Embedding API: Text Vectors at Scale

Generate embeddings for text at scale. Create text vectors for semantic search.

Example:

```python
from openai import OpenAI

client = OpenAI()

response = client.embeddings.create(
    model="text-embedding-3-small",
    input=["Hello world", "How are you?"],
)
embeddings = [e.embedding for e in response.data]
```

Models:
✅ text-embedding-3-small
✅ text-embedding-3-large

Pricing: $0.02/1M tokens (small), $0.13/1M tokens (large)

Conclusion: Embeddings enable semantic search!
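Since the conclusion points at semantic search, here is a minimal sketch of the comparison step: ranking vectors by cosine similarity. The toy 3-dimensional vectors stand in for the real embeddings from `response.data`:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embedding output.
print(cosine_similarity([1.0, 0.0, 0.0], [1.0, 0.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 0.0
```

At scale you would hand this off to a vector database rather than comparing pairwise in Python.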

Vision API: Image Understanding with AI

Use vision models for image understanding. Process images with LLMs.

GPT-4 Vision example:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": "image_url"}},
        ],
    }],
)
```

Use cases:
✅ Image analysis
✅ Document OCR
✅ Visual Q&A

Conclusion: Vision APIs enable image understanding!
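When the image is a local file rather than a hosted URL, the `image_url` field also accepts a base64 data URL. A minimal sketch of building one (`to_data_url` is a hypothetical helper):

```python
import base64

def to_data_url(image_bytes, mime="image/png"):
    """Encode raw image bytes as a data URL usable in the image_url field."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{b64}"

# In practice image_bytes would come from open("photo.png", "rb").read().
url = to_data_url(b"\x89PNG fake bytes for illustration")
print(url[:22])  # data:image/png;base64,
```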

Function Calling with AI APIs

Use function calling for structured outputs. Make LLMs call your functions.

Example:

```python
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[...],
    tools=tools,
)
```

Conclusion: Function calling enables structured interactions!
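The example above only declares the tool; the model's response then contains tool calls whose JSON argument strings your code must parse and route to a real function. A minimal dispatch sketch (`get_weather`, `AVAILABLE`, and `dispatch` are hypothetical stand-ins; in real use the name and argument string come from the response's tool-call objects):

```python
import json

def get_weather(location):
    # Hypothetical local implementation the model's tool call dispatches to.
    return {"location": location, "forecast": "sunny"}

AVAILABLE = {"get_weather": get_weather}

def dispatch(name, arguments_json):
    """Route a model tool call (function name + JSON argument string) to local code."""
    fn = AVAILABLE[name]
    args = json.loads(arguments_json)
    return fn(**args)

result = dispatch("get_weather", '{"location": "Paris"}')
print(result)  # {'location': 'Paris', 'forecast': 'sunny'}
```

The function's return value is then sent back to the model as a `tool` role message so it can compose the final answer.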

Async API Calls for Better Performance

Use async calls to improve performance. Make concurrent API requests efficiently.

Async example:

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def generate(prompt):
    return await client.chat.completions.create(...)

async def main():
    tasks = [generate(f"Task {i}") for i in range(10)]
    results = await asyncio.gather(*tasks)
    return results

asyncio.run(main())
```

Benefits:
✅ Faster processing
✅ Better throughput
✅ Resource efficient

Conclusion: Async …
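Firing all ten requests at once can trip provider rate limits, so concurrent calls are commonly capped with a semaphore. A runnable sketch of the pattern, where `fake_generate` stands in for the real API call:

```python
import asyncio

async def limited(sem, coro_fn, arg):
    """Run coro_fn(arg), but only while holding a semaphore slot."""
    async with sem:
        return await coro_fn(arg)

async def fake_generate(prompt):
    # Stand-in for the real client call so the pattern runs locally.
    await asyncio.sleep(0.01)
    return f"response to {prompt}"

async def main():
    sem = asyncio.Semaphore(3)  # at most 3 requests in flight at once
    tasks = [limited(sem, fake_generate, f"Task {i}") for i in range(10)]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
print(len(results))  # 10
```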

Building AI-Powered APIs

Create your own AI-powered API. Build APIs that use LLMs for processing.

Architecture: FastAPI → AI Service → LLM Provider

FastAPI example:

```python
from fastapi import FastAPI

app = FastAPI()

@app.post("/generate")
async def generate_text(prompt: str):
    response = client.chat.completions.create(...)
    return {"text": response.choices[0].message.content}
```

Conclusion: Build your own AI APIs easily!
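Before the endpoint forwards anything to the LLM, the service layer usually validates input so malformed or oversized prompts fail fast without spending tokens. A minimal sketch (`validate_prompt` and its 4000-character cap are illustrative assumptions, not part of FastAPI):

```python
def validate_prompt(prompt, max_chars=4000):
    """Reject empty or oversized prompts before calling the LLM."""
    if not isinstance(prompt, str) or not prompt.strip():
        raise ValueError("prompt must be a non-empty string")
    if len(prompt) > max_chars:
        raise ValueError(f"prompt exceeds {max_chars} characters")
    return prompt.strip()

print(validate_prompt("  Summarize this article.  "))  # Summarize this article.
```

Inside the FastAPI route you would call this first and translate `ValueError` into an HTTP 422 response.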

API Cost Monitoring and Optimization

Monitor and optimize API costs. Track spending and reduce costs.

Monitoring tools:
✅ Built-in dashboards
✅ Custom tracking
✅ Alerts

Cost tracking:

```python
class CostTracker:
    def __init__(self):
        self.total_tokens = 0
        self.total_cost = 0

    def track(self, input_tokens, output_tokens):
        self.total_tokens += input_tokens + output_tokens
        self.total_cost += calculate_cost(...)
```

Conclusion: Monitoring prevents bill surprises!
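The `calculate_cost` call above is left undefined. One possible sketch, using the embedding prices quoted earlier; any other entries would need the provider's current price list, so treat the table as illustrative:

```python
# Per-1M-token prices. The embedding prices match the figures quoted above;
# always check the provider's current price list before relying on these.
PRICES = {
    "text-embedding-3-small": {"input": 0.02, "output": 0.0},
    "text-embedding-3-large": {"input": 0.13, "output": 0.0},
}

def calculate_cost(model, input_tokens, output_tokens):
    """Dollar cost of one request, given token counts and the price table."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

print(calculate_cost("text-embedding-3-small", 1_000_000, 0))  # 0.02
```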

Streaming vs Batch API Responses

Choose between streaming and batch responses. Understand when to use each approach.

Streaming:
✅ Real-time output
✅ Better UX
✅ Early stopping

Batch:
✅ Simpler code
✅ Full response
✅ Easier testing

When to use:
Streaming: chat apps, long responses
Batch: processing, batch jobs

Conclusion: Choose based on your use case!
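On the streaming side, the client yields chunks whose text deltas you accumulate into the full response while displaying them as they arrive. A sketch of that loop, with simulated deltas standing in for real `chunk.choices[0].delta.content` values:

```python
def consume_stream(deltas):
    """Accumulate streamed text deltas into the full response."""
    parts = []
    for delta in deltas:
        # With the real API (stream=True), delta is chunk.choices[0].delta.content;
        # a UI would render each piece immediately, e.g. print(delta, end="").
        parts.append(delta)
    return "".join(parts)

# Simulated deltas standing in for a real streamed response.
print(consume_stream(["Hel", "lo ", "world"]))  # Hello world
```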

Error Handling for AI APIs

Handle API errors gracefully. Build robust error handling for production.

Common errors:
✅ AuthenticationError
✅ RateLimitError
✅ APIConnectionError
✅ InvalidRequestError

Error-handling pattern:

```python
import time
import openai

try:
    response = client.chat.completions.create(...)
except openai.AuthenticationError:
    log_and_alert("Invalid API key")
except openai.RateLimitError:
    time.sleep(60)
    retry()  # hypothetical retry helper
```

Conclusion: Error handling ensures reliability!
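Sleeping a flat 60 seconds on every rate limit is crude; exponential backoff with jitter usually recovers faster while still easing load. A runnable sketch where `RuntimeError` stands in for the SDK's `RateLimitError` and a flaky function simulates transient failures:

```python
import random
import time

def with_backoff(fn, max_retries=5, base=1.0, sleep=time.sleep):
    """Retry fn on transient errors, doubling the wait each attempt plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for openai.RateLimitError
            if attempt == max_retries - 1:
                raise
            sleep(base * 2 ** attempt + random.random())

calls = {"n": 0}
def flaky():
    # Fails twice, then succeeds -- simulating a transient rate limit.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = with_backoff(flaky, sleep=lambda s: None)  # no-op sleep so the demo is instant
print(result)  # ok
```

The injected `sleep` parameter keeps the sketch testable; production code would use the default `time.sleep`.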

API Authentication and Security

Secure your AI API integrations. Protect API keys and sensitive data.

Security best practices:
✅ Never hardcode API keys
✅ Use environment variables
✅ Rotate keys regularly
✅ Use a secrets manager

Environment variables:

```python
import os

api_key = os.environ.get("OPENAI_API_KEY")
```

Key rotation: regularly rotate API keys to minimize exposure risk.

Conclusion: Security is critical for production applications!
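Note that `os.environ.get` silently returns `None` when the variable is unset, which surfaces later as a confusing authentication failure; failing fast at startup gives a clearer error. A minimal sketch (`load_api_key` is a hypothetical helper; the placeholder key exists only so the demo runs):

```python
import os

def load_api_key(var="OPENAI_API_KEY"):
    """Read the key from the environment and fail fast if it's missing."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it or use a secrets manager")
    return key

os.environ["OPENAI_API_KEY"] = "sk-test-placeholder"  # demo only; never hardcode real keys
print(load_api_key()[:3])  # sk-
```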