Claude Agent Skill · by Wshobson

Langchain Architecture

This one's for building serious LLM applications with LangChain 1.x and LangGraph, not just chatbot demos. It covers the modern agent patterns like ReAct and Plan-and-Execute.

Install
Terminal · npx
$ npx skills add https://github.com/wshobson/agents --skill langchain-architecture
Works with Paperclip

How Langchain Architecture fits into a Paperclip company.

Langchain Architecture drops into any Paperclip agent that handles this kind of work. Assign it to a specialist inside a pre-configured PaperclipOrg company and the skill becomes available on every heartbeat — no prompt engineering, no tool wiring.

Source file: SKILL.md · 634 lines
---
name: langchain-architecture
description: Design LLM applications using LangChain 1.x and LangGraph for agents, memory, and tool integration. Use when building LangChain applications, implementing AI agents, or creating complex LLM workflows.
---

# LangChain & LangGraph Architecture

Master modern LangChain 1.x and LangGraph for building sophisticated LLM applications with agents, state management, memory, and tool integration.

## When to Use This Skill

- Building autonomous AI agents with tool access
- Implementing complex multi-step LLM workflows
- Managing conversation memory and state
- Integrating LLMs with external data sources and APIs
- Creating modular, reusable LLM application components
- Implementing document processing pipelines
- Building production-grade LLM applications

## Package Structure (LangChain 1.x)

```
langchain (1.2.x)         # High-level orchestration
langchain-core (1.2.x)    # Core abstractions (messages, prompts, tools)
langchain-community       # Third-party integrations
langgraph                 # Agent orchestration and state management
langchain-openai          # OpenAI integrations
langchain-anthropic       # Anthropic/Claude integrations
langchain-voyageai        # Voyage AI embeddings
langchain-pinecone        # Pinecone vector store
```

## Core Concepts

### 1. LangGraph Agents

LangGraph is the standard for building agents in 2026. It provides:

**Key Features:**

- **StateGraph**: Explicit state management with typed state
- **Durable Execution**: Agents persist through failures
- **Human-in-the-Loop**: Inspect and modify state at any point
- **Memory**: Short-term and long-term memory across sessions
- **Checkpointing**: Save and resume agent state

**Agent Patterns:**

- **ReAct**: Reasoning + Acting with `create_react_agent`
- **Plan-and-Execute**: Separate planning and execution nodes
- **Multi-Agent**: Supervisor routing between specialized agents
- **Tool-Calling**: Structured tool invocation with Pydantic schemas

### 2. State Management

LangGraph uses TypedDict for explicit state:

```python
from typing import Annotated, TypedDict
from langgraph.graph import MessagesState

# Simple message-based state
class AgentState(MessagesState):
    """Extends MessagesState with custom fields."""
    context: Annotated[list, "retrieved documents"]

# Custom state for complex agents
class CustomState(TypedDict):
    messages: Annotated[list, "conversation history"]
    context: Annotated[dict, "retrieved context"]
    current_step: str
    results: list
```

### 3. Memory Systems

Modern memory implementations:

- **ConversationBufferMemory**: Stores all messages (short conversations)
- **ConversationSummaryMemory**: Summarizes older messages (long conversations)
- **ConversationTokenBufferMemory**: Token-based windowing
- **VectorStoreRetrieverMemory**: Semantic similarity retrieval
- **LangGraph Checkpointers**: Persistent state across sessions

### 4. Document Processing

Loading, transforming, and storing documents:

**Components:**

- **Document Loaders**: Load from various sources
- **Text Splitters**: Chunk documents intelligently
- **Vector Stores**: Store and retrieve embeddings
- **Retrievers**: Fetch relevant documents

### 5. Callbacks & Tracing

LangSmith is the standard for observability:

- Request/response logging
- Token usage tracking
- Latency monitoring
- Error tracking
- Trace visualization

## Quick Start

### Modern ReAct Agent with LangGraph

```python
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
import ast
import operator

# Initialize LLM (Claude Sonnet 4.6 recommended)
llm = ChatAnthropic(model="claude-sonnet-4-6", temperature=0)

# Define tools with Pydantic schemas
@tool
def search_database(query: str) -> str:
    """Search internal database for information."""
    # Your database search logic
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> str:
    """Safely evaluate a mathematical expression.

    Supports: +, -, *, /, **, %, parentheses
    Example: '(2 + 3) * 4' returns '20'
    """
    # Safe math evaluation using ast
    allowed_operators = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
        ast.Pow: operator.pow,
        ast.Mod: operator.mod,
        ast.USub: operator.neg,
    }

    def _eval(node):
        if isinstance(node, ast.Constant):
            return node.value
        elif isinstance(node, ast.BinOp):
            left = _eval(node.left)
            right = _eval(node.right)
            return allowed_operators[type(node.op)](left, right)
        elif isinstance(node, ast.UnaryOp):
            operand = _eval(node.operand)
            return allowed_operators[type(node.op)](operand)
        else:
            raise ValueError(f"Unsupported operation: {type(node)}")

    try:
        tree = ast.parse(expression, mode='eval')
        return str(_eval(tree.body))
    except Exception as e:
        return f"Error: {e}"

tools = [search_database, calculate]

# Create checkpointer for memory persistence
checkpointer = MemorySaver()

# Create ReAct agent
agent = create_react_agent(
    llm,
    tools,
    checkpointer=checkpointer
)

# Run agent with thread ID for memory
config = {"configurable": {"thread_id": "user-123"}}
result = await agent.ainvoke(
    {"messages": [("user", "Search for Python tutorials and calculate 25 * 4")]},
    config=config
)
```

## Architecture Patterns

### Pattern 1: RAG with LangGraph

```python
from langgraph.graph import StateGraph, START, END
from langchain_anthropic import ChatAnthropic
from langchain_voyageai import VoyageAIEmbeddings
from langchain_pinecone import PineconeVectorStore
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from typing import TypedDict, Annotated

class RAGState(TypedDict):
    question: str
    context: Annotated[list[Document], "retrieved documents"]
    answer: str

# Initialize components
llm = ChatAnthropic(model="claude-sonnet-4-6")
embeddings = VoyageAIEmbeddings(model="voyage-3-large")
vectorstore = PineconeVectorStore(index_name="docs", embedding=embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# Define nodes
async def retrieve(state: RAGState) -> RAGState:
    """Retrieve relevant documents."""
    docs = await retriever.ainvoke(state["question"])
    return {"context": docs}

async def generate(state: RAGState) -> RAGState:
    """Generate answer from context."""
    prompt = ChatPromptTemplate.from_template(
        """Answer based on the context below. If you cannot answer, say so.

        Context: {context}

        Question: {question}

        Answer:"""
    )
    context_text = "\n\n".join(doc.page_content for doc in state["context"])
    response = await llm.ainvoke(
        prompt.format(context=context_text, question=state["question"])
    )
    return {"answer": response.content}

# Build graph
builder = StateGraph(RAGState)
builder.add_node("retrieve", retrieve)
builder.add_node("generate", generate)
builder.add_edge(START, "retrieve")
builder.add_edge("retrieve", "generate")
builder.add_edge("generate", END)

rag_chain = builder.compile()

# Use the chain
result = await rag_chain.ainvoke({"question": "What is the main topic?"})
```

### Pattern 2: Custom Agent with Structured Tools

```python
from langgraph.prebuilt import create_react_agent
from langchain_core.tools import StructuredTool
from pydantic import BaseModel, Field

class SearchInput(BaseModel):
    """Input for database search."""
    query: str = Field(description="Search query")
    filters: dict = Field(default={}, description="Optional filters")

class EmailInput(BaseModel):
    """Input for sending email."""
    recipient: str = Field(description="Email recipient")
    subject: str = Field(description="Email subject")
    content: str = Field(description="Email body")

async def search_database(query: str, filters: dict = {}) -> str:
    """Search internal database for information."""
    # Your database search logic
    return f"Results for '{query}' with filters {filters}"

async def send_email(recipient: str, subject: str, content: str) -> str:
    """Send an email to specified recipient."""
    # Email sending logic
    return f"Email sent to {recipient}"

tools = [
    StructuredTool.from_function(
        coroutine=search_database,
        name="search_database",
        description="Search internal database",
        args_schema=SearchInput
    ),
    StructuredTool.from_function(
        coroutine=send_email,
        name="send_email",
        description="Send an email",
        args_schema=EmailInput
    )
]

agent = create_react_agent(llm, tools)
```

### Pattern 3: Multi-Step Workflow with StateGraph

```python
from langgraph.graph import StateGraph, START, END
from typing import TypedDict, Literal

class WorkflowState(TypedDict):
    text: str
    entities: list
    analysis: str
    summary: str
    current_step: str

async def extract_entities(state: WorkflowState) -> WorkflowState:
    """Extract key entities from text."""
    prompt = f"Extract key entities from: {state['text']}\n\nReturn as JSON list."
    response = await llm.ainvoke(prompt)
    return {"entities": response.content, "current_step": "analyze"}

async def analyze_entities(state: WorkflowState) -> WorkflowState:
    """Analyze extracted entities."""
    prompt = f"Analyze these entities: {state['entities']}\n\nProvide insights."
    response = await llm.ainvoke(prompt)
    return {"analysis": response.content, "current_step": "summarize"}

async def generate_summary(state: WorkflowState) -> WorkflowState:
    """Generate final summary."""
    prompt = f"""Summarize:
    Entities: {state['entities']}
    Analysis: {state['analysis']}

    Provide a concise summary."""
    response = await llm.ainvoke(prompt)
    return {"summary": response.content, "current_step": "complete"}

def route_step(state: WorkflowState) -> Literal["analyze", "summarize", "end"]:
    """Route to next step based on current state."""
    step = state.get("current_step", "extract")
    if step == "analyze":
        return "analyze"
    elif step == "summarize":
        return "summarize"
    return "end"

# Build workflow
builder = StateGraph(WorkflowState)
builder.add_node("extract", extract_entities)
builder.add_node("analyze", analyze_entities)
builder.add_node("summarize", generate_summary)

builder.add_edge(START, "extract")
builder.add_conditional_edges("extract", route_step, {
    "analyze": "analyze",
    "summarize": "summarize",
    "end": END
})
builder.add_conditional_edges("analyze", route_step, {
    "summarize": "summarize",
    "end": END
})
builder.add_edge("summarize", END)

workflow = builder.compile()
```

### Pattern 4: Multi-Agent Orchestration

```python
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import create_react_agent
from langchain_core.messages import HumanMessage
from typing import TypedDict, Literal

class MultiAgentState(TypedDict):
    messages: list
    next_agent: str

# Create specialized agents (research_tools, writing_tools, and
# review_tools are your own tool lists)
researcher = create_react_agent(llm, research_tools)
writer = create_react_agent(llm, writing_tools)
reviewer = create_react_agent(llm, review_tools)

async def supervisor(state: MultiAgentState) -> MultiAgentState:
    """Route to appropriate agent based on task."""
    prompt = f"""Based on the conversation, which agent should handle this?

    Options:
    - researcher: For finding information
    - writer: For creating content
    - reviewer: For reviewing and editing
    - FINISH: Task is complete

    Messages: {state['messages']}

    Respond with just the agent name."""

    response = await llm.ainvoke(prompt)
    return {"next_agent": response.content.strip().lower()}

def route_to_agent(state: MultiAgentState) -> Literal["researcher", "writer", "reviewer", "end"]:
    """Route based on supervisor decision."""
    next_agent = state.get("next_agent", "").lower()
    if next_agent == "finish":
        return "end"
    return next_agent if next_agent in ["researcher", "writer", "reviewer"] else "end"

# Build multi-agent graph
builder = StateGraph(MultiAgentState)
builder.add_node("supervisor", supervisor)
builder.add_node("researcher", researcher)
builder.add_node("writer", writer)
builder.add_node("reviewer", reviewer)

builder.add_edge(START, "supervisor")
builder.add_conditional_edges("supervisor", route_to_agent, {
    "researcher": "researcher",
    "writer": "writer",
    "reviewer": "reviewer",
    "end": END
})

# Each agent returns to supervisor
for agent in ["researcher", "writer", "reviewer"]:
    builder.add_edge(agent, "supervisor")

multi_agent = builder.compile()
```

## Memory Management

### Thread-Based Memory with LangGraph

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

# In-memory checkpointer (development)
checkpointer = MemorySaver()

# Create agent with persistent memory
agent = create_react_agent(llm, tools, checkpointer=checkpointer)

# Each thread_id maintains a separate conversation
config = {"configurable": {"thread_id": "session-abc123"}}

# Messages persist across invocations with the same thread_id
result1 = await agent.ainvoke({"messages": [("user", "My name is Alice")]}, config)
result2 = await agent.ainvoke({"messages": [("user", "What's my name?")]}, config)
# Agent remembers: "Your name is Alice"
```

### Production Memory with PostgreSQL

```python
from langgraph.checkpoint.postgres import PostgresSaver

# Production checkpointer
checkpointer = PostgresSaver.from_conn_string(
    "postgresql://user:pass@localhost/langgraph"
)

agent = create_react_agent(llm, tools, checkpointer=checkpointer)
```

### Vector Store Memory for Long-Term Context

```python
from langchain_community.vectorstores import Chroma
from langchain_voyageai import VoyageAIEmbeddings

embeddings = VoyageAIEmbeddings(model="voyage-3-large")
memory_store = Chroma(
    collection_name="conversation_memory",
    embedding_function=embeddings,
    persist_directory="./memory_db"
)

async def retrieve_relevant_memory(query: str, k: int = 5) -> list:
    """Retrieve relevant past conversations."""
    docs = await memory_store.asimilarity_search(query, k=k)
    return [doc.page_content for doc in docs]

async def store_memory(content: str, metadata: dict = {}):
    """Store conversation in long-term memory."""
    await memory_store.aadd_texts([content], metadatas=[metadata])
```

## Callback System & LangSmith

### LangSmith Tracing

```python
import os
from langchain_anthropic import ChatAnthropic

# Enable LangSmith tracing
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-api-key"
os.environ["LANGCHAIN_PROJECT"] = "my-project"

# All LangChain/LangGraph operations are automatically traced
llm = ChatAnthropic(model="claude-sonnet-4-6")
```

### Custom Callback Handler

```python
from langchain_core.callbacks import BaseCallbackHandler
from typing import Any, Dict, List

class CustomCallbackHandler(BaseCallbackHandler):
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs
    ) -> None:
        print(f"LLM started with {len(prompts)} prompts")

    def on_llm_end(self, response, **kwargs) -> None:
        print(f"LLM completed: {len(response.generations)} generations")

    def on_llm_error(self, error: Exception, **kwargs) -> None:
        print(f"LLM error: {error}")

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs
    ) -> None:
        print(f"Tool started: {serialized.get('name')}")

    def on_tool_end(self, output: str, **kwargs) -> None:
        print(f"Tool completed: {output[:100]}...")

# Use callbacks
result = await agent.ainvoke(
    {"messages": [("user", "query")]},
    config={"callbacks": [CustomCallbackHandler()]}
)
```

## Streaming Responses

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-sonnet-4-6", streaming=True)

# Stream tokens
async for chunk in llm.astream("Tell me a story"):
    print(chunk.content, end="", flush=True)

# Stream agent events
async for event in agent.astream_events(
    {"messages": [("user", "Search and summarize")]},
    version="v2"
):
    if event["event"] == "on_chat_model_stream":
        print(event["data"]["chunk"].content, end="")
    elif event["event"] == "on_tool_start":
        print(f"\n[Using tool: {event['name']}]")
```

## Testing Strategies

```python
import pytest
from unittest.mock import AsyncMock, patch

@pytest.mark.asyncio
async def test_agent_tool_selection():
    """Test agent selects correct tool."""
    with patch.object(llm, 'ainvoke') as mock_llm:
        mock_llm.return_value = AsyncMock(content="Using search_database")

        result = await agent.ainvoke({
            "messages": [("user", "search for documents")]
        })

        # Verify tool was called
        assert "search_database" in str(result)

@pytest.mark.asyncio
async def test_memory_persistence():
    """Test memory persists across invocations."""
    config = {"configurable": {"thread_id": "test-thread"}}

    # First message
    await agent.ainvoke(
        {"messages": [("user", "Remember: the code is 12345")]},
        config
    )

    # Second message should remember
    result = await agent.ainvoke(
        {"messages": [("user", "What was the code?")]},
        config
    )

    assert "12345" in result["messages"][-1].content
```

## Performance Optimization

### 1. Caching with Redis

```python
from langchain_community.cache import RedisCache
from langchain_core.globals import set_llm_cache
import redis

redis_client = redis.Redis.from_url("redis://localhost:6379")
set_llm_cache(RedisCache(redis_client))
```

### 2. Async Batch Processing

```python
import asyncio
from langchain_core.documents import Document

async def process_documents(documents: list[Document]) -> list:
    """Process documents in parallel."""
    tasks = [process_single(doc) for doc in documents]
    return await asyncio.gather(*tasks)

async def process_single(doc: Document) -> dict:
    """Process a single document."""
    chunks = text_splitter.split_documents([doc])
    embeddings = await embeddings_model.aembed_documents(
        [c.page_content for c in chunks]
    )
    return {"doc_id": doc.metadata.get("id"), "embeddings": embeddings}
```

### 3. Connection Pooling

```python
from langchain_pinecone import PineconeVectorStore
from pinecone import Pinecone

# Reuse Pinecone client
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("my-index")

# Create vector store with existing index
vectorstore = PineconeVectorStore(index=index, embedding=embeddings)
```
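## Appendix: What Text Splitters Do

The Document Processing components above list Text Splitters but never show what chunking actually produces, and the batch-processing example uses a `text_splitter` without defining one. Below is a minimal pure-Python sketch of the core idea: overlapping character windows. The `chunk_size` / `chunk_overlap` names mirror the parameters of LangChain's `RecursiveCharacterTextSplitter`, but this simplified stand-in ignores separators and splits on raw character offsets.

```python
def split_text(text: str, chunk_size: int = 100, chunk_overlap: int = 20) -> list[str]:
    """Split text into overlapping chunks of at most chunk_size characters."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap  # each window starts this far after the last
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last window already reaches the end of the text
    return chunks

text = "".join(str(i % 10) for i in range(250))
chunks = split_text(text, chunk_size=100, chunk_overlap=20)
print(len(chunks))                        # 3 windows cover 250 characters
print(chunks[0][-20:] == chunks[1][:20])  # consecutive chunks share the overlap
```

In a real pipeline you would use `RecursiveCharacterTextSplitter(chunk_size=..., chunk_overlap=...)` from `langchain_text_splitters`, which additionally tries to break on paragraph, sentence, and word boundaries before falling back to raw characters; the overlap exists so that a fact straddling a chunk boundary still appears intact in at least one chunk.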