API Reference
Complete reference documentation for the langgraph-ollama-local package: all functions, classes, and patterns for building local AI agents with LangGraph and Ollama.
Early Preview
This documentation is in active development. If you encounter issues, please report them on GitHub.
Quick Navigation
Configuration
Setup and configuration for local LLMs and LangGraph
- `LocalAgentConfig` - Main configuration class
- `OllamaConfig` - Ollama server settings
- `LangGraphConfig` - LangGraph execution settings
- Environment variable reference
RAG
Document loading, indexing, and retrieval
- `DocumentLoader` - Multi-format document loading
- `DocumentIndexer` - ChromaDB indexing pipeline
- `LocalEmbeddings` - Local embedding models
- Graders: Document, Hallucination, Answer, Query Router
Multi-Agent
Multi-agent collaboration patterns
- `create_multi_agent_graph()` - Supervisor-based coordination
- `create_hierarchical_graph()` - Nested team structures
- `MultiAgentState`, `TeamState`, `HierarchicalState` - Supervisor and team patterns
Patterns
Advanced multi-agent patterns
- Swarm - Decentralized agent networks
- Handoff - Peer-to-peer agent transfers
- Map-Reduce - Parallel agent execution
- Evaluation - Automated agent testing
State Types
TypedDict state schemas for all patterns
- Multi-Agent states
- Pattern-specific states
- Custom state creation (see the sketch below)
- Reducer reference
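For a quick taste of custom state creation, here is a minimal sketch built on standard LangGraph conventions; the `ResearchState` name and its fields are illustrative, not part of this package's API:

```python
import operator
from typing import Annotated
from typing_extensions import TypedDict

from langgraph.graph.message import add_messages

class ResearchState(TypedDict):
    # add_messages appends new messages instead of overwriting the list
    messages: Annotated[list, add_messages]
    # operator.add concatenates findings contributed by parallel nodes
    findings: Annotated[list[str], operator.add]
    # plain fields are simply replaced on each update
    current_step: str
```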
Getting Started
Installation & Setup
# Install the package
pip install langgraph-ollama-local
# Install with RAG support
pip install langgraph-ollama-local[rag]
# Install with all features
pip install langgraph-ollama-local[all]

Basic Configuration
from langgraph_ollama_local import LocalAgentConfig
# Create configuration
config = LocalAgentConfig()
# Create LLM client
llm = config.create_chat_client()
# Create checkpointer for persistence
checkpointer = config.create_checkpointer(backend="memory")

Your First Agent
from langgraph.prebuilt import create_react_agent
from langchain_core.tools import tool
@tool
def multiply(a: int, b: int) -> int:
"""Multiply two numbers."""
return a * b
# Create agent
agent = create_react_agent(llm, [multiply])
# Run agent
result = agent.invoke({
"messages": [("user", "What is 25 times 17?")]
})
print(result["messages"][-1].content)
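To keep conversation state across turns, attach the checkpointer created in Basic Configuration when constructing the agent. This is a minimal sketch using LangGraph's standard thread configuration; the thread_id value is arbitrary:

```python
# Reuse the checkpointer from Basic Configuration for persistence
agent = create_react_agent(llm, [multiply], checkpointer=checkpointer)

thread = {"configurable": {"thread_id": "demo-1"}}
agent.invoke({"messages": [("user", "What is 25 times 17?")]}, config=thread)
agent.invoke({"messages": [("user", "Now add 8 to that.")]}, config=thread)
```

Core Concepts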
Configuration
The LocalAgentConfig class provides a unified interface for configuring:
- Ollama connection: Model selection, server URL, temperature
- LangGraph settings: Recursion limits, checkpointing, streaming
- Environment variables: Load from `.env` files
from langgraph_ollama_local import LocalAgentConfig
config = LocalAgentConfig()
llm = config.create_chat_client(model="llama3.2:7b")
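Settings can also be overridden in code. The `OllamaConfig` field names below are inferred from the environment variable reference later on this page, so treat this as a sketch rather than a definitive signature:

```python
from langgraph_ollama_local import LocalAgentConfig, OllamaConfig

# Assumed fields mirroring OLLAMA_HOST / OLLAMA_PORT / OLLAMA_MODEL / OLLAMA_TEMPERATURE
config = LocalAgentConfig(
    ollama=OllamaConfig(
        host="localhost",
        port=11434,
        model="llama3.2:3b",
        temperature=0.2,
    )
)
llm = config.create_chat_client()
```

RAG (Retrieval-Augmented Generation)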
Build knowledge-grounded agents with document retrieval:
from langgraph_ollama_local.rag import DocumentIndexer, DocumentGrader
# Index documents
indexer = DocumentIndexer()
indexer.index_directory("sources/")
# Grade relevance
grader = DocumentGrader(llm)
is_relevant = grader.grade(document, question)

Multi-Agent Collaboration
Coordinate multiple specialized agents with supervisors:
from langgraph_ollama_local.agents import (
create_multi_agent_graph,
run_multi_agent_task
)
graph = create_multi_agent_graph(llm)
result = run_multi_agent_task(
graph,
task="Create a Python function to validate email addresses"
)

Advanced Patterns
Implement sophisticated multi-agent patterns:
Swarm Pattern
from langgraph_ollama_local.patterns.swarm import SwarmAgent, create_swarm_graph
agents = [
SwarmAgent(name="researcher", system_prompt="...", connections=["analyst"]),
SwarmAgent(name="analyst", system_prompt="...", connections=["writer"]),
SwarmAgent(name="writer", system_prompt="...", connections=[]),
]
graph = create_swarm_graph(llm, agents)

Handoff Pattern
from langgraph_ollama_local.patterns.handoffs import create_handoff_tool
handoff_to_support = create_handoff_tool("support", "Transfer for technical issues")
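Handoff tools are used like any other tool: give each agent the tools that transfer to its peers. The wiring below is a sketch that assumes `create_handoff_tool` returns a standard LangChain tool accepted by `create_react_agent`; the sales/support agents are illustrative:

```python
from langgraph.prebuilt import create_react_agent
from langgraph_ollama_local.patterns.handoffs import create_handoff_tool

handoff_to_support = create_handoff_tool("support", "Transfer for technical issues")
handoff_to_sales = create_handoff_tool("sales", "Transfer for pricing questions")

# Each agent can hand the conversation to its peer
sales_agent = create_react_agent(llm, [handoff_to_support])
support_agent = create_react_agent(llm, [handoff_to_sales])
```

Map-Reduce Pattern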
from langgraph_ollama_local.patterns.map_reduce import create_map_reduce_graph
graph = create_map_reduce_graph(llm, num_workers=5)
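Under the hood, map-reduce in LangGraph typically fans out with the Send API. The sketch below demonstrates that mechanism directly with plain LangGraph, independent of the `create_map_reduce_graph` helper:

```python
import operator
from typing import Annotated
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.types import Send

class MapState(TypedDict):
    items: list[str]
    results: Annotated[list[str], operator.add]  # merged across workers

def fan_out(state: MapState):
    # One Send per item spawns a parallel worker invocation
    return [Send("worker", {"item": item}) for item in state["items"]]

def worker(state: dict):
    return {"results": [state["item"].upper()]}

builder = StateGraph(MapState)
builder.add_node("worker", worker)
builder.add_conditional_edges(START, fan_out, ["worker"])
builder.add_edge("worker", END)
graph = builder.compile()
print(graph.invoke({"items": ["a", "b", "c"], "results": []}))
```

Common Use Cases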
RAG Applications
Build question-answering systems with document grounding:
from langgraph_ollama_local import LocalAgentConfig
from langgraph_ollama_local.rag import DocumentIndexer, DocumentGrader
config = LocalAgentConfig()
llm = config.create_chat_client()
# Index documents
indexer = DocumentIndexer()
indexer.index_directory("docs/")
# Create grader for quality control
grader = DocumentGrader(llm)

Related tutorials: RAG Patterns (see Further Reading).
Multi-Agent Systems
Build teams of specialized agents:
from langgraph_ollama_local.agents import create_multi_agent_graph
# Supervisor coordinates researcher, coder, and reviewer
graph = create_multi_agent_graph(
llm,
researcher_tools=[search_tool],
coder_tools=[python_executor],
reviewer_tools=[linter]
)

Related tutorials: Multi-Agent (see Further Reading).
Evaluation & Testing
Automate agent testing with simulated users:
from langgraph_ollama_local.patterns.evaluation import (
SimulatedUser,
create_evaluation_graph,
run_multiple_evaluations
)
user_config = SimulatedUser(
persona="Frustrated customer with billing issue",
goals=["Get refund", "Express dissatisfaction"],
behavior="impatient"
)
graph = create_evaluation_graph(llm, my_agent, user_config)
results = run_multiple_evaluations(graph, num_sessions=10)
API Structure
By Category
- Configuration & Setup
- Document Processing
- Quality Grading
- Multi-Agent Graphs
- Pattern Graphs
- State Types
- Utility Functions
Model Management
from langgraph_ollama_local import (
pull_model,
list_models,
ensure_model,
create_quick_client
)
# Pull a model
pull_model("llama3.2:3b")
# List available models
models = list_models()
# Ensure model is available
ensure_model("llama3.2:3b")
# Quick client for prototyping
llm = create_quick_client(model="llama3.2:1b")

Environment Configuration
All settings can be configured via environment variables:
# Ollama settings
OLLAMA_HOST=192.168.1.100
OLLAMA_PORT=11434
OLLAMA_MODEL=llama3.2:7b
OLLAMA_TEMPERATURE=0.7
# LangGraph settings
LANGGRAPH_RECURSION_LIMIT=50
LANGGRAPH_CHECKPOINT_DIR=.checkpoints
# RAG settings
RAG_CHUNK_SIZE=1500
RAG_CHUNK_OVERLAP=300
RAG_COLLECTION_NAME=my_docs
RAG_EMBEDDING_MODEL=all-mpnet-base-v2
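Assuming `LocalAgentConfig` reads the process environment (as its `.env` support suggests), settings can also be overridden programmatically before the config is constructed:

```python
import os

# Must be set before LocalAgentConfig() is constructed
os.environ["OLLAMA_MODEL"] = "llama3.2:7b"
os.environ["RAG_CHUNK_SIZE"] = "1000"

from langgraph_ollama_local import LocalAgentConfig
config = LocalAgentConfig()
```

Or use a .env file: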
from langgraph_ollama_local import LocalAgentConfig
# Automatically loads .env file
config = LocalAgentConfig()

Examples by Complexity
Beginner: Simple Agent
from langgraph_ollama_local import create_quick_client
from langgraph.prebuilt import create_react_agent
llm = create_quick_client()
agent = create_react_agent(llm, [])
result = agent.invoke({"messages": [("user", "Hello!")]})

Intermediate: RAG System
from langgraph_ollama_local import LocalAgentConfig
from langgraph_ollama_local.rag import DocumentIndexer, DocumentGrader
config = LocalAgentConfig()
llm = config.create_chat_client()
indexer = DocumentIndexer()
indexer.index_directory("sources/")
grader = DocumentGrader(llm)
relevant, irrelevant = grader.grade_documents(docs, question)

Advanced: Multi-Agent System
from langgraph_ollama_local.agents import create_hierarchical_graph, create_team_graph
research_team = create_team_graph(llm, "research", [
("searcher", "Search information", [search_tool]),
("analyst", "Analyze findings", None)
])
dev_team = create_team_graph(llm, "development", [
("frontend", "Build UI", [ui_tool]),
("backend", "Build API", [db_tool])
])
graph = create_hierarchical_graph(llm, {
"research": research_team,
"development": dev_team
})

Best Practices
1. Configuration Management
# Good: Use unified configuration
from langgraph_ollama_local import LocalAgentConfig
config = LocalAgentConfig()
llm = config.create_chat_client()
# Good: Use environment variables for deployment
# Set OLLAMA_MODEL=llama3.2:70b in production

2. State Persistence
# Good: Use appropriate backend for your use case
checkpointer = config.create_checkpointer(backend="sqlite") # Development
checkpointer = config.create_checkpointer(backend="postgres") # Production3. Error Handling
# Good: Handle failures gracefully
import logging
logger = logging.getLogger(__name__)

try:
    result = agent.invoke(payload)
except Exception as e:
    logger.error(f"Agent failed: {e}")
    # Fallback logic
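For transient failures, such as a model that is still loading, a simple retry with backoff is often enough. A minimal, package-agnostic sketch:

```python
import time

def invoke_with_retry(agent, payload, attempts=3, backoff=2.0):
    """Retry agent.invoke on transient failures, waiting longer each time."""
    for attempt in range(attempts):
        try:
            return agent.invoke(payload)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            time.sleep(backoff * (attempt + 1))
```

4. Resource Management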
# Good: Set appropriate limits
graph = create_multi_agent_graph(llm)
result = run_multi_agent_task(graph, task, max_iterations=10)

Migration Guide
From LangChain
If you're familiar with LangChain, this package provides opinionated wrappers:
# LangChain way
from langchain_ollama import ChatOllama
llm = ChatOllama(model="llama3.2:3b", base_url="http://localhost:11434")
# This package's way (with configuration management)
from langgraph_ollama_local import LocalAgentConfig
config = LocalAgentConfig()
llm = config.create_chat_client()  # Loads from .env automatically

Adding Multi-Agent Support
# Start with simple agent
from langgraph.prebuilt import create_react_agent
agent = create_react_agent(llm, tools)
# Upgrade to multi-agent
from langgraph_ollama_local.agents import create_multi_agent_graph
graph = create_multi_agent_graph(llm, researcher_tools=tools)

Troubleshooting
Common Issues
Model not found:
from langgraph_ollama_local import ensure_model
ensure_model("llama3.2:3b") # Pulls if neededChromaDB issues:
pip install langgraph-ollama-local[rag]

Connection errors:
from langgraph_ollama_local import LocalAgentConfig, OllamaConfig  # adjust import path if OllamaConfig lives elsewhere

config = LocalAgentConfig(
    ollama=OllamaConfig(host="192.168.1.100", timeout=180)
)
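Before raising timeouts, confirm the server is reachable at all. This check uses only the standard library and Ollama's /api/tags endpoint, independent of this package:

```python
import urllib.request

try:
    with urllib.request.urlopen("http://192.168.1.100:11434/api/tags", timeout=5) as resp:
        print("Ollama reachable, HTTP", resp.status)
except OSError as exc:
    print("Cannot reach Ollama:", exc)
```

Further Reading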
Tutorials
- Getting Started - Introduction and setup
- Core Patterns - Basic agent patterns
- RAG Patterns - Document retrieval
- Multi-Agent - Agent collaboration
Support
- GitHub Issues: Report bugs or request features
- Discussions: Ask questions and share ideas