
API Reference

Complete reference documentation for the langgraph-ollama-local package. All functions, classes, and patterns for building local AI agents with LangGraph and Ollama.

Early Preview

This documentation is in active development. If you encounter issues, please report them on GitHub.

Quick Navigation

Configuration

Setup and configuration for local LLMs and LangGraph

  • LocalAgentConfig - Main configuration class
  • OllamaConfig - Ollama server settings
  • LangGraphConfig - LangGraph execution settings
  • Environment variable reference

RAG

Document loading, indexing, and retrieval

  • DocumentLoader - Multi-format document loading
  • DocumentIndexer - ChromaDB indexing pipeline
  • LocalEmbeddings - Local embedding models
  • Graders: Document, Hallucination, Answer, Query Router

Multi-Agent

Multi-agent collaboration patterns

  • create_multi_agent_graph() - Supervisor-based coordination
  • create_hierarchical_graph() - Nested team structures
  • MultiAgentState, TeamState, HierarchicalState
  • Supervisor and team patterns

Patterns

Advanced multi-agent patterns

  • Swarm - Decentralized agent networks
  • Handoff - Peer-to-peer agent transfers
  • Map-Reduce - Parallel agent execution
  • Evaluation - Automated agent testing

State Types

TypedDict state schemas for all patterns

  • Multi-Agent states
  • Pattern-specific states
  • Custom state creation
  • Reducer reference

Getting Started

Installation & Setup

bash
# Install the package
pip install langgraph-ollama-local

# Install with RAG support (quote extras so zsh does not glob the brackets)
pip install "langgraph-ollama-local[rag]"

# Install with all features
pip install "langgraph-ollama-local[all]"
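
The package drives a locally running Ollama server, so install Ollama and start it before continuing. A quick sanity check using the standard Ollama CLI and HTTP API:

bash
# Confirm the server is reachable (default port 11434)
ollama list
curl http://localhost:11434/api/tags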

Basic Configuration

python
from langgraph_ollama_local import LocalAgentConfig

# Create configuration
config = LocalAgentConfig()

# Create LLM client
llm = config.create_chat_client()

# Create checkpointer for persistence
checkpointer = config.create_checkpointer(backend="memory")

Your First Agent

python
from langgraph.prebuilt import create_react_agent
from langchain_core.tools import tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

# Create agent
agent = create_react_agent(llm, [multiply])

# Run agent
result = agent.invoke({
    "messages": [("user", "What is 25 times 17?")]
})

print(result["messages"][-1].content)
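
To keep conversation history across calls, pass the checkpointer created in Basic Configuration when building the agent and supply a thread ID at invocation time. This is standard LangGraph usage; the thread ID value is illustrative:

python
# Reuse the checkpointer from Basic Configuration
agent = create_react_agent(llm, [multiply], checkpointer=checkpointer)

# Each thread_id keeps its own independent conversation history
result = agent.invoke(
    {"messages": [("user", "What is 25 times 17?")]},
    config={"configurable": {"thread_id": "session-1"}},
)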

Core Concepts

Configuration

The LocalAgentConfig class provides a unified interface for configuring:

  • Ollama connection: Model selection, server URL, temperature
  • LangGraph settings: Recursion limits, checkpointing, streaming
  • Environment variables: Load from .env files
python
from langgraph_ollama_local import LocalAgentConfig

config = LocalAgentConfig()
llm = config.create_chat_client(model="llama3.1:8b")



RAG (Retrieval-Augmented Generation)

Build knowledge-grounded agents with document retrieval:

python
from langgraph_ollama_local.rag import DocumentIndexer, DocumentGrader

# Index documents
indexer = DocumentIndexer()
indexer.index_directory("sources/")

# Grade relevance
grader = DocumentGrader(llm)
is_relevant = grader.grade(document, question)
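
The same call filters a whole batch of retrieved documents before generation. A minimal sketch; `docs` stands in for documents returned by your retriever:

python
# Keep only the documents the grader judges relevant to the question
relevant_docs = [doc for doc in docs if grader.grade(doc, question)]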



Multi-Agent Collaboration

Coordinate multiple specialized agents with supervisors:

python
from langgraph_ollama_local.agents import (
    create_multi_agent_graph,
    run_multi_agent_task
)

graph = create_multi_agent_graph(llm)

result = run_multi_agent_task(
    graph,
    task="Create a Python function to validate email addresses"
)



Advanced Patterns

Implement sophisticated multi-agent patterns:

Swarm Pattern

python
from langgraph_ollama_local.patterns.swarm import SwarmAgent, create_swarm_graph

agents = [
    SwarmAgent(name="researcher", system_prompt="...", connections=["analyst"]),
    SwarmAgent(name="analyst", system_prompt="...", connections=["writer"]),
    SwarmAgent(name="writer", system_prompt="...", connections=[]),
]

graph = create_swarm_graph(llm, agents)

Handoff Pattern

python
from langgraph_ollama_local.patterns.handoffs import create_handoff_tool

handoff_to_support = create_handoff_tool("support", "Transfer for technical issues")
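
The returned tool is given to an agent like any other tool, so the agent can decide when to transfer the conversation to the named peer. A minimal sketch using the prebuilt ReAct agent; the agent name is illustrative:

python
from langgraph.prebuilt import create_react_agent

# The triage agent can now transfer technical issues to "support"
triage_agent = create_react_agent(llm, [handoff_to_support])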

Map-Reduce Pattern

python
from langgraph_ollama_local.patterns.map_reduce import create_map_reduce_graph

graph = create_map_reduce_graph(llm, num_workers=5)



Common Use Cases

RAG Applications

Build question-answering systems with document grounding:

python
from langgraph_ollama_local import LocalAgentConfig
from langgraph_ollama_local.rag import DocumentIndexer, DocumentGrader

config = LocalAgentConfig()
llm = config.create_chat_client()

# Index documents
indexer = DocumentIndexer()
indexer.index_directory("docs/")

# Create grader for quality control
grader = DocumentGrader(llm)



Multi-Agent Systems

Build teams of specialized agents:

python
from langgraph_ollama_local.agents import create_multi_agent_graph

# Supervisor coordinates researcher, coder, and reviewer
graph = create_multi_agent_graph(
    llm,
    researcher_tools=[search_tool],
    coder_tools=[python_executor],
    reviewer_tools=[linter]
)



Evaluation & Testing

Automate agent testing with simulated users:

python
from langgraph_ollama_local.patterns.evaluation import (
    SimulatedUser,
    create_evaluation_graph,
    run_multiple_evaluations
)

user_config = SimulatedUser(
    persona="Frustrated customer with billing issue",
    goals=["Get refund", "Express dissatisfaction"],
    behavior="impatient"
)

graph = create_evaluation_graph(llm, my_agent, user_config)
results = run_multiple_evaluations(graph, num_sessions=10)



API Structure

By Category

  • Configuration & Setup
  • Document Processing
  • Quality Grading
  • Multi-Agent Graphs
  • Pattern Graphs
  • State Types

Utility Functions

Model Management

python
from langgraph_ollama_local import (
    pull_model,
    list_models,
    ensure_model,
    create_quick_client
)

# Pull a model
pull_model("llama3.2:3b")

# List available models
models = list_models()

# Ensure model is available
ensure_model("llama3.2:3b")

# Quick client for prototyping
llm = create_quick_client(model="llama3.2:1b")
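
These utilities compose naturally at application startup. A small illustrative helper (the function name is ours, not part of the package) that guarantees a model is present before creating a client:

python
def get_llm(model: str = "llama3.2:3b"):
    """Pull `model` if it is missing, then return a chat client."""
    ensure_model(model)
    return create_quick_client(model=model)

llm = get_llm()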



Environment Configuration

All settings can be configured via environment variables:

bash
# Ollama settings
OLLAMA_HOST=192.168.1.100
OLLAMA_PORT=11434
OLLAMA_MODEL=llama3.1:8b
OLLAMA_TEMPERATURE=0.7

# LangGraph settings
LANGGRAPH_RECURSION_LIMIT=50
LANGGRAPH_CHECKPOINT_DIR=.checkpoints

# RAG settings
RAG_CHUNK_SIZE=1500
RAG_CHUNK_OVERLAP=300
RAG_COLLECTION_NAME=my_docs
RAG_EMBEDDING_MODEL=all-mpnet-base-v2

Or use a .env file:

python
from langgraph_ollama_local import LocalAgentConfig

# Automatically loads .env file
config = LocalAgentConfig()
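
Constructor arguments can override individual settings in code while the rest still come from the environment. A hedged sketch; that explicit values take precedence, and the `model` field name on OllamaConfig, are assumptions based on typical settings behavior:

python
from langgraph_ollama_local import LocalAgentConfig, OllamaConfig

# Assumed: explicit constructor values win over .env / environment settings
config = LocalAgentConfig(ollama=OllamaConfig(model="llama3.2:3b"))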



Examples by Complexity

Beginner: Simple Agent

python
from langgraph_ollama_local import create_quick_client
from langgraph.prebuilt import create_react_agent

llm = create_quick_client()
agent = create_react_agent(llm, [])

result = agent.invoke({"messages": [("user", "Hello!")]})

Intermediate: RAG System

python
from langgraph_ollama_local import LocalAgentConfig
from langgraph_ollama_local.rag import DocumentIndexer, DocumentGrader

config = LocalAgentConfig()
llm = config.create_chat_client()

indexer = DocumentIndexer()
indexer.index_directory("sources/")

grader = DocumentGrader(llm)
# docs: previously retrieved documents; question: the user's query
relevant, irrelevant = grader.grade_documents(docs, question)

Advanced: Multi-Agent System

python
from langgraph_ollama_local.agents import create_hierarchical_graph, create_team_graph

research_team = create_team_graph(llm, "research", [
    ("searcher", "Search information", [search_tool]),
    ("analyst", "Analyze findings", None)
])

dev_team = create_team_graph(llm, "development", [
    ("frontend", "Build UI", [ui_tool]),
    ("backend", "Build API", [db_tool])
])

graph = create_hierarchical_graph(llm, {
    "research": research_team,
    "development": dev_team
})

Best Practices

1. Configuration Management

python
# Good: Use unified configuration
from langgraph_ollama_local import LocalAgentConfig

config = LocalAgentConfig()
llm = config.create_chat_client()

# Good: Use environment variables for deployment
# Set OLLAMA_MODEL=llama3.1:70b in production

2. State Persistence

python
# Good: Use appropriate backend for your use case
checkpointer = config.create_checkpointer(backend="sqlite")  # Development
checkpointer = config.create_checkpointer(backend="postgres")  # Production

3. Error Handling

python
import logging

logger = logging.getLogger(__name__)

# Good: Handle failures gracefully
try:
    result = agent.invoke(agent_input)
except Exception as e:
    logger.error(f"Agent failed: {e}")
    # Fallback logic goes here

4. Resource Management

python
# Good: Set appropriate limits
graph = create_multi_agent_graph(llm)
result = run_multi_agent_task(graph, task, max_iterations=10)
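
For graphs you invoke directly, the recursion limit can also be set per call through LangGraph's standard runnable config, complementing the LANGGRAPH_RECURSION_LIMIT environment variable (the input shape below is illustrative):

python
# Standard LangGraph per-invocation recursion limit
result = graph.invoke(
    {"messages": [("user", task)]},
    config={"recursion_limit": 50},
)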

Migration Guide

From LangChain

If you're familiar with LangChain, this package provides opinionated wrappers:

python
# LangChain way
from langchain_ollama import ChatOllama
llm = ChatOllama(model="llama3.2:3b", base_url="http://localhost:11434")

# This package's way (with configuration management)
from langgraph_ollama_local import LocalAgentConfig
config = LocalAgentConfig()
llm = config.create_chat_client()  # Loads from .env automatically

Adding Multi-Agent Support

python
# Start with simple agent
from langgraph.prebuilt import create_react_agent
agent = create_react_agent(llm, tools)

# Upgrade to multi-agent
from langgraph_ollama_local.agents import create_multi_agent_graph
graph = create_multi_agent_graph(llm, researcher_tools=tools)

Troubleshooting

Common Issues

Model not found:

python
from langgraph_ollama_local import ensure_model
ensure_model("llama3.2:3b")  # Pulls if needed

ChromaDB issues:

bash
pip install "langgraph-ollama-local[rag]"

Connection errors:

python
config = LocalAgentConfig(
    ollama=OllamaConfig(host="192.168.1.100", timeout=180)
)
