# Provider Chains
Provider chains allow automatic fallback between multiple providers for reliability and cost optimization.
## Quick Start

```python
from strutex import DocumentProcessor, ProviderChain, local_first_chain

# Use a pre-built chain
processor = DocumentProcessor(provider=local_first_chain())

# Or create a custom chain
from strutex import GeminiProvider, OllamaProvider, OpenAIProvider

chain = ProviderChain([
    OllamaProvider(model="llama3.2-vision"),  # Try local first
    GeminiProvider(),                         # Then Gemini
    OpenAIProvider(),                         # Finally OpenAI
])
processor = DocumentProcessor(provider=chain)
```
## How It Works

When you call `process()` on a `ProviderChain`:

- The first provider attempts the extraction
- If it fails (with any exception), the chain moves to the next provider
- The process continues until one provider succeeds or all fail
- If all fail, the chain raises `ProviderChainError` with details of each failure
```mermaid
graph LR
    A[Request] --> B[Provider 1]
    B -->|Success| C[Return Result]
    B -->|Fail| D[Provider 2]
    D -->|Success| C
    D -->|Fail| E[Provider 3]
    E -->|Success| C
    E -->|Fail| F[ProviderChainError]
```
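The loop behind this diagram is straightforward. The following is an illustrative sketch of the fallback logic, not strutex's actual implementation; `ChainExhaustedError` here is a stand-in for the real `ProviderChainError`:

```python
class ChainExhaustedError(Exception):
    """Stand-in for strutex's ProviderChainError in this sketch."""
    def __init__(self, errors):
        super().__init__(f"All {len(errors)} providers failed")
        self.errors = errors  # list of (provider, exception) pairs


class FallbackLoopSketch:
    """Illustrative fallback loop -- not strutex's actual code."""

    def __init__(self, providers, on_fallback=None):
        self.providers = providers
        self.on_fallback = on_fallback
        self.last_provider = None

    def process(self, *args, **kwargs):
        errors = []
        for provider in self.providers:
            try:
                result = provider.process(*args, **kwargs)
                self.last_provider = provider  # remember who succeeded
                return result
            except Exception as error:
                errors.append((provider, error))
                if self.on_fallback:
                    self.on_fallback(provider, error)  # notify, then try the next one
        raise ChainExhaustedError(errors)  # every provider failed
```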
## Pre-built Chains

### local_first_chain()

Prefers local/free providers:
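A minimal usage sketch, mirroring the Quick Start; the exact fallback order is an assumption based on the chain's name:

```python
from strutex import DocumentProcessor, local_first_chain

# Local/free providers (e.g. Ollama) are tried before hosted ones
chain = local_first_chain()
processor = DocumentProcessor(provider=chain)
```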
### cost_optimized_chain()

Ordered by cost (cheapest first):

```python
from strutex import cost_optimized_chain

chain = cost_optimized_chain()
# Order: Ollama (free) → Gemini → Anthropic → OpenAI
```
## Custom Chains

### Using Provider Instances

```python
from strutex import ProviderChain, OllamaProvider, GeminiProvider

chain = ProviderChain([
    OllamaProvider(model="llama3.2-vision", timeout=30),
    GeminiProvider(model="gemini-3-flash-preview"),
])
```
### Using Provider Names

```python
from strutex.providers import create_fallback_chain

# Resolves provider names automatically
chain = create_fallback_chain("ollama", "gemini", "openai")
```
## Fallback Callbacks

Get notified when fallback occurs:

```python
def on_fallback(provider, error):
    print(f"Provider {provider.__class__.__name__} failed: {error}")
    # Log, send an alert, etc.

chain = ProviderChain(
    providers=["ollama", "gemini"],
    on_fallback=on_fallback,
)
```
## Tracking Which Provider Succeeded

```python
chain = ProviderChain(["ollama", "gemini", "openai"])
result = chain.process(file_path, prompt, schema, mime_type)

# Check which provider was used
print(f"Used: {chain.last_provider.__class__.__name__}")
```
## Async Support

```python
import asyncio

from strutex import ProviderChain

async def extract():
    chain = ProviderChain(["ollama", "gemini"])
    result = await chain.aprocess(file_path, prompt, schema, mime_type)
    return result

result = asyncio.run(extract())
```
## Error Handling

```python
from strutex.providers import ProviderChain, ProviderChainError

chain = ProviderChain(["ollama", "gemini"])

try:
    result = chain.process(...)
except ProviderChainError as e:
    print(f"All providers failed: {e}")
    # Access the individual errors
    for provider, error in e.errors:
        print(f"  - {provider.__class__.__name__}: {error}")
```
## Available Providers

| Provider | Cost | Priority | Capabilities |
|---|---|---|---|
| `OllamaProvider` | 0.0 | 40 | vision, local |
| `GroqProvider` | 0.3 | 45 | fast, vision |
| `GeminiProvider` | 1.0 | 50 | vision |
| `AnthropicProvider` | 1.5 | 55 | vision, large_context |
| `OpenAIProvider` | 2.0 | 60 | vision, function_calling |
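If neither pre-built chain fits, you can hand-order providers from the table yourself. A sketch, assuming `GroqProvider` and `AnthropicProvider` are exported from `strutex` like the providers shown in the Quick Start:

```python
from strutex import ProviderChain, OllamaProvider, GroqProvider, AnthropicProvider

# Cheapest-first per the table above, skipping providers you don't use
chain = ProviderChain([
    OllamaProvider(),     # cost 0.0 -- free, local
    GroqProvider(),       # cost 0.3 -- fast and cheap
    AnthropicProvider(),  # cost 1.5 -- final fallback
])
```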
## Best Practices

- Put the cheapest/fastest providers first for cost optimization
- Put the most reliable providers last as the final fallback
- Set appropriate timeouts on each provider
- Use callbacks to log or alert on failures
- Check `last_provider` to understand usage patterns
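Putting several of these recommendations together, here is a sketch built from the examples above; the timeout value and model name are illustrative assumptions:

```python
import logging

from strutex import DocumentProcessor, ProviderChain, OllamaProvider, GeminiProvider

logger = logging.getLogger("extraction")

def log_fallback(provider, error):
    # Surface every failure so silent fallbacks don't hide cost or outage issues
    logger.warning("%s failed: %s", provider.__class__.__name__, error)

chain = ProviderChain(
    providers=[
        OllamaProvider(model="llama3.2-vision", timeout=30),  # cheap, local first
        GeminiProvider(),                                     # reliable hosted fallback
    ],
    on_fallback=log_fallback,
)
processor = DocumentProcessor(provider=chain)

# After processing, inspect chain.last_provider to see which provider
# actually handled the request and tune the ordering accordingly.
```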