observe, then move to enforcement once you understand the live runtime behavior.
Integration Matrix
| Framework | Language | Package | Integration Type | Budget Gating | Tool Gating | Traces |
|---|---|---|---|---|---|---|
| LangChain | Python, TS | cascadeflow[langchain], @cascadeflow/langchain | Callback handler | Yes | No | Yes |
| OpenAI Agents SDK | Python | cascadeflow[openai-agents] | ModelProvider | Yes | Yes | Yes |
| CrewAI | Python | cascadeflow[crewai] | llm_hooks | Yes | No | Yes |
| Google ADK | Python | cascadeflow[google-adk] | BasePlugin | Yes | No | Yes |
| n8n | TypeScript | @cascadeflow/n8n-nodes-cascadeflow | Community node | Yes | Yes | Yes |
| Vercel AI SDK | TypeScript | @cascadeflow/vercel-ai | Middleware | Yes | No | Yes |
Integration Patterns
Each integration follows the same principle: wrap the framework’s extension point with cascadeflow’s harness, without modifying agent code.
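As a rough sketch of that principle, here is what a budget-aware, callback-style wrapper could look like in plain Python. All names below (`CascadeHarness`, `on_llm_start`, `on_llm_end`) are illustrative stand-ins, not cascadeflow's actual API — the point is only that gating and tracing live in a hook the framework already exposes, so the agent code itself never changes:

```python
from dataclasses import dataclass, field

# Illustrative stand-in only -- not cascadeflow's real API.
@dataclass
class CascadeHarness:
    """Tracks spend across calls and enforces a per-run budget."""
    budget_usd: float
    spent_usd: float = 0.0
    trace: list = field(default_factory=list)

    def on_llm_start(self, model: str, prompt: str) -> None:
        # Gate the call *before* it happens, inside the workflow,
        # rather than at the request boundary.
        if self.spent_usd >= self.budget_usd:
            raise RuntimeError(f"budget exhausted: ${self.spent_usd:.4f} spent")
        self.trace.append({"event": "llm_start", "model": model})

    def on_llm_end(self, model: str, cost_usd: float) -> None:
        # Record spend and emit a trace event for every completed call.
        self.spent_usd += cost_usd
        self.trace.append({"event": "llm_end", "model": model, "cost": cost_usd})


# A framework would invoke these hooks around each model call;
# the agent's own code is untouched.
harness = CascadeHarness(budget_usd=0.01)
harness.on_llm_start("small-model", "summarize this")
harness.on_llm_end("small-model", cost_usd=0.004)
print(f"spent: {harness.spent_usd}")
```

The same shape maps onto each framework's hook surface: a LangChain callback handler, an Agents SDK `ModelProvider`, CrewAI `llm_hooks`, and so on.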
Choosing an Integration
- LangChain/LangGraph: Use if you have existing LangChain chains or agents. The callback handler wraps any BaseChatModel.
- OpenAI Agents SDK: Use if you’re building with OpenAI’s Agents SDK. The ModelProvider supports model candidates and tool gating.
- CrewAI: Use if you’re building multi-agent crews. The llm_hooks integration tracks all crew steps.
- Google ADK: Use if you’re building with Google’s Agent Development Kit. The plugin integrates with Runner.
- n8n: Use if you’re building no-code workflows. The community node adds cascade routing to any n8n flow.
- Vercel AI SDK: Use if you’re building TypeScript server-side agents. The middleware wraps AI SDK streams.
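Whichever integration you pick, the routing idea underneath is the same cascade: try cheaper model candidates first and escalate only when a check rejects the draft. A minimal sketch, with hypothetical `generate` and `accept` stand-ins (not cascadeflow functions):

```python
# Cascade routing sketch: candidates are ordered cheapest-first.
def cascade_route(prompt, candidates, generate, accept):
    """generate(model, prompt) -> draft; accept(draft) -> bool.
    Returns the first (model, draft) pair that passes the check."""
    for model in candidates:
        draft = generate(model, prompt)
        if accept(draft):
            return model, draft
    # All checks failed: keep the most capable model's answer.
    return model, draft

# Toy stand-ins: only the large model's drafts pass the check here.
def generate(model, prompt):
    return f"{model}:{prompt}"

def accept(draft):
    return draft.startswith("large")

model, draft = cascade_route("hi", ["small", "large"], generate, accept)
print(model)  # escalated to "large" because the small draft was rejected
```

In practice the accept step would be a quality or policy check supplied by the harness, and the candidate list comes from your configuration.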
What Stays Consistent Across Frameworks
- The harness sees runtime state inside the workflow, not only the request boundary
- Budgets, traces, and policy logic remain first-class across integrations
- The goal is governable agent behavior, not isolated cost routing
- GitHub examples remain the secondary deep-dive layer when implementation detail is needed
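One way to picture "traces remain first-class across integrations" is a single event schema that every framework adapter emits into, so downstream budget and policy logic never needs to know which integration produced an event. The schema below is a hypothetical illustration, not cascadeflow's actual trace format:

```python
from dataclasses import dataclass, asdict

# Hypothetical shared trace schema: every adapter (LangChain callback,
# CrewAI hook, ADK plugin, ...) would normalize its events into this shape.
@dataclass(frozen=True)
class TraceEvent:
    framework: str   # e.g. "langchain", "crewai", "google-adk"
    step: str        # e.g. "llm_call", "tool_call"
    model: str
    cost_usd: float

event = TraceEvent("langchain", "llm_call", "small-model", 0.0003)
print(asdict(event))
```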