`CascadeFlowModelProvider` integrates with the OpenAI Agents SDK as an explicit `ModelProvider`. This is a strong fit for the runtime-intelligence direction because model selection, tool gating, and budget control stay inside the agent loop, where the SDK is already making decisions.
Install
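No install command is given in this section. Assuming the package is published under the name `cascadeflow` (an unverified assumption), installation would typically be:

```shell
# Package name is an assumption; check the project's registry listing.
pip install cascadeflow
```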
Quick Start
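The actual API surface is not shown here, so the following is a minimal, self-contained sketch of the budget-aware selection pattern the provider implements. Every name in it (`SketchProvider`, the candidate tuples, `record_call`) is an illustrative assumption, not the library's real interface:

```python
# Minimal sketch of budget-aware model selection (illustrative only;
# the real CascadeFlowModelProvider API may differ).

class SketchProvider:
    def __init__(self, candidates, budget_cents, default_model):
        # candidates: (model_name, estimated_cost_cents) in preference order.
        # Integer cents avoid floating-point drift in budget comparisons.
        self.candidates = candidates
        self.budget_cents = budget_cents
        self.default_model = default_model
        self.spent_cents = 0

    def select_model(self):
        # Pick the first (most preferred) candidate that still fits the
        # remaining budget; otherwise fall back to the default model.
        for name, est_cost in self.candidates:
            if self.spent_cents + est_cost <= self.budget_cents:
                return name
        return self.default_model

    def record_call(self, cost_cents):
        self.spent_cents += cost_cents


provider = SketchProvider(
    candidates=[("gpt-large", 5), ("gpt-small", 1)],
    budget_cents=6,
    default_model="gpt-small",
)
print(provider.select_model())  # → gpt-large (fits the fresh budget)
provider.record_call(5)
print(provider.select_model())  # → gpt-small (only the cheap model fits)
```

The design point this illustrates: because the provider sits on the SDK's model-selection boundary, the budget check runs before every call rather than after the fact.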
Features
- Model candidates: List of models the provider can select from based on harness scoring
- Tool gating: Block tool calls when `max_tool_calls` is reached
- Scoped runs: Use `cascadeflow.run()` for per-task budget tracking
- Decision traces: Full audit trail of model selection and tool-gating decisions
- Fail-open: If the harness encounters an error, execution continues with the default model
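The gating and fail-open behaviors above can be sketched in one decision function. This is illustrative logic under assumed semantics (the state-dict fields and trace shape are inventions for the example, not the library's implementation):

```python
# Illustrative per-step decision combining tool gating, harness
# selection, and fail-open fallback (assumed semantics).

def decide_step(step_index, tool_requested, state):
    """Return one decision-trace entry for an agent step.

    state: dict with 'tool_calls', 'max_tool_calls',
    'select_model' (callable), and 'default_model'.
    """
    # Tool gating: block once the per-run tool budget is exhausted.
    if tool_requested and state["tool_calls"] >= state["max_tool_calls"]:
        return {"step": step_index, "action": "block_tool",
                "reason": "max_tool_calls reached"}
    try:
        model = state["select_model"]()
    except Exception as exc:
        # Fail-open: a harness error never halts the run; continue
        # with the default model and record why.
        return {"step": step_index, "action": "run",
                "model": state["default_model"],
                "reason": f"fail-open: {exc}"}
    return {"step": step_index, "action": "run",
            "model": model, "reason": "harness selection"}


state = {"tool_calls": 2, "max_tool_calls": 2,
         "select_model": lambda: "gpt-small",
         "default_model": "gpt-small"}
print(decide_step(3, True, state)["action"])  # → block_tool
```

Because every path returns a structured entry, the decision trace explains blocked and fail-open steps the same way it explains normal ones.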
Why This Integration Matters
- The model provider sits directly on a core agent decision boundary
- Budget and tool controls become actionable, not only observable
- Traces explain why the runtime allowed, switched, or blocked a step
Configuration
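The configuration schema is not documented in this section. A plausible shape, with every field name an assumption rather than the real schema, might look like:

```python
# Hypothetical provider configuration (all keys are assumptions,
# not the documented schema).
config = {
    "model_candidates": ["gpt-large", "gpt-small"],  # harness selects among these
    "default_model": "gpt-small",   # used when failing open
    "budget_usd": 1.00,             # spend cap for a scoped run
    "max_tool_calls": 10,           # tool-gating threshold
    "trace": True,                  # emit decision traces
}
```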
Session Metrics
After a run, `session.summary()` includes:

- `cost_total`: cumulative USD spent
- `budget_remaining`: USD left in the budget
- `step_count`: number of LLM calls
- `tool_calls`: number of tool executions
- `latency_used_ms`: total latency in milliseconds
- `energy_used`: total energy units
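A sketch of how such a summary could be derived from per-step records. The input field names (`cost`, `is_tool`, `latency_ms`, `energy`) are assumptions for illustration; only the output keys match the list above:

```python
# Illustrative aggregation of per-step records into a run summary
# (the real session.summary() implementation may differ).

def summarize(steps, budget_usd):
    """steps: list of dicts with 'cost' (USD), 'is_tool' (bool),
    'latency_ms', and 'energy' per step."""
    cost_total = sum(s["cost"] for s in steps)
    return {
        "cost_total": cost_total,
        "budget_remaining": budget_usd - cost_total,
        "step_count": sum(1 for s in steps if not s["is_tool"]),
        "tool_calls": sum(1 for s in steps if s["is_tool"]),
        "latency_used_ms": sum(s["latency_ms"] for s in steps),
        "energy_used": sum(s["energy"] for s in steps),
    }


steps = [
    {"cost": 0.25, "is_tool": False, "latency_ms": 120, "energy": 2},
    {"cost": 0.25, "is_tool": True, "latency_ms": 30, "energy": 1},
]
print(summarize(steps, budget_usd=1.0)["budget_remaining"])  # → 0.5
```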