
CallbackManager

from ai_infra.callbacks import CallbackManager

Manages multiple callback handlers and dispatches each event to every registered handler. Errors raised inside callbacks are caught and logged rather than propagated; handlers registered via critical_callbacks are the exception (see the constructor parameters below).

Example

manager = CallbackManager([
    LoggingCallbacks(),
    MetricsCallbacks(),
])

# Fire an event
manager.on_llm_start(LLMStartEvent(...))

# Or use the context manager for timing
with manager.llm_call("openai", "gpt-4o", messages) as ctx:
    response = await do_llm_call()
    ctx.set_response(response, tokens=150)
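
The handlers passed in are objects implementing the event hooks listed under Methods below. A minimal sketch of a custom handler, assuming the Callbacks base class can be imported from ai_infra.callbacks and that a handler simply overrides the hooks it cares about (the single event argument per hook is an assumption based on the example above):

from ai_infra.callbacks import Callbacks  # assumed location of the base class

class CountingCallbacks(Callbacks):
    """Counts LLM calls and errors on the instance."""

    def __init__(self):
        self.calls = 0
        self.errors = 0

    def on_llm_start(self, event):
        # Invoked by CallbackManager.on_llm_start for each registered handler.
        self.calls += 1

    def on_llm_error(self, event):
        # If this raised, the manager would catch and log the error,
        # because the handler is registered as non-critical.
        self.errors += 1

manager = CallbackManager([CountingCallbacks()])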

Constructor
CallbackManager(callbacks: Sequence[Callbacks] | None = None, critical_callbacks: Sequence[Callbacks] | None = None)
callbacks (Sequence[Callbacks] | None, default None)
    List of callback handlers. Errors are logged but not propagated.

critical_callbacks (Sequence[Callbacks] | None, default None)
    List of critical callback handlers whose errors propagate. Use for security audit callbacks that MUST succeed.
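
For example, a security audit handler can be registered as critical so that its failures abort the call, while best-effort handlers stay isolated. AuditCallbacks below is a hypothetical handler, not part of the library:

manager = CallbackManager(
    callbacks=[LoggingCallbacks(), MetricsCallbacks()],  # best-effort: errors are logged
    critical_callbacks=[AuditCallbacks()],  # hypothetical audit handler: errors propagate
)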

Methods

  • add
  • llm_call
  • on_graph_node_end
  • on_graph_node_error
  • on_graph_node_start
  • on_llm_end / on_llm_end_async (async)
  • on_llm_error / on_llm_error_async (async)
  • on_llm_start / on_llm_start_async (async)
  • on_llm_token / on_llm_token_async (async)
  • on_mcp_connect
  • on_mcp_disconnect
  • on_mcp_logging / on_mcp_logging_async (async)
  • on_mcp_progress / on_mcp_progress_async (async)
  • on_tool_end / on_tool_end_async (async)
  • on_tool_error / on_tool_error_async (async)
  • on_tool_start / on_tool_start_async (async)
  • remove
  • tool_call
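
Handlers can also be registered and deregistered after construction. A sketch assuming add and remove take a handler instance, and that tool_call mirrors the llm_call context manager shown above (the tool name, arguments, and ctx.set_response call are illustrative, not confirmed by the API):

metrics = MetricsCallbacks()
manager.add(metrics)  # register an additional handler at runtime

# Assumed to time the tool call and dispatch on_tool_start/on_tool_end,
# symmetric with manager.llm_call for LLM calls.
with manager.tool_call("web_search", {"query": "nfrax"}) as ctx:
    result = run_tool()       # your tool invocation
    ctx.set_response(result)  # illustrative, mirroring llm_call's ctx

manager.remove(metrics)  # deregister when no longer needed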