feat(agents): Add on_llm_start and on_llm_end Lifecycle Hooks #987

Open · wants to merge 5 commits into main
Conversation

@uzair330 commented on Jul 1, 2025

Motivation

Currently, AgentHooks provides valuable lifecycle events for the start and end of an agent run and for tool execution (on_tool_start/on_tool_end). However, developers have no way to observe the agent's execution at the language-model level.

This PR introduces two new hooks, on_llm_start and on_llm_end, to provide this deeper level of observability. This change enables several key use cases:

  • Performance Monitoring: Precisely measure the latency of LLM calls.
  • Debugging & Logging: Log the exact prompts sent to and raw responses received from the model.
  • Implementing Custom Logic: Trigger actions (e.g., updating a UI, saving state) immediately before or after the agent "thinks."

Summary of Changes

This is a focused contribution with the following changes:

  • src/agents/lifecycle.py: Added two new async method definitions, on_llm_start and on_llm_end, to the AgentHooks base class. The naming convention aligns with the existing on_..._start/on_..._end pattern.
  • src/agents/run.py: Inserted calls to these new hooks within AgentRunner._get_new_response, directly wrapping the model.get_response() call so that timing and data capture are accurate (a sketch of both changes follows this list).
  • tests/test_agent_llm_hooks.py: Added a new unit-test file that validates the new functionality. It uses a mock model and a spy hook class (see the test sketch after the usage example) to verify:
    • The correct hook sequence for a conversational turn.
    • The correct sequence for a tool-using turn, ensuring the new LLM hooks fire correctly around the existing tool hooks.
    • That the system runs without error when agent.hooks is None.
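
For orientation, here is a minimal sketch of what these additions could look like. The signatures and the placement comment are illustrative assumptions based on the description above, not the exact code in this PR.

# Hypothetical sketch of the new hooks on the AgentHooks base class.
from typing import Any, Optional

class AgentHooks:
    async def on_llm_start(
        self,
        context: Any,                  # run context wrapper
        agent: Any,                    # the agent about to call the model
        system_prompt: Optional[str],  # system prompt sent to the model
        input_items: list,             # input items sent to the model
    ) -> None:
        """Called immediately before the model is invoked."""

    async def on_llm_end(
        self,
        context: Any,
        agent: Any,
        response: Any,                 # the ModelResponse returned by the model
    ) -> None:
        """Called immediately after the model returns."""

# Hypothetical placement inside AgentRunner._get_new_response, wrapping the
# model call so the hooks bracket the actual LLM latency:
#
#     if agent.hooks:
#         await agent.hooks.on_llm_start(context, agent, system_prompt, input_items)
#     new_response = await model.get_response(...)
#     if agent.hooks:
#         await agent.hooks.on_llm_end(context, agent, new_response)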

Usage Example

Here is how a developer could use the new hooks:

from agents import Agent, AgentHooks
from agents.items import ModelResponse

class LLMTrackerHooks(AgentHooks):
    async def on_llm_start(self, context, agent, system_prompt, input_items):
        print(f"Agent '{agent.name}' is calling the LLM...")

    async def on_llm_end(self, context, agent, response: ModelResponse):
        # Log token usage from the model's response
        if response.usage:
            print(f"LLM call finished. Tokens used: {response.usage.total_tokens}")

# Assign hooks to an agent instance (my_model here is any configured model instance)
my_agent = Agent(
    name="MyMonitoredAgent",
    model=my_model,
    hooks=LLMTrackerHooks()
)
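
To make the test description above concrete, here is a minimal sketch of a spy-hook test. The make_mock_model() helper is a stand-in for whatever fake model the test suite provides, and pytest-asyncio is assumed for the async test; neither is necessarily what tests/test_agent_llm_hooks.py actually contains.

import pytest
from agents import Agent, AgentHooks, Runner

class SpyHooks(AgentHooks):
    """Records the order in which the new LLM hooks fire."""
    def __init__(self):
        self.events: list[str] = []

    async def on_llm_start(self, context, agent, system_prompt, input_items):
        self.events.append("on_llm_start")

    async def on_llm_end(self, context, agent, response):
        self.events.append("on_llm_end")

@pytest.mark.asyncio
async def test_llm_hooks_fire_in_order():
    hooks = SpyHooks()
    # make_mock_model() is a hypothetical helper returning a model that
    # produces a canned response without any network calls.
    agent = Agent(name="test_agent", model=make_mock_model(), hooks=hooks)
    await Runner.run(agent, "hello")
    assert hooks.events == ["on_llm_start", "on_llm_end"]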

Checklist

  • My code follows the style guidelines of this project (checked with ruff).
  • I have added tests that prove my feature works.
  • All new and existing tests passed locally with my changes.
