🚀 Unlocking OpenAI API Usage: A Deep Dive into examples-basic-usage-tracking.py

Welcome to an exciting exploration of OpenAI's API usage tracking! This document breaks down the examples-basic-usage-tracking.py script, revealing how you can monitor your token consumption and gain deeper insights into your API interactions.

📜 Introduction

The examples-basic-usage-tracking.py script is a powerful demonstration of how to track token usage when interacting with OpenAI models. By the end of this guide, you'll understand how to programmatically access valuable usage data for every API call you make. This is crucial for managing costs, optimizing performance, and understanding your application's interaction patterns with the OpenAI API.

🔑 Core Concepts

This script revolves around a few key concepts from the agents library:

Agent: the LLM-driven entity that follows instructions and can call tools.
Runner: the execution engine that drives an agent through a run via Runner.run.
Usage: the object that records token and request counts for a run.
function_tool: a decorator that exposes a plain Python function as a tool the agent can call.

💻 Code Walkthrough

Let's dissect the script, piece by piece, to understand its magic.

1. Setup and Imports

from dotenv import load_dotenv
load_dotenv()

import asyncio

from pydantic import BaseModel

from agents import Agent, Runner, Usage, function_tool

The script starts by loading environment variables (like your OpenAI API key) and importing the necessary components from the asyncio, pydantic, and agents libraries.

2. Defining a Tool

class Weather(BaseModel):
    city: str
    temperature_range: str
    conditions: str


@function_tool
def get_weather(city: str) -> Weather:
    """Get the current weather information for a specified city."""
    return Weather(city=city, temperature_range="14-20C", conditions="Sunny with wind.")

A simple get_weather function is defined and decorated with @function_tool. This makes it available for the agent to call. The Weather Pydantic model ensures the data returned by the tool is structured.
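To see what the tool produces on its own, you can exercise the underlying logic directly: a Weather instance serializes to a plain dict via Pydantic's model_dump. This is a quick sanity check independent of the agent (it only needs pydantic installed):

```python
from pydantic import BaseModel


class Weather(BaseModel):
    city: str
    temperature_range: str
    conditions: str


# Same values the tool returns in the script.
w = Weather(city="Tokyo", temperature_range="14-20C", conditions="Sunny with wind.")
print(w.model_dump())
# {'city': 'Tokyo', 'temperature_range': '14-20C', 'conditions': 'Sunny with wind.'}
```

Because the return type is a Pydantic model, the agents framework can derive a schema for the tool's output rather than passing around loose strings.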

3. The print_usage Helper

def print_usage(usage: Usage) -> None:
    print("\n=== Usage ===")
    print(f"Input tokens: {usage.input_tokens}")
    print(f"Output tokens: {usage.output_tokens}")
    print(f"Total tokens: {usage.total_tokens}")
    print(f"Requests: {usage.requests}")
    for i, request in enumerate(usage.request_usage_entries):
        print(f"  {i + 1}: {request.input_tokens} input, {request.output_tokens} output")

This is the heart of our usage tracking! The function takes a Usage object and prints a formatted summary. It displays the total token counts and also breaks down the usage for each individual request made during the agent's run.
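Once you have these totals, a natural next step is estimating cost. The sketch below is a minimal illustration using placeholder per-million-token rates; the prices here are assumptions for demonstration only, so check OpenAI's pricing page for the real numbers for your model:

```python
# Hypothetical prices in USD per million tokens -- placeholders, not real rates.
INPUT_PRICE_PER_M = 2.50
OUTPUT_PRICE_PER_M = 10.00


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a run from its token counts."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000


# Using the totals from the sample run in the output section (181 input, 38 output):
cost = estimate_cost(181, 38)
print(f"Estimated cost: ${cost:.6f}")
```

Dropping a function like this next to print_usage gives you a per-run cost readout with almost no extra code.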

4. The Main Execution Block

async def main() -> None:
    agent = Agent(
        name="Usage Demo",
        instructions="You are a concise assistant. Use tools if needed.",
        tools=[get_weather],
    )

    result = await Runner.run(agent, "What's the weather in Tokyo?")

    print("\nFinal output:")
    print(result.final_output)

    # Access usage from the run context
    print_usage(result.context_wrapper.usage)

In the main asynchronous function:

1. An Agent is instantiated with a name, instructions, and our get_weather tool.
2. The Runner.run method is called with the agent and a prompt, starting the interaction.
3. The final output from the agent is printed.
4. Crucially, result.context_wrapper.usage is accessed to get the Usage object, which is then passed to our print_usage function.
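The excerpt ends inside main, so the script presumably closes with the standard asyncio entry point (not shown above), which is what actually kicks off the run. A self-contained sketch of that pattern, with a stand-in body where the real script awaits Runner.run:

```python
import asyncio


async def main() -> None:
    # Stand-in body; the real script awaits Runner.run(...) here as shown above.
    print("run complete")


if __name__ == "__main__":
    # asyncio.run creates an event loop, runs main() to completion, and closes the loop.
    asyncio.run(main())
```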

📊 Output Analysis

Let's look at the recorded output (examples-basic-usage-tracking.py.txt) and see how it connects to the code.

Final output:
The weather in Tokyo is sunny with wind, and the temperature ranges from 14 to 20°C.

=== Usage ===
Input tokens: 181
Output tokens: 38
Total tokens: 219
Requests: 2
  1: 70 input, 15 output
  2: 111 input, 23 output
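Note that there are two requests: most likely the first sends the prompt and tool schemas and the model responds with a get_weather tool call, while the second sends back the tool's result and the model writes the final answer. The per-request numbers also sum exactly to the totals, which we can verify with a few lines of plain Python:

```python
# Per-request token counts from the sample run above.
requests = [
    {"input": 70, "output": 15},   # request 1: prompt + tool schemas; model emits the tool call
    {"input": 111, "output": 23},  # request 2: tool result appended; model writes the final answer
]

total_input = sum(r["input"] for r in requests)
total_output = sum(r["output"] for r in requests)

print(f"Input tokens: {total_input}")                  # 181, matching the summary
print(f"Output tokens: {total_output}")                # 38
print(f"Total tokens: {total_input + total_output}")   # 219
```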

✨ Conclusion

The examples-basic-usage-tracking.py script provides a clear and effective way to monitor your OpenAI API token usage. By accessing the Usage object from the result of a Runner.run call, you get detailed, request-by-request breakdowns of your token consumption. This is an indispensable tool for any developer building on the OpenAI API, helping you ship more efficient and cost-effective AI solutions.