Tokencost

Clientside token counting + price estimation for LLM apps and AI agents.

🐦 Twitter   •   📢 Discord   •   🖇️ AgentOps

Tokencost helps calculate the USD cost of using major Large Language Model (LLM) APIs by estimating the cost of prompts and completions.

Building AI agents? Check out AgentOps

Features

  • LLM Price Tracking: Major LLM providers frequently add new models and update pricing. This repo helps track the latest price changes.
  • Token Counting: Accurately count prompt tokens before sending OpenAI requests.
  • Easy Integration: Get the cost of a prompt or completion with a single function.

Example usage:

from tokencost import calculate_prompt_cost, calculate_completion_cost

model = "gpt-3.5-turbo"
prompt = [{ "role": "user", "content": "Hello world"}]
completion = "How may I assist you today?"

prompt_cost = calculate_prompt_cost(prompt, model)
completion_cost = calculate_completion_cost(completion, model)

print(f"{prompt_cost} + {completion_cost} = {prompt_cost + completion_cost}")
# 0.0000135 + 0.000014 = 0.0000275
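
These figures follow directly from the token counts and gpt-3.5-turbo's per-token rates at the time this example was captured: 9 prompt tokens at $0.0000015 each and 7 completion tokens at $0.000002 each. See the cost table below for current prices.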

Installation

Install from PyPI (recommended):

pip install tokencost

Usage

Cost estimates

Calculating the cost of prompts and completions from OpenAI requests:

from openai import OpenAI
from tokencost import calculate_prompt_cost, calculate_completion_cost

client = OpenAI()
model = "gpt-3.5-turbo"
prompt = [{ "role": "user", "content": "Say this is a test"}]

chat_completion = client.chat.completions.create(
    messages=prompt, model=model
)

completion = chat_completion.choices[0].message.content
# "This is a test."

prompt_cost = calculate_prompt_cost(prompt, model)
completion_cost = calculate_completion_cost(completion, model)
print(f"{prompt_cost} + {completion_cost} = {prompt_cost + completion_cost}")
# 0.0000180 + 0.000010 = 0.0000280

Calculating cost using string prompts instead of messages:

from tokencost import calculate_prompt_cost

prompt_string = "Hello world"
model = "gpt-3.5-turbo"

prompt_cost = calculate_prompt_cost(prompt_string, model)
print(f"Cost: ${prompt_cost}")
# Cost: $3e-06
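
That figure checks out: "Hello world" is 2 tokens (see the counting example below), and at the prompt rate implied above ($0.0000015 per token) the cost is $0.000003, displayed here in scientific notation.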

Counting tokens

from tokencost import count_message_tokens, count_string_tokens

message_prompt = [{ "role": "user", "content": "Hello world"}]
# Counting tokens in prompts formatted as message lists
print(count_message_tokens(message_prompt, model="gpt-3.5-turbo"))
# 9

# Alternatively, counting tokens in string prompts
print(count_string_tokens(prompt="Hello world", model="gpt-3.5-turbo"))
# 2

How tokens are counted

Under the hood, strings and ChatML messages are tokenized using Tiktoken, OpenAI's official tokenizer. Tiktoken splits text into tokens (which can be parts of words or individual characters) and handles both raw strings and message formats with additional tokens for message formatting and roles.
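
For illustration, here is roughly what that looks like using tiktoken directly. This is a minimal sketch of the underlying tokenization only; tokencost handles this for you, including the extra per-message formatting tokens for ChatML prompts.

import tiktoken

# gpt-3.5-turbo and gpt-4 models map to the cl100k_base encoding
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

tokens = encoding.encode("Hello world")
print(tokens)       # token IDs, e.g. [9906, 1917]
print(len(tokens))  # 2, matching count_string_tokens above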

For Anthropic models version 3 and above (e.g., Sonnet 3.5, Haiku 3.5, and Opus 3), we use the Anthropic beta token counting API to ensure accurate token counts. For older Claude models, we approximate using Tiktoken with the cl100k_base encoding.
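
For example, counting tokens for a newer Claude model routes through Anthropic's API rather than Tiktoken. A sketch, assuming a valid ANTHROPIC_API_KEY is set in your environment (the count comes from a live API call) and that the model identifier below is one your installed version recognizes:

from tokencost import count_message_tokens

message_prompt = [{"role": "user", "content": "Hello world"}]
# For newer Claude models this calls Anthropic's token counting
# endpoint, so it needs network access and an API key
print(count_message_tokens(message_prompt, model="claude-3-5-sonnet-20241022"))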

Cost table

Prices are denominated in USD. All prices can be found here.
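
Prices can also be inspected programmatically. A minimal sketch, assuming tokencost exposes its pricing data as a TOKEN_COSTS dictionary keyed by model name with per-token USD rates; the key names below follow the common pricing-data format and may differ in your installed version:

from tokencost import TOKEN_COSTS

# Key names shown are assumptions; inspect
# TOKEN_COSTS["gpt-3.5-turbo"].keys() to confirm the exact schema
model_prices = TOKEN_COSTS["gpt-3.5-turbo"]
print(model_prices["input_cost_per_token"])   # USD per prompt token
print(model_prices["output_cost_per_token"])  # USD per completion token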