
feat: OpenTelemetry docs #375


Open · wants to merge 2 commits into main
18 changes: 17 additions & 1 deletion docs.json
@@ -29,6 +29,13 @@
"group": "Observability",
"pages": [
"product/observability",
{
"group": "OpenTelemetry",
"pages": [
"product/observability/opentelemetry",
"product/observability/opentelemetry/list-of-supported-otel-instrumenters"
]
},
"product/observability/logs",
"product/observability/traces",
"product/observability/analytics",
@@ -415,7 +422,12 @@
"group": "Tracing Providers",
"pages": [
"integrations/tracing-providers/arize",
"integrations/tracing-providers/logfire"
"integrations/tracing-providers/phoenix",
"integrations/tracing-providers/logfire",
"integrations/tracing-providers/ml-flow",
"integrations/tracing-providers/openlit",
"integrations/tracing-providers/opentelemetry-python-sdk",
"integrations/tracing-providers/traceloop"
]
}
]
@@ -2319,6 +2331,10 @@
{
"source": "/product/guardrails/bring-your-own-guardrails",
"destination": "/integrations/guardrails/bring-your-own-guardrails"
},
{
"source": "/integrations/tracing-providers",
"destination": "/product/observability/opentelemetry/list-of-supported-otel-instrumenters"
}
],
"seo": {
2 changes: 1 addition & 1 deletion integrations/ecosystem.mdx
@@ -9,7 +9,7 @@ title: "Integrations"
<Card icon="brain-circuit" title="AI Apps" href="/integrations/ai-apps" />
<Card icon="book" title="Libraries" href="/integrations/libraries" />
<Card icon="cloud" title="Cloud Platforms" href="/integrations/cloud" />
<Card icon="list" title="Tracing Providers" href="/integrations/tracing-providers" />
<Card icon="list" title="Tracing Providers" href="/product/observability/opentelemetry/list-of-supported-otel-instrumenters" />

</CardGroup>

9 changes: 0 additions & 9 deletions integrations/tracing-providers.mdx

This file was deleted.

144 changes: 138 additions & 6 deletions integrations/tracing-providers/logfire.mdx
@@ -1,17 +1,149 @@
---
title: "Pydantic Logfire"
description: "Logfire is a tool for comprehensive observability of your LLM applications with OpenTelemetry."
description: "Modern Python observability with automatic OpenAI instrumentation and intelligent gateway routing"
---

[Pydantic Logfire](https://pydantic.dev/logfire) is a modern observability platform from the creators of Pydantic, designed specifically for Python applications. It provides automatic instrumentation for popular libraries including OpenAI, Anthropic, and other LLM providers, making it an excellent choice for AI application monitoring.

<Info>
Logfire's automatic instrumentation combined with Portkey's intelligent gateway creates a powerful observability stack where every trace is enriched with routing decisions, cache performance, and cost optimization data.
</Info>

## Why Logfire + Portkey?

<CardGroup cols={2}>
<Card title="Zero-Code OpenAI Instrumentation" icon="wand-magic-sparkles">
Logfire automatically instruments OpenAI SDK calls without any code changes
</Card>
<Card title="Gateway Intelligence" icon="brain">
Portkey adds routing context, fallback decisions, and cache performance to every trace
</Card>
<Card title="Python-First Design" icon="python">
Built by the Pydantic team specifically for Python developers
</Card>
<Card title="Real-Time Insights" icon="bolt">
See traces immediately with actionable optimization opportunities
</Card>
</CardGroup>

## Quick Start

### Prerequisites

- Python 3 installed
- Portkey account with API key
- OpenAI API key (or use Portkey's virtual keys)

### Step 1: Install Dependencies

Install the required packages for Logfire and Portkey integration:

```bash
pip install logfire openai portkey-ai
```

### Step 2: Basic Setup - Send Traces to Portkey

First, let's configure Logfire to send traces to Portkey's OpenTelemetry endpoint:

```python
import os
import logfire

# Configure OpenTelemetry export to Portkey
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.portkey.ai/v1/logs/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-portkey-api-key=YOUR_PORTKEY_API_KEY"

# Initialize Logfire
logfire.configure(
service_name='my-llm-app',
send_to_logfire=False, # Disable sending to Logfire cloud
)

# Instrument OpenAI globally
logfire.instrument_openai()
```
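
With global instrumentation enabled, any OpenAI client you create afterwards is traced automatically. A minimal sketch (assuming `OPENAI_API_KEY` is set in your environment):

```python
from openai import OpenAI

# Clients created after logfire.instrument_openai() are traced automatically
client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello, Portkey traces!"}],
)
print(response.choices[0].message.content)
```

Each call shows up in Portkey's logs with the request, response, and token usage captured by Logfire's instrumentation.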

### Step 3: Complete Setup - Use Portkey's Gateway

For the best experience, route your LLM calls through Portkey's gateway to get automatic optimizations:

```python
import logfire
import os
from portkey_ai import createHeaders
from openai import OpenAI

# Configure OpenTelemetry export
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.portkey.ai/v1/logs/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-portkey-api-key=YOUR_PORTKEY_API_KEY"

# Initialize Logfire
logfire.configure(
service_name='my-llm-app',
send_to_logfire=False,
)

# Create OpenAI client with Portkey's gateway
client = OpenAI(
api_key="YOUR_OPENAI_API_KEY", # Or use a dummy value with virtual keys
base_url="https://api.portkey.ai/v1",
default_headers=createHeaders(
api_key="YOUR_PORTKEY_API_KEY",
virtual_key="YOUR_VIRTUAL_KEY" # Optional: Use Portkey's secure key management
)
)

# Instrument the Portkey-configured client
logfire.instrument_openai(client)
```
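
If your application is async, the same pattern applies; `logfire.instrument_openai` also accepts `AsyncOpenAI` clients. A sketch under that assumption:

```python
import asyncio

import logfire
from openai import AsyncOpenAI
from portkey_ai import createHeaders

# Assumes logfire.configure(...) from the step above has already run

# Async client routed through Portkey's gateway
async_client = AsyncOpenAI(
    api_key="YOUR_OPENAI_API_KEY",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(api_key="YOUR_PORTKEY_API_KEY"),
)
logfire.instrument_openai(async_client)

async def main():
    response = await async_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello from an async client!"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```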

### Step 4: Make Instrumented LLM Calls

Now your LLM calls are automatically traced by Logfire and enhanced by Portkey:

```python
# Simple chat completion - automatically traced
response = client.chat.completions.create(
model="gpt-4",
messages=[
{
"role": "user",
"content": "Explain the benefits of observability in LLM applications"
}
],
temperature=0.7
)

print(response.choices[0].message.content)
```
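
You can also group related calls under a custom parent span with `logfire.span`, which makes multi-step workflows easier to navigate in the trace view. A short sketch, continuing with the client configured above:

```python
# Wrap a multi-step workflow in one parent span
with logfire.span("summarize-then-translate"):
    summary = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Summarize why observability matters."}],
    )
    translation = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": f"Translate to French: {summary.choices[0].message.content}",
            }
        ],
    )
```
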

## Next Steps

<CardGroup cols={2}>
<Card title="Configure Gateway" icon="gear" href="/product/ai-gateway/configs">
Set up intelligent routing, fallbacks, and caching
</Card>
<Card title="Explore Virtual Keys" icon="key" href="/product/ai-gateway/virtual-keys">
Secure your API keys with Portkey's vault
</Card>
<Card title="View Analytics" icon="chart-line" href="/product/observability/analytics">
Analyze costs, performance, and usage patterns
</Card>
<Card title="Set Up Budget & Rate Limts" icon="bell" href="/product/administration/enforce-budget-and-rate-limit">
Set Rate and Budget Limits per model/user/api-key
</Card>
</CardGroup>

---

## See Your Traces in Action

Once configured, navigate to the [Portkey dashboard](https://app.portkey.ai/logs) to see your Logfire instrumentation combined with gateway intelligence:

<Frame>
<img src="/images/product/opentelemetry.png" alt="OpenTelemetry traces in Portkey" />
</Frame>