---
title: "OpenAI"
description: "Learn about using Sentry for OpenAI."
url: https://docs.sentry.io/platforms/python/integrations/openai/
---

# OpenAI | Sentry for Python

This integration connects Sentry with the [OpenAI Python SDK](https://github.com/openai/openai-python).

Once you've set up this integration, you can use Sentry AI Agents Monitoring, a Sentry dashboard that helps you understand what's going on with your AI requests.

Sentry AI Monitoring will automatically collect information about prompts, tools, tokens, and models. Learn more about the [AI Agents Dashboard](https://docs.sentry.io/ai/monitoring/agents.md).

##### Latest Features

AI frameworks are moving fast, and so are our integrations. To get the most out of our AI Agents Monitoring dashboard, use Python SDK version 2.41.0 or later.

## [Install](https://docs.sentry.io/platforms/python/integrations/openai.md#install)

Install `sentry-sdk` from PyPI:

```bash
pip install sentry-sdk
```

## [Configure](https://docs.sentry.io/platforms/python/integrations/openai.md#configure)

If you have the `openai` package in your dependencies, the OpenAI integration will be enabled automatically when you initialize the Sentry SDK.

An additional dependency, `tiktoken`, is required if you want to calculate token usage for streaming chat responses.


```python
import sentry_sdk

sentry_sdk.init(
    dsn="___PUBLIC_DSN___",
    # Add data like request headers and IP for users, if applicable;
    # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
    send_default_pii=True,
    # Set traces_sample_rate to 1.0 to capture 100%
    # of transactions for tracing.
    traces_sample_rate=1.0,
    # To collect profiles for all profile sessions,
    # set `profile_session_sample_rate` to 1.0.
    profile_session_sample_rate=1.0,
    # Profiles will be automatically collected while
    # there is an active span.
    profile_lifecycle="trace",
    # Enable logs to be sent to Sentry
    enable_logs=True,
)
```

## [Verify](https://docs.sentry.io/platforms/python/integrations/openai.md#verify)

Verify that the integration works by making a chat request to OpenAI.

```python
import sentry_sdk
from openai import OpenAI

sentry_sdk.init(...)  # same as above

client = OpenAI(api_key="(your OpenAI key)")

def my_llm_stuff():
    with sentry_sdk.start_transaction(
        name="The result of the AI inference",
        op="ai-inference",
    ):
        print(
            client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "system", "content": "say hello"}],
            )
            .choices[0]
            .message.content
        )

my_llm_stuff()
```

After running this script, the resulting data should show up in the "AI Spans" tab on the "Explore" > "Traces" page on Sentry.io.

If you manually create an [Invoke Agent Span](https://docs.sentry.io/platforms/python/tracing/instrumentation/custom-instrumentation/ai-agents-module.md#invoke-agent-span) (not done in the example above), the data will also show up in the [AI Agents Dashboard](https://docs.sentry.io/ai/monitoring/agents.md).

It may take a couple of moments for the data to appear in [sentry.io](https://sentry.io).

## [Behavior](https://docs.sentry.io/platforms/python/integrations/openai.md#behavior)

* The OpenAI integration will connect Sentry with all supported OpenAI methods automatically.

* All exceptions leading to an `OpenAIError` are reported.

* The supported methods are currently `responses.create`, `chat.completions.create`, and `embeddings.create`.

* Sentry considers LLM and tokenizer inputs/outputs to be PII (personally identifiable information) and doesn't include this data by default. If you want to include it, set `send_default_pii=True` in the `sentry_sdk.init()` call. To explicitly exclude prompts and outputs despite `send_default_pii=True`, configure the integration with `include_prompts=False` as shown in the [Options section](https://docs.sentry.io/platforms/python/integrations/openai.md#options) below.

## [Options](https://docs.sentry.io/platforms/python/integrations/openai.md#options)

By adding `OpenAIIntegration` to your `sentry_sdk.init()` call explicitly, you can set options to change its behavior:

```python
import sentry_sdk
from sentry_sdk.integrations.openai import OpenAIIntegration

sentry_sdk.init(
    # ...
    # Add data like inputs and responses;
    # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
    send_default_pii=True,
    integrations=[
        OpenAIIntegration(
            include_prompts=False,  # LLM/tokenizer inputs/outputs will not be sent to Sentry, despite send_default_pii=True
            tiktoken_encoding_name="cl100k_base",
        ),
    ],
)
```

You can pass the following keyword arguments to `OpenAIIntegration()`:

* `include_prompts`:

  Whether LLM and tokenizer inputs and outputs should be sent to Sentry. Sentry considers this data personally identifiable information (PII) by default. If you want to include the data, set `send_default_pii=True` in the `sentry_sdk.init()` call. To explicitly exclude prompts and outputs despite `send_default_pii=True`, configure the integration with `include_prompts=False`.

  The default is `True`.

* `tiktoken_encoding_name`:

  If you want to calculate token usage for streaming chat responses, you need the additional dependency [tiktoken](https://pypi.org/project/tiktoken/) installed, and you must specify the `tiktoken_encoding_name` that you use for tokenization. See the [OpenAI Cookbook](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for possible values.

  The default is `None`.

## [Supported Versions](https://docs.sentry.io/platforms/python/integrations/openai.md#supported-versions)

* OpenAI: 1.0+
* tiktoken: 0.3.0+
* Python: 3.9+
* Sentry Python SDK: 2.41.0+
