When building AI applications, you need real-world signals about response quality to improve prompts, catch regressions, and understand what users find helpful. User Feedback lets you collect positive/negative ratings on LLM responses, enabling data-driven improvements to your AI systems based on actual user satisfaction.

Why use User Feedback

  • Improve response quality: Identify patterns in poorly-rated responses to refine prompts and model selection
  • Catch regressions early: Monitor feedback trends to detect when changes negatively impact user experience
  • Build training datasets: Use highly-rated responses as examples for fine-tuning or few-shot prompting

Quick Start

1. Make a request and capture the ID

Capture the Helicone request ID from your LLM response:
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

// Use a custom request ID for feedback tracking
const customId = crypto.randomUUID();

const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Explain quantum computing" }]
}, {
  headers: {
    "Helicone-Request-Id": customId
  }
});

// Use your custom ID for feedback
const heliconeId = customId;
You can also try to get the Helicone ID from response headers, though this may not always be available:
// Use .withResponse() to access the raw HTTP response alongside the parsed body
const { data: completion, response: raw } = await openai.chat.completions
  .create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Explain quantum computing" }]
  })
  .withResponse();

// Try to get the Helicone request ID from the response headers
const heliconeId = raw.headers.get("helicone-id");

// If not available, fall back to the custom ID approach above
if (!heliconeId) {
  console.log("Helicone ID not found in headers, use custom ID approach instead");
}
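Whichever approach you use, keep the ID alongside the response you show the user (for example, return it from your API) so your rating UI can reference it when the user clicks thumbs-up or thumbs-down. A minimal sketch; the `RatableResponse` shape is illustrative, not part of any SDK:

```typescript
// Illustrative shape (not part of any Helicone SDK): pair the model's answer
// with the Helicone request ID so a rating widget can submit feedback later.
interface RatableResponse {
  heliconeId: string;
  content: string;
}

function withFeedbackId(heliconeId: string, content: string | null): RatableResponse {
  // Completion content can be null, so normalize it to an empty string
  return { heliconeId, content: content ?? "" };
}
```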
2. Submit feedback rating

Send a positive or negative rating for the response:
const feedback = await fetch(
  `https://api.helicone.ai/v1/request/${heliconeId}/feedback`,
  {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.HELICONE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      rating: true  // true = positive, false = negative
    }),
  }
);
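If you submit feedback from more than one place in your app, a small wrapper that surfaces non-2xx responses can save debugging time. A sketch; `buildFeedbackRequest` and `submitFeedback` are our names, not part of any Helicone SDK:

```typescript
// Sketch of a reusable wrapper around the feedback endpoint.
// These helper names are ours, not part of any Helicone SDK.
function buildFeedbackRequest(heliconeId: string, rating: boolean, apiKey: string) {
  return {
    url: `https://api.helicone.ai/v1/request/${heliconeId}/feedback`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ rating }),
    },
  };
}

async function submitFeedback(heliconeId: string, rating: boolean): Promise<void> {
  const { url, init } = buildFeedbackRequest(
    heliconeId,
    rating,
    process.env.HELICONE_API_KEY ?? ""
  );
  const res = await fetch(url, init);
  if (!res.ok) {
    // Surface failures instead of silently dropping the rating
    throw new Error(`Feedback submission failed with status ${res.status}`);
  }
}
```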
3. View feedback analytics

Access feedback metrics in your Helicone dashboard to analyze response quality trends and identify areas for improvement.

Configuration Options

Feedback collection requires minimal configuration:
Parameter   | Type    | Description                      | Default | Example
rating      | boolean | User's feedback on the response  | N/A     | true (positive) or false (negative)
helicone-id | string  | Request ID to attach feedback to | N/A     | UUID
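In TypeScript, the request body can be captured in a small type so ratings are checked at compile time. A sketch; the `FeedbackPayload` name is ours, and only the `rating` field is specified above:

```typescript
// Body sent to POST /v1/request/{heliconeId}/feedback.
// The type name is ours; only the `rating` field is specified by the docs.
interface FeedbackPayload {
  rating: boolean; // true = positive, false = negative
}

function makeFeedbackBody(rating: boolean): string {
  const payload: FeedbackPayload = { rating };
  return JSON.stringify(payload);
}
```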
When you need to submit feedback for multiple requests, use parallel API calls:
// Note: There is no bulk feedback endpoint - each rating requires a separate API call
const feedbackBatch = [
  { requestId: "f47ac10b-58cc-4372-a567-0e02b2c3d479", rating: true },
  { requestId: "6ba7b810-9dad-11d1-80b4-00c04fd430c8", rating: false },
  { requestId: "6ba7b811-9dad-11d1-80b4-00c04fd430c8", rating: true }
];

// Submit feedback in parallel for better performance
const feedbackPromises = feedbackBatch.map(({ requestId, rating }) =>
  fetch(`https://api.helicone.ai/v1/request/${requestId}/feedback`, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.HELICONE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ rating }),
  })
);

// Wait for all feedback submissions to complete
const results = await Promise.all(feedbackPromises);

// Check for any failed submissions
results.forEach((result, index) => {
  if (!result.ok) {
    console.error(`Failed to submit feedback for request ${feedbackBatch[index].requestId}`);
  }
});
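One caveat with `Promise.all`: if any single `fetch` rejects (for example on a network error), the whole batch rejects and you lose the results of the submissions that did succeed. `Promise.allSettled` keeps them. A sketch of a variant that returns the IDs that failed; `submitFeedbackBatch` is our name:

```typescript
// Hypothetical batch submitter using Promise.allSettled so one network
// failure does not discard the results of the other submissions.
async function submitFeedbackBatch(
  batch: { requestId: string; rating: boolean }[]
): Promise<string[]> {
  const settled = await Promise.allSettled(
    batch.map(({ requestId, rating }) =>
      fetch(`https://api.helicone.ai/v1/request/${requestId}/feedback`, {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.HELICONE_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ rating }),
      }).then((res) => {
        // Treat non-2xx responses as failures too, not just network errors
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
      })
    )
  );

  // Collect the request IDs whose submissions failed, for retry or logging
  return settled.flatMap((result, i) =>
    result.status === "rejected" ? [batch[i].requestId] : []
  );
}
```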

Use Cases

Track user satisfaction with AI assistant responses:
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

// In your chat handler
async function handleChatMessage(userId: string, message: string) {
  const requestId = crypto.randomUUID();
  
  const response = await openai.chat.completions.create(
    {
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: message }
      ]
    },
    {
      headers: {
        "Helicone-Request-Id": requestId,
        "Helicone-User-Id": userId,
        "Helicone-Property-Feature": "chat"
      }
    }