---
title: DurableAgent
description: Create AI agents that maintain state, call tools, and handle interruptions gracefully.
type: reference
summary: Use DurableAgent to build AI agents that maintain state across steps and survive interruptions.
prerequisites:
  - /docs/ai
related:
  - /docs/ai/defining-tools
---

# DurableAgent

<Callout type="warn">
  The `@workflow/ai` package is currently in active development and should be considered experimental.
</Callout>

The `DurableAgent` class enables you to create AI-powered agents that can maintain state across workflow steps, call tools, and gracefully handle interruptions and resumptions.

Tool calls can be implemented as workflow steps for automatic retries, or as regular workflow-level logic that uses core library features such as [`sleep()`](/docs/api-reference/workflow/sleep) and [Hooks](/docs/foundations/hooks).

```typescript lineNumbers
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import { z } from "zod";
import type { UIMessageChunk } from "ai";

async function getWeather({ city }: { city: string }) {
  "use step";

  return `Weather in ${city} is sunny`;
}

async function myAgent() {
  "use workflow";

  const agent = new DurableAgent({
    model: "anthropic/claude-haiku-4.5",
    instructions: "You are a helpful weather assistant.",
    temperature: 0.7,
    tools: {
      getWeather: {
        description: "Get weather for a city",
        inputSchema: z.object({ city: z.string() }),
        execute: getWeather,
      },
    },
  });

  // The agent will stream its output to the workflow
  // run's default output stream
  const writable = getWritable<UIMessageChunk>();

  const result = await agent.stream({
    messages: [{ role: "user", content: "How is the weather in San Francisco?" }],
    writable,
  });

  // result contains messages, steps, and optional structured output
  console.log(result.messages);
}
```

## API Signature

### Class

<TSDoc
  definition={`
import { DurableAgent } from "@workflow/ai/agent";
export default DurableAgent;`}
/>

### DurableAgentOptions

<TSDoc
  definition={`
import type { DurableAgentOptions } from "@workflow/ai/agent";
export default DurableAgentOptions;`}
/>

### DurableAgentStreamOptions

<TSDoc
  definition={`
import type { DurableAgentStreamOptions } from "@workflow/ai/agent";
export default DurableAgentStreamOptions;`}
/>

### DurableAgentStreamResult

The result returned from the `stream()` method:

<TSDoc
  definition={`
import type { DurableAgentStreamResult } from "@workflow/ai/agent";
export default DurableAgentStreamResult;`}
/>

### GenerationSettings

Settings that control model generation behavior. These can be set on the constructor or overridden per-stream call:

<TSDoc
  definition={`
import type { GenerationSettings } from "@workflow/ai/agent";
export default GenerationSettings;`}
/>

### PrepareStepInfo

Information passed to the `prepareStep` callback:

<TSDoc
  definition={`
import type { PrepareStepInfo } from "@workflow/ai/agent";
export default PrepareStepInfo;`}
/>

### PrepareStepResult

Return type from the `prepareStep` callback:

<TSDoc
  definition={`
import type { PrepareStepResult } from "@workflow/ai/agent";
export default PrepareStepResult;`}
/>

### TelemetrySettings

Configuration for observability and telemetry:

<TSDoc
  definition={`
import type { TelemetrySettings } from "@workflow/ai/agent";
export default TelemetrySettings;`}
/>

### Callbacks

#### StreamTextOnFinishCallback

Called when streaming completes:

<TSDoc
  definition={`
import type { StreamTextOnFinishCallback } from "@workflow/ai/agent";
export default StreamTextOnFinishCallback;`}
/>

#### StreamTextOnErrorCallback

Called when an error occurs:

<TSDoc
  definition={`
import type { StreamTextOnErrorCallback } from "@workflow/ai/agent";
export default StreamTextOnErrorCallback;`}
/>

#### StreamTextOnAbortCallback

Called when the operation is aborted:

<TSDoc
  definition={`
import type { StreamTextOnAbortCallback } from "@workflow/ai/agent";
export default StreamTextOnAbortCallback;`}
/>

### Advanced Types

#### ToolCallRepairFunction

Function to repair malformed tool calls:

<TSDoc
  definition={`
import type { ToolCallRepairFunction } from "@workflow/ai/agent";
export default ToolCallRepairFunction;`}
/>

#### StreamTextTransform

Transform applied to the stream:

<TSDoc
  definition={`
import type { StreamTextTransform } from "@workflow/ai/agent";
export default StreamTextTransform;`}
/>

#### OutputSpecification

Specification for structured output parsing:

<TSDoc
  definition={`
import type { OutputSpecification } from "@workflow/ai/agent";
export default OutputSpecification;`}
/>

## Key Features

* **Durable Execution**: Agents can be interrupted and resumed without losing state
* **Flexible Tool Implementation**: Tools can be implemented as workflow steps for automatic retries, or as regular workflow-level logic
* **Stream Processing**: Handles streaming responses and tool calls in a structured way
* **Workflow Native**: Fully integrated with Workflow SDK for production-grade reliability
* **AI SDK Parity**: Supports the same options as AI SDK's `streamText` including generation settings, callbacks, and structured output

## Good to Know

* Tools can be implemented as workflow steps (using `"use step"` for automatic retries), or as regular workflow-level logic
* Tools can use core library features like `sleep()` and Hooks within their `execute` functions
* The agent processes tool calls iteratively until completion or `maxSteps` is reached
* **Default `maxSteps` is unlimited**; set a value to limit the number of LLM calls
* The `stream()` method returns `{ messages, steps, experimental_output, uiMessages }` containing the full conversation history, step details, optional structured output, and optionally accumulated UI messages
* Use `collectUIMessages: true` to accumulate `UIMessage[]` during streaming, useful for persisting conversation state without re-reading the stream
* The `prepareStep` callback runs before each step and can modify model, messages, generation settings, tool choice, and context
* Generation settings (temperature, maxOutputTokens, etc.) can be set on the constructor and overridden per-stream call
* Use `activeTools` to limit which tools are available for a specific stream call
* The `onFinish` callback is called when all steps complete; `onAbort` is called if aborted

## Examples

### Basic Agent with Tools

```typescript
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import { z } from "zod";
import type { UIMessageChunk } from "ai";

async function getWeather({ location }: { location: string }) {
  "use step";
  // Fetch weather data, encoding the location for safe use in the URL
  const response = await fetch(
    `https://api.weather.com?location=${encodeURIComponent(location)}`
  );
  return response.json();
}

async function weatherAgentWorkflow(userQuery: string) {
  "use workflow";

  const agent = new DurableAgent({
    model: "anthropic/claude-haiku-4.5",
    tools: {
      getWeather: {
        description: "Get current weather for a location",
        inputSchema: z.object({ location: z.string() }),
        execute: getWeather,
      },
    },
    instructions: "You are a helpful weather assistant. Always provide accurate weather information.",
  });

  await agent.stream({
    messages: [
      {
        role: "user",
        content: userQuery,
      },
    ],
    writable: getWritable<UIMessageChunk>(),
  });
}
```

### Multiple Tools

```typescript
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import { z } from "zod";
import type { UIMessageChunk } from "ai";

async function getWeather({ location }: { location: string }) {
  "use step";
  return `Weather in ${location}: Sunny, 72°F`;
}

async function searchEvents({ location, category }: { location: string; category: string }) {
  "use step";
  return `Found 5 ${category} events in ${location}`;
}

async function multiToolAgentWorkflow(userQuery: string) {
  "use workflow";

  const agent = new DurableAgent({
    model: "anthropic/claude-haiku-4.5",
    tools: {
      getWeather: {
        description: "Get weather for a location",
        inputSchema: z.object({ location: z.string() }),
        execute: getWeather,
      },
      searchEvents: {
        description: "Search for upcoming events in a location",
        inputSchema: z.object({ location: z.string(), category: z.string() }),
        execute: searchEvents,
      },
    },
  });

  await agent.stream({
    messages: [
      {
        role: "user",
        content: userQuery,
      },
    ],
    writable: getWritable<UIMessageChunk>(),
  });
}
```

### Multi-turn Conversation

```typescript
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import type { UIMessageChunk } from "ai";
import { z } from "zod";

async function searchProducts({ query }: { query: string }) {
  "use step";
  // Search product database
  return `Found 3 products matching "${query}"`;
}

async function multiTurnAgentWorkflow() {
  "use workflow";

  const agent = new DurableAgent({
    model: "anthropic/claude-haiku-4.5",
    tools: {
      searchProducts: {
        description: "Search for products",
        inputSchema: z.object({ query: z.string() }),
        execute: searchProducts,
      },
    },
  });

  const writable = getWritable<UIMessageChunk>();

  // First user message
  //   - Result is streamed to the provided `writable` stream
  //   - Message history is returned in `messages` for LLM context
  const { messages } = await agent.stream({
    messages: [
      { role: "user", content: "Find me some laptops" }
    ],
    writable,
  });

  // Continue the conversation with the accumulated message history
  const result = await agent.stream({
    messages: [
      ...messages,
      { role: "user", content: "Which one has the best battery life?" }
    ],
    writable,
  });

  // result.messages now contains the complete conversation history
  return result.messages;
}
```

### Tools with Workflow Library Features

```typescript
import { DurableAgent } from "@workflow/ai/agent";
import { sleep, defineHook, getWritable } from "workflow";
import { z } from "zod";
import type { UIMessageChunk } from "ai";

// Define a reusable hook type
const approvalHook = defineHook<{ approved: boolean; reason: string }>();

async function scheduleTask({ delaySeconds }: { delaySeconds: number }) {
  // Note: No "use step" for this tool call,
  // since `sleep()` is a workflow-level function
  await sleep(`${delaySeconds}s`);
  return `Slept for ${delaySeconds} seconds`;
}

async function requestApproval({ message }: { message: string }) {
  // Note: No "use step" for this tool call either,
  // since hooks are awaited at the workflow level

  // Utilize a Hook for Human-in-the-loop approval
  const hook = approvalHook.create({
    metadata: { message }
  });

  console.log(`Approval needed - token: ${hook.token}`);

  // Wait for the approval payload
  const approval = await hook;

  if (approval.approved) {
    return `Request approved: ${approval.reason}`;
  } else {
    throw new Error(`Request denied: ${approval.reason}`);
  }
}

async function agentWithLibraryFeaturesWorkflow(userRequest: string) {
  "use workflow";

  const agent = new DurableAgent({
    model: "anthropic/claude-haiku-4.5",
    tools: {
      scheduleTask: {
        description: "Pause the workflow for the specified number of seconds",
        inputSchema: z.object({
          delaySeconds: z.number(),
        }),
        execute: scheduleTask,
      },
      requestApproval: {
        description: "Request approval for an action",
        inputSchema: z.object({ message: z.string() }),
        execute: requestApproval,
      },
    },
  });

  await agent.stream({
    messages: [{ role: "user", content: userRequest }],
    writable: getWritable<UIMessageChunk>(),
  });
}
```

### Dynamic Context with prepareStep

Use `prepareStep` to modify settings before each step in the agent loop:

```typescript
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import type { UIMessageChunk } from "ai";

async function agentWithPrepareStep(userMessage: string) {
  "use workflow";

  const agent = new DurableAgent({
    model: "openai/gpt-4.1-mini", // Default model
    instructions: "You are a helpful assistant.",
  });

  await agent.stream({
    messages: [{ role: "user", content: userMessage }],
    writable: getWritable<UIMessageChunk>(),
    prepareStep: async ({ stepNumber, messages }) => {
      // Switch to a stronger model for complex reasoning after initial steps
      if (stepNumber > 2 && messages.length > 10) {
        return {
          model: "anthropic/claude-sonnet-4.5",
        };
      }

      // Trim context if messages grow too large
      if (messages.length > 20) {
        return {
          messages: [
            messages[0], // Keep system message
            ...messages.slice(-10), // Keep last 10 messages
          ],
        };
      }

      return {}; // No changes
    },
  });
}
```
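The context-trimming branch in `prepareStep` above can be factored into a small pure helper, which keeps the callback itself easy to read and test. A minimal sketch (the `Message` shape here is simplified for illustration, not the SDK's actual message type):

```typescript
type Message = { role: string; content: unknown };

// Keep the first message (typically the system prompt) plus the most
// recent `keep` messages; return the array unchanged when it is already
// within the `max` limit.
function trimContext(messages: Message[], max: number, keep: number): Message[] {
  if (messages.length <= max) return messages;
  return [messages[0], ...messages.slice(-keep)];
}
```

With `trimContext(messages, 20, 10)`, the `prepareStep` callback can return `{ messages: trimContext(messages, 20, 10) }` instead of inlining the slicing logic.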

### Message Injection with prepareStep

Inject messages from external sources (like hooks) before each LLM call:

```typescript
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable, defineHook } from "workflow";
import type { UIMessageChunk } from "ai";

const messageHook = defineHook<{ message: string }>();

async function agentWithMessageQueue(initialMessage: string) {
  "use workflow";

  const messageQueue: Array<{ role: "user"; content: string }> = [];

  // Listen for incoming messages via hook
  const hook = messageHook.create();
  hook.then(({ message }) => {
    messageQueue.push({ role: "user", content: message });
  });

  const agent = new DurableAgent({
    model: "anthropic/claude-haiku-4.5",
    instructions: "You are a helpful assistant.",
  });

  await agent.stream({
    messages: [{ role: "user", content: initialMessage }],
    writable: getWritable<UIMessageChunk>(),
    prepareStep: ({ messages }) => {
      // Inject queued messages before the next step
      if (messageQueue.length > 0) {
        const newMessages = messageQueue.splice(0);
        return {
          messages: [
            ...messages,
            ...newMessages.map(m => ({
              role: m.role,
              content: [{ type: "text" as const, text: m.content }],
            })),
          ],
        };
      }
      return {};
    },
  });
}
```
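The queue-draining branch inside `prepareStep` can likewise be pulled out as a pure helper. A sketch under simplified types (the content-part shape mirrors the mapping in the example above):

```typescript
type QueuedMessage = { role: "user"; content: string };
type TextPart = { type: "text"; text: string };
type UserModelMessage = { role: "user"; content: TextPart[] };

// Drain every queued entry (mutating the queue) and convert each
// plain-text message into the content-part shape used by the model messages.
function drainQueue(queue: QueuedMessage[]): UserModelMessage[] {
  return queue.splice(0).map((m) => ({
    role: m.role,
    content: [{ type: "text", text: m.content }],
  }));
}
```

Inside `prepareStep`, this reduces the branch to `return { messages: [...messages, ...drainQueue(messageQueue)] }` when the queue is non-empty.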

### Generation Settings

Configure model generation parameters at the constructor or stream level:

```typescript
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import type { UIMessageChunk } from "ai";

async function agentWithGenerationSettings() {
  "use workflow";

  // Set default generation settings in constructor
  const agent = new DurableAgent({
    model: "anthropic/claude-haiku-4.5",
    temperature: 0.7,
    maxOutputTokens: 2000,
    topP: 0.9,
  });

  // Override settings per-stream call
  await agent.stream({
    messages: [{ role: "user", content: "Write a creative story" }],
    writable: getWritable<UIMessageChunk>(),
    temperature: 0.9, // More creative for this call
    maxSteps: 1,
  });

  // Use different settings for a different task
  await agent.stream({
    messages: [{ role: "user", content: "Summarize this document precisely" }],
    writable: getWritable<UIMessageChunk>(),
    temperature: 0.1, // More deterministic
    maxSteps: 1,
  });
}
```

### Limiting Steps with maxSteps

By default, the agent loops until completion. Use `maxSteps` to limit the number of LLM calls:

```typescript
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import { z } from "zod";
import type { UIMessageChunk } from "ai";

async function searchWeb({ query }: { query: string }) {
  "use step";
  return `Results for "${query}": ...`;
}

async function analyzeResults({ data }: { data: string }) {
  "use step";
  return `Analysis: ${data}`;
}

async function multiStepAgent() {
  "use workflow";

  const agent = new DurableAgent({
    model: "anthropic/claude-haiku-4.5",
    tools: {
      searchWeb: {
        description: "Search the web for information",
        inputSchema: z.object({ query: z.string() }),
        execute: searchWeb,
      },
      analyzeResults: {
        description: "Analyze search results",
        inputSchema: z.object({ data: z.string() }),
        execute: analyzeResults,
      },
    },
  });

  // Limit to 10 steps for safety on complex research tasks
  const result = await agent.stream({
    messages: [{ role: "user", content: "Research the latest AI trends and provide an analysis" }],
    writable: getWritable<UIMessageChunk>(),
    maxSteps: 10,
  });

  // Access step-by-step details
  console.log(`Completed in ${result.steps.length} steps`);
}
```

### Callbacks for Monitoring

Use callbacks to monitor streaming progress, handle errors, and react to completion:

```typescript
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import type { UIMessageChunk } from "ai";

async function agentWithCallbacks() {
  "use workflow";

  const agent = new DurableAgent({
    model: "anthropic/claude-haiku-4.5",
  });

  await agent.stream({
    messages: [{ role: "user", content: "Hello!" }],
    writable: getWritable<UIMessageChunk>(),
    maxSteps: 5,

    // Called after each step completes
    onStepFinish: async (step) => {
      console.log(`Step finished: ${step.finishReason}`);
      console.log(`Tokens used: ${step.usage.totalTokens}`);
    },

    // Called when streaming completes
    onFinish: async ({ steps, messages }) => {
      console.log(`Completed with ${steps.length} steps`);
      console.log(`Final message count: ${messages.length}`);
    },

    // Called on errors
    onError: async ({ error }) => {
      console.error("Stream error:", error);
    },
  });
}
```

### Structured Output

Parse structured data from the LLM response using `Output.object`:

```typescript
import { DurableAgent, Output } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import { z } from "zod";
import type { UIMessageChunk } from "ai";

async function agentWithStructuredOutput() {
  "use workflow";

  const agent = new DurableAgent({
    model: "anthropic/claude-haiku-4.5",
  });

  const result = await agent.stream({
    messages: [{ role: "user", content: "Analyze the sentiment of: 'I love this product!'" }],
    writable: getWritable<UIMessageChunk>(),
    experimental_output: Output.object({
      schema: z.object({
        sentiment: z.enum(["positive", "negative", "neutral"]),
        confidence: z.number().min(0).max(1),
        reasoning: z.string(),
      }),
    }),
  });

  // Access the parsed structured output
  console.log(result.experimental_output);
  // { sentiment: "positive", confidence: 0.95, reasoning: "..." }
}
```

### Tool Choice Control

Control when and which tools the model can use:

```typescript
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import { z } from "zod";
import type { UIMessageChunk } from "ai";

async function agentWithToolChoice() {
  "use workflow";

  const agent = new DurableAgent({
    model: "anthropic/claude-haiku-4.5",
    tools: {
      calculator: {
        description: "Perform calculations",
        inputSchema: z.object({ expression: z.string() }),
        execute: async ({ expression }) => `Calculated: ${expression}`,
      },
      search: {
        description: "Search for information",
        inputSchema: z.object({ query: z.string() }),
        execute: async ({ query }) => `Results for: ${query}`,
      },
    },
    toolChoice: "auto", // Default: model decides
  });

  // Force the model to use a tool
  await agent.stream({
    messages: [{ role: "user", content: "What is 2 + 2?" }],
    writable: getWritable<UIMessageChunk>(),
    toolChoice: "required",
    maxSteps: 2,
  });

  // Prevent tool usage
  await agent.stream({
    messages: [{ role: "user", content: "Just chat with me" }],
    writable: getWritable<UIMessageChunk>(),
    toolChoice: "none",
  });

  // Force a specific tool
  await agent.stream({
    messages: [{ role: "user", content: "Calculate something" }],
    writable: getWritable<UIMessageChunk>(),
    toolChoice: { type: "tool", toolName: "calculator" },
    maxSteps: 2,
  });

  // Limit available tools for this call
  await agent.stream({
    messages: [{ role: "user", content: "Just search, don't calculate" }],
    writable: getWritable<UIMessageChunk>(),
    activeTools: ["search"],
    maxSteps: 2,
  });
}
```

### Passing Context to Tools

Use `experimental_context` to pass shared context to tool executions:

```typescript
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import { z } from "zod";
import type { UIMessageChunk } from "ai";

interface UserContext {
  userId: string;
  permissions: string[];
}

async function agentWithContext(userId: string) {
  "use workflow";

  const agent = new DurableAgent({
    model: "anthropic/claude-haiku-4.5",
    tools: {
      getUserData: {
        description: "Get user data",
        inputSchema: z.object({}),
        execute: async (_, { experimental_context }) => {
          const ctx = experimental_context as UserContext;
          return { userId: ctx.userId, permissions: ctx.permissions };
        },
      },
    },
  });

  await agent.stream({
    messages: [{ role: "user", content: "What are my permissions?" }],
    writable: getWritable<UIMessageChunk>(),
    maxSteps: 2,
    experimental_context: {
      userId,
      permissions: ["read", "write"],
    } as UserContext,
  });
}
```

### Collecting UI Messages

Use `collectUIMessages` to accumulate `UIMessage[]` during streaming. This is useful when you need to persist the conversation without re-reading the run's output stream:

```typescript lineNumbers
import { DurableAgent } from "@workflow/ai/agent";
import { getWritable } from "workflow";
import type { UIMessage, UIMessageChunk } from "ai";

async function agentWithUIMessages(userMessage: string) {
  "use workflow";

  const agent = new DurableAgent({
    model: "anthropic/claude-haiku-4.5",
    instructions: "You are a helpful assistant.",
  });

  const result = await agent.stream({
    messages: [{ role: "user", content: userMessage }],
    writable: getWritable<UIMessageChunk>(),
    collectUIMessages: true, // [!code highlight]
  });

  // Access the accumulated UI messages
  const uiMessages: UIMessage[] = result.uiMessages ?? []; // [!code highlight]

  // Persist messages to a database
  await saveConversation(uiMessages);

  return result;
}

async function saveConversation(messages: UIMessage[]) {
  "use step";
  // Save to database...
}
```

<Callout type="info">
  The `uiMessages` property is only available when `collectUIMessages` is set to `true`. When disabled, `uiMessages` is `undefined`.
</Callout>

## See Also

* [Building Durable AI Agents](/docs/ai) - Complete guide to creating durable agents
* [Queueing User Messages](/docs/ai/message-queueing) - Using prepareStep for message injection
* [WorkflowChatTransport](/docs/api-reference/workflow-ai/workflow-chat-transport) - Transport layer for AI SDK streams
* [Workflows and Steps](/docs/foundations/workflows-and-steps) - Understanding workflow fundamentals
* [AI SDK Loop Control](https://ai-sdk.dev/docs/agents/loop-control) - AI SDK's agent loop control patterns

