Documentation Index
Fetch the complete documentation index at: https://s2.dev/docs/llms.txt
Use this file to discover all available pages before exploring further.

S2 integrates with the Vercel AI SDK through the @s2-dev/resumable-stream
package. The integration persists AI SDK UIMessageChunk events to S2 and
replays them through the AI SDK’s standard chat transport.
- @s2-dev/resumable-stream/aisdk: useChat streams can be made resumable, allowing clients to reconnect mid-generation.
- Completed messages can be kept on a per-session or per-conversation stream for replay and history.
- TypeScript SDK examples: agent sessions, chat persistence, and multi-agent patterns using the S2 TypeScript SDK directly.
AI SDK resumable streams
The @s2-dev/resumable-stream/aisdk entrypoint provides
createResumableChat for AI SDK useChat streams. makeResumable tees the
UIMessageChunk stream: one branch streams directly to the client as SSE, and
the other persists to S2 for later replay.
Prerequisites
Requires ai >= 5.0. Create an S2 access token and basin first:
- Sign up here, generate an access token, and set it as S2_ACCESS_TOKEN in your env.
- Create a basin from the Basins tab with Create Stream on Append enabled, and set it as S2_BASIN in your env.
Setup
Create a createResumableChat instance once and share it across routes:
lib/s2.ts
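The original listing for lib/s2.ts is not shown here. The sketch below assumes the createResumableChat factory accepts the S2 credentials directly; the accessToken and basin option names are assumptions (only mode, leaseDurationMs, onError, batchSize, and lingerDuration are documented in the configuration table).

```typescript
// lib/s2.ts -- minimal sketch; accessToken/basin option names are assumptions.
import { createResumableChat } from "@s2-dev/resumable-stream/aisdk";

// One shared instance, imported by both the POST and GET routes.
export const resumableChat = createResumableChat({
  accessToken: process.env.S2_ACCESS_TOKEN!, // hypothetical option name
  basin: process.env.S2_BASIN!,              // hypothetical option name
  mode: "single-use",                        // documented default
});
```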
Server: POST route
app/api/chat/route.ts
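The original POST route listing is not shown here. A hedged sketch: the handler runs a normal streamText generation and hands the UI message stream to the shared instance, which tees it (one branch to the client as SSE, one persisted to S2). The makeResumable call shape and the model choice are assumptions.

```typescript
// app/api/chat/route.ts -- sketch; the makeResumable signature is an assumption.
import { convertToModelMessages, streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { resumableChat } from "@/lib/s2";

export async function POST(req: Request) {
  const { id, messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"), // any AI SDK model works here
    messages: convertToModelMessages(messages),
  });

  // makeResumable tees the UIMessageChunk stream: one branch is returned to
  // the client as SSE, the other is appended to an S2 stream keyed by id.
  return resumableChat.makeResumable(id, result.toUIMessageStream());
}
```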
Server: GET route (reconnect)
app/api/chat/[id]/stream/route.ts
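The original GET route listing is not shown here. A sketch of the reconnect handler, assuming a hypothetical resumeStream method on the shared instance:

```typescript
// app/api/chat/[id]/stream/route.ts -- sketch; resumeStream is an assumed
// method name for tailing an in-flight generation from S2.
import { resumableChat } from "@/lib/s2";

export async function GET(
  _req: Request,
  { params }: { params: Promise<{ id: string }> },
) {
  const { id } = await params;
  // Tails the in-flight generation from S2 as SSE, or responds with
  // no content when there is nothing to resume.
  return resumableChat.resumeStream(id);
}
```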
${api}/${chatId}/stream is the default that DefaultChatTransport reconnects to, so no transport customization is needed.
Client
app/page.tsx
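The original client listing is not shown here. useChat and its resume option are standard AI SDK APIs; the component shape below is a minimal sketch, and the CHAT_ID constant is a stand-in for however the app derives a stable chat id.

```typescript
// app/page.tsx -- minimal client sketch; only the useChat wiring matters.
"use client";
import { useChat } from "@ai-sdk/react";
import { useState } from "react";

const CHAT_ID = "demo-chat"; // stable id shared with the server routes

export default function Page() {
  const [input, setInput] = useState("");
  const { messages, sendMessage } = useChat({
    id: CHAT_ID,
    resume: true, // reconnect to /api/chat/{id}/stream on mount
  });

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role}:{" "}
          {m.parts.map((p) => (p.type === "text" ? p.text : "")).join("")}
        </div>
      ))}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput("");
        }}
      >
        <input value={input} onChange={(e) => setInput(e.target.value)} />
      </form>
    </div>
  );
}
```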
resume: true triggers useChat’s reconnectToStream on mount, which hits the GET route. If there’s an in-flight generation, it tails it from S2; otherwise it no-ops.
Configuration
Relevant options on createResumableChat:
| option | default | what it controls |
|---|---|---|
| mode | "single-use" | "single-use": one S2 stream per generation, self-cleans via a final trim. "shared": one S2 stream reused across generations, trimmed on each new claim. |
| leaseDurationMs | 5000 | Only for shared mode. Max pause within an active generation before a new claim can take it over. |
| onError | generic message | Maps upstream errors to the errorText shown to the client. Default emits "An error occurred."; provide to sanitize or forward details. |
| batchSize / lingerDuration | 10 / 50ms | S2 append batching knobs. |
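Putting the table together, a shared-mode configuration might look like the sketch below. The option names and defaults come from the table; the factory call itself and any credential options are assumptions.

```typescript
// Sketch: a shared-mode configuration built from the documented options.
import { createResumableChat } from "@s2-dev/resumable-stream/aisdk";

const chat = createResumableChat({
  mode: "shared",          // one S2 stream reused across generations
  leaseDurationMs: 10_000, // allow longer pauses before a new claim takes over
  // Sanitize upstream errors instead of the default "An error occurred."
  onError: (err: unknown) => "The model is temporarily unavailable.",
  batchSize: 20,           // S2 append batching: records per batch
  lingerDuration: 25,      // S2 append batching: ms to wait before flushing
});
```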
End-to-end demo
A runnable Bun server + vanilla-JS client demonstrating the full flow, including transcript persistence and mid-generation refresh recovery, lives in the SDK repo: examples/ai-sdk-resumable-chat.

