Streams grow when records are appended to them. Streams are an “append-only” data structure: records are always added to the end, or tail, of the stream. When you append records, S2 responds with an acknowledgement only once your data is fully durable. The acknowledgement describes the position your data occupies in the stream, as well as the sequence number that will be assigned to the next record appended.
The acknowledgement contains three positions:
start — position of the first record appended.
end — one past the last record appended (so end.seq_num - start.seq_num is the number of records).
tail — current tail of the stream, which can exceed end if there have been concurrent appends.
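For illustration, here is how those three positions might be inspected after an append. The `AppendAck` and `Position` shapes and field names below are hypothetical, not the exact SDK types:

```typescript
// Hypothetical shape of an append acknowledgement; field names are
// illustrative, not the real S2 SDK types.
interface Position { seqNum: number }
interface AppendAck { start: Position; end: Position; tail: Position }

// Number of records covered by this acknowledgement:
// end is one past the last record appended.
function recordCount(ack: AppendAck): number {
  return ack.end.seqNum - ack.start.seqNum;
}

// If the tail is past end, other writers appended concurrently.
function hadConcurrentAppends(ack: AppendAck): boolean {
  return ack.tail.seqNum > ack.end.seqNum;
}

const ack: AppendAck = {
  start: { seqNum: 100 },
  end: { seqNum: 103 },
  tail: { seqNum: 105 },
};
console.log(recordCount(ack));          // 3 records appended
console.log(hadConcurrentAppends(ack)); // true: tail (105) > end (103)
```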
Batches
The append API accepts batches of records. A single batch can contain up to 1000 records or 1 MiB of data. For payloads larger than the 1 MiB record size limit, the typical approach is to store the data externally (e.g. in object storage) and append a pointer to it as a record. You can also serialize large messages across multiple records; see this blog post for patterns and examples with the TypeScript SDK. A single client connection is rate-limited to 200 batches per second, so rather than writing more often than once every 5 milliseconds, batch the records that accumulate within that window. See below for high-level abstractions that make this easy!

Auto-batching and the Producer API
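As a sketch of how client-side batching can work, here is a toy producer that buffers submitted records and cuts a new batch when the documented limits (1000 records or 1 MiB per batch) would be exceeded. The class and method names are illustrative, not the SDK's actual Producer API, and a real producer would also flush on a linger timer and append asynchronously:

```typescript
// Toy producer: buffers individual records and groups them into batches,
// cutting a batch before the documented S2 limits would be exceeded.
// Illustrative only; not the real SDK Producer API.
class ToyProducer {
  private buffer: Uint8Array[] = [];
  private bufferedBytes = 0;
  // Batches that a real producer would append to the stream.
  readonly flushed: Uint8Array[][] = [];

  constructor(
    private maxRecords = 1000,      // S2 limit: 1000 records per batch
    private maxBytes = 1024 * 1024, // S2 limit: 1 MiB per batch
  ) {}

  submit(record: Uint8Array): void {
    const wouldOverflow =
      this.buffer.length >= this.maxRecords ||
      (this.buffer.length > 0 &&
        this.bufferedBytes + record.length > this.maxBytes);
    if (wouldOverflow) this.flush();
    this.buffer.push(record);
    this.bufferedBytes += record.length;
  }

  flush(): void {
    if (this.buffer.length === 0) return;
    // A real producer would append this batch here, pacing itself so a
    // single connection stays under 200 batches per second (one per 5 ms).
    this.flushed.push(this.buffer);
    this.buffer = [];
    this.bufferedBytes = 0;
  }
}

// Tiny thresholds for demonstration: at most 3 records per batch.
const producer = new ToyProducer(3);
for (let i = 0; i < 7; i++) producer.submit(new Uint8Array(10));
producer.flush(); // drain the remainder
console.log(producer.flushed.map((b) => b.length)); // [3, 3, 1]
```

A real producer also flushes when a configurable linger time elapses, so low-throughput writers are not stuck waiting for a full batch.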
The SDKs provide a Producer API that handles batching automatically. You submit individual records, and the producer groups them into batches based on configurable thresholds (linger time, record count, byte size). See Tuning for details on batching parameters and session-level performance.

See also
SDK
Append sessions, Producer API, backpressure
API Reference
HTTP endpoint, batch semantics, concurrency control

