Ingestion

Batch ingestion

POST /api/v1/batch — send track + people + alias operations in one request. Mixed-type, ordered (aliases first, then people, then events), and counted as one against the rate limit.

The batch endpoint accepts a mixed list of track, people, and alias operations in a single HTTP request. It's the high-throughput path — every SDK's flush uses this endpoint by default — and it's the right answer when you need to ingest hundreds or thousands of operations without burning rate-limit budget.

POST /api/v1/batch

Authentication

Required header: `x-api-key: sk_live_…` or `x-api-key: sk_test_…`. See Authentication.

Request body

JSON
{
  "operations": [
    {
      "type": "track" | "people" | "alias",
      "payload": { /* full payload for the operation type */ }
    }
  ]
}
`operations` (array<object>, required)
Ordered list of operations. Empty array → 400 Bad Request. No documented maximum length; the practical limit is the 500 MB body cap.

`operations[].type` (string, required)
One of `track`, `people`, or `alias`. Any other value → 400 Bad Request.

`operations[].payload` (object, required)
The full payload for the operation type: for `track`, the [AnalyticsEvent](/api/ingestion/track) shape; for `people`, the [PersonProfile](/api/ingestion/people) shape; for `alias`, the [PersonAlias](/api/ingestion/alias) shape.
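
For reference, the same request shape can be sketched as TypeScript types. This is illustrative rather than an official SDK artifact: the three payload interfaces are trimmed to the fields used in this page's examples, and the linked payload docs remain authoritative.

TypeScript
// Illustrative types for the batch request body. The payload
// interfaces are trimmed to the fields used on this page; see the
// AnalyticsEvent / PersonProfile / PersonAlias docs for full shapes.
interface AnalyticsEvent {
  event_name: string;
  distinct_id: string;
  properties?: Record<string, unknown>;
}

interface PersonProfile {
  distinct_id: string;
  properties: Record<string, unknown>;
}

interface PersonAlias {
  alias_id: string;
  distinct_id: string;
}

// A discriminated union keeps each type paired with its payload shape.
type BatchOperation =
  | { type: "track"; payload: AnalyticsEvent }
  | { type: "people"; payload: PersonProfile }
  | { type: "alias"; payload: PersonAlias };

interface BatchRequest {
  operations: BatchOperation[]; // must be non-empty, else 400
}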

Example request

bash
curl -X POST https://api.sankofa.dev/api/v1/batch \
-H "Content-Type: application/json" \
-H "x-api-key: sk_live_..." \
-d '{
  "operations": [
    {
      "type": "alias",
      "payload": {
        "alias_id": "anon_a3b9ff",
        "distinct_id": "user_123"
      }
    },
    {
      "type": "people",
      "payload": {
        "distinct_id": "user_123",
        "properties": {
          "email": "ada@example.com",
          "plan": "pro"
        }
      }
    },
    {
      "type": "track",
      "payload": {
        "event_name": "checkout_started",
        "distinct_id": "user_123",
        "properties": { "cart_value": 49.99 }
      }
    },
    {
      "type": "track",
      "payload": {
        "event_name": "checkout_completed",
        "distinct_id": "user_123",
        "properties": { "cart_value": 49.99, "currency": "USD" }
      }
    }
  ]
}'

Response

On success the engine reports per-type counts:

JSON
{
  "ok": true,
  "project_id": "proj_abc",
  "project_name": "My App",
  "environment": "live",
  "operations_received": 4,
  "events_received": 2,
  "people_received": 1,
  "aliases_received": 1,
  "commands": []
}

The per-type counts reflect the operations the engine actually accepted, after garbage-ID filtering.

Discarded responses

If every operation's distinct_id is garbage, the engine returns 202 Accepted with:

JSON
{
  "ok": true,
  "status": "discarded_all"
}

If only some operations have garbage IDs, the engine silently drops those and proceeds with the rest. The success response counts only accepted operations — you won't see per-item rejection reporting.
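
Client code that wants to surface these silent drops can at least compare counts. A minimal sketch (the fetch wiring mirrors the curl example above; `operations` is the array you just sent, typed as in the earlier sketch):

TypeScript
// Sketch: detect full and partial garbage-ID drops from the response.
declare const operations: BatchOperation[]; // the array you just sent

const res = await fetch("https://api.sankofa.dev/api/v1/batch", {
  method: "POST",
  headers: { "Content-Type": "application/json", "x-api-key": "sk_live_..." },
  body: JSON.stringify({ operations }),
});
const body = await res.json();

if (res.status === 202 && body.status === "discarded_all") {
  // Every distinct_id was judged garbage; nothing was queued.
  console.warn("batch discarded in full: check your distinct_ids");
} else if (body.ok && body.operations_received < operations.length) {
  // The garbage-ID filter silently dropped some items.
  console.warn(`accepted ${body.operations_received} of ${operations.length} operations`);
}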

Processing order

The engine processes operations in this order regardless of how they appear in your operations array:

  1. Aliases first

    All type: "alias" operations are queued before anything else. This matters because subsequent people updates and track events on the new distinct_id will land cleanly after the alias is in flight.

  2. People next

    All type: "people" operations queued after aliases.

  3. Events last

    All type: "track" operations queued last.

Within each category, operations preserve the order you sent them — so two events on the same user keep their relative ordering even though they're queued after aliases / people.
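
A sketch of what this means in practice: the array below is built in the order the app generated the data, with the alias last, and the engine still queues the alias first.

TypeScript
// Sketch: the engine reorders by type, so the array can be built in
// app order. The alias sits last here but is still queued first.
const operations: BatchOperation[] = [
  { type: "track", payload: { event_name: "signup_viewed", distinct_id: "user_123" } },
  { type: "track", payload: { event_name: "signup_completed", distinct_id: "user_123" } },
  { type: "people", payload: { distinct_id: "user_123", properties: { plan: "free" } } },
  { type: "alias", payload: { alias_id: "anon_a3b9ff", distinct_id: "user_123" } },
];
// Effective queue order: alias, then people, then the two track events,
// which keep their relative order from the array.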

Validation

The handler runs these checks in order:

  1. Auth + Origin + IP

    Same as the single-event endpoints.

  2. JSON parse

    Body parses as { operations: [...] }. If not: 400.

  3. Operations not empty

    operations array is non-empty. If empty: 400 No operations provided.

  4. Per-operation type check

    Each operation has a known type. Unknown type → 400 unknown operation type (entire batch rejected).

  5. Per-operation payload parse

    Each operation's payload parses as the corresponding shape. Bad payload → 400 Invalid {track|people|alias} payload (entire batch rejected).

  6. Garbage-ID filter

    Per-item, the garbage-ID heuristic drops items silently from the batch.

  7. Queue + respond

    Aliases queued first, then people, then events; per-type counts returned.

There's no per-item error reporting — a malformed payload at index 47 fails the entire batch. If you need partial-failure semantics (some operations succeed, some fail), split into multiple batch calls or use the single-operation endpoints.
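
One way to approximate partial-failure handling on top of whole-batch rejection is to bisect: when a batch comes back 400, split it and retry each half, isolating the malformed operations in a logarithmic number of extra requests. A sketch, using a hypothetical `postBatch` transport helper (reused in the later examples):

TypeScript
// Hypothetical transport helper: POST a batch, return the HTTP status.
async function postBatch(ops: BatchOperation[]): Promise<number> {
  const res = await fetch("https://api.sankofa.dev/api/v1/batch", {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-api-key": "sk_live_..." },
    body: JSON.stringify({ operations: ops }),
  });
  return res.status;
}

// Sketch: send what the engine will take, return the malformed ops.
// A 400 means nothing in that request was queued, so re-sending the
// halves cannot double-ingest anything.
async function sendWithBisect(ops: BatchOperation[]): Promise<BatchOperation[]> {
  if (ops.length === 0) return [];
  if ((await postBatch(ops)) !== 400) return []; // accepted (or a non-validation error)
  if (ops.length === 1) return ops; // isolated a bad operation
  const mid = Math.floor(ops.length / 2);
  const bad = await sendWithBisect(ops.slice(0, mid));
  return bad.concat(await sendWithBisect(ops.slice(mid)));
}

Each split costs extra requests against the 500 req/min budget, so this fits debugging and backfills better than a hot path.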

Why use batch

| Single-operation endpoints | Batch endpoint |
| --- | --- |
| 1 request per operation | 1 request for ≤ 500 MB of operations |
| 500 ops/min before being rate-limited | Effectively unlimited (the rate limit is per-request, not per-op) |
| Higher per-op overhead (TLS handshake, network round-trip) | Single round-trip amortized across all ops |
| Easy retry / dedup | Whole-batch retry; per-op partial-failure handling is tricky |

The official SDKs use batch by default: they buffer ~50 events on mobile and ~100 on web, flushing roughly every 5 seconds or on app suspend.
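
The same shape is easy to reproduce outside the SDKs. A minimal sketch of a buffered client, reusing the hypothetical `postBatch` helper from the validation section (the size and interval defaults loosely mirror the web SDK figures above):

TypeScript
// Sketch: buffer operations and flush them as one batch request,
// loosely mirroring the web SDK defaults (~100 ops, ~5 s interval).
class BatchBuffer {
  private buffer: BatchOperation[] = [];

  constructor(private maxSize = 100, flushIntervalMs = 5000) {
    setInterval(() => void this.flush(), flushIntervalMs);
  }

  add(op: BatchOperation): void {
    this.buffer.push(op);
    if (this.buffer.length >= this.maxSize) void this.flush();
  }

  async flush(): Promise<void> {
    if (this.buffer.length === 0) return;
    const ops = this.buffer;
    this.buffer = []; // swap first so new ops keep buffering during the send
    await postBatch(ops); // one request, one unit of rate-limit budget
  }
}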

Size and rate considerations

| Limit | Value | Notes |
| --- | --- | --- |
| Max operations per batch | No documented limit | Practical limit is the body cap |
| Max body size | 500 MB | Engine-wide HTTP body limit |
| Rate limit | 500 requests / minute | Same limit as single-op endpoints |
| Max event size | No per-event cap | Constrained by the 500 MB body |

For most projects, sending batches of 100–500 operations every few seconds is the sweet spot. Larger batches reduce per-op overhead; smaller batches reduce loss exposure if the request fails before queueing.
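
For backfills larger than the sweet spot, a fixed-size chunker keeps individual requests modest without giving up the amortization. A sketch (the 500-op chunk size is a choice, not a documented limit; `postBatch` as above):

TypeScript
// Sketch: split a large operation list into fixed-size batch requests.
async function ingestAll(ops: BatchOperation[], chunkSize = 500): Promise<void> {
  for (let i = 0; i < ops.length; i += chunkSize) {
    // Sequential sends preserve relative ordering across chunks.
    await postBatch(ops.slice(i, i + chunkSize));
  }
}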

Idempotency

Batch operations are not idempotent — retrying a successful batch creates duplicate events / aliases / people updates. Only retry on 429, 503, and network failures.
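
In client code, that policy means treating 400 as terminal and backing off only on the retryable statuses. A sketch with plain exponential backoff (`postBatch` as above; add jitter in production):

TypeScript
// Sketch: retry only on 429 / 503 / network failure. Retrying any
// successful batch would ingest every operation in it twice.
async function sendWithRetry(ops: BatchOperation[], maxAttempts = 5): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const status = await postBatch(ops);
      if (status !== 429 && status !== 503) return; // success or terminal 4xx
    } catch {
      // Network failure: the response was lost, so a retry is allowed here.
    }
    await new Promise((r) => setTimeout(r, 2 ** attempt * 1000)); // backoff
  }
  // Gave up after maxAttempts; surface or persist ops for a later replay.
}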

Common errors

| Error | Status | Cause |
| --- | --- | --- |
| No operations provided | 400 | Empty `operations` array |
| unknown operation type | 400 | An operation's `type` is not `track` / `people` / `alias` |
| Invalid track payload | 400 | A `track` operation's payload doesn't parse as AnalyticsEvent |
| Invalid people payload | 400 | A `people` operation's payload doesn't parse as PersonProfile |
| Invalid alias payload | 400 | An `alias` operation's payload doesn't parse as PersonAlias |
| Rate limit exceeded | 429 | Hit the 500 req/min cap. See Rate limits |
| Service temporarily unavailable | 503 | Engine ingest buffer full; retry with backoff |
