Event Processing

Relay provides powerful server-side event processing features that transform, route, and aggregate events before they reach subscribers.


Event Deduplication

Prevent duplicate events by sending an idempotency key:

POST /api/v1/apps/{appId}/events
{
  "name": "order.created",
  "channels": ["orders"],
  "data": "{\"id\": 42}",
  "idempotency_key": "order-42-created"
}

If the same idempotency key is seen within the dedup window (default 5 minutes), the event is silently dropped and a success response is returned.
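Conceptually, the server-side check can be sketched as a rolling window keyed on the idempotency key. This is an illustrative in-memory model, not Relay's actual implementation:

```python
import time

class Deduplicator:
    """Sketch of idempotency-key deduplication with a rolling window."""

    def __init__(self, window_seconds=300):  # default 5-minute window
        self.window = window_seconds
        self.seen = {}  # idempotency_key -> first-seen timestamp

    def accept(self, idempotency_key, now=None):
        now = time.time() if now is None else now
        # Evict keys older than the dedup window.
        self.seen = {k: t for k, t in self.seen.items() if now - t < self.window}
        if idempotency_key in self.seen:
            return False  # duplicate: drop silently, still report success
        self.seen[idempotency_key] = now
        return True

d = Deduplicator()
d.accept("order-42-created", now=0)    # True: first delivery
d.accept("order-42-created", now=10)   # False: duplicate within window
d.accept("order-42-created", now=400)  # True: window has expired
```

Note that a duplicate still gets a success response from the API; only the fan-out is suppressed.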

Enable per app in Settings → Event Deduplication.


Scheduled Events

Publish events with a future delivery time:

POST /api/v1/apps/{appId}/events
{
  "name": "reminder",
  "channels": ["user-123"],
  "data": "{\"message\": \"Meeting in 5 minutes\"}",
  "deliver_at": "2024-03-15T14:00:00Z"
}

The event is held by Relay and delivered at the specified time. You can cancel scheduled events before delivery.
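The hold-and-deliver behavior amounts to a time-ordered queue that also honors cancellations. A minimal sketch (the `Scheduler` class and its method names are illustrative, not Relay's API):

```python
import heapq

class Scheduler:
    """Sketch of held delivery: events sit in a heap until deliver_at passes."""

    def __init__(self):
        self.heap = []          # (deliver_at, event_id, payload) tuples
        self.cancelled = set()  # ids of events cancelled before delivery

    def schedule(self, event_id, deliver_at, payload):
        heapq.heappush(self.heap, (deliver_at, event_id, payload))

    def cancel(self, event_id):
        self.cancelled.add(event_id)

    def due(self, now):
        """Pop and return every non-cancelled event whose time has come."""
        out = []
        while self.heap and self.heap[0][0] <= now:
            _, event_id, payload = heapq.heappop(self.heap)
            if event_id not in self.cancelled:
                out.append(payload)
        return out

s = Scheduler()
s.schedule("e1", 100, {"name": "reminder"})
s.schedule("e2", 200, {"name": "digest"})
s.cancel("e2")
s.due(250)  # -> [{"name": "reminder"}]
```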


Content-Based Routing

Route events to additional channels based on payload content:

Example rule: If data.priority == "urgent", also publish to alerts.*

Configure in app dashboard → Event Routes.
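A routing rule of that shape can be modeled as a payload predicate plus a list of extra target channels. The rule fields below (`field`, `equals`, `add_channels`) are an assumed shape for illustration, not the dashboard's actual schema:

```python
import json

def route(event, rules):
    """Sketch: evaluate routing rules against the payload and return the
    extra channels to publish to, in addition to the event's own channels."""
    data = json.loads(event["data"])
    extra = []
    for rule in rules:
        if data.get(rule["field"]) == rule["equals"]:
            extra.extend(rule["add_channels"])
    return extra

rules = [{"field": "priority", "equals": "urgent", "add_channels": ["alerts.orders"]}]
event = {"name": "order.created", "channels": ["orders"],
         "data": '{"priority": "urgent"}'}
route(event, rules)  # -> ["alerts.orders"]
```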


Event Bridges (Cross-App)

Route events between apps in the same organization:

  • App A's orders.created → App B's external.order
  • Pattern matching on source channels and events
  • Optional event name transformation

Channel State Machines

Define state machines for channels to enforce valid event sequences:

{
  "states": ["idle", "active", "paused", "closed"],
  "initial_state": "idle",
  "transitions": [
    {"from": "idle", "to": "active", "event": "start"},
    {"from": "active", "to": "paused", "event": "pause"},
    {"from": "paused", "to": "active", "event": "resume"},
    {"from": "active", "to": "closed", "event": "close"}
  ]
}

If reject_invalid_transitions is enabled, events that don't match a valid transition are rejected with a 422.
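The enforcement logic is a straightforward lookup in the transition table. A sketch using the machine definition above (`apply_event` is an illustrative name, not a Relay API):

```python
MACHINE = {
    "states": ["idle", "active", "paused", "closed"],
    "initial_state": "idle",
    "transitions": [
        {"from": "idle", "to": "active", "event": "start"},
        {"from": "active", "to": "paused", "event": "pause"},
        {"from": "paused", "to": "active", "event": "resume"},
        {"from": "active", "to": "closed", "event": "close"},
    ],
}

def apply_event(state, event_name, machine, reject_invalid_transitions=True):
    """Return the next state, or reject events that match no valid transition."""
    for t in machine["transitions"]:
        if t["from"] == state and t["event"] == event_name:
            return t["to"]
    if reject_invalid_transitions:
        raise ValueError(f"422: no transition from '{state}' on '{event_name}'")
    return state  # pass the event through without changing state

apply_event("idle", "start", MACHINE)  # -> "active"
```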


Aggregation Pipelines

Define rolling aggregations over event streams:

Example: Every 30 seconds, compute {count, total_amount, avg_amount} from orders.* events and publish to stats.orders.

Pipeline config:

  • Source — channel and event pattern to aggregate
  • Window — time window in seconds
  • Aggregations — count, sum, avg, min, max
  • Group by — optional payload field grouping
  • Output — channel and event name for results
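One pass of such a pipeline, once the window closes, reduces the collected events to the configured aggregates. A sketch for the orders example, assuming the payload carries an `amount` field:

```python
from collections import defaultdict

def aggregate(events, group_by=None):
    """Sketch of one window's aggregation pass: compute count/total/avg
    over the events collected during the window, optionally grouped."""
    groups = defaultdict(list)
    for e in events:
        key = e.get(group_by) if group_by else "_all"
        groups[key].append(e["amount"])
    return {
        key: {
            "count": len(amounts),
            "total_amount": sum(amounts),
            "avg_amount": sum(amounts) / len(amounts),
        }
        for key, amounts in groups.items()
    }

events = [{"amount": 10}, {"amount": 30}]
aggregate(events)
# -> {"_all": {"count": 2, "total_amount": 40, "avg_amount": 20.0}}
```

The result would then be published as a single event on the configured output channel (stats.orders in the example).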

Event Sourcing

Enable append-only event logging for channels that need full replay capability:

  • Every event gets a sequence number
  • Events are never modified or deleted
  • Replay from any sequence number
  • Causation and correlation IDs for tracing
Read the log back over the API:

GET /api/v1/apps/{appId}/intelligence/event-store/orders?from_sequence=100
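The store's semantics can be summarized in a few lines: appends assign monotonically increasing sequence numbers, nothing is ever mutated, and replay is a filtered read. An illustrative in-memory model:

```python
class EventStore:
    """Sketch of an append-only, per-channel event log with replay."""

    def __init__(self):
        self.log = []  # records are only ever appended, never modified

    def append(self, event, causation_id=None, correlation_id=None):
        record = dict(event,
                      sequence=len(self.log) + 1,  # monotonic sequence number
                      causation_id=causation_id,
                      correlation_id=correlation_id)
        self.log.append(record)
        return record["sequence"]

    def replay(self, from_sequence=1):
        """The moral equivalent of GET ...?from_sequence=N."""
        return [e for e in self.log if e["sequence"] >= from_sequence]

store = EventStore()
store.append({"name": "order.created"})   # -> sequence 1
store.append({"name": "order.shipped"})   # -> sequence 2
store.replay(from_sequence=2)             # only the second record
```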

Channel Groups

Group channels by pattern for shared configuration:

{
  "name": "Order Channels",
  "channel_patterns": ["orders-*", "checkout-*"],
  "config": {
    "message_ttl_seconds": 3600,
    "max_channels": 1000
  }
}

All channels matching the patterns inherit the group's rate limits, TTL, and schema validation settings.
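Resolving a channel's effective config is then a matter of finding the first group whose pattern matches. A sketch using glob-style matching (an assumption; the actual pattern semantics are defined by Relay):

```python
import fnmatch

GROUP = {
    "name": "Order Channels",
    "channel_patterns": ["orders-*", "checkout-*"],
    "config": {"message_ttl_seconds": 3600, "max_channels": 1000},
}

def config_for(channel, groups):
    """Return the config of the first group whose pattern matches the channel."""
    for group in groups:
        if any(fnmatch.fnmatch(channel, p) for p in group["channel_patterns"]):
            return group["config"]
    return {}  # no group matched: app-level defaults apply

config_for("orders-eu", [GROUP])
# -> {"message_ttl_seconds": 3600, "max_channels": 1000}
```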


Dead Letter Queue

When an event fails processing (schema validation failure, failed webhook delivery, or an edge function error), it is sent to the DLQ:

  • Inspect the original payload and error
  • Retry delivery (up to 3 attempts)
  • Discard if no longer needed

DLQ is accessible from the app dashboard.
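The retry-then-park flow looks roughly like this; the helper below is an illustrative sketch of the pattern, using the 3-attempt limit mentioned above:

```python
def process_with_dlq(event, handler, dlq, max_attempts=3):
    """Sketch: try a processing step up to max_attempts times, then park
    the event in the DLQ together with the last error for inspection."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return handler(event)
        except Exception as exc:
            last_error = exc
    dlq.append({"event": event,             # original payload, untouched
                "error": str(last_error),   # last failure, for inspection
                "attempts": max_attempts})
    return None

dlq = []

def always_fails(event):
    raise ValueError("schema validation failed")

process_with_dlq({"name": "order.created"}, always_fails, dlq)
# dlq now holds the payload, the error string, and the attempt count
```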


Fan-Out Controls

Control how events are delivered to subscribers:

Mode          Behavior
all           Deliver to all subscribers (default)
first_n       Only the first N subscribers receive the event
round_robin   Rotate delivery across subscribers
random        Randomly select N subscribers
Useful for task queues and competing consumer patterns.
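The four modes reduce to a small recipient-selection step at publish time. An illustrative sketch (the `cursor` argument models the server-held round-robin position):

```python
import random

def fan_out(subscribers, mode, n=1, cursor=0):
    """Sketch of the four fan-out modes: return the subscribers
    that should receive this event."""
    if mode == "all":
        return subscribers
    if mode == "first_n":
        return subscribers[:n]
    if mode == "round_robin":
        # One recipient per event; the cursor advances between events.
        return [subscribers[cursor % len(subscribers)]]
    if mode == "random":
        return random.sample(subscribers, min(n, len(subscribers)))
    raise ValueError(f"unknown fan-out mode: {mode}")

subs = ["conn-a", "conn-b", "conn-c"]
fan_out(subs, "all")                    # every subscriber
fan_out(subs, "first_n", n=2)           # ["conn-a", "conn-b"]
fan_out(subs, "round_robin", cursor=4)  # ["conn-b"]
```

For competing-consumer task queues, round_robin or random with n=1 ensures each event is handled by exactly one worker.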


Event Priority

Assign priority levels (0-9) to events. During backpressure, high-priority events are delivered first.

Configure per channel pattern in app settings.
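Under backpressure, the buffered events drain in priority order rather than arrival order. A sketch using a max-heap keyed on the 0-9 priority, with FIFO ordering preserved within each level:

```python
import heapq

class PriorityBuffer:
    """Sketch: during backpressure, buffered events drain highest-priority first."""

    def __init__(self):
        self.heap = []
        self.counter = 0  # tie-breaker: preserves FIFO within a priority level

    def push(self, event, priority):
        # heapq is a min-heap, so negate the 0-9 priority for max-first order.
        heapq.heappush(self.heap, (-priority, self.counter, event))
        self.counter += 1

    def pop(self):
        return heapq.heappop(self.heap)[2]

buf = PriorityBuffer()
buf.push("newsletter.sent", priority=1)
buf.push("payment.failed", priority=9)
buf.pop()  # -> "payment.failed", despite arriving second
```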