Connect to MySQL binlog or PostgreSQL logical replication. Wildcard table patterns supported.
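As an illustration only (these field names are hypothetical, not DeltaForge's documented schema), a source block with wildcard table patterns might look like:

```yaml
source:
  type: mysql              # or: postgres (logical replication)
  config:
    host: db.internal
    tables:
      - shop.orders_*      # wildcard: matches orders_2023, orders_2024, ...
      - shop.customers     # exact table name
```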
Processors
Chain native and scripted processors in order. Here: Filter drops draft orders in native Rust, then Flatten normalizes nested JSON to parent__child keys before delivery.
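A minimal sketch of the chain described above, with assumed processor and field names (the real config keys may differ):

```yaml
processors:
  - type: filter           # native Rust: drop draft orders
    config:
      condition: after.status != 'draft'
  - type: flatten          # nested JSON -> parent__child keys
    config:
      separator: "__"
```

Processors run in the order listed, so the filter discards events before the flatten step does any work.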
Deliver
Fan-out to Kafka, Redis, and NATS simultaneously. Each sink is independent — one failure won't block the others.
Route
Template strings resolve per-event: cdc.${source.table} routes each table to its own topic.
Envelope
Choose the wire format consumers expect: Native, Debezium-compatible, or CloudEvents 1.0. One config field, no consumer code changes.
Checkpoint
End-to-end exactly-once (Kafka) or at-least-once with dedup (NATS, Redis). Per-sink independent checkpoints — fastest sink is never held back by the slowest.
Why DeltaForge
Built for production CDC
Stream database changes reliably with minimal overhead.
High Performance
Single binary, no JVM, no GC pauses. 151K events/s from MySQL, 57K from Postgres on a single developer machine — even with exactly-once enabled (134K / 53K). Sub-millisecond latency with minimal memory footprint.
Delivery Guarantees
End-to-end exactly-once for Kafka (transactional producer, ~7-11% overhead). At-least-once with server dedup for NATS, consumer dedup for Redis. Per-sink independent checkpoints — each sink advances at its own pace. Configurable commit policies: all, required, or quorum.
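A hedged sketch of how per-sink guarantees and a commit policy might be expressed (key names are illustrative assumptions, not the documented schema):

```yaml
checkpoint:
  commit_policy: quorum        # or: all, required
sinks:
  - type: kafka
    delivery: exactly_once     # transactional producer
  - type: nats
    delivery: at_least_once    # server-side dedup
```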
Initial Load & Snapshot
Consistent snapshot of existing table data before streaming begins. Tables run in parallel with chunked range reads for large datasets. Lock-free InnoDB approach — no RELOAD privilege required. Resumes at table granularity after interruption.
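A possible shape for the snapshot settings described above (field names are assumptions for illustration):

```yaml
snapshot:
  enabled: true
  parallel_tables: 4       # tables snapshot concurrently
  chunk_size: 10000        # chunked range reads for large tables
```

After an interruption, completed tables would be skipped and only unfinished ones re-run, matching the table-granularity resume described above.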
Dynamic Routing
Route events to per-table topics, streams, or subjects using template strings or JavaScript logic. No pipeline restart required when routing rules change.
Multi-Sink Fan-Out
Deliver concurrently to Kafka, Redis, NATS, and HTTP/Webhooks in a single pipeline. Per-sink independent checkpoints — each sink advances at its own pace. A slow Redis cache won't hold back your Kafka stream.
Schema Discovery
Automatically infer and track schemas from live event payloads. Detects structural drift in nested JSON and alerts before it breaks downstream consumers.
Avro + Schema Registry
Confluent wire format with DDL-derived type-accurate schemas. Safe defaults for unsigned integers, enums, and timestamps. Schema Registry failure handled gracefully with cached fallback. Native to Kafka Connect, ksqlDB, and Flink.
Live Observability
Prometheus metrics, structured logging, and health probes built in. Pause, resume, patch, and inspect running pipelines via REST API — no redeployment needed.
Zero-Downtime DDL
Detects schema changes and reloads automatically. No pipeline restart, no manual intervention, no dropped events during table alterations.
Graceful Failover
Detects primary server changes via server identity comparison, verifies that prior checkpoints are covered by the new primary's transaction history, reconciles schema drift, and resumes streaming automatically. Configurable halt-on-drift policy for stricter environments.
Dead Letter Queue
Poison events (serialization or routing failures) are routed to a DLQ instead of blocking the pipeline. Inspect, filter, and ack via REST API. Configurable overflow policies. Built on existing storage — no additional infrastructure.
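A sketch of a DLQ block under assumed key names (the actual option names and accepted policy values are not shown in this copy):

```yaml
dead_letter:
  enabled: true
  overflow_policy: drop_oldest   # hypothetical: how to behave when the DLQ fills
```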
Kubernetes Ready
Helm chart with StatefulSet, persistent checkpoints, health probes, and Prometheus ServiceMonitor. Secrets stay in K8s Secrets — only credentials, not connection details. One command to deploy.
Output Formats
Three envelope formats, one config field
Match the wire format your consumers expect. Switch with a single line of YAML.
Native: lowest overhead
Minimal envelope for DeltaForge-native consumers. No wrapper overhead.
All envelopes support both JSON and Avro wire encoding. Avro uses the Confluent wire format with Schema Registry for compact binary payloads and schema governance.
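Switching envelopes is described as a one-field change; a hedged sketch, with assumed key names:

```yaml
sinks:
  - type: kafka
    envelope: debezium     # or: native, cloudevents
    encoding: avro         # or: json (Avro uses the Confluent wire format)
```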
Dynamic Routing
Route every event to the right destination
Template strings for simple per-table routing. JavaScript for conditional business logic.
# Routes each table to its own topic automatically
sinks:
  - type: kafka
    config:
      topic: cdc.${source.db}.${source.table}
      key: ${after.customer_id}
  - type: redis
    config:
      stream: events:${source.table}
      key: ${after.id}
  - type: nats
    config:
      subject: cdc.${source.db}.${source.table}
# Available: ${source.table}, ${source.db}, ${op},
# ${after.<field>}, ${before.<field>}, ${tenant_id}
function processBatch(events) {
for (const ev of events) {
if (!ev.after) continue;
if (ev.after.total_amount > 10000) {
// High-value orders → priority topic
ev.route({
topic: "orders.priority",
key: String(ev.after.customer_id),
headers: {
"x-tier": "high-value",
"x-amount": String(ev.after.total_amount)
}
});
} else {
ev.route({
topic: "orders.standard",
key: String(ev.after.customer_id)
});
}
}
return events;
}