A versatile, high-performance Change Data Capture engine built in Rust. Stream database changes to Kafka, Redis, NATS, and more.

⚠️ Active development

Built with: Rust
Sources: MySQL · PostgreSQL
Processors: JavaScript · Outbox · Flatten · Filter
Sinks: Kafka · Redis · NATS · HTTP
Formats: JSON · Avro encodings; Native · Debezium · CloudEvents envelopes
Define a pipeline in YAML. Run it.
No boilerplate code, no JVM, no Zookeeper. One config file, one binary.
pipeline.yaml
```yaml
metadata:
  name: orders-to-kafka
  tenant: acme
spec:
  source:
    type: mysql
    config:
      dsn: ${MYSQL_DSN}
      tables: [shop.orders]
  processors:
    - type: filter
      ops: [create, update]
      fields:
        - path: status
          op: ne
          value: "draft"
    - type: flatten
      separator: "__"
      empty_list: drop
  sinks:
    - type: kafka
      config:
        brokers: ${KAFKA_BROKERS}
        topic: cdc.${source.table}
      envelope:
        type: debezium
```
Capture
Connect to MySQL binlog or PostgreSQL logical replication. Wildcard table patterns supported.
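As a sketch, a wildcard pattern could capture every table in a schema alongside explicit entries. The keys below follow the pipeline example above; the glob-style `shop.*` pattern syntax is an assumption for illustration:

```yaml
source:
  type: postgres
  config:
    dsn: ${POSTGRES_DSN}
    tables:
      - shop.*          # every table in the shop schema (assumed glob syntax)
      - audit.events    # plus one table named explicitly
```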
Processors
Chain native and scripted processors in order. Here: Filter drops draft orders in native Rust, then Flatten normalizes nested JSON to parent__child keys before delivery.
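The same filtering step could also be written as a scripted processor. This is a hedged sketch, not the engine's actual API surface: it reuses the `processBatch` hook shown later on this page and assumes the event shape (`op`, `after`) from the envelope examples, with `"c"`/`"u"` as the create/update op codes:

```javascript
// Hypothetical scripted equivalent of the native filter above:
// keep only creates/updates whose status is not "draft".
function processBatch(events) {
  return events.filter((ev) => {
    // ops: [create, update] — "c"/"u" op codes assumed from the envelope examples
    if (ev.op !== "c" && ev.op !== "u") return false;
    // fields: status ne "draft"
    return !ev.after || ev.after.status !== "draft";
  });
}
```

In practice the native Rust filter shown in the config is the cheaper choice; a scripted processor earns its keep when the predicate needs real business logic.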
Deliver
Fan-out to Kafka, Redis, and NATS simultaneously. Each sink is independent — one failure won't block the others.
Route
Template strings resolve per-event: cdc.${source.table} routes each table to its own topic.
Envelope
Choose the wire format consumers expect: Native, Debezium-compatible, or CloudEvents 1.0. One config field, no consumer code changes.
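Switching formats is a one-line change on the sink. The `debezium` value comes from the pipeline example above; the `native` and `cloudevents` spellings are assumed for illustration:

```yaml
sinks:
  - type: kafka
    config:
      brokers: ${KAFKA_BROKERS}
      topic: cdc.${source.table}
    envelope:
      type: cloudevents   # or: native | debezium (value spellings assumed)
```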
Checkpoint
End-to-end exactly-once (Kafka) or at-least-once with dedup (NATS, Redis). Per-sink independent checkpoints — fastest sink is never held back by the slowest.
Built for production CDC
Stream database changes reliably with minimal overhead.
High Performance
Single binary, no JVM, no GC pauses. 151K events/s from MySQL, 57K from Postgres on a single developer machine — even with exactly-once enabled (134K / 53K). Sub-millisecond latency with minimal memory footprint.
Delivery Guarantees
End-to-end exactly-once for Kafka (transactional producer, ~7-11% overhead). At-least-once with server dedup for NATS, consumer dedup for Redis. Per-sink independent checkpoints — each sink advances at its own pace. Configurable commit policies: all, required, or quorum.
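As a hedged sketch of what a commit policy might look like in the pipeline YAML — the `checkpoint` and `commit_policy` key names are illustrative, while the `all` / `required` / `quorum` values come from the list above:

```yaml
checkpoint:
  commit_policy: quorum   # illustrative key name; values: all | required | quorum
```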
Initial Load & Snapshot
Consistent snapshot of existing table data before streaming begins. Tables run in parallel with chunked range reads for large datasets. Lock-free InnoDB approach — no RELOAD privilege required. Resumes at table granularity after interruption.
Dynamic Routing
Route events to per-table topics, streams, or subjects using template strings or JavaScript logic. No pipeline restart required when routing rules change.
Multi-Sink Fan-Out
Deliver concurrently to Kafka, Redis, NATS, and HTTP/Webhooks in a single pipeline. Per-sink independent checkpoints — each sink advances at its own pace. A slow Redis cache won't hold back your Kafka stream.
Schema Discovery
Automatically infer and track schemas from live event payloads. Detects structural drift in nested JSON and alerts before it breaks downstream consumers.
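Conceptually, drift detection reduces to inferring a field-to-type map from each payload and diffing it against the last schema seen. A toy sketch of that idea — not DeltaForge's actual implementation:

```javascript
// Toy illustration of structural-drift detection, not the engine's real code.
// Infers a flat "field path -> type" map, recursing into nested objects.
function inferSchema(obj, prefix = "") {
  const schema = {};
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      Object.assign(schema, inferSchema(value, path));
    } else {
      schema[path] = Array.isArray(value) ? "array" : typeof value;
    }
  }
  return schema;
}

// Reports fields added, removed, or retyped between two inferred schemas.
function diffSchemas(prev, next) {
  const drift = [];
  for (const path of Object.keys(next)) {
    if (!(path in prev)) drift.push(`added: ${path}`);
    else if (prev[path] !== next[path]) drift.push(`retyped: ${path}`);
  }
  for (const path of Object.keys(prev)) {
    if (!(path in next)) drift.push(`removed: ${path}`);
  }
  return drift;
}
```

A non-empty diff is exactly the "structural drift" signal the engine alerts on before it reaches downstream consumers.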
Avro + Schema Registry
Confluent wire format with DDL-derived type-accurate schemas. Safe defaults for unsigned integers, enums, and timestamps. Schema Registry failure handled gracefully with cached fallback. Native to Kafka Connect, ksqlDB, and Flink.
Live Observability
Prometheus metrics, structured logging, and health probes built in. Pause, resume, patch, and inspect running pipelines via REST API — no redeployment needed.
Zero-Downtime DDL
Detects schema changes and reloads automatically. No pipeline restart, no manual intervention, no dropped events during table alterations.
Graceful Failover
Detects primary server changes via server identity comparison, verifies that prior checkpoints are covered by the new primary's transaction history, reconciles schema drift, and resumes streaming automatically. Configurable halt-on-drift policy for stricter environments.
Dead Letter Queue
Poison events (serialization or routing failures) are routed to a DLQ instead of blocking the pipeline. Inspect, filter, and ack via REST API. Configurable overflow policies. Built on existing storage — no additional infrastructure.
Kubernetes Ready
Helm chart with StatefulSet, persistent checkpoints, health probes, and Prometheus ServiceMonitor. Secrets stay in K8s Secrets — only credentials, not connection details. One command to deploy.
Three envelope formats, one config field
Match the wire format your consumers expect. Switch with a single line of YAML.
Native · Lowest overhead
Minimal envelope for DeltaForge-native consumers. No wrapper overhead.
{ "op": "c", "after": { "id": 1, "status": "new" }, "source": { "table": "orders" } }
Debezium · Drop-in compatible
Schemaless mode compatible with JsonConverter schemas.enable=false.
{ "schema": null, "payload": { "op": "c", "after": { "id": 1, "status": "new" } } }
CloudEvents · CNCF standard
CNCF CloudEvents 1.0 spec for event-driven and serverless architectures.
{ "specversion": "1.0", "type": "com.acme.orders.create", "data": { "after": { "id": 1, "status": "new" } } }
All envelopes support both JSON and Avro wire encoding. Avro uses the Confluent wire format with Schema Registry for compact binary payloads and schema governance.
Route every event to the right destination
Template strings for simple per-table routing. JavaScript for conditional business logic.
```yaml
# Routes each table to its own topic automatically
sinks:
  - type: kafka
    config:
      topic: cdc.${source.db}.${source.table}
      key: ${after.customer_id}
  - type: redis
    config:
      stream: events:${source.table}
      key: ${after.id}
  - type: nats
    config:
      subject: cdc.${source.db}.${source.table}

# Available: ${source.table}, ${source.db}, ${op},
# ${after.<field>}, ${before.<field>}, ${tenant_id}
```
```javascript
function processBatch(events) {
  for (const ev of events) {
    if (!ev.after) continue;
    if (ev.after.total_amount > 10000) {
      // High-value orders → priority topic
      ev.route({
        topic: "orders.priority",
        key: String(ev.after.customer_id),
        headers: {
          "x-tier": "high-value",
          "x-amount": String(ev.after.total_amount)
        }
      });
    } else {
      ev.route({
        topic: "orders.standard",
        key: String(ev.after.customer_id)
      });
    }
  }
  return events;
}
```
Up and running in seconds
Single binary. No runtime dependencies.

Docker

Recommended
docker pull ghcr.io/vnvo/deltaforge:latest

Docker Hub

Alternative registry
docker pull vnvohub/deltaforge:latest

From Source

Requires Rust 1.89+
cargo build --release -p runner