14. 04. 2026
When you're building fast and iterating hard, Redis is almost always the right first choice for async message queuing. It's already in the stack, configuration is trivial, and Symfony Messenger's Redis transport just works. That's exactly the position we were in — one broker handling everything: emails, DNS checks, notifications, and event ingestion, all running as Redis Streams.
Then the event ingestion pipeline started showing cracks.
Redis Streams is a genuinely good general-purpose queue. The problem isn't Redis — it's the specific demands of a high-volume, time-series event pipeline. Four issues compounded over time:
Ordering guarantees disappear at scale. When multiple consumer groups process messages concurrently, there's no mechanism to enforce causal ordering. For event ingestion, that's a real correctness problem: a later snapshot can overwrite an earlier one, silently corrupting the timeline.
No replay. Redis Streams has no offset-based history: once a message is acknowledged, the Messenger transport removes it from the stream, and a consumer can never rewind to re-read what came before. For transactional queues like email, that's tolerable (failures are retried explicitly). For an ordered event log, it's a fundamental gap.
Memory pressure under load. Redis keeps everything in RAM. A worker outage means the queue depth grows unbounded until the container buckles. On a small dev or staging stack, this is a real operational risk.
Observability is thin. XINFO STREAM gives you the basics, but compared to Kafka-native tooling — rpk, Kafka UI, schema registry integration — it's a significant step down.
None of these are Redis's fault. It's doing exactly what it was designed to do. The pipeline had simply outgrown the use case.
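To make the ordering hazard concrete, here's a minimal, self-contained sketch — illustrative only, not project code — of how out-of-order delivery corrupts a last-write-wins snapshot store:

```python
# Two competing consumers pull from the same stream. Without a partition
# key guaranteeing per-entity ordering, the older snapshot can be applied
# last and silently win.

def apply_snapshots(events):
    """Last-write-wins store keyed by entity id."""
    state = {}
    for event in events:
        state[event["id"]] = event["value"]
    return state

# Events as produced: v2 supersedes v1 for entity "a".
produced = [
    {"id": "a", "value": "v1"},
    {"id": "a", "value": "v2"},
]

# With competing consumers, delivery order is not guaranteed.
reordered = list(reversed(produced))

assert apply_snapshots(produced)["a"] == "v2"   # correct timeline
assert apply_snapshots(reordered)["a"] == "v1"  # stale snapshot wins
```

Kafka-style partitioning sidesteps this by hashing a key (here, the entity id) to a partition, where ordering is strict.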
The options were Kafka (heavyweight, JVM dependency, complex cluster setup) or Redpanda — a Kafka API-compatible broker that ships as a single binary, no JVM required, with Docker-native ergonomics. For a lean MVP stack, the choice was easy.
Redpanda gives you the full Kafka semantics — partitioned, ordered, append-only log; consumer group offsets; configurable retention — without the operational overhead. The Kafka API compatibility means every tool in the ecosystem (rpk, Kafka UI, schema registry) works out of the box.
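The tooling difference is tangible in day-to-day debugging. A few representative rpk commands (topic and group names here are assumptions, not the project's actual identifiers):

```
# Inspect the ingestion topic: partitions, offsets, configuration
rpk topic describe event_ingestion

# Watch consumer group membership and per-partition lag
rpk group describe event_ingestion_consumer

# Tail live messages off the end of the log
rpk topic consume event_ingestion --offset end
```

Nothing comparable exists for Redis Streams beyond XINFO and manual XRANGE spelunking.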
The infrastructure changes were surgical. A new redpanda service in docker-compose.yml — single-node, dev-container mode, 512 MB heap, with a healthcheck via rpk cluster health. A named volume redpanda_data handles persistence across restarts.
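As a sketch of what that service looks like — image tag, flags, and paths here are assumptions based on Redpanda's standard Docker setup, not the project's actual file:

```yaml
services:
  redpanda:
    image: redpandadata/redpanda:latest
    command:
      - redpanda
      - start
      - --mode=dev-container    # relaxed settings for single-node dev use
      - --smp=1
      - --memory=512M
      - --kafka-addr=PLAINTEXT://0.0.0.0:9092
      - --advertise-kafka-addr=PLAINTEXT://redpanda:9092
    volumes:
      - redpanda_data:/var/lib/redpanda/data
    healthcheck:
      test: ["CMD-SHELL", "rpk cluster health"]
      interval: 10s
      retries: 5

volumes:
  redpanda_data:
```

The dev-container mode disables production safety checks that don't apply to a single-node setup, which is what keeps the memory footprint this small.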
For local development, a separate compose.development.yaml maps the ports you actually need day-to-day: Kafka API on 19092, REST Proxy on 18082, Schema Registry on 18081, and Admin on 9644. The CI/CD pipeline was updated to wait on Redpanda's healthcheck before starting worker containers — a straightforward dependency ordering change.
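A sketch of the development overlay and the dependency ordering — port values are from above, but the service shape is an assumption:

```yaml
services:
  redpanda:
    ports:
      - "19092:19092"   # Kafka API for host-side clients
      - "18082:18082"   # REST Proxy
      - "18081:18081"   # Schema Registry
      - "9644:9644"     # Admin API

  worker:
    depends_on:
      redpanda:
        condition: service_healthy   # gate workers on the healthcheck
```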
The Makefile picked up new targets: logs-redpanda, and extensions to messenger-stats and messenger-purge to accommodate Kafka transports alongside the existing Redis ones.
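A hypothetical fragment of what those targets might look like — the target names come from above, but the recipes (including the consumer group name) are assumptions:

```make
logs-redpanda:
	docker compose logs -f redpanda

messenger-stats:
	docker compose exec php bin/console messenger:stats
	docker compose exec redpanda rpk group describe event_ingestion_consumer

messenger-purge:
	docker compose exec redpanda rpk topic delete event_ingestion
	docker compose exec redpanda rpk topic create event_ingestion
```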
The application-level change was the cleanest part. Symfony Messenger's transport layer is already abstracted behind a DSN environment variable. Switching the event ingestion pipeline from Redis to Redpanda is a single line:
# Redis (default — still used for all other transports)
MESSENGER_EVENT_INGESTION_TRANSPORT_DSN=redis://redis:6379/event-ingestion
# Redpanda / Kafka
MESSENGER_EVENT_INGESTION_TRANSPORT_DSN=kafka://redpanda:9092?topics[]=event_ingestion
No application code changed. No message handler changes, no serializer changes, nothing. The transport abstraction held perfectly.
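For context, the Messenger wiring this relies on looks roughly like the following — the transport name and the routed message class are assumptions for illustration:

```yaml
# config/packages/messenger.yaml
framework:
  messenger:
    transports:
      event_ingestion:
        dsn: '%env(MESSENGER_EVENT_INGESTION_TRANSPORT_DSN)%'
    routing:
      'App\Message\IngestEvent': event_ingestion
```

Because handlers only ever see the message object, the broker behind the DSN is invisible to application code.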
It's worth being explicit: Redis is still in the stack, and it should be. Email delivery, DNS checks, and notification queues are short-lived, transactional messages where Redis is the right tool. They don't need replay semantics or strict ordering — they need to be fast and simple, and Redis delivers that. It also continues to serve the session store.
The stack now has two brokers with a clear division of responsibility. Redis handles short, transactional queues. Redpanda holds the event ingestion log, which is ordered, replayable, and backed by a tooling ecosystem that makes debugging and introspection significantly more practical.
The event_ingestion transport is fully configured and ready to flip. What remains is load testing the consumer behavior under realistic write volume and tuning partition counts before the pipeline goes live in production. That's a story for the next post.
This is part of an ongoing series documenting infrastructure decisions made during the development of an unreleased product. Names and project-specific details have been omitted.