I’ve spent years designing Rails systems that respond to events in real time. Event-driven architectures let applications scale smoothly while maintaining clear boundaries between components. Here are seven patterns I regularly use to build robust systems:
Event Publishers Inside Transactions
Wrapping event emission in database transactions prevents inconsistent states. If the main operation fails, no events fire. I implement it like this:
class PaymentService
  def capture(payment_id)
    Payment.transaction do
      payment = Payment.lock.find(payment_id)
      payment.capture!
      EventPublisher.publish(:payment_captured, payment.attributes)
    end
  end
end
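The EventPublisher above is deliberately generic. As a rough sketch, assume an in-process publisher that simply hands events to the bus from the next pattern (the Event struct here is illustrative, not part of any library):
Event = Struct.new(:name, :payload)

class EventPublisher
  # Hand the event straight to the in-process bus; swap in a broker client if needed.
  def self.publish(name, payload)
    EventBus.dispatch(Event.new(name, payload))
  end
end
Because the publish call sits inside the transaction block, a failure in payment.capture! means the line never executes and no event escapes.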
Routing with Event Buses
Centralized routing logic keeps publishers decoupled from subscribers. My bus implementations map event types to handlers:
class EventBus
  HANDLERS = {
    invoice_approved: [Notifications::EmailSender, Accounting::LedgerUpdater],
    user_registered: [Analytics::Tracker, Onboarding::SequenceStarter]
  }.freeze

  def self.dispatch(event)
    HANDLERS[event.name]&.each { |handler| handler.process(event.payload) }
  end
end
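A handler only has to respond to process. A hypothetical subscriber, shown purely to illustrate the contract the bus expects (InvoiceMailer is made up for this example):
module Notifications
  class EmailSender
    # Receives the raw payload hash the publisher emitted.
    def self.process(payload)
      InvoiceMailer.approved(payload[:invoice_id]).deliver_later
    end
  end
end
Keeping handlers as plain classes with a single class method makes them easy to list in HANDLERS and to test in isolation.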
Idempotency Keys for Safety
Duplicate events are inevitable in distributed systems. I use Redis-backed checks to make sure each one is processed only once:
class WebhookReceiver
  REDIS = Redis.new # replace with your app's shared connection or pool

  def receive(request)
    key = "processed:#{request.idempotency_key}"
    return if REDIS.exists?(key)

    ActiveRecord::Base.transaction do
      process_event(request.payload)
      REDIS.setex(key, 86_400, 1) # remember the key for 24 hours
    end
  end
end
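One caveat: the existence check and the setex above are separate Redis calls, so two concurrent deliveries of the same webhook could both slip past the guard. When that matters, an atomic SET with NX and EX closes the gap; a minimal sketch against the same REDIS connection:
# Returns true only for the caller that creates the key.
def first_time?(idempotency_key)
  REDIS.set("processed:#{idempotency_key}", 1, nx: true, ex: 86_400)
end
The trade-off is that the key is claimed before processing, so a failed attempt needs an explicit retry or cleanup path.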
Event-Sourced Aggregates
For critical entities like orders, I store state changes as immutable events:
class Order
  attr_reader :items

  def initialize
    @items = []
  end

  def apply_event(event)
    case event.type
    when :item_added
      items << event.payload[:item]
    when :quantity_changed
      find_item(event.payload[:item_id]).update_quantity(event.payload[:quantity])
    end
  end
end
# Rebuilding state: replay events in the order they occurred
order = Order.new
OrderEvent.where(order_id: order_id).order(:created_at).each { |e| order.apply_event(e) }
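On the write side, each state change is appended as a new row rather than an update. A hypothetical append (the OrderEvent columns are assumed; a column literally named type also needs self.inheritance_column = nil in the model, or a different name, to avoid colliding with Rails single-table inheritance):
OrderEvent.create!(
  order_id: order.id,
  type: 'item_added',
  payload: { item: item }
)
Because events are immutable, correcting a mistake means appending a compensating event rather than editing history.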
Transactional Outbox Pattern
To prevent events from being lost during failures, I persist them in the same database commit as the change that produced them:
class Outbox
  # Call inside the same transaction that saves the aggregate,
  # so business data and outbox rows commit (or roll back) together.
  def record_events(aggregate)
    aggregate.events.each do |event|
      OutboxMessage.create!(
        topic: 'orders',
        payload: event.to_json,
        processed: false
      )
    end
  end
end
# Separate worker process
OutboxMessage.where(processed: false).find_each do |msg|
  KafkaProducer.deliver(msg.topic, msg.payload)
  msg.update!(processed: true)
end
Dead Letter Handling
When events repeatedly fail, I isolate them for investigation:
# Assuming an ActiveJob-based consumer (executions and retry_job come from ActiveJob)
class EventConsumer < ApplicationJob
  rescue_from(StandardError) do |exception|
    if executions >= 3
      DeadLetter.create!(
        original_event: arguments.to_json,
        error: exception.message,
        failed_at: Time.current
      )
    else
      retry_job(wait: (2**executions).seconds) # exponential backoff
    end
  end
end
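Once a failing event is parked, it can be replayed after the underlying bug is fixed. A sketch of a replay task, assuming a replayed_at column added to dead letters for bookkeeping:
DeadLetter.where(replayed_at: nil).find_each do |letter|
  EventConsumer.perform_later(*JSON.parse(letter.original_event))
  letter.update!(replayed_at: Time.current)
end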
Circuit Breakers for Fault Tolerance
I protect against cascading failures with state-aware proxies:
class InventoryServiceClient
  def reserve_stock(items)
    return :service_unavailable if circuit_breaker.open?

    response = HTTP.post(inventory_url, json: items)
    circuit_breaker.success
    response
  rescue HTTP::TimeoutError # the http gem raises this rather than Timeout::Error
    circuit_breaker.failure
    :timeout
  end
end
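The circuit_breaker object can come from a gem or be a small piece of state you own. A minimal single-process sketch (no half-open probing, no state shared across processes), just to make the open?/success/failure contract concrete:
class CircuitBreaker
  def initialize(threshold: 5, cooldown: 30)
    @threshold = threshold # consecutive failures before the circuit opens
    @cooldown = cooldown   # seconds to reject calls once open
    @failures = 0
    @opened_at = nil
  end

  def open?
    @opened_at && (Time.now - @opened_at) < @cooldown
  end

  def success
    @failures = 0
    @opened_at = nil
  end

  def failure
    @failures += 1
    @opened_at = Time.now if @failures >= @threshold
  end
end
A production breaker usually needs shared storage (Redis, for example) so every process agrees on whether the circuit is open.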
Testing these patterns requires specific approaches. I verify idempotency with duplicate event simulations and use contract tests for event schemas. For monitoring, I track key metrics like event delivery latency and dead letter queue sizes. Schema evolution is managed through versioned payloads:
# Versioned event schema
EventPublisher.publish(:order_created, {
  schema_version: '1.2',
  order: {
    id: order.id,
    # New fields are appended at the end
    discount_type: 'loyalty'
  }
})
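The duplicate-event simulation mentioned above translates directly into a spec. A rough sketch, assuming RSpec and a test Redis instance backing the WebhookReceiver from earlier (the request double is illustrative):
RSpec.describe WebhookReceiver do
  it 'processes a payload only once per idempotency key' do
    request = double(idempotency_key: SecureRandom.uuid, payload: { amount: 100 })
    receiver = described_class.new

    # The second delivery should be recognized and skipped.
    expect(receiver).to receive(:process_event).once

    2.times { receiver.receive(request) }
  end
end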
These patterns form a toolkit for building resilient systems. They’ve helped me design applications that process thousands of events per second while maintaining data integrity. The key is starting simple with transactional publishers, then layering complexity as needed.