Building real-time features in Rails applications requires thoughtful architecture. Action Cable provides the foundation, but effective implementation demands specific techniques. I’ve refined these approaches through numerous production deployments.
Secure connections form the bedrock of real-time systems. Here’s how I handle authentication:
# app/channels/application_cable/connection.rb
class ApplicationCable::Connection < ActionCable::Connection::Base
  identified_by :current_user

  def connect
    self.current_user = verify_user
    track_connection_metrics
  end

  private

  def verify_user
    User.find_by(verification_token: request.params[:token]) || reject_unauthorized_connection
  end

  def track_connection_metrics
    MetricsCollector.record_connection(current_user.id)
    logger.info "Verified connection for #{current_user.email}"
  end
end
Targeted channel streams prevent data leaks between users. I implement granular resource targeting:
# app/channels/project_updates_channel.rb
class ProjectUpdatesChannel < ApplicationCable::Channel
  def subscribed
    project = Project.find(params[:project_id])
    return unless authorize_project_access(project)

    stream_for project
    track_subscription(project)
  end

  private

  # Returns true when access is granted; rejects the subscription otherwise.
  def authorize_project_access(project)
    return true if current_user.projects.include?(project)

    reject
    AuditLog.record_access_violation(current_user, project)
    false
  end

  def track_subscription(project)
    ProjectAnalytics.new(project).log_subscriber(current_user)
  end
end
Background broadcasting keeps applications responsive. I decouple processing from delivery:
# app/services/realtime_broadcaster.rb
class RealtimeBroadcaster
  BROADCAST_QUEUE = :critical
  COMPRESSION_THRESHOLD = 1.kilobyte

  def self.deliver_update(channel, payload)
    ActionCable.server.broadcast(channel, compress_payload(payload))
  rescue StandardError => e
    ErrorTracker.notify(e)
    schedule_retry(channel, payload)
  end

  def self.compress_payload(payload)
    json = payload.to_json
    return payload if json.bytesize < COMPRESSION_THRESHOLD

    # Base64-encode the deflated bytes so the result survives JSON transport.
    { compressed: true, data: Base64.strict_encode64(Zlib::Deflate.deflate(json)) }
  end

  def self.schedule_retry(channel, payload)
    BroadcastRetryJob.set(queue: BROADCAST_QUEUE).perform_later(channel, payload)
  end

  private_class_method :compress_payload, :schedule_retry
end
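The compression scheme can be exercised outside Rails. Here is a minimal sketch of the deflate/Base64 round trip, where `decompress_payload` is a hypothetical receiving-side counterpart that the examples above do not define:

```ruby
require "json"
require "zlib"
require "base64"

THRESHOLD = 1024 # bytes, mirroring the 1.kilobyte cutoff above

# Deflate and Base64-encode payloads that exceed the threshold.
def compress_payload(payload)
  json = JSON.generate(payload)
  return payload if json.bytesize < THRESHOLD

  { compressed: true, data: Base64.strict_encode64(Zlib::Deflate.deflate(json)) }
end

# Reverse the transformation on the receiving side.
def decompress_payload(message)
  return message unless message.is_a?(Hash) && message[:compressed]

  JSON.parse(Zlib::Inflate.inflate(Base64.strict_decode64(message[:data])))
end

large = { status: "active", log: "x" * 2000 }
round_tripped = decompress_payload(compress_payload(large))
```

Small payloads pass through untouched, so the receiver only pays the inflate cost when compression actually saved bandwidth.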
# app/jobs/update_project_status_job.rb
class UpdateProjectStatusJob < ApplicationJob
  queue_as :realtime

  def perform(project_id)
    project = Project.find(project_id)
    RealtimeBroadcaster.deliver_update(
      # stream_for generates a namespaced stream name, so resolve it the same way.
      ProjectUpdatesChannel.broadcasting_for(project),
      ProjectStatusSerializer.new(project).as_json
    )
  end
end
Client-side handling requires robust validation. I implement schema checks:
// app/javascript/channels/project_updates.js
import consumer from "./consumer"

// `projectId` is assumed to be resolved elsewhere, e.g. from a data attribute.
const channel = consumer.subscriptions.create(
  { channel: "ProjectUpdatesChannel", project_id: projectId },
  {
    received(data) {
      if (this.validateSchema(data)) {
        this.updateUI(data)
      }
    },

    validateSchema(data) {
      const requiredKeys = ['id', 'status', 'updated_at'];
      return requiredKeys.every(key => data.hasOwnProperty(key));
    },

    updateUI(data) {
      // DOM manipulation logic
    }
  }
)
Scalability requires infrastructure planning. My deployment configuration includes:
# config/cable.yml
production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>
  channel_prefix: app_production
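One caveat: cable.yml does not read a worker pool setting; Action Cable's worker pool is sized through the railtie configuration instead. A sketch, assuming a `CABLE_WORKERS` environment variable:

```ruby
# config/environments/production.rb
Rails.application.configure do
  # Threads used to run channel callbacks and broadcast delivery.
  config.action_cable.worker_pool_size = ENV.fetch("CABLE_WORKERS", 4).to_i
end
```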
Performance optimization prevents bottlenecks. I use connection monitoring:
# lib/connection_monitor.rb
require "concurrent"

class ConnectionMonitor
  INTERVAL = 30 # seconds

  def initialize
    @timer = Concurrent::TimerTask.new(execution_interval: INTERVAL) { check_resources }
  end

  def start
    @timer.execute
  end

  private

  def check_resources
    monitor_memory_usage
    terminate_stale_connections
  end

  def monitor_memory_usage
    usage = ConnectionAnalyzer.memory_per_connection
    AlertManager.notify if usage > 100 # MB
  end

  def terminate_stale_connections
    StaleConnectionCleaner.new.clean
  end
end
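StaleConnectionCleaner itself is application-specific. A framework-free sketch of the staleness test it might apply, assuming each connection record carries a `last_seen_at` timestamp (the class shape and threshold are illustrative):

```ruby
require "time"

# Identifies connections whose last heartbeat is older than the threshold.
class StaleConnectionCleaner
  STALE_AFTER = 120 # seconds without activity

  def initialize(connections, now: Time.now)
    @connections = connections
    @now = now
  end

  # Returns the subset of connections that should be closed.
  def stale
    @connections.select { |conn| (@now - conn[:last_seen_at]) > STALE_AFTER }
  end
end

now = Time.now
connections = [
  { id: 1, last_seen_at: now - 10 },  # recently active
  { id: 2, last_seen_at: now - 300 }  # silent for five minutes
]
stale_ids = StaleConnectionCleaner.new(connections, now: now).stale.map { |c| c[:id] }
```

Injecting the clock keeps the policy deterministic and easy to unit test.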
Message validation prevents injection attacks. I enforce strict payload rules:
# app/channels/chat_channel.rb
class ChatChannel < ApplicationCable::Channel
  def receive(data)
    sanitized = MessageSanitizer.process(data)
    return log_rejection unless sanitized.valid?

    persist_message(sanitized)
    broadcast_to_recipients(sanitized)
  end

  private

  def persist_message(message)
    Message.create!(
      content: message.content,
      user: current_user,
      room_id: message.room_id
    )
  end

  def broadcast_to_recipients(message)
    RealtimeBroadcaster.deliver_update(
      "room_#{message.room_id}",
      message.broadcast_payload
    )
  end
end
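MessageSanitizer is not shown above; a plain-Ruby sketch of the kind of rules it might enforce follows. The tag-stripping regex and limits are assumptions for illustration, and a production implementation would lean on a proper sanitizer such as rails-html-sanitizer:

```ruby
# Produces a sanitized message value object with a validity check.
class MessageSanitizer
  MAX_LENGTH = 1_000

  Result = Struct.new(:content, :room_id) do
    def valid?
      !content.empty? && content.length <= MAX_LENGTH && room_id.is_a?(Integer)
    end
  end

  def self.process(data)
    content = data["content"].to_s
                             .gsub(/<[^>]*>/, "") # crude tag strip; use a real sanitizer in production
                             .strip
    Result.new(content, data["room_id"])
  end
end

clean = MessageSanitizer.process("content" => "  <script>alert(1)</script>hello  ", "room_id" => 7)
```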
Subscription management handles resource cleanup. I implement expiration policies:
# app/models/channel_subscription.rb
# (Reopening ActionCable::Subscription would clash with the framework class,
# so subscriptions are tracked in an application-level record.)
class ChannelSubscription < ApplicationRecord
  EXPIRATION = 2.hours

  after_create :schedule_expiration

  private

  def schedule_expiration
    SubscriptionExpirationJob
      .set(wait: EXPIRATION)
      .perform_later(id)
  end
end

# app/jobs/subscription_expiration_job.rb
class SubscriptionExpirationJob < ApplicationJob
  def perform(subscription_id)
    subscription = ChannelSubscription.find(subscription_id)
    return if subscription.recent_activity?

    subscription.terminate
    AuditLog.record_expiration(subscription)
  end
end
These methods support diverse real-time applications. Collaborative editing systems benefit from operational transformations. Live dashboards require efficient data diffing. Instant messaging systems need read receipts and typing indicators.
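As one example, the server-side state behind a typing indicator is small enough to sketch without any transport code. The class and method names below are illustrative, not part of the channels above; clients would re-ping while the user keeps typing:

```ruby
# Tracks who is typing in a room, expiring entries after a short window.
class TypingTracker
  WINDOW = 5 # seconds

  def initialize(now: -> { Time.now })
    @now = now
    @last_typed = {}
  end

  # Record a keystroke ping from a user.
  def touch(user_id)
    @last_typed[user_id] = @now.call
  end

  # Users whose last ping falls inside the window.
  def currently_typing
    cutoff = @now.call - WINDOW
    @last_typed.select { |_user, at| at > cutoff }.keys
  end
end

clock = Time.now
tracker = TypingTracker.new(now: -> { clock })
tracker.touch(:alice)
clock += 10       # advance simulated time past the window
tracker.touch(:bob)
tracker.currently_typing
```

A broadcast of `currently_typing` on each change is all the channel layer then needs to deliver.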
Connection recovery strategies maintain user experience during network issues. I implement automatic reconnection with backoff:
// app/javascript/channels/consumer.js
let reconnectAttempts = 0;
const MAX_ATTEMPTS = 5;

// `connectionURL` and `initializeChannels` are defined elsewhere in the app.
function createSocket() {
  return new WebSocket(connectionURL);
}

function connectWithBackoff() {
  const socket = createSocket();

  socket.onclose = () => {
    if (reconnectAttempts >= MAX_ATTEMPTS) return;
    // Exponential backoff: 1s, 2s, 4s, ... capped at 30s.
    const delay = Math.min(1000 * (2 ** reconnectAttempts), 30000);
    reconnectAttempts++;
    setTimeout(connectWithBackoff, delay);
  };

  socket.onopen = () => {
    reconnectAttempts = 0;
    initializeChannels();
  };
}

connectWithBackoff();
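The delay schedule that capped exponential backoff produces is easy to verify in isolation; here it is mirrored in Ruby:

```ruby
MAX_DELAY_MS = 30_000

# Delay before reconnect attempt n (0-indexed), capped at 30 seconds.
def backoff_delay_ms(attempt)
  [1000 * (2**attempt), MAX_DELAY_MS].min
end

delays = (0..5).map { |n| backoff_delay_ms(n) }
# Early attempts double each time; later ones hit the cap.
```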
Payload compression reduces bandwidth consumption. For high-frequency updates:
# app/serializers/compact_project_serializer.rb
class CompactProjectSerializer
  def initialize(project)
    @project = project
  end

  def as_json
    {
      i: @project.id,
      s: @project.status_code,
      u: @project.updated_at.to_i
    }
  end
end
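The savings are easy to measure with a stand-in object; the Struct below is a hypothetical substitute for the Project model, and the verbose form is what a default serializer might emit:

```ruby
require "json"
require "time"

Project = Struct.new(:id, :status_code, :updated_at)

# Compact form: single-letter keys and an integer timestamp.
def compact(project)
  { i: project.id, s: project.status_code, u: project.updated_at.to_i }
end

# Verbose form with full key names and an ISO 8601 timestamp.
def verbose(project)
  { id: project.id, status_code: project.status_code, updated_at: project.updated_at.iso8601 }
end

project = Project.new(42, 3, Time.utc(2024, 1, 1))
compact_bytes = JSON.generate(compact(project)).bytesize
verbose_bytes = JSON.generate(verbose(project)).bytesize
```

Multiplied across thousands of updates per second, the shorter keys and integer timestamp add up to a meaningful bandwidth reduction.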
These techniques balance performance with functionality. Resource-efficient streaming keeps server costs manageable. Connection pooling prevents memory bloat. Selective broadcasting reduces unnecessary network traffic. I’ve found that combining these approaches creates resilient real-time systems that scale gracefully under load.