Database connections are the lifeblood of most Rails applications. They’re also finite resources that can become serious bottlenecks if not managed carefully. I’ve spent years tuning Rails applications under heavy load, and I’ve learned that smart connection pool management often makes the difference between a responsive application and one that grinds to a halt.
The foundation starts with proper configuration. Your database.yml file isn't just boilerplate; it's the first line of defense against connection issues.
production:
  adapter: postgresql
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  connect_timeout: 5         # seconds to wait when opening a new database connection
  checkout_timeout: 30       # seconds a thread waits to get a connection from the pool
  reaping_frequency: 60      # how often (seconds) the reaper clears connections left behind by dead threads
  prepared_statements: false # often disabled when an external pooler such as PgBouncer sits in front
Matching your pool size to your thread concurrency is crucial. If you’re using Puma with 5 threads, your pool should be at least 5. I’ve seen applications where developers set arbitrary pool sizes that either wasted resources or caused connection waits during traffic spikes.
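As a concrete reference, here's a minimal Puma configuration that keeps the two numbers in lockstep with the database.yml snippet above. This is a sketch: the worker count is purely illustrative, and you should size it to your own hardware.

# config/puma.rb (sketch) -- drive Puma's thread count and the database.yml
# pool size from the same environment variable so they can't drift apart.
max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads max_threads, max_threads

# Each Puma worker is a separate process with its own connection pool, so the
# pool only needs to cover the threads within one worker.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))
preload_app!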
Connection validation is another area where proactive measures pay dividends. Stale connections can accumulate and cause unexpected timeouts.
class ConnectionValidator
  def initialize(pool, validation_timeout: 30)
    @pool = pool
    @validation_timeout = validation_timeout
  end

  def validate_connections
    @pool.connections.each do |conn|
      # Only touch connections that are idle; verifying a connection another
      # thread has checked out would interfere with its work.
      next if conn.in_use?
      next if conn.seconds_idle < @validation_timeout

      begin
        conn.verify!
      rescue ActiveRecord::ConnectionNotEstablished
        # The connection is dead and couldn't be revived; drop it so the pool
        # can establish a fresh one on the next checkout.
        @pool.remove(conn)
      end
    end
  end
end
I schedule this validation to run periodically, typically every minute. It catches connections that have gone bad without waiting for a user request to discover the problem.
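The wiring is nothing fancy. Here's a sketch of what that looks like in an initializer; the file name and one-minute interval are my own conventions rather than anything Rails prescribes.

# config/initializers/connection_validation.rb (sketch)
Rails.application.config.after_initialize do
  validator = ConnectionValidator.new(ActiveRecord::Base.connection_pool)

  # A dedicated background thread sweeps idle connections once a minute.
  Thread.new do
    loop do
      sleep 60
      begin
        validator.validate_connections
      rescue => e
        Rails.logger.error "Connection validation failed: #{e.message}"
      end
    end
  end
end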
Explicit connection management has saved me from numerous leaks over the years. While Rails handles connections automatically in most cases, there are scenarios where manual control becomes necessary.
class ConnectionManager
  def self.with_connection(&block)
    connection = ActiveRecord::Base.connection_pool.checkout
    begin
      yield connection
    ensure
      ActiveRecord::Base.connection_pool.checkin(connection)
    end
  end

  def self.connection_metrics
    pool = ActiveRecord::Base.connection_pool
    {
      size: pool.size,                           # configured maximum
      connections: pool.connections.size,        # connections currently established
      in_use: pool.connections.count(&:in_use?), # checked out right now
      waiting: pool.num_waiting_in_queue         # threads blocked waiting for a checkout
    }
  end
end
The with_connection pattern ensures that every checkout is matched with a checkin, even if exceptions occur. I use this extensively in background jobs and service objects where connection lifetime needs to be tightly controlled.
Monitoring connection metrics gives you visibility into how your pool is performing under real load. I’ve integrated these metrics into our monitoring dashboard to track patterns over time.
# Usage example
ConnectionManager.with_connection do |conn|
  conn.execute("SELECT pg_sleep(1)")
end

# Checking metrics
metrics = ConnectionManager.connection_metrics
puts "Pool utilization: #{metrics[:in_use]}/#{metrics[:size]}"
Static pool sizes work for predictable loads, but real-world traffic is rarely consistent. I’ve implemented adaptive pooling that responds to current demand.
class AdaptivePoolManager
  def initialize(min_pool: 1, max_pool: 20, scale_factor: 2)
    @min_pool = min_pool
    @max_pool = max_pool
    @scale_factor = scale_factor
  end

  def adjust_pool_size
    current_metrics = ConnectionManager.connection_metrics
    waiting_ratio = current_metrics[:waiting].to_f / current_metrics[:size]

    if waiting_ratio > 0.5 && current_metrics[:size] < @max_pool
      new_size = [current_metrics[:size] * @scale_factor, @max_pool].min
      resize_pool(new_size)
    elsif waiting_ratio < 0.1 && current_metrics[:size] > @min_pool
      new_size = [current_metrics[:size] / @scale_factor, @min_pool].max
      resize_pool(new_size)
    end
  end

  private

  # Re-establishing the connection tears down every existing connection, so
  # run this from a single scheduled process rather than from request threads.
  # On Rails 6.0 and earlier, use connection_config instead of
  # connection_db_config.configuration_hash.
  def resize_pool(new_size)
    ActiveRecord::Base.connection_pool.disconnect!
    ActiveRecord::Base.establish_connection(
      ActiveRecord::Base.connection_db_config.configuration_hash.merge(pool: new_size)
    )
  end
end
This adaptive approach has helped us handle traffic spikes without over-provisioning resources during quiet periods. The key is finding the right thresholds for your specific workload.
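Because resize_pool drops and re-establishes connections, I only ever run the adjustment from one scheduled place. A sketch of that wiring, using concurrent-ruby's TimerTask (already a Rails dependency); the file name, pool bounds, and 30-second interval are my own defaults:

# config/initializers/adaptive_pool.rb (sketch)
pool_manager = AdaptivePoolManager.new(min_pool: 5, max_pool: 20, scale_factor: 2)

# Re-evaluate every 30 seconds from this single process; don't schedule this
# in every web worker, since resizing disconnects the pool it runs against.
Concurrent::TimerTask.new(execution_interval: 30) do
  pool_manager.adjust_pool_size
end.execute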
Connection leaks are insidious problems that often only surface under production load. I’ve learned to implement proactive leak detection rather than waiting for users to report timeouts.
class LeakDetector
  def initialize(check_interval: 60)
    @check_interval = check_interval
    @monitored_pools = {}
  end

  def monitor_pool(pool_name, pool)
    @monitored_pools[pool_name] = {
      pool: pool,
      last_check: Time.now,
      last_count: pool.connections.count(&:in_use?)
    }
  end

  def check_leaks
    @monitored_pools.each do |name, data|
      current_in_use = data[:pool].connections.count(&:in_use?)
      duration = Time.now - data[:last_check]

      if current_in_use > data[:last_count] && duration > @check_interval
        Rails.logger.warn "Possible connection leak in #{name}: #{current_in_use} connections in use"
      end

      data[:last_count] = current_in_use
      data[:last_check] = Time.now
    end
  end
end
This leak detector has caught several issues early, from missing checkin calls to connection retention in long-running processes. The time-based comparison helps distinguish legitimate usage increases from actual leaks.
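For completeness, the wiring is minimal. This sketch monitors the primary pool and assumes check_leaks is invoked from whatever periodic task you already run.

# Register the pools you care about once at boot.
leak_detector = LeakDetector.new(check_interval: 60)
leak_detector.monitor_pool(:primary, ActiveRecord::Base.connection_pool)

# Then call this on each tick of your existing maintenance loop.
leak_detector.check_leaks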
Prepared statement management becomes important at scale. While they can improve performance, they also consume database resources.
class StatementManager
  def initialize(max_statements: 1000)
    @max_statements = max_statements
    @statement_cache = {}
    @access_times = {}
  end

  def prepare_statement(sql)
    unless @statement_cache.key?(sql)
      # Evict the least recently used statement once the cache is full.
      if @statement_cache.size >= @max_statements
        least_used_sql, _ = @access_times.min_by { |_, time| time }
        @statement_cache.delete(least_used_sql)
        @access_times.delete(least_used_sql)
      end

      # Preparation is adapter-specific (the PostgreSQL adapter, for example,
      # prepares through its raw connection and keeps its own statement pool),
      # so treat this call as a stand-in for your adapter's mechanism.
      @statement_cache[sql] = ActiveRecord::Base.connection.prepare_statement(sql)
    end

    @access_times[sql] = Time.now
    @statement_cache[sql]
  end

  def clear_old_statements(older_than: 1.hour)
    cutoff = older_than.ago
    @access_times.delete_if do |sql, time|
      if time < cutoff
        @statement_cache.delete(sql)
        true
      end
    end
  end
end
The LRU eviction policy ensures we keep the most useful statements cached. I’ve found that combining this with time-based cleanup prevents memory bloat while maintaining performance benefits.
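Usage is straightforward. In this sketch the SQL and cache size are illustrative, and the preparation call inherits the adapter-specific caveat noted in the class above.

statements = StatementManager.new(max_statements: 500)
statements.prepare_statement("SELECT * FROM users WHERE id = $1")

# Periodic cleanup keeps rarely used statements from accumulating.
statements.clear_old_statements(older_than: 1.hour)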
Modern applications often work with multiple databases, each requiring its own connection management strategy.
class MultiDatabaseManager
  def initialize(configs)
    @pools = configs.transform_values do |config|
      # Note: ConnectionPool.new expects ActiveRecord's internal pool-config
      # object rather than a raw hash, and that object's shape varies between
      # Rails versions. In application code the supported route is Rails'
      # built-in multiple-database support (connects_to / connected_to).
      ActiveRecord::ConnectionAdapters::ConnectionPool.new(config)
    end
  end

  def with_connection(database_name, &block)
    pool = @pools[database_name]
    connection = pool.checkout
    begin
      yield connection
    ensure
      pool.checkin(connection)
    end
  end

  def shutdown
    @pools.each_value(&:disconnect!)
  end
end

# Configuration example
databases = {
  primary: { adapter: 'postgresql', database: 'app_primary', pool: 5 },
  analytics: { adapter: 'postgresql', database: 'app_analytics', pool: 3 }
}

manager = MultiDatabaseManager.new(databases)
This centralized management approach simplifies working with multiple database connections. Each pool operates independently, preventing issues in one database from affecting others.
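A usage sketch, with the analytics query being purely illustrative:

# Run a query against the analytics database and release the connection.
manager.with_connection(:analytics) do |conn|
  conn.execute("SELECT count(*) FROM page_views")
end

# Tear down all pools when the process exits.
at_exit { manager.shutdown }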
Implementing these patterns requires careful consideration of your specific workload. Connection validation frequency should balance freshness checks with performance overhead. Pool size limits need to respect both database capabilities and application requirements.
Monitoring integration provides the feedback loop necessary to tune these parameters effectively. I track connection wait times, pool utilization rates, and error rates to identify when adjustments are needed.
The goal is always to maintain sufficient connections for current demand without overwhelming the database. This balancing act becomes more complex as applications scale, but the patterns I’ve described provide a solid foundation for managing that complexity.
Connection pool management isn’t a set-it-and-forget-it configuration. It requires ongoing attention and adjustment as your application evolves. The patterns I use today continue to evolve as I encounter new challenges and learn better approaches.
The most important lesson I’ve learned is to treat connection management as a first-class concern rather than an implementation detail. Proactive management prevents many performance issues before they impact users.
These approaches have served me well across numerous production applications. They provide the reliability needed for critical systems while maintaining the flexibility to adapt to changing requirements.