When your Rails application starts slowing down, it can feel like searching for a needle in a haystack. Is it the database? A slow third-party API? Your own code? This is where performance monitoring tools become essential. They act like a dashboard for your application’s health, showing you exactly where time is being spent. I’ve found that having the right metrics transforms guesswork into a clear plan for improvement. Today, I want to talk about seven specific Ruby gems that help you build that understanding. I’ll show you how to use them with straightforward code you can apply immediately.
Let’s start with Skylight. This gem is built specifically for Rails. Its biggest strength is how little it interferes with your application while giving you a detailed picture. Once installed, it automatically tracks your database queries, view rendering, and external HTTP calls without you writing extra code. You can see which controllers and actions are the slowest at a glance. I find it particularly useful for getting a high-level overview quickly. Beyond the automatic tracking, you can also wrap your important business logic to see how long specific operations take.
Here’s how you typically set it up. You add the gem to your Gemfile and configure it with an authentication key, usually keeping it active only in production to avoid development overhead.
# Gemfile
gem 'skylight'
# config/application.rb
config.skylight.environments = ['production']

# config/skylight.yml (the file supports ERB)
authentication: <%= Rails.application.credentials.skylight_authentication %>
For custom business logic, like a complex order processing method, you can instrument it directly. This helps you see if a specific part of your code is becoming a problem.
# app/services/order_processor.rb
class OrderProcessor
  def process(order)
    Skylight.instrument(title: 'Order Processing', category: 'app.order') do
      # Your complex order logic here
      calculate_tax(order)
      apply_discounts(order)
      charge_payment(order)
    end
  end
end
You can even use it within your views to see how long a particular render takes, which is great for identifying slow partials.
<!-- app/views/products/index.html.erb -->
<% Skylight.instrument(category: 'view.render', title: 'products/index') do %>
  <%= render @products %> <!-- This render call is now monitored -->
<% end %>
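To make the wrap-and-measure pattern concrete, here is a tiny plain-Ruby sketch of what an instrument-style block timer does conceptually. This is not Skylight's actual implementation, and the `Instrumenter` class name is hypothetical; it just shows the shape: time the block with a monotonic clock, record an event, and pass the block's return value through untouched.

```ruby
# A hypothetical sketch of the instrument pattern, NOT Skylight's internals.
class Instrumenter
  Event = Struct.new(:title, :category, :duration)

  attr_reader :events

  def initialize
    @events = []
  end

  # Times the block and records an event; returns the block's own value.
  def instrument(title:, category:)
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    yield
  ensure
    finish = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    @events << Event.new(title, category, finish - start)
  end
end

tracer = Instrumenter.new
result = tracer.instrument(title: 'Order Processing', category: 'app.order') do
  sleep 0.01 # stand-in for real work
  :charged
end
# result == :charged; tracer.events holds one timed event
```

The important design point, which the real agents share, is that instrumentation must be transparent: the wrapped code's return value and exceptions pass through unchanged.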
Next, consider New Relic. It’s a powerful, comprehensive tool that goes beyond just Rails. It gives you code-level visibility, tracks errors, and can even monitor distributed systems if your application is split into multiple services. I often turn to New Relic when I need to trace a problem from the user’s browser, through my Rails app, and out to a background job or external service. Setting it up is straightforward.
# config/initializers/newrelic.rb
# Most apps configure the agent via config/newrelic.yml; manual_start is
# for environments where the agent does not start automatically.
NewRelic::Agent.manual_start(
  app_name: Rails.application.class.module_parent_name,
  license_key: Rails.application.credentials.newrelic_license_key
)
One of its most useful features is the ability to create custom metrics. While it tracks standard web metrics automatically, you can add measurements that matter to your business.
# app/controllers/orders_controller.rb
def create
  start_time = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  @order = Order.create(order_params)
  # ... more logic
  end_time = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  processing_time = end_time - start_time

  # Record a custom business metric
  NewRelic::Agent.record_metric('Custom/OrderProcessing/Time', processing_time)
  NewRelic::Agent.increment_metric('Custom/Orders/Processed', 1)
end
You can also control how transactions are named and grouped in the New Relic dashboard, which makes finding things easier. Sometimes you want to exclude certain endpoints, like health checks, from your performance data.
class OrdersController < ApplicationController
  # This action will be ignored by New Relic
  newrelic_ignore only: [:health_check]

  def create
    # Give this transaction a specific name
    NewRelic::Agent.set_transaction_name('OrdersController/create')
    # ... order creation logic
  end

  def health_check
    render plain: 'OK'
  end
end
Another excellent choice is AppSignal. It combines error tracking with performance monitoring in a single package. I like its simplicity and the way it automatically instruments your application. It also collects host-level metrics like CPU and memory usage, which helps you see if a slow request is related to server strain. Getting started involves a similar pattern.
# config/initializers/appsignal.rb
Appsignal.config = Appsignal::Config.new(
  Rails.root,
  Rails.env,
  name: Rails.application.class.module_parent_name,
  push_api_key: Rails.application.credentials.appsignal_key
)
Appsignal.start_logger
Appsignal.start
For monitoring specific blocks of code, you can wrap them in an instrument block.
def show
  Appsignal.instrument('fetching_complex_order') do
    @order = Order.includes(:customer, :line_items, :payments).find(params[:id])
  end
end
AppSignal is very effective for background jobs. You can ensure your Sidekiq or Active Job performance is tracked just as thoroughly as your web requests.
# app/jobs/process_order_job.rb
class ProcessOrderJob < ApplicationJob
  around_perform :monitor_performance

  def monitor_performance
    Appsignal.monitor_transaction('perform.process_order_job') do
      yield # The job's perform method runs here
    end
  end
end
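The around-callback pattern above is worth internalizing even outside Active Job: the wrapper receives the real work as a block, times it, and reports the result. Here is the same shape in plain Ruby with no Rails dependency; `JobMonitor` and `FakeJob` are hypothetical names for illustration.

```ruby
# A plain-Ruby sketch of the around-callback timing pattern; the module
# and class names here are invented for the example.
module JobMonitor
  def perform_monitored(name)
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    yield
  ensure
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
    (@timings ||= {})[name] = elapsed
  end

  def timings
    @timings || {}
  end
end

class FakeJob
  include JobMonitor

  def perform
    perform_monitored('perform.process_order_job') do
      sleep 0.005 # stand-in for the job's real work
    end
  end
end

job = FakeJob.new
job.perform
# job.timings['perform.process_order_job'] is roughly 0.005 seconds
```

Because the timing lives in `ensure`, the duration is recorded even when the job raises, which is exactly the behavior you want from an APM wrapper.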
If your main concern is finding and fixing common Rails performance issues like N+1 queries, Scout APM is a strong contender. It automatically detects these problems and points you to the exact line of code causing them. This proactive detection has saved me hours of manual query optimization. Setup is simple.
# config/scout_apm.yml (the file supports ERB)
production:
  key: <%= Rails.application.credentials.scout_key %>
  name: <%= Rails.application.class.module_parent_name %>
  monitor: true
A practical feature is its ability to monitor calls to external services, like a payment gateway or a weather API.
# app/clients/weather_api_client.rb
class WeatherApiClient
  include ScoutApm::Tracer

  def fetch_forecast(zip_code)
    # Scout auto-instruments common HTTP clients; a custom span like this
    # makes the call easy to find by name in the dashboard
    self.class.instrument('HTTP', 'WeatherAPI#fetch_forecast') do
      HTTParty.get("https://api.weatherapi.com/v1/forecast.json?key=XXX&q=#{zip_code}")
    end
  end
end
You can also manually profile a section of code you suspect is slow. This is useful for narrowing down problems within a large method.
def generate_report
  # Wrap the suspect section in a custom instrument block
  # (requires `include ScoutApm::Tracer` in this class)
  self.class.instrument('Custom', 'Data Aggregation') do
    # Imagine this is a very slow, complex data aggregation
    @report_data = slow_aggregation_method
  end
end
For teams operating in a microservices environment or using containers, Datadog is a powerful option. Its Ruby integration offers robust distributed tracing. This means you can follow a single user request as it travels through multiple services, which is invaluable for diagnosing slowdowns in a complex system. Configuration is centralized.
# config/initializers/datadog.rb
Datadog.configure do |c|
  c.tracing.enabled = true
  c.env = Rails.env
  c.service = 'rails-checkout-service' # Your service name
  c.diagnostics.debug = false # Keep false in production
end
Creating a custom trace for a critical operation is clear and allows you to add useful tags.
# app/services/payment_service.rb
def charge(order)
  Datadog::Tracing.trace('payment.process') do |span|
    span.service = 'payment-service'
    span.resource = 'StripeCharge'
    # Add context about the transaction
    span.set_tag('order.id', order.id)
    span.set_tag('payment.amount', order.total_cents)
    result = Stripe::Charge.create(...) # Actual payment logic
    span.set_tag('payment.status', result.status)
  end
end
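The core idea behind distributed tracing is that spans nest: each `trace` call opens a span, any span opened inside it becomes a child, and tags ride along on the span they were set on. Here is a toy plain-Ruby sketch of that tree-building, under the assumption that a simple stack tracks the currently open span; `ToyTracer` is invented for illustration and is not Datadog's implementation.

```ruby
# A toy sketch of nested span trees; NOT the ddtrace library.
class ToyTracer
  Span = Struct.new(:name, :tags, :children)

  def initialize
    @root = Span.new('root', {}, [])
    @stack = [@root] # the last element is the currently open span
  end

  def trace(name)
    span = Span.new(name, {}, [])
    @stack.last.children << span # attach to the currently open span
    @stack.push(span)
    yield span
  ensure
    @stack.pop # close the span even if the block raised
  end

  def spans
    @root.children
  end
end

tracer = ToyTracer.new
tracer.trace('payment.process') do |span|
  span.tags['order.id'] = 42
  tracer.trace('stripe.charge') { } # becomes a child span
end
# tracer.spans.first.name == 'payment.process', with one child span
```

Real tracers add timing, trace IDs, and cross-process propagation headers on top of this structure, but the parent-child stack is the heart of why a flame graph can show you exactly where inside `payment.process` the time went.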
Datadog can automatically integrate with ActiveRecord and other libraries to provide detailed insights.
Datadog.configure do |c|
  c.tracing.instrument :active_record, service_name: 'postgres'
  c.tracing.instrument :redis, service_name: 'session-cache'
end
Librato focuses on time-series metrics and visualization. If you need to track how a specific metric, like user signup time, changes over several days or weeks, Librato’s charts are excellent. It’s great for correlating application performance with business events. You submit metrics directly.
# After a user signs up (assumes Librato::Metrics.authenticate has already
# run, for example in an initializer)
def create
  start_time = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  @user = User.create(user_params)
  duration = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start_time

  # Submit the duration as a metric
  Librato::Metrics.submit user_signup_duration: {
    value: duration,
    source: 'web', # Could also be 'api', 'mobile'
    measure_time: Time.now.to_i
  }
end
For efficiency, submit several metrics in a single call rather than making one HTTP request per metric.
# One submit call with several metrics batches them into a single API request
Librato::Metrics.submit(
  order_count: { value: 150, source: 'us-east' },
  response_time: { value: 0.45, source: 'api' }
)
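If you report metrics from a hot code path, you usually want a small buffer that flushes in batches instead of calling the API on every data point. Here is a plain-Ruby sketch of that buffering pattern; `MetricQueue` is a hypothetical name, and the real librato-metrics gem provides its own queue abstraction that you should prefer in practice.

```ruby
# A hypothetical batching buffer; illustrates the pattern, not the gem's API.
class MetricQueue
  def initialize(max_size:, &flusher)
    @max_size = max_size
    @flusher = flusher # called with the buffered payload on flush
    @buffer = {}
  end

  def add(name, value, source: nil)
    @buffer[name] = { value: value, source: source }
    flush if @buffer.size >= @max_size
  end

  def flush
    return if @buffer.empty?
    @flusher.call(@buffer.dup) # e.g. one Librato::Metrics.submit per batch
    @buffer.clear
  end
end

flushed_batches = []
queue = MetricQueue.new(max_size: 2) { |payload| flushed_batches << payload }
queue.add('order_count', 150, source: 'us-east')   # buffered, no flush yet
queue.add('response_time', 0.45, source: 'api')    # hits max_size, flushes
# flushed_batches now contains one batch with both metrics
```

Remember to flush any remainder on shutdown, otherwise the last partial batch is silently lost.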
You can also add annotations to your charts, like marking when you deployed a new version, to see if the deployment caused a performance change.
# After a successful deployment
Librato::Metrics.annotate :production, 'Deployed v2.5.0', start_time: Time.now.to_i
Finally, for teams that prefer an open-source, self-managed solution, Prometheus is the standard. You host and manage the metrics database yourself. The prometheus-client gem lets you instrument your Rails app to expose metrics that a Prometheus server can scrape. This approach offers maximum control. You start by defining your metrics.
# config/initializers/prometheus.rb
require 'prometheus/client'

# The default registry is shared process-wide
prometheus = Prometheus::Client.registry

# Define a histogram for request durations; labels let you slice the
# data by controller, action, and status. The registry helper both
# creates and registers the metric.
http_request_duration = prometheus.histogram(
  :http_request_duration_seconds,
  docstring: 'Time spent processing HTTP requests',
  labels: [:controller, :action, :status]
)
Using Rack middleware is the easiest way to collect metrics for every HTTP request automatically.
# config/application.rb or config.ru
require 'prometheus/middleware/collector'
require 'prometheus/middleware/exporter'

# This collects standard HTTP metrics
use Prometheus::Middleware::Collector
# This exposes a /metrics endpoint for Prometheus to scrape
use Prometheus::Middleware::Exporter
For custom business logic, you manually observe the duration.
# app/controllers/orders_controller.rb
before_action :start_request_timer
after_action :record_request_metrics

private

def start_request_timer
  @request_start_time = Process.clock_gettime(Process::CLOCK_MONOTONIC)
end

def record_request_metrics
  duration = Process.clock_gettime(Process::CLOCK_MONOTONIC) - @request_start_time
  # Look the histogram up in the shared registry, then observe the duration,
  # tagging it with controller, action, and HTTP status
  histogram = Prometheus::Client.registry.get(:http_request_duration_seconds)
  histogram.observe(
    duration,
    labels: {
      controller: controller_name,
      action: action_name,
      status: response.status
    }
  )
end
Choosing between these tools depends on your specific needs. If you want a simple, Rails-focused view, Skylight is fantastic. For deep, code-level insights across a complex system, New Relic or Datadog are powerful. If N+1 queries are your nemesis, try Scout APM. For combining errors and performance with host metrics, AppSignal is great. If you need to track business metrics over time with great charts, consider Librato. And if you want full control and own your data, Prometheus is the path.
The key is to start somewhere. Add one of these tools to your production environment. The immediate visibility you gain into your application’s performance is the first and most important step toward making it faster and more reliable for your users. I’ve used each of these in different projects, and they have all, without exception, provided the data needed to make confident improvements.