Error handling is something I’ve learned the hard way in my career. When I first started building Ruby on Rails applications, I thought that if my code worked in development, it would be fine in production. Then real users came along, and things broke in ways I never imagined. Without proper handling, those failures led to frustrated users and long debugging sessions. Over time, I discovered that good error handling isn’t just about catching mistakes; it’s about making your application resilient and user-friendly. In this article, I’ll share seven Ruby gems that have transformed how I manage errors in Rails apps. I’ll explain each one simply, with code examples from my own projects, so you can see how they work in practice.
Let’s start with why error handling matters. In any web application, things can go wrong—database queries fail, external APIs time out, or users input invalid data. If these errors aren’t handled gracefully, your app might crash, show confusing messages, or worse, expose sensitive information. I’ve seen apps where a single unhandled exception brought down the entire site. By using specialized gems, you can capture errors, log them meaningfully, and even get alerts so you can fix issues quickly. This isn’t about preventing all errors; it’s about managing them so your app stays stable and your users have a smooth experience.
Sentry is one of the first tools I integrated into my Rails projects. It’s like having a watchdog that never sleeps. Sentry automatically catches exceptions and records detailed information about what happened, including the stack trace, user actions leading up to the error, and even performance data. When I set it up, I was amazed at how much easier debugging became. Instead of guessing why something failed, I had a clear timeline of events.
Here’s how I typically configure Sentry in a Rails app. First, you add the gem to your Gemfile and run bundle install. Then, you create an initializer to set it up. I like to use credentials for sensitive data like the DSN key, which you get from your Sentry account.
# Gemfile
gem 'sentry-rails'

# config/initializers/sentry.rb
Sentry.init do |config|
  config.dsn = Rails.application.credentials.sentry_dsn
  config.breadcrumbs_logger = [:active_support_logger, :http_logger]
  config.traces_sample_rate = 0.2
  config.environment = Rails.env
end
In this code, the DSN is the unique identifier for your project. The breadcrumbs logger helps Sentry track user actions, like page visits or API calls, before an error occurs. The traces sample rate controls how often performance data is collected—I set it to 0.2, meaning 20% of requests are sampled to avoid overhead. Now, in a controller, I might handle errors manually. For example, in an orders controller, if creating an order fails due to invalid data, I capture the exception and return a user-friendly message.
class OrdersController < ApplicationController
  def create
    order = Order.create!(order_params)
    render json: order, status: :created
  rescue ActiveRecord::RecordInvalid => e
    Sentry.capture_exception(e)
    render json: { error: 'Order creation failed' }, status: :unprocessable_entity
  end

  private

  def order_params
    params.require(:order).permit(:amount, :user_id)
  end
end
This way, the user sees a clear error, and I get a report in Sentry with all the context. In one project, this helped me identify a recurring issue where users were submitting empty forms, and I could add frontend validation to prevent it.
Bugsnag is another gem I rely on for error monitoring. What I appreciate about Bugsnag is its intelligent grouping—it clusters similar errors together, so you don’t get overwhelmed by duplicates. It also tracks releases, so you can see if a new deployment introduced bugs. I remember using it in a team where we had multiple developers pushing code; Bugsnag made it easy to pinpoint who was responsible for a regression.
Setting up Bugsnag is straightforward. After adding the gem, you configure it with your API key from the Bugsnag dashboard.
# config/initializers/bugsnag.rb
Bugsnag.configure do |config|
  config.api_key = Rails.application.credentials.bugsnag_api_key
  config.app_version = Rails.application.config.version
  config.notify_release_stages = ['production', 'staging']
end
Here, I set the app version to tie errors to specific releases, and I only enable notifications in production and staging to avoid noise during development. In my code, I often add custom metadata to errors for better context. For instance, if a risky operation fails, I include user details to help with debugging.
begin
  risky_operation
rescue => e
  Bugsnag.notify(e) do |report|
    report.add_metadata(:user, { id: current_user.id, email: current_user.email })
  end
  raise
end
By re-raising the exception, I ensure that the normal error flow continues, but Bugsnag has captured the details. In a recent e-commerce app, this helped us quickly address payment failures by linking them to specific user accounts.
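The report-then-re-raise pattern is worth isolating into a small helper so you don’t repeat the `begin`/`rescue`/`raise` dance everywhere. Here is a minimal, framework-free sketch of that idea; the `Notifier` class and `with_error_report` helper are hypothetical names for illustration, with the notifier injected so it could be swapped for Bugsnag in a real app.

```ruby
# A tiny in-memory notifier standing in for an error-tracking client.
# Both the class and its interface are illustrative assumptions.
class Notifier
  attr_reader :reports

  def initialize
    @reports = []
  end

  def notify(error, metadata)
    @reports << { error: error.class.name, metadata: metadata }
  end
end

# Report the exception with metadata, then re-raise so the normal
# error flow continues unchanged for the caller.
def with_error_report(notifier, metadata = {})
  yield
rescue => e
  notifier.notify(e, metadata)
  raise
end

notifier = Notifier.new
begin
  with_error_report(notifier, user_id: 42) { raise ArgumentError, 'bad input' }
rescue ArgumentError
  # The exception still propagates to the caller after being reported.
end
```

The key property is that reporting is a side effect: callers see exactly the exception they would have seen without the wrapper.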
Airbrake has been a staple in my toolkit for years. It focuses on aggregating errors and sending notifications, supporting various backends like email or Slack. I like how it provides rich environment data, making it easier to reproduce issues. When I first used Airbrake, it saved me from missing critical errors in a production app because it sent immediate alerts.
To use Airbrake, you install the gem and configure it with your project credentials.
# config/initializers/airbrake.rb
Airbrake.configure do |config|
  config.project_id = Rails.application.credentials.airbrake_project_id
  config.project_key = Rails.application.credentials.airbrake_project_key
  config.environment = Rails.env
  config.ignore_environments = [:development, :test]
end
I ignore development and test environments to keep reports focused on real issues. In services where I handle external integrations, like payments, I use Airbrake to report errors asynchronously. This prevents the error reporting from blocking the main workflow.
class PaymentService
  def process
    # Payment logic that might fail
    raise PaymentGatewayError if gateway_unavailable?
  rescue PaymentGatewayError => e
    Airbrake.notify(e, {
      parameters: { amount: @amount, user_id: @user.id },
      cgi_data: ENV.to_h
    })
    retry_payment
  end
end
The parameters and CGI data add context, such as the transaction amount and user ID, which I’ve found invaluable for debugging. In one case, this helped us identify a pattern where payments failed during high traffic periods, leading us to optimize our gateway calls.
Honeybadger is a gem that combines error monitoring with uptime checks and performance insights. I started using it in a project where we needed to ensure high availability, and it gave us a comprehensive view of both errors and system health. Its filtering capabilities are powerful—you can ignore certain errors, like record not found, to reduce noise.
Configuration for Honeybadger often uses a YAML file for flexibility. Here’s how I set it up.
# config/honeybadger.yml
production:
  api_key: <%= Rails.application.credentials.honeybadger_api_key %>
  environment: production
  exceptions:
    ignore: ['ActiveRecord::RecordNotFound']
  request:
    filter_parameters: ['password', 'credit_card']
This ignores common errors like missing records and filters out sensitive parameters to protect user data. In my services, I add contextual information before notifying Honeybadger. For example, in a user profile update, if something goes wrong, I set the context to include the user ID and action.
class UserService
  def update_profile(user, attributes)
    user.update!(attributes)
  rescue => e
    Honeybadger.context(user_id: user.id, action: 'profile_update')
    Honeybadger.notify(e)
    false
  end
end
By returning false, I let the caller know the operation failed without exposing the error details to the user. This approach helped me in a social media app where profile updates occasionally failed due to network issues, and we could retry them automatically.
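The automatic-retry idea mentioned above can be captured in a small, framework-free helper: retry a transient failure a bounded number of times, then give up and return false so the caller can decide what to tell the user. The `TransientError` class, `with_retries` name, and attempt counts below are illustrative assumptions, not part of any gem’s API.

```ruby
# An app-defined marker for failures worth retrying (e.g. network blips).
class TransientError < StandardError; end

# Run the block, retrying on TransientError up to max_attempts times.
# Returns true on success, false once the retries are exhausted.
def with_retries(max_attempts: 3)
  attempts = 0
  begin
    attempts += 1
    yield
    true
  rescue TransientError
    retry if attempts < max_attempts
    false # give up; the caller decides how to surface the failure
  end
end

calls = 0
ok = with_retries(max_attempts: 3) do
  calls += 1
  raise TransientError if calls < 3 # fails twice, succeeds on the third try
end
```

In a real service you would typically also sleep with backoff between attempts and still notify your error tracker when the retries run out.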
Rollbar is another error tracking tool I’ve used extensively. It stands out for its intelligent alerting and integrations with workflows like Slack or Jira. I remember setting it up for a large team, and it reduced our mean time to resolution by grouping errors and suggesting fixes. It also supports custom log levels, so you can track warnings alongside errors.
To integrate Rollbar, you configure it with an access token and set up async handling to avoid blocking requests.
# config/initializers/rollbar.rb
Rollbar.configure do |config|
  config.access_token = Rails.application.credentials.rollbar_access_token
  config.environment = Rails.env
  config.use_async = true
  config.async_handler = Proc.new { |payload| RollbarWorker.perform_async(payload) }
end
I use a background job for async reporting to keep response times fast. In complex services, like report generators, I handle different error types with varying severities.
class ReportGenerator
  def generate
    # Logic that might fail
    raise DataIncompleteError if data_missing?
  rescue DataIncompleteError => e
    Rollbar.warning(e, report_id: @report.id)
    schedule_retry
  rescue => e
    Rollbar.error(e, report_id: @report.id)
    notify_administrators
  end
end
Here, a data incomplete error is logged as a warning and triggers a retry, while other errors are treated as critical and notify admins. This granularity helped us in an analytics platform where temporary data issues were common, but we needed to act fast on systemic failures.
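When the number of exception types grows, I find it cleaner to centralize the class-to-severity mapping rather than stacking rescue clauses. Here is a minimal sketch of that lookup-table approach; the `SEVERITIES` constant and `classify` helper are hypothetical names for illustration, not Rollbar API.

```ruby
# App-defined exception used in the generator above.
class DataIncompleteError < StandardError; end

# Map exception class names to severities; anything unlisted is critical.
SEVERITIES = { 'DataIncompleteError' => :warning }
SEVERITIES.default = :error

def classify(error)
  SEVERITIES[error.class.name]
end
```

A dispatcher can then call `Rollbar.public_send(classify(e), e, ...)` style logic in one place, keeping the severity policy out of individual rescue blocks.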
Exception Notification is a gem I turn to when I need simple, email-based alerts for errors. It’s lightweight and perfect for smaller apps or teams that prefer immediate notifications without a full dashboard. I used it in a startup project where we couldn’t afford complex tools, and it kept us informed about production issues.
Setting up Exception Notification involves configuring it to send emails only in production and ignoring certain errors.
# config/initializers/exception_notification.rb
require 'exception_notification/rails'

ExceptionNotification.configure do |config|
  config.ignore_if do |exception, options|
    # Use ! rather than `not`: `not a || b` parses as `not (a || b)`
    # due to Ruby's precedence rules, which inverts the intended logic.
    !Rails.env.production? || exception.is_a?(ActiveRecord::RecordNotFound)
  end

  config.add_notifier :email, {
    email_prefix: '[APP ERROR] ',
    sender_address: '[email protected]',
    exception_recipients: ['[email protected]']
  }
end
This ensures we only get emails for production errors, excluding common ones like record not found. I’ve also used it to manually trigger reports for testing or specific events.
class AdminController < ApplicationController
  def trigger_error_report
    ExceptionNotifier.notify_exception(
      StandardError.new('Manual error report'),
      env: request.env,
      data: { manual_trigger: true, user: current_user.email }
    )
    redirect_to admin_dashboard_path, notice: 'Error report sent'
  end
end
This came in handy during audits or when simulating errors for training purposes. In one instance, it helped our team practice incident response without waiting for real failures.
Lograge is a gem that changed how I handle logging in Rails. By default, Rails logs are verbose and spread across multiple lines, making it hard to analyze errors in tools like ELK or Splunk. Lograge condenses them into single-line JSON format, which is easier to parse and search. I adopted it in a high-traffic app, and it significantly improved our log management efficiency.
Configuration involves enabling Lograge and customizing the output.
# config/initializers/lograge.rb
Rails.application.configure do
  config.lograge.enabled = true
  config.lograge.formatter = Lograge::Formatters::Json.new
  config.lograge.custom_options = lambda do |event|
    {
      time: event.time,
      user_id: event.payload[:user_id],
      params: event.payload[:params].except(*%w[controller action]),
      exception: event.payload[:exception]&.join(', '),
      exception_object: event.payload[:exception_object]&.class&.name
    }
  end
end
This adds custom fields like user ID and request parameters, while filtering out controller and action names to reduce clutter. In controllers, I override a method to include additional payload data.
class ApplicationController < ActionController::Base
  def append_info_to_payload(payload)
    super
    payload[:user_id] = current_user&.id
    payload[:request_id] = request.request_id
  end
end
With this, every log entry includes the user ID and request ID, making it simple to trace errors back to specific users or requests. In a support scenario, we could quickly isolate issues affecting particular users without sifting through raw logs.
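To see why the single-line JSON format matters, here is a standalone sketch of the kind of entry Lograge emits, built with nothing but the standard library. The `log_line` helper and its field values are illustrative; the field names mirror the custom options above.

```ruby
require 'json'

# Build one request's log entry as a single line of JSON, so tools like
# ELK or Splunk can parse each line independently.
def log_line(http_method:, path:, status:, duration:, user_id:, request_id:)
  JSON.generate({
    method: http_method,
    path: path,
    status: status,
    duration: duration,
    user_id: user_id,
    request_id: request_id
  })
end

line = log_line(http_method: 'POST', path: '/orders', status: 422,
                duration: 12.5, user_id: 42, request_id: 'abc-123')
entry = JSON.parse(line)
```

Because every entry is one parseable line, filtering by `user_id` or `request_id` becomes a simple query instead of a multi-line grep.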
Using these gems together creates a robust error handling strategy. In my experience, it’s not about picking one; it’s about layering them based on your needs. For instance, I might use Sentry for detailed error tracking, Bugsnag for release-based monitoring, and Lograge for structured logging. This multi-layered approach ensures that no error goes unnoticed, and you have the context to fix it fast.
When implementing these tools, I’ve learned to start small. Begin with one gem, like Sentry or Bugsnag, and gradually add others as your app grows. Also, consider error volume—too many alerts can lead to fatigue, so use filtering and severity levels wisely. Integrating with existing observability platforms, like Datadog or New Relic, can further enhance your monitoring.
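One concrete way to curb alert fatigue, regardless of which gem you use, is to throttle repeat notifications for the same error class. Here is a minimal sketch of that idea; the `ThrottledAlerter` class, the five-minute window, and keying alerts by exception class name are all assumptions for illustration (the hosted services above implement far more sophisticated grouping).

```ruby
# Suppress repeat alerts for the same error class inside a time window.
# The clock is injectable so the behavior is easy to test.
class ThrottledAlerter
  attr_reader :sent

  def initialize(window_seconds: 300, clock: -> { Time.now.to_f })
    @window = window_seconds
    @clock = clock
    @last_sent = {}
    @sent = []
  end

  # Returns true if an alert was dispatched, false if it was suppressed.
  def alert(error)
    key = error.class.name
    now = @clock.call
    return false if @last_sent[key] && now - @last_sent[key] < @window
    @last_sent[key] = now
    @sent << key
    true
  end
end
```

A wrapper like this sits in front of your notifier: the first occurrence in each window pages someone, and the flood that follows is counted rather than alerted on.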
Ultimately, effective error handling is about building trust with your users. When errors occur, a well-handled response keeps them informed and confident in your app. These gems have helped me turn potential disasters into opportunities for improvement. By investing in error management, you not only stabilize your application but also create a foundation for continuous learning and growth.