Supercharge Your Rails App: Advanced Performance Hacks for Speed Demons

Ruby on Rails optimization: Use Unicorn/Puma, optimize memory usage, implement caching, index databases, utilize eager loading, employ background jobs, and manage assets effectively for improved performance.

Ruby on Rails is a powerful framework, but as your app grows, you might notice performance issues. Let’s dive into some advanced techniques to optimize memory usage and boost performance using Unicorn or Puma.

First up, let’s talk about Unicorn. It’s a multi-process server that’s been around for a while and is known for its stability. To get started with Unicorn, you’ll need to add it to your Gemfile:

gem 'unicorn'

Then, create a config file at config/unicorn.rb:

worker_processes 4
timeout 30
preload_app true

before_fork do |server, worker|
  ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  ActiveRecord::Base.establish_connection
end

This config sets up 4 worker processes, a 30-second timeout, and preloads the app for faster worker spawning. The before_fork and after_fork hooks ensure database connections are properly managed.
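
With that file in place, you'd typically boot Unicorn with something along these lines (the production environment flag is an assumption; adjust it to match your deployment):

bundle exec unicorn -c config/unicorn.rb -E production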

Now, let’s move on to Puma. It’s a bit newer and supports both multi-process and multi-threaded modes. Add it to your Gemfile:

gem 'puma'

And create a config file at config/puma.rb:

workers ENV.fetch("WEB_CONCURRENCY") { 2 }
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count

preload_app!

port        ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RAILS_ENV") { "development" }

on_worker_boot do
  ActiveRecord::Base.establish_connection
end

This config uses environment variables to set the number of workers and threads, making it easy to adjust based on your server’s capabilities.
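
To boot Puma with this file, point it at the config with the -C flag (when you run rails server, Rails picks up config/puma.rb automatically):

bundle exec puma -C config/puma.rb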

Now that we’ve got our servers set up, let’s look at some ways to optimize memory usage. One of the biggest memory hogs in Rails apps is often ActiveRecord. When you’re working with large datasets, it’s easy to accidentally load way more data than you need into memory.

Here’s a common pitfall:

users = User.all
users.each do |user|
  # Do something with each user
end

This looks innocent enough, but the moment you iterate, it loads every single user into memory at once. Yikes! Instead, use find_each:

User.find_each do |user|
  # Do something with each user
end

This loads users in batches, significantly reducing memory usage.
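
By default, find_each fetches records in batches of 1,000. If your rows are wide or memory is tight, you can tune that with batch_size (500 here is just an illustrative number):

User.find_each(batch_size: 500) do |user|
  # Process users 500 at a time
end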

Another memory-saving tip is to use pluck when you only need specific columns:

user_names = User.pluck(:name)

This is much more efficient than User.all.map(&:name), which would load full user objects into memory.

Now, let’s talk about caching. Rails has built-in caching mechanisms that can dramatically improve performance. Here’s a simple example using fragment caching:

<% cache @product do %>
  <h1><%= @product.name %></h1>
  <p><%= @product.description %></p>
<% end %>

This will cache the product details, avoiding unnecessary database queries on subsequent requests.
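
One gotcha: caching is turned off by default in development, so fragments like this won't appear to do anything locally. On Rails 5.1 and later you can toggle it with:

bin/rails dev:cache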

For more fine-grained control, you can use low-level caching:

Rails.cache.fetch("user_count", expires_in: 5.minutes) do
  User.count
end

This caches the user count for 5 minutes, which can be a big win if it’s an expensive query that’s called frequently.
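
For these calls to do anything useful in production, you also need a cache store configured. A minimal sketch in config/environments/production.rb, assuming you have a Redis instance reachable via REDIS_URL, might look like:

config.cache_store = :redis_cache_store, { url: ENV["REDIS_URL"] }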

Let’s dive a bit deeper into database optimizations. Proper indexing can make a huge difference in query performance. Here’s an example migration to add an index:

class AddIndexToUsersEmail < ActiveRecord::Migration[6.1]
  def change
    add_index :users, :email
  end
end

This will speed up queries that search by email. But be careful not to over-index – indexes take up space and slow down writes, so only add them where they’re really needed.
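
A good way to verify that an index is actually earning its keep is to ask the database for its query plan. Active Record exposes this through explain (the email address below is just an example value):

User.where(email: "alice@example.com").explain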

Another database optimization technique is eager loading. If you find yourself with N+1 query issues, eager loading can help. For example, instead of:

@posts = Post.all
@posts.each do |post|
  puts post.user.name
end

Use:

@posts = Post.includes(:user)
@posts.each do |post|
  puts post.user.name
end

This loads all the associated users in one query, rather than making a separate query for each post.

Now, let’s talk about background jobs. Moving time-consuming tasks out of the request cycle can greatly improve response times. Sidekiq is a popular choice for this. Add it to your Gemfile:

gem 'sidekiq'

Then create a job:

class HardWorkJob < ApplicationJob
  queue_as :default

  def perform(*args)
    # Do something time-consuming
  end
end

And call it from your controller:

HardWorkJob.perform_later

This will queue the job to be performed asynchronously, allowing your server to respond quickly.
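
One detail the snippet above glosses over: Active Job needs to be told to use Sidekiq as its backend, otherwise perform_later runs through the default adapter. In config/application.rb:

config.active_job.queue_adapter = :sidekiq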

Memory bloat can be a real issue in long-running Rails processes. One way to get a handle on it is a gem like derailed_benchmarks, which helps you identify memory leaks and performance bottlenecks.
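
As a quick sketch of how you might use it, add it to the development group of your Gemfile (gem 'derailed_benchmarks', group: :development) and run its memory report, which breaks down how much memory each gem consumes at require time:

bundle exec derailed bundle:mem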

Speaking of gems, be cautious about adding too many. Each gem adds to your app’s memory footprint and can slow down boot time. Regularly review your Gemfile and remove any gems you’re not actively using.

Now, let’s talk about asset management. In production, you’ll want to make sure your assets are properly compiled and fingerprinted. Rails takes care of most of this for you, but you can further optimize by using a CDN. Here’s how you might configure Amazon CloudFront in your production.rb:

config.action_controller.asset_host = 'https://d2oek0c5zxnl2a.cloudfront.net'

This offloads asset serving to CloudFront, reducing the load on your app servers.

Another often overlooked area for optimization is your development environment. Tools like spring can significantly speed up your development workflow by keeping your app running in the background. Add it to your Gemfile:

gem 'spring', group: :development

And then run:

bundle install
bundle exec spring binstub --all

This will create binstubs that use spring, speeding up commands like rails console and rails generate.

When it comes to logging, be careful not to log sensitive information or excessive data in production. You can customize logging in config/environments/production.rb:

config.log_level = :info
config.log_tags = [ :request_id ]

This sets a reasonable log level and adds request IDs to your logs, which can be invaluable for debugging.
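
On the sensitive-data side, Rails has built-in parameter filtering: anything listed here shows up as [FILTERED] in the logs. In config/initializers/filter_parameter_logging.rb (the :token entry is just an example):

Rails.application.config.filter_parameters += [:password, :token]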

Remember, optimization is an ongoing process. Use tools like rack-mini-profiler to continuously monitor your app’s performance and identify bottlenecks.
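
Getting rack-mini-profiler into a typical app is just a Gemfile line; once installed, it displays a small timing badge on each page in development:

gem 'rack-mini-profiler'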

Lastly, don’t forget about your frontend. While we’ve focused on backend optimizations, a slow frontend can make your app feel sluggish no matter how optimized your backend is. Consider using Turbolinks or Hotwire to speed up page loads, and make sure you’re minimizing and compressing your JavaScript and CSS.

In conclusion, optimizing a Rails app involves a multifaceted approach. From choosing the right server (Unicorn or Puma) to fine-tuning your database queries, caching strategically, and offloading heavy tasks to background jobs, there are many levers you can pull to improve performance. The key is to measure, optimize, and then measure again. Every app is unique, so what works best for one might not be ideal for another. Don’t be afraid to experiment and find the optimizations that give you the biggest bang for your buck in your specific use case.

Remember, premature optimization is the root of all evil (or so they say). Focus on writing clean, maintainable code first, and optimize when you have real performance data to work with. Happy coding!


