When I build Ruby applications, I rely heavily on third-party libraries. They save me immense time and effort. But I’ve learned that every gem I add is not just a tool; it’s a piece of code written by someone else, running inside my application. This comes with risk. A vulnerability in one of those libraries can become a vulnerability in my entire system. Over time, I’ve developed a set of practices to manage these risks systematically. Let me share a practical, layered approach to securing a Ruby application through its dependencies.
My first line of defense is knowing what I have. It sounds basic, but you can’t secure what you don’t know exists. I automate the process of checking for known vulnerabilities. I don’t do this manually; I bake it into the development workflow itself.
I use tools like bundler-audit to scan my Gemfile.lock against databases of known security issues. Here’s how I integrate it. I add it to my Gemfile, not as a runtime dependency, but as a development tool.
group :development do
  gem 'bundler-audit', require: false
end
More importantly, I don’t just run it when I remember. I make it a required step. I create a Rake task that will fail if it finds a problem. This failure can then stop a build in my continuous integration pipeline.
namespace :security do
  desc 'Check for vulnerable gems'
  task :audit do
    puts 'Running security audit...'
    # Update the advisory database, then scan Gemfile.lock against it
    exit 1 unless system('bundle audit check --update')
  end
end
Now, every time I run my test suite or a build runs on the server, this check happens. If a new vulnerability is published for a gem I use, the build fails. It forces me to address the issue immediately, before the code can be deployed. This turns a reactive security task into a proactive gate.
Knowing about vulnerabilities is crucial, but preventing them starts earlier, with how I specify versions. Early on, I would use loose version constraints like gem 'rails'. This is asking for trouble. An automatic update could pull in a new major version that breaks my app, or worse, introduces an unknown issue.
I now use pessimistic version constraints. This gives me safety patches without unexpected changes.
# Good: Accepts patch updates in the 7.0 series (7.0.5, 7.0.6) but never 7.1.0
gem 'rails', '~> 7.0.4'
# Also good: A clear range for more control
gem 'sidekiq', '>= 6.5.0', '< 7.0'
The ~> operator is my friend. ~> 7.0.4 means “at least 7.0.4, but less than 7.1”. It will automatically pick up patch releases like 7.0.5, but it will never jump to 7.1.0 or 8.0 without me changing the Gemfile. This strikes a balance: I get critical security patches automatically, but I must manually review minor and major updates.
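A quick way to sanity-check a constraint is to ask RubyGems directly. This sketch uses the stdlib Gem::Requirement class to confirm exactly what ~> 7.0.4 permits:

```ruby
require 'rubygems' # stdlib in modern Ruby; provides Gem::Requirement and Gem::Version

req = Gem::Requirement.new('~> 7.0.4')

req.satisfied_by?(Gem::Version.new('7.0.5')) # => true  (patch update: allowed)
req.satisfied_by?(Gem::Version.new('7.1.0')) # => false (minor update: blocked)
req.satisfied_by?(Gem::Version.new('6.9.9')) # => false (below the floor)
```

Running a quick check like this in irb before committing a constraint has saved me from more than one wrong assumption.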
To manage updates, I run bundle outdated regularly. I treat patch-level updates (7.0.4 to 7.0.5) as routine maintenance. I apply them frequently, often automatically. Minor updates (7.0.x to 7.1.x) require running my full test suite. Major updates (7.x to 8.x) are a project. I schedule time for them, read the changelog thoroughly, and test extensively.
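To triage the output of bundle outdated mechanically, I find it helpful to classify each version jump. A minimal sketch (the update_type helper is my own illustration, not part of Bundler), comparing version segments with the stdlib Gem::Version:

```ruby
require 'rubygems'

# Classify the gap between the installed version and the newest one,
# so patch / minor / major updates can be triaged differently.
def update_type(current, newest)
  cur, new_segments = [current, newest].map { |v| Gem::Version.new(v).segments }
  return :major if new_segments[0] != cur[0]
  return :minor if new_segments[1] != cur[1]
  :patch
end

update_type('7.0.4', '7.0.5') # => :patch  (apply routinely)
update_type('7.0.4', '7.1.0') # => :minor  (run the full test suite)
update_type('7.0.4', '8.0.0') # => :major  (schedule a project)
```

Feeding this the gem names and versions from `bundle outdated --parseable` turns a wall of output into three actionable buckets.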
Where do my gems come from? By default, RubyGems.org. But what if that source is compromised, or a gem is hijacked? I think about the supply chain. I restrict where Bundler is allowed to fetch gems from.
In my Gemfile, I can be explicit about sources. I might trust the main RubyGems source for public gems, but use a private source for my company’s internal libraries.
# All gems in this block come from RubyGems
source 'https://rubygems.org' do
  gem 'rails'
  gem 'pg'
end

# These come from our private server
source 'https://gems.mycompany.internal' do
  gem 'internal-auth-lib'
end
I also use the Bundler configuration to add another layer of safety. I can “freeze” my Gemfile.lock to prevent accidental changes, or force it to use only Ruby-platform gems to avoid pre-compiled native extensions that are harder to audit.
# In .bundle/config
---
BUNDLE_FROZEN: "1"
BUNDLE_FORCE_RUBY_PLATFORM: "1"
A different kind of risk comes from licenses. Using a gem with a restrictive license like GPL in a commercial project can have serious legal consequences. I need to know what licenses my dependencies use.
I don’t do this manually. I use a license scanner. A simple checker might look at the metadata of each installed gem and report back.
def check_licenses(allowed = ['MIT', 'Apache-2.0', 'BSD-3-Clause'])
  violations = []
  Bundler.load.specs.each do |gem_spec|
    # Some gems declare no license in their metadata at all;
    # a fuller checker might fall back to reading the gem's LICENSE file.
    license = gem_spec.licenses.first || 'unknown'
    unless allowed.include?(license)
      violations << { name: gem_spec.name, version: gem_spec.version.to_s, license: license }
    end
  end
  violations
end
I run this check as part of my CI pipeline for any new dependency. If a pull request adds a gem with a non-compliant license, the build fails. This prevents legal issues from sneaking in through a casual bundle add.
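The gate itself can be tiny once the checker produces a violations list. Here is a hypothetical enforce_licenses! helper (my own sketch, not a Bundler API) that prints each offender and exits non-zero so CI fails:

```ruby
# Hypothetical CI gate: print each violation and abort (non-zero exit status)
# so the pipeline fails when a non-approved license sneaks in.
def enforce_licenses!(violations)
  return true if violations.empty?
  violations.each do |v|
    warn "LICENSE VIOLATION: #{v[:name]} #{v[:version]} (#{v[:license]})"
  end
  abort 'License check failed'
end
```

Kernel#abort raises SystemExit with a failure status, which is exactly what a CI runner watches for.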
Static analysis happens before the code runs. But some code, especially in larger applications, might load gems dynamically at runtime using require. A malicious actor, or a bug, could try to load an unexpected library. I want to monitor what actually gets loaded when my application runs.
I can hook into the require method to log what’s being loaded and from where. This is more advanced, but it’s a powerful detection tool.
module RequireMonitor
  module Hook
    def require(name)
      # Log the require call and where it came from
      puts "[Require Monitor] Loading: #{name} from #{caller(1..1).first}"
      super
    end
  end

  def self.install!
    # Prepend so the hook runs before Kernel#require (and RubyGems' override),
    # then hands off via super. A plain alias on Kernel's singleton class would
    # only intercept Kernel.require, not ordinary bare require calls.
    ::Kernel.prepend(Hook)
  end
end

# Call this during application initialization in development/staging
RequireMonitor.install! if ENV['MONITOR_REQUIRES']
In a staging environment, I can enable this and watch the log. If I see a require for a gem that’s not in my Gemfile, it’s a huge red flag. It means something in my code, or in a gem’s code, is trying to load a dependency I haven’t vetted.
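The same comparison can be done programmatically. A sketch (the helper is my own, built on Bundler's public LockfileParser) that diffs a list of loaded gem names against the lockfile:

```ruby
require 'bundler'

# Return names of loaded gems that do not appear in the lockfile.
# Anything in this list was never vetted through the Gemfile.
def unexpected_gems(loaded_names, lockfile_contents)
  locked = Bundler::LockfileParser.new(lockfile_contents).specs.map(&:name)
  loaded_names - locked
end

# Usage, e.g. from a staging console or a scheduled job:
# unexpected_gems(Gem.loaded_specs.keys, File.read('Gemfile.lock'))
```

A non-empty result deserves the same scrutiny as an unexpected line in the require log.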
Sometimes, the threat isn’t a known vulnerability in a popular gem. It’s a targeted attack where a malicious actor publishes a useful-looking gem or compromises an existing gem’s update. This is a supply chain attack.
Defending against this is difficult but starts with vigilance. I am cautious of gems with few downloads, unknown authors, or code that looks obfuscated. I also consider tools that can scan the actual source code of installed gems for suspicious patterns, like attempts to run shell commands, access the filesystem unexpectedly, or call eval on dynamic data.
While a full scanner is complex, the mindset is simple: trust, but verify. If a gem’s functionality is simple but its code is complex and hard to read, I look for an alternative.
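A full scanner is a project in itself, but even a naive pattern sweep over a gem's source catches low-hanging fruit. A sketch (the patterns and helper are my own illustration, not a published tool, and will produce false positives that need human review):

```ruby
# Naive red-flag patterns: shell-outs, eval on dynamic input, raw network access.
SUSPICIOUS_PATTERNS = {
  shell_exec:   /`[^`]*`|%x\{|\bsystem\s*\(|Kernel\.exec/,
  dynamic_eval: /\beval\s*\(/,
  network:      /TCPSocket|Net::HTTP/
}.freeze

# Scan one file's source and return [line_number, category, line] triples.
def suspicious_lines(source)
  findings = []
  source.each_line.with_index(1) do |line, number|
    SUSPICIOUS_PATTERNS.each do |category, pattern|
      findings << [number, category, line.strip] if line.match?(pattern)
    end
  end
  findings
end

# Usage over an installed gem's files:
# Dir.glob("#{gem_spec.full_gem_path}/**/*.rb") { |f| p suspicious_lines(File.read(f)) }
```

Running this over a gem before the first bundle add takes seconds and occasionally surfaces code that has no business being in, say, a string-formatting library.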
Finally, I use Bundler’s built-in features to isolate dependencies by their purpose. This is done with groups.
gem 'rails', '~> 7.0.4'

group :development, :test do
  gem 'debug'
  gem 'rspec-rails'
end

group :development do
  gem 'web-console' # A gem that can execute code in the browser - dangerous!
end

group :production do
  gem 'newrelic_rpm'
end
The key here is that in my production environment, I only load the :default and :production groups. I explicitly do not load the :development group.
# In config/application.rb
Bundler.require(*Rails.groups) # This loads groups based on RAILS_ENV
This means a potentially dangerous gem like web-console, which is meant for debugging, is never even loaded in production. Its code cannot be accidentally invoked or exploited. This dramatically reduces the “attack surface” of my running application. I only load the absolute minimum code needed for the environment.
Security is not a single step; it’s a process woven into the entire lifecycle of the application. From the moment I type bundle add to the second the application is running in production, I have opportunities to manage risk.
I start by carefully choosing and constraining versions. I automatically and continuously check for known vulnerabilities. I control where gems come from and what licenses they use. I monitor what loads at runtime and isolate code based on its purpose. Each layer adds cost in terms of process and vigilance, but the cost of a security breach is always higher.
The goal is not to eliminate all risk—that’s impossible when using open-source software. The goal is to manage the risk intelligently, to catch problems early, and to build an application that is resilient even when one of its components has a flaw. By making these practices routine, I can build with the incredible power of Ruby’s ecosystem, while keeping my applications and users safe.