Ruby’s metaprogramming capabilities are among its most compelling features. They allow us to write code that writes code, to introspect and manipulate objects at runtime, and to build tools that help us understand, debug, and optimize our applications in ways that static languages simply cannot match. Over the years, I’ve found myself returning to a handful of patterns that consistently prove invaluable when things go wrong or when I need to understand a complex system’s inner workings.
Let’s start with method tracing. There are moments when you need to see not just what a method returns, but when it’s called, what arguments it receives, and how long it takes to run. While you can sprinkle puts statements throughout your code, that approach is messy and temporary. A more elegant solution is to dynamically wrap methods.
module MethodTracer
  def trace_method(method_name)
    original_method = instance_method(method_name)

    define_method(method_name) do |*args, &block|
      puts "→ #{self.class}##{method_name} called with #{args.inspect}"
      start_time = Time.now
      result = original_method.bind(self).call(*args, &block)
      duration = Time.now - start_time
      puts "← #{self.class}##{method_name} returned #{result.inspect} in #{duration.round(3)}s"
      result
    end
  end
end
class UserService
  extend MethodTracer

  def find_user(id)
    User.find(id)
  end

  # Must come after the definition, so instance_method can find it
  trace_method :find_user
end
This pattern replaces the original method with a new version that logs the call, records the time, executes the original logic, logs the return value and timing, and then returns the result. It’s non-destructive—you can always remove the tracing by redefining the method. I’ve used this to identify unexpectedly frequent calls or surprisingly slow methods in production-like environments.
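If you prefer not to overwrite the method table directly, the same idea can be expressed with Module#prepend, where calling super reaches the original implementation. Here is a minimal, self-contained sketch of that variant (Calculator is a made-up class for illustration):

```ruby
# Tracing via Module#prepend: the wrapper lives in its own module and
# calls `super` to reach the original method, which is never redefined.
module TraceCalls
  def self.for(method_name)
    Module.new do
      define_method(method_name) do |*args, &block|
        puts "→ #{self.class}##{method_name} called with #{args.inspect}"
        start_time = Time.now
        result = super(*args, &block)
        puts "← returned #{result.inspect} in #{(Time.now - start_time).round(3)}s"
        result
      end
    end
  end
end

class Calculator
  def add(a, b)
    a + b
  end

  prepend TraceCalls.for(:add)
end

Calculator.new.add(2, 3)  # traces the call and returns 5
```

Because the original method is untouched, removing the tracing is just a matter of not prepending the module.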
Sometimes, you need to go deeper than method calls and look at the object’s internal state. Perhaps an instance variable is being set to nil unexpectedly, or a value changes between two points in the execution flow. For these cases, having a tool that can take a snapshot of an object’s state is incredibly useful.
class StateSnapshot
  def initialize(object)
    @object = object
    @snapshots = []
  end

  def capture(label = nil)
    state = @object.instance_variables.each_with_object({}) do |ivar, hash|
      value = @object.instance_variable_get(ivar)
      hash[ivar] = begin
        Marshal.dump(value)
      rescue TypeError
        "Unable to marshal: #{value.inspect}"
      end
    end

    snapshot = { timestamp: Time.now, label: label, state: state }
    @snapshots << snapshot
    snapshot
  end

  def diff(snapshot_index1, snapshot_index2)
    state1 = @snapshots.fetch(snapshot_index1)[:state]
    state2 = @snapshots.fetch(snapshot_index2)[:state]

    # Union of keys also catches instance variables added or removed between snapshots
    (state1.keys | state2.keys).each_with_object({}) do |ivar, changes|
      changes[ivar] = { from: state1[ivar], to: state2[ivar] } if state1[ivar] != state2[ivar]
    end
  end
end
user = User.first
snapshotter = StateSnapshot.new(user)
snapshotter.capture("initial state")

user.update!(email: '[email protected]', last_login_at: Time.now)
snapshotter.capture("after update")

changes = snapshotter.diff(0, 1)
changes.each do |var, diff|
  puts "Changed #{var}: was #{diff[:from]}, now #{diff[:to]}"
end
The use of Marshal.dump here is key. It lets us compare the complete state of potentially complex objects, not just simple values. I once used this to track down a bug where a date field was being subtly altered by a background job—without a before-and-after comparison, the change was nearly invisible.
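To see why byte-level comparison matters, here is a standalone illustration: two timestamps that print identically at second resolution but differ in their marshaled bytes (the timestamp values are arbitrary):

```ruby
# Two Time values that look identical in logs but differ at
# microsecond precision; comparing marshaled bytes exposes the change.
t1 = Time.at(1_700_000_000, 123, :usec).utc
t2 = Time.at(1_700_000_000, 456, :usec).utc

same_printed  = t1.strftime('%F %T') == t2.strftime('%F %T')
same_marshaled = Marshal.dump(t1) == Marshal.dump(t2)

puts "printed equal:   #{same_printed}"    # true — invisible in a log line
puts "marshaled equal: #{same_marshaled}"  # false — the snapshot diff catches it
```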
Understanding the call stack is another common debugging need. While caller gives you the current stack, sometimes you need more context or want to track how you reached a particular method across different parts of your codebase.
module CallStackAnalyzer
  def capture_stack(depth: 15, filter: nil)
    stack = caller_locations(1, depth)
    stack = stack.reject { |loc| filter.call(loc) } if filter

    stack.map do |location|
      {
        path: location.absolute_path,
        line: location.lineno,
        label: location.label,
        base_label: location.base_label
      }
    end
  end

  def find_caller_of(method_name)
    capture_stack(depth: 20).find do |frame|
      frame[:label] == method_name.to_s || frame[:base_label] == method_name.to_s
    end
  end
end
# Extend main object to use in debugging sessions
extend CallStackAnalyzer

def process_order(order)
  # Imagine complex logic here
  validate_order(order)
  charge_customer(order)
  fulfill_order(order)
end

# Later, when debugging:
stack = capture_stack(depth: 10)
puts "Current stack trace:"
stack.each { |frame| puts "#{frame[:path]}:#{frame[:line]} in #{frame[:label]}" }
I often use a filtered stack capture to exclude frames from gem directories or specific project paths, making the relevant application code easier to follow. This approach helped me isolate a problematic call chain that was triggering a race condition in a multi-threaded environment.
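A typical gem-excluding filter looks like the proc below. The sketch inlines a trimmed capture_stack so it runs standalone; the `/gems/` path fragment is a heuristic that matches where Bundler-installed gems usually live, and outer_step/inner_step are made-up methods for the demo:

```ruby
module CallStackAnalyzer
  def capture_stack(depth: 15, filter: nil)
    stack = caller_locations(1, depth) || []
    stack = stack.reject { |loc| filter.call(loc) } if filter
    stack.map { |loc| { path: loc.absolute_path, line: loc.lineno, label: loc.label } }
  end
end

extend CallStackAnalyzer

# Drop any frame whose file lives under an installed gem
GEM_FILTER = proc { |loc| loc.absolute_path.to_s.include?('/gems/') }

def outer_step
  inner_step
end

def inner_step
  capture_stack(depth: 10, filter: GEM_FILTER)
end

frames = outer_step
frames.each { |f| puts "#{f[:path]}:#{f[:line]} in #{f[:label]}" }
```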
When performance issues arise, method-level timing might not be enough. You need to understand which methods are called most frequently, which are the slowest on average, and where the variability in response times comes from. That’s where method profiling comes in.
class MethodProfiler
  def self.profile(klass, *method_names)
    method_names.each do |method_name|
      original_method = klass.instance_method(method_name)

      klass.define_method(method_name) do |*args, &block|
        start_time = Process.clock_gettime(Process::CLOCK_MONOTONIC)
        result = original_method.bind(self).call(*args, &block)
        end_time = Process.clock_gettime(Process::CLOCK_MONOTONIC)

        MethodProfiler.record_call(klass, method_name, end_time - start_time)
        result
      end
    end
  end

  def self.record_call(klass, method_name, duration)
    @calls ||= Hash.new { |h, k| h[k] = [] }
    @calls["#{klass}##{method_name}"] << duration
  end

  def self.report
    puts "Method profiling report:"
    puts "=" * 50

    (@calls || {}).each do |method, durations|
      total = durations.sum
      avg = total / durations.size
      std_dev = Math.sqrt(durations.map { |d| (d - avg) ** 2 }.sum / durations.size)

      puts "#{method}:"
      puts "  Calls: #{durations.size}"
      puts "  Total time: #{total.round(3)}s"
      puts "  Avg: #{avg.round(3)}s, Min: #{durations.min.round(3)}s, Max: #{durations.max.round(3)}s"
      puts "  Std Dev: #{std_dev.round(3)}s"
      puts "-" * 30
    end
  end

  def self.reset
    @calls = nil
  end
end
# Profile specific methods in development
if Rails.env.development?
  MethodProfiler.profile(UserService, :find_user, :create_user, :update_user)
  MethodProfiler.profile(OrderService, :calculate_total, :apply_discounts)
end
The standard deviation calculation here is something I added after dealing with an API that usually responded quickly but occasionally took seconds to complete. The high standard deviation pointed me toward resource contention issues that average timing alone would have masked.
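To make that concrete, here are two synthetic duration sets with the same average but very different spread; only the standard deviation separates them (the numbers are made up):

```ruby
# Same mean (0.1s), very different behavior: the second profile is the
# occasionally-slow kind of method that averages hide.
steady = [0.10, 0.11, 0.09, 0.10]
spiky  = [0.02, 0.02, 0.02, 0.34]

def std_dev(durations)
  avg = durations.sum / durations.size
  Math.sqrt(durations.map { |d| (d - avg) ** 2 }.sum / durations.size)
end

puts format('steady: avg=%.3fs sd=%.3fs', steady.sum / steady.size, std_dev(steady))
puts format('spiky:  avg=%.3fs sd=%.3fs', spiky.sum / spiky.size, std_dev(spiky))
```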
Breakpoints are a debugger’s best friend, but conditional breakpoints are even better. Instead of breaking every time you hit a line, you can break only when specific conditions are met—when a variable reaches a certain value, when a flag is set, or when a particular object is being processed.
module ConditionalDebugger
  def break_if(condition_proc, options = {})
    trace = TracePoint.new(:line) do |tp|
      next unless condition_proc.call(tp.binding)

      puts "Break condition met at #{tp.path}:#{tp.lineno}"
      puts "Message: #{options[:message]}" if options[:message]

      if options[:interactive] && defined?(Pry)
        Pry.start(tp.binding)
      elsif options[:interactive]
        puts "Pry not available for interactive debugging"
      end
    end

    trace.enable
    trace
  end

  def break_when(variable_name, expected_value)
    break_if(
      proc do |b|
        # Guard: not every traced line has this local in scope
        b.local_variable_defined?(variable_name) &&
          b.local_variable_get(variable_name) == expected_value
      end,
      message: "Breakpoint: #{variable_name} == #{expected_value}",
      interactive: true
    )
  end

  def break_on_object_id(object_id)
    break_if(
      proc { |b| b.receiver.object_id == object_id },
      message: "Breakpoint: current object ID matches #{object_id}",
      interactive: true
    )
  end
end
# Example usage in a Rails controller
class OrdersController < ApplicationController
  include ConditionalDebugger

  def create
    @order = Order.new(order_params)

    # Break wherever this specific order object becomes `self`
    break_on_object_id(@order.object_id) if @order.total > 1000

    if @order.save
      redirect_to @order, notice: 'Order created.'
    else
      render :new
    end
  end
end
I’ve used conditional breakpoints to catch elusive bugs that only occurred with specific data values. The ability to break interactively when a condition is met, rather than stepping through countless iterations, saved me hours of debugging time.
Dependency mapping is another powerful technique, especially when working with large, legacy codebases. Understanding how classes and modules relate to each other helps you see the big picture, identify tight coupling, and find the right places to make changes.
require 'set'
require 'json'

class DependencyMapper
  def initialize(target_class)
    @target = target_class
    @dependencies = Set.new
  end

  def map
    @target.instance_methods.each do |method_name|
      method = @target.instance_method(method_name)
      source = method.source_location
      next unless source

      # Read the source file around the method definition
      lines = File.readlines(source[0])
      start_line = [0, source[1] - 5].max
      end_line = [lines.size - 1, source[1] + 5].min
      relevant_code = lines[start_line..end_line].join

      # Find constant references in the method
      relevant_code.scan(/\b[A-Z][A-Za-z0-9_]*(?:::[A-Z][A-Za-z0-9_]*)*\b/) do |constant_name|
        next if constant_name == @target.name

        begin
          constant = Object.const_get(constant_name)
          @dependencies << constant if constant.is_a?(Class) || constant.is_a?(Module)
        rescue NameError
          # Constant might not be loaded or might be in a different namespace
          @dependencies << constant_name
        end
      end
    end

    @dependencies
  end

  def visualize
    puts "Dependency map for #{@target.name}:"
    @dependencies.sort_by(&:to_s).each do |dep|
      if dep.is_a?(String)
        puts "  → #{dep} (not loaded)"
      else
        puts "  → #{dep.name}"
      end
    end
  end

  def to_graph(format: :text)
    case format
    when :text
      visualize
    when :dot
      generate_dot_graph
    when :json
      @dependencies.map { |dep| dep.is_a?(String) ? dep : dep.name }.to_json
    end
  end

  private

  # DOT identifiers cannot contain '::', so namespaced names are sanitized
  def generate_dot_graph
    source_name = @target.name.gsub('::', '_')
    dot = ["digraph #{source_name} {"]
    @dependencies.each do |dep|
      target_name = (dep.is_a?(String) ? dep : dep.name).gsub('::', '_')
      dot << "  #{source_name} -> #{target_name};"
    end
    dot << "}"
    dot.join("\n")
  end
end
# Generate a dependency map for a service class
mapper = DependencyMapper.new(PaymentProcessor)
dependencies = mapper.map
mapper.visualize
# Output as Graphviz DOT format for visualization
puts mapper.to_graph(format: :dot)
The DOT format output is particularly useful—you can pipe it to Graphviz to generate actual dependency graphs. I once used this to refactor a tightly coupled module into a more modular design by clearly seeing which dependencies were actually necessary and which were historical artifacts.
Finally, let’s look at dynamic object inspection. Sometimes you need to understand not just what methods an object has, but what its current state is, what changes have occurred, and how it behaves at runtime.
class ObjectInspector
  # Note: this intentionally shadows Module#inspect on ObjectInspector itself;
  # it always takes the object to examine as an argument.
  def self.inspect(object, options = {})
    inspection = {
      class: object.class,
      object_id: object.object_id,
      frozen?: object.frozen?,
      instance_variables: {}
    }

    object.instance_variables.each do |ivar|
      value = object.instance_variable_get(ivar)
      inspection[:instance_variables][ivar] = {
        value: value.inspect,
        class: value.class,
        object_id: value.object_id
      }
    end

    inspection[:methods] = object.methods - Object.methods if options[:methods]
    inspection[:singleton_methods] = object.singleton_methods if options[:singleton_methods]
    inspection[:ancestors] = object.class.ancestors if options[:ancestors]

    inspection
  end

  def self.track_changes(object, *attributes)
    original_values = {}
    attributes.each do |attr|
      original_values[attr] = object.send(attr)
    end

    object.define_singleton_method(:inspect_changes) do
      changes = {}
      attributes.each do |attr|
        current = send(attr)
        original = original_values[attr]
        changes[attr] = { from: original, to: current } if current != original
      end
      changes
    end
  end
end
# Usage example
user = User.find(1)
ObjectInspector.track_changes(user, :email, :status, :last_login_at)

user.update!(email: '[email protected]', status: 'active')
puts user.inspect_changes.inspect

# Full inspection
full_inspection = ObjectInspector.inspect(
  user,
  methods: true,
  singleton_methods: true,
  ancestors: true
)
puts JSON.pretty_generate(full_inspection)
The change tracking feature has been particularly helpful when debugging complex form objects or service objects where multiple attributes might change through a process, and I need to understand exactly what changed and in what order.
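The same tracking works on any plain Ruby object, which makes it easy to verify outside Rails. Here is a self-contained run; Account is a made-up class, and the helper inlines the track_changes logic from above:

```ruby
class Account
  attr_accessor :email, :status

  def initialize(email:, status:)
    @email = email
    @status = status
  end
end

# Same idea as ObjectInspector.track_changes, inlined for a standalone run
def track_changes(object, *attributes)
  original_values = attributes.to_h { |attr| [attr, object.send(attr)] }

  object.define_singleton_method(:inspect_changes) do
    attributes.each_with_object({}) do |attr, changes|
      current = send(attr)
      changes[attr] = { from: original_values[attr], to: current } if current != original_values[attr]
    end
  end
end

account = Account.new(email: '[email protected]', status: 'pending')
track_changes(account, :email, :status)

account.status = 'active'
puts account.inspect_changes.inspect
# only :status appears — :email never changed
```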
These patterns represent just a fraction of what’s possible with Ruby metaprogramming for debugging and introspection. What makes them powerful is their composability—you can combine method tracing with profiling, or dependency mapping with state snapshots, to build custom debugging tools tailored to your specific needs.
The key is to use these techniques judiciously. While they’re incredibly powerful for development and debugging, many of them should not be enabled in production environments due to performance overhead. I typically gate them behind environment checks or feature flags.
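Gating can be as simple as checking a flag before any wrapping happens. In this sketch, DEBUG_TOOLS is a hypothetical environment variable (Greeter is a made-up class); when the flag is unset, the method is left completely untouched and production pays nothing:

```ruby
module GatedTracer
  def trace_method(method_name)
    return unless ENV['DEBUG_TOOLS'] == '1'  # zero cost when disabled

    original = instance_method(method_name)
    define_method(method_name) do |*args, &block|
      warn "→ #{self.class}##{method_name} called with #{args.inspect}"
      original.bind(self).call(*args, &block)
    end
  end
end

class Greeter
  extend GatedTracer

  def hello(name)
    "hi #{name}"
  end

  trace_method :hello
end

puts Greeter.new.hello("Ada")
```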
I’ve found that investing time in building these kinds of introspection tools pays dividends throughout the development process. They help me understand code I didn’t write, debug problems that would otherwise require extensive logging, and optimize performance in a data-driven way. Ruby’s metaprogramming capabilities turn debugging from a frustrating exercise in guesswork into a systematic process of discovery and understanding.