When you start building APIs, you often face a simple but frustrating problem. Clients need different slices of your data, and a one-size-fits-all REST endpoint doesn’t work well. They either get too little information and have to make multiple calls, or they get too much and waste time processing unnecessary data. This is where GraphQL comes in. It lets the client ask for exactly what it needs in a single request. For Ruby on Rails developers, adding GraphQL can feel like a big shift. Over time, I’ve learned that a few key patterns can make this new system strong, fast, and easy to maintain.
Let’s begin with the foundation: your schema. Think of it as a contract between your server and the clients that use it. It clearly defines what data is available and how to ask for it. In Rails, you define this structure using types and fields.
class Types::UserType < Types::BaseObject
  field :id, ID, null: false
  field :name, String, null: false
  field :email, String, null: true
  field :created_at, GraphQL::Types::ISO8601DateTime, null: false
end
A GraphQL schema also needs a single entry point for queries. This is your Query Type. It lists all the top-level queries a client can make, like looking up a user by ID or getting a list of products. Setting it up is straightforward.
class Types::QueryType < Types::BaseObject
  field :user, Types::UserType, null: true do
    argument :id, ID, required: true
  end

  field :products, [Types::ProductType], null: false

  def user(id:)
    User.find_by(id: id)
  end

  def products
    Product.all
  end
end
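With the query type above in place, a client can send a request like this (an illustrative query; the field names match the types defined earlier):

```graphql
{
  user(id: "1") {
    name
    email
  }
}
```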
With just that, a client can send a query asking for a user’s name and email, and get back only that. It’s a powerful change. But this basic setup can lead to a common performance issue. Imagine a query that fetches a list of users and their latest orders. If you’re not careful, the server might make one query for the users, and then a separate query for the orders of each user. This is called the N+1 query problem. It can slow your API down a lot.
To fix this, I use a technique called batch loading. Instead of fetching data for each record one by one, you collect all the needed IDs and fetch the related data in one go. Here’s a loader that can fetch any ActiveRecord model.
class Loaders::RecordLoader < GraphQL::Batch::Loader
  def initialize(model)
    @model = model
  end

  def perform(keys)
    # One query for all requested ids, fulfilling each key with its record
    @model.where(id: keys).each { |record| fulfill(record.id, record) }
    # Fulfill missing ids with nil so no caller is left waiting
    keys.each { |key| fulfill(key, nil) unless fulfilled?(key) }
  end
end
You use it inside your field resolvers. When GraphQL resolves many fields at the same level, the loader collects their keys and combines the lookups into one query. For example, a product field on an order type can load its record by foreign key:

field :product, Types::ProductType, null: false

def product
  Loaders::RecordLoader.for(Product).load(object.product_id)
end

(For loading whole associations, like a user's orders, the graphql-batch gem's examples include an AssociationLoader built on the same idea.)
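The batching mechanics are easier to see without any gems involved. This is a minimal, framework-free sketch (the class and names are mine, not part of graphql-batch): callers ask for individual keys, the loader queues them, and the first time a result is actually needed it resolves every queued key with a single bulk lookup.

```ruby
# Minimal sketch of batch loading: queue keys, resolve them all at once.
class TinyBatchLoader
  def initialize(&fetch_all)
    @fetch_all = fetch_all # takes an array of keys, returns {key => value}
    @pending = []
  end

  def load(key)
    @pending << key
    -> { flush[key] } # a lazy "promise", resolved on demand
  end

  private

  def flush
    # One bulk fetch serves every queued key; memoized after the first call
    @resolved ||= @fetch_all.call(@pending.uniq)
  end
end

# Simulated table: one bulk fetch instead of one query per id.
USERS = { 1 => "Ada", 2 => "Grace" }
loader = TinyBatchLoader.new { |ids| USERS.slice(*ids) }
promises = [loader.load(1), loader.load(2), loader.load(1)]
names = promises.map(&:call) # the first call flushes; the rest reuse it
```

GraphQL::Batch adds real promise plumbing on top of this idea, but the core trade is the same: defer individual lookups until they can be answered together.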
This pattern keeps your database calls efficient, which is critical as your application grows. The next challenge is handling actions that change data, known as mutations. In REST, these would be POST or PATCH requests. In GraphQL, they are special fields. I structure mutations to be clear and consistent.
I start with a base class that handles common tasks like finding the current user and checking permissions.
module Mutations
  class BaseMutation < GraphQL::Schema::Mutation
    def current_user
      context[:current_user]
    end

    def authorize(user, record, action)
      # Your authorization logic here; with Pundit it might look like:
      allowed = Pundit.policy!(user, record).public_send("#{action}?")
      raise GraphQL::ExecutionError, "Not allowed" unless allowed
    end
  end
end
A concrete mutation, like creating an order, then builds on this base. It defines its arguments and what it will return.
class Mutations::CreateOrder < Mutations::BaseMutation
  argument :product_id, ID, required: true
  argument :quantity, Int, required: true

  field :order, Types::OrderType, null: true
  field :errors, [String], null: false

  def resolve(product_id:, quantity:)
    # find_by avoids an unrescued RecordNotFound becoming a 500
    product = Product.find_by(id: product_id)
    return { order: nil, errors: ["Product not found"] } unless product

    order = current_user.orders.build(product: product, quantity: quantity)
    if order.save
      { order: order, errors: [] }
    else
      { order: nil, errors: order.errors.full_messages }
    end
  end
end
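The payload convention this mutation follows is worth seeing in isolation. Here is a framework-free sketch (the struct and helper names are hypothetical, not Rails or graphql-ruby APIs): on success the record plus an empty error list, on failure nil plus human-readable messages.

```ruby
# Stand-in for an ActiveRecord model with a single validation.
Order = Struct.new(:product_id, :quantity) do
  def valid?
    quantity.to_i.positive?
  end

  def error_messages
    valid? ? [] : ["Quantity must be greater than 0"]
  end
end

# Mirrors the mutation's resolve: always return the same two keys.
def create_order_payload(product_id:, quantity:)
  order = Order.new(product_id, quantity)
  if order.valid?
    { order: order, errors: [] }
  else
    { order: nil, errors: order.error_messages }
  end
end

ok  = create_order_payload(product_id: 1, quantity: 2)
bad = create_order_payload(product_id: 1, quantity: 0)
```

Because both branches return the same shape, clients can always check `errors` first without caring which branch ran.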
This structure gives clients a predictable way to know if the action worked and to understand what went wrong if it didn’t. Once your API is live, you need to know how it’s performing. Which queries are slow? Are clients asking for too much data? I add instrumentation to find out.
You can wrap field resolution to time how long it takes. Recent versions of graphql-ruby (2.x) do this with a trace module registered on the schema via trace_with:

module Traces
  module Timing
    def execute_field(field:, query:, ast_node:, arguments:, object:)
      start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      result = super
      duration = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
      Rails.logger.info("#{field.path}: #{duration.round(4)}s")
      result
    end
  end
end

class MySchema < GraphQL::Schema
  trace_with Traces::Timing
end
You can also analyze the incoming query itself to prevent very complex requests from overloading your server. You might limit the depth of nested fields or the total number of fields requested.
class MySchema < GraphQL::Schema
  max_depth 10
  max_complexity 200
end
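To make concrete what max_depth guards against, here is a miniature depth calculator. It is a sketch only: a real analyzer walks the parsed GraphQL AST, whereas this models a selection set as nested hashes.

```ruby
# Count nesting levels in a selection tree (nested hashes stand in
# for a parsed query's selection sets).
def selection_depth(selections)
  return 0 if selections.empty?
  1 + selections.values.map { |child| selection_depth(child) }.max
end

# { user { name } }                      -> depth 2
# { user { friends { friends { friends } } } } -> depth 4
shallow = { "user" => { "name" => {} } }
deep    = { "user" => { "friends" => { "friends" => { "friends" => {} } } } }
```

With max_depth 10, the second shape is fine, but a maliciously nested query fifteen levels deep would be rejected before any resolver runs.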
For modern applications, static data isn’t enough. Users expect live updates. GraphQL subscriptions provide this real-time layer. When something changes on the server, it can push that update to subscribed clients. In Rails, this often works with Action Cable.
First, you define a subscription type, which is like a query that listens for events.
class Subscriptions::OrderShipped < Subscriptions::BaseSubscription
  field :order, Types::OrderType, null: false
  field :tracking_number, String, null: true

  def subscribe
    # Authorize the subscription here (e.g., check context[:current_user])
  end

  def update
    # Called when the event fires; `object` is the payload passed to trigger
    { order: object, tracking_number: object.tracking_number }
  end
end
You trigger this subscription from elsewhere in your code, such as an after_save callback on the Order model. In graphql-ruby, triggering goes through the schema:

def broadcast_shipped
  MySchema.subscriptions.trigger(:order_shipped, {}, self)
end
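Underneath the Action Cable and graphql-ruby machinery, the flow is plain publish/subscribe. A framework-free sketch (class and names are mine, for illustration): triggering a topic pushes the payload to every handler registered for it.

```ruby
# Minimal pub/sub: subscribe registers a handler, trigger fans out a payload.
class TinyPubSub
  def initialize
    @subscribers = Hash.new { |hash, topic| hash[topic] = [] }
  end

  def subscribe(topic, &handler)
    @subscribers[topic] << handler
  end

  def trigger(topic, payload)
    @subscribers[topic].each { |handler| handler.call(payload) }
  end
end

bus = TinyPubSub.new
received = []
bus.subscribe(:order_shipped) { |payload| received << payload }
bus.trigger(:order_shipped, { order_id: 42, tracking_number: "1Z999" })
```

The real implementation adds persistence of subscriptions and delivery over websockets, but the shape of the API — subscribe once, trigger many times — is the same.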
Performance isn’t just about database queries. Often, you can avoid calculating the same response twice by using a cache. GraphQL caching can happen at different levels. You can cache the entire response for a given query.
class GraphQLController < ApplicationController
  def execute
    # generate_cache_key: a stable digest of the query string and variables
    cache_key = generate_cache_key(params[:query], params[:variables])
    result = Rails.cache.fetch(cache_key, expires_in: 5.minutes) do
      # Cache a plain hash, not the GraphQL::Query::Result object
      MySchema.execute(params[:query], variables: params[:variables]).to_h
    end
    render json: result
  end
end

Be careful with whole-response caching: skip it for mutations, and include the current user in the key when responses are user-specific, or one user's cached data can leak to another.
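One possible implementation of the generate_cache_key helper used in the controller above (the name is this article's convention, not a Rails API): digest the query string together with its variables so that identical requests share a key and different requests never collide.

```ruby
require "digest"
require "json"

# Same query + same variables => same key; anything else => different key.
def generate_cache_key(query, variables)
  payload = JSON.generate({ query: query, variables: variables || {} })
  "graphql:#{Digest::SHA256.hexdigest(payload)}"
end

key_a = generate_cache_key("{ user(id: 1) { name } }", { "id" => 1 })
key_b = generate_cache_key("{ user(id: 1) { name } }", { "id" => 1 })
key_c = generate_cache_key("{ user(id: 2) { name } }", { "id" => 2 })
```

To scope entries per user, fold an identifier such as current_user.id into the payload as well.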
You can also cache at the field level. This is useful for expensive calculations that don’t change often.
field :weekly_report, String, null: false

def weekly_report
  Rails.cache.fetch(["weekly_report", object.id], expires_in: 1.hour) do
    object.generate_complex_report
  end
end
Finally, APIs change. New features need new fields. Old fields become outdated. You need a plan for this evolution. In GraphQL, you can deprecate fields without removing them right away. This gives client developers time to update their code.
field :old_email, String, null: true,
  deprecation_reason: "Use the 'email' field instead."
For larger changes, you might need to version your entire schema. One approach is to run multiple schemas side-by-side, routing requests based on a version header from the client.
class ApiController < ApplicationController
  def execute
    version = request.headers['X-Api-Version'] || 'v1'
    schema = version == 'v2' ? V2Schema : V1Schema
    result = schema.execute(params[:query], variables: params[:variables])
    render json: result
  end
end
These patterns—structured schemas, batch loading, clear mutations, monitoring, real-time subscriptions, caching, and versioning—form a toolkit. They help you build a GraphQL API in Rails that is not just functional, but also robust and scalable. Each one addresses a specific challenge you’ll meet as your application grows from a simple idea to a platform serving many clients. Start with a solid schema, protect it from slow queries, make changes safely, and keep an eye on its health. This approach has served me well, and it can provide a strong foundation for your projects too.