Caching is one of those topics in software that feels both simple and impossibly complex. It promises great speed but threatens you with subtle bugs from stale data. Many developers either avoid it entirely or sprinkle cache calls everywhere hoping for the best. Both are mistakes.
The goal of caching is not to eliminate all database calls. The goal is to eliminate the handful of slow expensive ones that are run over and over again. The secret to doing this well in Rails is not learning a dozen different caching strategies. It is having a simple mental model for making the decision.
Before you write a single Rails.cache.fetch, you have to ask two questions. Is this code slow? And is it run often?
If the answer is not a clear yes to both then you should not cache it. Caching adds a layer of indirection and complexity. It is a trade off. You are trading simplicity for speed. If you are not getting a lot of speed in return for that complexity it is a bad trade.
Code that runs in 20 milliseconds does not need to be cached. A report that is run once a month by an admin probably does not need to be cached. But a query that takes 500 milliseconds and is shown on every user’s dashboard is a perfect candidate.
Premature optimization is a common trap. Caching is a form of optimization. So do not start caching things until you have evidence they are a problem. Your application logs or a performance monitoring tool will tell you which queries or view partials are your slowest. Start there. Before you cache anything be sure the underlying query is as fast as it can be. Sometimes a simple database index is all you need. You can learn more about finding these bottlenecks in A Simple Guide to PostgreSQL EXPLAIN ANALYZE.
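If you want a quick number before reaching for a full profiler, you can time a suspect call directly in a console. A minimal sketch using Ruby's standard Benchmark module; the measure_ms helper and the sleep stand-ins for queries are illustrative, not from the original post:

```ruby
require "benchmark"

# Illustrative helper: time a block and report elapsed milliseconds,
# the way you might time a suspect query in a Rails console.
def measure_ms(label)
  elapsed = Benchmark.realtime { yield }
  ms = (elapsed * 1000).round(1)
  puts "#{label}: #{ms}ms"
  ms
end

# sleep stands in for a slow query such as
# Product.where(featured: true).count
slow = measure_ms("featured product count") { sleep 0.5 }
fast = measure_ms("cheap lookup") { 1 + 1 }
```

By the rule above, the half-second call is worth investigating; the sub-millisecond one is not, no matter how often it runs.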
Rails offers several ways to cache. There is page caching, fragment caching, and low level caching. It can be confusing to know where to start.
Forget the others for a moment and focus only on low level caching with Rails.cache.fetch. It is the most direct and versatile tool and it is all you need for most problems.
This method works like a simple key value store. You give it a unique key and a block of code. If the key exists in the cache it returns the stored value. If it does not, it runs the block, saves the result to the cache with that key, and then returns the result. It is beautifully simple.
```ruby
Rails.cache.fetch("some_unique_key") do
  # This block only runs if the key is not in the cache.
  # Usually a slow database query or calculation.
  Product.where(featured: true).limit(5)
end
```
The real work is figuring out what that some_unique_key should be. This brings us to the hardest problem in computer science.
There is a famous saying that there are only two hard things in computer science: cache invalidation and naming things.
Cache invalidation is the problem of what to do when the underlying data changes. If you cache a list of featured products and then someone changes which products are featured your cache is now stale. It is serving wrong information. This is the source of all caching bugs.
Most developers solve this by manually writing code to delete the cache key whenever the data changes. They use ActiveRecord callbacks or background jobs to call Rails.cache.delete("some_unique_key"). This is brittle. You will forget a callback. You will introduce a new way to change the data and forget to update the invalidation logic. It leads to bugs.
There is a better way. Instead of deleting the key change the key.
Rails models have an id and an updated_at timestamp. These are perfect ingredients for a cache key. A timestamp automatically changes whenever the record is updated. If we include it in our key, the key will automatically change when the data changes. This is called key based invalidation.
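The mechanics are easy to see outside of Rails. A minimal sketch using a plain Ruby Hash as a stand-in for the cache store; TinyCache, the record hash, and the key lambda are all illustrative:

```ruby
require "time"

# Tiny stand-in for Rails.cache.fetch: return the stored value if the
# key exists, otherwise run the block and store its result.
class TinyCache
  def initialize
    @store = {}
  end

  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

cache  = TinyCache.new
record = { id: 1, updated_at: Time.utc(2025, 1, 1) }
key    = ->(r) { "category-#{r[:id]}-#{r[:updated_at].to_i}/product_count" }

first = cache.fetch(key.call(record)) { 10 } # miss: block runs, 10 is stored

# Updating the record changes updated_at, so the key changes too.
# The stale entry is never deleted; it is simply never looked up again.
record[:updated_at] = Time.utc(2025, 2, 1)
second = cache.fetch(key.call(record)) { 12 } # new key: block runs again
```

Note that nothing ever deletes the old entry. Stale values just become unreachable and eventually get evicted by the cache store.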
Let’s say we have a Category model that has many Product models. We want to cache the count of products in a category. This can be slow if there are many products.
```ruby
# in app/models/category.rb
def product_count_cached
  cache_key = "category-#{id}-#{updated_at.to_i}/product_count"
  Rails.cache.fetch(cache_key) do
    products.count
  end
end
```
This is a good start but it is not complete. What happens when a product is added or removed from the category? The category’s updated_at does not change. So the cache key stays the same and the count becomes stale.
This is where touch: true comes in. By adding touch: true to the belongs_to association on the Product model, you tell Rails to update the parent Category’s updated_at timestamp whenever a product belonging to it is saved or destroyed.
```ruby
# in app/models/product.rb
class Product < ApplicationRecord
  belongs_to :category, touch: true
end
```
With that one line our cache key strategy now works perfectly. When a product is changed, Rails touches the category. The category’s updated_at is updated. The cache key for product_count_cached is now different. The next time it is called the old value is ignored and the block is run again to get the fresh count. It is automatic and robust.
This pattern of using an object’s ID and timestamp in the cache key, combined with touch: true on associations, can solve the vast majority of your caching needs. It is simple, reliable, and easy to reason about. If the logic gets more complex you might consider moving it out of the model itself. I have written more about that pattern here: When Rails Models Get Too Big.
Caching does not need to be complicated. You do not need complex expiration strategies or manual cache clearing code for most common problems.
Start with a simple mental model:

- Only cache code that is both slow and run often.
- Reach for Rails.cache.fetch first.
- Build cache keys from the record’s id and updated_at timestamp.
- Use touch: true on associations to automatically invalidate parent caches.

This simple approach will make your app faster without adding the kind of complexity that leads to subtle bugs down the line.
— Rishi Banerjee
September 2025